974 results for spatial resolution


Relevance: 60.00%

Abstract:

Thesis submitted to the Instituto Superior de Estatística e Gestão de Informação da Universidade Nova de Lisboa in partial fulfillment of the requirements for the Degree of Doctor of Philosophy in Information Management – Geographic Information Systems

Relevance: 60.00%

Abstract:

Hyperspectral imaging can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems of hyperspectral data analysis is the presence of mixed pixels, due to the low spatial resolution of such images: several spectrally pure signatures (endmembers) are combined into the same mixed pixel. Linear spectral unmixing follows an unsupervised approach which aims at inferring pure spectral signatures and their material fractions at each pixel of the scene. The huge data volumes acquired by such sensors put stringent requirements on processing and unmixing methods. This paper proposes an efficient GPU implementation, using CUDA, of an unsupervised linear unmixing method, simplex identification via split augmented Lagrangian (SISAL). The method finds the smallest simplex by solving a sequence of nonsmooth convex subproblems, using variable splitting to obtain a constrained formulation and then applying an augmented Lagrangian technique. The parallel implementation of SISAL presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced memory accesses. The results presented herein indicate that the GPU implementation can significantly accelerate the method's execution on large datasets while maintaining the method's accuracy.
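
For context, a minimal statement of the linear mixing model this abstract relies on, in notation assumed here (y the observed pixel spectrum, m_j the endmember signatures, a_j the fractional abundances, n additive noise):

    y = \sum_{j=1}^{p} a_j \, m_j + n ,
    \qquad a_j \ge 0 \quad (j = 1, \dots, p),
    \qquad \sum_{j=1}^{p} a_j = 1 .

SISAL looks for a minimum-volume simplex (approximately) enclosing the data cloud; its vertices are the estimated endmember signatures m_j.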

Relevance: 60.00%

Abstract:

Remote hyperspectral sensors collect large amounts of data per flight, usually with low spatial resolution. Because the bandwidth of the link between the satellite/airborne platform and the ground station is limited, an onboard compression method is desirable to reduce the amount of data to be transmitted. This paper presents a parallel implementation of a compressive sensing method, called parallel hyperspectral coded aperture (P-HYCA), for graphics processing units (GPUs) using the compute unified device architecture (CUDA). This method takes into account two main properties of hyperspectral datasets, namely the high correlation among the spectral bands and the generally low number of endmembers needed to explain the data, which largely reduces the number of measurements necessary to correctly reconstruct the original data. Experimental results conducted using synthetic and real hyperspectral datasets on two different NVIDIA GPU architectures, the GeForce GTX 590 and the GeForce GTX TITAN, reveal that the use of GPUs can provide real-time compressive sensing performance. The achieved speedup is up to 20 times when compared with the processing time of HYCA running on one core of an Intel i7-2600 CPU (3.4 GHz, 16 GB of memory).
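
Not the P-HYCA algorithm itself, but a small, hedged NumPy sketch of the underlying idea: if the spectra live (approximately) in the span of a few endmembers, far fewer random spectral measurements than bands are enough to reconstruct the cube. The endmember matrix E is assumed known here purely for illustration; all names are made up.

    import numpy as np

    rng = np.random.default_rng(0)
    bands, pixels, p = 200, 10_000, 5               # spectral bands, pixels, endmembers
    E = rng.random((bands, p))                      # assumed known spectral subspace (illustrative)
    A = rng.dirichlet(np.ones(p), size=pixels).T    # abundances: nonnegative, sum to one
    X = E @ A                                       # hyperspectral cube, bands x pixels

    m = 30                                          # measurements per pixel (m << bands)
    H = rng.standard_normal((m, bands)) / np.sqrt(m)  # random measurement operator
    Y = H @ X                                       # compressed data actually transmitted

    # Reconstruction: because X lies in the span of E, recover abundances through H @ E.
    A_hat = np.linalg.lstsq(H @ E, Y, rcond=None)[0]
    X_hat = E @ A_hat
    print("relative error:", np.linalg.norm(X - X_hat) / np.linalg.norm(X))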

Relevance: 60.00%

Abstract:

One of the main problems of hyperspectral data analysis is the presence of mixed pixels due to the low spatial resolution of such images. Linear spectral unmixing aims at inferring pure spectral signatures and their fractions at each pixel of the scene. The huge data volumes acquired by hyperspectral sensors put stringent requirements on processing and unmixing methods. This letter proposes an efficient implementation of the method called simplex identification via split augmented Lagrangian (SISAL), which exploits the graphics processing unit (GPU) architecture at a low level using the Compute Unified Device Architecture (CUDA). SISAL aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure pixel assumption is violated. The proposed implementation is performed in a pixel-by-pixel fashion, using coalesced memory accesses and exploiting shared memory to store temporary data. Furthermore, the kernels have been optimized to minimize thread divergence, thereby achieving high GPU occupancy. The experimental results obtained for simulated and real hyperspectral data sets reveal speedups of up to 49 times, which demonstrates that the GPU implementation can significantly accelerate the method's execution on large data sets while maintaining the method's accuracy.
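
The following is not SISAL, only a minimal NumPy sketch of why the per-pixel stage maps so naturally to a GPU: once the endmember matrix has been identified, every pixel's abundance estimate is an independent small problem, so one thread (or block) per pixel with coalesced loads is the obvious layout. Names are illustrative, and the nonnegativity and sum-to-one constraints that real methods enforce are omitted.

    import numpy as np

    rng = np.random.default_rng(1)
    bands, p, pixels = 224, 6, 50_000
    M = rng.random((bands, p))          # endmembers identified by the unmixing step (illustrative)
    Y = rng.random((bands, pixels))     # observed pixel spectra, one column per pixel

    # One pseudo-inverse, then a single matrix product covering all pixels:
    # each column of A is computed independently of the others, which is what a
    # GPU kernel exploits with one thread (or block) per pixel.
    A = np.linalg.pinv(M) @ Y
    print(A.shape)                      # (p, pixels) unconstrained abundance estimates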

Relevance: 60.00%

Abstract:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended to three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Approaches based on the maximum a posteriori probability (MAP) framework [25] and on projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases, however, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data that yields statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of selecting the pixels that play the role of mixed sources is not straightforward.
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. Aiming at lower computational complexity, some algorithms, such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45], still find the minimum-volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel per endmember. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses the source dependence found in hyperspectral data and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
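
As a hedged sketch of the mixture-of-Gaussians fitting step just mentioned, using scikit-learn and the BIC as a simple stand-in for the MDL criterion of reference [55] (toy one-dimensional data; all names illustrative):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(3)
    # Toy 1-D "abundance" samples drawn from two components.
    x = np.concatenate([rng.normal(0.2, 0.05, 500),
                        rng.normal(0.7, 0.10, 500)]).reshape(-1, 1)

    # Fit MOGs with a growing number of components and keep the one with the lowest
    # BIC, used here as a simple stand-in for an MDL-style model selection criterion.
    fits = [GaussianMixture(n_components=k, random_state=0).fit(x) for k in range(1, 6)]
    best = min(fits, key=lambda g: g.bic(x))
    print("selected components:", best.n_components,
          "weights:", np.round(best.weights_, 2))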
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, in which the abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by a MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels to be present in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms on experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
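
For readers unfamiliar with the prior sketched above, the abundance model can be written, in notation assumed here (a on the probability simplex, K mixture components with weights \epsilon_k and Dirichlet parameters \theta_k), as:

    p(a) = \sum_{k=1}^{K} \epsilon_k \, \mathcal{D}(a \mid \theta_k),
    \qquad
    \mathcal{D}(a \mid \theta) =
      \frac{\Gamma\!\bigl(\sum_{j} \theta_j\bigr)}{\prod_{j} \Gamma(\theta_j)}
      \prod_{j=1}^{p} a_j^{\theta_j - 1},
    \qquad a_j \ge 0, \ \ \sum_{j=1}^{p} a_j = 1 .

The Dirichlet density places all of its mass on the simplex, which is how the positivity and full-additivity constraints are enforced by construction.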

Relevance: 60.00%

Abstract:

Drug development is a highly complex, inefficient and costly process. Over the past decade, the widespread use of nuclear imaging, owing to its functional and molecular nature, has proven decisive in improving the efficiency with which candidate drugs are selected to be either abandoned or moved forward into clinical trials. This helps not only with the development of safer and more effective drugs but also with shortening the time to market. The modern concept of, and future trends in, molecular imaging will presumably be hybrid or multimodality imaging, combining high-sensitivity functional (molecular) modalities with high-spatial-resolution morphological techniques.

Relevance: 60.00%

Abstract:

Nowadays, data centers are large energy consumers, and this consumption is expected to increase further in the coming years given the growth in demand for cloud services. A large portion of this power consumption is due to the control of physical parameters of the data center (such as temperature and humidity). However, these physical parameters are tightly coupled with computations, even more so in upcoming data centers, where the location of workloads can vary substantially, for example because workloads are moved within the cloud infrastructure hosted in the data center. Therefore, managing the physical and compute infrastructure of a large data center is an embodiment of a Cyber-Physical System (CPS). In this paper, we describe a data collection and distribution architecture that enables gathering physical parameters of a large data center at very high temporal and spatial resolution of the sensor measurements. We think this is an important characteristic to enable more accurate heat-flow models of the data center and, with them, to find opportunities to optimize energy consumption. A high-resolution picture of data center conditions also enables minimizing local hot-spots, performing more accurate predictive maintenance (failures in all infrastructure equipment can be detected more promptly), and more accurate billing. We detail this architecture and define the structure of the underlying messaging system that is used to collect and distribute the data. Finally, we show the results of a preliminary study of a typical data center radio environment.
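
The paper's messaging system is not specified here, so the following is only a minimal, self-contained sketch of topic-based publish/subscribe distribution of sensor readings; all names and values are hypothetical.

    import time
    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class Reading:
        sensor_id: str        # e.g. "rack12/inlet-temp"
        value: float
        timestamp: float

    class Broker:
        """Tiny in-process stand-in for a data collection/distribution layer."""
        def __init__(self):
            self.subscribers = defaultdict(list)   # topic -> list of callbacks

        def subscribe(self, topic, callback):
            self.subscribers[topic].append(callback)

        def publish(self, topic, reading):
            for callback in self.subscribers[topic]:
                callback(reading)

    broker = Broker()
    broker.subscribe("temperature", lambda r: print(f"{r.sensor_id}: {r.value:.1f} C"))
    broker.publish("temperature", Reading("rack12/inlet-temp", 24.3, time.time()))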

Relevance: 60.00%

Abstract:

Dissertation submitted for the degree of Doctor in Environmental Engineering

Relevance: 60.00%

Abstract:

Nowadays, 3D scanning cameras and microscopes on the market use digital or discrete sensors, such as CCDs or CMOS, for object detection applications. However, these combined systems are not fast enough for some application scenarios, since they require large data processing resources and can be cumbersome. There is therefore a clear interest in exploring the possibilities and performance of analogue sensors, such as arrays of position sensitive detectors (PSDs), with the final goal of integrating them into 3D scanning cameras or microscopes for object detection purposes. The work performed in this thesis deals with the implementation of prototype systems to explore object detection using amorphous silicon position sensors of 32 and 128 lines, which were produced in the clean room at CENIMAT-CEMOP. During the first phase of this work, the starting point was the fabrication of the sensors and the study of their static and dynamic specifications, as well as their conditioning, in relation to the existing scientific and technological knowledge. Subsequently, suitable data acquisition and signal processing electronics were assembled. Various prototypes were developed for the 32 and 128 line PSD array sensors. Appropriate optical solutions were integrated to work together with the constructed prototypes, allowing the required experiments to be carried out and the results presented in this thesis to be obtained. All control, data acquisition and 3D rendering platform software was implemented for the existing systems. All these components were combined to form several integrated systems for the 32 and 128 line PSD 3D sensors. The performance of the 32 line PSD array sensor and system was evaluated for machine vision applications, such as 3D object rendering, as well as for microscopy applications, such as micro-object movement detection. Trials were also performed with the 128 line PSD array sensor systems. Sensor channel non-linearities of approximately 4 to 7% were obtained. The overall results show the possibility of using a linear array of 32/128 1D line sensors based on amorphous silicon technology to render 3D profiles of objects. The system and setup presented allow 3D rendering at high speeds and high frame rates. The minimum detail or gap that can be detected by the sensor system is approximately 350 μm with the current setup. It is also possible to render an object in 3D within a scanning angle range of 15° to 85° and to identify its real height as a function of the scanning angle and the image displacement distance on the sensor. Both simple and less simple objects, such as an eraser and a plastic fork, can be rendered in 3D properly and accurately, also at high resolution, using this sensor and system platform. The n-i-p structure sensor system can detect primary and even derived colors of objects by proper adjustment of the system integration time and by combining white, red, green and blue (RGB) light sources. A mean colorimetric error of 25.7 was obtained. It is also possible to detect the movement of micrometer-sized objects using the 32 line PSD sensor system. This kind of setup offers the possibility to detect whether a micro-object is moving, what its dimensions are, and what its position is in two dimensions, even at high speeds. Results show a non-linearity of about 3% and a spatial resolution of less than 2 µm.
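
The exact height-from-displacement relation is not given in the abstract; the sketch below is only a simplified triangulation geometry, assuming the image displacement on the PSD line is the magnified lateral shift h·sin(θ) of a point raised by h along the illumination axis. All symbols and numbers are hypothetical.

    import math

    def height_from_displacement(x_mm, scan_angle_deg, magnification):
        """Simplified triangulation: a surface point raised by h along the laser axis
        appears shifted laterally by h*sin(theta) when viewed at angle theta, and the
        optics scale that shift by the magnification before it reaches the PSD line."""
        theta = math.radians(scan_angle_deg)
        return x_mm / (magnification * math.sin(theta))

    # Example: 0.35 mm displacement on the sensor, 45 degree scan angle, 1:1 optics.
    print(f"estimated height: {height_from_displacement(0.35, 45.0, 1.0):.3f} mm")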

Relevance: 60.00%

Abstract:

Despite the significant advances made possible in recent years in the fields of pharmacology and diagnostic testing, acute myocardial infarction and sudden cardiac death remain the first manifestation of coronary atherosclerosis in a significant proportion of patients, many of whom were previously asymptomatic. Traditionally, the diagnostic exams employed for the evaluation of possible coronary artery disease are based on the documentation of myocardial ischemia and are therefore linked to the presence of obstructive coronary stenosis. Nonobstructive coronary lesions are also frequently involved in the development of coronary events. Although the absolute risk of becoming unstable per plaque is higher for more obstructive and higher-burden plaques, these are much less frequent than nonobstructive lesions; therefore, in terms of probability for the patient, coronary events often result from rupture or erosion of the latter. Recent advanced intracoronary imaging studies have provided evidence that, although it is possible to identify some features of vulnerability in plaques associated with the subsequent development of coronary events, their sensitivity and specificity are too limited for clinical application. More important than the individual risk associated with a particular plaque may be the global risk of the whole coronary tree, reflected by the sum of the probabilities of all its lesions: the higher the coronary atherosclerotic burden, the higher the risk for the patient.
Cardiac CT, or coronary CT angiography, is still a young modality. It is the most recent noninvasive imaging modality for the study of coronary artery disease, and its development was made possible by important advances in multidetector CT technology. These allowed significant improvements in temporal and spatial resolution, leading to better image quality as well as impressive reductions in radiation dose. At the same time, growing experience with the technique produced a growing body of scientific evidence, making cardiac CT a robust imaging tool for the evaluation of coronary artery disease and broadening its clinical indications. More recently, several publications documented its prognostic value, marking the transition of cardiac CT to adulthood. Besides being able to exclude the presence of coronary artery disease and of obstructive lesions, cardiac CT also allows the identification of nonobstructive lesions, making it a unique tool among noninvasive imaging modalities. By evaluating both obstructive and nonobstructive lesions, cardiac CT can provide a quantification of the total coronary atherosclerotic burden, and this can be useful to stratify the risk of future coronary events.
In the present work, it was possible to identify significant demographic and clinical predictors of a high coronary atherosclerotic burden as assessed by cardiac CT, but with modest odds ratios, even when the individual variables were gathered into clinical scores. Among these clinical scores, the performance was better for the HeartScore, a cardiovascular risk score. This modest performance underlines the limitations of predicting the presence and severity of coronary disease based only on clinical variables, even when combined into risk scores. One of the classical risk factors, obesity, in fact had a paradoxical relation with coronary atherosclerotic burden, which may explain some of the limitations of the clinical models. Conversely, diabetes mellitus was one of the strongest clinical predictors and was taken as a model of more advanced coronary disease, useful for evaluating the performance of different plaque burden scores. Given the high prevalence of plaques that can be identified in the coronary tree of patients undergoing cardiac CT, it is of utmost importance to develop tools that quantify the total coronary atherosclerotic burden and thereby identify patients who could eventually benefit from more intensive preventive measures. This was the rationale for the development of a coronary atherosclerotic burden score that combines the information on localization, degree of stenosis and plaque composition provided by cardiac CT: the CT-LeSc. This score may become a useful tool to quantify total coronary atherosclerotic burden and is expected to convey the strong prognostic information of cardiac CT. Lastly, the concept of the vulnerable coronary tree may become more important than that of the vulnerable plaque, and its assessment by cardiac CT may become important in a more advanced primary prevention strategy. This could lead to more personalized primary prevention, tailoring the intensity of preventive measures to the atherosclerotic burden, which might become one of the most important indications for cardiac CT in the near future.

Relevance: 60.00%

Abstract:

With the recent advances in technology and the miniaturization of devices such as GPS or IMU units, Unmanned Aerial Vehicles (UAVs) have become a feasible platform for Remote Sensing applications. Compared to conventional aerial platforms, UAVs provide a set of advantages, such as the higher spatial resolution of the derived products. UAV-based imagery obtained with consumer-grade cameras introduces a set of problems that have to be solved, e.g., rotational or angular differences and unknown or insufficiently precise interior (IO) and exterior (EO) camera orientation parameters. In this work, UAV-based imagery of RGB and CIR type was processed using two different workflows, based on the PhotoScan and VisualSfM software solutions, resulting in DSM and orthophoto products. The influence of the feature detection and matching parameters on result quality and processing time was examined, and the optimal parameter setup is presented. The products of both workflows were compared in terms of quality, spatial accuracy and processing time. Finally, the obtained products were used to demonstrate vegetation classification, and the contribution of IHS transformations to the classification accuracy was examined.
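
The abstract does not specify which IHS variant was used; as a rough stand-in, the sketch below converts RGB orthophoto pixels to HSV with Python's standard-library colorsys module (closely related to IHS), so that intensity/hue/saturation channels can be fed to a classifier. All names and values are illustrative.

    import colorsys

    def rgb_to_ihs_like(pixels):
        """Convert a list of (r, g, b) tuples in [0, 1] to (hue, saturation, value) tuples.
        HSV is used here only as a simple stand-in for the IHS transform family."""
        return [colorsys.rgb_to_hsv(r, g, b) for (r, g, b) in pixels]

    patch = [(0.2, 0.5, 0.1), (0.6, 0.6, 0.6), (0.1, 0.3, 0.7)]   # toy vegetation/soil/water pixels
    for hsv in rgb_to_ihs_like(patch):
        print(tuple(round(c, 3) for c in hsv))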

Relevance: 60.00%

Abstract:

This work aims to contribute to the characterization of the pathophysiology of the coronary microcirculation in different forms of disease, using new echocardiographic techniques: transthoracic Doppler echocardiography of the coronary arteries and myocardial contrast echocardiography. Coronary flow reserve of the left anterior descending artery can be assessed by transthoracic Doppler echocardiography, and important functional parameters of the microcirculation, such as microcirculation flow velocity, myocardial blood volume and myocardial flow reserve, can be evaluated with myocardial contrast echocardiography. The microcirculation was analysed in different pathophysiological settings, with particular interest in left ventricular hypertrophy of different etiologies, namely systemic arterial hypertension, aortic stenosis and hypertrophic cardiomyopathy. The coronary microcirculation was also studied in type 2 diabetes and in different clinical forms of atherosclerotic coronary artery disease, and myocardial contrast echocardiography was further applied to an experimental animal model subjected to an atherogenic diet.
Specific conclusions were drawn from each study. Overall, both techniques proved easy to apply and feasible in the clinical setting, reproducible and accurate, providing bedside pathophysiological information about the coronary microcirculation that would be difficult to obtain with other techniques; when compared with reference (gold-standard) techniques, the results showed statistically significant correlations and similar sensitivity and specificity. Owing to their temporal and spatial resolution, it was possible to document and quantify the transmural perfusion gradient at rest and during vasodilator stress in all patients and control groups, highlighting the importance of subendocardial perfusion for left ventricular function, and to relate it to ischemia and to mechanical wall kinetics such as wall thickening and motion. In the hypertensive group, coronary microcirculation dysfunction was present even before an increase in left ventricular mass was observed, and it differed according to left ventricular geometry. In patients with aortic stenosis, besides the microcirculation dysfunction explained by hypertrophy, an additional extrinsic component was identified which, once corrected by valve replacement surgery, led to a partial normalization of coronary flow reserve values. In hypertrophic cardiomyopathy, a markedly heterogeneous transmural perfusion pattern was observed, and the absence of subendocardial perfusion during vasodilator (adenosine) stress was documented on myocardial contrast echocardiography after parametric analysis. In type 2 diabetes, coronary flow reserve was significantly reduced in more advanced stages of the disease, and coronary microcirculation dysfunction was also demonstrated in diabetic patients without angiographic coronary disease, in whom transient perfusion defects or reduced microcirculation flow velocity were documented during vasodilator stress. In patients with coronary artery disease, the assessment of coronary flow reserve after percutaneous intervention proved useful for post-infarction prognosis in terms of left ventricular functional recovery, and the technique also allowed coronary flow reserve to be calculated to predict restenosis after PTCA. In patients with left bundle branch block, in whom risk stratification is difficult, transthoracic Doppler echocardiography made it possible to calculate coronary flow reserve and to suggest a cut-off value for risk stratification.
In the experimental animal model, the feasibility of myocardial contrast echocardiography was demonstrated, and an atherogenic dietary load severely compromised coronary flow reserve after six weeks, an effect that was partially reversed when a statin was added to the diet. Because these techniques are noninvasive, easily performed at the bedside, harmless, free of ionizing radiation, and repeatable, reproducible and accurate, they are promising tools for characterizing patients with coronary microcirculation dysfunction in diagnosis, therapy and prevention, and their adaptability to experimental animal models may also prove very useful in research.

Relevance: 60.00%

Abstract:

Integrated master's dissertation in Civil Engineering

Relevance: 60.00%

Abstract:

Master's dissertation in Geology (area of specialization in Valorization of Geological Resources)

Relevance: 60.00%

Abstract:

The main idea of this project involves the study of physical parameters and phenomena. The progress achieved will be applied to the development of software and methodologies for material quantification by electron probe microanalysis and scanning electron microscopy. Electron probe microanalysis is not an absolute technique; it requires reference standards so as to avoid the use of certain geometric and atomic parameters that are difficult to know with adequate precision. To arrive at a standardless method, the atomic and instrumental parameters must be determined, which is one of the aspects this project intends to address. In addition, the parameters studied will be incorporated into quantification software developed by members of the project group at FaMAF. Another objective of the work plan is to study the potential spatial resolution of an electron microprobe, with the aim of developing a methodology to characterize interfaces, grain boundaries and inclusions with submicron resolution, since traditional quantification methods are restricted to samples that are flat and homogeneous within the interaction volume, whereas the characterization of inhomogeneities at the micrometric level has not yet been developed, apart from a few exceptions.
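
As a back-of-the-envelope illustration of why the spatial resolution of microanalysis is limited by the interaction volume rather than by the beam diameter alone, the sketch below uses the Kanaya-Okayama electron range as a rough proxy; this is an assumption made for illustration, not a reproduction of the project's own parameter determinations.

    def kanaya_okayama_range_um(E_keV, A_g_mol, Z, rho_g_cm3):
        """Approximate electron penetration range (micrometres) in a bulk target,
        using the Kanaya-Okayama expression as a rough estimate of the size of
        the interaction volume that bounds the analytical spatial resolution."""
        return 0.0276 * A_g_mol * E_keV ** 1.67 / (Z ** 0.89 * rho_g_cm3)

    # Example: copper (A = 63.5 g/mol, Z = 29, rho = 8.96 g/cm^3) at 15 keV.
    print(f"~{kanaya_okayama_range_um(15.0, 63.5, 29, 8.96):.2f} um")

Values around a micrometre for mid-Z targets at typical beam energies illustrate why submicron characterization requires going beyond the flat, homogeneous sample assumption.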