986 results for Image resolution


Relevance: 60.00%

Abstract:

The rapid growth of genetics and molecular biology, combined with the development of techniques for genetically engineering small animals, has led to increased interest in in vivo small animal imaging, applied chiefly to mice and rats, which are ubiquitous in modeling human diseases and testing treatments. The use of PET in small animals allows each subject to serve as its own control, reducing inter-animal variability; this permits longitudinal studies on the same animal and improves the accuracy of biological models. However, small animal PET still suffers from several limitations: the amounts of radiotracer needed, limited scanner sensitivity, image resolution and image quantification would all clearly benefit from additional research. Because nuclear medicine imaging deals with radioactive decay, the emission of radiation energy through photons and particles, together with the detection of these quanta and particles in different materials, makes the Monte Carlo method an important simulation tool in both nuclear medicine research and clinical practice. To optimize the quantitative use of PET in clinical practice, data- and image-processing methods are also a field of intense interest and development. The evaluation of such methods often relies on simulated data and images, since these offer control of the ground truth. Monte Carlo simulations are widely used for PET simulation because they take into account all the random processes involved in PET imaging, from the emission of the positron to the detection of the photons by the detectors. Simulation techniques have thus become an important and indispensable complement for a wide range of problems that cannot be addressed by experimental or analytical approaches.
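To make the role of Monte Carlo simulation concrete, here is a minimal sketch (not GATE or any published simulator) of the kind of sampling such tools perform, assuming a simplified 2D ring geometry and ignoring attenuation, scatter and detector blurring; all names and dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

N_CRYSTALS = 180      # crystals on the detector ring (illustrative)
RING_RADIUS = 40.0    # mm, roughly small-animal scale (assumed)

def simulate_coincidences(n_events, source_xy=(5.0, 0.0), positron_sigma=0.5):
    """Toy Monte Carlo for a 2D PET ring: sample annihilation points
    around a point source (positron range modelled as a Gaussian),
    emit two back-to-back photons at a uniform random angle, and
    record the crystal each photon strikes."""
    x = rng.normal(source_xy[0], positron_sigma, n_events)
    y = rng.normal(source_xy[1], positron_sigma, n_events)
    phi = rng.uniform(0.0, np.pi, n_events)   # photon-pair direction
    dx, dy = np.cos(phi), np.sin(phi)

    # Intersect the line (x, y) + t*(dx, dy) with the ring |p| = R:
    # t^2 + 2*(x*dx + y*dy)*t + (x^2 + y^2 - R^2) = 0.
    b = x * dx + y * dy
    disc = np.sqrt(b * b - (x * x + y * y - RING_RADIUS**2))
    pair = []
    for t in (-b + disc, -b - disc):          # the two opposed photons
        theta = np.arctan2(y + t * dy, x + t * dx) % (2 * np.pi)
        pair.append((theta / (2 * np.pi) * N_CRYSTALS).astype(int))
    return np.stack(pair, axis=1)             # (n_events, 2) crystal indices

lors = simulate_coincidences(100_000)         # lines of response to histogram
```

Real simulators add the physics elided here (positron energy spectra, photon attenuation and Compton scatter, detector efficiency and timing), but the sampling structure is the same.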

Relevance: 60.00%

Abstract:

Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies

Relevance: 60.00%

Abstract:

Master's degree in Cardiovascular Diagnostic and Intervention Technology - Specialization area: Cardiovascular intervention

Relevance: 60.00%

Abstract:

Hyperspectral imaging has become one of the main topics in remote sensing. Hyperspectral images comprise hundreds of spectral bands at different (almost contiguous) wavelength channels over the same area, generating large data volumes of several gigabytes per flight. This high spectral resolution can be used to detect objects and to discriminate between them based on their spectral characteristics. One of the main problems in hyperspectral analysis is the presence of mixed pixels, which arise when the spatial resolution of the sensor cannot separate spectrally distinct materials. Spectral unmixing is therefore one of the most important tasks in hyperspectral data exploitation. However, unmixing algorithms can be computationally very expensive and power-hungry, which compromises their use in applications under on-board constraints. In recent years, graphics processing units (GPUs) have evolved into highly parallel and programmable systems. Several hyperspectral imaging algorithms have been shown to benefit from this hardware, taking advantage of the extremely high floating-point performance, compact size, huge memory bandwidth, and relatively low cost of these units, which make them appealing for on-board data processing. In this paper, we propose a parallel GPU implementation, using CUDA, of an augmented Lagrangian method for unsupervised hyperspectral linear unmixing. The method, called simplex identification via split augmented Lagrangian (SISAL), identifies the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure pixel assumption is violated. The efficient implementation of SISAL presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced memory accesses.
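SISAL itself solves a non-convex minimum-volume simplex problem and is beyond a few lines; as a hedged illustration of the linear mixing model it operates on, the following sketch estimates abundances by plain least squares for known endmembers (synthetic data, illustrative names):

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_abundances(Y, E):
    """Least-squares abundance estimation under the linear mixing model
    Y ~ E @ A, with Y (bands x pixels) and E (bands x endmembers).
    SISAL goes further: it estimates E itself by fitting a minimum-volume
    simplex with an augmented Lagrangian scheme."""
    A, *_ = np.linalg.lstsq(E, Y, rcond=None)
    return A

# Synthetic scene: 200 bands, 4 endmembers, 1000 mixed pixels.
E = rng.random((200, 4))
A_true = rng.dirichlet(np.ones(4), size=1000).T    # abundance columns sum to 1
Y = E @ A_true + 0.005 * rng.normal(size=(200, 1000))

A_hat = estimate_abundances(Y, E)
print(np.abs(A_hat - A_true).mean())               # small reconstruction error
```

On a GPU, the per-pixel solves and the large matrix products in this model are exactly the operations that map well onto shared memory and coalesced accesses.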

Relevance: 60.00%

Abstract:

The methods used by modern medicine in the field of molecular imaging, with their ability to diagnose from "organ function" rather than "organ morphology", have given added importance to the fundamental component of this medical imaging modality, Nuclear Medicine, which has translated into a significant increase in its use across its various clinical applications. Beyond the purely clinical aspects, which alone would be enough to fill several dissertations such as this one, the very nature of this imaging technique, with its inherently low resolution and long acquisition times, has raised growing concerns about productivity (the number of studies performed per unit of time), quality (improving the resolution of the acquired image), and the levels of radioactivity injected into patients (the effective radiation dose delivered to the population). Given the known technological limitations in the design of the equipment used for data acquisition in Nuclear Medicine, which despite recent advances has left the basic operating concepts of the gamma camera more or less unchanged, we set out to significantly improve the acquisition parameters (time, resolution, activity) by acting not on the technical and mechanical conditions of the acquisition itself, but essentially on the post-processing of data acquired by the traditional methods that still constitute the state of the art of this modality. The goal of this work is therefore to explain, in some detail, the technological foundations that have always underpinned the systems used to perform Nuclear Medicine examinations, and above all to present how they differ from innovative methods that, by applying essentially knowledge (software), make it possible to address the issues raised above.
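The abstract does not name the specific post-processing algorithms used; as one hedged example of the software-side resolution recovery it alludes to, the sketch below applies Richardson-Lucy deconvolution (a standard technique, not necessarily the dissertation's) to a toy planar acquisition:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import richardson_lucy

rng = np.random.default_rng(2)

# Toy planar acquisition: a point source blurred by the system PSF
# (collimator + intrinsic response), plus Poisson counting noise.
truth = np.zeros((64, 64))
truth[32, 32] = 1000.0
noisy = rng.poisson(gaussian_filter(truth, sigma=3.0)).astype(float)
noisy /= noisy.max()                      # scale to [0, 1] for the filter

# Matched Gaussian PSF estimate, then iterative resolution recovery.
psf = np.zeros((15, 15))
psf[7, 7] = 1.0
psf = gaussian_filter(psf, sigma=3.0)
restored = richardson_lucy(noisy, psf)    # deconvolved, sharper image
```

The appeal of this class of methods is exactly the trade described in the text: sharper images, or shorter scans and lower injected activity at equal image quality, without touching the camera hardware.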

Relevance: 60.00%

Abstract:

Objectives: We are interested in the numerical simulation of the anastomotic region between the outflow cannula of a left ventricular assist device (LVAD) and the aorta. Segmentation, geometry reconstruction and grid generation from patient-specific data remain an issue because of the variable quality of DICOM images, in particular CT scans (e.g., metallic noise from the device, non-aortic contrast phase). We propose a general framework to overcome this problem and create grids suitable for numerical simulation.

Methods: Images are pre-processed by reducing the level window and enhancing the contrast of the greyscale image using contrast-limited adaptive histogram equalization. A gradient anisotropic diffusion filter is applied to reduce noise. Watershed segmentation algorithms and mathematical morphology filters then reconstruct the patient geometry. This is done using the InsightToolKit library (www.itk.org). Finally, the Vascular Modeling ToolKit (www.vmtk.org) and gmsh (www.geuz.org/gmsh) are used to create the meshes for the fluid (blood) and the structure (arterial wall, outflow cannula) and to identify the boundary layers a priori. The method was tested on five patients with left ventricular assist devices who underwent a CT scan.

Results: The method produced good results in four patients: the anastomosis area is recovered and the generated grids are suitable for numerical simulation. In one patient the method failed to produce a good segmentation because of the small dimension of the aortic arch with respect to the image resolution.

Conclusions: The described framework allows the use of data that could not otherwise be segmented by standard automatic segmentation tools. In particular, the generated computational grids are suitable for simulations that take fluid-structure interaction into account. Finally, the presented method features good reproducibility and fast application.
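A minimal sketch of the preprocessing and segmentation chain described in the Methods, using SimpleITK (the Python wrapping of the InsightToolKit the authors cite); the file name, parameter values and the choice of watershed label are assumptions for illustration, not the authors' settings:

```python
import SimpleITK as sitk

# Hypothetical file name; the original work starts from patient DICOM data.
img = sitk.ReadImage("aorta_ct.nii.gz", sitk.sitkFloat32)

# 1. Contrast enhancement. The paper uses CLAHE; ITK's adaptive histogram
#    equalization is the closest built-in (alpha/beta values illustrative).
eq = sitk.AdaptiveHistogramEqualization(img, alpha=0.3, beta=0.3)

# 2. Edge-preserving denoising, as described in the Methods.
smooth = sitk.GradientAnisotropicDiffusion(
    eq, timeStep=0.0625, conductanceParameter=2.0, numberOfIterations=5)

# 3. Watershed on the gradient magnitude, then morphological cleanup.
grad = sitk.GradientMagnitude(smooth)
labels = sitk.MorphologicalWatershed(grad, level=1.0, markWatershedLine=False)
aorta = sitk.BinaryMorphologicalClosing(labels == 1, [2, 2, 2])
# 'labels == 1' picks one watershed region; in practice the aortic label
# would be selected interactively or by a seed point.
```

The resulting binary volume would then be handed to VMTK and gmsh for surface extraction and fluid/structure mesh generation, as in the paper.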

Relevance: 60.00%

Abstract:

In this work we describe the use of bilinear statistical models as a means of factoring shape variability into two components, attributed to inter-subject variation and to the intrinsic dynamics of the human heart. We show that it is feasible to reconstruct the shape of the heart at discrete points in the cardiac cycle: provided we are given a small number of shape instances representing the same heart at different points in the same cycle, we can use the bilinear model to establish this. Using a temporal and a spatial alignment step in the preprocessing of the shapes, around half of the reconstruction errors were on the order of the axial image resolution of 2 mm, and over 90% were within 3.5 mm. From this, we conclude that the dynamics were indeed separated from the inter-subject variability in our dataset.
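For readers unfamiliar with bilinear models, the following hedged sketch shows the synthesis step they enable: a shape is generated from a subject ("style") vector, a cardiac-phase ("content") vector and a learned interaction tensor. Dimensions and names are illustrative; the paper's fitting and alignment procedures are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative sizes: 5 subject ('style') parameters, 4 cardiac-phase
# ('content') parameters, and 3*N coordinates for an N-point shape.
J, K, D = 5, 4, 300
W = rng.normal(size=(J, K, D))    # interaction tensor, learned in practice

def reconstruct(a_subject, b_phase, W=W):
    """Bilinear synthesis: shape = W x_1 a x_2 b. Fixing a and sweeping b
    moves one heart through its cycle; fixing b and varying a changes the
    subject at a fixed cardiac phase."""
    return np.einsum('jkd,j,k->d', W, a_subject, b_phase)

shape = reconstruct(rng.normal(size=J), rng.normal(size=K))
```

This separability is what lets a few observed phases of a new heart pin down its subject vector, after which the remaining phases of the cycle can be synthesized.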

Relevance: 60.00%

Abstract:

The purpose of this study was to investigate the impact of navigator timing on image quality in navigator-gated and real-time motion-corrected, free-breathing, three-dimensional (3D) coronary MR angiography (MRA) with submillimeter spatial image resolution. Both phantom and in vivo investigations were performed. 3D coronary MRA with real-time navigator technology was applied using variable navigator time delays (time delay between the navigator and imaging sequences) and varying spatial resolutions. Quantitative objective and subjective image quality parameters were assessed. For high-resolution imaging, reduced image quality was found as a function of increasing navigator time delay. Lower spatial resolution coronary MRA showed only minor sensitivity to navigator timing. These findings were consistent among volunteers and phantom experiments. In conclusion, for submillimeter navigator-gated and real-time motion-corrected 3D coronary MRA, shortening the time delay between the navigator and the imaging portion of the sequence becomes increasingly important for improved spatial resolution.

Relevance: 60.00%

Abstract:

BACKGROUND: Three-dimensional (3D) navigator-gated and prospectively corrected free-breathing coronary magnetic resonance angiography (MRA) allows for submillimeter image resolution but suffers from poor contrast between coronary blood and myocardium. Data collected over >100 ms per heartbeat are also susceptible to bulk cardiac and respiratory motion. To address these problems, we examined the effect of a T2 preparation prepulse (T2prep) for myocardial suppression and of a shortened acquisition window on coronary definition. METHODS AND RESULTS: Eight healthy adult subjects and 5 patients with confirmed coronary artery disease (CAD) underwent free-breathing 3D MRA with and without T2prep and with 120- and 60-ms data-acquisition windows. The T2prep resulted in a 123% (P<0.001) increase in contrast-to-noise ratio (CNR). Coronary edge definition was improved by 33% (P<0.001). Shortening the acquisition window from 120 to 60 ms resulted in better vessel definition (11%; P<0.001). Among patients with CAD, there was good correspondence with disease. CONCLUSIONS: Free-breathing, T2prep, 3D coronary MRA with a shorter acquisition window resulted in improved CNR and better coronary artery definition, allowing the assessment of coronary disease. This approach offers the potential for free-breathing, noninvasive assessment of the major coronary arteries.
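The CNR figures quoted above are presumably computed from region-of-interest statistics in the standard way; the snippet below shows that generic definition (an assumption, since the paper's exact ROI protocol is not given in the abstract):

```python
import numpy as np

def cnr(blood_roi, myocardium_roi, noise_roi):
    """Generic contrast-to-noise ratio from region-of-interest statistics:
    signal difference between blood and myocardium divided by the noise
    standard deviation. The standard definition, not the authors' code."""
    return (np.mean(blood_roi) - np.mean(myocardium_roi)) / np.std(noise_roi)

# A 123% increase means CNR with T2prep is 2.23x the baseline:
# improvement = 100 * (cnr_t2prep - cnr_baseline) / cnr_baseline
```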

Relevance: 60.00%

Abstract:

Non-invasive visualization of the coronary arteries represents a major challenge in modern cardiology, but this goal may be achieved in the near future by MR angiography. Possible applications are the non-invasive diagnosis of coronary artery disease and follow-up examinations after PTCA, in order to detect restenosis at an early stage. A multiple-slice technique was used (2 mm slice thickness, spatial resolution of 1 x 1 mm, Philips Gyroscan ACS-II, 1.5 Tesla). Ten volunteers were imaged, and 10 patients with coronary artery disease were examined before and after PTCA. The MR measurements were validated by quantitative coronary angiography. The diameters of the proximal coronary arteries measured by the two methods were compared and correlated well (r = 0.76). We therefore conclude that non-invasive visualization of the coronary arteries is possible before and after PTCA and makes it possible to detect potential restenoses. However, patient cooperation is essential for good image quality. Moreover, limited spatial image resolution and breathing artifacts currently prevent MR coronary angiography from being used as a routine diagnostic tool for coronary artery disease.

Relevance: 60.00%

Abstract:

The Ministère des Ressources Naturelles et de la Faune (MRNF) commissioned the Montreal geomatics company SYNETIX inc. and the remote sensing laboratory of the Université de Montréal to develop an application for the automatic detection and updating of the road network on 1:20,000 topographic maps from high spatial resolution optical imagery. To this end, the contractors undertook to adapt the SIGMA0 software package they had jointly developed for map updating from satellite images with a resolution of about 5 metres. The product derived from SIGMA0 is a module named SIGMA-ROUTES, whose road detection principle rests on sweeping a filter along the road vectors of the existing map. The filter responses over very high resolution colour images of great radiometric complexity (aerial photographs) lead to the assignment of a status label (intact, suspect, vanished or new) to each detected road segment. The general objective of this project is to evaluate the accuracy of this status assignment by quantifying performance in terms of the total detected distances that agree with the reference, and by carrying out a spatial analysis of the inconsistencies. The test sequence first targets the effect of resolution on the agreement rate and, secondly, the gains expected from a succession of enhancement treatments intended to make the images more suitable for road network extraction. The overall approach first involves characterizing a test site in the Sherbrooke region covering 2.8 km² and containing 40 km of roads of various categories, from wooded trails to wide collector roads. A ground truth map of the transport network allowed us to establish reference data from a visual detection, against which the SIGMA-ROUTES detection results are compared. Our results confirm that the radiometric complexity of high resolution images in urban areas benefits from pre-processing such as segmentation and histogram compensation, which homogenize road surfaces. We also observe that performance is highly sensitive to variations in resolution: moving between our three resolutions (84, 168 and 210 cm) changes the detection rate by nearly 15% of the total distances in agreement with the reference, and spatially splits long intact vectors into several portions alternating between the intact, suspect and vanished statuses. Detection of existing roads in agreement with the reference reached 78% with our most effective combination of resolution and image pre-processing. Chronic detection problems were identified, including several segments left without any assignment and ignored by the process, as well as an overestimation of false detections labelled suspect when they should have been identified as intact. Based on the linear measurements and the spatial analyses of the detections, we estimate that the assignment of the intact status should reach 90% agreement with the reference after various adjustments to the algorithm. Detection of new roads was a failure regardless of resolution or image enhancement.
The search for new segments, which relies on locating potential starting points of new roads connected to existing roads, generates a runaway of false detections wandering among non-road features. In connection with these inconsistencies, we isolated numerous false detections of new roads generated parallel to roads previously assigned the intact status. Finally, we suggest a procedure that takes advantage of certain enhanced images while integrating human intervention at a few pivotal stages of the process.
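SIGMA-ROUTES itself is not public; as a hedged toy version of the vector-guided detection principle described above (sweeping a filter along existing road vectors and assigning a status), the sketch below samples a road-likeness raster along a segment and labels it with illustrative thresholds:

```python
import numpy as np

def classify_segment(image, p0, p1, n_samples=50,
                     intact_thresh=0.7, suspect_thresh=0.4):
    """Toy vector-guided check: sample a 'road-likeness' raster along the
    segment p0 -> p1 (row, col coordinates) and label it by the mean
    response. Thresholds are illustrative, not SIGMA-ROUTES' values."""
    t = np.linspace(0.0, 1.0, n_samples)
    rows = np.clip((p0[0] + t * (p1[0] - p0[0])).astype(int),
                   0, image.shape[0] - 1)
    cols = np.clip((p0[1] + t * (p1[1] - p0[1])).astype(int),
                   0, image.shape[1] - 1)
    score = image[rows, cols].mean()
    if score >= intact_thresh:
        return "intact"
    return "suspect" if score >= suspect_thresh else "vanished"

rng = np.random.default_rng(6)
road_likeness = rng.random((500, 500))   # stand-in for a filter-response map
print(classify_segment(road_likeness, (10, 10), (480, 300)))
```

The fragmentation behaviour reported above corresponds to evaluating such scores per sub-segment: slight resolution changes push individual portions across the thresholds, alternating their status along a single road.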

Relevance: 60.00%

Abstract:

The ability to detect faces in images is of critical ecological significance. It is a prerequisite for other important face perception tasks such as person identification, gender classification and affect analysis. Here we address the question of how the visual system classifies images into face and non-face patterns. We focus on face detection in impoverished images, which allow us to explore the information thresholds required for different levels of performance. Our experimental results provide lower bounds on the image resolution needed for reliable discrimination between face and non-face patterns, and they help characterize the facial representations used by the visual system under degraded viewing conditions. Specifically, they enable an evaluation of the contributions of luminance contrast, image orientation and local context to face-detection performance.
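A common way to produce such impoverished stimuli is to quantize spatial resolution by block-averaging; the sketch below shows one hedged way to generate them with scikit-image (a stand-in photo, not the authors' stimulus set):

```python
import numpy as np
from skimage import data
from skimage.transform import resize

# Any grayscale photo stands in for a face stimulus here.
img = data.camera().astype(float) / 255.0

def impoverish(img, block):
    """Simulate resolution loss: downsample so each block x block patch
    carries a single intensity value, then scale back to original size."""
    small = resize(img, (img.shape[0] // block, img.shape[1] // block),
                   anti_aliasing=True)
    return resize(small, img.shape, order=0)   # blocky upsampling

stimuli = {b: impoverish(img, b) for b in (2, 4, 8, 16)}
```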

Relevance: 60.00%

Abstract:

The central challenge in face recognition lies in understanding the role different facial features play in our judgments of identity. Notable in this regard are the relative contributions of the internal (eyes, nose and mouth) and external (hair and jaw-line) features. Past studies that have investigated this issue have typically used high-resolution images or good-quality line drawings as facial stimuli. The results obtained are therefore most relevant for understanding the identification of faces at close range. However, given that real-world viewing conditions are rarely optimal, it is also important to know how image degradations, such as loss of resolution caused by large viewing distances, influence our ability to use internal and external features. Here, we report experiments designed to address this issue. Our data characterize how the relative contributions of internal and external features change as a function of image resolution. While we replicated results of previous studies that have shown internal features of familiar faces to be more useful for recognition than external features at high resolution, we found that the two feature sets reverse in importance as resolution decreases. These results suggest that the visual system uses a highly non-linear cue-fusion strategy in combining internal and external features along the dimension of image resolution and that the configural cues that relate the two feature sets play an important role in judgments of facial identity.

Relevance: 60.00%

Abstract:

In a recent investigation, Landsat TM and ETM+ data were used to simulate remotely sensed images of different resolutions (from 30 to 1100 m) and to analyze the effect of resolution on a range of landscape metrics associated with spatial patterns of forest fragmentation in Chapare, Bolivia, since the mid-1980s. Whereas most metrics were found to be highly dependent on pixel size, several fractal metrics (DLFD, MPFD, and AWMPFD) were apparently independent of image resolution, contradicting a sizeable body of literature indicating that the fractal dimensions of natural objects depend strongly on image characteristics. The present re-analysis of the Chapare images, using two alternative algorithms routinely used for the evaluation of fractal dimensions, shows that the values of the box-counting and information fractal dimensions are systematically larger, sometimes by as much as 85%, than the "fractal" indices DLFD, MPFD, and AWMPFD for the same images. In addition, the geometrical fractal features of the forest and non-forest patches in the Chapare region strongly depend on the resolution of the images used in the analysis. The largest dependency on resolution occurs for the box-counting fractal dimension in the case of the non-forest patches in 1993, where the difference between the 30 and 1100 m resolution images corresponds to 24% of the full theoretical range (1.0 to 2.0) of the mass fractal dimension. The observation that the indices DLFD, MPFD, and AWMPFD, unlike the classical fractal dimensions, appear relatively unaffected by resolution in the case of the Chapare images seems due essentially to the fact that these indices are based on a heuristic, "non-geometric" approach to fractals. Because they lack a foundation in fractal geometry, nothing guarantees that these indices will be resolution-independent in general.
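For reference, the classical box-counting dimension discussed here can be estimated as follows; this is a textbook implementation on a synthetic binary patch map, not the code used in the re-analysis:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32, 64)):
    """Classical box-counting estimate of the fractal dimension of a
    binary patch mask: count occupied boxes N(s) at each box size s and
    fit the slope of log N(s) against log(1/s)."""
    counts = []
    for s in sizes:
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Example: a random 'patch' map; real inputs are forest/non-forest masks.
rng = np.random.default_rng(4)
mask = rng.random((256, 256)) > 0.6
print(box_counting_dimension(mask))
```

Because the box counts are re-evaluated at every scale, the estimate is, as the text argues, inherently sensitive to the pixel size of the input image.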

Relevance: 60.00%

Abstract:

This paper presents a region-based methodology for segmenting Digital Elevation Models obtained from laser scanning data. The methodology is based on two sequential techniques: a recursive splitting technique using a quadtree structure, followed by a region-merging technique using a Markov Random Field model. The recursive splitting technique starts by splitting the Digital Elevation Model into homogeneous regions. However, due to slight height differences in the Digital Elevation Model, region fragmentation can be relatively high. To minimize this fragmentation, a region-merging technique based on the Markov Random Field model is applied to the previously segmented data. The resulting regions are first structured using a Region Adjacency Graph: each node of the graph represents a region of the segmented Digital Elevation Model, and two nodes are connected if the corresponding regions share a common boundary. The random variable associated with each node is then assumed to follow the Markov Random Field model. This hypothesis allows the derivation of the posterior probability distribution, whose solution is obtained by Maximum a Posteriori estimation; regions with a high probability of similarity are merged. Experiments carried out with laser scanning data showed that the methodology separates the objects in the Digital Elevation Model with little fragmentation.
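A hedged sketch of the recursive splitting stage described above (the merging stage, which requires the Region Adjacency Graph and the MRF machinery, is omitted); the homogeneity criterion and thresholds are illustrative assumptions:

```python
import numpy as np

def quadtree_split(dem, r0, c0, size, max_range=0.5, min_size=4, regions=None):
    """Recursive splitting step: a block is kept as one region if its
    height range is small enough, otherwise it is split into four
    quadrants. The MRF-based merging stage would then operate on a
    region adjacency graph built from these leaves."""
    if regions is None:
        regions = []
    block = dem[r0:r0 + size, c0:c0 + size]
    if size <= min_size or block.max() - block.min() <= max_range:
        regions.append((r0, c0, size))        # homogeneous leaf region
        return regions
    half = size // 2
    for dr in (0, half):
        for dc in (0, half):
            quadtree_split(dem, r0 + dr, c0 + dc, half,
                           max_range, min_size, regions)
    return regions

rng = np.random.default_rng(5)
dem = np.cumsum(rng.normal(size=(128, 128)), axis=0)   # toy height field
leaves = quadtree_split(dem, 0, 0, 128)
print(len(leaves), "leaf regions")
```

The over-segmentation visible in the number of leaves is exactly the fragmentation that motivates the subsequent MRF merging step in the paper.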