16 results for spatial resolution
in the Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
Amorphous glass/ZnO-Al/p(a-Si:H)/i(a-Si:H)/n(a-Si1-xCx:H)/Al imagers with different n-layer resistivities were produced by the plasma enhanced chemical vapour deposition (PE-CVD) technique. An image is projected onto the sensing element and leads to spatially confined depletion regions that can be read out by scanning the photodiode with a low-power modulated laser beam. The essence of the scheme is the analog readout and the absence of semiconductor arrays or electrode potential manipulations to transfer the information coming from the transducer. The influence of the intensity of the optical image projected onto the sensor surface on the sensor output characteristics (sensitivity, linearity, blooming, resolution and signal-to-noise ratio) is analysed for different material compositions (0.5 < x < 1). The results show that the responsivity and the spatial resolution are limited by the conductivity of the doped layers. An enhancement of one order of magnitude in the image intensity signal and in the spatial resolution is achieved at a light flux of 0.2 mW cm⁻² by decreasing the n-layer conductivity by the same amount. A physical model supported by electrical simulation gives insight into the image-sensing technique used.
Abstract:
In this review paper different designs based on stacked p-i'-n-p-i-n heterojunctions are presented and compared with single p-i-n sensing structures. The imagers utilise self-field-induced depletion layers for light detection and a modulated laser beam for sequential readout. The effects of the sensing element structure, cell configuration (single or tandem), and light source properties (intensity and wavelength) are correlated with the sensor output characteristics (light-to-dark sensitivity, spatial resolution, linearity and S/N ratio). The readout frequency is optimized, showing that scan speeds of up to 10⁴ lines per second can be achieved without degradation of the resolution. Multilayered p-i'-n-p-i-n heterostructures can also be used as wavelength-division multiplexing/demultiplexing devices in the visible range. Here the sensor element faces the modulated light from different input colour channels, each one with a specific wavelength and bit rate. By reading out the photocurrent at an appropriate applied bias, the information is multiplexed or demultiplexed and can be transmitted or recovered again. Electrical models are presented to support the sensing methodologies.
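A minimal sketch of the demultiplexing idea described above, assuming (as an illustration only, not the papers' device model) that each input colour channel contributes to the photocurrent linearly with a bias-dependent responsivity; reading the photocurrent at as many bias values as there are channels then gives a linear system that can be inverted to recover the individual channel bit streams. The responsivity values and bit patterns are invented.

```python
# Minimal sketch (not the papers' device model): recovering two colour channels
# from photocurrents read at two applied bias values, assuming each channel
# contributes linearly with a bias-dependent responsivity. The responsivity
# matrix and bit streams below are invented for illustration only.
import numpy as np

# Rows: bias conditions; columns: input channels (e.g. red, blue)
responsivity = np.array([[0.9, 0.2],
                         [0.4, 0.8]])   # A/W, assumed values

# Hypothetical ON/OFF bit streams of the two optical channels (W)
red_bits = np.array([1, 0, 1, 1, 0], dtype=float) * 1e-6
blue_bits = np.array([0, 1, 1, 0, 1], dtype=float) * 1e-6
channels = np.vstack([red_bits, blue_bits])

# Multiplexed measurement: photocurrent at each bias is a weighted sum of channels
photocurrents = responsivity @ channels

# Demultiplexing: invert the responsivity matrix to recover each channel
recovered = np.linalg.solve(responsivity, photocurrents)
print("recovered red  bits:", (recovered[0] > 0.5e-6).astype(int))
print("recovered blue bits:", (recovered[1] > 0.5e-6).astype(int))
```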
Abstract:
Master's degree in Radiation Applied to Health Technologies. Specialisation area: Digital Imaging with X-Ray Radiation.
Abstract:
Master's degree in Radiation Applied to Health Technologies. Specialisation area: Magnetic Resonance.
Abstract:
Master's degree in Radiation Applied to Health Technologies. Specialisation area: Radiation Therapy.
Abstract:
Objective - To describe and validate a simulation of the basic features of the GE Millennium MG gamma camera using the GATE Monte Carlo platform. Material and methods - Crystal size and thickness, parallel-hole collimation and a realistic energy acquisition window were simulated in the GATE platform. GATE results were compared with experimental data in the following imaging conditions: a point source of 99mTc at different positions during static imaging and tomographic acquisitions using two different energy windows. The agreement between the events expected and those detected by simulation was assessed with the Mann–Whitney–Wilcoxon test. Comparisons were made regarding measurements of sensitivity and spatial resolution, both static and tomographic. Simulated and experimental spatial resolutions for tomographic data were compared with the Kruskal–Wallis test to assess simulation accuracy for this parameter. Results - There was good agreement between simulated and experimental data. The number of decays expected, when compared with the number of decays registered, showed a small deviation (≤0.007%). The sensitivity comparisons between static acquisitions for different source-to-collimator distances (1, 5, 10, 20 and 30 cm) showed differences of 4.4%, 5.5%, 4.2%, 5.5% and 4.5% for the 126–154 keV energy window and 5.4%, 6.3%, 6.3%, 5.8% and 5.3% for the 130–158 keV window, respectively. For the tomographic acquisitions, the mean differences were 7.5% and 9.8% for the 126–154 keV and 130–158 keV energy windows, respectively. Comparison of simulated and experimental spatial resolutions for tomographic data showed no statistically significant differences at the 95% confidence level. Conclusions - Adequate simulation of the system's basic features using the GATE Monte Carlo simulation platform was achieved and validated.
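A minimal sketch, not the authors' code, of how simulated and experimental gamma-camera measurements could be compared with the two non-parametric tests named in the abstract. The numeric values are illustrative placeholders, not the study's data.

```python
# Minimal sketch (not the authors' code): comparing simulated and experimental
# gamma-camera measurements with the non-parametric tests named in the abstract.
# All numeric values below are illustrative placeholders, not the study's data.
import numpy as np
from scipy import stats

# Hypothetical static sensitivities (counts/s/MBq) at 1, 5, 10, 20, 30 cm
experimental = np.array([72.1, 71.4, 70.9, 70.2, 69.8])
simulated    = np.array([75.3, 75.3, 73.9, 74.1, 72.9])

# Relative difference per distance, reported as percentages in the abstract
rel_diff = 100.0 * (simulated - experimental) / experimental
print("relative differences (%):", np.round(rel_diff, 1))

# Mann-Whitney-Wilcoxon test: do the two samples come from the same distribution?
u_stat, p_u = stats.mannwhitneyu(simulated, experimental, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_u:.3f}")

# Kruskal-Wallis test on tomographic spatial-resolution measurements (FWHM, mm)
fwhm_experimental = np.array([9.8, 10.1, 9.9, 10.0])
fwhm_simulated    = np.array([9.7, 10.0, 10.2, 9.9])
h_stat, p_h = stats.kruskal(fwhm_experimental, fwhm_simulated)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_h:.3f}")
# p > 0.05 would indicate no statistically significant difference at the 95% level
```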
Abstract:
Objectives - To review available guidance for quality assurance (QA) in mammography and discuss its contribution to harmonising practices worldwide. Methods - A literature search was performed on different sources to identify guidance documents for QA in mammography available worldwide from international bodies, healthcare providers and professional/scientific associations. The guidance documents identified were reviewed, and a selection was compared in terms of type of guidance (clinical/technical), technology and proposed QA methodologies, focusing on dose and image quality (IQ) performance assessment. Results - Fourteen protocols (targeted at conventional and digital mammography) were reviewed. All included recommendations for testing the acquisition, processing and display systems associated with mammographic equipment. All guidance reviewed highlighted the importance of dose assessment and of testing the Automatic Exposure Control (AEC) system. Recommended tests for assessment of IQ showed variations in the proposed methodologies. Recommended testing focused on assessment of low-contrast detection, spatial resolution and noise. QC of image display is recommended following the American Association of Physicists in Medicine guidelines. Conclusions - The existing QA guidance for mammography is derived from key documents (American College of Radiology and European Union guidelines) and proposes similar tests despite variations in detail and methodology. Studies reporting QA data should provide detail on the experimental technique to allow robust data comparison. Countries aiming to implement a mammography QA program may select/prioritise tests depending on the available technology and resources.
Abstract:
Susceptibility Weighted Imaging (SWI) is a Magnetic Resonance Imaging (MRI) technique that combines high spatial resolution and sensitivity to depict magnetic susceptibility differences between tissues. It is extremely sensitive to venous blood due to the iron content of deoxyhemoglobin. The aim of this study was to evaluate, through the SWI technique, differences in the cerebral venous vasculature according to the variation of blood pressure values. 20 subjects divided into two groups (10 hypertensive and 10 normotensive patients) were scanned on a 1.5 T Siemens® Avanto MRI system using a 4-channel synergy head coil. The acquired sequences were T1w, T2w-FLAIR, T2* and SWI. The Contrast-to-Noise Ratio (CNR) was assessed on MinIP (Minimum Intensity Projection) and Magnitude images by drawing free-hand ROIs in venous structures: the Superior Sagittal Sinus (SSS), the Internal Cerebral Vein (ICV) and the Sinus Confluence (SC). The obtained values were presented as descriptive statistics (quartile and extremes diagrams) and the results were compared between groups. CNR showed higher values for the normotensive group in MinIP: 108.89 ± 6.907 for the ICV, 238.73 ± 18.556 for the SC and 239.384 ± 52.303 for the SSS. These values exceed those of the hypertensive group by about 46 a.u. on average. Comparing the results of the Magnitude and MinIP images, lower CNR values were obtained for the hypertensive group. There were differences in the CNR values between the two groups, these differences being more pronounced in the large vessels (SSS and SC). SWI is a promising technique to evaluate and characterise blood pressure variation in the studied vessels, adding a physiological perspective to MRI and giving a new approach to radiological vascular studies.
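A minimal sketch, not the study's code, of how a contrast-to-noise ratio can be computed from hand-drawn ROIs on a MinIP or Magnitude image. The synthetic slice, the ROI masks and the exact CNR definition are assumptions for illustration only.

```python
# Minimal sketch (not the study's code): contrast-to-noise ratio (CNR) from
# ROIs drawn on a MinIP or Magnitude image. The synthetic data, masks and the
# CNR definition below are assumptions for illustration only.
import numpy as np

def cnr(image, vessel_mask, background_mask):
    """CNR = |mean(vessel) - mean(background)| / std(background)."""
    vessel = image[vessel_mask]
    background = image[background_mask]
    return np.abs(vessel.mean() - background.mean()) / background.std()

# Hypothetical example: a synthetic 2D slice with a dark vein on a brighter background
rng = np.random.default_rng(0)
slice_minip = rng.normal(loc=240.0, scale=10.0, size=(128, 128))
vessel_mask = np.zeros_like(slice_minip, dtype=bool)
vessel_mask[60:68, 60:68] = True          # stands in for a free-hand ROI on the SSS
slice_minip[vessel_mask] -= 120.0         # veins appear hypointense on MinIP
background_mask = ~vessel_mask

print(f"CNR (a.u.): {cnr(slice_minip, vessel_mask, background_mask):.1f}")
```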
Abstract:
X-ray fluoroscopy is essential in both diagnosis and medical intervention, although it may contribute significant radiation doses to patients, which have to be optimised and justified. It is therefore crucial that the patient be exposed to the lowest achievable dose without compromising image quality. The purpose of this study was to analyse quality control measurements, particularly dose rates, contrast and spatial resolution, of Portuguese fluoroscopy equipment and to contribute to the establishment of reference levels for equipment performance parameters. Measurements carried out between 2007 and 2013 on 143 fluoroscopy units distributed across 34 health units nationwide were analysed. The measurements suggest that the image quality and dose rates of Portuguese equipment are consistent with other studies and, in general, comply with Portuguese law. However, there is still room for improvement aimed at optimisation at a national level.
Abstract:
Introduction: The purpose of this review is to gather and analyse current research publications to evaluate Sinogram-Affirmed Iterative Reconstruction (SAFIRE). The aim of this review is to investigate whether this algorithm is capable of reducing the dose delivered during CT imaging while maintaining image quality. Recent research shows that children have a greater risk per unit dose due to increased radiosensitivity and longer life expectancy, which makes it particularly important to reduce the radiation dose received by children. Discussion: Recent publications suggest that SAFIRE is capable of reducing image noise in CT images, thereby creating the potential to reduce dose. Some publications suggest that a decrease in dose of up to 64% compared with filtered back projection can be accomplished without a change in image quality. However, the literature suggests that using a higher SAFIRE strength may alter the image texture, creating an overly ‘smoothed’ image that lacks contrast. Some literature reports that SAFIRE decreases low-contrast detectability as well as spatial resolution. Publications tend to agree that SAFIRE strength three is optimal for an acceptable level of visual image quality, but more research is required. The importance of striking a balance between dose reduction and image quality is stressed. Most of the publications in this literature review were completed using adults or phantoms, and a distinct lack of literature on paediatric patients is noted. Conclusion: It is necessary to find an optimal way to balance dose reduction and image quality. More research relating SAFIRE to paediatric patients is required to fully investigate the dose reduction potential in this population for a range of different SAFIRE strengths.
Abstract:
Hyperspectral imaging can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems of hyperspectral data analysis is the presence of mixed pixels, due to the low spatial resolution of such images. This means that several spectrally pure signatures (endmembers) are combined into the same mixed pixel. Linear spectral unmixing follows an unsupervised approach which aims at inferring pure spectral signatures and their material fractions at each pixel of the scene. The huge data volumes acquired by such sensors put stringent requirements on processing and unmixing methods. This paper proposes an efficient implementation of an unsupervised linear unmixing method, simplex identification via split augmented Lagrangian (SISAL), on GPUs using CUDA. The method finds the smallest simplex by solving a sequence of nonsmooth convex subproblems, using variable splitting to obtain a constrained formulation and then applying an augmented Lagrangian technique. The parallel implementation of SISAL presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced memory accesses. The results presented herein indicate that the GPU implementation can significantly accelerate the method's execution over big datasets while maintaining the method's accuracy.
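A minimal numpy sketch, not the paper's CUDA or SISAL code, of the linear mixing model the abstract refers to: each mixed pixel is a convex combination of endmember signatures, so the data lie inside a simplex whose vertices a minimum-volume method such as SISAL estimates. All spectra and abundances are synthetic.

```python
# Minimal numpy sketch (not the paper's CUDA/SISAL code): the linear mixing
# model behind unmixing. Endmember signatures and abundances are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_bands, n_endmembers, n_pixels = 50, 3, 1000

# Spectral signatures of the pure materials (endmembers), one per column
endmembers = rng.uniform(0.1, 1.0, size=(n_bands, n_endmembers))

# Abundances: non-negative and summing to one, so pixels lie in a simplex
abundances = rng.dirichlet(alpha=np.ones(n_endmembers), size=n_pixels).T

# Mixed pixels = endmembers @ abundances + sensor noise
pixels = endmembers @ abundances + 0.01 * rng.standard_normal((n_bands, n_pixels))

# An unmixing method such as SISAL estimates the endmember matrix (the minimum
# volume simplex enclosing the data); abundances then follow by inversion.
abund_est = np.linalg.pinv(endmembers) @ pixels   # oracle inversion for illustration
print("mean abundance error:", np.abs(abund_est - abundances).mean())
```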
Abstract:
Remote hyperspectral sensors collect large amounts of data per flight, usually with low spatial resolution. Since the bandwidth of the connection between the satellite/airborne platform and the ground station is limited, an onboard compression method is desirable to reduce the amount of data to be transmitted. This paper presents a parallel implementation of a compressive sensing method, called parallel hyperspectral coded aperture (P-HYCA), for graphics processing units (GPU) using the compute unified device architecture (CUDA). This method takes into account two main properties of hyperspectral datasets, namely the high correlation existing among the spectral bands and the generally low number of endmembers needed to explain the data, which largely reduces the number of measurements necessary to correctly reconstruct the original data. Experimental results conducted using synthetic and real hyperspectral datasets on two different GPU architectures by NVIDIA, GeForce GTX 590 and GeForce GTX TITAN, reveal that the use of GPUs can provide real-time compressive sensing performance. The achieved speedup is up to 20 times when compared with the processing time of HYCA running on one core of an Intel i7-2600 CPU (3.4 GHz) with 16 GB of memory.
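A minimal sketch, not P-HYCA itself, of why a low number of endmembers allows hyperspectral pixels to be recovered from few random measurements: if the spectra live in a low-dimensional subspace spanned by the endmembers, the abundances can be solved for in the compressed domain. All data are synthetic and the measurement operator is a plain random projection, an assumption made for illustration.

```python
# Minimal sketch (not P-HYCA): why a low number of endmembers lets hyperspectral
# pixels be recovered from few random measurements. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n_bands, n_endmembers, n_measurements = 200, 5, 20   # 20 << 200

endmembers = rng.uniform(size=(n_bands, n_endmembers))
abundance = rng.dirichlet(np.ones(n_endmembers))     # one pixel's fractions
pixel = endmembers @ abundance                        # true spectrum (200 bands)

# Compressive measurement: a random projection onto only 20 values
measurement_matrix = rng.standard_normal((n_measurements, n_bands))
measurements = measurement_matrix @ pixel

# Reconstruction assuming the spectral subspace (the endmembers) is known:
# solve for abundances in the compressed domain, then re-expand the spectrum.
compressed_basis = measurement_matrix @ endmembers   # 20 x 5 system
abund_est, *_ = np.linalg.lstsq(compressed_basis, measurements, rcond=None)
pixel_est = endmembers @ abund_est

print("relative reconstruction error:",
      np.linalg.norm(pixel_est - pixel) / np.linalg.norm(pixel))
```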
Abstract:
One of the main problems of hyperspectral data analysis is the presence of mixed pixels due to the low spatial resolution of such images. Linear spectral unmixing aims at inferring pure spectral signatures and their fractions at each pixel of the scene. The huge data volumes acquired by hyperspectral sensors put stringent requirements on processing and unmixing methods. This letter proposes an efficient implementation of the method called simplex identification via split augmented Lagrangian (SISAL), which exploits the graphics processing unit (GPU) architecture at a low level using the Compute Unified Device Architecture. SISAL aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure pixel assumption is violated. The proposed implementation is performed in a pixel-by-pixel fashion, using coalesced accesses to memory and exploiting shared memory to store temporary data. Furthermore, the kernels have been optimized to minimize thread divergence, thereby achieving high GPU occupancy. The experimental results obtained for simulated and real hyperspectral data sets reveal speedups of up to 49 times, which demonstrates that the GPU implementation can significantly accelerate the method's execution over big data sets while maintaining the method's accuracy.
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures (a toy example of this projection is sketched after this paragraph). As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists in finding a linear decomposition of the observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of selecting pixels to play the role of mixed sources is not straightforward.
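A minimal numpy sketch, not from the chapter, of the orthogonal subspace projection idea: build the projector onto the subspace orthogonal to the undesired signatures and apply the matched detector for the signature of interest. All spectra, abundances and noise levels are synthetic assumptions.

```python
# Minimal sketch (not the chapter's code): orthogonal subspace projection (OSP).
# Synthetic spectra; 'target' is the signature of interest, 'undesired' the rest.
import numpy as np

rng = np.random.default_rng(3)
n_bands = 100
target = rng.uniform(size=(n_bands, 1))          # desired endmember signature d
undesired = rng.uniform(size=(n_bands, 4))       # undesired signatures U

# Projector onto the subspace orthogonal to the undesired signatures:
# P = I - U (U^T U)^{-1} U^T
P = np.eye(n_bands) - undesired @ np.linalg.inv(undesired.T @ undesired) @ undesired.T

# A mixed pixel containing 30% of the target plus undesired material and noise
pixel = 0.3 * target[:, 0] + undesired @ np.array([0.2, 0.2, 0.2, 0.1])
pixel += 0.01 * rng.standard_normal(n_bands)

# OSP detector output: d^T P x (large when the target is present)
score = float(target[:, 0] @ P @ pixel)
baseline = float(target[:, 0] @ P @ (undesired @ np.array([0.25, 0.25, 0.25, 0.25])))
print(f"score with target: {score:.3f}, score without target: {baseline:.3f}")
```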
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular (a toy example of this reduction step is sketched after this paragraph). A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
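A minimal sketch, not from the chapter, of the dimensionality reduction step mentioned above: projecting hyperspectral pixels onto their leading components before unmixing, here with a plain SVD on synthetic data rather than MNF or the method of reference 49.

```python
# Minimal sketch (not the chapter's code): PCA/SVD-style dimensionality reduction
# of hyperspectral pixels before unmixing, applied to synthetic data.
import numpy as np

rng = np.random.default_rng(4)
n_bands, n_endmembers, n_pixels = 120, 4, 5000

endmembers = rng.uniform(size=(n_bands, n_endmembers))
abundances = rng.dirichlet(np.ones(n_endmembers), size=n_pixels).T
data = endmembers @ abundances + 0.005 * rng.standard_normal((n_bands, n_pixels))

# Centre the data and take the leading singular vectors as the signal subspace
mean_spectrum = data.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(data - mean_spectrum, full_matrices=False)
k = n_endmembers - 1                      # a p-endmember simplex spans p-1 dimensions
reduced = U[:, :k].T @ (data - mean_spectrum)

explained = (s[:k] ** 2).sum() / (s ** 2).sum()
print(f"reduced from {n_bands} bands to {k} components, "
      f"explaining {100 * explained:.2f}% of the variance")
```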
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
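A minimal sketch, not from the chapter, showing why Dirichlet-distributed abundances are a natural choice here: they satisfy positivity and full additivity by construction, and the same sum-to-one constraint makes the fractions mutually dependent, which is exactly the property that undermines the ICA and IFA independence assumptions discussed above. The concentration parameters are arbitrary illustrative values.

```python
# Minimal sketch (not the chapter's code): Dirichlet-distributed abundances
# satisfy positivity and sum-to-one (full additivity), which also makes them
# mutually dependent, the property that undermines the ICA/IFA assumptions.
import numpy as np

rng = np.random.default_rng(5)
abundances = rng.dirichlet(alpha=[2.0, 2.0, 2.0], size=100000)

print("every sample sums to one:", np.allclose(abundances.sum(axis=1), 1.0))
print("all fractions non-negative:", bool((abundances >= 0).all()))

# The sum-to-one constraint induces negative correlation between fractions,
# so they are not independent sources as ICA assumes.
corr = np.corrcoef(abundances, rowvar=False)
print("correlation between fraction 0 and fraction 1:", round(corr[0, 1], 3))
```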
Abstract:
Anaemia has a significant impact on child development and mortality and is a severe public health problem in most countries in sub-Saharan Africa. Nutritional and infectious causes of anaemia are geographically variable, and anaemia maps based on information on the major aetiologies of anaemia are important for identifying the communities most in need and the relative contribution of the major causes. We investigated the consistency between ecological and individual-level approaches to anaemia mapping by building spatial anaemia models for children aged ≤15 years using different modeling approaches. We aimed to a) quantify the role of malnutrition, malaria, Schistosoma haematobium and soil-transmitted helminths (STH) in anaemia endemicity in children aged ≤15 years and b) develop a high-resolution predictive risk map of anaemia for the municipality of Dande in Northern Angola. We used parasitological survey data on children aged ≤15 years to build Bayesian geostatistical models of malaria (PfPR≤15), S. haematobium, Ascaris lumbricoides and Trichuris trichiura and to predict small-scale spatial variation in these infections. The predictions and their associated uncertainty were used as inputs for a model of anaemia prevalence to predict small-scale spatial variation of anaemia. Stunting, PfPR≤15, and S. haematobium infection were significantly associated with anaemia risk. An estimated 12.5%, 15.6%, and 9.8% of anaemia cases could be averted by treating malnutrition, malaria, and S. haematobium, respectively. Spatial clusters of high anaemia risk (>86%) were identified. Using an individual-level approach to anaemia mapping at a small spatial scale, we found that anaemia in children aged ≤15 years is highly heterogeneous and that malnutrition and parasitic infections are important contributors to the spatial variation in anaemia risk. The results presented in this study can help inform the integration of the current provincial malaria control program with ancillary micronutrient supplementation and the control of neglected tropical diseases, such as urogenital schistosomiasis and STH infection.
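A minimal sketch, not the study's Bayesian geostatistical model, of how the fraction of anaemia cases avertable by removing one risk factor can be estimated from a fitted risk model: compare the predicted prevalence under the observed exposures with the prevalence predicted under a counterfactual in which that exposure is set to zero. The covariates, coefficients and prevalences below are invented for illustration.

```python
# Minimal sketch (not the study's Bayesian geostatistical model): estimating the
# fraction of anaemia cases that could be averted by removing one risk factor,
# by comparing predicted prevalence under observed vs counterfactual exposure.
# The coefficients and covariate values below are invented for illustration.
import numpy as np

rng = np.random.default_rng(6)
n_children = 10000

# Hypothetical child-level covariates (1 = exposed / infected)
stunting = rng.binomial(1, 0.30, n_children)
malaria = rng.binomial(1, 0.25, n_children)
schisto = rng.binomial(1, 0.15, n_children)

def predicted_prevalence(stunt, mal, sch):
    """Logistic risk model with assumed (illustrative) coefficients."""
    logit = -0.8 + 0.5 * stunt + 0.7 * mal + 0.4 * sch
    return (1.0 / (1.0 + np.exp(-logit))).mean()

observed = predicted_prevalence(stunting, malaria, schisto)
no_malaria = predicted_prevalence(stunting, np.zeros(n_children), schisto)

averted_fraction = (observed - no_malaria) / observed
print(f"predicted anaemia prevalence: {100 * observed:.1f}%")
print(f"cases averted by eliminating malaria: {100 * averted_fraction:.1f}%")
```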