27 results for image-based dietary records
Abstract:
We propose a 3D-2D image registration method that relates image features of 2D projection images to the transformation parameters of the 3D image by nonlinear regression. The method is compared with a conventional registration method based on iterative optimization. For evaluation, simulated X-ray images (DRRs) were generated from coronary artery tree models derived from 3D CTA scans. Registration of nine vessel trees was performed, and alignment quality was measured by the mean target registration error (mTRE). The regression approach proved slightly less accurate, but considerably more robust, than the iterative optimization method.
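For context, the mTRE metric used here is commonly computed as the mean Euclidean distance between corresponding target points mapped by the estimated and ground-truth transforms. A minimal sketch of that computation, with all names hypothetical:

```python
import numpy as np

def mtre(points, T_est, T_true):
    """Mean target registration error between two rigid transforms.

    points : (N, 3) array of 3D target points (e.g., vessel centerline points)
    T_est, T_true : (4, 4) homogeneous transforms (estimated and ground truth)
    """
    homog = np.hstack([points, np.ones((len(points), 1))])  # (N, 4)
    p_est = (homog @ T_est.T)[:, :3]    # targets under the estimated transform
    p_true = (homog @ T_true.T)[:, :3]  # targets under the true transform
    return np.linalg.norm(p_est - p_true, axis=1).mean()
```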
Abstract:
Purpose - This study aims to investigate the influence of tube potential (kVp) variation on perceptual image quality and effective dose (E) for the pelvis, using automatic exposure control (AEC) and non-AEC in a Computed Radiography (CR) system. Methods and materials - Two experiments were performed with an anthropomorphic pelvis phantom to determine the effects of applying the 10 kVp rule with and without AEC. Images were acquired in 10 kVp increments (60–120 kVp) for both experiments. The first experiment, based on seven AEC combinations, produced 49 images. The mean mAs from each kVp increment was used as a baseline for the second experiment, which produced 35 images. A total of 84 images were produced, and a panel of 5 experienced observers scored the images using two alternative forced choice (2AFC) visual grading software. PCXMC software was used to estimate E. Results - A decrease in perceptual image quality with increasing kVp was observed in both the non-AEC and AEC experiments; however, the differences were not statistically significant (p > 0.05). Image quality scores from all observers at 10 kVp increments for all mAs values in non-AEC mode were better up to 90 kVp. E results show a statistically significant decrease (p = 0.000) at the 75th percentile, from 0.37 mSv at 60 kVp to 0.13 mSv at 120 kVp, when applying the 10 kVp rule in non-AEC mode. Conclusion - Using the 10 kVp rule, no significant reduction in perceptual image quality is observed as kVp increases, while a marked and significant E reduction is observed.
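For context, the 10 kVp rule referenced in this study is the common radiographic guideline that each 10 kVp increase in tube potential allows the mAs to be roughly halved while keeping detector exposure approximately constant. A toy illustration of the rule, with hypothetical starting values:

```python
def apply_10kvp_rule(kvp, mas, steps=1):
    """Apply the 10 kVp rule: each +10 kVp step halves the mAs while
    keeping detector exposure roughly constant."""
    return kvp + 10 * steps, mas / (2 ** steps)

# Hypothetical example: step from 60 kVp / 40 mAs up to 120 kVp.
for step in range(7):
    print(apply_10kvp_rule(60, 40, step))
```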
Abstract:
Purpose - To develop and validate a psychometric scale for assessing image quality perception for chest X-ray images. Methods - Bandura's theory was used to guide scale development. A review of the literature was undertaken to identify items/factors which could be used to evaluate image quality using a perceptual approach. A draft scale was then created (22 items) and presented to a focus group (student and qualified radiographers), within which the draft scale was discussed and modified. A series of seven postero-anterior chest images was generated using a phantom with a range of image qualities. Image quality perception was confirmed for the seven images using the signal-to-noise ratio (SNR 17.2–36.5). Participants (student and qualified radiographers and radiology trainees) were then invited to independently score each of the seven images using the draft image quality perception scale. Cronbach's alpha was used to test internal reliability. Results - Fifty-three participants used the scale to grade image quality perception on each of the seven images. The aggregated mean scale score increased with increasing SNR, from 42.1 to 87.7 (r = 0.98, P < 0.001). For each of the 22 individual scale items there was clear differentiation between low-, mid- and high-quality images. A Cronbach's alpha coefficient of >0.7 was obtained for each of the seven images. Conclusion - This study represents the first development of a chest image quality perception scale based on Bandura's theory. There was excellent correlation between the image quality perception scores derived using the scale and the SNR. Further research will involve a more detailed item and factor analysis.
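For reference, Cronbach's alpha for a k-item scale is computed from the item variances and the variance of the summed score; a minimal sketch (not the study's code):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for internal consistency.

    scores : (n_respondents, k_items) array of item scores
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```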
Abstract:
Radiotherapy (RT) is one of the most important approaches in the treatment of cancer, and its performance can be improved in three different ways: through optimization of the dose distribution, through the use of different irradiation techniques, or through the study of radiobiological initiatives. The first is purely physical because it relates to the physical dose distribution. The others are purely radiobiological because they increase the differential effect between the tumour and the healthy tissues. Treatment Planning Systems (TPS) are used in RT to create dose distributions that maximize tumour control and minimize complications in the healthy tissues. Inverse planning uses dose optimization techniques that satisfy criteria specified by the user regarding the target and the organs at risk (OARs). Dose optimization is made possible through the analysis of dose-volume histograms (DVHs) and the use of computed tomography, magnetic resonance and other digital imaging techniques.
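To make the DVH analysis concrete: a cumulative DVH reports, for each dose level, the fraction of a structure's volume receiving at least that dose. A minimal sketch, assuming per-voxel doses for one structure are available:

```python
import numpy as np

def cumulative_dvh(voxel_doses, n_bins=100):
    """Cumulative dose-volume histogram for one structure.

    voxel_doses : 1D array of dose values (Gy) for voxels inside the structure
    Returns dose levels and the volume fraction receiving >= each level.
    """
    doses = np.asarray(voxel_doses, dtype=float)
    levels = np.linspace(0.0, doses.max(), n_bins)
    volume_fraction = np.array([(doses >= d).mean() for d in levels])
    return levels, volume_fraction
```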
Abstract:
In this paper, a new PCA-based positioning sensor and localization system for mobile robots operating in unstructured environments (e.g. industry, services, domestic settings) is proposed and experimentally validated. The inexpensive positioning system resorts to principal component analysis (PCA) of images acquired by an onboard video camera looking upwards at the ceiling. This solution has the advantage of avoiding the need to select and extract features. The principal components of the acquired images are compared with previously registered images, stored in a reduced onboard image database, and the measured position is fused with odometry data. Optimal estimates of position and slippage are provided by Kalman filters with globally stable error dynamics. The experimental validation reported in this work focuses on the results of a set of experiments carried out in a real environment, where the robot travels along a lawn-mower trajectory. A small position error estimate with bounded covariance was always observed, for arbitrarily long experiments, and slippage was estimated accurately in real time.
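The core of such a PCA-based approach is projecting each ceiling image onto a basis learned from the stored database and matching in the low-dimensional space; a simplified sketch of that idea (Kalman fusion with odometry omitted, all names hypothetical):

```python
import numpy as np

def build_pca_basis(database_images, n_components=20):
    """database_images : (n_images, n_pixels) flattened reference images."""
    mean = database_images.mean(axis=0)
    centered = database_images - mean
    # Principal components via SVD of the centered image matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]        # (n_components, n_pixels)
    coords = centered @ basis.T      # database images in PCA space
    return mean, basis, coords

def localize(query_image, mean, basis, coords, positions):
    """Return the stored position whose image is closest in PCA space."""
    q = (query_image - mean) @ basis.T
    idx = np.argmin(np.linalg.norm(coords - q, axis=1))
    return positions[idx]
```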
Abstract:
In the field of appearance-based robot localization, the mainstream approach uses a quantized representation of local image features. An alternative strategy is to exploit raw feature descriptors, thus avoiding the approximations introduced by quantization. In this work, the quantized and non-quantized representations are compared with respect to their discriminativity, in the context of the robot global localization problem. Having demonstrated the advantages of the non-quantized representation, the paper proposes mechanisms to reduce the computational burden this approach would carry when applied in its simplest form. This reduction is achieved through a hierarchical strategy which gradually discards candidate locations, and by exploring two simplifying assumptions about the training data. The potential of the non-quantized representation is exploited by resorting to the entropy-discriminativity relation: the non-quantized representation facilitates the assessment of the distinctiveness of features through the entropy measure. Building on this finding, the robustness of the localization system is enhanced by modulating the importance of features according to the entropy measure. Experimental results support the effectiveness of this approach, as well as the validity of the proposed computation reduction methods.
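One plausible reading of the entropy-discriminativity relation: a descriptor whose similarity mass spreads evenly over many candidate locations has high entropy and is therefore weakly informative, so its vote is down-weighted. A hypothetical sketch of such a weighting (not the paper's exact formulation):

```python
import numpy as np

def entropy_weight(similarities):
    """Weight a feature by the normalized entropy of its similarity
    distribution over candidate locations: peaked = distinctive.

    similarities : 1D array of non-negative feature-to-location similarities
    """
    p = similarities / similarities.sum()
    entropy = -np.sum(p * np.log(p + 1e-12))
    max_entropy = np.log(len(p))        # entropy of a uniform distribution
    return 1.0 - entropy / max_entropy  # 1 = fully distinctive, 0 = uninformative
```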
Abstract:
Project work submitted to obtain the Master's degree in Informatics and Computer Engineering.
Abstract:
Project work carried out to obtain the Master's degree in Informatics and Computer Engineering.
Abstract:
This paper proposes an FPGA-based architecture for onboard hyperspectral unmixing. The method, based on Vertex Component Analysis (VCA), has several advantages: it is unsupervised, fully automatic, and works without a dimensionality reduction (DR) pre-processing step. The architecture was designed for a low-cost Xilinx Zynq board with a Zynq-7020 SoC, whose FPGA programmable logic is based on the Artix-7 fabric, and was tested using real hyperspectral datasets. Experimental results indicate that the proposed implementation can achieve real-time processing while maintaining the method's accuracy, which indicates the potential of the proposed platform for implementing high-performance, low-cost embedded systems.
Abstract:
Hyperspectral imaging has become one of the main topics in remote sensing applications. Hyperspectral sensors acquire hundreds of spectral bands at different (almost contiguous) wavelength channels over the same area, generating large data volumes of several GB per flight. This high spectral resolution can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems in hyperspectral analysis is the presence of mixed pixels, which arise when the spatial resolution of the sensor is not able to separate spectrally distinct materials. Spectral unmixing is one of the most important tasks for hyperspectral data exploitation. However, unmixing algorithms can be computationally very expensive and power-hungry, which compromises their use in applications under on-board constraints. In recent years, graphics processing units (GPUs) have evolved into highly parallel and programmable systems. Several hyperspectral imaging algorithms have been shown to benefit from this hardware, taking advantage of the extremely high floating-point processing performance, compact size, huge memory bandwidth, and relatively low cost of these units, which make them appealing for onboard data processing. In this paper, we propose a parallel implementation of an augmented Lagrangian based method for unsupervised hyperspectral linear unmixing on GPUs using CUDA. The method, called simplex identification via split augmented Lagrangian (SISAL), aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral datasets in which the pure pixel assumption is violated. The efficient implementation of the SISAL method presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced memory accesses.
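For context, the linear mixing model underlying this work writes each observed pixel spectrum as x = Ma + n, where M holds the endmember signatures, a the abundance fractions, and n the noise. Once M is identified (e.g., by SISAL), abundances can be estimated per pixel; a minimal unconstrained least-squares sketch (the full problem also enforces non-negativity and sum-to-one constraints):

```python
import numpy as np

def estimate_abundances(X, M):
    """Least-squares abundances under the linear mixing model X ≈ M @ A.

    X : (n_bands, n_pixels) observed spectra
    M : (n_bands, n_endmembers) endmember signatures
    Returns (n_endmembers, n_pixels) abundance estimates (unconstrained).
    """
    A, *_ = np.linalg.lstsq(M, X, rcond=None)
    return A
```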
Abstract:
Endmember extraction (EE) is a fundamental and crucial task in hyperspectral unmixing. Among other methods, vertex component analysis (VCA) has become a very popular and useful tool for unmixing hyperspectral data. VCA is a geometry-based method that extracts endmember signatures from large hyperspectral datasets without any a priori knowledge about the constituent spectra. Many hyperspectral imagery applications require a response in real time or near real time. To meet this requirement, this paper proposes a parallel implementation of VCA developed for graphics processing units. The impact of the proposed parallel implementation of VCA on complexity and accuracy is examined using both simulated and real hyperspectral datasets.
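To illustrate the geometric idea behind VCA: endmembers are found one at a time by projecting the data onto a direction orthogonal to the subspace spanned by the endmembers already selected and taking the most extreme pixel. A simplified sketch that omits the SNR-dependent projection step of the full algorithm:

```python
import numpy as np

def vca_simplified(X, n_endmembers, seed=0):
    """Simplified VCA-style endmember extraction.

    X : (n_bands, n_pixels) hyperspectral data matrix
    Iteratively selects the pixel with the largest projection onto a
    direction orthogonal to the subspace of endmembers found so far.
    """
    rng = np.random.default_rng(seed)
    n_bands = X.shape[0]
    E = np.zeros((n_bands, n_endmembers))
    for i in range(n_endmembers):
        f = rng.standard_normal(n_bands)  # random direction
        if i > 0:
            # Remove the component of f lying in span(E[:, :i]).
            Q, _ = np.linalg.qr(E[:, :i])
            f -= Q @ (Q.T @ f)
        f /= np.linalg.norm(f)
        E[:, i] = X[:, np.argmax(np.abs(f @ X))]  # most extreme pixel
    return E
```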