986 results for Image resolution
Abstract:
During conventional x-ray coronary angiography, multiple projections of the coronary arteries are acquired to define coronary anatomy precisely. Due to time constraints, coronary magnetic resonance angiography (MRA) usually provides only one or two views of the major coronary vessels. A coronary MRA approach that allows reconstruction of arbitrary orientations from isotropic data might therefore be desirable. The purpose of the study was to develop a three-dimensional (3D) coronary MRA technique with isotropic image resolution in a relatively short scanning time, allowing reconstruction of arbitrary views of the coronary arteries without the constraints imposed by anisotropic voxel size. Eight healthy adult subjects were examined using a real-time navigator-gated and corrected free-breathing interleaved echo-planar (TFE-EPI) 3D-MRA sequence. Two 3D datasets were acquired for the left and right coronary systems in each subject, one with anisotropic (1.0 x 1.5 x 3.0 mm, 10 slices) and one with "near" isotropic (1.0 x 1.5 x 1.0 mm, 30 slices) image resolution. All other imaging parameters were kept constant. In all cases, the entire left main (LM) and extensive portions of the left anterior descending (LAD) and the right coronary artery (RCA) were visualized. Objective assessment of coronary vessel sharpness was similar (41% +/- 5% vs. 42% +/- 5%; P = NS) between in-plane and through-plane views with "isotropic" voxel size but differed (32% +/- 7% vs. 23% +/- 4%; P < 0.001) with anisotropic voxel size. In reconstructed views oriented in the through-plane direction, the vessel border was 86% more defined (P < 0.01) for isotropic than for anisotropic images. A smaller (30%; P < 0.001) improvement was seen for in-plane reconstructions. Vessel diameter measurements were view independent (2.81 +/- 0.45 mm vs. 2.66 +/- 0.52 mm; P = NS) for isotropic images but differed (2.71 +/- 0.51 mm vs. 3.30 +/- 0.38 mm; P < 0.001) between anisotropic views. Average scanning time was 2:31 +/- 0:57 minutes for anisotropic and 7:11 +/- 3:02 minutes for isotropic image resolution (P < 0.001). We present a new approach for "near" isotropic 3D coronary artery imaging that allows for reconstruction of arbitrary views of the coronary arteries. The good delineation of the coronary arteries in all views suggests that isotropic 3D coronary MRA might be a preferred technique for the assessment of coronary disease, although at the expense of prolonged scan times. Comparative studies with conventional x-ray angiography are needed to investigate the clinical utility of the isotropic strategy.
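To illustrate why isotropic voxels permit reconstruction of arbitrary views without resolution loss, the following is a minimal sketch of multiplanar reformation of a 3D volume, not the authors' reconstruction software. The volume size, plane origin, and direction vectors are illustrative assumptions.

```python
# Minimal sketch: sampling an arbitrarily oriented plane from a 3D volume with
# trilinear interpolation. With isotropic voxels any plane orientation yields the
# same effective resolution; with anisotropic voxels, oblique planes blur.
import numpy as np
from scipy.ndimage import map_coordinates

def oblique_slice(volume, origin, u, v, size=(128, 128), step=1.0):
    """Sample a 2D plane spanned by direction vectors u and v starting at origin."""
    u = np.asarray(u, float); u /= np.linalg.norm(u)
    v = np.asarray(v, float); v /= np.linalg.norm(v)
    rows = np.arange(size[0]) * step
    cols = np.arange(size[1]) * step
    rr, cc = np.meshgrid(rows, cols, indexing="ij")
    pts = (np.asarray(origin, float)[:, None, None]
           + u[:, None, None] * rr + v[:, None, None] * cc)   # (3, H, W) voxel coords
    return map_coordinates(volume, pts, order=1, mode="nearest")

vol = np.random.rand(30, 192, 192)          # stand-in for a 3D coronary MRA volume
view = oblique_slice(vol, origin=(15, 96, 96), u=(0, 1, 0), v=(0.5, 0, 0.87))
print(view.shape)
```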
Abstract:
The aim of this study was to investigate the influence of image resolution manipulation on the photogrammetric measurement of the rearfoot static angle. The study design was that of a reliability study. We evaluated 19 healthy young adults (11 females and 8 males). The photographs were taken at 1536 pixels in the greatest dimension, resized into four different resolutions (1200, 768, 600, 384 pixels), and analyzed by three equally trained examiners on a 96 pixels per inch (ppi) screen. An experienced physiotherapist marked the anatomic landmarks of the rearfoot static angle on two occasions within a 1-week interval, and three different examiners measured the angles on the digital pictures. The systematic error and the smallest detectable difference were calculated from the angle values between the image resolutions and times of evaluation. Different resolutions were compared by analysis of variance. Inter- and intra-examiner reliability was calculated by intra-class correlation coefficients (ICC). The rearfoot static angles obtained by the examiners in each resolution were not different (P > 0.05); however, the higher the image resolution, the better the inter-examiner reliability. The intra-examiner reliability (within a 1-week interval) was considered unacceptable for all image resolutions (ICC range: 0.08-0.52). A whole-body image of an adult with a minimum size of 768 pixels analyzed on a 96-ppi screen can provide very good inter-examiner reliability for photogrammetric measurements of the rearfoot static angle (ICC range: 0.85-0.92), although the intra-examiner reliability within each resolution was not acceptable. Therefore, this method is not a proper tool for follow-up evaluations of patients within a therapeutic protocol.
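A minimal sketch of the kind of measurement involved, not the study's protocol: the rearfoot angle is taken here as the angle between a leg line and a calcaneus line, each defined by two marked points, and the landmark coordinates are rescaled to the resolutions used in the study. The landmark positions themselves are illustrative assumptions.

```python
# Rounding landmark coordinates at lower image resolutions perturbs the measured angle.
import numpy as np

def angle_between(p1, p2, q1, q2):
    """Angle in degrees between line p1->p2 and line q1->q2."""
    a = np.asarray(p2, float) - np.asarray(p1, float)
    b = np.asarray(q2, float) - np.asarray(q1, float)
    cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical landmarks (x, y) marked on a 1536-pixel-high photograph.
leg = [(770, 200), (765, 620)]           # two points along the leg bisection
calc = [(768, 640), (760, 860)]          # two points along the calcaneus bisection

for target in (1536, 1200, 768, 600, 384):       # resolutions used in the study
    s = target / 1536.0                          # rescale and round landmark coordinates
    pts = [tuple(np.round(np.array(p) * s)) for p in leg + calc]
    ang = angle_between(pts[0], pts[1], pts[2], pts[3])
    print(f"{target:>4} px: rearfoot angle ~ {ang:.2f} deg")
```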
Abstract:
Aim: To determine the theoretical and clinical minimum image pixel resolution and maximum compression appropriate for anterior eye image storage. Methods: Clinical images of the bulbar conjunctiva, palpebral conjunctiva, and corneal staining were taken at the maximum resolution of Nikon:CoolPix990 (2048 × 1360 pixels), DVC:1312C (1280 × 811), and JAI:CV-S3200 (767 × 569) single-chip cameras and the JVC:KYF58 (767 × 569) three-chip camera. The images were stored in TIFF format, and further copies were created with reduced resolution or compression. The images were then ranked for clarity on a 15-inch monitor (resolution 1280 × 1024) by 20 optometrists and analysed by objective image analysis grading. A theoretical calculation of the resolution necessary to detect the smallest objects of clinical interest was also conducted. Results: The theoretical calculation suggested that the minimum resolution should be ≥579 horizontal pixels at 25× magnification. Image quality was perceived subjectively as being reduced when the pixel resolution was lower than 767 × 569 (p<0.005) or the image was compressed as a BMP or <50% quality JPEG (p<0.005). Objective image analysis techniques were less susceptible to changes in image quality, particularly when colour extraction techniques were used. Conclusion: It is appropriate to store anterior eye images at between 1280 × 811 and 767 × 569 pixel resolution and at up to 1:70 JPEG compression.
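A minimal sketch of the resolution/compression trade-off being assessed, not the study's pipeline: copies of an image are resized to the camera resolutions above and re-saved as JPEG at decreasing quality, reporting an approximate compression ratio. The synthetic input image and quality values are illustrative assumptions.

```python
# Downsampling and JPEG-compressing an image to inspect storage trade-offs.
from io import BytesIO
from PIL import Image

img = Image.new("RGB", (2048, 1360), "white")   # stand-in for a 2048 x 1360 capture

for size in [(2048, 1360), (1280, 811), (767, 569)]:
    resized = img.resize(size, Image.LANCZOS)
    for quality in (95, 75, 50, 25):
        buf = BytesIO()
        resized.save(buf, format="JPEG", quality=quality)
        ratio = (size[0] * size[1] * 3) / len(buf.getvalue())   # vs. uncompressed 24-bit
        print(f"{size[0]}x{size[1]} q={quality:>2}: ~{ratio:.0f}:1 compression")
```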
Abstract:
The aim of the present study is to determine the level of correlation between the three-dimensional (3D) characteristics of trabecular bone microarchitecture, as evaluated using microcomputed tomography (μCT) reconstruction, and the trabecular bone score (TBS), as evaluated using 2D projection images directly derived from the 3D μCT reconstruction (TBSμCT). Moreover, we evaluated the effects of image degradation (resolution and noise) and of the X-ray energy of the projection on these correlations. Thirty human cadaveric vertebrae were acquired on a microscanner at an isotropic resolution of 93 μm. The 3D microarchitecture parameters were obtained using MicroView (GE Healthcare, Wauwatosa, MI). The 2D projections of these 3D models were generated using the Beer-Lambert law at different X-ray energies. Degradation of image resolution was simulated (from 93 to 1488 μm). Relationships between 3D microarchitecture parameters and TBSμCT at different resolutions were evaluated using linear regression analysis. Significant correlations were observed between TBSμCT and 3D microarchitecture parameters, regardless of the resolution. Correlations were strongly to intermediately positive for connectivity density (0.711 ≤ r² ≤ 0.752) and trabecular number (0.584 ≤ r² ≤ 0.648), and negative for trabecular space (-0.407 ≤ r² ≤ -0.491), up to a pixel size of 1023 μm. In addition, TBSμCT values were strongly correlated with one another (0.77 ≤ r² ≤ 0.96). The study results show that the correlations between TBSμCT at 93 μm and 3D microarchitecture parameters are only weakly affected by the degradation of image resolution and the presence of noise.
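A minimal sketch of the projection and degradation steps described above, not the study's pipeline: a toy binary trabecular volume is projected with the Beer-Lambert law and the projection is block-averaged to simulate coarser pixel sizes. The volume, attenuation coefficient, and bone fraction are illustrative assumptions.

```python
# Beer-Lambert projection of a binary trabecular model plus simulated resolution loss.
import numpy as np

rng = np.random.default_rng(0)
bone = (rng.random((160, 160, 160)) < 0.15).astype(float)   # toy binary trabecular volume
voxel_mm, mu = 0.093, 0.5                                    # 93 um voxels, toy attenuation (1/mm)

# Beer-Lambert along the z axis: I = I0 * exp(-mu * path length through bone)
path_len = bone.sum(axis=2) * voxel_mm
projection = np.exp(-mu * path_len)

def degrade(img, factor):
    """Simulate a coarser detector by averaging factor x factor blocks."""
    h, w = (img.shape[0] // factor) * factor, (img.shape[1] // factor) * factor
    return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

for f in (1, 2, 4, 8, 16):          # roughly 93, 186, 372, 744, 1488 um pixel sizes
    low = degrade(projection, f)
    print(f"pixel ~{93 * f:>4} um -> projection {low.shape}")
```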
Abstract:
Super-resolution is an inverse problem that refers to the process of producing a high-resolution (HR) image from one or more low-resolution (LR) observations. It includes upsampling the image, thereby increasing the maximum spatial frequency, and removing degradations that arise during image capture, namely aliasing and blurring. The work presented in this thesis is based on learning-based single-image super-resolution. In learning-based super-resolution algorithms, a training set or database of available HR images is used to construct the HR image corresponding to an image captured with an LR camera. In the training set, images are stored as patches or as coefficients of feature representations such as the wavelet transform, DCT, etc. Single-frame image super-resolution can be used in applications where a database of HR images is available. The advantage of this method is that, by skilfully creating a database of suitable training images, one can improve the quality of the super-resolved image. A new super-resolution method based on the wavelet transform is developed; it performs better than conventional wavelet-transform-based methods and standard interpolation methods. Super-resolution techniques based on a skewed anisotropic transform, the directionlet transform, are developed to convert a low-resolution image of small size into a high-resolution image of large size. The super-resolution algorithm not only increases the size but also reduces the degradations that occur during image capture. This method outperforms the standard interpolation methods and the wavelet methods, both visually and in terms of SNR values. Artifacts such as aliasing and ringing effects are also eliminated by this method. The super-resolution methods are implemented using both critically sampled and oversampled directionlets. The conventional directionlet transform is computationally complex; hence, a lifting scheme is used for the implementation of directionlets. The new single-image super-resolution method based on the lifting scheme reduces computational complexity and thereby reduces computation time. The quality of the super-resolved image depends on the type of wavelet basis used, and a study is conducted to find the effect of different wavelets on the single-image super-resolution method. Finally, this new method, implemented on grey images, is extended to colour images and noisy images.
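For context, a minimal sketch of a generic wavelet-domain upscaling baseline, not the thesis's learning-based or directionlet methods: the LR image is treated as the approximation subband of a one-level DWT, the missing detail subbands are set to zero, and the transform is inverted. The wavelet choice and image size are illustrative assumptions.

```python
# Wavelet zero-padding interpolation: a simple baseline for single-image upscaling.
import numpy as np
import pywt

def wavelet_upscale(lr, wavelet="db4"):
    """Upscale a 2D image by roughly 2x via inverse DWT with zeroed detail subbands."""
    lr = np.asarray(lr, float)
    zeros = np.zeros_like(lr)
    # Coefficient layout: (approximation, (horizontal, vertical, diagonal details)).
    # The factor 2 compensates the DWT scaling so mean brightness is preserved.
    return pywt.idwt2((lr * 2.0, (zeros, zeros, zeros)), wavelet)

lr = np.random.rand(64, 64)            # stand-in low-resolution image
hr = wavelet_upscale(lr)
print(lr.shape, "->", hr.shape)        # roughly doubled in each dimension
```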
Abstract:
Due to SNR constraints, current "bright-blood" 3D coronary MRA approaches still suffer from limited spatial resolution when compared to conventional x-ray coronary angiography. Recent 2D fast spin-echo black-blood techniques maximize signal for coronary MRA at no loss in image spatial resolution. This suggests that the extension of black-blood coronary MRA with a 3D imaging technique would allow for a further signal increase, which may be traded for an improved spatial resolution. Therefore, a dual-inversion 3D fast spin-echo imaging sequence and real-time navigator technology were combined for high-resolution free-breathing black-blood coronary MRA. In-plane image resolution below 400 microm was obtained. Magn Reson Med 45:206-211, 2001.
Abstract:
The standard data fusion methods may not be satisfactory for merging a high-resolution panchromatic image and a low-resolution multispectral image because they can distort the spectral characteristics of the multispectral data. The authors developed a technique, based on multiresolution wavelet decomposition, for the merging and data fusion of such images. The method presented consists of adding the wavelet coefficients of the high-resolution image to the multispectral (low-resolution) data. They studied several possibilities and concluded that the method that produces the best results consists in adding the high-order coefficients of the wavelet transform of the panchromatic image to the intensity component (defined as L = (R+G+B)/3) of the multispectral image. The method is thus an improvement on standard intensity-hue-saturation (IHS or LHS) mergers. They used the "à trous" algorithm, which allows the use of a dyadic wavelet to merge nondyadic data in a simple and efficient scheme. They used the method to merge SPOT and Landsat TM images. The technique presented is clearly better than the IHS and LHS mergers in preserving both spectral and spatial information.
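A minimal sketch in the spirit of the additive approach described above, not the authors' exact implementation: the high-frequency detail of the panchromatic band, here extracted with a simple undecimated smoothing filter rather than the "à trous" wavelet planes, is added to the intensity L = (R+G+B)/3 of the upsampled multispectral image. The scale factor and window size are illustrative assumptions.

```python
# Additive intensity pansharpening with a crude stand-in for the wavelet detail planes.
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def pansharpen(pan, ms_rgb, scale=4, smooth=5):
    """pan: (H, W) high-resolution band; ms_rgb: (h, w, 3) low-resolution bands, H = scale * h."""
    ms_up = np.stack([zoom(ms_rgb[..., b], scale, order=1) for b in range(3)], axis=-1)
    detail = pan - uniform_filter(pan, size=smooth)       # high-frequency plane of pan
    # Adding the detail plane to L = (R + G + B) / 3 and converting back in the linear
    # LHS model is equivalent to adding the same plane to each band.
    return ms_up + detail[..., None]

pan = np.random.rand(256, 256)          # stand-in panchromatic image
ms = np.random.rand(64, 64, 3)          # stand-in multispectral (R, G, B) image
print(pansharpen(pan, ms).shape)        # (256, 256, 3)
```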
Abstract:
Spatial resolution is a key parameter of all remote sensing satellites and platforms. The nominal spatial resolution of a satellite is a well-known characteristic because it is directly related to the area on the ground represented by a pixel in the detector. Nevertheless, in practice, the actual resolution of a specific image obtained from a satellite is difficult to know precisely because it depends on many other factors, such as atmospheric conditions. However, if one has two or more images of the same region, it is possible to compare their relative resolutions. In this paper, a wavelet-decomposition-based method for determining the relative resolution between two remotely sensed images of the same area is proposed. The method can be applied to panchromatic, multispectral, and mixed (one panchromatic and one multispectral) images. As an example, the method was applied to compute the relative resolution between SPOT-3, Landsat-5, and Landsat-7 panchromatic and multispectral images taken under similar as well as very different conditions. Furthermore, if the true absolute resolution of one image of the pair is known, the resolution of the other can be computed. Thus, in the last part of this paper, a spatial calibrator designed and constructed to help compute the absolute resolution of a single remotely sensed image is described, and an example of its use is presented.
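A minimal sketch of one plausible wavelet-based comparison, not the paper's exact estimator: the ratio of first-level detail-subband energy between two co-registered images of the same scene is used as a rough proxy for their relative resolution. The wavelet, image sizes, and blur amount are illustrative assumptions.

```python
# Comparing high-frequency wavelet content of two images of the same area.
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter

def detail_energy(img, wavelet="haar"):
    """Energy of the first-level DWT detail subbands of a 2D image."""
    _, (ch, cv, cd) = pywt.dwt2(np.asarray(img, float), wavelet)
    return float(np.sum(ch**2) + np.sum(cv**2) + np.sum(cd**2))

# Stand-ins: the "lower-resolution" image is a blurred copy of the sharper one.
sharp = np.random.rand(256, 256)
blurred = gaussian_filter(sharp, sigma=2.0)

ratio = detail_energy(sharp) / detail_energy(blurred)
print(f"detail-energy ratio (sharp vs. blurred): {ratio:.1f}")
```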
Abstract:
This paper describes the development and applications of a super-resolution method known as Super-Resolution Variable-Pixel Linear Reconstruction. The algorithm combines different lower-resolution images in order to obtain, as a result, a higher-resolution image. We show that it can make significant spatial resolution improvements to satellite images of the Earth's surface, allowing recognition of objects with sizes approaching the limiting spatial resolution of the lower-resolution images. The algorithm is based on the Variable-Pixel Linear Reconstruction algorithm developed by Fruchter and Hook, a well-known method in astronomy that had never been used for Earth remote sensing purposes. The algorithm preserves photometry, can weight input images according to the statistical significance of each pixel, and removes the effect of geometric distortion on both image shape and photometry. In this paper, we describe its development for remote sensing purposes, show that the algorithm works well with images as different from astronomical images as remote sensing ones, and present applications to: 1) a set of simulated multispectral images obtained from a real QuickBird image; and 2) a set of real multispectral Landsat Enhanced Thematic Mapper Plus (ETM+) images. These examples show that the algorithm provides a substantial improvement in limiting spatial resolution for both simulated and real data sets without significantly altering the multispectral content of the input low-resolution images, without amplifying the noise, and with very few artifacts.
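A minimal sketch of a drizzle-like combination, a simplified shift-and-add variant rather than the authors' Variable-Pixel Linear Reconstruction implementation: several low-resolution frames with known sub-pixel shifts are accumulated onto a finer grid with per-pixel weights. The frames, shifts, and upscaling factor are illustrative assumptions.

```python
# Simplified drizzle-style combination of shifted low-resolution frames.
import numpy as np

def drizzle_lite(frames, shifts, up=2):
    """frames: list of (h, w) arrays; shifts: list of (dy, dx) in LR pixels."""
    h, w = frames[0].shape
    acc = np.zeros((h * up, w * up))
    wgt = np.zeros_like(acc)
    yy, xx = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, shifts):
        # Position of each LR pixel centre on the fine grid (nearest-cell drop).
        oy = np.clip(np.round((yy + dy) * up).astype(int), 0, h * up - 1)
        ox = np.clip(np.round((xx + dx) * up).astype(int), 0, w * up - 1)
        np.add.at(acc, (oy, ox), frame)
        np.add.at(wgt, (oy, ox), 1.0)
    return np.where(wgt > 0, acc / np.maximum(wgt, 1e-9), 0.0)

rng = np.random.default_rng(1)
truth = rng.random((128, 128))
# Stand-in LR frames: 2x2 block means of the truth at four half-pixel offsets.
frames, shifts = [], [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
for dy, dx in shifts:
    s = np.roll(truth, (-int(dy * 2), -int(dx * 2)), axis=(0, 1))
    frames.append(s.reshape(64, 2, 64, 2).mean(axis=(1, 3)))
print(drizzle_lite(frames, shifts, up=2).shape)   # (128, 128)
```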
Abstract:
This paper presents the use of a multiprocessor architecture to improve the performance of tomographic image reconstruction. Image reconstruction in computed tomography (CT) is an intensive task for single-processor systems. We investigate the suitability of filtered image reconstruction on DSPs organized for parallel processing and compare it with an implementation based on the Message Passing Interface (MPI) library. The experimental results show that the speedups observed on both platforms increased with image resolution. In addition, the execution-time to communication-time ratios (Rt/Rc) as a function of sample size showed little variation for the DSP platform in comparison with the MPI platform, which indicates its better performance for parallel image reconstruction.
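A minimal sketch of the angle-parallel decomposition of filtered back-projection, not the paper's DSP or MPI code: projection angles are partitioned across worker processes and the partial reconstructions are summed. The grid size, angle count, sinogram, and number of workers are illustrative assumptions.

```python
# Filtered back-projection with projection angles split across processes.
import numpy as np
from multiprocessing import Pool

N = 128                                   # reconstruction grid (N x N)
ANGLES = np.linspace(0.0, np.pi, 180, endpoint=False)

def backproject_subset(args):
    """Back-project a subset of ramp-filtered projections onto the image grid."""
    sino_subset, angle_subset = args
    xs = np.arange(N) - N / 2.0
    X, Y = np.meshgrid(xs, xs)
    ramp = np.abs(np.fft.fftfreq(N))      # simple ramp filter
    img = np.zeros((N, N))
    for proj, theta in zip(sino_subset, angle_subset):
        filtered = np.real(np.fft.ifft(np.fft.fft(proj) * ramp))
        t = X * np.cos(theta) + Y * np.sin(theta) + N / 2.0
        idx = np.clip(t.astype(int), 0, N - 1)    # nearest-neighbour detector bin
        img += filtered[idx]
    return img

if __name__ == "__main__":
    sinogram = np.random.rand(len(ANGLES), N)                     # stand-in sinogram
    chunks = [(sinogram[i::4], ANGLES[i::4]) for i in range(4)]   # 4 worker processes
    with Pool(4) as pool:
        partials = pool.map(backproject_subset, chunks)
    recon = sum(partials) * np.pi / len(ANGLES)
    print(recon.shape)
```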
Abstract:
The purpose of this study was to compare the inter-observer agreement of Stratus™ OCT versus Spectralis™ OCT image grading in patients with neovascular age-related macular degeneration (AMD). Thirty eyes with neovascular AMD were examined with Stratus™ OCT and Spectralis™ OCT. Four different scan protocols were used for imaging. Three observers graded the images for the presence of various pathologies. Inter-observer agreement between the OCT models was assessed by calculating intra-class correlation coefficients (ICC). In Stratus™ OCT, the highest inter-observer agreement was found for subretinal fluid (ICC: 0.79), and in Spectralis™ OCT for intraretinal cysts (IRC) (ICC: 0.93). Spectralis™ OCT showed superior inter-observer agreement for IRC and epiretinal membranes (ERM) (ICC(Stratus™): 0.61 for IRC, 0.56 for ERM; ICC(Spectralis™): 0.93 for IRC, 0.84 for ERM). The increased image resolution of Spectralis™ OCT improved the inter-observer agreement for grading intraretinal cysts and epiretinal membranes, but not for other retinal changes.
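A minimal sketch of a two-way random-effects, absolute-agreement ICC, i.e. ICC(2,1), which is a common choice for inter-observer agreement; the abstract does not state which ICC form was used, so this is an assumption, and the ratings below are illustrative.

```python
# ICC(2,1) computed from the two-way ANOVA mean squares (Shrout & Fleiss).
import numpy as np

def icc_2_1(x):
    """x: (n subjects, k raters) matrix of ratings. Returns ICC(2,1)."""
    x = np.asarray(x, float)
    n, k = x.shape
    grand = x.mean()
    ssb = k * np.sum((x.mean(axis=1) - grand) ** 2)     # between-subjects sum of squares
    ssc = n * np.sum((x.mean(axis=0) - grand) ** 2)     # between-raters sum of squares
    sst = np.sum((x - grand) ** 2)
    msb, msc = ssb / (n - 1), ssc / (k - 1)
    mse = (sst - ssb - ssc) / ((n - 1) * (k - 1))       # residual mean square
    return (msb - mse) / (msb + (k - 1) * mse + k * (msc - mse) / n)

# Illustrative presence/absence gradings (1/0) of one pathology by 3 observers in 10 eyes.
ratings = np.array([[1, 1, 1], [0, 0, 0], [1, 1, 0], [0, 0, 0], [1, 1, 1],
                    [0, 1, 0], [1, 1, 1], [0, 0, 0], [1, 1, 1], [1, 0, 1]])
print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")
```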
Abstract:
Background: Grayscale images make up the bulk of data in biomedical image analysis, and hence the main focus of many image processing tasks lies in the processing of these monochrome images. With ever-improving acquisition devices, spatial and temporal image resolution increases and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high-level programming languages or visual programming. These frameworks are also accessible to researchers who have little or no background in software development, because they take care of otherwise complex tasks. Specifically, the management of working memory is handled automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation-class computers. One alternative to using these high-level processing tools is the development of new algorithms in a language like C++, which gives the developer full control over how memory is handled, but the resulting workflow for prototyping new algorithms is rather time-intensive and also not appropriate for a researcher with little or no knowledge of software development. Another alternative is to use command line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation through shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only few tools exist that provide this kind of processing interface; they are usually quite task-specific and do not offer a clear path from a prototype shell script to a new command line tool.
Results: The proposed framework, MIA, provides a combination of command line tools, plug-ins, and libraries that make it possible to run image processing tasks interactively in a command shell and to prototype using the corresponding shell scripting language. Since the hard disk serves as temporary storage, memory management is usually a non-issue in the prototyping phase. By using string-based descriptions for filters, optimizers, and the like, the transition from shell scripts to full-fledged programs implemented in C++ is also made easy. In addition, its design based on atomic plug-ins and single-task command line tools makes it easy to extend MIA, usually without the need to touch or recompile existing code.
Conclusion: In this article, we describe the general design of MIA, a general-purpose framework for grayscale image processing. We demonstrated the applicability of the software with example applications from three different research scenarios, namely motion compensation in myocardial perfusion imaging, the processing of high-resolution image data that arises in virtual anthropology, and retrospective analysis of treatment outcome in orthognathic surgery. With MIA, prototyping algorithms by using shell scripts that combine small, single-task command line tools is a viable alternative to the use of high-level languages, an approach that is especially useful when large data sets need to be processed.
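A minimal sketch of the prototyping pattern described above, not of MIA itself or its tools: single-task commands are chained through intermediate files on disk, so the working memory of the whole pipeline stays bounded by a single step. The "tools" here are stand-in Python one-liners; in practice each would be a dedicated executable such as a filter or registration command.

```python
# Chaining single-task commands with intermediate results stored on disk.
import subprocess
import sys
from pathlib import Path

work = Path("work")
work.mkdir(exist_ok=True)
(work / "step0.txt").write_text("42\n")           # stand-in input data

# Stand-in "tool": copies its input file to its output file (placeholder for a real filter).
tool = "import sys; d = open(sys.argv[1]).read(); open(sys.argv[2], 'w').write(d)"
steps = [
    [sys.executable, "-c", tool, str(work / "step0.txt"), str(work / "step1.txt")],
    [sys.executable, "-c", tool, str(work / "step1.txt"), str(work / "step2.txt")],
]
for cmd in steps:
    # Each step reads the previous file and writes its result back to disk.
    subprocess.run(cmd, check=True)
print((work / "step2.txt").read_text().strip())
```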
Abstract:
Government agencies responsible for riparian environments are assessing the utility of remote sensing for mapping and monitoring environmental health indicators. The objective of this work was to evaluate IKONOS and Landsat-7 ETM+ imagery for mapping riparian vegetation health indicators in tropical savannas for a section of Keelbottom Creek, Queensland, Australia. Vegetation indices and image texture from IKONOS data were used for estimating percentage canopy cover (r² = 0.86). Pan-sharpened IKONOS data were used to map riparian species composition (overall accuracy = 55%) and riparian zone width (accurate to within 4 m). Tree crowns could not be automatically delineated due to the lack of contrast between canopies and adjacent grass cover. The ETM+ imagery was suited to mapping the extent of riparian zones. The results presented demonstrate the capabilities of high and moderate spatial resolution imagery for mapping properties of riparian zones, which may be used as riparian environmental health indicators.
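A minimal sketch of the kind of predictors mentioned above, not the study's canopy-cover model: NDVI as a vegetation index and local standard deviation as a simple texture measure. The band arrays and window size are illustrative assumptions.

```python
# NDVI and a local-variance texture measure computed from red and near-infrared bands.
import numpy as np
from scipy.ndimage import uniform_filter

def ndvi(nir, red):
    """Normalised difference vegetation index."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + 1e-6)

def local_std(band, size=7):
    """Texture as the local standard deviation in a size x size window."""
    band = np.asarray(band, float)
    mean = uniform_filter(band, size)
    mean_sq = uniform_filter(band ** 2, size)
    return np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

red = np.random.rand(200, 200)          # stand-in IKONOS red band
nir = np.random.rand(200, 200)          # stand-in IKONOS near-infrared band
print(ndvi(nir, red).shape, local_std(nir).shape)
```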