992 results for image fusion


Relevance: 100.00%

Abstract:

Multi-view microscopy techniques such as Light-Sheet Fluorescence Microscopy (LSFM) are powerful tools for 3D + time studies of live embryos in developmental biology. The sample is imaged from several points of view, acquiring a set of 3D views that are then combined, or fused, to overcome their individual limitations. View fusion remains an open problem despite recent contributions in the field. We developed a wavelet-based multi-view fusion method that, owing to the properties of the wavelet decomposition, is able to combine the complementary directional information from all available views into a single volume. Our method is demonstrated on LSFM acquisitions from live sea urchin and zebrafish embryos. The fusion results show improved overall contrast and detail when compared with any of the acquired volumes. The proposed method does not require knowledge of the system's point spread function (PSF) and performs better than other existing PSF-independent fusion methods.
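
A minimal sketch of wavelet-coefficient fusion for two co-registered volumes, assuming PyWavelets (pywt) and NumPy; the per-sub-band maximum-magnitude rule below is a common illustrative choice, not the directional selection rule of the paper.

```python
import numpy as np
import pywt

def fuse_views(view_a, view_b, wavelet="db2", level=2):
    """Fuse two co-registered 3D views by keeping, in every wavelet sub-band,
    the coefficient with the larger absolute value."""
    coeffs_a = pywt.wavedecn(view_a, wavelet, level=level)
    coeffs_b = pywt.wavedecn(view_b, wavelet, level=level)

    # Approximation band first, then one dict of detail sub-bands per level.
    fused = [np.where(np.abs(coeffs_a[0]) >= np.abs(coeffs_b[0]),
                      coeffs_a[0], coeffs_b[0])]
    for det_a, det_b in zip(coeffs_a[1:], coeffs_b[1:]):
        fused.append({k: np.where(np.abs(det_a[k]) >= np.abs(det_b[k]),
                                  det_a[k], det_b[k]) for k in det_a})
    return pywt.waverecn(fused, wavelet)
```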

Relevance: 80.00%

Abstract:

Aims: To develop clinical protocols for acquiring PET images, performing CT-PET registration and tumour volume definition based on the PET image data, for radiotherapy for lung cancer patients, and then to test these protocols with respect to levels of accuracy and reproducibility. Method: A phantom-based quality assurance study of the processes associated with using registered CT and PET scans for tumour volume definition was conducted to: (1) investigate image acquisition and manipulation techniques for registering and contouring CT and PET images in a radiotherapy treatment planning system, and (2) determine technology-based errors in the registration and contouring processes. The outcomes of the phantom-based quality assurance study were used to determine clinical protocols. Protocols were developed for (1) acquiring patient PET image data for incorporation into the 3DCRT process, particularly for ensuring that the patient is positioned in their treatment position; (2) CT-PET image registration techniques; and (3) GTV definition using the PET image data. The developed clinical protocols were tested using retrospective clinical trials to assess levels of inter-user variability which may be attributed to the use of these protocols. A Siemens Somatom Open Sensation 20-slice CT scanner and a Philips Allegro stand-alone PET scanner were used to acquire the images for this research. The Philips Pinnacle3 treatment planning system was used to perform the image registration and contouring of the CT and PET images. Results: Both the attenuation-corrected and transmission images obtained from standard whole-body PET staging clinical scanning protocols were acquired and imported into the treatment planning system for the phantom-based quality assurance study. Protocols for manipulating the PET images in the treatment planning system, particularly for quantifying uptake in volumes of interest and setting window levels for accurate geometric visualisation, were determined. The automatic registration algorithms were found to have sub-voxel levels of accuracy, with transmission scan-based CT-PET registration more accurate than emission scan-based registration of the phantom images. Respiration-induced image artifacts were not found to influence registration accuracy, while inadequate pre-registration overlap of the CT and PET images was found to result in large registration errors. A threshold value based on a percentage of the maximum uptake within a volume of interest was found to accurately contour the different features of the phantom despite the lower spatial resolution of the PET images. Appropriate selection of the threshold value is dependent on target-to-background ratios and the presence of respiratory motion. The results from the phantom-based study were used to design, implement and test clinical CT-PET fusion protocols. The patient PET image acquisition protocols enabled patients to be successfully identified and positioned in their radiotherapy treatment position during the acquisition of their whole-body PET staging scan. While automatic registration techniques were found to reduce inter-user variation compared to manual techniques, there was no significant difference in the registration outcomes for transmission or emission scan-based registration of the patient images using the protocol. Tumour volumes contoured on registered patient CT-PET images, using the tested threshold values and viewing windows determined from the phantom study, demonstrated less inter-user variation for the primary tumour volume contours than those contoured using only the patient's planning CT scans. Conclusions: The developed clinical protocols allow a patient's whole-body PET staging scan to be incorporated, manipulated and quantified in the treatment planning process to improve the accuracy of gross tumour volume localisation in 3D conformal radiotherapy for lung cancer. Image registration protocols which account for potential software-based errors, combined with adequate user training, are recommended to increase the accuracy and reproducibility of registration outcomes. A semi-automated adaptive threshold contouring technique incorporating a PET windowing protocol accurately defines the geometric edge of a tumour volume using PET image data from a stand-alone PET scanner, including 4D target volumes.
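
A minimal sketch of the percentage-of-maximum threshold contouring described above, assuming a NumPy uptake volume and a boolean volume-of-interest mask; the 40% fraction is illustrative, not the protocol's validated value, which depends on target-to-background ratio and motion.

```python
import numpy as np

def threshold_contour(uptake, voi_mask, fraction=0.40):
    """Return the voxels inside the VOI whose uptake exceeds a fixed
    fraction of the maximum uptake found within that VOI."""
    peak = uptake[voi_mask].max()
    return voi_mask & (uptake >= fraction * peak)
```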

Relevance: 80.00%

Abstract:

Over the past few decades, work on infrared sensor applications has progressed considerably worldwide. A difficulty remains, however: objects are not always clear enough, or cannot always be easily distinguished, in the image obtained of the observed scene. Infrared image enhancement has played an important role in the development of infrared computer vision, image processing, non-destructive testing and related technologies. This thesis addresses infrared image enhancement from two angles: the processing of a single infrared image in the hybrid space-frequency domain, and the fusion of infrared and visible images using the non-subsampled contourlet transform (NSCT). Image fusion can be seen as a continuation of the single-infrared-image enhancement model, in that it combines infrared and visible images into a single image that represents and enhances all the useful information and characteristics of the source images, since a single image cannot contain all the relevant or available information because of the restrictions of any single imaging sensor. We review the development of infrared image enhancement techniques, then focus on single-infrared-image enhancement and propose a hybrid-domain enhancement scheme with an improved fuzzy threshold evaluation method, which yields higher image quality and improves human visual perception. The infrared-visible fusion techniques are built on an accurate registration of the source images acquired by the different sensors. The SURF-RANSAC algorithm is applied for registration throughout this work, which leads to very accurately registered images and increased benefits for the fusion processing. For infrared-visible image fusion, a series of advanced and efficient approaches is proposed. A standard multi-channel NSCT-based fusion method is presented as a reference for the subsequent proposed fusion approaches. A joint fusion approach involving the Adaptive-Gaussian NSCT and the wavelet transform (WT) is proposed, which leads to fusion results better than those obtained with general non-adaptive methods. An NSCT-based fusion approach employing compressed sensing (CS) and total variation (TV) to sample coefficients sparsely and to reconstruct the fused coefficients accurately is proposed; it obtains much better fusion results by pre-enhancing the infrared image and reducing the redundant information in the fusion coefficients. Finally, an NSCT-based fusion procedure using a fast iterative-shrinking compressed sensing (FISCS) technique is proposed to compress the decomposed coefficients and reconstruct the fused coefficients within the fusion process, leading to better results obtained more quickly and efficiently.
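
A minimal sketch of SURF + RANSAC registration of a visible image onto an infrared image, assuming opencv-contrib-python (SURF lives in cv2.xfeatures2d and may be disabled in some builds); the homography model and parameter values are illustrative, not the thesis implementation.

```python
import cv2
import numpy as np

def register_visible_to_ir(ir_gray, vis_gray):
    """Estimate a homography from SURF matches filtered by RANSAC and warp
    the visible image into the infrared image's frame."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_ir, des_ir = surf.detectAndCompute(ir_gray, None)
    kp_vis, des_vis = surf.detectAndCompute(vis_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(des_vis, des_ir)
    src = np.float32([kp_vis[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_ir[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)
    return cv2.warpPerspective(vis_gray, H, (ir_gray.shape[1], ir_gray.shape[0]))
```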

Relevance: 80.00%

Abstract:

It is not uncommon in many image acquisition systems to face a trade-off between obtaining high-resolution images at very low frame rates and acquiring a burst of low-resolution images at higher frame rates. This paper introduces a novel image fusion framework that produces a high-resolution video by transferring motion analysed in a low-resolution video onto a single high-resolution image. Many application domains, such as remote sensing, low-radiation medical imaging and battlefield automation, will benefit from this fusion framework. The results show that a high-resolution 30 frames-per-second video can be produced with a 95% cost reduction while maintaining 94% structural similarity.
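
A minimal sketch of the underlying idea of transferring motion analysed in a low-resolution video onto a single high-resolution still, assuming OpenCV dense optical flow and grayscale frames; this is an illustrative reconstruction under stated assumptions, not the paper's fusion framework.

```python
import cv2
import numpy as np

def synthesize_hr_frame(hr_still, lr_ref, lr_frame):
    """Warp the high-res still (aligned with lr_ref) so it follows the motion
    observed between the low-res reference frame and the current low-res frame."""
    scale = hr_still.shape[0] / lr_ref.shape[0]
    # Flow from the current frame back to the reference frame (backward warp).
    flow = cv2.calcOpticalFlowFarneback(lr_frame, lr_ref, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = hr_still.shape[:2]
    flow_hr = cv2.resize(flow, (w, h)) * scale
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow_hr[..., 0]).astype(np.float32)
    map_y = (grid_y + flow_hr[..., 1]).astype(np.float32)
    return cv2.remap(hr_still, map_x, map_y, cv2.INTER_LINEAR)
```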

Relevance: 70.00%

Abstract:

Image fusion is a formal framework comprising the means and tools for combining multisensor, multitemporal, and multiresolution data. Multisource data vary in spectral, spatial and temporal resolution, necessitating advanced analytical or numerical techniques for enhanced interpretation capabilities. This paper reviews seven pixel-based image fusion techniques: intensity-hue-saturation, Brovey, high-pass filter (HPF), high-pass modulation (HPM), principal component analysis, Fourier transform and correspondence analysis. Validation of these techniques on IKONOS data (panchromatic band at 1 m spatial resolution and four multispectral bands at 4 m spatial resolution) reveals that the HPF and HPM methods synthesise the images closest to those the corresponding multispectral sensors would observe at the high-resolution level.
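
A minimal sketch of the high-pass filter (HPF) and high-pass modulation (HPM) rules named above, assuming NumPy/SciPy and an MS band already resampled to the PAN grid; the 5 x 5 box filter is an illustrative choice of low-pass kernel.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def hpf_fuse(ms_band, pan, size=5):
    """HPF: add the PAN high-frequency residual to the resampled MS band."""
    return ms_band + (pan - uniform_filter(pan, size))

def hpm_fuse(ms_band, pan, size=5, eps=1e-6):
    """HPM: modulate the resampled MS band by PAN over its low-pass version."""
    return ms_band * pan / (uniform_filter(pan, size) + eps)
```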

Relevance: 70.00%

Abstract:

Urban population is growing at around 2.3 per cent per annum in India. This is driving urbanisation and often fuelling dispersed development on the outskirts of urban and village centres, with impacts such as loss of agricultural land, open space, and ecologically sensitive habitats. This type of upsurge is prevalent and persistent in most places and is often referred to as sprawl. The direct implications of such urban sprawl are changes in the land use and land cover of the region and a lack of basic amenities, since planners are unable to visualise this type of growth pattern. This growth is normally left out of all government surveys (even the national population census), as it cannot be grouped under either an urban or a rural centre. Investigating the patterns of growth is crucial from a regional planning point of view in order to provide basic amenities in the region. The growth patterns of urban sprawl can be analysed and understood given the availability of temporal multi-sensor, multi-resolution spatial data. In order to exploit both the spectral and spatial resolutions, image fusion techniques are required. These integrate a lower spatial resolution multispectral (MSS) image (for example, IKONOS MSS bands at 4 m spatial resolution) with a higher spatial resolution panchromatic (PAN) image (the IKONOS PAN band at 1 m spatial resolution) using a simple spectral-preservation fusion technique: the Smoothing Filter-based Intensity Modulation (SFIM). Spatial details are modulated onto a co-registered lower-resolution MSS image without altering its spectral properties and contrast, by using the ratio between a higher-resolution image and its low-pass-filtered (smoothing-filter) image. Visual evaluation and statistical analysis confirm that SFIM is a superior fusion technique for improving the spatial detail of MSS images while preserving their spectral properties.
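
A minimal sketch of SFIM for the IKONOS case described above (4 m MS, 1 m PAN), assuming NumPy/SciPy; the bilinear resampling and the box smoothing window matching the 4:1 resolution ratio are illustrative choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def sfim(ms_band_4m, pan_1m, ratio=4, eps=1e-6):
    """Modulate spatial detail onto the MS band via the ratio of the PAN image
    to its low-pass (smoothing-filter) version, preserving MS spectral properties."""
    ms_up = zoom(ms_band_4m, ratio, order=1)       # resample MS to the PAN grid
    pan_low = uniform_filter(pan_1m, size=ratio)   # smoothed PAN ~ MS resolution
    return ms_up * pan_1m / (pan_low + eps)
```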

Relevance: 70.00%

Abstract:

Pixel-based image fusion entails combining the geometric detail of a high-resolution panchromatic (PAN) image and the spectral information of a low-resolution multispectral (MS) image to produce images with the highest spatial content while preserving the spectral information. This work reviews and implements six fusion techniques on IKONOS imagery having a 1 m PAN channel and 4 m MS channels: À Trous algorithm-based wavelet transform (ATW), Multiresolution Analysis-based Intensity Modulation, Gram-Schmidt fusion, CN Spectral, Luminance-Chrominance and high-pass fusion (HPF). Comparative performance analysis of the techniques by various methods reveals that ATW, followed by HPF, performs best among all the techniques.
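
A minimal sketch of À Trous wavelet (ATW) fusion, assuming NumPy/SciPy, an MS band already resampled to the PAN grid, the usual B3-spline kernel and two decomposition levels; all of these are assumptions, not the settings used in the paper.

```python
import numpy as np
from scipy.ndimage import convolve

_B3 = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0

def _atrous_smooth(img, level):
    """Separable B3-spline smoothing with 2**level - 1 zeros ('holes') inserted."""
    kernel = np.zeros(4 * 2 ** level + 1)
    kernel[::2 ** level] = _B3
    tmp = convolve(img, kernel[None, :], mode="reflect")
    return convolve(tmp, kernel[:, None], mode="reflect")

def atw_fuse(ms_band, pan, levels=2):
    """Inject the PAN detail (wavelet) planes into the resampled MS band."""
    fused, previous = ms_band.astype(float), pan.astype(float)
    for level in range(levels):
        smoothed = _atrous_smooth(previous, level)
        fused += previous - smoothed               # add this scale's detail plane
        previous = smoothed
    return fused
```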

Relevance: 70.00%

Abstract:

Mapping novel terrain from sparse, complex data often requires the resolution of conflicting information from sensors working at different times, locations, and scales, and from experts with different goals and situations. Information fusion methods help resolve inconsistencies in order to distinguish correct from incorrect answers, as when evidence variously suggests that an object's class is car, truck, or airplane. The methods developed here consider a complementary problem, supposing that information from sensors and experts is reliable though inconsistent, as when evidence suggests that an object's class is car, vehicle, or man-made. Underlying relationships among objects are assumed to be unknown to the automated system or the human user. The ARTMAP information fusion system uses distributed code representations that exploit the neural network's capacity for one-to-many learning in order to produce self-organizing expert systems that discover hierarchical knowledge structures. The system infers multi-level relationships among groups of output classes without any supervised labeling of these relationships. The procedure is illustrated with two image examples.

Relevance: 70.00%

Abstract:

Infrared polarization and intensity imagery provide complementary and discriminative information for image understanding and interpretation. In this paper, a novel fusion method is proposed that effectively merges this information using various combination rules. It makes use of both the low-frequency and high-frequency image components from the support value transform (SVT), and applies fuzzy logic in the combination process. The images to be fused (both infrared polarization and intensity images) are first decomposed into low-frequency component images and support value image sequences by the SVT. The low-frequency component images are then combined using a fuzzy combination rule blending three sub-combination methods: (1) region-feature maximum, (2) region-feature weighted average, and (3) pixel-value maximum; the support value image sequences are merged using a fuzzy combination rule fusing two sub-combination methods: (1) pixel-energy maximum and (2) region-feature weighting. Using two newly defined features as variables, namely the low-frequency difference feature for the low-frequency component images and the support-value difference feature for the support value image sequences, trapezoidal membership functions are proposed to tune the fuzzy fusion process. Finally, the fused image is obtained by inverse SVT operations. Experimental results from both visual inspection and quantitative evaluation indicate the superiority of the proposed method over its counterparts in the fusion of infrared polarization and intensity images.
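
A minimal sketch of a trapezoidal membership function of the kind used to tune the fuzzy combination rules described above, assuming NumPy; the break points are illustrative and would in practice be derived from the low-frequency and support-value difference features.

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Membership rises linearly on [a, b], equals 1 on [b, c], falls on [c, d]."""
    x = np.asarray(x, dtype=float)
    rising = np.clip((x - a) / max(b - a, 1e-12), 0.0, 1.0)
    falling = np.clip((d - x) / max(d - c, 1e-12), 0.0, 1.0)
    return np.minimum(rising, falling)

# Example: weight given to one sub-combination rule as a difference feature grows.
weights = trapezoid(np.linspace(0.0, 1.0, 5), a=0.1, b=0.3, c=0.7, d=0.9)
```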

Relevance: 60.00%

Abstract:

This article presents the results of a single-day census of radiation therapy (RT) treatment and technology use in Australia. The primary aim of the study was to ascertain patterns of RT practice and technology in use across Australia. These data were primarily collated to inform curriculum development of academic programs, thereby ensuring that training is matched to workforce patterns of practice. Methods: The study design was a census method with all 59 RT centres in Australia being invited to provide quantitative summary data relating to patient case mix and technology use on a randomly selected but common date. Anonymous and demographic-free data were analysed using descriptive statistics. Results: Overall data were provided across all six Australian States by 29 centres of a possible 59, yielding a response rate of 49% and representing a total of 2743 patients. Findings from this study indicate the increasing use of emerging intensity-modulated radiotherapy (IMRT), image fusion and image-guided radiation therapy (IGRT) technology in Australian RT planning and delivery phases. IMRT in particular was used for 37% of patients, indicating a high uptake of the technology in Australia when compared to other published data. The results also highlight the resource-intensive nature of benign tumour radiotherapy. Conclusions: In the absence of routine national data collection, the single-day census method offers a relatively convenient means of measuring and tracking RT resource utilisation. Wider use of this tool has the potential to not only track trends in technology implementation but also inform evidence-based guidelines for referral and resource planning.

Relevance: 60.00%

Abstract:

The images are first segmented into regions in a feature space formed from multiple features using the fuzzy C-means clustering algorithm, and a multi-scale wavelet decomposition is then performed on these regions. A fuzzy region similarity is constructed with the Cauchy function, and weighting factors are built from the fuzzy similarity and the regional information content, yielding the wavelet coefficients of the fused image. Finally, the fused image is obtained by the inverse wavelet transform. Five criteria, namely root mean square error, peak signal-to-noise ratio, entropy, cross-entropy and mutual information, are used to evaluate the performance of the fusion algorithm. Experimental results show that the proposed method has good fusion characteristics.
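
A minimal sketch of three of the five evaluation criteria named above (root mean square error, peak signal-to-noise ratio and entropy), assuming 8-bit NumPy images; the cross-entropy and mutual-information criteria would follow the same histogram-based pattern.

```python
import numpy as np

def rmse(reference, fused):
    diff = reference.astype(float) - fused.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(reference, fused, peak=255.0):
    return float(20.0 * np.log10(peak / rmse(reference, fused)))

def entropy(image, bins=256):
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))
```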

Relevance: 60.00%

Abstract:

The introduction of functional data into the radiotherapy treatment planning process is currently the focus of significant commercial, technical, scientific and clinical development. The potential of such data from positron emission tomography (PET) was recognized at an early stage and was integrated into the radiotherapy treatment planning process through the use of image fusion software. The combination of PET and CT in a single system (PET/CT), forming an inherently fused anatomical and functional dataset, has provided an imaging modality which could be used as the prime tool in the delineation of tumour volumes and the preparation of patient treatment plans, especially when integrated with virtual simulation. PET imaging, typically using 18F-fluorodeoxyglucose (18F-FDG), can provide data on metabolically active tumour volumes. These functional data have the potential to modify treatment volumes and to guide treatment delivery to cells with particular metabolic characteristics. This paper reviews the current status of the integration of PET and PET/CT data into the radiotherapy treatment process. Consideration is given to the requirements of PET/CT data acquisition, with reference to patient positioning aids and the limitations imposed by the PET/CT system. It also reviews the approaches being taken to the definition of functional/tumour volumes and the mechanisms available to measure and include physiological motion in the imaging process. The use of PET data must be based upon a clear understanding of the interpretation and limitations of the functional signal. Protocols for the implementation of this development remain to be defined, and outcomes data based upon clinical trials are still awaited. © 2006 The British Institute of Radiology.

Relevance: 60.00%

Abstract:

Results are presented from a trial in which a real-time passive millimetre-wave camera was mounted on a landing craft. The vessel was operated on rivers in the UK, and imagery of surrounding terrain, structures, obstacles and other vessels was obtained. An IR camera was also used, and the differences in signatures of various features are discussed. Opportunities for image fusion are highlighted.

Relevance: 60.00%

Abstract:

We enhance photographs shot in dark environments by combining a picture taken with the available light and one taken with the flash. We preserve the ambiance of the original lighting and insert the sharpness from the flash image. We use the bilateral filter to decompose the images into detail and large scale. We reconstruct the image using the large scale of the available lighting and the detail of the flash. We detect and correct flash shadows. This combines the advantages of available illumination and flash photography.
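
A minimal sketch of the large-scale/detail decomposition described above, assuming OpenCV and single-channel float32 images in [0, 1]; the bilateral-filter parameters are illustrative, and colour handling and flash-shadow correction are omitted.

```python
import cv2

def fuse_flash_no_flash(ambient, flash, eps=1e-3):
    """Combine the large scale of the ambient image with the detail of the flash image."""
    large_scale = cv2.bilateralFilter(ambient, d=9, sigmaColor=0.1, sigmaSpace=7)
    flash_base = cv2.bilateralFilter(flash, d=9, sigmaColor=0.1, sigmaSpace=7)
    detail = (flash + eps) / (flash_base + eps)    # flash detail layer (ratio image)
    return large_scale * detail
```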