24 results for image fusion

in Deakin Research Online - Australia


Relevance:

100.00%

Publisher:

Abstract:

In this paper a new method to compute the saliency of source images is presented. The work extends the universal quality index introduced by Wang and Bovik and improved by Piella. It defines saliency according to the change in topology of the quadratic tree decomposition between the source images and the fused image: the saliency function assigns higher weight to tree nodes whose topology differs more in the fused image. Quadratic tree decomposition provides an easy and systematic way to add a saliency factor based on the segmented regions of the images.
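
A minimal sketch of the idea, assuming a variance-based splitting rule and a leaf-count comparison; the paper's exact splitting criterion and weighting function are not specified here:

```python
import numpy as np

def quadtree_leaves(img, x, y, w, h, var_thr=100.0, min_size=8):
    """Recursively split a block into quadrants until its variance falls
    below var_thr or it reaches min_size; return the leaf boxes."""
    block = img[y:y + h, x:x + w]
    if w <= min_size or h <= min_size or block.var() <= var_thr:
        return [(x, y, w, h)]
    hw, hh = w // 2, h // 2
    leaves = []
    for dx, dy, bw, bh in [(0, 0, hw, hh), (hw, 0, w - hw, hh),
                           (0, hh, hw, h - hh), (hw, hh, w - hw, h - hh)]:
        leaves += quadtree_leaves(img, x + dx, y + dy, bw, bh, var_thr, min_size)
    return leaves

def topology_saliency(source, fused, var_thr=100.0, min_size=8):
    """Weight a source image by how much its quad-tree decomposition
    differs from that of the fused image (here: leaf-count difference)."""
    h, w = source.shape
    n_src = len(quadtree_leaves(source, 0, 0, w, h, var_thr, min_size))
    n_fus = len(quadtree_leaves(fused, 0, 0, w, h, var_thr, min_size))
    return abs(n_src - n_fus) / max(n_src, n_fus)
```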

Relevance:

100.00%

Publisher:

Abstract:

Image fusion quality metrics have evolved from image processing quality metrics. They measure the quality of fused images by estimating how much localized information has been transferred from the source images into the fused image. However, this technique assumes that it is actually possible to fuse two images into one without any loss. In practice, some features must be sacrificed or relaxed in both source images, and the relaxed features might be very important ones, such as edges, gradients and texture elements. The importance of a certain feature is application dependent. This paper presents a new method for image fusion quality assessment based on estimating how much valuable information has not been transferred.
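
A hedged sketch of the idea, using block-wise gradient energy as an assumed proxy for "valuable information"; the paper's actual estimator may differ:

```python
import numpy as np

def gradient_energy(img):
    """Per-pixel gradient magnitude as a simple proxy for local information."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def lost_information(sources, fused, block=16):
    """Estimate the fraction of locally valuable information (gradient
    energy, an assumed proxy) present in the sources but missing from
    the fused image.  Lower is better."""
    ef = gradient_energy(fused)
    es = np.maximum.reduce([gradient_energy(s) for s in sources])
    h, w = fused.shape
    lost, total = 0.0, 0.0
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            src = es[y:y + block, x:x + block].sum()
            fus = ef[y:y + block, x:x + block].sum()
            lost += max(src - fus, 0.0)   # information the fused block dropped
            total += src
    return lost / total if total > 0 else 0.0
```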

Relevance:

100.00%

Publisher:

Abstract:

The unsuitability of using the classic mutual information measure as a performance measure for image fusion is discussed. An analytical proof that classic mutual information cannot be considered a measure of image fusion performance is provided.
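
For reference, the "classic" MI-based fusion measure under discussion is typically computed from joint histograms, as in the sketch below (a standard formulation, not code from the paper):

```python
import numpy as np

def mutual_information(a, b, bins=256):
    """Histogram-based mutual information between two 8-bit images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def classic_fusion_mi(src_a, src_b, fused):
    """Classic MI fusion measure: MI(A, F) + MI(B, F)."""
    return mutual_information(src_a, fused) + mutual_information(src_b, fused)
```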

Relevance:

100.00%

Publisher:

Abstract:

An image fusion system accepts two source images and produces a 'better' fused image. The term 'better' differs from one context to another: in some contexts it means holding more information, in others it means obtaining more accurate results or readings. In general, images hold more than just the color values. The histogram distribution, the dynamic range of colors, and the color maps are all as valuable as the color values in presenting the pictorial information of the image. This paper studies the problems of fusing images from different domains. It proposes a method that extends fusion algorithms to also fuse the image properties that define how captured images are interpreted.
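
Purely as an illustration of fusing an image property alongside the pixel values (the averaging rule and the choice of dynamic range as the fused property are assumptions, not the paper's method):

```python
import numpy as np

def fuse_with_dynamic_range(img_a, img_b):
    """Fuse pixel values with a simple average (assumed baseline) and also
    fuse one image property -- the dynamic range -- by stretching the
    result over the union of the two source ranges."""
    fused = (img_a.astype(float) + img_b.astype(float)) / 2.0
    lo = float(min(img_a.min(), img_b.min()))   # fused lower bound
    hi = float(max(img_a.max(), img_b.max()))   # fused upper bound
    span = fused.max() - fused.min()
    if span == 0:
        return np.full_like(fused, lo)
    return (fused - fused.min()) / span * (hi - lo) + lo
```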

Relevance:

100.00%

Publisher:

Abstract:

This thesis presents an algebraic framework for multimodal image fusion. The framework derives algebraic constructs and equations that govern the fusion process. The derived equations serve as objective functions according to which image fusion algorithms and metrics can be tuned. The equations also prove the duality between image fusion algorithms and metrics.

Relevance:

100.00%

Publisher:

Abstract:

Mobile robots are providing great assistance operating in hazardous environments such as nuclear cores, battlefields, natural disasters, and even at the nano-level of human cells. These robots are usually equipped with a wide variety of sensors in order to collect data and guide their navigation. Whether a single robot operates all sensors or a swarm of cooperating robots operate their specialised sensors, the captured data can be too large to be transferred across the limited resources (e.g. bandwidth, battery, processing, and response time) available in hazardous environments. Therefore, local computations have to be carried out on board the swarming robots to assess the worthiness of captured data and the capacity of fused information in a certain spatial dimension, as well as to select a proper combination of fusion algorithms and metrics. This paper introduces the concepts of Type-I and Type-II fusion errors, fusion capacity, and fusion worthiness. Together, these concepts form the ladder leading to autonomous fusion systems.

Relevance:

100.00%

Publisher:

Abstract:

Multisource image fusion is usually achieved by repeatedly fusing source images in pairs. However, there is no guarantee on the delivered quality, given the amount of information to be squeezed into the same spatial dimension. This paper presents a fusion capacity measure and examines the limit beyond which fusing more images no longer adds information. The fusion capacity index employs Mutual Information (MI) to measure how far the histogram of the examined image is from the uniformly distributed histogram of a saturated image.
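
One possible reading of such a capacity index, using the divergence of the image histogram from the uniform histogram of a saturated image; the paper's exact MI-based formulation may differ:

```python
import numpy as np

def fusion_capacity(img, bins=256):
    """Distance of the image histogram from the uniform histogram of a
    saturated image, measured as KL divergence: 0 means the histogram is
    already uniform (saturated), larger values mean more room to absorb
    further information."""
    hist, _ = np.histogram(img.ravel(), bins=bins, range=(0, 256))
    p = hist / hist.sum()
    u = 1.0 / bins                      # uniform (saturated) histogram
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / u)).sum())   # = log2(bins) - H(p)
```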

Relevance:

100.00%

Publisher:

Abstract:

The image fusion process merges two images into a single, more informative image. Objective image fusion performance metrics rely primarily on measuring the amount of information transferred from each source image into the fused image. These metrics have evolved from image processing dissimilarity metrics, and researchers have developed many additions to such metrics in order to better weight the locally fusion-worthy features in the source images. This paper studies the evolution of objective image fusion performance metrics and their subjective and objective validation. It describes how a fusion performance metric evolves, starting from an image dissimilarity metric, through its realization in image fusion contexts and its localized weighting factors, to the validation process.

Relevance:

80.00%

Publisher:

Abstract:

It is not uncommon in many image acquisition solutions to face a trade-off between obtaining high resolution images at very low frame rates and acquiring a burst of low resolution images at higher frame rates. This paper introduces a novel image fusion framework for producing a high resolution video by applying the motion analysed in a low resolution video to a single high resolution image. Many application domains, such as remote sensing, low radiation medical imaging and battlefield automation, will benefit from this fusion framework. The results show that a high resolution 30 frames-per-second video can be produced with a 95% cost reduction while maintaining 94% structural similarity.
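
A rough, hypothetical sketch of this kind of fusion using dense optical flow from OpenCV; the paper's actual motion analysis and synthesis pipeline is not reproduced here:

```python
import cv2
import numpy as np

def synthesize_hr_frame(hr_still, lr_ref, lr_frame):
    """Estimate dense motion from a grayscale LR frame back to the LR
    reference frame (the one co-registered with the HR still), upscale
    the flow, and backward-warp the HR still to synthesise an HR frame
    at the LR frame's time instant."""
    scale = hr_still.shape[1] / lr_ref.shape[1]
    # Flow mapping each pixel of lr_frame to its location in lr_ref.
    flow = cv2.calcOpticalFlowFarneback(lr_frame, lr_ref, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    flow_hr = cv2.resize(flow, (hr_still.shape[1], hr_still.shape[0])) * scale
    ys, xs = np.mgrid[0:hr_still.shape[0], 0:hr_still.shape[1]].astype(np.float32)
    map_x = xs + flow_hr[..., 0]
    map_y = ys + flow_hr[..., 1]
    return cv2.remap(hr_still, map_x, map_y, cv2.INTER_LINEAR)
```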

Relevance:

40.00%

Publisher:

Abstract:

Due to the huge growth of the World Wide Web, medical images are now available in large numbers in online repositories, and there is a need to retrieve these images by automatically extracting their visual information, an approach commonly known as content-based image retrieval (CBIR). Since each feature extracted from an image characterizes only a certain aspect of its content, multiple features must be employed to improve retrieval performance. Meanwhile, experiments demonstrate that a given feature is not equally important for different image queries. Most existing feature fusion methods for image retrieval rely on query-independent feature fusion or on explicit user weighting. In this paper, we present a novel query-dependent feature fusion method for medical image retrieval based on a one-class support vector machine. Considering that a given feature is not equally important for different image queries, the proposed method learns a different feature fusion model for each image query based only on the multiple image samples provided by the user, and the learned models reflect the varying importance of a given feature across queries. Experimental results on the IRMA medical image collection demonstrate that the proposed method improves retrieval performance effectively and outperforms existing feature fusion methods for image retrieval.
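
A hedged sketch of query-dependent fusion with one-class SVMs (the feature names and the score-combination rule are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np
from sklearn.svm import OneClassSVM

def rank_by_query_dependent_fusion(query_feats, db_feats):
    """query_feats / db_feats: dicts mapping a feature name (e.g. 'texture',
    'shape' -- hypothetical names) to arrays of shape (n_samples, dim).
    One one-class SVM is fitted per feature on the query samples only, so
    the fused ranking is learned per query rather than fixed in advance."""
    scores = np.zeros(len(next(iter(db_feats.values()))))
    for name, q in query_feats.items():
        model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(q)
        s = model.decision_function(db_feats[name])    # higher = closer to query
        s = (s - s.min()) / (s.max() - s.min() + 1e-12)  # normalise per feature
        scores += s
    return np.argsort(-scores)                         # ranked database indices
```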

Relevance:

40.00%

Publisher:

Abstract:

With the development of the internet, medical images are now available in large numbers in online repositories, and there is a need to retrieve them in a content-based way by automatically extracting their visual information. Since a single feature extracted from an image characterizes only a certain aspect of its content, multiple features must be employed to improve retrieval performance. Furthermore, a given feature is not equally important for different image queries, since it reflects the content of different images to different degrees. However, most existing feature fusion methods for image retrieval rely on query-independent feature fusion or on explicit user weighting. In this paper, based on multiple query samples provided by the user, we present a novel query-dependent feature fusion method for medical image retrieval based on a one-class support vector machine. The proposed method learns a different feature fusion model for each image query, and the learned models reflect the varying importance of a given feature across queries. Experimental results on the IRMA medical image collection demonstrate that the proposed method improves retrieval performance effectively and outperforms existing feature fusion methods for image retrieval.