990 results for source images


Relevance:

100.00%

Publisher:

Abstract:

The posterior circulation Acute Stroke Prognosis Early CT Score (pc-ASPECTS) quantifies the extent of early ischemic changes in the posterior circulation with a 10-point grading system. We hypothesized that pc-ASPECTS applied to CT angiography source images predicts functional outcome of patients in the Basilar Artery International Cooperation Study (BASICS).
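
As a rough illustration of how a 10-point score of this kind is tallied, the sketch below subtracts region weights from 10 for each territory showing early ischemic change. The region names and 1/2-point weights follow the commonly described pc-ASPECTS scheme, while the function and data layout are hypothetical and not taken from the BASICS study.

```python
# Hypothetical illustration of pc-ASPECTS tallying: start at 10 and subtract
# the weight of every region showing early ischemic change.
REGION_WEIGHTS = {
    "left_thalamus": 1, "right_thalamus": 1,
    "left_cerebellum": 1, "right_cerebellum": 1,
    "left_pca_territory": 1, "right_pca_territory": 1,
    "midbrain": 2, "pons": 2,
}

def pc_aspects(affected_regions):
    """Return the pc-ASPECTS score (10 = no early ischemic changes)."""
    return 10 - sum(REGION_WEIGHTS[r] for r in affected_regions)

print(pc_aspects([]))                         # 10: normal CTA source images
print(pc_aspects(["pons", "left_thalamus"]))  # 7
```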

Relevance:

70.00%

Publisher:

Abstract:

Significant studies have been conducted in recent decades toward understanding road traffic noise and its effects on residential balconies. These previous studies have used a variety of techniques, such as theoretical models, scale models and measurements on real balconies, and have considered road traffic noise levels within the balcony space, inside an adjacent habitable room, or both. Previous theoretical models have used, for example, simplified specular-reflection calculations, boundary element methods (BEM), adaptations of CoRTN or Sabine theory. This paper presents an alternative theoretical model to predict the effects of road traffic noise spatially within the balcony space. The model includes a specular reflection component that calculates up to 10 orders of source images. To account for diffusion effects, a two-compartment radiosity component is utilised: the first radiosity compartment is the urban street, represented as a street with building facades on either side; the second is the balcony space. The model calculates the predicted road traffic noise levels within the balcony space and can establish the effect of changing street and balcony geometries. Screening attenuation algorithms are included to determine the effects of solid balcony parapets and balcony ceiling shields.
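
As a minimal illustration of the specular (image-source) component only, the sketch below sums the energy of mirror-image sources between two parallel facades up to a chosen reflection order. The geometry, reflection coefficient and 1/r² spreading are simplifying assumptions and do not reproduce the paper's full model (no radiosity, CoRTN or balcony screening terms).

```python
import math

def street_canyon_level(xs, xr, dy, width, max_order=10, refl_coeff=0.8):
    """Energy sum of the direct path and specular reflections (image-source
    method) between two parallel facades at x=0 and x=width.
    xs, xr: source/receiver x positions; dy: along-street separation."""
    total = 0.0
    for m in range(-max_order, max_order + 1):
        # mirror images across the two facades: 2mW + xs and 2mW - xs
        for x_img, n_refl in ((2 * m * width + xs, 2 * abs(m)),
                              (2 * m * width - xs, abs(2 * m - 1))):
            if n_refl > max_order:
                continue
            r = math.hypot(x_img - xr, dy)
            total += (refl_coeff ** n_refl) / (r * r)   # energy, 1/r^2 spreading
    return 10.0 * math.log10(total)                     # relative level in dB

# Example: 20 m wide street, source 3 m from one facade, receiver 15 m along
print(street_canyon_level(xs=3.0, xr=17.0, dy=15.0, width=20.0))
```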

Relevance:

70.00%

Publisher:

Abstract:

A method of source localization in shallow water, based on the subspace concept, is described. It is shown that a vector representing the source in the image space spanned by the direction vectors of the source images is orthogonal to the noise eigenspace of the covariance matrix. Computer simulation has shown that a horizontal array of eight sensors can accurately localize one or more uncorrelated sources in shallow water dominated by multipath propagation.
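
The orthogonality exploited here is the same one used in MUSIC-style estimators; a minimal narrowband sketch for a generic eight-sensor uniform linear array is given below. The plane-wave steering vector stands in for the paper's multipath direction vectors of the source images, so this illustrates the subspace idea rather than the shallow-water formulation itself.

```python
import numpy as np

def steering_vector(theta_deg, n_sensors=8, spacing=0.5):
    """Plane-wave steering vector for a uniform linear array (spacing in wavelengths)."""
    k = 2 * np.pi * spacing * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(n_sensors))

def music_spectrum(snapshots, n_sources, scan_deg):
    """Noise-subspace (MUSIC-style) spectrum: peaks where the candidate
    direction vector is orthogonal to the noise eigenspace."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
    _, V = np.linalg.eigh(R)                                  # ascending eigenvalues
    En = V[:, :R.shape[0] - n_sources]                        # noise eigenvectors
    spec = []
    for th in scan_deg:
        a = steering_vector(th)
        spec.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.array(spec)

# Toy usage: one source at 20 degrees in white noise
rng = np.random.default_rng(0)
a = steering_vector(20.0)[:, None]
X = a @ (rng.standard_normal((1, 200)) + 1j * rng.standard_normal((1, 200))) \
    + 0.1 * (rng.standard_normal((8, 200)) + 1j * rng.standard_normal((8, 200)))
scan = np.arange(-90, 91, 1.0)
print(scan[np.argmax(music_spectrum(X, n_sources=1, scan_deg=scan))])  # ~20.0
```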

Relevance:

70.00%

Publisher:

Abstract:

Head pose classification from surveillance images acquired with distant, large field-of-view cameras is difficult because faces are captured at low resolution and have a blurred appearance. Domain adaptation approaches are useful for transferring knowledge from the training (source) to the test (target) data when they have different attributes, minimizing target-data labeling effort in the process. This paper examines the use of transfer learning for efficient multi-view head pose classification with minimal target training data under three challenging situations: (i) where the range of head poses in the source and target images is different, (ii) where source images capture a stationary person while target images capture a moving person whose facial appearance varies under motion due to changing perspective and scale, and (iii) a combination of (i) and (ii). On the whole, the presented methods represent novel transfer learning solutions employed in the context of multi-view head pose classification. We demonstrate through extensive experimental validation that the proposed solutions considerably outperform the state of the art. Finally, this work presents the DPOSE dataset, compiled for benchmarking head pose classification performance with moving persons and for aiding behavioural understanding applications.

Relevance:

70.00%

Publisher:

Abstract:

Most pixel-level satellite image fusion methodologies introduce false spatial details, i.e. artifacts, in the resulting fused images. In many cases these artifacts appear because image fusion methods do not consider the differences in roughness or textural characteristics between different land covers; they only consider the digital values associated with single pixels. This effect increases as the spatial resolution of the images increases. To minimize this problem, we propose a new paradigm based on local measurements of the fractal dimension (FD). Fractal dimension maps (FDMs) are generated for each of the source images (the panchromatic image and each band of the multispectral image) with the box-counting algorithm and a windowing process. The average of the source-image FDMs, previously indexed between 0 and 1, is used to discriminate the different land covers present in the satellite images. This paradigm has been applied within the fusion methodology based on the discrete wavelet transform (DWT), using the à trous algorithm (WAT). Two scenes registered by optical sensors on board the FORMOSAT-2 and IKONOS satellites were used to study the behaviour of the proposed methodology. The implementation of this approach using the WAT method allows the fusion process to adapt to the roughness and shape of the regions present in the images to be fused. This improves the quality of the fused images and their classification results compared with the original WAT method.
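
A minimal sketch of windowed box counting is shown below; it estimates the fractal dimension of a binary mask per window, whereas the paper works on grayscale panchromatic and multispectral bands and rescales the resulting FDMs to [0, 1]. The window size and box sizes are hypothetical parameters.

```python
import numpy as np

def box_counting_dimension(mask, box_sizes=(2, 4, 8, 16)):
    """Estimate the fractal dimension of a binary 2-D mask by box counting:
    slope of log N(s) versus log(1/s)."""
    counts = []
    for s in box_sizes:
        h, w = mask.shape
        trimmed = mask[:h - h % s, :w - w % s]          # tile exactly into s x s boxes
        blocks = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
        counts.append(max(1, int(np.count_nonzero(blocks.any(axis=(1, 3))))))
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

def fd_map(binary_image, win=32):
    """Sliding-window FD map (non-overlapping windows, hypothetical size)."""
    h, w = binary_image.shape
    out = np.zeros((h // win, w // win))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = box_counting_dimension(
                binary_image[i * win:(i + 1) * win, j * win:(j + 1) * win])
    return out
```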

Relevance:

70.00%

Publisher:

Abstract:

Once the advantages of object-based classification over pixel-based classification are accepted, the need arises for simple and affordable methods to define and characterize the objects to be classified. This paper presents a new methodology for the identification and characterization of objects at different scales, through the integration of the spectral information provided by the multispectral image and the textural information from the corresponding panchromatic image. In this way, a set of objects is defined that yields a simplified representation of the information contained in the two source images. These objects can be characterized by different attributes that allow discrimination between different spectral and textural patterns. This methodology facilitates information processing, from both a conceptual and a computational point of view. Thus, the defined attribute vectors can be used directly as training-pattern input for certain classifiers, for example artificial neural networks. Growing Cell Structures have been used to classify the merged information.
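
A toy version of such attribute vectors, assuming a segmentation label map is already available, could combine per-object spectral means with a simple panchromatic texture statistic; the function below is illustrative only and is not the paper's methodology.

```python
import numpy as np

def object_attributes(labels, ms_bands, pan):
    """Hypothetical per-object attribute vectors: the mean of each multispectral
    band (spectral part) plus the standard deviation of the panchromatic band
    inside the object (a crude texture measure)."""
    feats = {}
    for obj_id in np.unique(labels):
        m = labels == obj_id
        spectral = [band[m].mean() for band in ms_bands]
        textural = [pan[m].std()]
        feats[obj_id] = np.array(spectral + textural)
    return feats
```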

Relevance:

70.00%

Publisher:

Abstract:

Purpose: To evaluate and compare the performance of the Ripplet Type-1 transform and the directional discrete cosine transform (DDCT), and their combinations, for improved representation of MRI images while preserving fine features such as edges along smooth curves and textures. Methods: In a novel image representation method based on the fusion of the Ripplet Type-1 and conventional/directional DCT transforms, source images were enhanced in terms of visual quality using Ripplet, DDCT and their various combinations. The enhancement achieved was quantified on the basis of peak signal-to-noise ratio (PSNR), mean square error (MSE), structural content (SC), average difference (AD), maximum difference (MD), normalized cross-correlation (NCC), and normalized absolute error (NAE). To determine the attributes of both transforms, they were also combined to represent the entire image. All possible combinations were tested to provide a complete study, and the contrasts were evaluated among all the combinations. Results: Using the direct combining method (DDCT) first and then the Ripplet method, a PSNR value of 32.3512 was obtained, which is higher than the PSNR values of the other combinations. The proposed technique gives a PSNR approximately equal to those of the parent techniques, while preserving edge information, texture information and various other directional image features. The fusion of DDCT followed by the Ripplet reproduced the best images. Conclusion: The transformation of images using Ripplet followed by DDCT ensures a more efficient method for the representation of images with preservation of fine details such as edges and textures.
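
Of the metrics listed, PSNR and MSE are standard and can be computed as in the short sketch below (a peak value of 255 is assumed for 8-bit images); the other metrics and the Ripplet/DDCT transforms themselves are not reproduced here.

```python
import numpy as np

def mse(ref, test):
    """Mean square error between a reference image and a test image."""
    return np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with the given peak value."""
    m = mse(ref, test)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```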

Relevance:

70.00%

Publisher:

Abstract:

Texture has good discriminating potential that complements that of radiometric parameters in the image classification process. The multiband Compact Texture Unit (CTU) index, recently developed by Safia and He (2014), makes it possible to extract texture from several bands at once, and thus to exploit additional information ignored until now in traditional texture analyses: the interdependence between bands. However, this new tool has not yet been tested on multisource images, a use that may prove to be of great interest when one considers, for example, all the textural richness that radar can bring in addition to optical data through data combination. This study therefore completes the validation initiated by Safia (2014) by applying the CTU to an optical-radar image pair. The texture analysis of this dataset was used to generate a "colour texture" image. The textural bands thus created are combined again with the initial optical bands before being integrated into a land-cover classification process in eCognition. The same classification procedure (but without CTU) is applied respectively to the optical data, the radar data, and the optical-radar combination. In addition, the CTU generated from the optical data alone (monosource) is compared with that derived from the optical-radar pair (multisource). Analysis of the separating power of these different bands using histograms, together with the confusion matrix tool, makes it possible to compare the performance of these different configurations and parameters. These comparisons identify the CTU, and in particular the multisource CTU, as the most discriminating criterion; its presence adds variability to the image, allowing sharper segmentation and a classification that is both more detailed and more accurate. Indeed, accuracy increases from 0.5 with the optical image to 0.74 with the CTU image, while confusion decreases from 0.30 (optical) to 0.02 (CTU).

Relevance:

60.00%

Publisher:

Abstract:

Image fusion techniques are useful for integrating the geometric detail of a high-resolution panchromatic (PAN) image and the spectral information of a low-resolution multispectral (MSS) image. This is particularly important for understanding land-use dynamics at larger scales (1:25000 or lower), which decision makers require in order to adopt holistic approaches to regional planning. Fused images can extract features from the source images and provide more information than a single MSS scene. High spectral resolution aids in identifying objects more distinctly, while high spatial resolution allows objects to be located more precisely. Geoinformatics technologies, with their ability to provide high-spatial- and spectral-resolution data, help in inventorying, mapping, monitoring and sustainable management of natural resources. The fusion module in GRDSS, taking into consideration the limitations in the spatial resolution of MSS data and the spectral resolution of PAN data, provides the high-spatial- and spectral-resolution remote sensing images required for land-use mapping at regional scale. GRDSS is a freeware GIS graphical user interface (GUI) developed in Tcl/Tk, based on the command-line arguments of GRASS (Geographic Resources Analysis Support System), with functionalities for raster analysis, vector analysis, site analysis, image processing, modelling and graphics visualization. It has the capability to capture, store, process, analyse, prioritize and display spatial and temporal data.
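
As a generic illustration of PAN-MSS fusion (not the GRDSS fusion module itself), a simple Brovey-style pan-sharpening step is sketched below; it assumes the multispectral bands have already been resampled to the panchromatic grid.

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-6):
    """Simple Brovey-style pan-sharpening sketch: each upsampled multispectral
    band is rescaled so that the band sum matches the high-resolution
    panchromatic intensity.
    ms:  array of shape (bands, H, W), already resampled to the PAN grid
    pan: array of shape (H, W)"""
    intensity = ms.sum(axis=0) + eps
    return ms * (pan / intensity)
```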

Relevance:

60.00%

Publisher:

Abstract:

Introduction: DSA is the method of choice for the follow-up of patients with embolized intracranial aneurysms; however, it can be associated with disabling or fatal complications. MRA has been proposed as an alternative method because of its lower cost and lower morbidity and mortality, although its diagnostic performance remains under debate owing to the development of new protocols, more powerful scanners, and new applications of DSA. Methods: We searched the literature to date and compared the diagnostic performance of MRA with that of DSA for detecting residual flow after therapeutic embolization of intracranial aneurysms. We conducted a systematic review of the literature and a meta-analysis based on 34 articles identified in a search of the PubMed, Scopus, SciELO and BVS databases. Results: TOF-MRA showed a sensitivity of 86.8% (84.3%-89.1%) and a specificity of 91.2% (89%-93.1%); the SROC for TOF-MRA showed an AUC of 0.95. CE-MRA showed a sensitivity of 88.1% (84.6%-91.1%) and a specificity of 89.1% (85.7%-91.9%); the SROC showed an AUC of 0.93. The analysis stratified by scanner field strength found that TOF-MRA performs better at 3T, although the difference was not statistically significant. Interobserver agreement for TOF-MRA and CE-MRA was moderate to very good. Discussion: The diagnostic performance of MRA in the follow-up of embolized intracranial aneurysms proved to be good, with sensitivity above 84% and slightly better results for TOF-MRA, but without replacing DSA. However, the results should be interpreted with caution because of the heterogeneity of the included studies. (Abbreviations: DSA: digital subtraction angiography; MRA: magnetic resonance angiography; TOF-MRA: time-of-flight MR angiography; CE-MRA: contrast-enhanced MR angiography.)

Relevance:

60.00%

Publisher:

Abstract:

In this paper a new method to compute the saliency of source images is presented. This work is an extension of the universal quality index introduced by Wang and Bovik and improved by Piella. It defines saliency according to the change in topology of the quadtree decomposition between the source images and the fused image. The saliency function assigns higher weight to the tree nodes that differ more, in terms of topology, in the fused image. Quadtree decomposition provides an easy and systematic way to add a saliency factor based on the segmented regions in the images.
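
A minimal sketch of the quadtree decomposition step is given below: a block is split into four quadrants while a homogeneity test fails. The variance threshold, minimum block size and square power-of-two image are assumptions for illustration; the paper's actual saliency weighting over the resulting nodes is not reproduced.

```python
import numpy as np

def quadtree_leaves(img, thresh=100.0, min_size=4):
    """Recursive quadtree decomposition sketch: a block is split into four
    quadrants while its variance exceeds `thresh` and it is larger than
    `min_size`. Returns the list of leaf blocks as (row, col, size)."""
    leaves = []
    def split(r, c, size):
        block = img[r:r + size, c:c + size]
        if size > min_size and block.var() > thresh:
            half = size // 2
            for dr, dc in ((0, 0), (0, half), (half, 0), (half, half)):
                split(r + dr, c + dc, half)
        else:
            leaves.append((r, c, size))
    split(0, 0, img.shape[0])   # assumes a square, power-of-two image
    return leaves

# A saliency weight could then compare the leaf structure over the same region
# in a source image and in the fused image (illustrative only).
```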

Relevance:

60.00%

Publisher:

Abstract:

Image fusion quality metrics have evolved from image processing quality metrics. They measure the quality of fused images by estimating how much localized information has been transferred from the source images into the fused image. However, this technique assumes that it is actually possible to fuse two images into one without any loss. In practice, some features must be sacrificed or relaxed in both source images. The relaxed features might be very important, such as edges, gradients and texture elements, and the importance of a given feature is application-dependent. This paper presents a new method for image fusion quality assessment that relies on estimating how much valuable information has not been transferred.

Relevance:

60.00%

Publisher:

Abstract:

The unsuitability of the classic mutual information measure as a performance measure for image fusion is discussed. An analytical proof is provided that classic mutual information cannot be considered a measure of image fusion performance.
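
For reference, the classic mutual information referred to here, computed between a source image $A$ and the fused image $F$ from their joint grey-level histogram, is

$$ I(A;F) \;=\; \sum_{a}\sum_{f} p_{AF}(a,f)\,\log_2\frac{p_{AF}(a,f)}{p_{A}(a)\,p_{F}(f)} $$

Fusion-performance variants typically sum this quantity over both source images, $I(A;F)+I(B;F)$; the notation is generic rather than taken from the paper.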

Relevance:

60.00%

Publisher:

Abstract:

An image fusion system accepts two source images and produces a 'better' fused image. The term 'better' differs from one context to another: in some contexts it means holding more information, in others it means obtaining more accurate results or readings. In general, images hold more than just colour values; the histogram distribution, the dynamic range of colours and the colour maps are all as valuable as the colour values in presenting the pictorial information of the image. This paper studies the problems of fusing images from different domains and proposes a method that extends fusion algorithms so that they also fuse the image properties that define how the captured images are interpreted.

Relevance:

60.00%

Publisher:

Abstract:

Multisource image fusion is usually achieved by repeatedly fusing source images in pairs. However, there is no guarantee on the delivered quality, considering the amount of information to be squeezed into the same spatial dimensions. This paper presents a fusion capacity measure and examines the limit beyond which fusing more images adds no further information. The fusion capacity index employs mutual information (MI) to measure how far the histogram of the examined image is from the uniformly distributed histogram of a saturated image.
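
One simple way to quantify how far a histogram is from the uniform histogram of a saturated image is the Kullback-Leibler divergence from the uniform distribution, sketched below; this is an illustrative stand-in, not the paper's fusion capacity index, and the 256-bin grey-level assumption is hypothetical.

```python
import numpy as np

def headroom_from_uniform(image, bins=256):
    """Illustrative capacity-style measure (not the paper's exact index):
    Kullback-Leibler divergence between the image's grey-level histogram and
    a uniform histogram. Zero means the histogram is already 'saturated'
    (uniform), i.e. there is little room to absorb further information."""
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    q = 1.0 / bins
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / q)))
```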