982 results for fusion method
Abstract:
Satellite image classification involves designing and developing efficient image classifiers. With satellite image data and image analysis methods multiplying rapidly, selecting the right mix of data sources and analysis approaches has become critical to the generation of quality land-use maps. In this study, a new postprocessing information fusion algorithm for the extraction and representation of land-use information based on high-resolution satellite imagery is presented. The approach produces land-use maps with sharp interregional boundaries and homogeneous regions, and proceeds in five steps. First, a GIS layer (ATKIS data) was used to generate two coarse homogeneous regions, i.e. urban and rural areas. Second, a thematic (class) map was generated with a hybrid spectral classifier combining the Gaussian maximum likelihood (GML) algorithm and the ISODATA classifier. Third, a probabilistic relaxation algorithm was applied to the thematic map, yielding a smoothed thematic map. Fourth, edge detection and edge thinning techniques were used to generate a contour map with pixel-width interclass boundaries. Fifth, the contour map was superimposed on the thematic map by a region-growing algorithm, with the contour map and the smoothed thematic map as two constraints. To run the proposed method, a software package was developed in the C programming language, comprising the GML algorithm, a probabilistic relaxation algorithm, the TBL edge detector, an edge thresholding algorithm, a fast parallel thinning algorithm, and a region-growing information fusion algorithm. The county of Landau in the state of Rheinland-Pfalz, Germany, was selected as the test site, with high-resolution IRS-1C imagery as the principal input data.
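The fifth step, region growing constrained by the contour map, can be sketched as a flood fill that treats contour pixels as barriers and re-labels each enclosed region with its majority class. This is a minimal sketch assuming integer class and binary contour rasters; the function name and details are hypothetical, not the paper's C implementation.

```python
import numpy as np
from collections import deque

def fuse_regions(thematic, contours):
    """Grow regions over the thematic map, using the contour map as a
    boundary constraint, and homogenise each region to its majority class."""
    h, w = thematic.shape
    labels = np.full((h, w), -1, dtype=int)   # -1 = not yet visited
    out = thematic.copy()
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx] != -1 or contours[sy, sx]:
                continue
            # flood-fill one region bounded by contour pixels
            queue, members = deque([(sy, sx)]), []
            labels[sy, sx] = sy * w + sx
            while queue:
                y, x = queue.popleft()
                members.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and labels[ny, nx] == -1 and not contours[ny, nx]):
                        labels[ny, nx] = labels[sy, sx]
                        queue.append((ny, nx))
            # assign the region's majority class -> homogeneous interior
            classes = [thematic[p] for p in members]
            major = max(set(classes), key=classes.count)
            for p in members:
                out[p] = major
    return out
```

Contour pixels keep their original class in this sketch; the paper instead superimposes the pixel-width interclass boundaries.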
Abstract:
The aim of this paper is to develop a probabilistic modeling framework for segmenting structures of interest from a collection of atlases. Given a subset of atlases registered to the target image for a particular region of interest (ROI), a statistical model of appearance and shape is computed for fusing the labels. Segmentations are obtained by minimizing an energy function associated with the proposed model using a graph-cut technique. We test different label fusion methods on publicly available MR images of human brains.
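For contrast with the model-based fusion described above, the simplest label fusion baseline is a per-voxel majority vote across the registered atlas labels. The sketch below is only that baseline (the paper minimizes an energy with graph cuts); the function name is ours.

```python
import numpy as np

def majority_vote(label_maps):
    """Per-voxel majority vote across registered atlas label maps.

    label_maps: sequence of equally shaped integer label arrays."""
    stack = np.stack(label_maps)          # (n_atlases, *volume_shape)
    labels = np.unique(stack)             # candidate label values
    # count votes per label, then pick the winner at every voxel
    votes = np.stack([(stack == lab).sum(axis=0) for lab in labels])
    return labels[np.argmax(votes, axis=0)]
```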
Abstract:
Usual image fusion methods inject features from a high-spatial-resolution panchromatic sensor into every low-spatial-resolution multispectral band, trying to preserve spectral signatures while improving spatial resolution to that of the panchromatic sensor. The objective is to obtain the image that would be observed by a sensor with the same spectral response (i.e., spectral sensitivity and quantum efficiency) as the multispectral sensors and the spatial resolution of the panchromatic sensor. In these methods, however, features from regions of the electromagnetic spectrum not covered by the multispectral sensors are injected into them, and the physical spectral responses of the sensors are not considered during the process. This produces undesirable effects, such as over-injection of spatial detail and slightly modified spectral signatures in some features. The authors present a technique that takes into account the physical electromagnetic spectrum responses of the sensors during the fusion process, producing images closer to the image obtained by the ideal sensor than those obtained by usual wavelet-based image fusion methods. This technique is used to define a new wavelet-based image fusion method.
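A typical wavelet-style injection scheme of the kind the paragraph criticises can be sketched with an additive "à trous"-like decomposition: the panchromatic detail plane (PAN minus its low-pass approximation) is added to each upsampled multispectral band. This is a generic sketch with a box filter standing in for the scaling filter, not the authors' spectral-response-aware method.

```python
import numpy as np

def smooth(img, k=5):
    """Separable box blur approximating the low-pass step of an
    'a trous' wavelet decomposition (kernel choice is illustrative)."""
    kernel = np.ones(k) / k
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, "same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, "same"), 0, rows)

def additive_wavelet_fusion(ms_band, pan):
    """Inject the panchromatic detail plane into one multispectral band
    (ms_band must already be resampled to the PAN pixel grid)."""
    return ms_band + (pan - smooth(pan))
```

If the panchromatic image is flat, no detail is injected and the band is returned unchanged away from the blur's border effects.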
Abstract:
Rosin is a natural product from pine forests and is used as a raw material in resinate syntheses. Resinates are polyvalent metal salts of rosin acids; Ca- and Ca/Mg-resinates in particular find wide application in the printing ink industry. In this thesis, analytical methods were applied to increase general knowledge of resinate chemistry, and the reaction kinetics was studied in order to model the nonlinear increase in solution viscosity during resinate syntheses by the fusion method. Solution viscosity in toluene is an important quality factor for resinates used in printing inks. The concept of a critical resinate concentration, c_crit, was introduced to define an abrupt change in the dependence of viscosity on resinate concentration in the solution. The concept was then used to explain the nonlinear viscosity increase during resinate syntheses. A semiempirical model with two estimated parameters was derived for the viscosity increase on the basis of apparent reaction kinetics. The model was used to control the viscosity and to predict the total reaction time of the resinate process. The kinetic data from the complex reaction media were obtained by acid value titration and by FTIR spectroscopic analyses, using a conventional calibration method to measure the resinate concentration and the concentration of free rosin acids. A multivariate calibration method was successfully applied to build partial least squares (PLS) models for monitoring acid value and solution viscosity in both the mid-infrared (MIR) and near-infrared (NIR) regions during the syntheses. The calibration models can be used for online monitoring of the resinate process. In the kinetic studies, two main reaction steps were observed during the syntheses: first, a fast irreversible resination reaction occurs at 235 °C, and then a slow thermal decarboxylation of rosin acids begins at 265 °C.
Rosin oil is formed during the decarboxylation reaction step causing significant mass loss as the rosin oil evaporates from the system while the viscosity increases to the target level. The mass balance of the syntheses was determined based on the resinate concentration increase during the decarboxylation reaction step. A mechanistic study of the decarboxylation reaction was based on the observation that resinate molecules are partly solvated by rosin acids during the syntheses. Different decarboxylation mechanisms were proposed for the free and solvating rosin acids. The deduced kinetic model supported the analytical data of the syntheses in a wide resinate concentration region, over a wide range of viscosity values and at different reaction temperatures. In addition, the application of the kinetic model to the modified resinate syntheses gave a good fit. A novel synthesis method with the addition of decarboxylated rosin (i.e. rosin oil) to the reaction mixture was introduced. The conversion of rosin acid to resinate was increased to the level necessary to obtain the target viscosity for the product at 235 °C. Due to a lower reaction temperature than in traditional fusion synthesis at 265 °C, thermal decarboxylation is avoided. As a consequence, the mass yield of the resinate syntheses can be increased from ca. 70% to almost 100% by recycling the added rosin oil.
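The abrupt change at c_crit described above can be illustrated with a two-parameter piecewise form: roughly constant viscosity below the critical concentration and exponential growth above it. The functional form and all numbers below are hypothetical stand-ins, not the thesis's fitted semiempirical model.

```python
import math

def viscosity(c, c_crit=0.6, eta0=0.05, b=12.0):
    """Illustrative solution viscosity (arbitrary units) versus resinate
    mass fraction c: flat below c_crit, exponential growth above it."""
    if c <= c_crit:
        return eta0
    return eta0 * math.exp(b * (c - c_crit))
```

Fitting eta0 and b to the titration/FTIR concentration data would then let one predict the reaction time needed to reach a target viscosity.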
Abstract:
Infrared polarization and intensity imagery provide complementary and discriminative information for image understanding and interpretation. In this paper, a novel fusion method is proposed that effectively merges this information with various combination rules. It makes use of both the low-frequency and high-frequency image components from the support value transform (SVT) and applies fuzzy logic in the combination process. The images to be fused (both infrared polarization and intensity images) are first decomposed into low-frequency component images and support value image sequences by the SVT. The low-frequency component images are then combined using a fuzzy combination rule blending three sub-combination methods: (1) region feature maximum, (2) region feature weighted average, and (3) pixel value maximum. The support value image sequences are merged using a fuzzy combination rule fusing two sub-combination methods: (1) pixel energy maximum and (2) region feature weighting. Using two newly defined features, the low-frequency difference feature for the low-frequency component images and the support-value difference feature for the support value image sequences, trapezoidal membership functions are proposed to tune the fuzzy fusion process. Finally, the fused image is obtained by inverse SVT operations. Experimental results from both visual inspection and quantitative evaluation indicate the superiority of the proposed method over its counterparts in fusing infrared polarization and intensity images.
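The trapezoidal memberships mentioned above can be sketched as follows: a membership value computed from a normalized difference feature decides how strongly a "maximum" rule dominates an "average" rule when two candidate coefficients are combined. The breakpoints and the two-rule blending are simplified assumptions, not the paper's exact combination rules.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function with breakpoints a <= b <= c <= d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)   # rising edge
    return (d - x) / (d - c)       # falling edge

def fuse_coefficients(v1, v2, diff, a=0.0, b=0.2, c=0.8, d=1.0):
    """Blend a maximum rule and an averaging rule for two candidate
    coefficients, weighted by the membership of the difference feature."""
    w = trapezoid(diff, a, b, c, d)
    return w * max(v1, v2) + (1.0 - w) * 0.5 * (v1 + v2)
```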
Abstract:
This work presents a method of information fusion involving data captured by both a standard charge-coupled device (CCD) camera and a time-of-flight (ToF) camera, to be used in detecting the proximity between a manipulator robot and a human. Both cameras are assumed to be located above the work area of an industrial robot. The fusion of colour images and time-of-flight information makes it possible to know the 3D localization of objects with respect to a world coordinate system, while also providing their colour information. Considering that the ToF information given by the range camera contains inaccuracies, including distance error, border error, and pixel saturation, some corrections of the ToF information are proposed and developed to improve the results. The proposed fusion method uses the calibration parameters of both cameras to reproject the 3D ToF points, expressed in a coordinate system common to both cameras and a robot arm, into the 2D colour images. In addition, using the 3D information, motion detection in an industrial robot environment is achieved, and the fusion of information is applied to the previously detected foreground objects. This combination of information results in a matrix that links colour and 3D information, making it possible to characterise an object by its colour in addition to its 3D localization. Further development of these methods will make it possible to identify objects and their position in the real world, and to use this information to prevent possible collisions between the robot and such objects.
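The reprojection step described above amounts to a standard pinhole projection: each 3D ToF point, expressed in the common coordinate system, is mapped into the colour camera's pixel grid through that camera's calibration. The sketch below ignores lens distortion and uses hypothetical names.

```python
import numpy as np

def reproject(points_3d, K, R, t):
    """Project Nx3 world-frame points into pixel coordinates.

    K: 3x3 intrinsic matrix of the colour camera,
    R, t: rotation and translation from world to camera frame."""
    cam = R @ points_3d.T + t.reshape(3, 1)   # world -> camera frame
    uv = K @ cam                              # camera -> image plane
    return (uv[:2] / uv[2]).T                 # perspective divide -> Nx2
```

Each reprojected point can then be paired with the colour at its pixel, producing the matrix that links colour and 3D information.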
Abstract:
Nowadays, three-dimensional imaging techniques are common in several fields, especially in biomedical imaging, where we can find a wide range of techniques including laser scanning confocal microscopy, laser scanning two-photon microscopy, light-sheet fluorescence microscopy, magnetic resonance imaging, positron emission tomography, optical coherence tomography, and 3D ultrasound imaging. A common denominator of all these applications is the constant need to further increase the resolution and quality of the acquired images. Interestingly, in some of these three-dimensional imaging techniques a remarkable situation arises: while a single volume does not contain enough information to represent the object being imaged within the quality parameters required by the final application, the acquisition scheme allows recording several volumes representing different views of a given object, with each view providing complementary information. In this kind of situation one can get a better understanding of the object by combining several views instead of looking at each of them separately. Within this context, in this PhD thesis we propose, develop, and test new image processing methodologies based on the discrete wavelet transform for the combination, or fusion, of several views containing complementary information about a given object. The proposed fusion method exploits the scale and orientation decomposition capabilities of the discrete wavelet transform to integrate into a single volume all the available information distributed among the set of acquired views. The work focuses on two different biomedical imaging modalities which provide such multi-view datasets. The first is a particular fluorescence microscopy technique, light-sheet fluorescence microscopy, used for imaging and understanding the early development of live embryos from different animal models (such as zebrafish or sea urchin).
The second is delayed enhancement magnetic resonance imaging, a valuable tool for assessing the viability of myocardial tissue in patients suffering from different cardiomyopathies. As part of this work, the proposed method was implemented and then validated on both imaging modalities. For the fluorescence microscopy application, the fusion results show improved contrast and detail discrimination compared with any of the individual views, and the method does not rely on prior knowledge of the system's point spread function (PSF). Moreover, the results show improved performance with respect to previous PSF-independent methods. As for the application to delayed enhancement magnetic resonance imaging, the resulting fused volumes show a quantitative sharpness improvement and enable an easier and more complete interpretation of complex three-dimensional scar and heterogeneous tissue information in ischemic cardiomyopathy patients. In both applications, the results of this thesis are currently in use at the clinical and research centers with which the author collaborated during this work. An implementation of the fusion method has also been made freely available to the scientific community. Finally, an international patent application has been filed covering the visualization method developed for the magnetic resonance imaging application.
Abstract:
Multi-view microscopy techniques such as light-sheet fluorescence microscopy (LSFM) are powerful tools for 3D + time studies of live embryos in developmental biology. The sample is imaged from several points of view, acquiring a set of 3D views that are then combined, or fused, to overcome their individual limitations. View fusion remains an open problem despite recent contributions in the field. We developed a wavelet-based multi-view fusion method that, owing to the properties of the wavelet decomposition, is able to combine the complementary directional information from all available views into a single volume. The method is demonstrated on LSFM acquisitions of live sea urchin and zebrafish embryos. The fusion results show improved overall contrast and detail compared with any of the acquired volumes. The proposed method does not need knowledge of the system's point spread function (PSF) and performs better than other existing PSF-independent fusion methods.
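A single-level version of such a wavelet fusion can be sketched with the Haar transform: the approximation bands of the registered views are averaged, and for each detail band the largest-magnitude coefficient across views is kept. The max-modulus selection rule is a common choice and an assumption here, not necessarily the published method's exact rule.

```python
import numpy as np

def haar2d(x):
    """One level of the 2D Haar transform: approximation + 3 detail bands."""
    p, q = x[0::2, 0::2], x[0::2, 1::2]
    r, s = x[1::2, 0::2], x[1::2, 1::2]
    return ((p + q + r + s) / 4, (p - q + r - s) / 4,
            (p + q - r - s) / 4, (p - q - r + s) / 4)

def ihaar2d(a, h, v, d):
    """Invert haar2d exactly."""
    x = np.empty((a.shape[0] * 2, a.shape[1] * 2))
    x[0::2, 0::2] = a + h + v + d
    x[0::2, 1::2] = a - h + v - d
    x[1::2, 0::2] = a + h - v - d
    x[1::2, 1::2] = a - h - v + d
    return x

def fuse_views(views):
    """Average approximation bands; keep the max-modulus detail
    coefficient across the registered views."""
    coeffs = [haar2d(v) for v in views]
    a = np.mean([c[0] for c in coeffs], axis=0)
    fused = []
    for band in (1, 2, 3):
        stack = np.stack([c[band] for c in coeffs])
        idx = np.abs(stack).argmax(axis=0)
        fused.append(np.take_along_axis(stack, idx[None], 0)[0])
    return ihaar2d(a, *fused)
```

Extending this to 3D volumes and several decomposition levels follows the same band-by-band pattern.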
Abstract:
Over the past few decades, work on infrared sensor applications has progressed considerably worldwide. One difficulty remains, however: objects are often not clear enough, or cannot always be easily distinguished, in the image obtained of the observed scene. Infrared image enhancement has played an important role in the development of infrared computer vision, image processing, non-destructive testing, and related technologies. This thesis addresses infrared image enhancement techniques in two respects: the processing of a single infrared image in the hybrid space-frequency domain, and the fusion of infrared and visible images using the nonsubsampled contourlet transform (NSCT). Image fusion can be regarded as a continuation of single infrared image enhancement, in that it combines infrared and visible images into a single image so as to represent and enhance all the useful information and characteristics of the source images, since a single image cannot contain all the relevant or available information because of the restrictions inherent in any single imaging sensor. We review the development of infrared image enhancement techniques, then focus on single infrared image enhancement and propose a hybrid-domain enhancement scheme with an improved fuzzy threshold evaluation method, which yields higher image quality and improves human visual perception. The infrared and visible image fusion techniques are built on an accurate registration of the source images acquired by the different sensors.
The SURF-RANSAC algorithm is applied for registration throughout this research, leading to very accurately registered images and increased benefits for the fusion processing. For the fusion of infrared and visible images, a series of advanced and efficient approaches is proposed. A standard multi-channel NSCT-based fusion method is presented as a reference for the fusion approaches proposed next. A joint fusion approach involving the adaptive-Gaussian NSCT and the wavelet transform (WT) is proposed, leading to fusion results better than those obtained with general non-adaptive methods. An NSCT-based fusion approach is proposed that employs compressed sensing (CS) and total variation (TV) to sample coefficients sparsely and reconstruct the fused coefficients accurately; it obtains much better fusion results through pre-enhancement of the infrared image and reduction of the redundant information in the fusion coefficients. Finally, an NSCT-based fusion procedure using a fast iterative-shrinking compressed sensing (FISCS) technique is proposed to compress the decomposed coefficients and reconstruct the fused coefficients during the fusion process, leading to better results obtained faster and more efficiently.
Abstract:
Land use/cover classification is one of the most important applications in remote sensing. However, mapping accurate land use/cover spatial distributions is a challenge, particularly in moist tropical regions, due to the complex biophysical environment and the limitations of remote sensing data per se. This paper reviews a decade of experiments related to land use/cover classification in the Brazilian Amazon. Comprehensive analysis of the classification results leads to the conclusion that spatial information inherent in remote sensing data plays an essential role in improving land use/cover classification. Incorporating suitable textural images into the multispectral bands and using segmentation-based methods are valuable ways to improve land use/cover classification, especially for high-spatial-resolution images. Fusing multi-resolution images within optical sensor data is vital for visual interpretation, but may not improve classification performance. In contrast, integrating optical and radar data did improve classification performance when the proper data fusion method was used. Among the available classification algorithms, the maximum likelihood classifier is still an important method providing reasonably good accuracy, but nonparametric algorithms, such as classification tree analysis, have the potential to provide better results, although they often require more time for parameter optimization. Proper use of hierarchical methods is fundamental for developing accurate land use/cover classification, mainly from historical remotely sensed data.
Abstract:
Cognitive radio is a wireless technology aimed at improving the efficiency of radio-spectrum use, thus helping to reduce the load on the licence-free frequency bands. Cognitive radio networks can scan the spectrum and adapt their parameters to operate in the unoccupied bands. To avoid interfering with licensed users operating on a given channel, the networks need to be highly sensitive, which is achieved by using cooperative sensing methods. Current cooperative sensing methods are not robust enough against occasional or continuous attacks. This article outlines a Group Fusion method that takes into account the behavior of users over the short and long term. When fusing the data, the method gives more weight to user groups that are more unanimous in their decisions. Simulations have been performed in a dynamic environment with interference. Results prove that when attackers are present (whether reiterative or sporadic), the proposed Group Fusion method has superior sensing capability to other methods.
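The weighting idea above can be sketched as follows: each group casts a majority decision, and its weight is its degree of unanimity, so coherent groups dominate the global fused decision. The thresholds and the exact weighting are simplified assumptions, not the article's full short/long-term behaviour model.

```python
def group_fusion(group_votes):
    """Fuse binary sensing decisions ('channel occupied' = 1) from user
    groups, weighting each group's majority decision by its unanimity.

    group_votes: list of per-group lists of 0/1 votes."""
    score, total = 0.0, 0.0
    for votes in group_votes:
        ones = sum(votes)
        decision = 1 if 2 * ones >= len(votes) else 0
        # unanimity ranges from 0.5 (split group) to 1.0 (full agreement)
        unanimity = max(ones, len(votes) - ones) / len(votes)
        score += unanimity * decision
        total += unanimity
    return 1 if score / total >= 0.5 else 0
```

Under this rule an isolated attacker inside an otherwise unanimous group lowers that group's weight rather than flipping the global decision.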
Abstract:
Cognitive radio is a wireless technology proposed to use the resources of the radio spectrum efficiently, thereby reducing the existing load on the licence-free frequency bands. Cognitive radio networks are able to scan the spectrum and adapt their parameters to operate in the unoccupied bands. To avoid interfering with licensed users operating on a given channel, the sensitivity of the networks has to be very high, which is achieved with cooperative sensing methods. Current cooperative sensing methods lack robustness against either occasional or continuous attacks. In this article we present a Group Fusion method that takes into account the short- and long-term behaviour of users. When fusing the data, the method gives greater weight to the groups of users showing greater unanimity in their decisions. Simulation results prove that, in the presence of attackers, the proposed Group Fusion method achieves better detection than other methods, meeting the minimum sensitivity requirements of cognitive radio networks even with 12% persistently malicious users or 10% sporadic attackers.
Abstract:
Virtual screening is a central technique in drug discovery today. Millions of molecules can be tested in silico with the aim of selecting only the most promising ones for experimental testing. The topic of this thesis is ligand-based virtual screening tools, which take existing active molecules as the starting point for finding new drug candidates. One goal of this thesis was to build a model that gives the probability that two molecules are biologically similar as a function of one or more chemical similarity scores. Another important goal was to evaluate how well different ligand-based virtual screening tools are able to distinguish active molecules from inactive ones. A further criterion for the virtual screening tools was their applicability to scaffold hopping, i.e. finding new active chemotypes. In the first part of the work, a link was defined between the abstract chemical similarity score given by a screening tool and the probability that two molecules are biologically similar. These results help to decide objectively which virtual screening hits to test experimentally. The work also resulted in a new type of data fusion method for using two or more tools. In the second part, five ligand-based virtual screening tools were evaluated, and their performance was found to be generally poor. Three reasons were proposed for this: false negatives in the benchmark sets, active molecules that do not share the same binding mode, and activity cliffs. In the third part of the study, a novel visualization and quantification method is presented for evaluating the scaffold-hopping ability of virtual screening tools.
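The first goal above, mapping a similarity score to a probability of biological similarity, can be sketched with a logistic curve; a naive independence assumption then gives one way to fuse the scores of several tools. The parameters and the fusion rule below are illustrative assumptions, not the thesis's fitted model.

```python
import math

def p_similar(score, k=10.0, s0=0.5):
    """Probability that two molecules are biologically similar, as a
    logistic function of a chemical similarity score in [0, 1];
    k and s0 would be fitted to benchmark data."""
    return 1.0 / (1.0 + math.exp(-k * (score - s0)))

def fuse_probabilities(scores):
    """Combine per-tool similarity scores by multiplying odds ratios
    (naive-Bayes-style data fusion across tools)."""
    odds = 1.0
    for s in scores:
        p = p_similar(s)
        odds *= p / (1.0 - p)
    return odds / (1.0 + odds)
```

Two tools that each weakly favour similarity reinforce each other under this rule, which is one rationale for data fusion in virtual screening.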
Abstract:
The successful growth of Mn-doped PbS (Pb(1-x)Mn(x)S) nanocrystals (NCs) in a SiO(2)-Na(2)CO(3)-Al(2)O(3)-PbO(2)-B(2)O(3) template, using the fusion method, is reported in this study. The as-grown Pb(1-x)Mn(x)S NCs are characterized using optical absorption, electron paramagnetic resonance, and atomic force microscopy. The data are discussed in terms of two distinct scenarios, namely a core-doped and a shell-doped nanostructure.