252 resultados para histogram


Relevância:

10.00%

Publicador:

Resumo:

Positron emission tomography (PET) is a molecular imaging modality that uses radiotracers labelled with positron-emitting isotopes to quantify and probe biological and physiological processes. This modality is currently used mainly in oncology, but it is increasingly used in cardiology, neurology and pharmacology as well. It is a modality intrinsically capable of providing, with superior sensitivity, functional information on cellular metabolism. Its main limitations are low spatial resolution and a lack of quantification accuracy. To overcome these limitations, which are an obstacle to widening the range of clinical applications of PET, the newer acquisition systems are equipped with a large number of small detectors with better detection performance. Image reconstruction uses iterative stochastic algorithms, which are better suited to low-statistics acquisitions. As a result, reconstruction times have become too long for clinical use. To reduce them, the acquisition data are compressed and accelerated, but generally less accurate, versions of the iterative stochastic algorithms are used. The performance gains from the increased number of detectors are therefore limited by computing-time constraints. To break out of this loop and enable the use of robust reconstruction algorithms, much work has been devoted to accelerating these algorithms on high-performance GPU (Graphics Processing Unit) computing devices.
In this work we joined this effort of the scientific community to develop, and introduce into clinical practice, powerful reconstruction algorithms that improve spatial resolution and quantification accuracy in PET. We first worked on strategies to accelerate, on GPUs, the reconstruction of PET images from list-mode acquisition data. The list mode offers many advantages over sinogram-based reconstruction: among others, it allows motion correction and time-of-flight (TOF) information to be incorporated easily and precisely in order to improve quantification accuracy, and it allows spatio-temporal basis functions to be used for 4D reconstruction in order to estimate the kinetic parameters of metabolism accurately. However, the use of this mode is very limited in the clinic, and it is mostly used to estimate the standardized uptake value (SUV), a semi-quantitative measure that limits the functional character of PET. Our contributions are the following: - The development of a new strategy to accelerate on GPUs the 3D LM-OSEM (List-Mode Ordered-Subset Expectation-Maximization) algorithm, including computation of the sensitivity matrix incorporating the patient attenuation factors and the detector normalization coefficients. The computation time obtained is not only compatible with clinical use of 3D LM-OSEM, but also makes it possible to envisage fast reconstructions for advanced PET applications such as real-time dynamic studies and direct reconstruction of parametric images from the acquisition data.
- The development and GPU implementation of a multigrid/multiframe approach to accelerate the LMEM (List-Mode Expectation-Maximization) algorithm. The objective was to devise a new strategy to accelerate the reference algorithm LMEM, a convergent and powerful algorithm whose drawback is very slow convergence. The results obtained point towards quasi-real-time reconstruction, both for examinations using large amounts of acquisition data and for gated dynamic acquisitions. In clinical practice, moreover, quantification is often performed from acquisition data stored as sinograms, which are generally compressed; but earlier work has shown that this approach to accelerating reconstruction reduces quantification accuracy and degrades spatial resolution. For this reason we parallelized and implemented on GPU the AW-LOR-OSEM (Attenuation-Weighted Line-of-Response OSEM) algorithm, a version of 3D OSEM that reconstructs from uncompressed sinograms and incorporates the attenuation and normalization corrections into the sensitivity matrices. We compared two implementations: in the first, the system matrix (SM) is computed on the fly during reconstruction, while the second uses a pre-computed SM with better accuracy. The results show that the first implementation is about twice as computationally efficient as the second. The reconstruction times reported are compatible with clinical use of both strategies.
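The multiplicative list-mode EM update underlying LMEM and LM-OSEM can be sketched in a few lines. This is a toy CPU illustration of the general update only; the system matrix `A`, sensitivity image `sens` and event list are made-up small examples, not the thesis's GPU implementation:

```python
import numpy as np

def lmem_update(x, events, A, sens):
    """One list-mode EM iteration (schematic).

    x      : current image estimate, shape (n_voxels,)
    events : row indices of A, one per detected coincidence event
    A      : system matrix, A[i, j] = prob. that an emission in voxel j
             is detected on line of response (LOR) i
    sens   : sensitivity image s_j = sum_i A[i, j] over all LORs
    """
    back = np.zeros_like(x)
    for i in events:
        proj = A[i] @ x              # forward-project along this event's LOR
        if proj > 0:
            back += A[i] / proj      # backproject the ratio 1 / proj
    return x * back / sens           # multiplicative EM update

# toy example: 2 voxels, 3 possible LORs, 3 recorded events
A = np.array([[0.6, 0.4],
              [0.5, 0.5],
              [0.2, 0.8]])
sens = A.sum(axis=0)
x = np.ones(2)
for _ in range(50):
    x = lmem_update(x, [0, 0, 2], A, sens)
```

A useful sanity check is that each update conserves the total expected count: the sum of `x * sens` equals the number of list-mode events.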


Timely detection of sudden changes in dynamics that adversely affect the performance of systems and the quality of products has great scientific relevance. This work focuses on effective detection of dynamical changes in real-time signals from mechanical as well as biological systems using the fast and robust technique of permutation entropy (PE). The results are used in detecting chatter onset in machine turning and in identifying vocal disorders from speech signals. Permutation entropy is a nonlinear complexity measure which can efficiently distinguish the regular and complex nature of any signal and extract information about a change in the dynamics of the process by indicating a sudden change in its value. Here we propose the use of permutation entropy to detect the dynamical changes in two nonlinear processes: turning, a mechanical system, and speech, a biological system. The effectiveness of PE in detecting the change in dynamics in the turning process is studied from time series generated from samples of audio and current signals. Experiments are carried out on a lathe for a sudden increase in depth of cut and for a continuous increase in depth of cut on mild steel work pieces, keeping the speed and feed rate constant. The results are applied to detect chatter onset in machining and are verified using frequency spectra of the signals and the nonlinear measure normalized coarse-grained information rate (NCIR). PE analysis is also carried out to investigate the variation in surface texture caused by chatter on the machined work piece. A statistical parameter from the optical grey-level intensity histogram of the laser speckle pattern, recorded using a charge-coupled device (CCD) camera, is used to generate the time series required for PE analysis. A standard optical roughness parameter is used to confirm the results. The application of PE in identifying vocal disorders is studied from speech signals recorded using a microphone.
Here the analysis is carried out using speech signals of subjects with different pathological conditions and of normal subjects, and the results are used for identifying vocal disorders. The standard linear technique of FFT is used to substantiate the results. The results of PE analysis in all three cases clearly indicate that this complexity measure is sensitive to changes in the regularity of a signal and hence can suitably be used for the detection of dynamical changes in real-world systems. This work establishes the application of the simple, inexpensive and fast algorithm of PE for the benefit of advanced manufacturing processes as well as clinical diagnosis of vocal disorders.
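The permutation entropy at the heart of this work can be computed in a few lines. This is a minimal sketch of the standard Bandt-Pompe definition, with illustrative `order` and `delay` parameters; it is not the authors' exact implementation:

```python
import math

def permutation_entropy(series, order=3, delay=1):
    """Normalized permutation entropy (Bandt & Pompe) of a 1-D series.

    Counts the relative frequency of ordinal patterns of length `order`
    and returns the Shannon entropy of that distribution, normalized to
    [0, 1] by log(order!).
    """
    counts = {}
    n = len(series) - (order - 1) * delay
    for i in range(n):
        window = tuple(series[i + j * delay] for j in range(order))
        # ordinal pattern = permutation that sorts the window
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(order))

# a strictly increasing signal has a single ordinal pattern -> PE = 0
print(permutation_entropy(list(range(100))))   # 0.0
```

A regular signal yields a PE near 0 and an irregular one a PE near 1, which is what makes a sudden jump in PE usable as a change-of-dynamics indicator.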


This paper proposes a content based image retrieval (CBIR) system using the local colour and texture features of selected image sub-blocks and the global colour and shape features of the image. The image sub-blocks are roughly identified by segmenting the image into partitions of different configurations, finding the edge density in each partition using edge thresholding and morphological dilation, and finding the corner density in each partition. The colour and texture features of the identified regions are computed from the histograms of the quantized HSV colour space and the Gray Level Co-occurrence Matrix (GLCM) respectively. A combined colour and texture feature vector is computed for each region. The shape features are computed from the Edge Histogram Descriptor (EHD). The Euclidean distance measure is used for computing the distance between the features of the query and target images. Experimental results show that the proposed method provides better retrieval results than some of the existing methods.
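The GLCM texture descriptors mentioned above can be sketched as follows. This toy version quantizes a grayscale sub-block, builds the horizontal-neighbour co-occurrence matrix, and computes two classic Haralick features; the paper's exact feature set, offsets and quantization are not specified here:

```python
import numpy as np

def glcm_features(gray, levels=8):
    """GLCM contrast and energy for one sub-block.

    Quantizes `gray` (2-D array, values in [0, 255]) to `levels` bins,
    builds the horizontal-neighbour GLCM, normalizes it to a joint pmf,
    and returns the Haralick contrast and energy descriptors.
    """
    q = gray.astype(int) * levels // 256
    glcm = np.zeros((levels, levels))
    for r in range(q.shape[0]):
        for c in range(q.shape[1] - 1):
            glcm[q[r, c], q[r, c + 1]] += 1   # count horizontal pairs
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    contrast = float(((i - j) ** 2 * p).sum())
    energy = float((p ** 2).sum())
    return contrast, energy

# a perfectly uniform block has zero contrast and maximal energy
print(glcm_features(np.full((16, 16), 100)))   # (0.0, 1.0)
```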


This paper proposes a region based image retrieval system using the local colour and texture features of image sub-regions. The regions of interest (ROI) are roughly identified by segmenting the image into fixed partitions, finding the edge map and applying morphological dilation. The colour and texture features of the ROIs are computed from the histograms of the quantized HSV colour space and the Gray Level Co-occurrence Matrix (GLCM) respectively. Each ROI of the query image is compared with the same number of ROIs of the target image, arranged in descending order of white pixel density in the regions, using the Euclidean distance measure for similarity computation. Preliminary experimental results show that the proposed method provides better retrieval results than some of the existing methods.
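The matching step - ordering ROIs by white-pixel density and comparing corresponding ROIs with the Euclidean distance - can be sketched as follows. The `(density, feature_vector)` pair representation is an assumption made for illustration:

```python
import math

def region_distance(query_rois, target_rois):
    """Distance between two images, each given as a list of
    (white_pixel_density, feature_vector) ROI pairs.

    Both lists are sorted by descending white-pixel density, and the
    Euclidean distances between corresponding feature vectors are
    summed to give the overall dissimilarity.
    """
    q = sorted(query_rois, key=lambda r: -r[0])
    t = sorted(target_rois, key=lambda r: -r[0])
    return sum(math.dist(fq, ft) for (_, fq), (_, ft) in zip(q, t))

# identical ROI features in a different order still match up after sorting
query  = [(0.9, [0.2, 0.5]), (0.4, [0.7, 0.1])]
target = [(0.3, [0.7, 0.1]), (0.8, [0.2, 0.5])]
print(region_distance(query, target))   # 0.0
```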


This paper proposes a content based image retrieval (CBIR) system using the local colour and texture features of selected image sub-blocks and the global colour and shape features of the image. The image sub-blocks are roughly identified by segmenting the image into partitions of different configurations and finding the edge density in each partition using edge thresholding and morphological dilation. The colour and texture features of the identified regions are computed from the histograms of the quantized HSV colour space and the Gray Level Co-occurrence Matrix (GLCM) respectively. A combined colour and texture feature vector is computed for each region. The shape features are computed from the Edge Histogram Descriptor (EHD). A modified Integrated Region Matching (IRM) algorithm is used for finding the minimum distance between the sub-blocks of the query and target images. Experimental results show that the proposed method provides better retrieval results than some of the existing methods.


This thesis is divided into 9 chapters and deals with the modification of TiO2 for various applications including photocatalysis, thermal reactions, photovoltaics and nonlinear optics. Chapter 1 gives a brief introduction to the topic of study: the applications of modified titania systems in various fields are discussed concisely, and the scope and objectives of the present work are also set out. Chapter 2 explains the strategy adopted for the synthesis of the metal, non-metal co-doped TiO2 systems. A hydrothermal technique was employed for the preparation of the co-doped TiO2 systems, with Ti[OCH(CH3)2]4, urea and metal nitrates used as the sources of TiO2, N and the metals respectively. In all the co-doped systems, urea and Ti[OCH(CH3)2]4 were taken in a 1:1 molar ratio and the metal concentration was varied. Five different co-doped catalytic systems were prepared, and for each catalyst three versions were made by varying the metal concentration. A brief explanation of the physico-chemical techniques used for the characterization of the materials is also presented in this chapter. These include X-ray Diffraction (XRD), Raman Spectroscopy, FTIR analysis, Thermo Gravimetric Analysis, Energy Dispersive X-ray Analysis (EDX), Scanning Electron Microscopy (SEM), UV-Visible Diffuse Reflectance Spectroscopy (UV-Vis DRS), Transmission Electron Microscopy (TEM), BET Surface Area Measurements and X-ray Photoelectron Spectroscopy (XPS). Chapter 3 contains the results and discussion of the characterization techniques used for analyzing the prepared systems. Characterization is an inevitable part of materials research: determining the physico-chemical properties of the prepared materials with suitable characterization techniques is crucial for finding their exact fields of application.
It is clear from the XRD pattern that the photocatalytically active anatase phase dominates in the calcined samples, with peaks at 2θ values around 25.4°, 38°, 48.1°, 55.2° and 62.7° corresponding to the (101), (004), (200), (211) and (204) crystal planes (JCPDS 21-1272) respectively. In the case of the Pr-N-Ti sample, however, a new peak was observed at 2θ = 30.8°, corresponding to the (121) plane of the polymorph brookite. There are no visible peaks corresponding to the dopants, which may be due to their low concentration or may indicate better dispersion of the impurities in the TiO2. The crystallite size of the samples was calculated from the Scherrer equation by using the full width at half maximum (FWHM) of the (101) peak of the anatase phase. The crystallite size of all the co-doped TiO2 samples was found to be lower than that of bare TiO2, which indicates that doping metal ions of higher ionic radius into the TiO2 lattice causes some lattice distortion that suppresses the growth of the TiO2 nanoparticles. The structural identity of the prepared systems obtained from the XRD patterns is further confirmed by Raman spectral measurements; anatase has six Raman active modes. The band gap of the co-doped systems was calculated using the Kubelka-Munk equation and was found to be lower than that of pure TiO2. The stability of the prepared systems was assessed by thermo gravimetric analysis. FT-IR was performed to identify the functional groups as well as to study the surface changes that occurred during modification. EDX was used to determine the impurities present in the system: the EDX spectra of all the co-doped samples show signals directly related to the dopants, with O and Ti as the main components and low concentrations of the doped elements. The morphologies of the prepared systems were obtained from SEM and TEM analysis, and the average particle size was drawn from histogram data. The electronic structures of the samples were identified from XPS measurements.
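The Scherrer estimate described above is a one-line calculation. A sketch, assuming Cu Kα radiation (λ = 1.5406 Å) and shape factor K = 0.9; the 0.5° FWHM is an illustrative value, not one reported in the thesis:

```python
import math

def scherrer_size(two_theta_deg, fwhm_deg, wavelength=1.5406, k=0.9):
    """Crystallite size from the Scherrer equation
    D = K * lambda / (beta * cos(theta)),
    where beta is the peak FWHM in radians and theta is half the
    diffraction angle 2-theta. Result is in the wavelength's units
    (here angstrom, for Cu K-alpha)."""
    theta = math.radians(two_theta_deg / 2)
    beta = math.radians(fwhm_deg)
    return k * wavelength / (beta * math.cos(theta))

# anatase (101) peak at 2-theta = 25.4 deg with a 0.5 deg FWHM:
size_nm = scherrer_size(25.4, 0.5) / 10   # angstrom -> nm, ~16 nm
```

Note the inverse dependence on the FWHM: the broader the peak, the smaller the estimated crystallite.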
Chapter 4 describes the photocatalytic degradation of the herbicides Atrazine and Metolachlor using the metal, non-metal co-doped titania systems. The percentage of degradation was analyzed by HPLC. Parameters such as the effect of different catalysts, the effect of time, the effect of catalyst amount and reusability were discussed. Chapter 5 deals with the photo-oxidation of some anthracene derivatives by the co-doped catalytic systems. These anthracene derivatives come under the category of polycyclic aromatic hydrocarbons (PAHs). Due to the presence of stable benzene rings, most PAHs show strong resistance to biological degradation and to the common methods employed for their removal; according to the Environmental Protection Agency, most PAHs are highly toxic in nature. TiO2 photochemistry has been extensively investigated as a method for the catalytic conversion of such organic compounds, highlighting its potential in green chemistry. There are two routes for removing pollutants from the ecosystem: complete mineralization, or conversion of the toxic compounds into compounds less toxic than the starting material. In this chapter we concentrate on the second route. The catalysts used were Gd(1wt%)-N-Ti, Pd(1wt%)-N-Ti and Ag(1wt%)-N-Ti. We successfully converted all the PAHs to anthraquinone, a compound with diverse applications in industrial as well as medical fields. Substitution at the 10th position of the PAH by a phenyl ring reduces the feasibility of the photoreaction and produced 9-hydroxy-9-phenylanthrone (9H9PA) as an intermediate species. The products were separated and purified by column chromatography using a 70:30 hexane/DCM mixture as the mobile phase, and the resultant products were characterized thoroughly by 1H NMR, IR spectroscopy and GC-MS analysis.
Chapter 6 elucidates the heterogeneous Suzuki coupling reaction over Cu/Pd bimetallic catalysts supported on TiO2. Sol-gel synthesis followed by impregnation was adopted for the preparation of Cu/Pd-TiO2. The prepared system was characterized by XRD, TG-DTG, SEM, EDX, BET surface area and XPS. The product was separated and purified by column chromatography using hexane as the mobile phase. A maximum isolated yield of biphenyl of around 72% was obtained in DMF using Cu(2wt%)-Pd(4wt%)-Ti as the catalyst; the most effective solvent, base and catalyst were found to be DMF, K2CO3 and Cu(2wt%)-Pd(4wt%)-Ti respectively. Chapter 7 gives an idea of the photovoltaic (PV) applications of TiO2-based thin films. Due to the energy crisis, the whole world is looking for new sustainable energy sources, and harnessing solar energy is one of the most promising ways to tackle this issue. The presently dominant PV technologies are based on inorganic materials, but high material and manufacturing costs and low power conversion efficiency limit their popularization. Much research has been directed towards the development of low-cost PV technologies, of which organic photovoltaic (OPV) devices are among the most promising. Here two TiO2 thin films of different thickness were prepared by spin coating. The films were characterized by XRD, AFM and conductivity measurements, and their thickness was measured with a stylus profiler. This chapter mainly concentrates on the fabrication of an inverted heterojunction solar cell using the conducting polymer MEH-PPV as the photoactive layer, with TiO2 as the electron transport layer. Thin films of MEH-PPV were also prepared by spin coating. Two fullerene derivatives, PCBM and ICBA, were introduced into the device in order to improve the power conversion efficiency. Effective charge transfer between the conducting polymer and ICBA was confirmed by fluorescence quenching studies.
The fabricated inverted heterojunction exhibited a maximum power conversion efficiency of 0.22% with ICBA as the acceptor molecule. Chapter 8 narrates the third-order nonlinear optical properties of bare and noble-metal-modified TiO2 thin films. The films were fabricated by spray pyrolysis. Sol-gel derived Ti[OCH(CH3)2]4 in CH3CH2OH/CH3COOH was used as the precursor for TiO2; the precursors used for Au, Ag and Pd were aqueous solutions of HAuCl4, AgNO3 and Pd(NO3)2 respectively. The prepared films were characterized by XRD, SEM and EDX. The nonlinear optical properties of the prepared materials were investigated by the Z-scan technique using an Nd:YAG laser (532 nm, 7 ns, 10 Hz), and the nonlinear coefficients were obtained by fitting the experimental Z-scan plots with theoretical plots. Nonlinear absorption is a nonlinear change (increase or decrease) in absorption with increasing intensity, and is mainly divided into two types: saturable absorption (SA) and reverse saturable absorption (RSA). Depending on the pump intensity and on the absorption cross-section at the excitation wavelength, most molecules show nonlinear absorption. With increasing intensity, if the excited states show saturation owing to their long lifetimes, the transmission shows SA characteristics: absorption decreases as intensity increases. If, however, the excited state absorbs more strongly than the ground state, the transmission shows RSA characteristics. In our work most of the materials showed SA behavior and some exhibited RSA behavior. Both properties depend purely on the nature of the materials and the alignment of energy states within them, and both have immense applications in electronic devices. The important results obtained from the various studies are presented in Chapter 9.


Consumers are becoming more concerned about food quality, especially regarding how, when and where foods are produced (Haglund et al., 1999; Kahl et al., 2004; Alföldi et al., 2006). During recent years there has therefore been growing interest in methods for food quality assessment, especially in picture-development methods as a complement to the traditional chemical analysis of single compounds (Kahl et al., 2006). Biocrystallization, one of the picture-development methods, is based on the crystallographic phenomenon that when aqueous solutions of CuCl2 dihydrate crystallize with the addition of organic solutions, originating e.g. from crop samples, biocrystallograms with reproducible crystal patterns are generated (Kleber & Steinike-Hartung, 1959). Its output is a crystal pattern on glass plates from which different variables (numbers) can be calculated using image analysis. However, there is no standardized evaluation method to quantify the morphological features of the biocrystallogram image. The main aims of this research are therefore (1) to optimize an existing statistical model in order to describe all the effects that contribute to the experiment; (2) to investigate the effect of image parameters on the texture analysis of the biocrystallogram images, i.e. region of interest (ROI), color transformation and histogram matching, on samples from the project 020E170/F financed by the Federal Ministry of Food, Agriculture and Consumer Protection (BMELV) - the samples are wheat and carrots from controlled field and farm trials; and (3) to relate the strongest texture-parameter effect to the visual evaluation criteria developed by a group of researchers (University of Kassel, Germany; Louis Bolk Institute (LBI), Netherlands; and Biodynamic Research Association Denmark (BRAD), Denmark), in order to clarify the relation between the texture parameters and the visual characteristics of an image.
The refined statistical model was implemented as a linear mixed-effects (lme) model with repeated measurements via crossed effects, programmed in R (version 2.1.0). The validity of the F and P values was checked against the SAS program: the ANOVA yields the same F values, but the P values are larger in R because of its more conservative approach, and the refined model calculates more significant P values. The optimization of the image analysis deals with the following parameters: ROI (region of interest, the area around the geometrical center), color transformation (calculation of a one-dimensional gray-level value from the three-dimensional color information of the scanned picture, which is necessary for the texture analysis) and histogram matching (normalization of the histogram of the picture to enhance the contrast and to minimize errors from lighting conditions). The samples were wheat from the DOC trial with 4 field replicates for the years 2003 and 2005, "market samples" (organic and conventional neighbours of the same variety) for 2004 and 2005, carrots obtained from the University of Kassel (2 varieties, 2 nitrogen treatments) for the years 2004, 2005 and 2006, and "market samples" of carrots for the years 2004 and 2005. The criterion for the optimization was the repeatability of the differentiation of the samples over the different harvests (years). Different ROIs were found for different samples, reflecting the different pictures. The color transformation that differentiates most efficiently relies on the gray scale, i.e. equal color transformation; a second dimension of the color transformation appeared only in some years, as an effect of color wavelength (hue), for carrots treated with different nitrate fertilizer levels. The best histogram matching is to the Gaussian distribution. The approach was then to find a connection between the variables from the textural image analysis and the different visual criteria.
The relation between the texture parameters and the visual evaluation criteria was examined for the carrot samples in particular, as these could be well differentiated by the texture analysis. It was possible to connect groups of variables of the texture analysis with groups of criteria from the visual evaluation. These selected variables were able to differentiate the samples but not to classify them according to treatment. By contrast, the visual criteria, which describe the picture as a whole, allowed a correct classification in 80% of the sample cases. This clearly shows the limits of the single-variable approach of the image analysis (texture analysis).


A major obstacle to processing images of the ocean floor comes from the absorption and scattering effects of the light in the aquatic environment. Due to the absorption of the natural light, underwater vehicles often require artificial light sources attached to them to provide the adequate illumination. Unfortunately, these flashlights tend to illuminate the scene in a nonuniform fashion, and, as the vehicle moves, induce shadows in the scene. For this reason, the first step towards application of standard computer vision techniques to underwater imaging requires dealing first with these lighting problems. This paper analyses and compares existing methodologies to deal with low-contrast, nonuniform illumination in underwater image sequences. The reviewed techniques include: (i) study of the illumination-reflectance model, (ii) local histogram equalization, (iii) homomorphic filtering, and (iv) subtraction of the illumination field. Several experiments on real data have been conducted to compare the different approaches.
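Technique (iv), subtraction of the illumination field, can be sketched by estimating the slowly varying field with a simple box blur and removing it. The kernel size `k` and the edge padding are illustrative choices, not those of the reviewed papers:

```python
import numpy as np

def subtract_illumination(img, k=15):
    """Sketch of illumination-field subtraction.

    Approximates the slowly varying illumination field of the 2-D float
    array `img` with a k x k box blur (edge-padded), subtracts it, and
    re-centres the result on the image mean so overall brightness is kept.
    """
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    field = np.zeros_like(img, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            field[r, c] = padded[r:r + k, c:c + k].mean()
    return img - field + img.mean()
```

On a uniformly lit image the estimated field equals the image itself, so the correction leaves it unchanged; on a vignetted image the low-frequency falloff is flattened out.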


Glioblastoma multiforme (GBM) is the most frequent brain tumour, with a poor prognosis and low sensitivity to initial treatment. The purpose of this study was to evaluate whether diffusion-weighted MRI (DWI) is an early biomarker of tumour response, useful for making early treatment decisions and for obtaining prognostic information. Methodology: the search was conducted in the EMBASE, CENTRAL and MEDLINE databases; bibliographies were also reviewed. The selected articles were observational studies (case-control, cohort and cross-sectional); no clinical trial was found. All participants had a histopathological diagnosis of GBM, underwent surgical resection and/or radio-chemotherapy, and had their treatment response followed with DWI for at least 6 months. The data extracted independently were study type, participants, interventions, follow-up and outcomes (survival, disease progression/stabilization, death). Results: fifteen studies met the inclusion criteria. Among the DWI techniques used to evaluate radiological response to treatment were histograms of the apparent diffusion coefficient (ADC), comparing values below the mean and below the 10th percentile of ADC with higher values; in general terms, a low ADC was found to be a strong predictor of survival and/or tumour progression (significant in 5 studies). Functional diffusion maps (FDM), which measure the percentage change between baseline and post-treatment ADC, proved to be a strong predictor of survival in patients with tumour progression. Discussion: unfortunately, the quality of the studies was intermediate to low, which limits their applicability.


One of the key aspects in 3D-image registration is the computation of the joint intensity histogram. We propose a new approach to compute this histogram using uniformly distributed random lines to sample stochastically the overlapping volume between two 3D-images. The intensity values are captured from the lines at evenly spaced positions, taking an initial random offset different for each line. This method provides us with an accurate, robust and fast mutual information-based registration. The interpolation effects are drastically reduced, due to the stochastic nature of the line generation, and the alignment process is also accelerated. The results obtained show a better performance of the introduced method than the classic computation of the joint histogram.
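The stochastic sampling scheme described above can be sketched as follows. The line count, step length and bin count are illustrative, and nearest-neighbour lookup stands in for whatever interpolation a full registration pipeline would use:

```python
import numpy as np

rng = np.random.default_rng(0)

def joint_histogram_lines(vol_a, vol_b, n_lines=200, step=1.0, bins=8):
    """Joint intensity histogram of two aligned integer volumes
    (values in [0, 255]), estimated by stochastic line sampling.

    Each line gets a random entry point, a uniformly random direction,
    and a random initial offset; intensities are then read at evenly
    spaced positions along the line until it leaves the volume.
    """
    shape = np.array(vol_a.shape)
    hist = np.zeros((bins, bins))
    for _ in range(n_lines):
        p = rng.uniform(0, shape - 1)          # random entry point
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)                 # random unit direction
        p = p + rng.uniform(0, step) * d       # random initial offset
        while np.all(p >= 0) and np.all(p <= shape - 1):
            i, j, k = np.round(p).astype(int)  # nearest-neighbour sample
            hist[vol_a[i, j, k] * bins // 256,
                 vol_b[i, j, k] * bins // 256] += 1
            p = p + step * d
    return hist
```

From this joint histogram the mutual information between the two volumes can be estimated in the usual way, which is what drives the registration.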


In image processing, segmentation algorithms constitute one of the main focuses of research. In this paper, new image segmentation algorithms based on a hard version of the information bottleneck method are presented. The objective of this method is to extract a compact representation of a variable, considered the input, with minimal loss of mutual information with respect to another variable, considered the output. First, we introduce a split-and-merge algorithm based on the definition of an information channel between a set of regions (input) of the image and the intensity histogram bins (output). From this channel, the maximization of the mutual information gain is used to optimize the image partitioning. Then, the merging process of the regions obtained in the previous phase is carried out by minimizing the loss of mutual information. From the inversion of the above channel, we also present a new histogram clustering algorithm based on the minimization of the mutual information loss, where now the input variable represents the histogram bins and the output is given by the set of regions obtained from the above split-and-merge algorithm. Finally, we introduce two new clustering algorithms which show how the information bottleneck method can be applied to the registration channel obtained when two multimodal images are correctly aligned. Different experiments on 2-D and 3-D images show the behavior of the proposed algorithms.
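The central quantity in this framework, the mutual information of the region-to-bin channel, can be computed directly from a joint count table. A minimal sketch (the split-and-merge machinery itself is omitted):

```python
import math

def mutual_information(joint):
    """Mutual information I(R; B), in bits, of a channel given as a
    joint count table joint[r][b] (region r, intensity bin b)."""
    total = sum(sum(row) for row in joint)
    pr = [sum(row) / total for row in joint]                 # p(r)
    pb = [sum(joint[r][b] for r in range(len(joint))) / total
          for b in range(len(joint[0]))]                     # p(b)
    mi = 0.0
    for r, row in enumerate(joint):
        for b, n in enumerate(row):
            if n:
                p = n / total
                mi += p * math.log2(p / (pr[r] * pb[b]))
    return mi

# each region maps to its own bin -> the channel carries 1 full bit
print(mutual_information([[10, 0], [0, 10]]))   # 1.0
```

A split is worth making when it increases this quantity, and a merge is acceptable when it loses little of it, which is exactly the greedy criterion the paper's algorithms apply.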


In this paper, an information theoretic framework for image segmentation is presented. This approach is based on the information channel that goes from the image intensity histogram to the regions of the partitioned image. It allows us to define a new family of segmentation methods which maximize the mutual information of the channel. Firstly, a greedy top-down algorithm which partitions an image into homogeneous regions is introduced. Secondly, a histogram quantization algorithm which clusters color bins in a greedy bottom-up way is defined. Finally, the resulting regions in the partitioning algorithm can optionally be merged using the quantized histogram.
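One greedy top-down step - choosing the split that maximizes the mutual information between region labels and intensity bins - can be sketched in 1-D. Contiguous two-way splits of a pixel strip are an illustrative simplification of the paper's 2-D partitioning:

```python
import math

def best_split(pixels, bins=4):
    """Return the cut position of a 1-D strip of pixel values
    (integers in [0, 255]) into two contiguous regions that maximizes
    the mutual information between region label and intensity bin."""
    def mi(cut):
        joint = [[0] * bins for _ in range(2)]
        for i, v in enumerate(pixels):
            joint[0 if i < cut else 1][v * bins // 256] += 1
        total = len(pixels)
        pr = [sum(row) / total for row in joint]
        pb = [sum(joint[r][b] for r in range(2)) / total
              for b in range(bins)]
        return sum((n / total) * math.log2((n / total) / (pr[r] * pb[b]))
                   for r, row in enumerate(joint)
                   for b, n in enumerate(row) if n)
    return max(range(1, len(pixels)), key=mi)

# a strip that is dark then bright splits exactly at the boundary
print(best_split([10] * 8 + [200] * 8))   # 8
```

Applying this step recursively to each new region gives the greedy top-down partitioning; the bottom-up histogram quantization works analogously by merging the bins whose fusion loses the least mutual information.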


A technique is presented for locating and tracking objects in cluttered environments. Agents are randomly distributed across the image and subsequently grouped around targets. Each agent uses a weightless neural network and a histogram intersection technique to score its location. The system has been used to locate and track a head in 320x240 resolution video at up to 15 fps.
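The histogram intersection score used by each agent can be sketched as follows: Swain and Ballard's normalized sum of bin-wise minima. The bin counts in the example are illustrative:

```python
def histogram_intersection(h1, h2):
    """Histogram intersection: sum of bin-wise minima, normalized by
    the model histogram's total count. Returns 1.0 for identical
    histograms and 0.0 for disjoint ones."""
    return sum(min(a, b) for a, b in zip(h1, h2)) / sum(h2)

model = [4, 8, 2, 6]                               # target's colour model
print(histogram_intersection([4, 8, 2, 6], model))  # 1.0 (perfect match)
print(histogram_intersection([0, 0, 0, 0], model))  # 0.0 (no overlap)
```

An agent scores the local histogram at its position against the target model; agents with high scores survive and cluster around the tracked object.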


Methods for producing nonuniform transformations, or regradings, of discrete data are discussed. The transformations are useful in image processing, principally for enhancement and normalization of scenes. Regradings which “equidistribute” the histogram of the data, that is, which transform it into a constant function, are determined. Techniques for smoothing the regrading, dependent upon a continuously variable parameter, are presented. Generalized methods for constructing regradings such that the histogram of the data is transformed into any prescribed function are also discussed. Numerical algorithms for implementing the procedures and applications to specific examples are described.
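For discrete data, the equidistributing regrading is the classic histogram-equalization transform: each value is mapped through the scaled empirical CDF. A minimal sketch for integer-valued data; the paper's continuously variable smoothing parameter is omitted:

```python
def equidistribute(data, levels=256):
    """Regrading that 'equidistributes' the histogram of integer data
    in [0, levels): each value v is mapped to
    s(v) = round((levels - 1) * cdf(v) / n),
    the discrete histogram-equalization transform."""
    n = len(data)
    hist = [0] * levels
    for v in data:
        hist[v] += 1                 # empirical histogram
    cdf, run = [], 0
    for c in hist:
        run += c
        cdf.append(run)              # cumulative counts
    return [round((levels - 1) * cdf[v] / n) for v in data]

# two equally populated gray levels are spread across the range:
print(equidistribute([0] * 5 + [128] * 5))   # [128]*5 + [255]*5
```

The generalized regrading of the paper, which transforms the histogram into an arbitrary prescribed function, replaces the uniform target CDF implicit here with the prescribed one.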


Reliability analysis of probabilistic forecasts, in particular through the rank histogram or Talagrand diagram, is revisited. Two shortcomings are pointed out: firstly, a uniform rank histogram is but a necessary condition for reliability; secondly, if the forecast is assumed to be reliable, an indication is needed of how far a histogram is expected to deviate from uniformity merely due to randomness. Concerning the first shortcoming, it is suggested that forecasts be grouped or stratified along suitable criteria, and that reliability be analyzed individually for each forecast stratum. A reliable forecast should have uniform histograms for all individual forecast strata, not only for all forecasts as a whole. As to the second shortcoming, instead of the observed frequencies, the probability of the observed frequency is plotted, providing an indication of the likelihood of the result under the hypothesis that the forecast is reliable. Furthermore, a goodness-of-fit statistic is discussed which is essentially the reliability term of the Ignorance score. The discussed tools are applied to medium-range forecasts of 2 m temperature anomalies at several locations and lead times. The forecasts are stratified along the expected ranked probability score. Those forecasts which feature a high expected score turn out to be particularly unreliable.
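A rank histogram is built by ranking each verifying observation within its ensemble. A minimal sketch, counting only members strictly below the observation (one of several tie-handling conventions); the forecasts and observations shown are illustrative:

```python
def rank_histogram(ensemble_forecasts, observations):
    """Rank histogram (Talagrand diagram).

    For each forecast case, the rank is the number of the m ensemble
    members falling strictly below the verifying observation, so it
    takes values 0..m. For a reliable ensemble the returned counts
    should be statistically uniform.
    """
    m = len(ensemble_forecasts[0])
    counts = [0] * (m + 1)
    for members, obs in zip(ensemble_forecasts, observations):
        rank = sum(1 for x in members if x < obs)
        counts[rank] += 1
    return counts

forecasts = [[1.0, 2.0, 3.0], [0.5, 1.5, 2.5]]   # two 3-member cases
obs = [2.5, 0.0]
print(rank_histogram(forecasts, obs))   # [1, 0, 1, 0]
```

The paper's stratified analysis simply computes one such histogram per forecast stratum instead of a single histogram over all cases.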