901 results for Subfractals, Subfractal Coding, Model Analysis, Digital Imaging, Pattern Recognition


Relevance: 100.00%

Abstract:

Fetal MRI reconstruction aims at finding a high-resolution image given a small set of low-resolution images. It is usually modeled as an inverse problem where the regularization term plays a central role in the reconstruction quality. The literature has considered several regularization terms, such as Dirichlet/Laplacian energy, Total Variation (TV)-based energies and, more recently, non-local means. Although TV energies are quite attractive because of their ability to preserve edges, only standard explicit steepest gradient techniques have been applied to optimize fetal-based TV energies. The main contribution of this work lies in the introduction of a well-posed TV algorithm from the point of view of convex optimization. Specifically, our proposed TV optimization algorithm for fetal reconstruction is optimal w.r.t. the asymptotic and iterative convergence speeds, O(1/n²) and O(1/√ε), while existing techniques are in O(1/n) and O(1/ε). We apply our algorithm to (1) clinical newborn data, considered as ground truth, and (2) clinical fetal acquisitions. Our algorithm compares favorably with the literature in terms of speed and accuracy.
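As a sketch of the class of optimal first-order convex methods the abstract refers to, the following accelerated (FISTA-type) proximal-gradient loop minimizes a TV-regularized least-squares energy and attains the O(1/n²) rate. The Gaussian blur-and-decimate forward model, its adjoint, the step size and the use of scikit-image's TV proximal step are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch: accelerated proximal gradient (FISTA) for TV super-resolution.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import denoise_tv_chambolle

def A(x, factor=2, sigma=1.0):
    """Assumed forward model: Gaussian blur, then downsampling."""
    return gaussian_filter(x, sigma)[::factor, ::factor]

def At(y, factor=2, sigma=1.0):
    """Adjoint of A: zero-filled upsampling, then the same (symmetric) blur."""
    x = np.zeros((y.shape[0] * factor, y.shape[1] * factor))
    x[::factor, ::factor] = y
    return gaussian_filter(x, sigma)

def fista_tv_sr(y, lam=0.02, step=1.0, n_iter=100):
    """Minimize 0.5*||A x - y||^2 + lam*TV(x) with Nesterov acceleration."""
    x = At(y)                          # crude initial estimate
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = At(A(z) - y)            # gradient of the data-fidelity term
        # Proximal step for the TV term, approximated by a TV denoiser.
        x_new = denoise_tv_chambolle(z - step * grad, weight=lam * step)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum step
        x, t = x_new, t_new
    return x
```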

Relevance: 100.00%

Abstract:

The processing of human bodies is important in social life and for the recognition of another person's actions, moods, and intentions. Recent neuroimaging studies on mental imagery of human body parts suggest that the left hemisphere is dominant in body processing. However, studies on mental imagery of full human bodies reported stronger right hemisphere or bilateral activations. Here, we used functional magnetic resonance imaging to measure brain activity during mental imagery of bilateral partial (upper) and full bodies. Results show that, independently of whether a full or upper body is processed, a mainly right-hemisphere network (temporo-parietal cortex, anterior parietal cortex, premotor cortex, bilateral superior parietal cortex) is involved in mental imagery of full or partial human bodies. However, distinct activations were found in extrastriate cortex for partial bodies (right fusiform face area) and full bodies (left extrastriate body area). We propose that a common brain network, mainly on the right side, is involved in the mental imagery of human bodies, while two distinct brain areas in extrastriate cortex code for mental imagery of full and upper bodies.

Relevance: 100.00%

Abstract:

OBJECTIVE: Imaging during a period of minimal myocardial motion is of paramount importance for coronary MR angiography (MRA). The objective of our study was to evaluate the utility of FREEZE, a custom-built automated tool for the identification of the period of minimal myocardial motion, in both a moving phantom at 1.5 T and 10 healthy adults (nine men, one woman; mean age, 24.9 years; age range, 21-32 years) at 3 T. CONCLUSION: Quantitative analysis of the moving phantom showed that dimension measurements approached those obtained in the static phantom when using FREEZE. In vivo, vessel sharpness, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) were significantly improved when coronary MRA was performed during the software-prescribed period of minimal myocardial motion (p < 0.05). Consistent with these objective findings, image quality assessments by consensus review also improved significantly when using the automated prescription of the period of minimal myocardial motion. The use of FREEZE improves the image quality of coronary MRA. Simultaneously, operator dependence can be minimized while ease of use is improved.
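For reference, the objective metrics reported above can be computed from region-of-interest (ROI) statistics along the following lines; the ROI-based definitions are common conventions assumed here, not the study's exact measurement protocol.

```python
# Hedged sketch of ROI-based SNR and CNR, the two objective metrics reported.
import numpy as np

def snr_cnr(image, vessel_mask, background_mask, noise_mask):
    """SNR and CNR from ROI means and the noise standard deviation."""
    noise_sd = image[noise_mask].std()            # noise from a signal-free ROI
    s_vessel = image[vessel_mask].mean()          # mean vessel signal
    s_bg = image[background_mask].mean()          # mean background signal
    snr = s_vessel / noise_sd
    cnr = (s_vessel - s_bg) / noise_sd
    return snr, cnr
```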

Relevance: 100.00%

Abstract:

Leishmania parasites have been plaguing humankind for centuries as a range of skin diseases named the cutaneous leishmaniases (CL). Carried in a hematophagous sand fly, Leishmania usually infects the skin surrounding the bite site, causing a destructive immune response that may persist for months or even years. The various symptomatic outcomes of CL range from a benign self-healing reddened bump to extensive open ulcerations, resistant to treatment and resulting in life-changing disfiguration. Many of these more aggressive outcomes are geographically isolated within the habitats of certain Neotropical Leishmania species, where about 15% of cases experience metastatic complications. However, despite this correlation, genetic analysis has revealed no major differences between species causing the various disease forms. We have recently identified a cytoplasmic dsRNA virus within metastatic L. guyanensis parasites that acts as a potent innate immunogen capable of worsening lesional inflammation and prolonging parasite survival. The dsRNA genome of Leishmania RNA virus (LRV) binds and stimulates Toll-like receptor 3 (TLR3), inducing this destructive inflammation, which we speculate to be a factor contributing to the development of metastatic disease. This thesis establishes the first experimental model of LRV-mediated leishmanial metastasis and investigates the role of non-TLR3 viral recognition pathways in LRV-mediated pathology. Viral dsRNA can be detected by various non-TLR3 pattern recognition receptors (PRR); two such PRR groups are the RLRs (retinoic acid-inducible gene I-like receptors) and the NLRs (nucleotide-binding domain, leucine-rich repeat containing receptors). The RLRs are designed to detect viral dsRNA in the cytoplasm, while the NLRs react to molecular "danger" signals of cell damage, often oligomerizing into molecular scaffolds called "inflammasomes" that activate a potent inflammatory cascade. Interestingly, we found that neither RLR signalling nor the inflammasome pathway had an effect on LRV-mediated pathology. In contrast, we found a dramatic inflammasome-independent effect for the NLR family member NLRP10, where a knockout mouse model showed little evidence of disease. This phenotype was mimicked in a knockout of an NLR with which NLRP10 is known to interact: NLRC2. As this pathway induces the chronic inflammatory cell lineage TH17, we investigated the role of its key chronic inflammatory cytokine, IL-17A, in human patients infected by L. guyanensis. Indeed, patients infected with LRV+ parasites had a significantly increased level of IL-17A in lesional biopsies. Interestingly, LRV presence was also associated with a significant decrease in the correlate of protection, IFN-γ. This association was repeated in our murine model, where we were able to establish the first experimental model of LRV-dependent leishmanial metastasis, mediated by IL-17A in the absence of IFN-γ. Finally, we tested a new inhibitor of IL-17A secretion, SR1001, and revealed its potential as a prophylactic immunomodulator and potent parasitotoxic drug. Taken together, these findings provide a basis for anti-IL-17A as a feasible therapeutic intervention to prevent and treat the metastatic complications of cutaneous leishmaniasis.
-- Leishmania parasites have infected humans for centuries, causing skin conditions known as the cutaneous leishmaniases (CL). The parasite is transmitted by the sand fly and resides in the dermis at the bite site. In the skin, the parasite provokes a destructive immune response that can persist for months or even years. CL symptoms range from a simple swelling that heals spontaneously to extensive open ulcerations that resist treatment. The more aggressive manifestations are determined by the geographical habitats of certain Leishmania species; in these cases, about 15% of patients develop metastatic lesions. No "metastatic factor" has yet been found in these species. We recently identified a virus residing in certain metastatic parasites found in French Guiana (called Leishmania virus, or LV) that confers a survival advantage on its parasitic host. This virus strongly activates the inflammatory response, worsening inflammation and prolonging the parasitic infection. With a view to diagnosing, preventing and treating these lesions, we set out to identify the components of the antiviral signalling pathway responsible for the persistence of this inflammation. This study describes the first experimental model of LV-induced leishmanial metastasis and identifies several components of the antiviral inflammatory pathway that facilitate the metastatic pathology. Unlike humans, laboratory mice infected with metastatic Leishmania (carrying LV, LV+) do not develop metastatic lesions and heal after a few weeks of infection. After analysing a group of leishmaniasis patients in French Guiana, we found that individuals infected with the metastatic LV+ parasites had significantly lower levels of an important protective immune component, interferon (IFN)-γ. Using genetically modified mice unable to produce IFN-γ, we observed such metastases: after footpad inoculation of IFN-γ-deficient mice with LV+ or LV- parasites, we showed that only the mice infected with LV-bearing Leishmania develop multiple secondary lesions on the tail. As observed in humans, these mice secrete significantly elevated amounts of a destructive inflammatory component, interleukin (IL)-17. IL-17 has been implicated in many chronic inflammatory diseases, and we found a similarly destructive role for it in metastatic leishmaniasis. We confirmed this role by abrogating IL-17 in IFN-γ-deficient mice, which slowed the appearance of metastases. We can therefore conclude that leishmanial metastases are driven by IL-17 in the absence of IFN-γ. Analysing the LV-induced antiviral signalling pathways in more detail, we were able to exclude other activation routes of the inflammatory response, demonstrating that LV signalling is independent of inflammasome-type inflammatory signalling. In contrast, we were able to link several other molecules to it, such as NLRP10 and NLRC2, known for their synergy with inflammatory responses. This new pathway could be a target for drugs that inhibit inflammation. Indeed, a new drug that blocks IL-17 production in mice has shown promise in our model: it reduced lesion swelling as well as parasite burden, indicating that the antiviral/inflammatory pathway is a possible therapeutic approach to prevent and treat this neglected infection.

Relevance: 100.00%

Abstract:

This thesis is devoted to the analysis, modelling and visualisation of spatially referenced environmental data using machine learning algorithms. Machine learning can broadly be considered a subfield of artificial intelligence concerned in particular with the development of techniques and algorithms that allow a machine to learn from data. In this thesis, machine learning algorithms are adapted for application to environmental data and to spatial prediction. Why machine learning? Because the majority of machine learning algorithms are universal, adaptive, nonlinear, robust and efficient for modelling. They can solve classification, regression and probability density modelling problems in high-dimensional spaces composed of spatially referenced informative variables ("geo-features") in addition to the geographical coordinates. Moreover, they are ideal for implementation as decision-support tools for environmental questions ranging from pattern recognition through modelling and prediction to automatic mapping. Their efficiency is comparable to geostatistical models in the space of geographical coordinates, but they are indispensable for high-dimensional data that include geo-features. The most important and popular machine learning algorithms are presented theoretically and implemented as software for the environmental sciences. The main algorithms described are the multilayer perceptron (MLP), the best-known algorithm in artificial intelligence; general regression neural networks (GRNN); probabilistic neural networks (PNN); self-organising maps (SOM); Gaussian mixture models (GMM); radial basis function networks (RBF); and mixture density networks (MDN). This range of algorithms covers varied tasks such as classification, regression and probability density estimation. Exploratory data analysis (EDA) is the first step of any data analysis. In this thesis the concepts of exploratory spatial data analysis (ESDA) are treated both in the traditional geostatistical approach, with experimental variography, and according to the principles of machine learning. Experimental variography, which studies the relations between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations that detects the presence of spatial patterns describable by a statistic. The machine learning approach to ESDA is presented through the application of the k-nearest-neighbours method, which is very simple and has excellent interpretation and visualisation properties. An important part of the thesis deals with topical subjects such as the automatic mapping of spatial data. The general regression neural network is proposed to solve this task efficiently. The performance of the GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, for which the GRNN significantly beats all other methods, particularly in emergency situations. The thesis is composed of four chapters: theory, applications, software tools and guided examples. An important part of the work consists of a collection of software tools, Machine Learning Office. This software collection has been developed over the last 15 years and has been used for teaching numerous courses, including international workshops in China, France, Italy, Ireland and Switzerland, as well as in fundamental and applied research projects. The case studies considered cover a wide spectrum of real low- and high-dimensional geo-environmental problems, such as air, soil and water pollution by radioactive products and heavy metals, the classification of soil types and hydrogeological units, uncertainty mapping for decision support, and the assessment of natural hazards (landslides, avalanches). Complementary tools for exploratory data analysis and visualisation were also developed, with care taken to create a user-friendly, easy-to-use interface.

Machine Learning for geospatial data: algorithms, software tools and case studies. Abstract: The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions for classification, regression and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to be implemented as predictive engines in decision support systems, for the purposes of environmental data mining including pattern recognition, modeling and predictions as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to the software implementation. The main algorithms and models considered are the following: the multilayer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression, and density estimation. Exploratory data analysis (EDA) is the initial and a very important part of data analysis. In this thesis the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, based on experimental variography, and machine learning. Experimental variography is a basic tool for geostatistical analysis of anisotropic spatial correlations which helps to understand the presence of spatial patterns, at least as described by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a topical subject: the automatic mapping of geospatial data. General regression neural networks (GRNN) are proposed as an efficient model to solve this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters with the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. These tools were developed over the last 15 years and have been used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals; the classification of soil types and hydrogeological units; decision-oriented mapping with uncertainties; and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools useful for exploratory data analysis and visualisation were developed as well. The software is user-friendly and easy to use.
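Since the GRNN is central to the automatic-mapping results above, here is a minimal sketch of it as a Nadaraya-Watson kernel regression over spatial coordinates; the Gaussian kernel, the single bandwidth sigma and the toy data are assumptions for illustration, not the thesis's Machine Learning Office implementation.

```python
# Minimal GRNN sketch: a Gaussian-kernel weighted mean of the training targets.
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=1.0):
    """Predict at X_query as a kernel-weighted average of y_train."""
    # Squared Euclidean distances between every query and training point.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian kernel weights
    return (w @ y_train) / (w.sum(axis=1) + 1e-12)

# Toy usage: interpolate a noisy field sampled at random 2-D locations.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
grid = np.stack(np.meshgrid(np.linspace(0, 10, 50),
                            np.linspace(0, 10, 50)), axis=-1).reshape(-1, 2)
z = grnn_predict(X, y, grid, sigma=0.5)
```

The only free parameter is the kernel width sigma, which in practice would be tuned by cross-validation; this simplicity is what makes the model attractive for automatic mapping.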

Relevance: 100.00%

Abstract:

This paper presents a validation study on statistical nonsupervised brain tissue classification techniques in magnetic resonance (MR) images. Several image models assuming different hypotheses regarding the intensity distribution model, the spatial model and the number of classes are assessed. The methods are tested on simulated data for which the classification ground truth is known. Different noise and intensity nonuniformities are added to simulate real imaging conditions. No enhancement of the image quality is considered either before or during the classification process. This way, the accuracy of the methods and their robustness against image artifacts are tested. Classification is also performed on real data where a quantitative validation compares the methods' results with an estimated ground truth from manual segmentations by experts. Validity of the various classification methods in the labeling of the image as well as in the tissue volume is estimated with different local and global measures. Results demonstrate that methods relying on both intensity and spatial information are more robust to noise and field inhomogeneities. We also demonstrate that partial volume is not perfectly modeled, even though methods that account for mixture classes outperform methods that only consider pure Gaussian classes. Finally, we show that simulated data results can also be extended to real data.
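As a point of reference, the simplest model family assessed above (pure Gaussian intensity classes, no spatial prior) can be sketched as follows; the use of scikit-learn's EM-based GaussianMixture and the three-class setting are illustrative assumptions, not the paper's evaluated implementations.

```python
# Sketch of intensity-only tissue classification with a pure Gaussian mixture.
import numpy as np
from sklearn.mixture import GaussianMixture

def classify_tissues(volume, n_classes=3, mask=None):
    """Label each voxel (e.g. CSF/GM/WM for n_classes=3) from intensity alone."""
    mask = np.ones(volume.shape, bool) if mask is None else mask
    intensities = volume[mask].reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_classes, random_state=0)
    labels = gmm.fit_predict(intensities)         # EM fit, then MAP labels
    out = np.full(volume.shape, -1, dtype=int)    # -1 marks voxels outside mask
    out[mask] = labels
    return out, gmm.means_.ravel()
```

Models with spatial regularization or explicit mixture (partial volume) classes extend this baseline, which is exactly where the paper reports the robustness gains.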

Relevance: 100.00%

Abstract:

The topic of this thesis is studying how lesions in the retina caused by diabetic retinopathy can be detected from color fundus images using machine vision methods. Methods for equalizing uneven illumination in fundus images, detecting regions of poor image quality due to inadequate illumination, and recognizing abnormal lesions were developed during the work. The developed methods exploit mainly color information and simple shape features to detect lesions. In addition, a graphical tool for collecting lesion data was developed. The tool was used by an ophthalmologist who marked lesions in the images to support method development and evaluation. The tool is a general-purpose one, and thus it can be reused in similar projects. The developed methods were tested with a separate test set of 128 color fundus images. From the test results it was calculated how accurately the methods classify abnormal funduses as abnormal (sensitivity) and healthy funduses as normal (specificity). The sensitivity values were 92% for hemorrhages, 73% for red small dots (microaneurysms and small hemorrhages), and 77% for exudates (hard and soft exudates). The specificity values were 75% for hemorrhages, 70% for red small dots, and 50% for exudates. Thus, the developed methods detected hemorrhages accurately, and microaneurysms and exudates moderately well.
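The reported figures follow the standard per-image definitions of sensitivity and specificity, which a sketch like the following would compute; the function and variable names are illustrative, not taken from the thesis.

```python
# Per-image sensitivity and specificity from binary labels.
def sensitivity_specificity(y_true, y_pred):
    """y_true/y_pred: 1 = abnormal fundus, 0 = normal fundus."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)   # (sensitivity, specificity)
```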

Relevance: 100.00%

Abstract:

Multiple sclerosis (MS), a variable and diffuse disease affecting white and gray matter, is known to cause functional connectivity anomalies in patients. However, related studies published to date are post hoc; our hypothesis was that such alterations could discriminate between patients and healthy controls in a predictive setting, laying the groundwork for imaging-based prognosis. Using functional magnetic resonance imaging resting-state data of 22 minimally disabled MS patients and 14 controls, we developed a predictive model of connectivity alterations in MS: a whole-brain connectivity matrix was built for each subject from the slow oscillations (<0.11 Hz) of region-averaged time series, and a pattern recognition technique was used to learn a discriminant function indicating which particular functional connections are most affected by disease. Classification performance using strict cross-validation yielded a sensitivity of 82% (above chance at p<0.005) and specificity of 86% (p<0.01) to distinguish between MS patients and controls. The most discriminative connectivity changes were found in subcortical and temporal regions, and contralateral connections were more discriminative than ipsilateral connections. The pattern of decreased discriminative connections can be summarized post hoc in an index that correlates positively (ρ=0.61) with white matter lesion load, possibly indicating functional reorganisation to cope with increasing lesion load. These results are consistent with a subtle but widespread impact of lesions in white matter and in gray matter structures serving as high-level integrative hubs. These findings suggest that predictive models of resting-state fMRI can reveal specific anomalies due to MS with high sensitivity and specificity, potentially leading to new non-invasive markers.
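A hedged sketch of the pipeline described above: low-pass the region-averaged time series to keep the slow oscillations, correlate regions pairwise, and cross-validate a linear classifier on the unique connections. The Butterworth filter design, the assumed sampling rate and the linear SVM are our assumptions, not necessarily the study's exact choices.

```python
# Sketch: connectivity features from region time series, plus cross-validation.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def connectivity_features(ts, fs=0.5, f_cut=0.11):
    """ts: (n_timepoints, n_regions) region-averaged BOLD signals."""
    b, a = butter(4, f_cut / (fs / 2.0), btype="low")   # keep <0.11 Hz
    slow = filtfilt(b, a, ts, axis=0)
    corr = np.corrcoef(slow.T)                          # region-by-region matrix
    iu = np.triu_indices_from(corr, k=1)
    return corr[iu]                                     # unique connections only

# X: stacked feature vectors, one row per subject; y: 1 = MS patient, 0 = control.
# scores = cross_val_score(LinearSVC(), X, y, cv=5)
```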

Relevance: 100.00%

Abstract:

Dissolved organic matter (DOM) is a complex mixture of organic compounds, ubiquitous in marine and freshwater systems. Fluorescence spectroscopy, by means of Excitation-Emission Matrices (EEM), has become an indispensable tool to study DOM sources, transport and fate in aquatic ecosystems. However, the statistical treatment of large and heterogeneous EEM data sets still represents an important challenge for biogeochemists. Recently, Self-Organising Maps (SOM) have been proposed as a tool to explore patterns in large EEM data sets. SOM is a pattern recognition method which clusters and reduces the dimensionality of input EEMs without relying on any assumption about the data structure. In this paper, we show how SOM, coupled with a correlation analysis of the component planes, can be used both to explore patterns among samples and to identify individual fluorescence components. We analysed a large and heterogeneous EEM data set, including samples from a river catchment collected under a range of hydrological conditions, along a 60-km downstream gradient, and under the influence of different degrees of anthropogenic impact. According to our results, chemical industry effluents appeared to have unique and distinctive spectral characteristics. On the other hand, river samples collected under flash-flood conditions showed homogeneous EEM shapes. The correlation analysis of the component planes suggested the presence of four fluorescence components, consistent with DOM components previously described in the literature. A remarkable strength of this methodology was that outlier samples appeared naturally integrated in the analysis. We conclude that SOM coupled with a correlation analysis procedure is a promising tool for studying large and heterogeneous EEM data sets.
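A minimal SOM sketch in the spirit of the method described, clustering unfolded EEMs on a 2-D map; the map size, the learning-rate and neighbourhood schedules, and the flattening of each EEM into a vector are illustrative assumptions, not the paper's configuration.

```python
# Minimal self-organising map trained by stochastic (online) updates.
import numpy as np

def train_som(data, rows=8, cols=8, n_iter=5000, lr0=0.5, sigma0=3.0, seed=0):
    """data: (n_samples, n_features) array of unfolded (flattened) EEMs."""
    rng = np.random.default_rng(seed)
    grid = np.array([(i, j) for i in range(rows) for j in range(cols)], float)
    W = rng.normal(size=(rows * cols, data.shape[1]))   # codebook vectors
    for t in range(n_iter):
        frac = t / n_iter
        lr = lr0 * (1 - frac)                           # decaying learning rate
        sigma = sigma0 * (1 - frac) + 0.5               # shrinking neighbourhood
        x = data[rng.integers(len(data))]               # random training sample
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))     # best-matching unit
        # Gaussian neighbourhood on the map grid around the BMU.
        h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
        W += lr * h[:, None] * (x - W)                  # pull units toward x
    return W.reshape(rows, cols, -1)   # each cell holds a prototype EEM
```

The component planes analysed in the paper correspond to slices of the returned codebook across individual excitation-emission wavelength pairs.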

Relevance: 100.00%

Abstract:

The objective of this work is the development of a method for objectively characterizing image quality, applicable both to analogue mammography systems, which use a screen-film combination as detector, and to digital systems based on semiconductor technology, with a view to comparing their performance. The method takes into account the dynamic range of the detector and the detectability of high-contrast structures, simulating microcalcifications, and of low-contrast structures, simulating opacities (tumour nodules). It also considers the image visualization process and the observer response. To this end, a test object with properties close to those of a compressed breast was developed and tested; it is composed of various tissue-equivalent materials, ranging from glandular to adipose, and includes areas for simulating high- and low-contrast structures as well as for measuring resolution and noise. The visualization process was integrated by using a CCD camera that measures the image quality parameters directly from the image of the test object, in a physical quantity common to the digital and analogue systems, namely the luminance reaching the observer's eye. The use of a synthetic quantity integrating contrast, noise and resolution at the same time makes an objective comparison between the two mammography systems possible. A mathematical model simulating the response of an observer and integrating the basic image quality parameters was used to compute the detectability of high- and low-contrast structures as a function of the tissue type on which they lie. The results show that, at equal dose, the detectability of the structures is significantly higher with the digital mammography system than with the analogue system, mainly because the noise of the digital system is lower. They also show that the methodology, which compares digital and analogue imaging systems using a wide-dynamic-range test object and a camera, can be applied to other radiological modalities and to optimizing image reading conditions.

The goal of this work was to develop a method to objectively compare the performance of a digital and a screen-film mammography system in terms of image quality and patient dose. We propose a method that takes into account the dynamic range of the image detector and the detection of high-contrast (for microcalcifications) and low-contrast (for masses or tumoral nodules) structures. The method also addresses the problems of image visualization and observer response. A test object, designed to represent a compressed breast, was constructed from various tissue-equivalent materials ranging from purely adipose to purely glandular composition. Different areas within the test object permitted the evaluation of low- and high-contrast detection, spatial resolution, and image noise. All the images (digital and conventional) were captured using a CCD camera to include the visualization process in the image quality assessment. In this way the luminance reaching the viewer's eyes can be controlled for both kinds of images. A global quantity describing image contrast, spatial resolution and noise, expressed in terms of luminance at the camera, can then be used to compare the two technologies objectively. The quantity used was a mathematical model observer that calculates the detectability of high- and low-contrast structures as a function of the background tissue. Our results show that for a given patient dose, the detection of high- and low-contrast structures is significantly better for the digital system than for the conventional screen-film system studied. This is mainly because the image noise is lower for the digital system than for the screen-film detector. The method of using a test object with a large dynamic range combined with a camera to compare conventional and digital imaging modalities can be applied to other radiological imaging techniques. In particular it could be used to optimize the process of radiographic film reading.
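One simple instance of such a mathematical model observer is the non-prewhitening (NPW) matched filter, which collapses the signal profile and the noise level into a single detectability index d'. The abstract does not specify which observer model was used, so the following is an assumed illustration for the white-noise case.

```python
# Sketch of a non-prewhitening matched-filter detectability index.
import numpy as np

def npw_dprime(signal_image, background_image, noise_sd):
    """d' of a known signal on a uniform background with white noise."""
    template = signal_image - background_image     # expected signal profile
    # For white noise, d'_NPW = sqrt(sum of squared signal) / noise_sd.
    return np.sqrt((template ** 2).sum()) / noise_sd
```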

Relevance: 100.00%

Abstract:

Peer-reviewed

Relevance: 100.00%

Abstract:

Behavior-based navigation of autonomous vehicles requires the recognition of navigable areas and potential obstacles. In this paper we describe a model-based object recognition system which is part of an image interpretation system intended to assist the navigation of autonomous vehicles operating in industrial environments. The recognition system integrates color, shape and texture information together with the location of the vanishing point. The recognition process starts from some prior scene knowledge, that is, a generic model of the expected scene and the potential objects. The recognition system constitutes an approach where different low-level vision techniques extract a multitude of image descriptors which are then analyzed using a rule-based reasoning system to interpret the image content. This system has been implemented using a rule-based cooperative expert system.
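A toy sketch of how such rule-based fusion of low-level descriptors might look; the descriptors, rules and thresholds below are invented for illustration and are not the system's actual knowledge base.

```python
# Toy rule-based region classifier over assumed low-level descriptors.
def classify_region(d):
    """d: dict of descriptors extracted for one segmented image region."""
    # A flat, weakly textured, gray region under the vanishing point
    # is a plausible candidate for navigable floor.
    if d["below_vanishing_point"] and d["texture_energy"] < 0.2 \
            and d["dominant_color"] == "gray":
        return "navigable area"
    # A tall, saturated-color region is a plausible obstacle candidate.
    if d["height"] > d["width"] and d["dominant_color"] in ("red", "yellow"):
        return "potential obstacle"
    return "unknown"
```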

Relevance: 100.00%

Abstract:

We describe a model-based object recognition system which is part of an image interpretation system intended to assist autonomous vehicle navigation. The system is intended to operate in man-made environments. Behavior-based navigation of autonomous vehicles involves the recognition of navigable areas and potential obstacles. The recognition system integrates color, shape and texture information together with the location of the vanishing point. The recognition process starts from some prior scene knowledge, that is, a generic model of the expected scene and the potential objects. The recognition system constitutes an approach where different low-level vision techniques extract a multitude of image descriptors which are then analyzed using a rule-based reasoning system to interpret the image content. The system has been implemented using CEES, the C++ embedded expert system shell developed in the Systems Engineering and Automatic Control Laboratory (University of Girona) as a specific rule-based problem-solving tool. It was especially conceived for supporting cooperative expert systems, and uses the object-oriented programming paradigm.

Relevance: 100.00%

Abstract:

Monitoring of sewage sludge has demonstrated the presence of many polar anthropogenic pollutants since LC/MS techniques came into routine use. While advanced techniques may improve characterizations, flawed sample processing procedures may disturb or disguise the presence and fate of many target compounds in this type of complex matrix before the analytical process starts. Freeze-drying or oven-drying, in combination with centrifugation or filtration, were performed as sample processing techniques, followed by visual pattern recognition of target compounds to assess the pretreatment processes. The results showed that oven-drying affected the sludge characterization, while freeze-drying led to fewer analytical misinterpretations.