930 results for pattern-mixture model


Relevance:

30.00%

Publisher:

Abstract:

Context: Ovarian tumor (OT) typing is a competency expected of pathologists, with significant clinical implications. OTs, however, come in numerous different types, some rather rare, with the consequence that some departments have few opportunities for practice. Aim: Our aim was to design a tool for pathologists to train in typing less common OTs. Method and Results: Representative slides of 20 less common OTs were scanned (NanoZoomer Digital, Hamamatsu®) and the diagnostic algorithm proposed by Young and Scully was applied to each case (Young RH and Scully RE, Seminars in Diagnostic Pathology 2001, 18: 161-235), to include: recognition of morphological pattern(s); shortlisting of differential diagnoses; and proposal of relevant immunohistochemical markers. The next steps of this project will be: evaluation of the tool in several postgraduate training centers in Europe and Québec; improvement of its design based on the evaluation results; and diffusion to a larger public. Discussion: In clinical medicine, solving many cases is recognized as essential for a novice to become an expert. This project relies on virtual slide technology to provide pathologists with a learning tool aimed at increasing their skill in OT typing. After due evaluation, this model might be extended to other uncommon tumors.


Machine Learning for geospatial data: algorithms, software tools and case studies

Abstract: The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning?
In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression, and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to implementation as predictive engines in decision support systems, for purposes of environmental data mining that include pattern recognition, modeling and prediction, as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for the geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to software implementation. The main algorithms and models considered are the following: the multilayer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression, and density estimation. Exploratory data analysis (EDA) is the initial and a very important part of data analysis. In this thesis the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations; it helps detect the presence of spatial patterns, at least those describable by two-point statistics.
A machine learning approach to ESDA is presented through the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a topical problem: the automatic mapping of geospatial data. General regression neural networks (GRNN) are proposed as an efficient model for this task. The performance of the GRNN model is demonstrated on Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters with the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. The Machine Learning Office tools were developed over the last 15 years and have been used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals; classification of soil types and hydro-geological units; decision-oriented mapping with uncertainties; and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools useful for exploratory data analysis and visualisation were developed as well. The software is user-friendly and easy to use.
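The GRNN mentioned above is equivalent to Nadaraya-Watson kernel regression: a prediction is a kernel-weighted average of the observed values, controlled by a single smoothing parameter. The sketch below is a minimal illustration of that idea on toy spatial data; the Gaussian kernel, the value of `sigma` and the synthetic data are assumptions for illustration, not the thesis's Machine Learning Office implementation.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=1.0):
    """General Regression Neural Network (Nadaraya-Watson kernel
    regression): each prediction is a Gaussian-weighted average of the
    training targets, with a single smoothing parameter sigma."""
    X_train = np.asarray(X_train, dtype=float)
    X_query = np.asarray(X_query, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    # Squared Euclidean distances between every query and training point.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))   # Gaussian kernel weights
    return (w @ y_train) / w.sum(axis=1)   # weighted average of targets

# Toy spatial data: a value that depends smoothly on 2-D coordinates.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(200, 2))
values = np.sin(coords[:, 0]) + 0.1 * rng.normal(size=200)
pred = grnn_predict(coords, values, coords[:5], sigma=0.5)
```

Because each prediction is a convex combination of the training values, GRNN output always stays within the observed data range, one reason the method behaves robustly in automatic mapping.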


AIMS: Surgical ablation procedures for treating atrial fibrillation have been shown to be highly successful. However, the ideal ablation pattern remains to be determined. This article reports a systematic study of the performance of different ablation line patterns. METHODS AND RESULTS: The study was performed in a biophysical model of the human atria by combining basic lines: (i) in the right atrium, an isthmus line, a line between the venae cavae, and an appendage line; and (ii) in the left atrium, several versions of pulmonary vein isolation, connection of the pulmonary veins, an isthmus line, and an appendage line. Success rates and the presence of residual atrial flutter were documented. Basic patterns yielded conversion rates of only 10-25% and 10-55% in the right and left atria, respectively. The best result for pulmonary vein isolation was obtained when a single closed line encompassed all veins (55%). Combining lines in the right or left atrium alone led to success rates of 65% and 80%, respectively. Higher rates, up to 90-100%, could be obtained when right and left lines were combined. The inclusion of a left isthmus line was found to be essential for avoiding uncommon left atrial flutter. CONCLUSION: Some of the patterns studied achieved a high conversion rate while using fewer lines than the Maze III procedure. The biophysical atrial model is shown to be effective in the search for promising alternative ablation strategies.


Rats bearing the Yoshida AH-130 ascites hepatoma showed enhanced fractional rates of protein degradation in gastrocnemius muscle, heart, and liver, while fractional synthesis rates were similar to those in non-tumor-bearing rats. This hypercatabolic pattern was associated with marked perturbations of hormonal homeostasis and the presence of tumor necrosis factor (TNF) in the circulation. The daily administration of a goat anti-murine TNF IgG to tumor-bearing rats decreased protein degradation rates in skeletal muscle, heart, and liver compared with tumor-bearing rats receiving a nonimmune goat IgG. The anti-TNF treatment was also effective in attenuating early perturbations in insulin and corticosterone homeostasis. Although these results suggest that TNF plays a significant role in mediating the changes in protein turnover and hormone levels elicited by tumor growth, the inability of the treatment to prevent a reduction in body weight implies that other mediators or tumor-related events are also involved.


This paper presents a validation study of statistical unsupervised brain tissue classification techniques in magnetic resonance (MR) images. Several image models assuming different hypotheses regarding the intensity distribution model, the spatial model and the number of classes are assessed. The methods are tested on simulated data for which the classification ground truth is known. Different levels of noise and intensity nonuniformity are added to simulate real imaging conditions. No enhancement of image quality is applied either before or during the classification process; in this way, both the accuracy of the methods and their robustness to image artifacts are tested. Classification is also performed on real data, where a quantitative validation compares the methods' results with an estimated ground truth derived from manual segmentations by experts. The validity of the various classification methods, both in labeling the image and in estimating tissue volume, is assessed with different local and global measures. Results demonstrate that methods relying on both intensity and spatial information are more robust to noise and field inhomogeneities. We also demonstrate that partial volume is not perfectly modeled, even though methods that account for mixture classes outperform methods that consider only pure Gaussian classes. Finally, we show that results on simulated data can also be extended to real data.
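The contrast drawn above between pure Gaussian classes and mixture-based intensity models can be made concrete with a minimal EM fit of a one-dimensional Gaussian mixture to synthetic "tissue intensities". The three classes, their parameters and the quantile initialization below are illustrative assumptions, not the specific models validated in the paper.

```python
import numpy as np

def em_gmm_1d(x, k=3, iters=60):
    """Minimal EM for a one-dimensional Gaussian mixture, the kind of
    intensity-only model compared against spatially regularized
    classifiers.  Returns mixing weights, means and variances."""
    # Deterministic initialization from the data quantiles.
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: posterior responsibility of each class for each sample.
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
               / np.sqrt(2.0 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances.
        nk = resp.sum(axis=0)
        pi, mu = nk / len(x), (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

# Synthetic "voxel intensities" drawn from three tissue classes.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(30, 3, 500),
                    rng.normal(60, 4, 500),
                    rng.normal(90, 3, 500)])
pi, mu, var = em_gmm_1d(x, k=3)
```

A partial-volume-aware model would add mixture classes whose means lie between the pure-tissue means; the EM machinery stays the same.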


The failure of current strategies to explain controversial findings on the pattern of pathophysiological changes in Alzheimer's disease (AD) motivates the development of new integrative approaches based on multi-modal neuroimaging data that capture various aspects of disease pathology. Previous studies using [18F]fluorodeoxyglucose positron emission tomography (FDG-PET) and structural magnetic resonance imaging (sMRI) report controversial results about the timeline, spatial extent and magnitude of glucose hypometabolism and atrophy in AD that depend on the clinical and demographic characteristics of the studied populations. Here, we provide and validate, at the group level, a generative anatomical model of glucose hypometabolism and atrophy progression in AD based on FDG-PET and sMRI data of 80 patients and 79 healthy controls, describing the expected age- and symptom-severity-related changes in AD relative to a baseline provided by healthy aging. We demonstrate a high level of anatomical accuracy for both modalities, yielding strongly age- and symptom-severity-dependent glucose hypometabolism in temporal, parietal and precuneal regions and a more extensive network of atrophy in hippocampal, temporal, parietal, occipital and posterior caudate regions. The model suggests greater and more consistent changes in FDG-PET than in sMRI at earlier stages, and an inversion of this pattern at more advanced AD stages. Our model describes, integrates and predicts characteristic patterns of AD-related pathology, uncontaminated by normal age effects, derived from multi-modal data. It further provides an integrative explanation for findings suggesting a dissociation between early- and late-onset AD. The generative model offers a basis for the further development of individualized biomarkers allowing accurate early diagnosis and treatment evaluation.


The size-advantage model (SAM) explains the temporal variation of energetic investment in reproductive structures (i.e. male and female gametes and reproductive organs) in long-lived hermaphroditic plants and animals. It proposes that an increase in the resources available to an organism induces a higher relative investment in the most energetically costly sexual structures. In plants, pollination interactions are known to play an important role in the evolution of floral features. Because the SAM directly concerns flower characters, pollinators are expected to have a strong influence on whether the model applies. This hypothesis, however, has never been tested. Here, we investigate whether the identity and diversity of pollinators can be used as a proxy to predict the applicability of the SAM in exclusively zoophilous plants. We present a new approach to unravel the dynamics of the model and test it on several widespread Arum (Araceae) species. By identifying the species composition, abundance and spatial variation of arthropods trapped in inflorescences, we show that some species (i.e. A. cylindraceum and A. italicum) display a generalist reproductive strategy, relying on the exploitation of a low number of dipterans, in contrast to the pattern seen in the specialist A. maculatum (pollinated specifically by only two fly species). Based on the model presented here, the SAM is predicted to apply to the first two species but not to the latter, predictions that are further confirmed by allometric measures. We demonstrate that while an increase in the female zone occurs in larger inflorescences of the generalist species, this does not happen in the species with specific pollinators. This is the first time that this theory has been both proposed and empirically tested in zoophilous plants. Its broader biological importance is discussed through its application to other, non-Arum systems.


Potocki-Lupski syndrome (PTLS) is associated with a microduplication of 17p11.2. Clinical features include multiple congenital and neurobehavioral abnormalities and autistic features. We have generated a PTLS mouse model, Dp(11)17/+, that recapitulates some of the physical and neurobehavioral phenotypes present in patients. Here, we investigated the social behavior and gene expression pattern of this mouse model in a pure C57BL/6-Tyr(c-Brd) genetic background. Dp(11)17/+ male mice displayed normal home-cage behavior but increased anxiety and increased dominant behavior in specific tests. A subtle impairment in the preference for a social target versus an inanimate target and an abnormal preference for social novelty (the preference to explore an unfamiliar mouse versus a familiar one) were also observed. Our results indicate that these animals could provide a valuable model to identify the specific gene(s) that confer abnormal social behaviors and that map within this delimited genomic interval. In a first attempt to identify candidate genes and to elucidate the mechanisms regulating these important phenotypes, we directly assessed the relative transcription of genes within and around this genomic interval. We found that candidate genes include not only most of the duplicated genes but also normal-copy genes that flank the engineered interval; both categories of genes showed altered expression levels in the hippocampus of Dp(11)17/+ mice.


The main goal of this paper is to propose a convergent finite volume method for a reaction-diffusion system with cross-diffusion. First, we sketch an existence proof for a class of cross-diffusion systems. Then the standard two-point finite volume fluxes are used in combination with a nonlinear positivity-preserving approximation of the cross-diffusion coefficients. Existence and uniqueness of the approximate solution are addressed, and it is also shown that the scheme converges to the corresponding weak solution for the studied model. Furthermore, we provide a stability analysis to study pattern-formation phenomena, and we perform two-dimensional numerical examples which exhibit formation of nonuniform spatial patterns. From the simulations it is also found that experimental rates of convergence are slightly below second order. The convergence proof uses two ingredients of interest for various applications, namely the discrete Sobolev embedding inequalities with general boundary conditions and a space-time $L^1$ compactness argument that mimics the compactness lemma due to Kruzhkov. The proofs of these results are given in the Appendix.
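The "standard two-point finite volume fluxes" mentioned above can be sketched in one dimension for plain linear diffusion. The sketch below is a deliberately simplified toy (explicit time stepping, zero-flux boundaries, no cross-diffusion terms), but it exhibits the two properties such schemes are built around: exact conservation of mass and preservation of positivity under a CFL restriction.

```python
import numpy as np

def fv_diffusion_step(u, D, dx, dt):
    """One explicit step of a 1-D finite volume scheme with standard
    two-point fluxes F_{i+1/2} = -D (u_{i+1} - u_i) / dx and zero-flux
    boundaries, so total mass sum(u)*dx is conserved exactly."""
    flux = -D * np.diff(u) / dx   # fluxes at the interior cell faces
    div = np.zeros_like(u)
    div[:-1] += flux              # flux leaving cell i through its right face
    div[1:] -= flux               # the same flux entering cell i+1
    return u - dt / dx * div

# Diffuse a unit-mass bump; dt respects the CFL condition dt <= dx^2/(2D).
n, D = 100, 1.0
dx = 1.0 / n
dt = 0.4 * dx * dx / D
u = np.zeros(n)
u[n // 2] = 1.0 / dx              # discrete delta with total mass 1
for _ in range(500):
    u = fv_diffusion_step(u, D, dx, dt)
```

With the CFL bound satisfied, each updated cell value is a convex combination of its neighbors, which is the discrete analogue of the positivity preservation the paper obtains through its nonlinear approximation of the cross-diffusion coefficients.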


Flow structures above vegetation canopies have received much attention in the terrestrial and aquatic literature. This research has led to a good process understanding of mean and turbulent canopy flow structure. However, much of it has focused on rigid or semi-rigid vegetation with relatively simple morphology. Aquatic macrophytes differ from this form, exhibiting more complex morphologies, a predominantly horizontal posture in the flow, and a different force balance. While some recent studies have investigated such canopies, there is still a need to examine the relevance and applicability of general canopy layer theory to these types of vegetation. Here, we report on a range of numerical experiments using both semi-rigid and highly flexible canopies. The results for the semi-rigid canopies support existing canopy layer theory. For the highly flexible vegetation, however, the flow pattern is much more complex, suggesting that a new canopy model may be required.


In this paper we discuss the main privacy issues around mobile business models and envision new solutions that have privacy protection as their main value proposition. We construct a framework to help analyze the situation and assume that a third party is necessary to warrant transactions between mobile users and m-commerce providers. We then use the business model canvas to describe a generic business model pattern for privacy third-party services. This pattern is illustrated in two different variations of a privacy business model, which we call privacy broker and privacy management software. We conclude by giving examples of each business model and suggesting further directions of investigation.


PURPOSE: To characterize perifoveal intraretinal cavities observed around full-thickness macular holes (MH) using en face optical coherence tomography, and to establish correlations with the histology of human and primate maculae. DESIGN: Retrospective nonconsecutive observational case series. METHODS: Macular en face scans of 8 patients with MH were analyzed to quantify the areas of hyporeflective spaces and were compared with macular flat mounts and sections from 1 normal human donor eye and 2 normal primate eyes (Macaca fascicularis). Immunohistochemistry was used to study the distribution of glutamine synthetase, expressed by Müller cells, and zonula occludens-1, a tight-junction protein. RESULTS: The mean area of hyporeflective spaces was lower in the inner nuclear layer (INL) than in the complex formed by the outer plexiform layer (OPL) and the Henle fiber layer (HFL): 5.0 × 10⁻³ mm² vs 15.9 × 10⁻³ mm², respectively (P < .0001, Kruskal-Wallis test). In the OPL and HFL, cavities were elongated with a stellate pattern, whereas in the INL they were rounded and formed vertical cylinders. Immunohistochemistry confirmed that Müller cells follow a radial distribution around the fovea in the frontal plane and a "Z-shaped" course in the axial plane, running obliquely in the OPL and HFL and vertically in the inner layers. In addition, zonula occludens-1 co-localized with Müller cells within the OPL-HFL complex, indicating junctions between Müller cells and cone axons. CONCLUSION: The dual profile of cavities around MHs correlates with Müller cell morphology and is consistent with the hypothesis of intra- or extracellular fluid accumulation along these cells.


The fusion of bone marrow (BM) hematopoietic cells with hepatocytes to generate BM-derived hepatocytes (BMDH) is a natural process that is enhanced in damaged tissues. However, the reprogramming needed to generate BMDH and the identity of the resultant cells are essentially unknown. In a mouse model of chronic liver damage, we identify a modification in the chromatin structure of the hematopoietic nucleus during BMDH formation, accompanied by loss of the key hematopoietic transcription factor PU.1/Sfpi1 (SFFV proviral integration 1) and gain of the key hepatic transcriptional regulator HNF1 homeobox A (HNF-1A/Hnf1a). Through genome-wide expression analysis of laser-captured BMDH, a differential gene expression pattern was detected, and the observed chromatin changes were confirmed at the level of chromatin regulator genes. Similarly, Transforming Growth Factor-β1 (TGF-β1) and neurotransmitter (e.g. Prostaglandin E Receptor 4 [Ptger4]) pathway genes were over-expressed. In summary, in vivo BMDH generation is a process in which the hematopoietic cell nucleus changes its identity and acquires hepatic features. BMDHs have their own cell identity, characterized by an expression pattern different from that of hematopoietic cells or hepatocytes. The role of BMDHs in the liver requires further investigation.


A version of cascaded systems analysis was developed specifically to study quantum noise propagation in x-ray detectors. Signal and quantum noise propagation was then modelled in four types of x-ray detectors used for digital mammography: four flat panel (FP) systems, one computed radiography system, and one slot-scan silicon-wafer-based photon counting device. As required inputs to the model, the two-dimensional (2D) modulation transfer function (MTF), noise power spectra (NPS) and detective quantum efficiency (DQE) were measured for six mammography systems that use these detectors. A new method is described for reconstructing anisotropic 2D presampling MTF matrices from 1D radial MTFs measured along different angular directions across the detector; an image of a sharp circular disc was used for this purpose. The effective pixel fill factor of the FP systems was determined from the axial 1D presampling MTFs measured with a square sharp edge along the two orthogonal directions of the pixel lattice. Expectation MTFs (EMTFs) were then calculated by averaging the radial MTFs over all possible phases, and the 2D EMTF was formed with the same reconstruction technique used for the 2D presampling MTF. The quantum NPS was then established by noise decomposition from homogeneous images acquired as a function of detector air kerma. It was further decomposed into correlated and uncorrelated quantum components by fitting the radially averaged quantum NPS with the radially averaged EMTF². This whole procedure allowed a detailed analysis of the influence of aliasing, signal and noise decorrelation, x-ray capture efficiency and global secondary gain on the NPS and detector DQE. The influence of noise statistics, pixel fill factor, and additional electronic and fixed-pattern noise on the DQE was also studied. The 2D cascaded model and the decompositions performed on the acquired images also explained the observed quantum NPS and DQE anisotropy.
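The NPS estimation step described above (noise decomposition from homogeneous images followed by radial averaging) can be sketched as follows. The normalization convention, patch size and pixel pitch are illustrative assumptions; for white noise of variance σ² the radial NPS should come out flat at σ²·dx·dy.

```python
import numpy as np

def radial_nps(patches, pixel_pitch=0.1):
    """Estimate the NPS from an ensemble of homogeneous (flat-field)
    patches: NPS_2D = <|DFT(I - mean)|^2> * dx*dy / (Nx*Ny),
    then radially average it up to the axial Nyquist frequency."""
    patches = np.asarray(patches, dtype=float)
    n, ny, nx = patches.shape
    zero_mean = patches - patches.mean(axis=(1, 2), keepdims=True)
    nps2d = (np.abs(np.fft.fft2(zero_mean)) ** 2).mean(axis=0)
    nps2d *= pixel_pitch ** 2 / (nx * ny)
    # Radial frequency of every 2-D frequency bin.
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fr = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    edges = np.linspace(0.0, 0.5 / pixel_pitch, 30)
    idx = np.digitize(fr.ravel(), edges)
    keep = (idx >= 1) & (idx < len(edges))   # drop bins beyond Nyquist
    sums = np.bincount(idx[keep], weights=nps2d.ravel()[keep],
                       minlength=len(edges))
    counts = np.bincount(idx[keep], minlength=len(edges))
    return edges[1:], sums[1:] / counts[1:]

# White noise of standard deviation 2 has a flat NPS = 4 * dx * dy = 0.04.
rng = np.random.default_rng(0)
flat = rng.normal(0.0, 2.0, size=(64, 64, 64))
freq, nps = radial_nps(flat, pixel_pitch=0.1)
```

Fitting this radial profile with the radially averaged EMTF², as the paper does, is what separates the correlated quantum component from the uncorrelated one.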


Behavior-based navigation of autonomous vehicles requires the recognition of navigable areas and potential obstacles. In this paper we describe a model-based object recognition system that is part of an image interpretation system intended to assist the navigation of autonomous vehicles operating in industrial environments. The recognition system integrates color, shape and texture information together with the location of the vanishing point. The recognition process starts from some prior scene knowledge, that is, a generic model of the expected scene and the potential objects. The recognition system constitutes an approach in which different low-level vision techniques extract a multitude of image descriptors, which are then analyzed using a rule-based reasoning system to interpret the image content. The system has been implemented as a rule-based cooperative expert system.
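The rule-based reasoning stage described above can be sketched as a small set of predicates over per-region image descriptors. The descriptor names and the rules below are invented for illustration and are not the paper's actual rule base.

```python
# Each rule maps a combination of low-level descriptors to an object label.
# Descriptors such as "texture" or "below_vanishing_point" stand in for the
# outputs of the color, shape, texture and vanishing-point modules.
RULES = [
    ("road",     lambda d: d["texture"] == "smooth" and d["below_vanishing_point"]),
    ("pallet",   lambda d: d["color"] == "brown" and d["shape"] == "rectangular"),
    ("obstacle", lambda d: d["height"] > 0.5 and not d["below_vanishing_point"]),
]

def classify_region(descriptors):
    """Return the first label whose rule fires, or 'unknown'."""
    for label, rule in RULES:
        if rule(descriptors):
            return label
    return "unknown"

# A smooth region under the vanishing point is labeled as navigable road.
region = {"texture": "smooth", "color": "gray", "shape": "irregular",
          "height": 0.0, "below_vanishing_point": True}
label = classify_region(region)
```

A cooperative expert system would let several such rule sets vote or chain conclusions; the ordered first-match scan here is the simplest possible conflict-resolution strategy.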