993 results for NONLINEAR MAPPING
Abstract:
The objective of this study was to adapt a nonlinear model (Wang and Engel - WE) for simulating the phenology of maize (Zea mays L.), and to evaluate this model and a linear one (thermal time) for predicting the developmental stages of a field-grown maize variety. A field experiment was conducted in Santa Maria, RS, Brazil, over the 2005/2006 and 2006/2007 growing seasons, with seven sowing dates in each season. Dates of emergence, silking, and physiological maturity of the maize variety BRS Missões were recorded in six replications on each sowing date. Data collected in the 2005/2006 growing season were used to estimate the coefficients of the two models, and data collected in the 2006/2007 growing season were used as an independent data set for model evaluation. The nonlinear WE model accurately predicted the dates of silking and physiological maturity, and had a lower root mean square error (RMSE) than the linear (thermal time) model. The overall RMSE for silking and physiological maturity was 2.7 and 4.8 days with the WE model, and 5.6 and 8.3 days with the thermal time model, respectively.
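For readers unfamiliar with the two model families, the difference is easy to sketch: thermal time accumulates degree-days linearly above a base temperature, whereas the Wang and Engel approach uses a nonlinear, beta-shaped temperature response bounded by cardinal temperatures. The following is a minimal, illustrative sketch only; the cardinal temperatures (Tmin = 8 °C, Topt = 28 °C, Tmax = 36 °C) are placeholder values, not the coefficients estimated in the study.

    import numpy as np

    def thermal_time(t_mean, t_base=8.0):
        """Linear thermal-time (degree-day) contribution of one day."""
        return max(0.0, t_mean - t_base)

    def wang_engel(t_mean, t_min=8.0, t_opt=28.0, t_max=36.0):
        """Wang-Engel beta temperature response, scaled to [0, 1]."""
        if t_mean <= t_min or t_mean >= t_max:
            return 0.0
        alpha = np.log(2.0) / np.log((t_max - t_min) / (t_opt - t_min))
        num = (2.0 * (t_mean - t_min) ** alpha * (t_opt - t_min) ** alpha
               - (t_mean - t_min) ** (2.0 * alpha))
        return num / (t_opt - t_min) ** (2.0 * alpha)

    # Compare the two responses over a range of daily mean temperatures (deg C)
    for t in [10, 18, 24, 28, 32, 35, 38]:
        print(t, round(thermal_time(t), 1), round(wang_engel(t), 3))

The linear response keeps increasing past the optimum temperature, while the beta-shaped response peaks at Topt and falls to zero at Tmax, which is the behavioural difference the two models trade on.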
Abstract:
Résumé: This thesis is devoted to the analysis, modelling and visualisation of spatially referenced environmental data using machine learning algorithms. Machine learning can broadly be considered a subfield of artificial intelligence concerned in particular with the development of techniques and algorithms that allow a machine to learn from data. In this thesis, machine learning algorithms are adapted to environmental data and to spatial prediction. Why machine learning? Because most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modelling tools. They can solve classification, regression and probability density modelling problems in high-dimensional spaces composed of spatially referenced explanatory variables ("geo-features") in addition to the geographical coordinates. Moreover, they are ideally suited to implementation as decision support tools for environmental questions ranging from pattern recognition to modelling and prediction, including automatic mapping. Their efficiency is comparable to that of geostatistical models in the space of geographical coordinates, but they are indispensable for high-dimensional data that include geo-features. The most important and popular machine learning algorithms are presented theoretically and implemented as software tools for the environmental sciences. The main algorithms described are the multilayer perceptron (MLP), the best-known algorithm in artificial intelligence, general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organising maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF) and mixture density networks (MDN). This range of algorithms covers varied tasks such as classification, regression and probability density estimation. Exploratory data analysis (EDA) is the first step of any data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are treated both with the traditional geostatistical approach, using experimental variography, and following machine learning principles. Experimental variography, which studies the relations between pairs of points, is a basic tool of geostatistical analysis of anisotropic spatial correlations and allows the detection of spatial patterns describable by two-point statistics. The machine learning approach to ESDA is presented through the k-nearest-neighbours method, which is very simple and has excellent interpretation and visualisation properties. An important part of the thesis deals with topical subjects such as the automatic mapping of spatial data. The general regression neural network is proposed to solve this task efficiently.
The performance of the GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, for which the GRNN significantly outperformed all other methods, particularly in emergency situations. The thesis is organised in four chapters: theory, applications, software tools and guided examples. An important part of the work is a collection of software tools, Machine Learning Office. This software collection has been developed over the last 15 years and has been used for teaching numerous courses, including international workshops in China, France, Italy, Ireland and Switzerland, as well as in fundamental and applied research projects. The case studies considered cover a wide spectrum of real low- and high-dimensional geo-environmental problems, such as air, soil and water pollution by radioactive products and heavy metals, the classification of soil types and hydrogeological units, uncertainty mapping for decision support, and the assessment of natural hazards (landslides, avalanches). Complementary tools for exploratory data analysis and visualisation were also developed, with care taken to provide a user-friendly and easy-to-use interface.
Machine Learning for geospatial data: algorithms, software tools and case studies
Abstract: The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence; it is mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to implementation as predictive engines in decision support systems, for purposes of environmental data mining including pattern recognition, modeling and prediction, as well as automatic data mapping. Their efficiency is competitive with that of geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for the geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to the software implementation. The main algorithms and models considered are the following: the multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks and mixture density networks. This set of models covers machine learning tasks such as classification, regression and density estimation. Exploratory data analysis (EDA) is an initial and very important part of data analysis.
In this thesis the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, such as experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations and helps to reveal the presence of spatial patterns, at least those describable by two-point statistics. A machine learning approach to ESDA is presented by applying the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a currently hot topic, the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model to solve this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where the GRNN significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters with the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. The Machine Learning Office tools were developed over the last 15 years and have been used both for many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and for carrying out fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydro-geological units, decision-oriented mapping with uncertainties, and natural hazard (landslides, avalanches) assessment and susceptibility mapping. Complementary tools useful for exploratory data analysis and visualisation were developed as well. The software is user friendly and easy to use.
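Since the abstract leans on the GRNN for automatic mapping, a compact sketch may help fix ideas. A GRNN is essentially Nadaraya-Watson kernel regression: the prediction at a new location is a Gaussian-kernel weighted average of the training values, with a single bandwidth (sigma) to tune. This is a minimal illustration under that reading, not the thesis software; the coordinates and values below are made up.

    import numpy as np

    def grnn_predict(X_train, y_train, X_query, sigma=1.0):
        """General regression neural network (Nadaraya-Watson) prediction.

        X_train: (n, d) training coordinates/geo-features
        y_train: (n,) observed values
        X_query: (m, d) locations to predict
        """
        # Squared Euclidean distances between query and training points
        d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
        w = np.exp(-d2 / (2.0 * sigma ** 2))      # Gaussian kernel weights
        return (w @ y_train) / w.sum(axis=1)      # kernel-weighted average

    # Toy example: interpolate a value at the centre of four observations
    X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    y = np.array([1.0, 2.0, 3.0, 4.0])
    print(grnn_predict(X, y, np.array([[0.5, 0.5]]), sigma=0.5))  # -> 2.5

In practice the only free parameter, sigma, would be chosen by cross-validation, which is what makes this family of models attractive for automatic mapping.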
Abstract:
The objectives of this study were to detect quantitative trait loci (QTL) for protein content in soybean grown in two distinct tropical environments and to build a genetic map for protein content. One hundred eighteen soybean recombinant inbred lines (RIL), obtained from a cross between cultivars BARC 8 and Garimpo, were used. The RIL were cultivated in two distinct Brazilian tropical environments: Cascavel county, in Paraná, and Viçosa county, in Minas Gerais (24º57'S, 53º27'W and 20º45'S, 42º52'W, respectively). Sixty-six SSR primer pairs and 65 RAPD primers were polymorphic and segregated at a 1:1 ratio. Thirty poorly saturated linkage groups were obtained, comprising 90 markers, with 41 markers remaining unlinked. For the lines cultivated in Cascavel, three QTL were mapped in the C2, E and N linkage groups, which explained 14.37, 10.31 and 7.34% of the phenotypic variation in protein content, respectively. For the lines cultivated in Viçosa, two QTL were mapped in linkage groups G and #1, which explained 9.51 and 7.34% of the phenotypic variation in protein content, respectively. Based on the mean of the two environments, two QTL were identified: one in linkage group E (9.90%) and the other in group L (7.11%). In order for future studies to consistently detect QTL effects across different environments, genotypes with greater stability should be used.
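As a side note, the 1:1 segregation check mentioned for the SSR and RAPD markers is typically a chi-square goodness-of-fit test on the allele counts in the RIL population. A minimal sketch, with made-up counts and using scipy, is shown below.

    from scipy.stats import chisquare

    # Hypothetical counts of the two parental alleles at one marker
    # across the 118 recombinant inbred lines (invented, for illustration)
    observed = [64, 54]
    expected = [sum(observed) / 2] * 2   # expected 1:1 segregation

    chi2, p = chisquare(observed, f_exp=expected)
    print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # p > 0.05 -> no significant distortion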
Abstract:
The global structural connectivity of the brain, the human connectome, is now accessible at millimeter scale with the use of MRI. In this paper, we describe an approach to map the connectome by constructing normalized whole-brain structural connection matrices derived from diffusion MRI tractography at 5 different scales. Using a template-based approach to match cortical landmarks of different subjects, we propose a robust method that allows (a) the selection of identical cortical regions of interest of desired size and location in different subjects, with identification of the associated fiber tracts, (b) straightforward construction and interpretation of anatomically organized whole-brain connection matrices, and (c) statistical inter-subject comparison of brain connectivity at various scales. The fully automated post-processing steps necessary to build such matrices are detailed in this paper. Extensive validation tests are performed to assess the reproducibility of the method in a group of 5 healthy subjects, and its reliability is discussed in detail in a group of 20 healthy subjects.
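The core data structure here is the normalized connection matrix. A common construction, assumed here for illustration rather than taken from the paper's exact pipeline, is to count the streamlines whose endpoints fall in each pair of cortical regions of interest and then normalize, e.g. by the total streamline count.

    import numpy as np

    def connection_matrix(endpoint_labels, n_rois, normalize=True):
        """Build a whole-brain structural connection matrix.

        endpoint_labels: iterable of (roi_i, roi_j) pairs, one per streamline,
                         giving the ROI index of each endpoint (0..n_rois-1).
        """
        C = np.zeros((n_rois, n_rois))
        for i, j in endpoint_labels:
            C[i, j] += 1
            C[j, i] += 1                  # undirected connectivity
        if normalize and C.sum() > 0:
            C = C / C.sum()               # simple normalization by total count
        return C

    # Toy example: 5 streamlines distributed among 3 ROIs
    streamlines = [(0, 1), (0, 1), (1, 2), (0, 2), (2, 2)]
    print(connection_matrix(streamlines, n_rois=3))

Normalization is what makes matrices from subjects with different brain sizes and streamline counts comparable, which is the point of the inter-subject statistics mentioned in the abstract.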
Abstract:
We investigate the spatial dependence of the exciton lifetimes in single ZnO nanowires. We have found that the free exciton and bound exciton lifetimes exhibit a maximum at the center of the nanowires, while they decrease by 30% towards the tips. This dependence is explained by considering the cavity-like properties of the nanowires in combination with the Purcell effect. We show that the lifetime of the bound excitons scales with the localization energy to the power of 3/2, which validates the model of Rashba and Gurgenishvili at the nanoscale.
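The quoted scaling can be read as a radiative lifetime following tau proportional to E_loc^(3/2). A quick way to check such a power law on measured (localization energy, lifetime) pairs is a straight-line fit in log-log space; the numbers below are invented purely for illustration.

    import numpy as np

    # Hypothetical localization energies (meV) and measured lifetimes (ps)
    E_loc = np.array([5.0, 10.0, 15.0, 20.0])
    tau   = np.array([40.0, 115.0, 210.0, 320.0])

    # Fit tau = a * E_loc**n via a linear fit in log-log coordinates
    n, log_a = np.polyfit(np.log(E_loc), np.log(tau), 1)
    print(f"fitted exponent n = {n:.2f} (a 3/2 power law would give ~1.5)")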
Abstract:
The human brain is the most complex structure known. With its vast numbers of cells, connections and pathways, it is the source of every thought in the world. It consumes 25% of our oxygen and suffers very quickly from a disruption of its supply. An acute event, such as a stroke, results in rapid dysfunction referable to the affected area. A few minutes without oxygen and neuronal cells die and subsequently degenerate. Changes in the brain's incoming blood flow alter the anatomy and physiology of the brain. All stroke events leave behind a brain tissue lesion. To react rapidly and improve the prediction of outcome in stroke patients, accurate lesion detection and reliable lesion-based correlation with function would be very helpful. Using neuroimaging and clinical data from patients with cerebral injury, this study aims to investigate correlations of structural lesion locations with sensory functions.
Abstract:
Glucose metabolism is difficult to image with cellular resolution in mammalian brain tissue, particularly with (18)F-fluorodeoxy-D-glucose (FDG) positron emission tomography (PET). To this end, we explored the potential of synchrotron-based low-energy X-ray fluorescence (LEXRF) to image the stable isotope of fluorine (F) in phosphorylated FDG (FDG-6P) at 1 μm² spatial resolution in 3-μm-thick brain slices. The excitation-dependent fluorescence F signal at 676 eV varied linearly with FDG concentration between 0.5 and 10 mM, whereas the endogenous background F signal was undetectable in brain. To validate LEXRF mapping of fluorine, FDG was administered in vitro and in vivo, and the fluorine LEXRF signal from intracellularly trapped FDG-6P over selected brain areas rich in radial glia was spectrally quantitated at 1 μm² resolution. The subsequent generation of spatial LEXRF maps of F reproduced the expected localization and gradients of glucose metabolism in retinal Müller glia. In addition, FDG uptake was localized to periventricular hypothalamic tanycytes, whose morphological features were imaged simultaneously by X-ray absorption. We conclude that the high specificity of photon emission from F and its spatial mapping at ≤1 μm resolution demonstrate the ability to identify glucose uptake at subcellular resolution and hold remarkable potential for imaging glucose metabolism in biological tissue. © 2012 Wiley Periodicals, Inc.
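The quantitation step rests on the reported linear relation between the 676 eV fluorine signal and FDG concentration over 0.5-10 mM. A hedged sketch of such a calibration (fit a line on standards, then invert it for unknown pixels) is given below with invented numbers.

    import numpy as np

    # Hypothetical calibration standards: FDG concentration (mM) vs. F signal (a.u.)
    conc   = np.array([0.5, 2.0, 5.0, 10.0])
    signal = np.array([12.0, 49.0, 124.0, 251.0])

    slope, intercept = np.polyfit(conc, signal, 1)   # linear calibration fit

    def signal_to_conc(s):
        """Invert the calibration to estimate FDG concentration from a pixel signal."""
        return (s - intercept) / slope

    print(f"slope = {slope:.1f} a.u./mM, intercept = {intercept:.1f} a.u.")
    print(f"estimated concentration at 100 a.u.: {signal_to_conc(100.0):.2f} mM")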
Abstract:
Although numerous positron emission tomography (PET) studies with (18)F-fluoro-deoxyglucose (FDG) have reported quantitative results on cerebral glucose kinetics and consumption, there is a large variation between the absolute values found in the literature. One of the underlying causes is the inconsistent use of lumped constants (LCs), the derivation of which is often based on multiple assumptions that render absolute numbers imprecise and errors hard to quantify. We combined a kinetic FDG-PET study with magnetic resonance spectroscopic imaging (MRSI) of glucose dynamics in Sprague-Dawley rats to obtain a more comprehensive view of brain glucose kinetics and to determine a reliable value for the LC under isoflurane anaesthesia. Maps of Tmax/CMRglc derived from MRSI data and Tmax determined from PET kinetic modelling allowed us to obtain an LC-independent CMRglc. The LC was estimated to range from 0.33 ± 0.07 in retrosplenial cortex to 0.44 ± 0.05 in hippocampus, yielding CMRglc between 62 ± 14 and 54 ± 11 μmol/min/100 g, respectively. These newly determined LCs for four distinct areas of the rat brain under isoflurane anaesthesia provide a means of comparing the growing amount of FDG-PET data available from translational studies.
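The way the two modalities combine is, at its core, a ratio: if MRSI yields a map of Tmax/CMRglc and PET kinetic modelling yields Tmax, dividing the latter by the former gives CMRglc without invoking an LC. Under one common reading (an assumption here, not a statement of the paper's exact procedure), the LC can then be recovered as the ratio of the conventional FDG-derived estimate to this LC-independent value. A worked toy calculation with assumed numbers:

    # Toy numbers (assumed, for illustration only)
    tmax_over_cmrglc = 2.0           # from the MRSI-derived map
    tmax = 110.0                     # umol/min/100 g, from PET kinetic modelling

    cmrglc = tmax / tmax_over_cmrglc                  # LC-independent CMRglc
    print(f"CMRglc = {cmrglc:.1f} umol/min/100 g")    # -> 55.0

    # If a conventional FDG-based analysis gave an apparent CMRglc,
    # the lumped constant follows as the ratio of the two estimates (assumed reading):
    apparent_cmrglc = 22.0
    print(f"LC = {apparent_cmrglc / cmrglc:.2f}")     # -> 0.40, within the reported 0.33-0.44 range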
Abstract:
The present study deals with the analysis and mapping of Swiss franc interest rates. Interest rates depend on time and maturity, defining the term structure of the interest rate curves (IRC). In the present study, IRC are considered in a two-dimensional feature space: time and maturity. Exploratory data analysis includes a variety of tools widely used in econophysics and geostatistics. Geostatistical models and machine learning algorithms (multilayer perceptron and Support Vector Machines) were applied to produce interest rate maps. IR maps can be used for visualisation and pattern perception, to develop and explore economic hypotheses, to produce dynamic asset-liability simulations, and for financial risk assessment. The feasibility of applying the interest rate mapping approach to IRC forecasting is considered as well. (C) 2008 Elsevier B.V. All rights reserved.
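A concrete reading of "interest rate mapping": treat each observation as a point (time, maturity) with the rate as the target, fit a nonlinear regressor, and evaluate it on a regular grid to obtain a map. The sketch below uses scikit-learn's SVR as a stand-in for the models named in the abstract; the data and hyperparameters are invented.

    import numpy as np
    from sklearn.svm import SVR

    # Hypothetical samples: (time in years, maturity in years) -> rate in %
    X = np.array([[0.0, 1.0], [0.0, 10.0], [1.0, 1.0], [1.0, 10.0], [2.0, 5.0]])
    y = np.array([1.2, 2.5, 1.0, 2.3, 1.8])

    model = SVR(kernel="rbf", C=10.0, gamma=0.5).fit(X, y)

    # Evaluate on a regular (time, maturity) grid to produce an interest rate map
    times = np.linspace(0.0, 2.0, 5)
    maturities = np.linspace(1.0, 10.0, 4)
    grid = np.array([[t, m] for t in times for m in maturities])
    rate_map = model.predict(grid).reshape(len(times), len(maturities))
    print(rate_map.round(2))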
Abstract:
The objective of this work was to verify the existence of a lethal locus in a eucalyptus hybrid population, and to quantify the segregation distortion in linkage group 3 of the Eucalyptus genome. An E. grandis x E. urophylla hybrid population, which segregates for rust resistance, was genotyped with 19 microsatellite markers belonging to linkage group 3 of the Eucalyptus genome. To quantify the segregation distortion, maximum likelihood (ML) models specific to outbreeding populations were used. These models take the observed marker genotypes and the lethal locus viability as parameters. The ML solutions were obtained using the expectation-maximization algorithm. A lethal locus in linkage group 3 was verified and mapped, with high confidence, between the microsatellites EMBRA 189 and EMBRA 122. This lethal locus causes intense gametic selection from the male side. Its map position is 25 cM from the locus that controls rust resistance in this population.
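The study's viability estimation relies on ML models for outbred pedigrees solved with the EM algorithm; that machinery is too specific to reproduce here. The basic idea of estimating a viability parameter from distorted segregation counts can, however, be shown with a deliberately simplified single-marker stand-in (not the model used in the paper).

    # Simplified illustration: at a marker where 1:1 segregation is expected,
    # give one genotype class a relative viability v, so the expected
    # proportions are 1/(1+v) and v/(1+v). The maximum likelihood estimate
    # of v is then simply the ratio of the observed counts.
    n_normal, n_reduced = 120, 48          # hypothetical genotype counts
    v_hat = n_reduced / n_normal
    print(f"estimated relative viability v = {v_hat:.2f}")   # 1.0 would mean no distortion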
Abstract:
In sentinel node (SN) biopsy, an interval SN is defined as a lymph node, or group of lymph nodes, located between the primary melanoma and an anatomically well-defined lymph node group directly draining the skin. As shown in previous reports, these interval SNs seem to be at the same metastatic risk as SNs in the usual, classic areas. This study aimed to review the incidence, lymphatic anatomy, and metastatic risk of interval SNs. METHODS: SN biopsy was performed at a tertiary center by a single surgical team on a cohort of 402 consecutive patients with primary melanoma. The triple technique of localization was used, that is, lymphoscintigraphy, blue dye, and gamma probe. Otolaryngologic melanoma and mucosal melanoma were excluded from this analysis. SNs were examined by serial sectioning and immunohistochemistry. All patients with metastatic SNs were recommended to undergo radical selective lymph node dissection. RESULTS: The primary locations of the melanomas were the trunk (188), an upper limb (67), or a lower limb (147). Overall, 97 (24.1%) of the 402 SNs were metastatic. Interval SNs were observed in 18 patients, in all but 2 of whom classic SNs were also found. The primary location was truncal in 11 (61%) of the 18, an upper limb in 5, and a lower limb in 2. One patient with a dorsal melanoma had drainage exclusively to a cervicoscapular area that was shown on removal to contain not lymph node tissue but only a blue lymph channel without tumor cells. Apart from the interval SN, 13 patients had 1 classic SN area and 3 patients had 2 classic SN areas. Of the 18 patients, 2 had at least 1 metastatic interval SN and 2 had a metastatic classic SN; overall, 4 (22.2%) of the 18 patients were node-positive. CONCLUSION: We found that 2 of 18 interval SNs were metastatic. This study showed that preoperative lymphoscintigraphy must review all known lymphatic areas in order to exclude an interval SN.
Abstract:
Multi-center studies using magnetic resonance imaging facilitate studying small effect sizes, global population variance and rare diseases. The reliability and sensitivity of these multi-center studies crucially depend on the comparability of the data generated at different sites and time points. The level of inter-site comparability is still controversial for conventional anatomical T1-weighted MRI data. Quantitative multi-parameter mapping (MPM) was designed to provide MR parameter measures that are comparable across sites and time points, i.e., 1 mm high-resolution maps of the longitudinal relaxation rate (R1 = 1/T1), effective proton density (PD*), magnetization transfer saturation (MT) and effective transverse relaxation rate (R2* = 1/T2*). MPM was validated at 3T for use in multi-center studies by scanning five volunteers at three different sites. We determined the inter-site bias and the inter-site and intra-site coefficients of variation (CoV) for typical morphometric measures [i.e., gray matter (GM) probability maps used in voxel-based morphometry] and the four quantitative parameters. The inter-site bias and CoV were smaller than 3.1% and 8%, respectively, except for the inter-site CoV of R2* (<20%). The GM probability maps based on the MT parameter maps had a 14% higher inter-site reproducibility than maps based on conventional T1-weighted images. The low inter-site bias and variance in the parameters and derived GM probability maps confirm the high comparability of the quantitative maps across sites and time points. The reliability, short acquisition time, high resolution and detailed insight into brain microstructure provided by MPM make it an efficient tool for multi-center imaging studies.
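The comparability metrics quoted here are straightforward to compute once each subject has one parameter estimate per site: the inter-site coefficient of variation is the across-site standard deviation divided by the across-site mean, and the bias is the deviation of each site's value from that mean. A minimal sketch with made-up numbers follows.

    import numpy as np

    # Hypothetical R1 values (1/s) for one subject and one region, measured at 3 sites
    r1_by_site = np.array([0.62, 0.64, 0.61])

    mean = r1_by_site.mean()
    cov_inter_site = r1_by_site.std(ddof=1) / mean     # coefficient of variation
    bias_per_site = (r1_by_site - mean) / mean         # relative bias of each site

    print(f"inter-site CoV = {100 * cov_inter_site:.1f}%")
    print("per-site bias (%):", np.round(100 * bias_per_site, 2))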