913 results for Nearest Neighbor
Abstract:
Charged and neutral oxygen vacancies in the bulk and on perfect and defective surfaces of MgO are characterized as quantum-mechanical subsystems chemically bonded to the host lattice and containing most of the charge left by the removed oxygens. Attractors of the electron density appear inside the vacancy, a necessary condition for the existence of a subsystem according to the atoms in molecules theory. The analysis of the electron localization function also shows attractors at the vacancy sites, which are associated with a localization basin shared with the valence domain of the nearest oxygens. This polyatomic superanion exhibits chemical trends guided by the formal charge and the coordination of the vacancy. The topological approach is shown to be essential to understand and predict the nature and chemical reactivity of these objects. What remains is not a vacancy but a coreless pseudoanion that behaves as an activated host oxygen.
Abstract:
In this paper we examine whether access to markets had a significant influence on the migration choices of Spanish internal migrants in the inter-war years. We perform a structural contrast of a New Economic Geography (NEG) model that focuses on the forward linkage connecting workers' location choices with the geography of industrial production, one of the centripetal forces that drive agglomeration in NEG models. The results highlight the presence of this forward linkage in the Spanish economy of the inter-war period. That is, we prove the existence of a direct relation between workers' location decisions and the market potential of the host regions. In addition, the direct estimation of the values associated with key parameters in the NEG model allows us to simulate the migratory flows derived from different scenarios of the relative size of regions and the distances between them. We show that in Spain the power of attraction of the agglomerations grew as they increased in size, but the high elasticity estimated for the migration costs reduced the intensity of the migratory flows. This could help to explain the apparently low intensity of internal migrations in Spain until their upsurge during the 1920s. It also explains the geography of migrations in Spain during this period, which hardly affected the regions furthest from the large industrial agglomerations (i.e., regions such as Andalusia, Extremadura and Castile-La Mancha) but had an intense effect on the provinces nearest to the principal centres of industrial development.
Abstract:
Terrestrial laser scanning (TLS) is one of the most promising surveying techniques for rock-slope characterization and monitoring. Landslide and rockfall movements can be detected by means of comparison of sequential scans. One of the most pressing challenges of natural hazards is the combined temporal and spatial prediction of rockfall. An outdoor experiment was performed to ascertain whether the TLS instrumental error is small enough to enable detection of precursory displacements of millimetric magnitude. It consisted of a known displacement of three objects relative to a stable surface. Results show that millimetric changes cannot be detected by the analysis of the unprocessed datasets. Displacement measurements are improved considerably by applying Nearest Neighbour (NN) averaging, which reduces the error (1σ) by up to a factor of 6. This technique was applied to displacements prior to the April 2007 rockfall event at Castellfollit de la Roca, Spain. The maximum precursory displacement measured was 45 mm, approximately 2.5 times the standard deviation of the model comparison, hampering the distinction between actual displacement and instrumental error using conventional methodologies. Encouragingly, the precursory displacement was clearly detected by applying the NN averaging method. These results show that millimetric displacements prior to failure can be detected using TLS.
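The NN averaging described here is, in essence, a moving average of a per-point quantity over each point's nearest neighbours in the compared point clouds. A minimal sketch of the idea, assuming a k-d tree for the neighbour search (the function name, the value of k and the synthetic data are illustrative assumptions, not the paper's actual processing chain):

```python
import numpy as np
from scipy.spatial import cKDTree

def nn_average(points, values, k=25):
    """Smooth a per-point quantity (e.g. scan-to-scan distances) by
    averaging it over each point's k nearest neighbours."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)  # neighbour indices (includes the point itself)
    return values[idx].mean(axis=1)

# Synthetic check: a 5 mm displacement buried in 10 mm (1-sigma) noise.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(2000, 3))        # point cloud (metres)
noisy = 0.005 + rng.normal(0.0, 0.010, size=2000)  # per-point measured distances
smoothed = nn_average(pts, noisy, k=25)
print(noisy.std(), smoothed.std())  # the averaged field is far less noisy
```

Averaging k independent noise realisations shrinks the random error roughly by a factor of sqrt(k), which is consistent with the error reduction the abstract reports.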
Abstract:
Avalanche forecasting is a complex process involving the assimilation of multiple data sources to make predictions over varying spatial and temporal resolutions. Numerically assisted forecasting often uses nearest neighbour methods (NN), which are known to have limitations when dealing with high dimensional data. We apply Support Vector Machines to a dataset from Lochaber, Scotland to assess their applicability in avalanche forecasting. Support Vector Machines (SVMs) belong to a family of theoretically based techniques from machine learning and are designed to deal with high dimensional data. Initial experiments showed that SVMs gave results which were comparable with NN for categorical and probabilistic forecasts. Experiments utilising the ability of SVMs to deal with high dimensionality in producing a spatial forecast show promise, but require further work.
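The NN-versus-SVM comparison can be reproduced in spirit on synthetic data. The sketch below, using scikit-learn, contrasts a nearest-neighbour classifier with an RBF-kernel SVM; the features and labels are invented stand-ins for the Lochaber observations, not the study's actual dataset:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Hypothetical stand-in for a forecasting dataset: each row is one day's
# observations (snowpack / weather variables), label = avalanche day or not.
rng = np.random.default_rng(42)
X = rng.normal(size=(600, 10))
y = (X[:, :3].sum(axis=1) + 0.5 * rng.normal(size=600) > 0).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
knn = KNeighborsClassifier(n_neighbors=10).fit(Xtr, ytr)
svm = SVC(kernel="rbf").fit(Xtr, ytr)
print("kNN accuracy:", knn.score(Xte, yte))
print("SVM accuracy:", svm.score(Xte, yte))
```

Only three of the ten features carry signal here, which mimics the high-dimensionality problem the abstract raises: irrelevant dimensions dilute the distance metric that kNN relies on, while the SVM's margin-based training is less affected.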
Abstract:
The present research deals with an important public health threat: the pollution created by radon gas accumulation inside dwellings. The spatial modeling of indoor radon in Switzerland is particularly complex and challenging because of the many influencing factors that should be taken into account. Indoor radon data analysis must be addressed from both a statistical and a spatial point of view. As a multivariate process, it was important at first to define the influence of each factor. In particular, it was important to define the influence of geology, which is closely associated with indoor radon. This association was indeed observed for the Swiss data but was not proven to be the sole determinant for the spatial modeling. The statistical analysis of the data, at both the univariate and multivariate level, was followed by an exploratory spatial analysis. Many tools proposed in the literature were tested and adapted, including fractality, declustering and moving-window methods. The use of the Quantité Morisita Index (QMI) was proposed as a procedure to evaluate data clustering as a function of the radon level. The existing declustering methods were revised and applied in an attempt to approach the global histogram parameters. The exploratory phase comes along with the definition of multiple scales of interest for indoor radon mapping in Switzerland. The analysis was done with a top-down resolution approach, from regional to local levels, in order to find the appropriate scales for modeling. In this sense, the data partition was optimized in order to cope with the stationarity conditions of geostatistical models. Common methods of spatial modeling such as K Nearest Neighbors (KNN), variography and General Regression Neural Networks (GRNN) were proposed as exploratory tools. In the following section, different spatial interpolation methods were applied to a particular dataset.
A bottom-up approach to method complexity was adopted, and the results were analyzed together in order to find common definitions of continuity and neighborhood parameters. Additionally, a data filter based on cross-validation (the CVMF) was tested with the purpose of reducing noise at the local scale. At the end of the chapter, a series of tests for data consistency and method robustness was performed. This led to conclusions about the importance of data splitting and the limitations of generalization methods for reproducing statistical distributions. The last section was dedicated to modeling methods with probabilistic interpretations. Data transformation and simulations thus allowed the use of multigaussian models and helped take the uncertainty of the indoor radon pollution data into consideration. The categorization transform was presented as a solution for modeling extreme values through classification. Simulation scenarios were proposed, including an alternative proposal for the reproduction of the global histogram based on the sampling domain. Sequential Gaussian simulation (SGS) was presented as the method giving the most complete information, while classification performed in a more robust way. An error measure was defined in relation to the decision function for hardening the data classification. Within the classification methods, probabilistic neural networks (PNN) proved better adapted for modeling high-threshold categorization and for automation. Support vector machines (SVM), on the contrary, performed well under balanced category conditions. In general, it was concluded that no particular prediction or estimation method is better under all conditions of scale and neighborhood definitions. Simulations should be the basis, while other methods can provide complementary information to accomplish efficient indoor radon decision making.
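As a toy illustration of the kind of neighbourhood-based spatial estimator discussed above, the sketch below interpolates a synthetic field with distance-weighted k-nearest-neighbours regression. All data and parameter values are invented for the sketch; the thesis's actual radon measurements and tuned neighbourhood settings are not reproduced here:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Hypothetical stand-in for georeferenced measurements: coordinates -> level.
rng = np.random.default_rng(7)
coords = rng.uniform(0, 10, size=(2000, 2))
level = np.sin(coords[:, 0]) + np.cos(coords[:, 1]) + 0.1 * rng.normal(size=2000)

# Distance-weighted k-NN interpolation onto an unsampled location.
knn = KNeighborsRegressor(n_neighbors=8, weights="distance").fit(coords, level)
pred = knn.predict([[5.0, 5.0]])
print(pred)  # true surface value at (5, 5): sin(5) + cos(5) ≈ -0.675
```

The choice of k and of the weighting scheme plays the role of the continuity and neighborhood parameters the chapter seeks common definitions for: a larger k smooths more and trades local detail for robustness to noise.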
Abstract:
Plants such as Arabidopsis thaliana respond to foliar shade and to neighbors that may become competitors for light resources by elongation growth to secure access to unfiltered sunlight. The challenges faced during this shade avoidance response (SAR) differ under a light-absorbing canopy and during neighbor detection, where light remains abundant. In both situations, elongation growth depends on auxin and on transcription factors of the phytochrome interacting factor (PIF) class. Using a computational modeling approach to study the SAR regulatory network, we identify and experimentally validate a previously unidentified role for LONG HYPOCOTYL IN FAR-RED 1 (HFR1), a negative regulator of the PIFs. Moreover, we find that during neighbor detection, growth is promoted primarily by the production of auxin. In contrast, in true shade, the system operates with less auxin but with an increased sensitivity to the hormonal signal. Our data suggest that this latter signal is less robust, which may reflect a cost-to-robustness tradeoff, a system trait long recognized by engineers and forming the basis of information theory.
Abstract:
Cancer is a reportable disease as stated in the Iowa Administrative Code. Cancer data are collected by the State Health Registry of Iowa, located at The University of Iowa in the College of Public Health's Department of Epidemiology. The staff includes more than 50 people. Half of them, situated throughout the state, regularly visit hospitals, clinics, and medical laboratories in Iowa and neighboring states to collect cancer data. Hospital cancer programs approved by the American College of Surgeons also report their data. A follow-up program tracks more than 99 percent of the cancer survivors diagnosed since 1973. This program provides regular updates for follow-up and survival. The Registry maintains the confidentiality of the patients, physicians, and hospitals providing data. In 2013, data will be collected on an estimated 17,300 new cancers among Iowa residents. In situ cases of bladder cancer are included in the estimates for bladder cancer, to be in agreement with the definition of reportable cases of the Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute. Since 1973 the Iowa Registry has been funded by the SEER Program of the National Cancer Institute. Iowa represents rural and Midwestern populations and provides data included in many NCI publications. Beginning in 1990, about 5-10 percent of the Registry's annual operating budget has been provided by the state of Iowa. Starting in 2003, the University of Iowa has also been providing cost-sharing funds. In addition, the Registry receives funding through grants and contracts with university, state, and national researchers investigating cancer-related topics.
Abstract:
A model has been developed for evaluating grain size distributions in primary crystallizations where the grain growth is diffusion controlled. The body of the model is grounded in a recently presented mean-field integration of the nucleation and growth kinetic equations, conveniently modified in order to take into account a radius-dependent growth rate, as occurs in diffusion-controlled growth. The classical diffusion theory is considered, and a modification of it is proposed to take into account the interference of the diffusion profiles between neighboring grains. The potential of the mean-field model to give detailed information on the grain size distribution and transformed volume fraction for transformations driven by nucleation and either interface- or diffusion-controlled growth processes is demonstrated. The model is evaluated for the primary crystallization of an amorphous alloy, yielding excellent agreement with experimental data. Grain size distributions are computed, and their properties are discussed.
Abstract:
Image registration has been proposed as an automatic method for recovering cardiac displacement fields from Tagged Magnetic Resonance Imaging (tMRI) sequences. Initially performed as a set of pairwise registrations, these techniques have evolved towards 3D+t deformation models, requiring metrics of joint image alignment (JA). However, only linear combinations of cost functions defined with respect to the first frame have been used. In this paper, we have applied k-Nearest Neighbors Graph (kNNG) estimators of the α-entropy (Hα) to measure the joint similarity between frames and to combine the information provided by different cardiac views in a unified metric. Experiments performed on six subjects showed a significantly higher accuracy (p < 0.05) with respect to a standard pairwise alignment (PA) approach in terms of mean positional error and variance with respect to manually placed landmarks. The developed method was used to study strains in patients with myocardial infarction, showing a consistency between strain, infarction location, and coronary occlusion. This paper also presents an interesting clinical application of graph-based metric estimators, showing their value for solving practical problems found in medical imaging.
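Graph-based entropy estimation of the kind used in this paper can be illustrated with the classical Kozachenko-Leonenko k-NN estimator, a simple relative of the kNNG entropy estimators the abstract mentions (the sample, the value of k and the function name below are arbitrary choices for the sketch, not the paper's formulation):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def kl_entropy(x, k=4):
    """Kozachenko-Leonenko k-NN estimate of differential entropy (nats)."""
    n, d = x.shape
    tree = cKDTree(x)
    # Distance to the k-th genuine neighbour (the query returns the point itself first).
    r = tree.query(x, k=k + 1)[0][:, -1]
    log_cd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)  # log volume of the unit d-ball
    return digamma(n) - digamma(k) + log_cd + d * np.mean(np.log(r))

rng = np.random.default_rng(1)
sample = rng.normal(size=(5000, 1))
h = kl_entropy(sample)
print(h)  # exact entropy of N(0,1): 0.5*ln(2*pi*e) ≈ 1.419
```

The key property exploited by such estimators is that nearest-neighbour distances shrink where the density is high, so entropy can be estimated directly from the sample geometry without fitting an explicit density model.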
Abstract:
Plants propagate electrical signals in response to artificial wounding. However, little is known about the electrophysiological responses of the phloem to wounding, and whether natural damaging stimuli induce propagating electrical signals in this tissue. Here, we used living aphids and the direct current (DC) version of the electrical penetration graph (EPG) to detect changes in the membrane potential of Arabidopsis sieve elements (SEs) during caterpillar wounding. Feeding wounds in the lamina induced fast depolarization waves in the affected leaf, rising to maximum amplitude (c. 60 mV) within 2 s. Major damage to the midvein induced fast and slow depolarization waves in unwounded neighbor leaves, but only slow depolarization waves in non-neighbor leaves. The slow depolarization waves rose to maximum amplitude (c. 30 mV) within 14 s. Expression of a jasmonate-responsive gene was detected in leaves in which SEs displayed fast depolarization waves. No electrical signals were detected in SEs of unwounded neighbor leaves of plants with suppressed expression of GLR3.3 and GLR3.6. EPG applied as a novel approach to plant electrophysiology allows cell-specific, robust, real-time monitoring of early electrophysiological responses in plant cells to damage, and is potentially applicable to a broad range of plant-herbivore interactions.
Abstract:
Machine Learning for geospatial data: algorithms, software tools and case studies. The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In short, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features ("geo-features"). They are well suited to implementation as predictive engines in decision support systems, for purposes of environmental data mining including pattern recognition, modeling and prediction as well as automatic data mapping. They are competitive with geostatistical models in low-dimensional geographical spaces but are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from theoretical description of the concepts to software implementation. The main algorithms and models considered are the multilayer perceptron (MLP, a workhorse of machine learning), general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organising (Kohonen) maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF) and mixture density networks (MDN). This set of models covers machine learning tasks such as classification, regression and density estimation. Exploratory data analysis (EDA) is the initial and a very important part of data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, namely experimental variography, and machine learning. Experimental variography, which studies the relations between pairs of points, is a basic tool for geostatistical analysis of anisotropic spatial correlations and helps to detect the presence of spatial patterns, at least as described by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbours (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a topical problem: the automatic mapping of geospatial data. General regression neural networks (GRNN) are proposed as an efficient model for this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters with the following structure: theory, applications, software tools and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydrogeological units, decision-oriented mapping with uncertainties, and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools useful for exploratory data analysis and visualisation were developed as well, with care taken to keep the software user friendly and easy to use.
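The GRNN highlighted above is mathematically equivalent to Nadaraya-Watson kernel regression: each prediction is a Gaussian-kernel weighted average of the training targets. A self-contained sketch of this equivalence follows; the bandwidth sigma and the toy 1-D data are illustrative assumptions, not the thesis's tuned SIC 2004 settings:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.1):
    """GRNN prediction = Nadaraya-Watson kernel regression:
    a Gaussian-kernel weighted average of the training targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

# 1-D sanity check on a smooth noiseless target.
rng = np.random.default_rng(3)
Xtr = rng.uniform(0.0, 1.0, size=(200, 1))
ytr = np.sin(2.0 * np.pi * Xtr[:, 0])
pred = grnn_predict(Xtr, ytr, np.array([[0.25]]), sigma=0.05)
print(pred)  # close to sin(pi/2) = 1, slightly smoothed downward
```

Because the only free parameter is the kernel bandwidth, a GRNN can be retrained and applied almost instantly, which is one reason this family of models suits the automatic-mapping and emergency scenarios the abstract describes.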
Abstract:
Financial exploitation is the unauthorized and illegal use of an individual’s funds, property or resources and includes identity theft. Financial exploitation can be committed by a family member, friend, neighbor or a complete stranger.
Abstract:
OBJECTIVES: Inequalities and inequities in health are an important public health concern. In Switzerland, mortality in the general population varies according to the socio-economic position (SEP) of neighbourhoods. We examined the influence of neighbourhood SEP on presentation and outcomes in HIV-positive individuals in the era of combination antiretroviral therapy (cART). METHODS: The neighbourhood SEP of patients followed in the Swiss HIV Cohort Study (SHCS) 2000-2013 was obtained on the basis of 2000 census data on the 50 nearest households (education and occupation of household head, rent, mean number of persons per room). We used Cox and logistic regression models to examine the probability of late presentation, virologic response to cART, loss to follow-up and death across quintiles of neighbourhood SEP. RESULTS: A total of 4489 SHCS participants were included. Presentation with advanced disease [CD4 cell count <200 cells/μl or AIDS] and with AIDS was less common in neighbourhoods of higher SEP: the age and sex-adjusted odds ratio (OR) comparing the highest with the lowest quintile of SEP was 0.71 [95% confidence interval (95% CI) 0.58-0.87] and 0.59 (95% CI 0.45-0.77), respectively. An undetectable viral load at 6 months of cART was more common in the highest than in the lowest quintile (OR 1.52; 95% CI 1.14-2.04). Loss to follow-up, mortality and causes of death were not associated with neighbourhood SEP. CONCLUSION: Late presentation was more common and virologic response to cART less common in HIV-positive individuals living in neighbourhoods of lower SEP, but in contrast to the general population, there was no clear trend for mortality.