839 results for Digital mapping -- Case studies -- Congresses
Abstract:
OpenStreetMap began in 2004 and has grown alongside free software projects to become the oldest and largest example of what is known as volunteered geographic information (VGI). The growing use of this type of data, however, leaves certain questions open, such as: How reliable are data obtained this way? What is their quality? In this work, the geographic data produced by volunteers within the OpenStreetMap collaborative project are compared with the data produced by institutions and harmonised within the Cartociudad project. The aim of the comparison is to assess the quality of the former against the latter. To that end, the term cartographic quality is defined and the different elements of OpenStreetMap's cartographic quality are evaluated: positional and attribute accuracy, completeness, temporal quality and logical consistency. The work is carried out with data at two levels: the municipality and/or province of Valencia. The results of this analysis show that OpenStreetMap has positional and temporal accuracy that is more than adequate for geocoding and route calculation. However, the heterogeneity of its data coverage and certain internal inconsistencies may compromise its use. Even so, the potential of the project is highlighted, as is an optimal route calculation solution (OpenRouteService) that successfully uses OpenStreetMap data.
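The positional accuracy element evaluated in the study is commonly measured by comparing matched features against the reference dataset. A minimal sketch, not the study's actual methodology, computing the root-mean-square positional error between matched point pairs (the coordinates below are invented):

```python
import math

def positional_rmse(pairs):
    """Root-mean-square positional error between matched point pairs.

    pairs: list of ((x_test, y_test), (x_ref, y_ref)) in projected metres,
    e.g. an OSM intersection matched to its reference counterpart.
    """
    if not pairs:
        raise ValueError("no matched pairs")
    sq = [(a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 for a, b in pairs]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical matched street intersections (metres)
pairs = [((0.0, 0.0), (3.0, 4.0)),        # 5 m offset
         ((100.0, 50.0), (100.0, 50.0))]  # exact match
print(round(positional_rmse(pairs), 3))   # → 3.536
```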
Abstract:
Every year, debris flows cause huge damage in mountainous areas. Due to population pressure in hazardous zones, the socio-economic impact is much higher than in the past. Therefore, the development of indicative susceptibility hazard maps is of primary importance, particularly in developing countries. However, the complexity of the phenomenon and the variability of local controlling factors limit the use of process-based models for a first assessment. A debris flow model, Flow-R, has been developed for regional susceptibility assessments using a digital elevation model (DEM) with a GIS-based approach. The automatic identification of source areas and the estimation of debris flow spreading, based on GIS tools, provide a substantial basis for a preliminary susceptibility assessment at a regional scale. One of the main advantages of this model is its flexibility: everything is open to the user, from the choice of data to the selection of the algorithms and their parameters. The Flow-R model was tested in three different contexts, two in Switzerland and one in Pakistan, for indicative susceptibility hazard mapping. The quality of the DEM proved to be the most important parameter for obtaining reliable propagation results, and also for identifying potential debris flow sources.
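Flow-R combines several spreading algorithms with energy-based runout limits; as a much-simplified illustration of DEM-based propagation (not Flow-R's actual algorithm), the following sketch picks the steepest-descent (D8-style) flow direction on a toy elevation grid with invented values:

```python
# Steepest-descent (D8-style) flow-direction sketch on a toy DEM grid.
# Illustrative simplification only: Flow-R itself uses multiple spreading
# algorithms and energy-based criteria, not plain D8 routing.

def d8_direction(dem, r, c):
    """Return (dr, dc) of the steepest downhill neighbour, or None at a pit."""
    best, step = 0.0, None
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            rr, cc = r + dr, c + dc
            if 0 <= rr < len(dem) and 0 <= cc < len(dem[0]):
                dist = (dr * dr + dc * dc) ** 0.5   # diagonal steps are longer
                slope = (dem[r][c] - dem[rr][cc]) / dist
                if slope > best:
                    best, step = slope, (dr, dc)
    return step

dem = [[9.0, 8.0, 7.0],
       [8.0, 6.0, 5.0],
       [7.0, 5.0, 3.0]]
print(d8_direction(dem, 0, 0))  # → (1, 1): steepest drop is diagonal
```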
Abstract:
This paper presents general problems and approaches for spatial data analysis using machine learning algorithms. Machine learning is a very powerful approach to adaptive data analysis, modelling and visualisation. The key feature of machine learning algorithms is that they learn from empirical data and can be used in cases where the modelled environmental phenomena are hidden, nonlinear, noisy and highly variable in space and time. Most machine learning algorithms are universal and adaptive modelling tools developed to solve the basic problems of learning from data: classification/pattern recognition, regression/mapping and probability density modelling. In the present report some of the widely used machine learning algorithms, namely artificial neural networks (ANN) of different architectures and support vector machines (SVM), are adapted to the analysis and modelling of geo-spatial data. Machine learning algorithms have an important advantage over traditional models of spatial statistics when problems are considered in high-dimensional geo-feature spaces, i.e. when the dimension of the space exceeds 5. Such features are usually generated, for example, from digital elevation models, remote sensing images, etc. An important extension of the models concerns the inclusion of real-space constraints such as geomorphology, networks, and other natural structures. Recent developments in semi-supervised learning can improve the modelling of environmental phenomena by taking geo-manifolds into account. An important part of the study deals with the analysis of relevant variables and model inputs. This problem is approached using different nonlinear feature selection/feature extraction tools.
To demonstrate the application of machine learning algorithms, several interesting case studies are considered: digital soil mapping using SVM, automatic mapping of soil and water system pollution using ANN, natural hazard risk analysis (avalanches, landslides), and assessments of renewable resources (wind fields) with SVM and ANN models. The dimensionality of the spaces considered varies from 2 to more than 30. Figures 1, 2 and 3 demonstrate some results of the studies and their outputs. Finally, the results of environmental mapping are discussed and compared with traditional models of geostatistics.
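As a minimal illustration of learning a classifier from labelled spatial data (the studies above use multi-layer ANNs and SVMs, not this toy model, and the samples below are invented), a single perceptron can be trained on a linearly separable labelling of 2-D coordinates:

```python
# A minimal perceptron - the simplest ANN unit - trained on an invented,
# linearly separable "contaminated (+1) vs clean (-1)" labelling of
# 2-D coordinates. Real studies use multi-layer ANNs and SVMs.

def train_perceptron(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in samples:          # y in {-1, +1}
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
            if pred != y:                    # update only on mistakes
                w[0] += lr * y * x1
                w[1] += lr * y * x2
                b += lr * y
    return w, b

def classify(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1

samples = [((0.0, 0.0), -1), ((0.0, 1.0), -1),
           ((2.0, 2.0), 1), ((3.0, 1.0), 1)]
w, b = train_perceptron(samples)
print([classify(w, b, x1, x2) for (x1, x2), _ in samples])  # → [-1, -1, 1, 1]
```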
Abstract:
Quantitative databases are limited to information identified as important by their creators, while databases containing natural language are limited by our ability to analyze large unstructured bodies of text. Leximancer is a tool that uses semantic mapping to develop concept maps from natural language. We have applied Leximancer to education-based pathology case notes to demonstrate how real patient records or databases of case studies could be analyzed to identify unique relationships. We then discuss how such analysis could be used to conduct quantitative analysis of databases such as the Coronary Heart Disease Database.
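Leximancer's semantic mapping is built on (among other things) how often terms occur together in context. A minimal sketch of that underlying co-occurrence idea only, not Leximancer's actual algorithm, using invented case-note text:

```python
# Count how often pairs of terms appear in the same sentence. Leximancer's
# semantic mapping is far more sophisticated (seeded concept learning,
# thesaurus building); this only illustrates the co-occurrence idea.

from collections import Counter
from itertools import combinations

def cooccurrence(text, terms):
    counts = Counter()
    for sentence in text.lower().split("."):
        present = sorted(t for t in terms if t in sentence)
        counts.update(combinations(present, 2))
    return counts

# Invented, education-style case-note text
notes = ("Patient presents with chest pain and dyspnoea. "
         "History of hypertension. Chest pain worsened on exertion.")
pairs = cooccurrence(notes, {"chest pain", "dyspnoea", "hypertension"})
print(pairs[("chest pain", "dyspnoea")])  # → 1
```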
Structuring and moodleing a course: case studies at the Polytechnic of Porto - School of Engineering
Abstract:
This work presents a comparative study covering four different courses lectured at the Polytechnic of Porto - School of Engineering, with respect to the usage of a particular Learning Management System, i.e. Moodle, and its impact on students' results. Even though positive correlation factors exist, e.g. between the number of Moodle accesses and the final exam grade obtained by each student, the explanation behind them may not be straightforward. Mapping this particular factor to course numbers reveals that the quality of the resources, and not only their quantity, might be preponderant. This paper also addresses teachers who used this platform as a complement to their courses (b-learning) and identifies some particular issues they should be aware of in order to foster students' engagement and learning.
Abstract:
This work presents a comparative study covering four different courses lectured at the Polytechnic of Porto - School of Engineering, regarding the usage of a particular Learning Management System, i.e. Moodle, and its impact on students' results. The study addresses teachers who used this platform as a complement to their courses (b-learning) and identifies some particular issues to consider in order to foster students' engagement and learning. Even though positive correlation factors exist, e.g. between the number of Moodle accesses and the final exam grade obtained by each student, the explanation behind them may not be straightforward. Mapping this particular factor to course numbers reveals that the quality of the resources, and not only their quantity, might be preponderant. These results suggest that dynamic resources might increase students' engagement.
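The correlation factor mentioned above is a standard Pearson coefficient between two per-student series. A minimal sketch with invented access counts and grades (not the study's data):

```python
# Pearson correlation between per-student Moodle access counts and final
# exam grades - the kind of correlation factor the study reports.
# All numbers below are invented for illustration only.

import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

accesses = [12, 40, 55, 80, 95]          # hypothetical Moodle accesses
grades = [8.0, 11.5, 12.0, 15.0, 16.5]   # hypothetical final grades (0-20)
print(round(pearson_r(accesses, grades), 2))  # → 0.99
```

A high coefficient alone does not establish causation, which is exactly the study's caveat: resource quality, not access volume, may drive the result.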
Abstract:
This thesis is devoted to the analysis, modelling and visualisation of spatially referenced environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence particularly concerned with the development of techniques and algorithms that allow a machine to learn from data. In this thesis, machine learning algorithms are adapted to be applied to environmental data and to spatial prediction. Why machine learning? Because most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modelling tools. They can solve classification, regression and probability density modelling problems in high-dimensional spaces composed of spatially referenced informative variables ("geo-features") in addition to geographical coordinates. Moreover, they are well suited to implementation as decision-support tools for environmental questions ranging from pattern recognition to modelling and prediction, including automatic mapping. Their efficiency is comparable to that of geostatistical models in the space of geographical coordinates, but they are indispensable for high-dimensional data including geo-features. The most important and popular machine learning algorithms are presented theoretically and implemented as software for the environmental sciences.
The main algorithms described are the multilayer perceptron (MLP), the best-known algorithm in artificial intelligence; general regression neural networks (GRNN); probabilistic neural networks (PNN); self-organising maps (SOM); Gaussian mixture models (GMM); radial basis function networks (RBF); and mixture density networks (MDN). This range of algorithms covers varied tasks such as classification, regression and probability density estimation. Exploratory data analysis (EDA) is the first step of any data analysis. In this thesis the concepts of exploratory spatial data analysis (ESDA) are treated both through the traditional geostatistical approach of experimental variography and through machine learning principles. Experimental variography, which studies the relations between pairs of points, is a basic tool of geostatistical analysis of anisotropic spatial correlations that detects the presence of spatial patterns describable by a two-point statistic. The machine learning approach to ESDA is presented through the k-nearest neighbours method, which is very simple and has excellent interpretation and visualisation qualities. An important part of the thesis deals with topical subjects such as the automatic mapping of spatial data. The general regression neural network is proposed as an efficient solution to this task.
The performance of the GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, on which the GRNN significantly outperformed all other methods, particularly in emergency scenarios. The thesis consists of four chapters: theory, applications, software tools and guided examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used both for teaching numerous courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real low- and high-dimensional geo-environmental problems, such as air, soil and water pollution by radioactive products and heavy metals, the classification of soil types and hydrogeological units, uncertainty mapping for decision support, and natural hazard assessment (landslides, avalanches). Complementary tools for exploratory data analysis and visualisation were also developed, with care taken to create a friendly, easy-to-use interface.
Machine Learning for geospatial data: algorithms, software tools and case studies
Abstract
The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense machine learning can be considered a subfield of artificial intelligence, mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning?
In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions for classification, regression, and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to implementation as predictive engines in decision support systems, for the purposes of environmental data mining, including pattern recognition, modeling and prediction as well as automatic data mapping. They are competitive in efficiency with geostatistical models in low-dimensional geographical spaces but are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest to the geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to the software implementation. The main algorithms and models considered are the following: the multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression, and density estimation. Exploratory data analysis (EDA) is an initial and very important part of data analysis. In this thesis the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, i.e. experimental variography, and machine learning. Experimental variography is a basic tool for geostatistical analysis of anisotropic spatial correlations which helps to detect the presence of spatial patterns, at least those described by two-point statistics.
A machine learning approach to ESDA is presented by applying the k-nearest neighbours (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a topical subject: the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model to solve this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where the GRNN significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters and has the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. The Machine Learning Office tools were developed over the last 15 years and have been used both for many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and for carrying out fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydro-geological units, decision-oriented mapping with uncertainties, and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well. The software is user friendly and easy to use.
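A GRNN, as proposed for automatic mapping above, is mathematically equivalent to Nadaraya-Watson kernel regression: each prediction is a Gaussian-weighted average of the training values, controlled by a single smoothing parameter. A minimal 1-D sketch with invented data (not the thesis's SIC 2004 setup):

```python
# Minimal general regression neural network (GRNN) sketch, i.e.
# Nadaraya-Watson kernel regression with a Gaussian kernel. A single
# smoothing parameter sigma controls the bandwidth. Toy 1-D data only;
# the same formula extends to spatial coordinates and geo-features.

import math

def grnn_predict(train_x, train_y, x, sigma=1.0):
    weights = [math.exp(-((x - xi) ** 2) / (2 * sigma ** 2)) for xi in train_x]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)

xs = [0.0, 1.0, 2.0, 3.0]   # invented sample locations
ys = [0.0, 1.0, 4.0, 9.0]   # invented measured values
# At a training location with a small sigma, the prediction stays close
# to the observed value there:
print(round(grnn_predict(xs, ys, 2.0, sigma=0.3), 3))  # → 4.008
```

In practice sigma is tuned by cross-validation; too small overfits the samples, too large oversmooths the map.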
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
This paper presents a study documenting the general trends in programming techniques, aided behavioral thresholds, speech perception abilities, and overall behavior when converting children to a processing strategy called HiResolution (HiRes), used with the Advanced Bionics Clarion II Cochlear Implant System.
Abstract:
Education, especially higher education, is considered vital for maintaining national and individual competitiveness in the global knowledge economy. Following the introduction of its “Free Education Policy” as early as 1947, Sri Lanka is now the best performer in basic education in the South Asian region, with a remarkable record in terms of high literacy rates and the achievement of universal primary education. However, access to tertiary education is a bottleneck, due to an acute shortage of university places. In an attempt to address this problem, the government of Sri Lanka has invested heavily in information and communications technologies (ICTs) for distance education. Although this has resulted in some improvement, the authors of this article identify several barriers which are still impeding successful participation for the majority of Sri Lankans wanting to study at tertiary level. These impediments include the lack of infrastructure/resources, low English language proficiency, weak digital literacy, poor quality of materials and insufficient provision of student support. In the hope that future implementations of ICT-enabled education programmes can avoid repeating the mistakes identified by their research in this Sri Lankan case, the authors conclude their paper with a list of suggested policy options.
Abstract:
The use of wireless local area networks (WLANs), as well as the use of multimedia applications, has grown rapidly in recent years. Several factors affect the quality of service (QoS) received by the user, and interference is one of them. This work presents strategies for planning and performance evaluation through an empirical study of the QoS parameters of a voice over Internet Protocol (VoIP) application in a network subject to interference, as well as the importance, when designing wireless networks, of determining the coverage area of an access point, taking into account parameters such as power, jitter, packet loss, delay, and PMOS. Another strategy is based on a hybrid approach that combines measurement and Bayesian inference applied to wireless networks, taking QoS parameters into consideration. The models take a cross-layer view of the network, correlating aspects of the physical environment and of signal propagation (power or distance) with aspects of VoIP applications (e.g., jitter and packet loss). Case studies were carried out in two indoor environments and two outdoor environments, one of them displaying the main characteristics of the Amazon region (e.g., densely forested environments). This last test bed was a real deployed system, because the Government of the State of Pará has a digital inclusion program called NAVEGAPARÁ.
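Of the QoS parameters listed above, jitter has a standard definition for VoIP traffic: RFC 3550 (RTP) specifies interarrival jitter as a running average of packet transit-time differences, J += (|D| - J) / 16. A minimal sketch with invented transit times (not the study's measurements):

```python
# Interarrival jitter estimate as defined in RFC 3550 (RTP), one of the
# VoIP QoS parameters the study measures. Transit times are invented.

def rtp_jitter(transit_times):
    """transit_times: per-packet network transit times in milliseconds."""
    j = 0.0
    for prev, cur in zip(transit_times, transit_times[1:]):
        d = abs(cur - prev)        # transit-time difference between packets
        j += (d - j) / 16.0        # exponential smoothing, gain 1/16
    return j

transits = [20.0, 24.0, 22.0, 30.0, 21.0]  # hypothetical transit times (ms)
print(round(rtp_jitter(transits), 3))      # → 1.347
```

The 1/16 gain makes the estimate react slowly to isolated spikes, which is why a short burst of delayed packets barely moves the reported jitter.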
Abstract:
We compare the impacts across a range of criteria of local and regional procurement (LRP) relative to transoceanic shipment of food aid in Burkina Faso and Guatemala. We find that neither instrument dominates the other across all criteria in either country, although LRP commonly performs at least as well as transoceanic shipment with respect to timeliness, cost, market price impacts, satisfying recipients' preferences, food quality and safety, and in benefiting smallholder suppliers. LRP is plainly a valuable food assistance tool, but its advantages and disadvantages must be carefully weighed, compared, and prioritized depending on the context and program objectives. (C) 2013 Elsevier Ltd. All rights reserved.
Abstract:
Forested wetlands throughout the world are valuable habitats; especially in relatively species-poor northern regions, they can be considered biological hotspots. Unfortunately, these areas have been degraded and destroyed. In recent years, however, the biological importance of wetlands has been increasingly recognized, resulting in the desire to restore disturbed habitats or to create new ones in place of those destroyed. Restoration work is taking place across the globe in a diversity of wetland types, and research must be conducted to determine successful techniques. Accordingly, two studies of the effects of wetland restoration and creation were conducted in forested wetlands in northern Michigan and southern Finland. In North America, northern white-cedar wetlands have been declining in area, despite attempts to regenerate them. Improved methods for successfully establishing northern white-cedar are needed; the target of the first study was therefore to determine whether creating microtopography could benefit white-cedar recruitment and growth. In northern Europe, spruce swamp forests have become a threatened ecosystem due to extensive drainage for forestry. As part of the restoration of these habitats, i.e. rewetting through ditch blocking, Sphagnum mosses are considered a critical element to re-establish, yet an in-depth analysis of how Sphagnum responds to restoration in spruce swamp forests has not previously been done. The aim of the second study was therefore to investigate the ecophysiological functioning of Sphagnum and feather mosses across a gradient of pristine, drained, and restored boreal spruce swamp forests.
Abstract:
The rise in Muslim terrorist activities has encouraged the West to reevaluate its understanding of Islam, prompting concern for Muslim women's rights. In search of education-based solutions, this project explores three case studies of Muslims living under different government types: a secular state with a primarily Muslim population (Turkey); a secular state with a significant Muslim minority population (France); and a Muslim state with a powerful religious influence (Afghanistan). The type of government plays a significant role in Muslim women's rights, and solutions must be based on individual aspects of each unique place where Muslims live today. The results show that education is a universal solution when accepted at all levels: governmental, communal, and the individual.