917 results for Spatial data warehouse
Abstract:
Spatial data on species distributions are available in two main forms: point locations and distribution maps (polygon ranges and grids). The former are often too temporally and spatially biased, and too discontinuous, to be useful (untransformed) in spatial analyses. A variety of modelling approaches are used to transform point locations into maps. We discuss the attributes that point location data and distribution maps must satisfy in order to be useful in conservation planning. We recommend that before point location data are used to produce and/or evaluate distribution models, the dataset should be assessed against a set of criteria, including sample size, age of data, environmental/geographical coverage, independence, accuracy, time relevance and (often forgotten) representation of areas of permanent and natural presence of the species. Distribution maps must satisfy additional attributes if used for conservation analyses and strategies, including minimization of commission and omission errors, credibility of the source/assessors and availability for public scrutiny. We review currently available global databases for mammals and show that they vary widely in how well they meet these attributes. The heterogeneity and weakness of spatial data seriously constrain their utility for global, and also sub-global, conservation analyses.
Abstract:
Funds for this report and grant were provided to the Iowa Division of Criminal and Juvenile Justice Planning (CJJP) and Statistical Analysis Center, by the Justice Research and Statistics Association (JRSA) through a cooperative agreement entitled “Juvenile Justice Evaluation Resource Center” with the Office of Juvenile Justice and Delinquency Prevention (OJJDP), U.S. Department of Justice (DOJ).
Abstract:
This paper presents a review of methodology for semi-supervised modeling with kernel methods in settings where the manifold assumption is guaranteed to be satisfied. It concerns environmental data modeling on natural manifolds, such as the complex topographies of mountainous regions, where environmental processes are strongly influenced by the relief. These relations, possibly regionalized and nonlinear, can be modeled from data with machine learning by using digital elevation models in semi-supervised kernel methods. The tools and methodological issues discussed in the study include feature selection and semi-supervised Support Vector algorithms. A real case study devoted to data-driven modeling of meteorological fields illustrates the discussed approach.
Abstract:
The present research deals with an application of artificial neural networks to multitask learning from spatial environmental data. The real case study (sediment contamination of Lake Geneva) involves eight pollutants. The relationships between these variables range from linear correlations to strong nonlinear dependencies. The main idea is to construct subsets of pollutants which can be efficiently modeled together within the multitask framework. The proposed two-step approach is based on: 1) a criterion of the nonlinear predictability of each variable k, obtained by analyzing all possible models composed from the rest of the variables, using a General Regression Neural Network (GRNN) as the model; 2) multitask learning of the best model using a multilayer perceptron, followed by spatial predictions. The results of the study are analyzed using both machine learning and geostatistical tools.
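The GRNN in step 1 is, in essence, a Nadaraya-Watson kernel regression: a kernel-weighted average of the training targets. A minimal sketch of how such a predictability criterion could be scored, assuming a Gaussian kernel, an illustrative bandwidth, and a leave-one-out R² as the score; the names `grnn_predict` and `loo_r2` and the toy data are hypothetical, not taken from the study:

```python
import math

def grnn_predict(train_X, train_y, x, sigma=0.5):
    """GRNN (Nadaraya-Watson) estimate: a Gaussian-kernel-weighted
    average of the training targets."""
    weights = []
    for xi in train_X:
        d2 = sum((a - b) ** 2 for a, b in zip(xi, x))
        weights.append(math.exp(-d2 / (2.0 * sigma ** 2)))
    total = sum(weights)
    if total == 0.0:
        return sum(train_y) / len(train_y)  # fall back to the global mean
    return sum(w * y for w, y in zip(weights, train_y)) / total

def loo_r2(X, y, sigma=0.5):
    """Predictability score of a target variable from the others:
    leave-one-out R^2 of the GRNN predictions."""
    preds = [grnn_predict(X[:i] + X[i + 1:], y[:i] + y[i + 1:], X[i], sigma)
             for i in range(len(X))]
    mean_y = sum(y) / len(y)
    ss_res = sum((p - t) ** 2 for p, t in zip(preds, y))
    ss_tot = sum((t - mean_y) ** 2 for t in y)
    return 1.0 - ss_res / ss_tot
```

In the paper's scheme, such a score would be computed for every candidate subset of pollutants, and the best-scoring subset passed on to multitask training.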
Abstract:
Many of the most interesting questions ecologists ask lead to analyses of spatial data. Yet, perhaps confused by the large number of statistical models and fitting methods available, many ecologists seem to believe this is best left to specialists. Here, we describe the issues that need consideration when analysing spatial data and illustrate them using simulation studies. Our comparative analysis uses methods including generalized least squares, spatial filters, wavelet-revised models, conditional autoregressive models and generalized additive mixed models to estimate regression coefficients from synthetic but realistic data sets, including some which violate standard regression assumptions. We assess the performance of each method using two measures, together with statistical error rates for model selection. Methods that performed well included the generalized least squares family of models and a Bayesian implementation of the conditional autoregressive model. Ordinary least squares also performed adequately in the absence of model selection, but had poorly controlled Type I error rates and so did not show the improvements in performance under model selection seen with the methods above. Removing large-scale spatial trends in the response led to poor performance. These are empirical results; extrapolation of these findings to other situations should therefore be done cautiously. Nevertheless, our simulation-based approach provides much stronger evidence for comparative analysis than assessments based on one or a few data sets, and should be considered a necessary foundation for statements of this type in future.
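For intuition, generalized least squares estimates regression coefficients after weighting observations by the inverse of their error covariance. A minimal sketch of the diagonal-covariance special case (weighted least squares) for a straight-line fit; full spatial GLS would use the entire covariance matrix of the errors, and the function name and data here are purely illustrative:

```python
def wls_line(x, y, var):
    """Fit y = a + b*x by generalized least squares with a diagonal
    error covariance (weights w_i = 1 / var_i), solving the weighted
    normal equations in closed form."""
    w = [1.0 / v for v in var]
    sw = sum(w)
    swx = sum(wi * xi for wi, xi in zip(w, x))
    swy = sum(wi * yi for wi, yi in zip(w, y))
    swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    # Slope and intercept from the 2x2 weighted normal equations.
    b = (sw * swxy - swx * swy) / (sw * swxx - swx ** 2)
    a = (swy - b * swx) / sw
    return a, b
```

Setting all variances equal recovers ordinary least squares, which is one way to see why OLS is the member of this family that ignores the spatial error structure.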
Abstract:
The paper presents a novel method for monitoring network optimisation based on a recent machine learning technique known as the support vector machine. It is problem-oriented in the sense that it directly answers the question of whether a proposed spatial location is important for the classification model. The method can be used to increase the accuracy of classification models by taking a small number of additional measurements. Traditionally, network optimisation is performed by analysing kriging variances. The method is compared with the traditional approach in a real case study with climate data.
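The underlying idea can be sketched as ranking candidate monitoring locations by their distance to the SVM decision boundary: the closer a candidate lies to the boundary, the less certain the classifier is there, and the more an extra measurement is likely to help. A hedged sketch assuming an already-trained linear decision function f(x) = w·x + b; the weights and candidate coordinates below are hypothetical:

```python
import math

def rank_candidates(w, b, candidates):
    """Rank candidate locations by distance to the decision boundary,
    |w.x + b| / ||w||; the closest (most uncertain) come first."""
    norm = math.sqrt(sum(wi * wi for wi in w))

    def margin(x):
        return abs(sum(wi * xi for wi, xi in zip(w, x)) + b) / norm

    return sorted(candidates, key=margin)
```

A kernel SVM would use the kernelized decision function in place of the dot product, but the ranking principle is the same.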
Abstract:
Radioactive soil-contamination mapping and risk assessment is a vital issue for decision makers. Traditional approaches for mapping the spatial concentration of radionuclides employ various regression-based models, which usually provide a single-value prediction realization accompanied (in some cases) by estimation error. Such approaches do not provide the capability for rigorous uncertainty quantification or probabilistic mapping. Machine learning is a recent and fast-developing approach based on learning patterns and information from data. Artificial neural networks for prediction mapping have been especially powerful in combination with spatial statistics. A data-driven approach provides the opportunity to integrate additional relevant information about spatial phenomena into a prediction model for more accurate spatial estimates and associated uncertainty. Machine-learning algorithms can also be used for a wider spectrum of problems than before: classification, probability density estimation, and so forth. Stochastic simulations are used to model spatial variability and uncertainty. Unlike regression models, they provide multiple realizations of a particular spatial pattern that allow uncertainty and risk quantification. This paper reviews the most recent methods of spatial data analysis, prediction, and risk mapping, based on machine learning and stochastic simulations in comparison with more traditional regression models. The radioactive fallout from the Chernobyl Nuclear Power Plant accident is used to illustrate the application of the models for prediction and classification problems. This fallout is a unique case study that provides the challenging task of analyzing huge amounts of data ('hard' direct measurements, as well as supplementary information and expert estimates) and solving particular decision-oriented problems.
Abstract:
Final-year degree project consisting of the exploitation of a data warehouse for analysing information on road vehicle traffic.
Abstract:
Design, development and exploitation of a data warehouse for a healthcare institution.
Abstract:
The aim of the thesis was to determine whether the quality of the profitability reporting implemented by the thesis commissioner is sufficient in the users' opinion. The profitability reporting is implemented with data warehouse technology. The thesis also aimed to define what software quality means and how it can be evaluated. A qualitative research method was used. The material for the quality assessment was collected by interviewing seventeen active users of the profitability reporting. In the thesis, software quality means its ability to meet or exceed the reasonable wishes and expectations of its users. Quality was assessed using the six quality characteristics defined by the ISO/IEC 9126 standard, which describe software quality with minimal overlap. In addition, the assessment made use of an informative annex that is not part of the standard proper but elaborates on the quality characteristics presented in ISO/IEC 9126. As a result of the study, it can be stated that, according to the users, the profitability reporting is of sufficient quality, since it provides easy-to-use, correctly formatted reports with a sufficiently good response time for the users' needs. From this effective utilization it can be concluded that the construction of the data warehouse succeeded. The study also raised numerous development and improvement ideas, which will serve as one aid for future development work.
Abstract:
Spatial data representation and compression have become focal issues in computer graphics and image processing applications. Quadtrees, as hierarchical data structures based on the principle of recursive decomposition of space, offer a compact and efficient representation of an image. For a given image, the choice of quadtree root node plays an important role in its quadtree representation and final data compression. The goal of this thesis is to present a heuristic algorithm for finding a root node of a region quadtree that reduces the number of leaf nodes compared with the standard quadtree decomposition. The empirical results indicate that the proposed algorithm improves both the quadtree representation and the data compression in comparison with the traditional method.
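Why the root placement matters can be seen with a tiny region quadtree: a uniform block aligned with the quadrant grid is encoded with few leaves, while the same block shifted by one pixel forces a split in every quadrant. A minimal sketch of region-quadtree leaf counting (the 4x4 images are illustrative; the thesis' actual root-selection heuristic is not reproduced here):

```python
def count_leaves(img, x=0, y=0, size=None):
    """Number of leaves in the region quadtree of a square binary image
    whose side is a power of two: a uniform block is one leaf; otherwise
    recurse into the four quadrants."""
    if size is None:
        size = len(img)
    vals = {img[y + dy][x + dx] for dy in range(size) for dx in range(size)}
    if len(vals) == 1:
        return 1  # uniform block -> single leaf
    h = size // 2
    return (count_leaves(img, x,     y,     h) +
            count_leaves(img, x + h, y,     h) +
            count_leaves(img, x,     y + h, h) +
            count_leaves(img, x + h, y + h, h))

aligned = [[1, 1, 0, 0],
           [1, 1, 0, 0],
           [0, 0, 0, 0],
           [0, 0, 0, 0]]  # 2x2 block matches the top-left quadrant
shifted = [[0, 0, 0, 0],
           [0, 1, 1, 0],
           [0, 1, 1, 0],
           [0, 0, 0, 0]]  # same block shifted by one pixel
```

Here the aligned image decomposes into 4 leaves while the shifted one needs 16, which is exactly the sensitivity a root-selection heuristic can exploit.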
Abstract:
Thesis (Master's in Administration Sciences with a Specialization in Systems), UANL.
Abstract:
[Thesis] (Master's in Administrative Informatics with a Specialization in Productive Business Processes), U.A.N.L.
Abstract:
A regional overview of the water quality and ecology of the River Lee catchment is presented. Specifically, data describing the chemical, microbiological and macrobiological water quality and fisheries communities have been analysed, based on a division into river, sewage treatment works, fish-farm, lake and industrial samples. Nutrient enrichment and the highest concentrations of metals and micro-organics were found in the urbanised, lower reaches of the Lee and in the Lee Navigation. Average annual concentrations of metals were generally within environmental quality standards although, on many occasions, concentrations of cadmium, copper, lead, mercury and zinc were in excess of the standards. Various organic substances (used as herbicides, fungicides, insecticides, chlorination by-products and industrial solvents) were widely detected in the Lee system. Concentrations of ten micro-organic substances were observed in excess of their environmental quality standards, though not in terms of annual averages. Sewage treatment works were the principal point-source input of nutrients, metals and micro-organic determinands to the catchment. Diffuse nitrogen sources contributed approximately 60% and 27% of the in-stream load in the upper and lower Lee respectively, whereas approximately 60% and 20% of the in-stream phosphorus load was derived from diffuse sources in the upper and lower Lee. For metals, the most significant source was urban runoff from North London. In reaches less affected by effluent discharges, diffuse runoff from urban and agricultural areas dominated trends. High microbiological content, observed in the River Lee particularly in urbanised reaches, was far in excess of the EC Bathing Water Directive standards.
Water quality issues and degraded habitat in the lower reaches of the Lee have led to impoverished aquatic fauna but, within the mid-catchment reaches and upper agricultural tributaries, less nutrient enrichment and channel alteration has permitted more diverse aquatic fauna.
Abstract:
Little research so far has been devoted to understanding the diffusion of grassroots innovation for sustainability across space. This paper explores and compares the spatial diffusion of two networks of grassroots innovations, the Transition Towns Network (TTN) and Gruppi di Acquisto Solidale (Solidarity Purchasing Groups – GAS), in Great Britain and Italy. Spatio-temporal diffusion data were mined from available datasets, and patterns of diffusion were uncovered through an exploratory data analysis. The analysis shows that GAS and TTN diffusion in Italy and Great Britain is spatially structured, and that the spatial structure has changed over time. TTN has diffused differently in Great Britain and Italy, while GAS and TTN have diffused similarly in central Italy. The uneven diffusion of these grassroots networks on the one hand challenges current narratives on the momentum of grassroots innovations, but on the other highlights important issues in the geography of grassroots innovations for sustainability, such as cross-movement transfers and collaborations, institutional thickness, and interplay of different proximities in grassroots innovation diffusion.