942 results for Geospatial Data Model
Abstract:
This paper extends the existing research on real estate investment trust (REIT) operating efficiencies. We estimate a stochastic-frontier panel-data model specifying a translog cost function, covering 1995 to 2003. The results disagree with previous research in that we find little evidence of scale economies and some evidence of scale diseconomies. We also generally find smaller inefficiencies than those shown by other REIT studies. Contrary to previous research, the results show that self-management of a REIT is associated with more inefficiency when we measure output with assets; when we use revenue to measure output, self-management is associated with less inefficiency. Also contrary to previous research, higher leverage is associated with more efficiency. The results further suggest that inefficiency increases over time in three of our four specifications.
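For readers unfamiliar with the setup, a translog cost frontier of the kind estimated here typically takes the following textbook form (a generic sketch; the abstract does not give the paper's exact regressors):

```latex
\ln C_{it} = \beta_0 + \beta_y \ln y_{it} + \tfrac{1}{2}\beta_{yy}(\ln y_{it})^2
           + \sum_j \beta_j \ln p_{jit}
           + \tfrac{1}{2}\sum_j \sum_k \beta_{jk} \ln p_{jit} \ln p_{kit}
           + \sum_j \beta_{jy} \ln p_{jit} \ln y_{it}
           + v_{it} + u_{it}
```

where \(C_{it}\) is cost, \(y_{it}\) output (assets or revenue), \(p_{jit}\) input prices, \(v_{it}\) statistical noise, and \(u_{it} \ge 0\) the inefficiency term; scale economies are read off the output elasticity \(\partial \ln C / \partial \ln y\).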
Abstract:
Manual and low-tech well drilling techniques have the potential to assist in reaching the United Nations' Millennium Development Goal for water in sub-Saharan Africa. This study used publicly available geospatial data in a regression tree analysis to predict groundwater depth in the Zinder region of Niger, in order to identify areas suitable for manual well drilling. Regression trees were developed and tested on a database of 3681 wells in the Zinder region. A tree with 17 terminal leaves provided a range of groundwater depth estimates appropriate for manual drilling, though much of the tree's complexity was associated with depths beyond the reach of manual methods. A natural log transformation of groundwater depth was tested to see whether rescaling the dataset variance would yield finer distinctions in regions of shallow groundwater. For groundwater depths of less than 10 m, the RMSE of a log-transformed tree with only 10 terminal leaves was almost half that of the untransformed 17-leaf tree. The analysis indicated important groundwater relationships for commonly available maps of geology, soils, elevation, and the enhanced vegetation index (EVI) from the MODIS satellite imaging system.
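A minimal sketch of the modelling step described above, using scikit-learn in place of whatever software the authors used; the loader and feature layers (geology, soils, elevation, EVI) are hypothetical stand-ins:

```python
# Sketch: regression tree for groundwater depth, with and without a log
# transform of the target. load_zinder_wells() is a hypothetical loader
# for the 3681-well database; features are illustrative placeholders.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = load_zinder_wells()  # X: geology/soil/elevation/EVI; y: depth (m)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Untransformed tree, pruned to ~17 leaves as in the abstract.
tree = DecisionTreeRegressor(max_leaf_nodes=17).fit(X_tr, y_tr)

# Log-transformed tree with fewer leaves; back-transform its predictions.
log_tree = DecisionTreeRegressor(max_leaf_nodes=10).fit(X_tr, np.log(y_tr))
pred_log = np.exp(log_tree.predict(X_te))

# Compare RMSE on the shallow (< 10 m) wells relevant to manual drilling.
shallow = y_te < 10
rmse = lambda a, b: np.sqrt(mean_squared_error(a, b))
print(rmse(y_te[shallow], tree.predict(X_te)[shallow]),
      rmse(y_te[shallow], pred_log[shallow]))
```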
Abstract:
Recently we have seen a large increase in the amount of geospatial data being published using RDF and Linked Data principles. Efforts such as the W3C Geo XG and, most recently, the GeoSPARQL initiative are providing the vocabularies necessary to publish this kind of information on the Web of Data. In this context it becomes necessary to develop applications that consume and take advantage of these geospatial datasets. In this paper we present map4rdf, a faceted browsing tool for exploring and visualizing RDF datasets enhanced with geospatial information.
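The kind of consumption the paper targets can be illustrated with a simple query from Python; the endpoint URL is a hypothetical placeholder, and SPARQLWrapper stands in for whatever client map4rdf uses internally (the W3C wgs84_pos vocabulary is real):

```python
# Sketch: fetching geo-annotated resources from a SPARQL endpoint.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://example.org/sparql")  # hypothetical endpoint
sparql.setQuery("""
    PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
    SELECT ?resource ?lat ?long WHERE {
        ?resource geo:lat ?lat ;
                  geo:long ?long .
    } LIMIT 100
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["resource"]["value"], row["lat"]["value"], row["long"]["value"])
```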
Abstract:
Compile-time program analysis techniques can be applied to Web service orchestrations to prove or check various properties. In particular, service orchestrations can be subjected to resource analysis, in which safe approximations of upper and lower resource usage bounds are deduced. A uniform analysis can be performed simultaneously for different generalized resources that can be directly correlated with cost- and performance-related quality attributes, such as invocations of partners, network traffic, number of activities, iterations, and data accesses. The resulting safe upper and lower bounds do not depend on probabilistic assumptions, and are expressed as functions of the size or length of data components from an initiating message, using a fine-grained structured data model that corresponds to the XML style of information structuring. The analysis is performed by transforming a BPEL-like representation of an orchestration into an equivalent program in another programming language for which the appropriate analysis tools already exist.
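As a hypothetical illustration of what such an analysis outputs: for an orchestration that iterates over the \(n\) line items of an initiating order message, invoking a partner once per item plus at most one confirmation call, the derived bounds would take the form

```latex
n \;\le\; \mathrm{invocations}(n) \;\le\; n + 1,
\qquad
c_1\, n \;\le\; \mathrm{traffic}(n) \;\le\; c_2\, n + c_3,
```

with the constants \(c_1, c_2, c_3\) determined by message component sizes; the bounds hold for every execution, with no probabilistic assumptions.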
Abstract:
Owing to advances in information technology in general, and in databases in particular, data storage devices are becoming cheaper and data processing speeds are increasing. As a result, organizations tend to store large volumes of data holding great potential information. Decision Support Systems (DSS) try to use the stored data to obtain information valuable to organizations. In this paper, we use both data models and use cases to represent the functionality of data processing in DSS, following Software Engineering processes. We propose a methodology for developing DSS in the analysis phase, with respect to data processing modeling. As a starting point, we use a data model adapted to the semantics involved in multidimensional databases, or data warehouses (DW). We also take an algorithm that provides all the possible ways to automatically cross-check multidimensional model data. Using the above, we propose use case diagrams and descriptions, which can be considered patterns representing DSS functionality with regard to the processing of the DW data on which the DSS are based. We highlight the reusability and automation benefits that can be achieved, and we believe this study can serve as a guide in the development of DSS.
Abstract:
This project consisted of creating a geographic information system (GIS) for the Campus Sur UPM, which can serve as a reference for implementation on any other university campus. The idea arises from campus users' need for a tool that allows them to consult information about the campus's various places and services, with particular emphasis on their geographical location. It was necessary to study the current technologies for implementing a geographic information system, leading to the proposed system: a set of computing resources (hardware and software) that allow campus users to obtain information about, and the location of, campus elements from their mobile phones. Following an analysis of the requirements and functionality the system should have, the project involved the design and implementation of that system. The information to be consulted is stored and made available on a server accessible to campus users. Accordingly, during the project it was necessary to create a data model based on the campus and to load the useful geographic data into a database; this was done with the Smallworld Core 4.2 software product. It was also necessary to deploy server software that lets users query the data from their phones via WiFi or the Internet; the product used for this purpose was Smallworld Geospatial Server 4.2. Queries use the WMS (Web Map Service) and WFS (Web Feature Service) services defined by the OGC (Open Geospatial Consortium), which are tailored to geographic information retrieval. The system also comprises an application for mobile devices running the Android operating system, which allows users of the system to query and display the campus's geographic information; this application was designed and programmed over the course of the project. The project also required a study of the budget that a real deployment of the system would entail and of the maintenance needed to keep the system up to date. Finally, the project includes a brief description of future technologies that could improve the system's functionality: augmented reality and indoor positioning.
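A minimal sketch of the kind of WMS request the mobile client would issue against the server, written with the OWSLib client library; the server URL, layer name, and bounding box are hypothetical placeholders:

```python
# Sketch: requesting a campus map image over OGC WMS.
from owslib.wms import WebMapService

wms = WebMapService("http://campus.example.org/wms",  # placeholder URL
                    version="1.1.1")
img = wms.getmap(layers=["campus_buildings"],          # hypothetical layer
                 styles=[""],
                 srs="EPSG:4326",
                 bbox=(-3.64, 40.38, -3.62, 40.40),    # approx. lon/lat box
                 size=(512, 512),
                 format="image/png")
with open("campus.png", "wb") as f:
    f.write(img.read())
```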
Abstract:
The use of semantic and Linked Data technologies for Enterprise Application Integration (EAI) has been increasing in recent years. Linked Data and Semantic Web technologies such as the Resource Description Framework (RDF) data model provide several key advantages over the current de facto Web Service and XML based integration approaches. The flexibility provided by representing the data in a more versatile RDF model using ontologies makes it possible to avoid complex schema transformations, makes data more accessible using Web standards, and prevents the formation of data silos. These three benefits represent an edge for Linked Data-based EAI. However, work still has to be performed so that these technologies can cope with the particularities of EAI scenarios in areas such as data control, ownership, consistency, and accuracy. The first part of the paper provides an introduction to Enterprise Application Integration using Linked Data and the requirements that EAI imposes on Linked Data technologies, focusing on one of the problems that arise in this scenario, the coreference problem, and presents a coreference service that supports the use of Linked Data in EAI systems. The proposed solution introduces the use of a context that aggregates a set of related identities and mappings from those identities to resources that reside in distinct applications and provide different views or aspects of the same entity. A detailed architecture of the Coreference Service is presented, explaining how it can be used to manage the contexts, identities, resources, and applications to which they relate. The paper shows how the proposed service can be used in an EAI scenario through an example involving a dashboard that integrates data from different systems, together with the proposed workflow for registering and resolving identities. As most enterprise applications are driven by business processes and involve legacy data, the proposed approach can be easily incorporated into enterprise applications.
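A toy sketch of the context/identity idea (not the paper's actual API): a context groups the identifiers under which one real-world entity appears across applications, and resolution returns every known view of that entity. All names below are invented for illustration:

```python
# Toy sketch of a coreference context: one entity, many per-application IDs.
from collections import defaultdict

class CoreferenceService:
    def __init__(self):
        self._contexts = defaultdict(set)   # context id -> identity URIs
        self._resources = defaultdict(set)  # identity URI -> resource URIs

    def register(self, context, identity, resource):
        """Record that `identity` (in some application) is a view of the
        entity represented by `context`, exposed at `resource`."""
        self._contexts[context].add(identity)
        self._resources[identity].add(resource)

    def resolve(self, context):
        """Return all resources, across applications, for one entity."""
        return {r for i in self._contexts[context] for r in self._resources[i]}

svc = CoreferenceService()
svc.register("ctx:acme-customer-42", "crm:cust/42", "http://crm.example/42")
svc.register("ctx:acme-customer-42", "erp:party/9", "http://erp.example/9")
print(svc.resolve("ctx:acme-customer-42"))  # both views of the same customer
```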
Abstract:
Replication Data Management (RDM) aims at enabling the use of data collections from several iterations of an experiment. However, RDM faces major challenges in integrating data models and data from empirical study infrastructures that were not designed to cooperate, e.g., variation among the data models of local data sources. [Objective] In this paper we analyze RDM needs and evaluate conceptual RDM approaches to support replication researchers. [Method] We adapted the ATAM evaluation process to (a) analyze the RDM use cases and needs of empirical replication study research groups and (b) compare three conceptual approaches to addressing these needs: central data repositories with a fixed data model, heterogeneous local repositories, and an empirical ecosystem. [Results] While the central and local approaches have major issues that are hard to resolve in practice, the empirical ecosystem allows bridging current gaps in RDM from heterogeneous data sources. [Conclusions] The empirical ecosystem approach should be explored in diverse empirical environments.
Abstract:
Glacier thermomechanical models are defined by systems of partial differential equations stating the basic principles of conservation of mass, momentum, and energy, together with a constitutive law that defines the relationship between the stresses acting on the glacier ice and the resulting deformations. Solving these equations requires a precise definition of the model domain (the geometry of the glacier, obtained from topographic and ground-penetrating radar measurements), as well as a set of boundary conditions, which are obtained from field measurements of the variables involved and constitute a set of geospatial data. The main objective of this thesis is to develop a set of tools that allow us to define the glacier geometry accurately and to obtain a proper set of values for the variables to be used as boundary conditions of the problem. To this end, the thesis addresses the compilation, integration, and study, within a geographic information system, of the existing geospatial data for the Hurd Peninsula on Livingston Island, Antarctica, generated from 1957 to the present. The correct handling and processing of these data yields a further set of elements that allow us to numerically simulate the present thermomechanical regime of the Hurd Peninsula glaciers, as well as their future evolution. With this aim, a complete inventory of geospatial data is first developed and the data captured in the field are processed, so as to establish a reference system common to all of them. All data are also unified under a single standard format for storing and exchanging information, and the corresponding metadata are generated. Techniques are likewise developed to improve the data capture and processing procedures, so that errors are minimized and reliable estimates of them are available. Integrating all the information into a geographic information system (once it has been standardized and inventoried) allows quick and easy consultation by third parties, and makes it possible to carry out a series of operations leading to new layers of information. The analysis of these new data explains the past behavior of the glaciers under study and provides essential elements for simulating their future behavior.
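For reference, the governing equations referred to above are, in their standard full-Stokes textbook form with Glen's flow law as the constitutive relation (the thesis may use a different variant):

```latex
\nabla\!\cdot\mathbf{u} = 0, \qquad
\nabla\!\cdot\boldsymbol{\sigma} + \rho\mathbf{g} = \mathbf{0}, \qquad
\dot{\varepsilon}_{ij} = A(T)\,\tau^{\,n-1}\sigma'_{ij} \;\; (n \approx 3),
```
```latex
\rho c\left(\frac{\partial T}{\partial t} + \mathbf{u}\cdot\nabla T\right)
  = \nabla\!\cdot\!\left(k\nabla T\right) + Q,
```

where \(\mathbf{u}\) is velocity, \(\boldsymbol{\sigma}\) the stress tensor with deviatoric part \(\sigma'_{ij}\), \(\tau\) the effective stress, \(A(T)\) the temperature-dependent rate factor, and \(Q\) the internal heat sources; the glacier geometry and the field measurements discussed above supply the domain and boundary conditions.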
Abstract:
Traditional vegetation mapping methods use high-cost, labour-intensive aerial photography interpretation. This approach can be subjective and is limited by factors such as the extent of remnant vegetation and the differing scale and quality of aerial photography over time. An alternative approach is proposed which integrates a data model, a statistical model and an ecological model, using sophisticated Geographic Information Systems (GIS) techniques and rule-based systems to support fine-scale vegetation community modelling. This approach is based on a more realistic representation of vegetation patterns, with transitional gradients from one vegetation community to another; by contrast, the application of statistical methods alone can impose arbitrary, often unrealistic, sharp boundaries on the model. This GIS-integrated multivariate approach is applied to the problem of vegetation mapping in the complex vegetation communities of the Innisfail Lowlands in the Wet Tropics bioregion of northeastern Australia. The paper presents the full cycle of this vegetation modelling approach, including site sampling, variable selection, model selection, model implementation, internal model assessment, model prediction assessment, integration of discrete vegetation community models to generate a composite pre-clearing vegetation map, model validation against an independent data set, and scale assessment of the model predictions. An accurate pre-clearing vegetation map of the Innisfail Lowlands was generated (r² = 0.83) through GIS integration of 28 separate statistical models. This modelling approach has good potential for wider application, including the provision of vital information for conservation planning and management; a scientific basis for rehabilitation of disturbed and cleared areas; and a viable method for the production of adequate vegetation maps for conservation and forestry planning of poorly studied areas. (c) 2006 Elsevier B.V. All rights reserved.
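One way to picture the integration of 28 discrete community models into a composite map (a hypothetical sketch, not the authors' code) is to predict a probability surface per community and assign each grid cell to the highest-scoring community:

```python
# Sketch: combining per-community model surfaces into one composite map.
# `surfaces` is a (n_communities, rows, cols) array of predicted
# probabilities, one layer per fitted statistical model.
import numpy as np

def composite_map(surfaces: np.ndarray) -> np.ndarray:
    """Assign each grid cell to the community whose model scores highest."""
    return np.argmax(surfaces, axis=0)

surfaces = np.random.rand(28, 200, 300)  # stand-in for the 28 model outputs
veg_map = composite_map(surfaces)        # integer community id per cell
```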
Abstract:
A complete workflow specification requires careful integration of many different process characteristics. Decisions must be made as to the definitions of individual activities, their scope, the order of execution that maintains the overall business process logic, the rules governing the discipline of work list scheduling to performers, the identification of time constraints, and more. The goal of this paper is to address an important issue in workflow modelling and specification: data flow, and its modelling, specification and validation. Researchers have neglected this dimension of process analysis for some time, focussing mainly on structural considerations with limited verification checks. In this paper, we identify and justify the importance of data modelling in overall workflow specification and verification. We illustrate and define several potential data flow problems that, if not detected prior to workflow deployment, may prevent the process from executing correctly, cause it to execute on inconsistent data, or even lead to process suspension. A discussion of the essential requirements that a workflow data model must meet in order to support data validation is also given.
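One classic data-flow problem of the kind the paper defines, missing data (an activity reads a variable that no preceding activity is guaranteed to have written), can be checked with a simple traversal; this sketch assumes a strictly sequential workflow for brevity, and all names are illustrative:

```python
# Sketch: detecting "missing data" anomalies in a sequential workflow.
# Each activity declares the variables it reads and writes.

def missing_data_anomalies(activities):
    """Return (activity, variable) pairs where a read is not preceded
    by any write of that variable."""
    written, problems = set(), []
    for act in activities:
        for var in act["reads"]:
            if var not in written:
                problems.append((act["name"], var))
        written |= set(act["writes"])
    return problems

workflow = [
    {"name": "receive_order", "reads": [],           "writes": ["order"]},
    {"name": "check_credit",  "reads": ["customer"], "writes": ["approved"]},
    {"name": "ship",          "reads": ["order", "approved"], "writes": []},
]
print(missing_data_anomalies(workflow))  # [('check_credit', 'customer')]
```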
Abstract:
Conceptual modelling forms an important part of systems analysis. If this is done incorrectly or incompletely, there can be serious implications for the resultant system, specifically in terms of rework and usability. One approach to improving the conceptual modelling process is to evaluate how well the model represents reality. The emergence of the Bunge-Wand-Weber (BWW) ontological model introduced a platform for classifying and comparing the grammars of conceptual modelling languages. This work applies the BWW theory to a real-world example in the health arena. The General Practice Computing Group (GPCG) data model was developed using the Barker Entity Relationship Modelling technique. We describe an experiment, grounded in ontological theory, which evaluates how well the GPCG data model is understood by domain experts. The results show that, with the exception of the use of entities to represent events, the raw model is better understood by domain experts.
Abstract:
Even when data repositories exhibit near-perfect data quality, users may formulate queries that do not correspond to the information requested. Users' poor information retrieval performance may arise either from problems understanding the data models that represent the real-world systems, or from their query skills. This research focuses on users' understanding of the data structures, i.e., their ability to map between the information request and the data model. The Bunge-Wand-Weber ontology was used to formulate three sets of hypotheses. Two laboratory experiments (one using a small data model and one using a larger data model) tested the effect of ontological clarity on users' performance when undertaking component-, record-, and aggregate-level tasks. For the hypotheses associated with different representations but equivalent semantics, the results indicate that participants using the parsimonious data model performed better on component-level tasks, while participants using the ontologically clearer data model performed better on record- and aggregate-level tasks.
Abstract:
The inclusion of high-level scripting functionality in state-of-the-art rendering APIs indicates a movement toward data-driven methodologies for structuring next-generation rendering pipelines. A similar theme can be seen in the use of composition languages to deploy component software through the selection and configuration of collaborating component implementations. In this paper we introduce the Fluid framework, which places particular emphasis on the use of high-level data manipulations in order to develop component-based software that is flexible, extensible, and expressive. We introduce a data-driven, object-oriented programming methodology for component-based software development, and demonstrate how a rendering system with a similar focus on abstract manipulations can be incorporated in order to develop a visualization application for geospatial data. In particular, we describe a novel SAS script integration layer that provides access to vertex and fragment programs, producing a very controllable, responsive rendering system. The proposed system is very similar to developments speculatively planned for DirectX 10, but uses open standards and has cross-platform applicability. © The Eurographics Association 2007.
Abstract:
Although the importance of dataset fitness-for-use evaluation and intercomparison is widely recognised within the GIS community, no practical tools have yet been developed to support such interrogation. GeoViQua aims to develop a GEO label which will visually summarise, and allow interrogation of, the key informational aspects of geospatial datasets upon which users rely when selecting datasets for use. The proposed GEO label will be integrated into the Global Earth Observation System of Systems (GEOSS) and will be used as a value and trust indicator for datasets accessible through the GEO Portal. As envisioned, the GEO label will act as a decision support mechanism for dataset selection and thereby, we hope, improve user recognition of dataset quality. To date we have conducted three user studies to (1) identify the informational aspects of geospatial datasets upon which users rely when assessing dataset quality and trustworthiness, (2) elicit initial user views on a GEO label and its potential role, and (3) evaluate prototype label visualisations. Our first study revealed that, when evaluating the quality of data, users consider eight facets: dataset producer information; producer comments on dataset quality; dataset compliance with international standards; community advice; dataset ratings; links to dataset citations; expert value judgements; and quantitative quality information. Our second study confirmed the relevance of these facets in terms of the community-perceived function that a GEO label should fulfil: users and producers of geospatial data supported the concept of a GEO label that provides a drill-down interrogation facility covering all eight informational aspects. Consequently, we developed three prototype label visualisations and evaluated their comparative effectiveness and user preference in a third user study, to arrive at a final graphical GEO label representation. When integrated into GEOSS, an individual GEO label will be provided for each dataset in the GEOSS clearinghouse (or other data portals and clearinghouses) based on its available quality information. Producer and feedback metadata documents are used to dynamically assess information availability and generate the GEO labels. The producer metadata document can be either a standard ISO-compliant metadata record supplied with the dataset or an extended version of a GeoViQua-derived metadata record, and is used to assess the availability of a producer profile, producer comments, compliance with standards, citations, and quantitative quality information. GeoViQua is also currently developing a feedback server to collect and encode (as metadata records) user and producer feedback on datasets; these metadata records will be used to assess the availability of user comments, ratings, expert reviews, and user-supplied citations for a dataset. The GEO label will provide drill-down functionality allowing a user to navigate to a GEO label page offering detailed quality information for its associated dataset. At this stage, we are developing the GEO label service that will provide GEO labels on demand based on supplied metadata records. In this presentation, we will provide a comprehensive overview of the GEO label development process, with specific emphasis on the GEO label implementation and integration into GEOSS.
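The availability assessment described above can be pictured with a small sketch; the field names below are invented for illustration (the real service works on ISO/GeoViQua metadata records, not Python dictionaries):

```python
# Sketch: deciding which of the 8 GEO label facets can be shown for a
# dataset, from merged producer + feedback metadata. Field names are
# hypothetical placeholders.
FACETS = [
    "producer_profile", "producer_comments", "standards_compliance",
    "user_comments", "ratings", "citations", "expert_reviews",
    "quantitative_quality",
]

def available_facets(producer_meta: dict, feedback_meta: dict) -> dict:
    """Mark each facet available only if some metadata source populates it."""
    merged = {**producer_meta, **feedback_meta}
    return {facet: bool(merged.get(facet)) for facet in FACETS}

label = available_facets(
    {"producer_profile": "...", "standards_compliance": ["ISO 19115"]},
    {"ratings": [4, 5], "user_comments": []},
)
print(label)  # e.g. {'producer_profile': True, ..., 'user_comments': False}
```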