20 results for cartographic generalisation
at Universidad Politécnica de Madrid
Abstract:
Spatial Data Infrastructures have become a methodological and technological benchmark enabling distributed access to historical-cartographic archives. However, it is essential to offer enhanced virtual tools that imitate the current processes and methodologies carried out by librarians, historians and academics in the existing map libraries around the world. These virtual processes must be supported by a generic framework for managing, querying, and accessing distributed georeferenced resources and other content types such as scientific data or information. The authors have designed and developed support tools that provide enriched browsing, measurement and geometrical analysis capabilities, and dynamic querying methods, based on SDI foundations. The DIGMAP engine and the IBERCARTO collection enable access to georeferenced historical-cartographic archives. Based on lessons learned from the CartoVIRTUAL and DynCoopNet projects, a generic service architecture scheme is proposed. In this way, it is possible to integrate virtual map rooms and SDI technologies, providing support to researchers in the historical and social domains.
Abstract:
Purpose: In this paper we study all settlements shown on the map of the Province of Madrid, sheet number 1 of the AGE (Atlas Geográfico de España of Tomás López, 1804), and their correspondence with current ones. This map is divided into two zones: Madrid and Almonacid de Zorita. Method: The steps followed in the methodology are as follows: 1. Georeference the maps in a latitude-longitude framework, moving the historical longitude origin to the longitude origin of modern cartography. 2. Digitize all population settlements or cities (97 in Madrid and 42 in Almonacid de Zorita). 3. Identify the historical settlements or cities corresponding to current ones. 4. If the maps have the same orientation and scale, replace the coordinate transformation of the historical settlements with a translation in latitude and longitude equal to the mean offset calculated over all points of the old map with respect to their modern counterparts. 5. Calculate the absolute accuracy of the two maps. 6. Draw the settlement accuracy in the GIS. Result: It was found that most AGE settlements have a good correspondence with current ones; only 27 settlements were lost in Madrid and 2 in Almonacid. The average accuracy is 2.3 km for Madrid and 5.7 km for Almonacid de Zorita. Discussion & Conclusion: The final accuracy map shows that the error is smaller in the middle of the map. This study highlights the great work done by Tomás López in producing this map without fieldwork and demonstrates the value of his work in the history of cartography.
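The translation in step 4 and the accuracy measure in steps 5 and 6 can be sketched as follows. This is a minimal illustration assuming simple lists of paired (lat, lon) points and a planar small-area approximation of distances; it is not the authors' actual code, and the coordinate values are placeholders.

```python
import math

# Hypothetical paired point lists: historical map settlements and their
# modern counterparts, as (lat, lon) tuples in decimal degrees.
historical = [(40.42, -3.70), (40.30, -3.55)]   # placeholder values
modern     = [(40.41, -3.73), (40.29, -3.58)]   # placeholder values

# Step 4: translation equal to the mean offset between old and new points.
d_lat = sum(m[0] - h[0] for h, m in zip(historical, modern)) / len(historical)
d_lon = sum(m[1] - h[1] for h, m in zip(historical, modern)) / len(historical)

def planar_error_km(lat_h, lon_h, lat_m, lon_m):
    """Approximate positional error in km after applying the translation."""
    lat_t, lon_t = lat_h + d_lat, lon_h + d_lon
    dy = (lat_m - lat_t) * 111.32                                  # km per degree of latitude
    dx = (lon_m - lon_t) * 111.32 * math.cos(math.radians(lat_m))  # shrink with latitude
    return math.hypot(dx, dy)

# Step 5: absolute accuracy as the mean residual over all settlements.
errors = [planar_error_km(h[0], h[1], m[0], m[1]) for h, m in zip(historical, modern)]
print(f"mean accuracy: {sum(errors) / len(errors):.2f} km")
```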
Abstract:
This article presents a cartographic system to facilitate cooperative manoeuvres among autonomous vehicles in a well-known environment. The main objective is to design an extended cartographic system to help in the navigation of autonomous vehicles. This system has to allow the vehicles not only to access the reference points needed for navigation, but also noticeable information such as the location and type of traffic signals, the proximity to a crossing, the streets en route, etc. To do this, a hierarchical representation of the information has been chosen, in which the information is stored at two levels. The lower level contains the files with the Universal Transverse Mercator (UTM) coordinates of the points that define the reference segments to follow. The upper level contains a directed graph with the relational database in which streets, crossings, roundabouts and other points of interest are represented. Using this new system it is possible to know when the vehicle approaches a crossing, what other paths arrive at that crossing, and, should there be other vehicles circulating on those paths and arriving at the crossing, which one has the highest priority. The data obtained from the cartographic system are used by the autonomous vehicles for cooperative manoeuvres.
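A minimal sketch of such a two-level structure is given below: lower-level segments carrying UTM geometry, and an upper-level directed graph of streets and crossings that references them. The class and field names are assumptions introduced for illustration, not the schema of the system described in the article.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:                 # lower level: UTM points defining a reference path
    segment_id: str
    utm_points: list[tuple[float, float]]   # (easting, northing) in metres

@dataclass
class Crossing:                # upper-level node: crossing, roundabout, ...
    crossing_id: str
    kind: str                  # "crossing", "roundabout", "traffic_light", ...
    incoming: list[str] = field(default_factory=list)   # street ids arriving here

@dataclass
class Street:                  # upper-level directed edge between two nodes
    street_id: str
    from_crossing: str
    to_crossing: str
    segment_id: str            # link down to the lower-level geometry
    priority: int              # higher value = right of way at the crossing

def highest_priority(streets: dict[str, Street], crossing: Crossing) -> str:
    """Among the streets arriving at a crossing, return the one with right of way."""
    return max(crossing.incoming, key=lambda sid: streets[sid].priority)
```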
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major areas, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its best-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
However, linguistic annotation tools still have some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. Then again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
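As a purely illustrative sketch of point (ii), the fragment below maps the ad hoc tagsets of two hypothetical POS taggers onto a shared ontological vocabulary, so that their annotations become comparable and can be combined. The namespace and concept names are assumptions, not OntoTag's actual vocabulary.

```python
# Hypothetical ontology namespace for linguistic categories.
LING = "http://example.org/linguistic-ontology#"

# Tool-specific tags (Penn Treebank-style vs. Universal POS-style) mapped
# to a single ontology term, so annotations from both tools interoperate.
TAG_TO_CONCEPT = {
    ("tagger_A", "NN"):   LING + "CommonNoun",
    ("tagger_A", "VBZ"):  LING + "Verb",
    ("tagger_B", "NOUN"): LING + "CommonNoun",
    ("tagger_B", "VERB"): LING + "Verb",
}

def unify(tool: str, token: str, tag: str) -> dict:
    """Rewrite a tool-specific annotation as an ontology-based one."""
    return {"token": token, "concept": TAG_TO_CONCEPT.get((tool, tag), LING + "Unknown")}

# Two tools annotate the same token with different schemas, but the unified
# annotations agree, which is what makes combining their outputs possible.
print(unify("tagger_A", "maps", "NN"))
print(unify("tagger_B", "maps", "NOUN"))
```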
Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
In computer science, different types of reusable components for building software applications have been proposed as a direct consequence of the emergence of new software programming paradigms. The success of these components for building applications depends on factors such as the flexibility of their combination or the ease of their selection in centralised or distributed environments such as the Internet. In this article, we propose a general type of reusable component, called a primitive of representation, inspired by a knowledge-based approach that can promote reusability. The proposal can be understood as a generalisation of existing partial solutions that is applicable to both software and knowledge engineering for the development of hybrid applications that integrate conventional and knowledge-based techniques. The article presents the structure and use of the component and describes our recent experience in the development of real-world applications based on this approach.
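Since the article does not publish code, the following is only a hypothetical sketch of what a primitive-of-representation-style component could look like: a declarative description usable for selecting the component plus an operational interface for applying it. Every name here is an assumption made for illustration, not the authors' actual design.

```python
from abc import ABC, abstractmethod

class RepresentationPrimitive(ABC):
    """A reusable component that packages a piece of domain knowledge
    together with the operations needed to apply it."""

    @abstractmethod
    def describe(self) -> dict:
        """Declarative metadata used to select the component in a
        centralised or distributed (e.g. Internet) repository."""

    @abstractmethod
    def apply(self, inputs: dict) -> dict:
        """Operational side: use the encapsulated knowledge on concrete data."""

class ThresholdRule(RepresentationPrimitive):
    """Toy knowledge-based primitive combining a symbolic rule with code."""

    def describe(self) -> dict:
        return {"kind": "rule", "task": "classification", "inputs": ["value"]}

    def apply(self, inputs: dict) -> dict:
        return {"label": "high" if inputs["value"] > 10 else "low"}
```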
Abstract:
At the present time almost all map libraries on the Internet are image collections generated by the digitization of early maps. This type of graphic file provides researchers with the possibility of accessing and visualizing historical cartographic information, keeping in mind that this information has a degree of quality that depends upon elements such as the accuracy of the digitization process and proprietary constraints (e.g. visualization, resolution, downloading options, copyright, use constraints). In most cases, access to these map libraries is useful only as a first approach, and it is not possible to use those maps for scientific work due to the sparse tools available to measure, match, analyze and/or combine those resources with different kinds of cartography. This paper presents a method to enrich virtual map rooms and provide historians and other professionals with a tool that lets them make the most of map libraries in the digital era.
Abstract:
In this paper the Alpine cleavage affecting the Permo-Triassic series of the Espadán Range (Castellón) is studied. The cleavage affects argillite and sandstone beds in Saxonian and Buntsandstein facies, with varying degrees of penetrativity. At the cartographic scale it is associated with the Espadán box anticline, with a constant WNW-ESE trend. At the microscopic scale it constitutes a spaced cleavage dominated by pressure-solution and passive (mechanical) rotation of phyllosilicates. At the outcrop scale the cleavage is characterized by a sigmoidal geometry, attributed both to post-cleavage flexural slip in the competent layers and to syn-cleavage flexural flow in the incompetent layers. The proposed kinematic model for its origin includes three main stages: 1) incipient development of cleavage related to layer-parallel shortening, 2) buckling and increasing cleavage penetrativity, and 3) fold amplification and layer-parallel shear.
Abstract:
The underground cellars of the Duero River basin are part of a widespread and deteriorated agricultural landscape that is in danger of disappearing. These architectural complexes are located next to small towns. The constructions are mostly dug into the ground, with a descending gallery or "barrel" through which the cave or cellar is accessed. This wider space is used to make and store wine. Observation and detection of the cellar, both on the outside and underground, is essential for making an inventory of this rural heritage. Geodetection is a non-invasive technique, suitable for determining buried structures in the ground with precision. The work undertaken combines LiDAR survey techniques with GNSS and GPR data. The results are used to identify, with centimetric precision, the construction elements that form the cellar. The graphic and cartographic documents obtained allow optimal visualization of the studied site and can be used in the reconstruction of the place.
Abstract:
Airborne laser scanning systems (LiDAR) are becoming the main instrument for collecting cartographic information, mainly due to the high density of points, the accuracy achieved, and the speed with which digital models can be obtained. However, it would be useful to have algorithms that filter the information by selecting those points measured in the areas of interest. When urban areas are surveyed, buildings are the most important elements. Therefore, a new algorithm is proposed to classify and distinguish the points measured on buildings and to extract, as a result, the outer boundary they define, so that the built-up area can be computed.
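The abstract does not detail the algorithm, so the sketch below only illustrates the general idea under simplifying assumptions: points are classified as building candidates with a crude height-above-ground threshold, and their outer limit is traced with a convex hull (real building footprints usually call for a concave or alpha-shape boundary). It is not the proposed algorithm itself.

```python
import numpy as np
from scipy.spatial import ConvexHull

def building_footprint(points_xyz, ground_z, min_height=2.5):
    """points_xyz: (N, 3) array of LiDAR returns; ground_z: (N,) ground elevation
    under each return. Returns the x, y vertices of the outer boundary."""
    above = points_xyz[:, 2] - ground_z > min_height      # crude height filter
    candidates = points_xyz[above, :2]                    # keep planimetric coordinates
    hull = ConvexHull(candidates)                         # outer limit of the cluster
    return candidates[hull.vertices]                      # boundary polygon vertices

# The enclosed built-up area follows directly from the hull: for 2-D input,
# ConvexHull(...).volume is the polygon area (and .area its perimeter).
```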
Abstract:
This article surveys and evaluates different algorithms for the generalization of vector cartographic data in urban areas, because most of the conflicts that arise in cartographic generalization processes are concentrated there. Although generalization is one of the most difficult procedures to automate, there are tools that implement these algorithms and offer satisfactory results, although none of them can automate the generalization process completely. The tests carried out are then presented, describing and analysing the results obtained and comparing them with work by other authors. The document concludes by assessing possible future work to address the problem of cartographic generalization. This study is part of the CENIT project España Virtual.
Abstract:
A study of algorithms that offer optimal results for the vector generalization of linear entities is presented. The study is part of the CENIT project España Virtual, devoted to research on new cartographic processing algorithms. Generalization is one of the most complex cartographic processes, and it becomes most important when deriving maps from others at larger scales. The need for generalization arises from the impossibility of representing reality in its entirety: reality has to be limited or reduced for the subsequent production of the map while preserving the essential characteristics of the mapped geographic space. The aim, therefore, is to obtain a simplified but representative image of reality. Since nearly eighty percent of vector cartography is composed of linear elements, the research focuses on algorithms able to process them, and it also shows that their application can be extended to areal elements, since these are handled through the closed line that defines them. The study also examines the processes grouped under linear exaggeration, which seek to highlight or emphasise those features of linear entities without which the representativeness of the map would be diminished. These tools, together with better-known ones such as line simplification and smoothing, can offer satisfactory results within a generalization process.
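As a concrete reference point for the line-simplification family mentioned above, the following is a generic implementation of the classic Douglas-Peucker algorithm. It is shown for illustration only and is not the project's own code.

```python
import math

def _point_segment_distance(p, a, b):
    """Distance from point p to the segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def douglas_peucker(line, tolerance):
    """Simplify a polyline (list of (x, y)) keeping only vertices that deviate
    more than `tolerance` from the chord between the retained endpoints."""
    if len(line) < 3:
        return list(line)
    dists = [_point_segment_distance(p, line[0], line[-1]) for p in line[1:-1]]
    i_max, d_max = max(enumerate(dists, start=1), key=lambda kv: kv[1])
    if d_max <= tolerance:
        return [line[0], line[-1]]            # everything in between is discarded
    left = douglas_peucker(line[: i_max + 1], tolerance)
    right = douglas_peucker(line[i_max:], tolerance)
    return left[:-1] + right                  # avoid duplicating the split vertex
```

Because closed (areal) features are handled through the closed line that defines them, the same routine can be applied to polygon rings as well.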
Abstract:
Earth observation is a highly useful tool for studying the phenomena that occur on the planet. Observation can be carried out at different scales and by different methods, depending on the purpose. This final degree project (Trabajo Final de Grado) presents the observation of the territory by means of remote sensing techniques and their application to hydrocarbon exploration. Since the Second World War, capturing aerial images of regions of the Earth was restricted to cartographic uses in the strict sense. Since then, a series of scientific advances have made it possible to infer intrinsic characteristics of the Earth through complex mechanisms that are not apparent to the naked eye but are configured by specific geometric and electronic parameters, allowing time series of the physical phenomena occurring on the Earth to be generated. Today the exploitation of the electromagnetic spectrum is at a peak: analysis has moved from the visible region of the spectrum to the spectrum as a whole. This entails the development of new algorithms, techniques and processes to extract as much information as possible about the interaction of matter with electromagnetic radiation. The information generated by the acquisition systems serves for the direct and indirect application of hydrocarbon prospecting methods. Remote sensing techniques, applied in geophysical campaigns, are used to minimise costs and maximise results in field investigations. The prediction of anomalies in the study area depends on the analyst, who designs, computes and evaluates the variations of the electromagnetic energy reflected or emitted by the Earth's surface. For this prediction, different space programmes are reviewed, and the quality of the data and their spectral separability are assessed using supervised and unsupervised classifications. Because of its direct influence on the observations, atmospheric correction is studied: different atmospheric correction models are programmed for multispectral images, and atmospheric correction methods for hyperspectral data are evaluated. The temperature of the area of interest is obtained using the TM-4, ASTER and OLI sensors, together with a Digital Terrain Model generated from the stereoscopic pair captured by the ASTER sensor. Once these procedures have been applied, direct and indirect methods are used to locate areas probably affected by the presence of hydrocarbons and to detect hydrocarbons directly by means of hyperspectral remote sensing. The indirect method uses images captured by the ETM+ and ASTER sensors; the direct method uses images captured by the Hyperion sensor.
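The study programs several atmospheric-correction models for multispectral images; as a hedged illustration of the simplest family of such models, the sketch below applies a dark-object subtraction (DOS) per band. It is a generic example under its own assumptions, not the correction actually implemented in the thesis.

```python
import numpy as np

def dark_object_subtraction(band_dn, percentile=0.01):
    """Subtract an estimate of the atmospheric path contribution from one band.
    band_dn: 2-D array of digital numbers; the dark object is taken as a low
    percentile of the band instead of the absolute minimum to resist noise."""
    dark_value = np.percentile(band_dn, percentile)
    corrected = band_dn.astype(float) - dark_value
    return np.clip(corrected, 0, None)      # reflected signal cannot be negative

# Applied band by band to a multispectral cube of shape (bands, rows, cols):
# corrected_cube = np.stack([dark_object_subtraction(b) for b in cube])
```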
Abstract:
Disturbances shape forest ecosystems by influencing their composition, structure, and processes. In the Mediterranean Basin, changes in disturbance regimes are predicted to occur in the near future, with a higher occurrence of extreme events of drought, wildfire and, to a lesser extent, windstorm. Woody species are the main elements defining the structure and functioning of forest ecosystems. Recently, response-type diversity has been pointed out as an appropriate indicator of ecosystem resilience. To this end, we have compiled a complete response-trait database for the tree and shrub species considered in the Third Spanish National Forest Inventory (3SNFI). In the database, the presence or absence of nine response traits associated with drought, fire, and wind was assigned to each species. The database reflects the lack of information about some important traits (in particular for shrub species) and allowed us to determine which traits are most widely distributed. The information contained in the database was then used to assess a relative index of forest resilience to these disturbances, calculated from the abundance of response traits and the species redundancy for each plot of the 3SNFI, considering both tree and shrub species. In general, few plots showed high values of the resilience index, probably because some traits were scarcely represented among the species and also because most plots contained very few species. The cartographic representation of the index showed low values for stands located in mountain ranges, which are mostly composed of species typical of central Europe. On the other hand, Eucalyptus plantations in Galicia appeared as one of the most resilient ecosystems, due to their higher adaptive capacity to persist after the occurrence of drought, fire, and windstorm events. We conclude that the response-trait database can constitute a useful tool for forest management and planning and for future research to enhance forest resilience.
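The abstract does not give the exact formulation of the plot-level index, so the sketch below is only an assumed illustration of how an index based on trait abundance and species redundancy might be combined. The trait table, trait names and the abundance-times-redundancy combination are placeholders, not the authors' method.

```python
# Hypothetical trait table: species -> set of response traits it possesses
# (out of the nine drought/fire/wind response traits considered).
TRAITS = {
    "Pinus pinaster":      {"serotiny", "thick_bark", "resprouting"},
    "Quercus ilex":        {"resprouting", "deep_roots"},
    "Eucalyptus globulus": {"resprouting", "lignotuber", "thick_bark"},
}
N_TRAITS = 9

def resilience_index(plot_species: list[str]) -> float:
    """Relative index in [0, 1] for one inventory plot."""
    if not plot_species:
        return 0.0
    # Trait abundance: mean fraction of the nine traits present per species.
    abundance = sum(len(TRAITS.get(s, set())) for s in plot_species) / (len(plot_species) * N_TRAITS)
    # Redundancy: how many species share each trait present in the plot, normalised to [0, 1].
    traits_present = set().union(*(TRAITS.get(s, set()) for s in plot_species))
    redundancy = 0.0
    if traits_present:
        redundancy = sum(
            sum(t in TRAITS.get(s, set()) for s in plot_species) / len(plot_species)
            for t in traits_present
        ) / len(traits_present)
    return abundance * redundancy

print(resilience_index(["Pinus pinaster", "Quercus ilex"]))
```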
Abstract:
The multi-dimensional classification problem is a generalisation of the recently popularised task of multi-label classification, where each data instance is associated with multiple class variables. There has been relatively little research carried out specific to multi-dimensional classification and, although one of the core goals is similar (modelling dependencies among classes), there are important differences; namely, a higher number of possible classifications. In this paper we present a method for multi-dimensional classification, drawing from the most relevant multi-label research and combining it with important novel developments. Using a fast method to model the conditional dependence between class variables, we form super-class partitions and use them to build multi-dimensional learners, learning each super-class as an ordinary class and thus explicitly modelling class dependencies. Additionally, we present a mechanism to deal with the many class values inherent to super-classes, and thus make learning efficient. To investigate the effectiveness of this approach we carry out an empirical evaluation on a range of multi-dimensional datasets, under different evaluation metrics, and in comparison with high-performing existing multi-dimensional approaches from the literature. Analysis of the results shows that our approach offers important performance gains over competing methods, while also exhibiting tractable running time.
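The following sketch conveys the super-class idea in its simplest form: class variables whose pairwise dependence (here, mutual information) exceeds a threshold are grouped, and each group's joint value is then learned as a single ordinary class. The dependence measure, threshold and base learner are illustrative choices, not the exact procedure of the paper.

```python
from itertools import combinations
from sklearn.metrics import mutual_info_score
from sklearn.tree import DecisionTreeClassifier

def super_class_partition(Y, threshold=0.1):
    """Greedily merge class variables whose pairwise mutual information exceeds
    a threshold. Y: 2-D integer array (n_samples, n_class_variables).
    Returns a list of column-index groups (the super-classes)."""
    groups = [[j] for j in range(Y.shape[1])]
    for a, b in combinations(range(Y.shape[1]), 2):
        if mutual_info_score(Y[:, a], Y[:, b]) > threshold:
            ga = next(g for g in groups if a in g)
            gb = next(g for g in groups if b in g)
            if ga is not gb:
                ga.extend(gb)
                groups.remove(gb)
    return groups

def fit_super_class_models(X, Y, groups):
    """Train one ordinary classifier per super-class; the joint value of the
    grouped class variables is encoded as a single string-valued label."""
    models = []
    for g in groups:
        labels = ["|".join(str(v) for v in row) for row in Y[:, g]]
        models.append((g, DecisionTreeClassifier().fit(X, labels)))
    return models
```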
Abstract:
Large-scale transport infrastructure projects such as high-speed rail (HSR) produce significant effects on the spatial distribution of accessibility. These effects, commonly known as territorial cohesion effects, are receiving increasing attention in the research literature. However, there is little empirical research into the sensitivity of these cohesion results to methodological issues such as the definition of the limits of the study area or the zoning system. In a previous paper (Ortega et al., 2012), we investigated the influence of scale issues, comparing the cohesion results obtained at four different planning levels. This paper makes an additional contribution to our research with the investigation of the influence of zoning issues. We analyze the extent to which changes in the size of the units of analysis influence the measurement of spatial inequalities. The methodology is tested by application to the Galician (north-western) HSR corridor, with a length of nearly 670 km, included in the Spanish PEIT (Strategic Transport and Infrastructure Plan) 2005-2020. We calculated the accessibility indicators for the Galician HSR corridor and assessed their corresponding territorial distribution. We used five alternative zoning systems depending on the method of data representation used (vector or raster), and the level of detail (cartographic accuracy or cell size). Our results suggest that the choice between a vector-based and raster-based system has important implications. The vector system produces a higher mean accessibility value and a more polarized accessibility distribution than raster systems. The increased pixel size of raster-based systems tends to give rise to higher mean accessibility values and a more balanced accessibility distribution. Our findings strongly encourage spatial analysts to acknowledge that the results of their analyses may vary widely according to the definition of the units of analysis.
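For readers unfamiliar with accessibility indicators, the fragment below shows a common location-based formulation (a population-weighted mean travel time per zone). It is given only to illustrate the kind of indicator whose territorial distribution is compared across zoning systems, and is not necessarily the exact indicator used in the paper; the zone names and values are placeholders.

```python
def weighted_mean_travel_time(travel_times: dict[str, float],
                              population: dict[str, float]) -> float:
    """Accessibility of one zone: average travel time to every destination,
    weighted by the destination's population (lower = more accessible)."""
    total_pop = sum(population.values())
    return sum(travel_times[d] * population[d] for d in travel_times) / total_pop

# The zoning effect enters through the units of analysis: the same indicator
# computed on vector-based municipalities or on raster cells of growing size
# yields the different mean values and distributions discussed above.
zone_value = weighted_mean_travel_time({"Madrid": 95.0, "Vigo": 40.0},
                                       {"Madrid": 3.2e6, "Vigo": 3.0e5})
```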