811 results for Database, Image Retrieval, Browsing, Semantic Concept
Abstract:
The aim of this work has been to study the motif of apotheosis in order to discover how it is positively integrated into the representation of the Metamorphoses. It seeks to move beyond disparaging or ironic interpretations of the motif. The study consists of two parts. The first chapter of the first part, "Apotheosis in the structure of the Metamorphoses", demonstrates through an intratextual analysis the importance of the account of the creation of man as sanctius animal, considered paradigmatic, for the study of the motif of apotheosis. The second chapter stresses the intertextual dimension, recalling the structural influence of two poetic models of Ovid's Metamorphoses, Hesiod and Virgil's Eclogue 6, on a representation that begins with the creation of man and ends with the apotheosis of the poet. The introduction to the second part, "Semantic value of apotheosis", emphasizes three aspects essential to the study of apotheosis. First, the idea of "reading", i.e., the latent significance of the motif of apotheosis and of the idea of lineage throughout the work. Second, the concept of "mythologization" is proposed as a way of examining the poet's appropriation of historicized myth and of myth with Augustan allusion. The purpose of that appropriation is to highlight the literary aspect of the stories, and it plays a fundamental role in the construction of a figure of the poet who, through the apotheosis achieved by his work, rises above the figures of the philosopher and the politician. Its treatment in this work, then, is due to the fact that it is ultimately linked to the motif of apotheosis.
Since the study takes as its starting point the relation between the sanctius animal, paradigm of the creation of man, and apotheosis, it was decided to examine in detail those apotheoses in which an explicit allusion to the mortal part or to the immortal part appears, i.e., those of Ino and Melicertes, Hercules, Aeneas, Glaucus, Romulus, Hersilia, Caesar, Augustus and Ovid. Other relevant references and apotheoses are nevertheless included. The allusion to the image of the pars, following one of the designations with the greatest formal significance in the work, points back to the creation of man divino semine and from semina caeli, an aspect that is studied in the first chapter of the first part and taken up again in the analysis of each apotheosis. The conclusion of the work is that the motif of apotheosis can only be understood in the light of the ending, in which the apotheosis of the poet is represented. Beyond the mythologization at work in the stories with historical and Augustan value, Ovid took care to preserve, even if only referentially, the meaning of the apotheoses so as to unfold their full significance in the final apotheosis. Conversely, the full value of the motif at the close of the work requires accepting its significance throughout the work. At the end, the poet, a poeta Romanus insofar as his work will be read wherever the Romana potentia extends, reveals himself as the authentic sanctius animal referred to in the creation.
Abstract:
The Wadden Sea is located in the southeastern part of the North Sea, forming an extended intertidal area along the Dutch, German and Danish coasts. It is a highly dynamic and largely natural ecosystem influenced by climatic changes and anthropogenic use of the North Sea. Changes in the environment of the Wadden Sea, whether of natural or anthropogenic origin, cannot be monitored by standard measurement methods alone, because large-area surveys of the intertidal flats are often difficult due to tides, tidal channels and unstable ground. For this reason, remote sensing offers effective monitoring tools. In this study a multi-sensor concept for the classification of intertidal areas in the Wadden Sea has been developed. The basis for this method is a combined analysis of RapidEye (RE) and TerraSAR-X (TSX) satellite data coupled with ancillary vector data on the distribution of vegetation, mussel beds and sediments. The classification of the vegetation and mussel beds is based on a decision tree and a set of hierarchically structured algorithms which use object and texture features. The sediments are classified by an algorithm which uses thresholds and a majority filter. Further improvements focus on radiometric enhancement and atmospheric correction. First results show that vegetation and mussel beds can be identified with multi-sensor remote sensing. The classification of the sediments in the tidal flats is a challenge compared to vegetation and mussel beds. The results demonstrate that the sediments cannot be classified with high accuracy by their spectral properties alone, owing to their similarity, which is predominantly caused by their water content.
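The threshold-plus-majority-filter step for the sediments can be sketched as follows. This is only an illustrative sketch, not the study's implementation: the feature values, thresholds and class names are invented.

```python
# Illustrative sketch of the sediment classification step described above:
# per-pixel thresholding on one feature value, followed by a 3x3 majority
# filter that smooths isolated misclassifications. Thresholds and class
# labels are invented for illustration.

def classify(value):
    """Assign a sediment class from a single feature value via thresholds."""
    if value < 0.3:
        return "mud"
    elif value < 0.6:
        return "mixed"
    return "sand"

def majority_filter(grid):
    """Replace each cell by the most frequent class in its 3x3 neighbourhood."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            votes = {}
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        votes[grid[rr][cc]] = votes.get(grid[rr][cc], 0) + 1
            out[r][c] = max(votes, key=votes.get)
    return out

features = [[0.1, 0.2, 0.7],
            [0.2, 0.9, 0.8],
            [0.1, 0.2, 0.2]]
labels = [[classify(v) for v in row] for row in features]
smoothed = majority_filter(labels)
```

The majority filter is what removes the salt-and-pepper noise typical of purely per-pixel threshold classifiers: the isolated "sand" pixel in the centre is outvoted by its "mud" neighbours.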
Abstract:
A new topographic database for King George Island, one of the most visited areas in Antarctica, is presented. Data from differential GPS surveys, acquired during the summers of 1997/98 and 1999/2000, were combined with up-to-date coastlines from a SPOT satellite image mosaic and topographic information from maps as well as from the Antarctic Digital Database. A digital terrain model (DTM) was generated using the ARC/INFO GIS. A satellite image map was assembled from the satellite image mosaic and contour lines derived from the DTM. Extensive information on data accuracy, on the database, and on the criteria applied to select place names is given in the multilingual map. A lack of accurate topographic information in the eastern part of the island was identified; it was concluded that additional topographic surveying or radar interferometry should be conducted to improve the data quality in this area. Three case studies demonstrate potential applications of the improved topographic database. The first two examples comprise the verification of glacier velocities and the study of glacier retreat from the various input data sets, as well as the use of the DTM for climatological modelling. The last case study focuses on the use of the new digital database as a basic GIS (Geographic Information System) layer for environmental monitoring and management on King George Island.
Abstract:
At present, there is a lack of knowledge on the interannual climate-related variability of zooplankton communities of the tropical Atlantic, central Mediterranean Sea, Caspian Sea, and Aral Sea, due to the absence of appropriate databases. In the mid latitudes, the North Atlantic Oscillation (NAO) is the dominant mode of atmospheric fluctuations over eastern North America, the northern Atlantic Ocean and Europe. Therefore, one of the issues that needs to be addressed through data synthesis is the evaluation of interannual patterns in species abundance and species diversity over these regions with regard to the NAO. The database has been used to investigate the ecological role of the NAO in interannual variations of mesozooplankton abundance and biomass along the zonal array of the NAO influence. The basic approach to the proposed research involved: (1) development of co-operation between experts and data holders in Ukraine, Russia, Kazakhstan, Azerbaijan, the UK, and the USA to rescue and compile the oceanographic data sets and release them on CD-ROM; (2) organization and compilation of a database based on FSU cruises to the above regions; (3) analysis of the basin-scale interannual variability of zooplankton species abundance, biomass, and species diversity.
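The core of step (3), relating interannual zooplankton variability to the NAO, typically reduces to correlating an annual biomass series with a winter NAO index. A minimal sketch, with entirely invented index and biomass values:

```python
# Sketch of the basin-scale interannual analysis described above: Pearson
# correlation between a (hypothetical) winter NAO index and a (hypothetical)
# annual mesozooplankton biomass series. All numbers are invented.

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

nao_index = [-1.2, 0.5, 1.8, 0.3, -0.7, 2.1]   # hypothetical winter NAO index
biomass   = [410, 520, 690, 500, 430, 700]     # hypothetical biomass, mg m^-3

r = pearson(nao_index, biomass)                # strong positive association
```

A real analysis would of course work on the rescued multi-decade cruise data and test significance, but the weight of the method rests on exactly this kind of index-versus-abundance correlation along the zonal array of NAO influence.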
Abstract:
To deliver sample estimates provided with the necessary probability foundation to permit generalization from the sample data subset to the whole target population being sampled, probability sampling strategies are required to satisfy three necessary but not sufficient conditions: (i) all inclusion probabilities must be greater than zero in the target population to be sampled (if some sampling units have an inclusion probability of zero, then a map accuracy assessment does not represent the entire target region depicted in the map to be assessed); (ii) the inclusion probabilities must be (a) knowable for nonsampled units and (b) known for those units selected in the sample, since the inclusion probability determines the weight attached to each sampling unit in the accuracy estimation formulas; if the inclusion probabilities are unknown, so are the estimation weights. This original work presents a novel (to the best of these authors' knowledge, the first) probability sampling protocol for quality assessment and comparison of thematic maps generated from spaceborne/airborne Very High Resolution (VHR) images, where: (I) an original Categorical Variable Pair Similarity Index (CVPSI, proposed in two different formulations) is estimated as a fuzzy degree of match between a reference and a test semantic vocabulary, which may not coincide, and (II) both symbolic pixel-based thematic quality indicators (TQIs) and sub-symbolic object-based spatial quality indicators (SQIs) are estimated with a degree of uncertainty in measurement, in compliance with the well-known Quality Assurance Framework for Earth Observation (QA4EO) guidelines. Like a decision tree, any protocol (guidelines for best practice) comprises a set of rules, equivalent to structural knowledge, and an order of presentation of the rule set, known as procedural knowledge. The combination of these two levels of knowledge makes an original protocol worth more than the sum of its parts.
The several degrees of novelty of the proposed probability sampling protocol are highlighted in this paper, at the levels of understanding of both structural and procedural knowledge, in comparison with related multi-disciplinary works selected from the existing literature. In the experimental session the proposed protocol is tested for accuracy validation of preliminary classification maps automatically generated by the Satellite Image Automatic Mapper™ (SIAM™) software product from two WorldView-2 images and one QuickBird-2 image provided by DigitalGlobe for testing purposes. In these experiments, collected TQIs and SQIs are statistically valid, statistically significant, consistent across maps and in agreement with theoretical expectations, visual (qualitative) evidence and quantitative quality indexes of operativeness (OQIs) claimed for SIAM™ by related papers. As a subsidiary conclusion, the statistically consistent and statistically significant accuracy validation of the SIAM™ pre-classification maps proposed in this contribution, together with the OQIs claimed for SIAM™ by related works, makes the operational (automatic, accurate, near real-time, robust, scalable) SIAM™ software product eligible for opening up new inter-disciplinary research and market opportunities in accordance with the visionary goal of the Global Earth Observation System of Systems (GEOSS) initiative and the QA4EO international guidelines.
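The role of the inclusion probabilities in conditions (i) and (ii) can be illustrated with a small sketch. This is not the authors' protocol, only a generic Horvitz-Thompson-style weighting: each sampled unit enters the accuracy estimate with weight 1/πᵢ, so that unequal selection chances do not bias the map-level estimate. The sample values are invented.

```python
# Sketch of why inclusion probabilities determine the estimation weights:
# a design-based (Horvitz-Thompson-style) overall-accuracy estimator in
# which each sampled unit is weighted by the inverse of its inclusion
# probability pi_i. Sample values are invented for illustration.

def weighted_overall_accuracy(samples):
    """samples: list of (correct: bool, inclusion_probability: float).

    Returns the design-weighted overall accuracy; units drawn with a
    small pi_i stand in for many unsampled units and so weigh more.
    """
    numerator = sum(int(correct) / pi for correct, pi in samples)
    denominator = sum(1.0 / pi for _, pi in samples)
    return numerator / denominator

# Four sampled map units with unequal inclusion probabilities.
samples = [(True, 0.02), (True, 0.05), (False, 0.02), (True, 0.10)]
acc = weighted_overall_accuracy(samples)
```

Note how the estimate differs from the naive 3/4 correct: the misclassified unit has a low inclusion probability, so it represents a large unsampled stratum and pulls the accuracy down. If any πᵢ were zero or unknown, the weights, and hence the estimate, would be undefined, which is exactly conditions (i) and (ii) above.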
Abstract:
Vast portions of Arctic and sub-Arctic Siberia, Alaska and the Yukon Territory are covered by ice-rich silty to sandy deposits that contain large ice wedges, resulting from syngenetic sedimentation and freezing. Accompanied by wedge-ice growth in polygonal landscapes, the sedimentation process was driven by cold continental climatic and environmental conditions in unglaciated regions during the late Pleistocene, inducing the accumulation of the unique Yedoma deposits, up to >50 m thick. Because organic material was rapidly incorporated into syngenetic permafrost during its formation, Yedoma deposits include well-preserved organic matter. Ice-rich deposits like Yedoma are especially prone to degradation triggered by climate change or human activity. When Yedoma deposits degrade, large amounts of sequestered organic carbon as well as other nutrients are released and become part of active biogeochemical cycling. This could be of global significance for future climate warming, as increased permafrost thaw is likely to lead to a positive feedback through enhanced greenhouse gas fluxes. Therefore, a detailed assessment of the current Yedoma deposit coverage and volume is important for estimating its potential response to future climate changes. We synthesized a map of Yedoma coverage together with a thickness estimation, which will provide critical data needed for further research. In particular, this preliminary Yedoma map is a major step toward understanding the spatial heterogeneity of Yedoma deposits and their regional coverage. Further applications lie in reconstructing paleo-environmental dynamics and past ecosystems like the mammoth-steppe-tundra, and in mapping ground ice distribution, including future thermokarst vulnerability.
Moreover, the map will be a crucial improvement of the data basis needed to refine the present-day Yedoma permafrost organic carbon inventory, which is assumed to be between 83±12 (Strauss et al., 2013, doi:10.1002/2013GL058088) and 129±30 (Walter Anthony et al., 2014, doi:10.1038/nature13560) gigatonnes (Gt) of organic carbon in perennially frozen archives. Hence, here we synthesize data on the circum-Arctic and sub-Arctic distribution and thickness of Yedoma to compile a preliminary circum-polar Yedoma map. For compiling this map, we used (1) maps of previous Yedoma coverage estimates, (2) the digitized areas from Grosse et al. (2013), as well as extracted areas of potential Yedoma distribution from additional surface geological and Quaternary geological maps (1.: 1:500,000: Q-51-V,G; P-51-A,B; P-52-A,B; Q-52-V,G; P-52-V,G; Q-51-A,B; R-51-V,G; R-52-V,G; R-52-A,B; 2.: 1:1,000,000: P-50-51; P-52-53; P-58-59; Q-42-43; Q-44-45; Q-50-51; Q-52-53; Q-54-55; Q-56-57; Q-58-59; Q-60-1; R-(40)-42; R-43-(45); R-(45)-47; R-48-(50); R-51; R-53-(55); R-(55)-57; R-58-(60); S-44-46; S-47-49; S-50-52; S-53-55; 3.: 1:2,500,000: Quaternary map of the territory of the Russian Federation; 4.: Alaska Permafrost Map). The digitization was done using GIS techniques (ArcGIS) and vectorization of raster images (Adobe Photoshop and Illustrator). Data on Yedoma thickness were obtained from boreholes and exposures reported in the scientific literature. The map and database are still preliminary and will have to undergo a technical and scientific vetting and review process. In their current form, the Yedoma area polygons carry a range of attributes based on lithological and stratigraphical information from the original source maps, as well as a confidence level for our classification of an area as Yedoma (three stages: confirmed, likely, or uncertain). In its current version, our database includes more than 365 boreholes and exposures and more than 2000 digitized Yedoma areas.
We expect that the database will continue to grow. At this preliminary stage, we estimate the Northern Hemisphere Yedoma deposit area to cover approximately 625,000 km². We estimate that 53% of the total Yedoma area today is located in the tundra zone and 47% in the taiga zone. Separated from west to east, 29% of the Yedoma area is found in North America and 71% in North Asia; the latter comprises 9% in West Siberia, 11% in Central Siberia, 44% in East Siberia and 7% in Far East Russia. Adding the recent maximum Yedoma region (including all Yedoma uplands, thermokarst lakes and basins, and river valleys) of 1.4 million km² (Strauss et al., 2013, doi:10.1002/2013GL058088) and postulating that Yedoma occupied up to 80% of the adjacent formerly exposed and now flooded Beringia shelves (1.9 million km², down to 125 m below modern sea level, between 105°E - 128°W and >68°N), we assume that the Last Glacial Maximum Yedoma region likely covered more than 3 million km² of Beringia. Acknowledgements: This project is part of the Action Group "The Yedoma Region: A Synthesis of Circum-Arctic Distribution and Thickness" (funded by the International Permafrost Association (IPA) to J. Strauss) and is embedded in the Permafrost Carbon Network (working group Yedoma Carbon Stocks). We acknowledge the support of the European Research Council (Starting Grant #338335), the German Federal Ministry of Education and Research (Grant 01DM12011 and "CarboPerm" (03G0836A)), the Initiative and Networking Fund of the Helmholtz Association (#ERC-0013) and the German Federal Environment Agency (UBA, project UFOPLAN FKZ 3712 41 106).
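The regional percentages reported above can be turned back into absolute area estimates from the ~625,000 km² Northern Hemisphere total, which also makes it easy to check that the sub-regional shares are internally consistent:

```python
# Back-of-the-envelope check of the regional breakdown given in the
# abstract: shares of the ~625,000 km^2 Northern Hemisphere Yedoma area.

total_km2 = 625_000

shares = {
    "North America":   0.29,
    "West Siberia":    0.09,
    "Central Siberia": 0.11,
    "East Siberia":    0.44,
    "Far East Russia": 0.07,
}

# Absolute area per region, rounded to whole km^2.
areas_km2 = {region: round(total_km2 * s) for region, s in shares.items()}

# The North Asia total (71%) should equal the sum of its four sub-regions.
north_asia_share = sum(s for r, s in shares.items() if r != "North America")
```

The four Asian sub-shares (9% + 11% + 44% + 7%) do sum to the stated 71%, and together with North America's 29% they account for the full area.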
Abstract:
In the early 1990s, ontology development was something of an art: ontology developers had no clear guidelines on how to build ontologies, only some design criteria to follow. Work on principles, methods and methodologies, together with supporting technologies and languages, turned ontology development into an engineering discipline: the so-called Ontology Engineering. Ontology Engineering refers to the set of activities that concern the ontology development process and the ontology life cycle, the methods and methodologies for building ontologies, and the tool suites and languages that support them. Thanks to the work done in the Ontology Engineering field, the development of ontologies within and between teams has increased and improved, as has the possibility of reusing ontologies in other developments and in final applications. Currently, ontologies are widely used in (a) Knowledge Engineering, Artificial Intelligence and Computer Science; (b) applications related to knowledge management, natural language processing, e-commerce, intelligent information integration, information retrieval, database design and integration, bio-informatics and education; and (c) the Semantic Web, the Semantic Grid, and the Linked Data initiative. In this paper, we provide an overview of Ontology Engineering, mentioning the most outstanding and widely used methodologies, languages, and tools for building ontologies. In addition, we include some words on how all these elements can be used in the Linked Data initiative.
Abstract:
This paper describes an infrastructure for the automated evaluation of semantic technologies and, in particular, semantic search technologies. For this purpose, we present an evaluation framework which follows a service-oriented approach to evaluating semantic technologies and uses the Business Process Execution Language (BPEL) to define evaluation workflows that can be executed by process engines. This framework supports a variety of evaluations from different semantic areas, including search, and is extensible to new evaluations. We show how BPEL addresses this diversity as well as how it is used to solve specific challenges such as heterogeneity, error handling and reuse.
Abstract:
This poster raises the issue of a research work oriented to the storage, retrieval, representation and analysis of dynamic GI, taking into account the semantic, the temporal and the spatiotemporal components. We intend to define a set of methods, rules and restrictions for the adequate integration of these components into the primary elements of the GI: theme, location, time [1]. We intend to establish and incorporate three new structures (layers) into the core of data storage by using mark-up languages: a semantic-temporal structure, a geosemantic structure, and an incremental spatiotemporal structure. The ultimate objective is the modelling and representation of the dynamic nature of geographic features, establishing mechanisms to store geometries enriched with a temporal structure (regardless of space) and a set of semantic descriptors detailing and clarifying the nature of the represented features and their temporality. Thus, data would be provided with the capability of pinpointing and expressing their own basic and temporal characteristics, enabling them to interact with each other according to their context and to the temporal and semantic relationships that could eventually be established.
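What "a geometry enriched with a temporal structure and semantic descriptors, stored via a mark-up language" might look like can be sketched as below. The element names and the feature are entirely invented; the poster does not specify a schema, so this only illustrates the idea of layering temporal and semantic information around a geometry.

```python
# Minimal sketch (invented schema) of a geographic feature whose geometry
# is enriched with a semantic-temporal layer, serialized as mark-up, in
# the spirit of the layered storage structures described above.
import xml.etree.ElementTree as ET

feature = ET.Element("feature", id="road-42")          # hypothetical feature

geom = ET.SubElement(feature, "geometry")
geom.text = "LINESTRING(0 0, 10 5)"                    # geometry, time-free

temporal = ET.SubElement(feature, "semanticTemporal")  # temporal layer
ET.SubElement(temporal, "validFrom").text = "1998-01-01"
ET.SubElement(temporal, "validTo").text = "2004-06-30"

# Semantic descriptors clarifying the nature of the feature in that period.
ET.SubElement(temporal, "semanticDescriptor").text = "unpaved rural road"

xml_text = ET.tostring(feature, encoding="unicode")
```

The point of the separation is that the geometry itself stays purely spatial, while validity intervals and meaning live in their own layers and can be queried or versioned independently.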
Abstract:
The concept of Project encompasses a semantic disparity that involves all areas of professional and non-professional activity. In the engineering projects domain, and starting from the etymological roots of the terms, a review of the definitions given by different authors and of their relation to the sociological trends of recent decades is carried out. Engineering projects began as a tool for the development of technological ideas and have been enriched with legal, economic and management parameters and, recently, with environmental aspects. However, engineering projects involve people, groups, agents, organizations, companies and institutions. Nowadays, the social implications of projects are taken into consideration, but the technology for social integration is not consolidated. This communication provides a new, experience-based framework for the development of engineering projects in the context of "human development", placing people at the center of the project.
Abstract:
In spite of the increasing presence of Semantic Web facilities, only a limited number of the resources available on the Internet provide semantic access. Recent initiatives such as the emerging Linked Data Web are providing semantic access to available data by porting existing resources to the Semantic Web using different technologies, such as database-to-semantic mapping and scraping. Nevertheless, existing scraping solutions are ad hoc, complemented with graphical interfaces to speed up scraper development. This article proposes a generic framework for web scraping based on semantic technologies. This framework is structured in three levels: scraping services, the semantic scraping model and syntactic scraping. The first level provides an interface through which generic applications or intelligent agents can gather information from the web at a high level. The second level defines a semantic RDF model of the scraping process, in order to provide a declarative approach to the scraping task. Finally, the third level provides an implementation of the RDF scraping model for specific technologies. The work has been validated in a scenario that illustrates its application to mashup technologies.
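The core idea of the middle level, mapping scraped fields into an RDF model rather than hard-wiring them into application code, can be sketched as follows. This is not the article's framework: the mapping function, the record and the subject URI are invented, and RDF is emitted as plain N-Triples text to keep the sketch dependency-free.

```python
# Sketch of a declarative semantic scraping step: a mapping from scraped
# field names to RDF predicates turns an extracted record into triples
# (emitted as N-Triples lines). Record, URIs and mapping are invented.

def to_ntriples(subject_uri, record, predicate_map):
    """Turn a scraped record (dict) into N-Triples lines via a field mapping."""
    lines = []
    for field, value in record.items():
        predicate = predicate_map.get(field)
        if predicate is None:
            continue  # field not covered by the semantic model; skip it
        lines.append(f'<{subject_uri}> <{predicate}> "{value}" .')
    return "\n".join(lines)

# A record as a syntactic scraper might extract it from a web page.
record = {"title": "Example post", "author": "A. Writer"}

# The declarative part: which vocabulary term each scraped field means.
predicate_map = {
    "title":  "http://purl.org/dc/terms/title",
    "author": "http://purl.org/dc/terms/creator",
}

triples = to_ntriples("http://example.org/post/1", record, predicate_map)
```

Changing what a scraper extracts then becomes a matter of editing the mapping (data), not the extraction code, which is what makes the approach declarative.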
Abstract:
This paper presents a study of the effect of blurred images on hand biometrics. Blurred images simulate out-of-focus effects in hand image acquisition, a common consequence of unconstrained, contact-less and platform-free hand biometrics on mobile devices. The proposed biometric system performs hand image segmentation based on multiscale aggregation, a segmentation method invariant to perturbations such as noise or blurriness, together with an innovative feature extraction and template creation oriented to obtaining performance invariant to blurring effects. The results highlight that the proposed system is invariant to low degrees of blurriness, requiring an image quality control to detect and correct those images with a high degree of blurriness. The evaluation considered a synthetic database created from a publicly available database of 120 individuals. In addition, several biometric techniques could benefit from the approach proposed in this paper, since blurriness is a very common effect in biometric techniques involving image acquisition.
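Creating a synthetic blurred database from sharp images, as the evaluation describes, amounts to applying a blur filter of increasing strength. The paper does not state which blur model it uses; the sketch below uses a simple mean (box) filter as one common stand-in for out-of-focus blur, with the kernel size controlling the simulated degree of blurriness.

```python
# Sketch of synthesizing blurred samples from a sharp grayscale image:
# a (2k+1)x(2k+1) mean (box) filter; larger k simulates stronger
# out-of-focus blur. The blur model and image are illustrative only.

def box_blur(image, k=1):
    """Mean filter over a 2-D grayscale image, clamped at the borders."""
    rows, cols = len(image), len(image[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            window = [image[rr][cc]
                      for rr in range(max(0, r - k), min(rows, r + k + 1))
                      for cc in range(max(0, c - k), min(cols, c + k + 1))]
            out[r][c] = sum(window) / len(window)
    return out

# A sharp 3x3 test image: a single bright pixel on a dark background.
sharp = [[0, 0, 0],
         [0, 255, 0],
         [0, 0, 0]]

blurred = box_blur(sharp, k=1)   # energy spreads over the neighbourhood
```

A synthetic database would apply this (or a Gaussian kernel) at several values of k to every enrolled image, yielding controlled blurriness levels against which segmentation and matching invariance can be measured.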