970 results for Information sciences
Abstract:
Current research in geographic information science considers adding another dimension, time, which has largely been missing to this point. Users interested in changes have few functions available for comparing datasets of spatial configurations at different points in time, and such comparisons currently require large amounts of manual labor. An automatic derivation of changes would reduce this effort. The thesis introduces a set of methods that allows changes to be derived automatically. These methods analyze the identity and topological states of objects in snapshots and derive the types of change for the specific configuration of data. The set of change types the methods can compute includes continuous changes such as growing, shrinking, and moving of objects; for these, identity remains unchanged while topological relations may be altered over time. Discrete changes such as merging and splitting, where both identity and topology are affected, can also be derived. Evaluation of the methods using a prototype application with simple examples suggests that they compute uniquely and correctly the type of change that applied in spatial scenarios captured in two snapshots.
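A minimal sketch of how such a derivation might look (all names are illustrative assumptions, not the thesis's actual rules): identity is modelled as a set of object IDs per snapshot, topology as a coarse relation, and a function maps the two snapshot states to a change type.

```python
# Hypothetical sketch: classify the change between two snapshots.
def classify_change(ids_t1, ids_t2, area_t1, area_t2, rel_t1, rel_t2):
    """Derive a change type from identity, size and topology in two snapshots."""
    if ids_t1 != ids_t2:
        # Discrete changes: identity is affected.
        if len(ids_t2) < len(ids_t1):
            return "merge"            # several objects became one
        if len(ids_t2) > len(ids_t1):
            return "split"            # one object became several
        return "identity change"      # e.g. one object replaced by another
    # Continuous changes: identity unchanged, topology may be altered.
    if area_t2 > area_t1:
        return "grow"
    if area_t2 < area_t1:
        return "shrink"
    if rel_t1 != rel_t2:
        return "move"                 # same size, topological relation changed
    return "no change"

# Example: objects 1 and 2 merged into object 1 between the snapshots.
print(classify_change({1, 2}, {1}, 5.0, 9.0, "disjoint", "disjoint"))  # merge
```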
Abstract:
A wide variety of spatial data collection efforts are ongoing throughout local, state and federal agencies, private firms and non-profit organizations. Each effort is established for a different purpose, but organizations and individuals often collect and maintain the same or similar information. The United States federal government has undertaken many initiatives, such as the National Spatial Data Infrastructure, the National Map and Geospatial One-Stop, to reduce duplicative spatial data collection and promote the coordinated use, sharing, and dissemination of spatial data nationwide. A key premise in most of these initiatives is that no national government will be able to gather and maintain more than a small percentage of the geographic data that users need. Thus, national initiatives typically depend on the cooperation of those already gathering spatial data and those using GIS to meet specific needs to help construct and maintain these spatial data infrastructures and geo-libraries for their nations (Onsrud 2001). Some of the impediments to widespread spatial data sharing are well known from directly asking GIS data producers why they are not currently involved in creating datasets of common or compatible formats, documenting their datasets in a standardized metadata format, or making their datasets more readily available to others through Data Clearinghouses or geo-libraries. The research described in this thesis addresses the impediments to wide-scale spatial data sharing faced by GIS data producers and explores a new conceptual data-sharing approach, the Public Commons for Geospatial Data, which supports user-friendly metadata creation, open access licenses, archival services and documentation of the parent lineage of the contributors and value-adders of digital spatial datasets.
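As a rough illustration of the lineage idea (a sketch under assumed field names, not the schema proposed in the thesis), a commons record could carry pointers to its parent datasets so that original contributors and value-adders remain documented:

```python
# Hypothetical commons record with parent-lineage tracking.
from dataclasses import dataclass, field

@dataclass
class CommonsRecord:
    dataset_id: str
    contributor: str
    license: str = "open-access"
    parents: list = field(default_factory=list)  # datasets this one derives from

    def lineage(self, registry):
        """Walk the parent chain so every contributor stays credited."""
        chain = [self.dataset_id]
        for pid in self.parents:
            chain.extend(registry[pid].lineage(registry))
        return chain

registry = {
    "roads-v1": CommonsRecord("roads-v1", "county-gis-office"),
    "roads-v2": CommonsRecord("roads-v2", "volunteer-mapper", parents=["roads-v1"]),
}
print(registry["roads-v2"].lineage(registry))  # ['roads-v2', 'roads-v1']
```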
Abstract:
In the information sciences there are two problems whose relevance has grown with the massive irruption of the new information technologies: silence and noise. The aim of this paper is to show that the partial solution the new information technologies offer to these problems is grounded in semiotics in its three dimensions: syntactic, semantic and pragmatic. In syntax, through the constitutive conditions of languages or codes; in semantics, through the adequacy (or not) of the message to reality and the conditions of possibility for this to occur; and in pragmatics, by attending to the relevance of the message for inducing decision-making or action by a user or a community of users.
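In retrieval terms, silence corresponds to relevant documents a query fails to return and noise to irrelevant documents it does return; a minimal sketch (illustrative only) makes both measurable:

```python
# Silence = relevant items missed (recall failure);
# noise = irrelevant items returned (precision failure).
def silence_and_noise(retrieved, relevant):
    retrieved, relevant = set(retrieved), set(relevant)
    silence = relevant - retrieved
    noise = retrieved - relevant
    recall = 1 - len(silence) / len(relevant) if relevant else 1.0
    precision = 1 - len(noise) / len(retrieved) if retrieved else 1.0
    return silence, noise, recall, precision

# The query returns d1 and d3, but the relevant set is d1 and d2.
print(silence_and_noise({"d1", "d3"}, {"d1", "d2"}))
```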
Abstract:
This paper proposes an analysis of the scientific status and the knowledge claims of the information sciences from the perspective of Richard Rorty. It presents the Rortyan critique of epistemologically centered philosophy, emphasizing the presumption that knowledge has some kind of ultimate foundation and the presumption that philosophy is responsible for elucidating that foundation. Rorty's proposal of epistemological behaviorism is introduced as an alternative to the foundationalism of epistemologically centered philosophy. Different positions on the scientific status and epistemic legitimacy of the information sciences are then reviewed. The reviewed positions are grouped around two grounding strategies: ascribing the information sciences to some model of scientificity, or linking them to a general philosophical system. It is argued that both strategies presuppose a foundationalist perspective, insofar as they seek to ground the discipline on an external philosophical framework. Rortyan epistemological behaviorism opposes this, holding that epistemic authority must be explained from the specific contexts of each community, not from an external epistemic foundation.
Abstract:
With full-waveform (FWF) lidar systems becoming increasingly available from different commercial manufacturers, the possibility of extracting physical parameters of the scanned surfaces in an area-wide sense, as an addendum to their geometric representation, has arisen as well. These FWF systems digitize the temporal profiles of the transmitted laser pulse and of its backscattered echoes, allowing a reliable determination of the distance from target to instrument and, by means of radiometric calibration, of physical target quantities, one of which is the diffuse Lambertian reflectance. The delineation of glaciers is a time-consuming task, commonly performed manually by experts and involving field trips as well as image interpretation of orthophotos, digital terrain models and shaded reliefs. In this study, the diffuse Lambertian reflectance was compared to glacier outlines mapped by experts. We first present the workflow for the analysis of FWF data, their direct georeferencing and the calculation of the diffuse Lambertian reflectance by radiometric calibration; this workflow is illustrated for a large FWF lidar campaign in the Ötztal Alps (Tyrol, Austria), flown with an Optech ALTM 3100 system. The geometric performance of the presented procedure was evaluated by means of a relative and an absolute accuracy assessment using strip differences and orthophotos, respectively. The diffuse Lambertian reflectance was evaluated at two rock glaciers within this lidar campaign, and the feature performed well for delineating the rock glacier boundaries, especially at their lower parts.
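A sketch of the calibration step in the style of Wagner-type FWF processing (the constants and the exact formulation are assumptions, not the values used in this study): the backscatter cross-section is derived from echo amplitude, echo width and range, and converted to a diffuse Lambertian reflectance under the extended-target assumption.

```python
import math

C_CAL = 1.0e-7      # calibration constant from reference targets (hypothetical)
BEAM_DIV = 0.3e-3   # beam divergence in rad (hypothetical)

def backscatter_cross_section(amplitude, echo_width_s, range_m):
    # Gaussian echo model: sigma ~ C_cal * R^4 * amplitude * echo width.
    return C_CAL * range_m**4 * amplitude * echo_width_s

def diffuse_reflectance(amplitude, echo_width_s, range_m):
    sigma = backscatter_cross_section(amplitude, echo_width_s, range_m)
    footprint_area = math.pi * (range_m * BEAM_DIV / 2) ** 2
    # Extended Lambertian target: sigma = pi * rho * footprint area.
    return sigma / (math.pi * footprint_area)

print(diffuse_reflectance(amplitude=120.0, echo_width_s=4.2e-9, range_m=900.0))
```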
Abstract:
Mass spectrometry (MS) data provide a promising avenue for biomarker discovery, and the detection of relevant peakbins in MS data is currently under intense research. MS data are challenging to analyze because of their high dimensionality and the generally low number of available samples. To tackle this problem, the scientific community is increasingly interested in applying feature subset selection techniques based on specialized machine learning algorithms. In this paper, we present a performance comparison of several metaheuristics: best first (BF), genetic algorithm (GA), scatter search (SS) and variable neighborhood search (VNS). To our knowledge, all of these algorithms except GA are applied here for the first time to detect relevant peakbins in MS data. Each metaheuristic search is embedded in two different schemes, filter and wrapper, coupled with Naive Bayes and SVM classifiers.
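A minimal sketch of the wrapper scheme (synthetic data; not the paper's implementation): a search proposes peakbin subsets, and each subset is scored by the cross-validated accuracy of a classifier trained on those features.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 100))      # 40 spectra x 100 peakbins (synthetic)
y = rng.integers(0, 2, size=40)     # case/control labels (synthetic)

def wrapper_score(subset):
    """Objective the metaheuristic maximizes: CV accuracy on the subset."""
    return cross_val_score(GaussianNB(), X[:, subset], y, cv=5).mean()

# One greedy forward pass, as a 'best first'-flavoured search move:
best, subset = -1.0, []
for f in range(X.shape[1]):
    score = wrapper_score(subset + [f])
    if score > best:
        best, subset = score, subset + [f]
print(subset, best)
```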
Abstract:
In this paper we introduce the idea of attaching a reliability measure to the predictions made by recommender systems based on collaborative filtering. This reliability measure rests on the usual notion that the more reliable a prediction, the less likely it is to be wrong. We define a general reliability measure suitable for any arbitrary recommender system, and we also show a method for obtaining specific reliability measures fitted to the needs of particular recommender systems.
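One simple way such a measure can be instantiated (an illustrative sketch, not the measure defined in the paper): reliability grows with the number of neighbours contributing to a prediction and shrinks with their disagreement.

```python
import statistics

def predict_with_reliability(neighbour_ratings):
    """neighbour_ratings: ratings of the target item by similar users."""
    prediction = statistics.mean(neighbour_ratings)
    n = len(neighbour_ratings)
    spread = statistics.pstdev(neighbour_ratings)
    # More neighbours -> more support; more spread -> less agreement.
    reliability = (n / (n + 5)) * (1 / (1 + spread))
    return prediction, reliability

print(predict_with_reliability([4, 4, 5]))  # strong agreement, higher reliability
print(predict_with_reliability([1, 5, 3]))  # wide spread, lower reliability
```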
Abstract:
Ubiquitous computing software needs to be autonomous, so that essential decisions such as how to configure its particular execution are self-determined. Moreover, data mining serves an important role for ubiquitous computing by providing intelligence to several types of ubiquitous computing applications, so automating ubiquitous data mining is also crucial. We focus on the problem of automatically configuring the execution of a ubiquitous data mining algorithm. In our solution, configuration decisions are generated in a resource-aware and context-aware manner, since the algorithm executes in an environment where the context often changes and computing resources are often severely limited. We propose to analyze the execution behavior of the data mining algorithm by mining its past executions. By doing so, we discover the effects of resource and context states, as well as parameter settings, on the data mining quality. We argue that a classification model is appropriate for predicting the behavior of an algorithm's execution, and we concentrate on the decision tree classifier. We also define a taxonomy on data mining quality, so that the tradeoff between prediction accuracy and classification specificity can be scored for model selection among behavior models that classify by different abstractions of quality. Behavior model constituents and class label transformations are formally defined, and experimental validation of the proposed approach is also performed.
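A minimal sketch of the behavior-model idea (feature names and the quality taxonomy level are assumptions): logged past executions, described by resource state, context state and parameter settings, train a decision tree that predicts the quality label of a candidate configuration.

```python
from sklearn.tree import DecisionTreeClassifier

# Past executions: [free_memory_mb, battery_pct, window_size_param]
past_runs = [
    [512, 80, 100],
    [128, 20, 400],
    [256, 50, 200],
    [ 64, 10, 400],
]
quality = ["good", "poor", "good", "poor"]  # coarse level of the taxonomy

model = DecisionTreeClassifier().fit(past_runs, quality)

# Before launching, predict the quality of a candidate configuration
# under the current resource/context state.
print(model.predict([[300, 60, 250]]))
```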
Abstract:
Cultural heritage is a complex and diverse concept which brings together a wide domain of information. Resources linked to a cultural heritage site may consist of physical artefacts, books, works of art, pictures, historical maps, aerial photographs, archaeological surveys and 3D models. Moreover, all these resources are listed and described by a variety of metadata specifications that allow their basic characteristics to be searched and consulted online. Examples include ISO 19115, Dublin Core, AAT, CDWA, CCO, DACS, MARC, MoReq, MODS, MuseumDat, TGN, SPECTRUM, VRA Core and Z39.50. Gateways are in place to map these metadata standards onto those used in an SDI (ISO 19115 or INSPIRE), but substantial work remains to be done before cultural heritage information is fully incorporated. The aim of this paper is therefore to demonstrate how the complexity of cultural heritage resources can be handled through a visual exploration of their metadata within a 3D collaborative environment. 3D collaborative environments are promising tools that represent the new frontier of our capacity for learning, understanding, communicating and transmitting culture.
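A small crosswalk sketch illustrates the gateway idea (the mappings below are rough correspondences chosen for illustration, not a complete or authoritative gateway): a few Dublin Core elements are rewritten as ISO 19115-style fields so a heritage record can surface in an SDI catalogue.

```python
# Rough Dublin Core -> ISO 19115 correspondences (illustrative only).
DC_TO_ISO19115 = {
    "dc:title": "MD_Metadata.identificationInfo.citation.title",
    "dc:description": "MD_Metadata.identificationInfo.abstract",
    "dc:coverage": "MD_Metadata.identificationInfo.extent",
    "dc:creator": "MD_Metadata.contact.individualName",
}

def crosswalk(dc_record):
    """Translate a Dublin Core record into ISO 19115-style keyed fields."""
    return {DC_TO_ISO19115[k]: v for k, v in dc_record.items() if k in DC_TO_ISO19115}

record = {"dc:title": "Roman aqueduct survey", "dc:coverage": "Segovia, Spain"}
print(crosswalk(record))
```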