939 results for Open Information Extraction
Abstract:
The goal of the Open Geogadget Framework project is to develop a software platform that enables the composition and sharing of interfaces for extracting knowledge from standards-based Geographic Information Web Services, by means of intelligent gadget-style components, in a way that is quick and simple for the user, allowing geo-information to be exploited without having to program complicated mash-up-style Web applications.
Abstract:
A presentation showing a proprietary GIS application from 15 years ago, alongside its new face thanks to free software. It describes a novel architecture that uses the MapFish Server API to launch geo-processes, together with a software stack of PostGIS, GeoWebCache and OpenLayers.
Abstract:
A presentation on the creation of the Arduino board and the projects that have been built around the world with it.
Abstract:
This thesis focuses on Computer Vision and, more specifically, on image segmentation, one of the basic stages of image analysis, which consists of dividing an image into a set of visually distinct and uniform regions with respect to intensity, colour or texture. A strategy is proposed based on the complementary use of region and boundary information during the segmentation process, an integration that mitigates some of the basic problems of traditional segmentation. The boundary information is first used to identify the number of regions present in the image and to place a seed inside each of them, in order to model the statistical characteristics of the regions and thereby define the region information. This information, together with the boundary information, is used to define an energy function that expresses the properties required of the desired segmentation: uniformity inside the regions and contrast with neighbouring regions at the boundaries. A set of active regions then begins to grow, competing for the pixels of the image, with the aim of optimising the energy function or, in other words, finding the segmentation that best fits the requirements expressed in that function. Finally, the whole process is embedded in a pyramidal structure, which allows the segmentation result to be refined progressively and improves its computational cost. The strategy has been extended to the texture segmentation problem, which entails some basic considerations such as modelling the regions from a set of texture features and extracting the boundary information when texture is present in the image. Finally, the approach has been extended to image segmentation that takes both colour and texture properties into account. To this end, the joint use of non-parametric density estimation techniques to describe colour, and of texture features based on the co-occurrence matrix, is proposed to model the image regions adequately and completely. The proposal has been evaluated objectively and compared with different integration techniques using synthetic images. In addition, experiments with real images have been included, with very positive results.
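Below is a minimal Python sketch of the seeded region-competition idea this abstract describes: each seed defines a statistical (here Gaussian) region model, and regions grow pixel by pixel, with the claiming cost combining misfit to the region model and a boundary-contrast penalty so that competing fronts meet along edges. All names and parameters are illustrative; this is not the thesis implementation, which also includes the pyramidal refinement and the texture and colour models.

```python
import heapq
import numpy as np

def segment(image, seeds, beta=1.0):
    """image: 2-D float array; seeds: list of (row, col) points, one per region."""
    h, w = image.shape
    labels = -np.ones((h, w), dtype=int)
    gy, gx = np.gradient(image.astype(float))
    grad = np.hypot(gx, gy)                      # boundary information
    stats = [[0.0, 0.0, 0] for _ in seeds]       # per-region sum, sum of squares, count
    frontier = [(0.0, r, c, k) for k, (r, c) in enumerate(seeds)]
    heapq.heapify(frontier)
    while frontier:
        _, r, c, k = heapq.heappop(frontier)
        if labels[r, c] != -1:
            continue                             # already claimed by a cheaper competitor
        labels[r, c] = k
        s, ss, n = stats[k]
        stats[k] = [s + image[r, c], ss + image[r, c] ** 2, n + 1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and labels[rr, cc] == -1:
                s, ss, n = stats[k]
                mean = s / n
                var = max(ss / n - mean ** 2, 1e-6)
                # energy: misfit to the region's Gaussian model, plus a penalty
                # for crossing high-contrast pixels, so fronts meet along edges
                cost = (image[rr, cc] - mean) ** 2 / var + beta * grad[rr, cc]
                heapq.heappush(frontier, (cost, rr, cc, k))
    return labels
```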
Abstract:
The study of the morphodynamics of tidal channel networks is important because of their role in tidal propagation and the evolution of salt-marshes and tidal flats. Channel dimensions range from tens of metres wide and metres deep near the low water mark to only 20-30 cm wide and 20 cm deep for the smallest channels on the marshes. The conventional method of measuring the networks is cumbersome, involving manual digitising of aerial photographs. This paper describes a semi-automatic knowledge-based network extraction method that is being implemented to work with airborne scanning laser altimetry (and later aerial photography). The channels exhibit a width variation of several orders of magnitude, making an approach based on multi-scale line detection difficult. The processing therefore uses multi-scale edge detection to detect channel edges, then associates adjacent anti-parallel edges to form channels using a distance-with-destination transform. Breaks in the networks are repaired by extending channel ends along their existing direction to join with nearby channels, using the domain knowledge that flow paths should proceed downhill and that any network fragment should eventually connect to the open sea via a nearby fragment.
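A minimal sketch of the multi-scale edge-detection stage this abstract mentions: smoothing the altimetry surface at several scales lets both the widest and the narrowest channels produce detectable edges, and the per-scale edge maps are combined. The scales and threshold here are assumptions for illustration, and the anti-parallel edge pairing and distance-with-destination transform are not shown.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_edges(dem, sigmas=(1, 2, 4, 8), percentile=90):
    """dem: 2-D array of laser-altimetry heights; returns a boolean edge map."""
    edges = np.zeros(dem.shape, dtype=bool)
    for sigma in sigmas:
        smoothed = gaussian_filter(dem.astype(float), sigma)
        gy, gx = np.gradient(smoothed)
        mag = np.hypot(gx, gy)
        # keep the strongest gradients at each scale as candidate channel edges
        edges |= mag > np.percentile(mag, percentile)
    return edges
```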
Abstract:
Liquid chromatography-mass spectrometry (LC-MS) datasets can be compared or combined following chromatographic alignment. Here we describe a simple solution to the specific problem of aligning one LC-MS dataset and one LC-MS/MS dataset, acquired on separate instruments from an enzymatic digest of a protein mixture, using feature extraction and a genetic algorithm. First, the LC-MS dataset is searched within a few ppm of the calculated theoretical masses of peptides confidently identified by LC-MS/MS. A piecewise linear function is then fitted to these matched peptides using a genetic algorithm with a fitness function that is insensitive to incorrect matches but sufficiently flexible to adapt to the discrete shifts common when comparing LC datasets. We demonstrate the utility of this method by aligning ion trap LC-MS/MS data with accurate LC-MS data from an FTICR mass spectrometer and show how hybrid datasets can improve peptide and protein identification by combining the speed of the ion trap with the mass accuracy of the FTICR, similar to using a hybrid ion trap-FTICR instrument. We also show that the high resolving power of FTICR can improve precision and linear dynamic range in quantitative proteomics. The alignment software, msalign, is freely available as open source.
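A minimal sketch of the alignment step as described: peptides confidently identified by LC-MS/MS are matched to LC-MS features by mass, and a genetic algorithm fits a piecewise linear retention-time mapping whose fitness simply counts matches brought within a time tolerance, which makes it insensitive to incorrect matches. This is not the msalign source; knot counts, tolerances, and GA settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def piecewise(t, knots_x, knots_y):
    """Piecewise linear retention-time mapping defined by its knots."""
    return np.interp(t, knots_x, knots_y)

def fitness(knots_y, t_ms, t_msms, knots_x, tol=0.5):
    """Count matched peptide pairs aligned to within `tol` minutes.
    Incorrect matches simply fail to score, so they cannot bias the fit."""
    return np.sum(np.abs(piecewise(t_ms, knots_x, knots_y) - t_msms) < tol)

def ga_align(t_ms, t_msms, n_knots=5, pop=50, gens=200):
    """t_ms / t_msms: retention times of mass-matched peptide pairs."""
    knots_x = np.linspace(t_ms.min(), t_ms.max(), n_knots)
    # start near the identity mapping and evolve the knot heights
    population = knots_x + rng.normal(0.0, 2.0, size=(pop, n_knots))
    for _ in range(gens):
        scores = [fitness(ind, t_ms, t_msms, knots_x) for ind in population]
        parents = population[np.argsort(scores)[-pop // 2:]]      # selection
        children = parents + rng.normal(0.0, 0.5, parents.shape)  # mutation
        population = np.vstack([parents, children])
    best = max(population, key=lambda ind: fitness(ind, t_ms, t_msms, knots_x))
    return knots_x, best
```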
Abstract:
There is a growing body of information on sex differences in callitrichid behaviour, including the animals' performance in food tasks. For example, both reproductive and non-reproductive adult females have been found to be more successful than adult males in solving food tasks. In this study, ten adult male and ten adult female common marmosets (Callithrix jacchus), housed individually, were tested with an unfamiliar task that involved the extraction of an embedded food. The task was to open a plastic canister that contained a raisin; the open end was covered with parchment paper. Each marmoset was given 15 trials in three blocks of 5 consecutive days. We measured the latency for each animal to open the lid and get the raisin by one of five strategies that spontaneously emerged. The females learned the task faster and more efficiently than the males; all the females opened the canister on day 1, for instance, in contrast to seven of the males on the same day. Females also progressively decreased the time they took to open the tube. The final latency on day 15, for instance, was significantly shorter for the females. These results are consistent with the relevant literature for callitrichids and cannot be accounted for in terms of differences in mental abilities, strength, hand morphology, or energy requirements. Further investigation is necessary to clarify the reasons for these differences.
Abstract:
Economic mechanisms enhance technological solutions by setting the right incentives for demand and supply information to be revealed accurately. Market or pricing mechanisms foster information exchange and can therefore attain efficient allocation. By assigning a value (also called utility) to their service requests, users can reveal their relative urgency or costs to the service. Implementing theoretically sound models induces further complex challenges. The EU-funded project SORMA analyzes these challenges and provides a prototype as a proof of concept. In this paper, the approach taken within the SORMA project is described at both the conceptual and technical levels.
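As an illustration of the kind of pricing mechanism this abstract refers to, here is a minimal sketch of a sealed-bid second-price (Vickrey) allocation: users attach a utility to each service request, and the second-price rule makes truthfully revealing urgency the dominant strategy. This is a generic textbook mechanism, not SORMA's actual design.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    utility: float   # the value the user assigns to being served

def vickrey_allocate(requests):
    """Allocate one slot to the highest bidder at the second-highest price."""
    ranked = sorted(requests, key=lambda r: r.utility, reverse=True)
    price = ranked[1].utility if len(ranked) > 1 else 0.0
    return ranked[0].user, price

# 'b' wins but pays 5.0, a's bid, so overstating utility cannot help
print(vickrey_allocate([Request("a", 5.0), Request("b", 9.0), Request("c", 2.0)]))
```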
Abstract:
Automatic indexing and retrieval of digital data pose major challenges. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieving such data based on semantic content rather than keywords. To enable intelligent web interactions, or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. For a number of years, research has been ongoing in the field of ontological engineering with the aim of using ontologies to add such (meta) knowledge to information. In this paper, we describe the architecture of a system, Dynamic REtrieval Analysis and semantic metadata Management (DREAM), designed to automatically and intelligently index huge repositories of special-effects video clips based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval. The DREAM Demonstrator has been evaluated as deployed in the film post-production phase, supporting the storage, indexing and retrieval of large data sets of special-effects video clips as an exemplar application domain. This paper provides its performance and usability results and highlights the scope for future enhancements of the DREAM architecture, which has proven successful in its first and possibly most challenging proving ground, namely film production, where it is already in routine use within our test-bed partners' creative processes.
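A minimal sketch of ontology-driven retrieval of the sort this abstract describes: a query concept is expanded through subclass relations so that clips annotated with more specific concepts are still found. The toy ontology and clip annotations are invented for illustration; DREAM uses a network of scalable ontologies, not a single dictionary.

```python
ontology = {                        # concept -> more specific subconcepts
    "explosion": ["fireball", "debris_burst"],
    "fireball": [],
    "debris_burst": [],
}

clips = {                           # clip id -> semantic annotations
    "clip_001": {"fireball", "smoke"},
    "clip_002": {"debris_burst"},
    "clip_003": {"rain"},
}

def expand(concept):
    """All concepts subsumed by `concept`, including itself."""
    found = {concept}
    for child in ontology.get(concept, []):
        found |= expand(child)
    return found

def retrieve(concept):
    """Clips annotated with the concept or any of its specialisations."""
    wanted = expand(concept)
    return [cid for cid, tags in clips.items() if tags & wanted]

print(retrieve("explosion"))        # -> ['clip_001', 'clip_002']
```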
Abstract:
The high variability of the intensity of suprathermal electron flux in the solar wind is usually ascribed to the high variability of sources on the Sun. Here we demonstrate that a substantial amount of the variability arises from peaks in stream interaction regions, where fast wind runs into slow wind and creates a pressure ridge at the interface. Superposed epoch analysis centered on stream interfaces in 26 interaction regions previously identified in Wind data reveals a twofold increase in 250 eV flux (integrated over pitch angle). Whether the peaks result from the compression there or are solar signatures of the coronal hole boundary, to which interfaces may map, is an open question. Suggestive of the latter, some cases show a displacement between the electron and magnetic field peaks at the interface. Since solar information is transmitted to 1 AU much more quickly by suprathermal electrons than by convected plasma signatures, the displacement may imply a shift in the coronal hole boundary through transport of open magnetic flux via interchange reconnection. If so, however, the fact that displacements occur in both directions, and that the electron and field peaks in the superposed epoch analysis are nearly coincident, indicates that any systematic transport expected from differential solar rotation is overwhelmed by a random pattern, possibly owing to transport across a ragged coronal hole boundary.
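A minimal sketch of the superposed epoch analysis used above: flux series from many interaction regions are aligned on the interface crossing (the epoch) and averaged, so features tied to the interface survive while unrelated variability averages out. The window length and data layout are assumptions for illustration.

```python
import numpy as np

def superposed_epoch(flux, epoch_indices, half_window=48):
    """flux: 1-D time series; epoch_indices: sample index of each
    stream-interface crossing. Returns (relative time, mean profile)."""
    segments = [flux[i - half_window : i + half_window + 1]
                for i in epoch_indices
                if i - half_window >= 0 and i + half_window < len(flux)]
    stack = np.vstack(segments)          # one row per interaction region
    # averaging keeps interface-locked structure and washes out the rest
    return np.arange(-half_window, half_window + 1), stack.mean(axis=0)
```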
Abstract:
Measurements of the ionospheric E-region during total solar eclipses have been used to provide information about the evolution of the solar magnetic field and EUV and X-ray emissions from the solar corona and chromosphere. By measuring levels of ionisation during an eclipse and comparing these measurements with an estimate of the unperturbed ionisation levels (such as those made during a control day, where available) it is possible to estimate the percentage of ionising radiation being emitted by the solar corona and chromosphere. Previously unpublished data from the two eclipses presented here are particularly valuable as they provide information that supplements the data published to date. The eclipse of 23 October 1976 over Australia provides information in a data gap that would otherwise have spanned the years 1966 to 1991. The eclipse of 4 December 2002 over Southern Africa is important as it extends the published sequence of measurements. Comparing measurements from eclipses between 1932 and 2002 with the solar magnetic source flux reveals that changes in the solar EUV and X-ray flux lag the open source flux measurements by approximately 1.5 years. We suggest that this unexpected result comes about from changes to the relative size of the limb corona between eclipses, with the lag representing the time taken to populate the coronal field with plasma hot enough to emit the EUV and X-rays ionising our atmosphere.
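A minimal sketch of how such a lag could be estimated: one series is shifted against the other on a common regular time grid and the lag maximising the correlation is taken. The inputs here are placeholders, not the 1932-2002 eclipse measurements, and the authors' actual method is not specified in the abstract.

```python
import numpy as np

def corr_at(euv, source, lag):
    """Correlation with `euv` delayed by `lag` samples relative to `source`."""
    if lag > 0:
        a, b = euv[lag:], source[:-lag]
    elif lag < 0:
        a, b = euv[:lag], source[-lag:]
    else:
        a, b = euv, source
    return np.corrcoef(a, b)[0, 1]

def best_lag(euv, source, max_lag=10):
    """Both inputs sampled on the same regular time grid; lag in samples."""
    return max(range(-max_lag, max_lag + 1),
               key=lambda l: corr_at(euv, source, l))
```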