939 results for Open Information Extraction
Abstract:
The history of community television shows that it has been a home to activist and non-profit organizations that have created programs focused on freedom of speech. This project proposes that community television is also a place where artists can have freedom of artistic expression. The reflective paper reviews the creation of my film, designed to inform and attract artists to community television. In it I critically reflect on the artistic, technical, artistic/technical, and production changes made throughout my journey from being a visual artist to becoming a video artist. The reflective paper, along with the film, acts as a wake-up call to artists who are unaware of community television and the advantages it has to offer them.
Abstract:
Nowadays there is a large amount of biomedical literature that uses complex nouns and acronyms for biological entities, complicating the task of retrieving specific information. The Genomics Track addresses this goal, and this paper describes the approach we used to take part in this track of TREC 2007. As this was the first time we participated in this track, we configured a new system consisting of the following differentiated parts: preprocessing, passage generation, document retrieval, and passage (with the answer) extraction. We want to call special attention to the textual retrieval system used, which was developed by the University of Alicante. Adapting the resources for this purpose, our system obtained precision results above the mean and median average of the 66 official runs for the Document, Aspect, and Passage2 MAP; in the case of Passage MAP we obtained nearly the median and mean values. We want to emphasize that we obtained these results without incorporating specific information about the domain of the track. In the future, we would like to develop our system further in this direction.
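The Document, Aspect, and Passage MAP figures cited above are mean average precision scores, the standard TREC ranking metric. As a hedged illustration (not the authors' code, and the function names are my own), MAP over a set of topics can be sketched as:

```python
def average_precision(ranked_relevance, total_relevant):
    """Average precision for one topic: mean of the precision values
    observed at each rank where a relevant document is retrieved."""
    if total_relevant == 0:
        return 0.0
    hits, precisions = 0, []
    for rank, is_relevant in enumerate(ranked_relevance, start=1):
        if is_relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / total_relevant

def mean_average_precision(topics):
    """MAP: mean of per-topic average precision.
    Each topic is a (ranked_relevance, total_relevant) pair."""
    return sum(average_precision(r, n) for r, n in topics) / len(topics)
```

For example, a run that retrieves a relevant document at ranks 1 and 3 out of two relevant documents scores (1/1 + 2/3)/2 ≈ 0.83 for that topic.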
Open business intelligence: on the importance of data quality awareness in user-friendly data mining
Abstract:
Citizens demand more and more data for making decisions in their daily life. Therefore, mechanisms that allow citizens to understand and analyze linked open data (LOD) in a user-friendly manner are highly required. To this aim, the concept of Open Business Intelligence (OpenBI) is introduced in this position paper. OpenBI enables non-expert users to (i) analyze and visualize LOD, thus generating actionable information by means of reporting, OLAP analysis, dashboards or data mining; and to (ii) share the newly acquired information as LOD to be reused by anyone. One of the most challenging issues of OpenBI is related to data mining, since non-experts (such as citizens) need guidance during preprocessing and application of mining algorithms due to the complexity of the mining process and the low quality of the data sources. This is even worse when dealing with LOD, not only because of the different kinds of links among data, but also because of its high dimensionality. As a consequence, in this position paper we advocate that data mining for OpenBI requires data quality-aware mechanisms for guiding non-expert users in obtaining and sharing the most reliable knowledge from the available LOD.
Abstract:
The field of natural language processing (NLP) has grown considerably in recent years; its research areas include information retrieval and extraction, data mining, machine translation, question answering systems, automatic summarization, and sentiment analysis, among others. This article presents concepts and some tools intended to contribute to the understanding of text processing with NLP techniques, with the aim of extracting relevant information that can be used in a wide range of applications. Automatic classifiers can be developed to categorize documents and recommend tags; these classifiers should be platform-independent, easily customizable so they can be integrated into different projects, and capable of learning from examples. This article introduces these classification algorithms, analyzes some open source tools currently available for carrying out these tasks, and compares several implementations using the F-measure to evaluate the classifiers.
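The F-measure used to compare the classifiers above is the weighted harmonic mean of precision and recall. As a minimal sketch (the helper names are illustrative, not from the paper), per-label precision/recall and F can be computed as:

```python
def precision_recall(predicted, gold):
    """Precision and recall for one label over sets of document ids."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)  # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall

def f_measure(precision, recall, beta=1.0):
    """Weighted harmonic mean of precision and recall (F1 when beta=1)."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

For instance, a classifier that predicts {d1, d2, d3} for a label whose gold set is {d2, d3, d4} gets precision = recall = 2/3, hence F1 = 2/3.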
Abstract:
We present UBV photometry of the highly reddened and poorly studied open cluster Berkeley 55, revealing an important population of B-type stars and several evolved stars of high luminosity. Intermediate-resolution far-red spectra of several candidate members confirm the presence of one F-type supergiant and six late-type supergiants or bright giants. The brightest blue stars are mid-B giants. Spectroscopic and photometric analyses indicate an age of 50 ± 10 Myr. The cluster is located at a distance d ≈ 4 kpc, consistent with other tracers of the Perseus Arm in this direction. Berkeley 55 is thus a moderately young open cluster with a sizable population of candidate red (super)giant members, which can provide valuable information about the evolution of intermediate-mass stars.
Abstract:
Paper submitted to the 39th International Symposium on Robotics ISR 2008, Seoul, South Korea, October 15-17, 2008.
Abstract:
A wealth of open educational resources (OER) focused on green topics is currently available through a variety of sources, including learning portals, digital repositories and web sites. However, in most cases these resources are not easily accessible or retrievable, and further issues complicate their discovery and reuse. This paper presents an overview of a number of portals hosting OER, as well as a number of "green" thematic portals that provide access to green OER. It also discusses the case of a new collection that aims to support and populate existing green collections and learning portals, providing information on aspects such as quality assurance, collection and curation policies, and the workflow and tools for both the content and the metadata records that apply to the collection. Two case studies of the integration of this new collection into existing learning portals are also presented.
Abstract:
Feature vectors can range from simple surface normals to more complex feature descriptors. Feature extraction is important for solving various computer vision problems, e.g. registration, object recognition and scene understanding. Most of these techniques cannot be computed online due to their complexity and the context in which they are applied, so computing these features in real time for many points in the scene is impossible. In this work, a hardware-based implementation of 3D feature extraction and 3D object recognition is proposed to accelerate these methods and, therefore, the entire pipeline of RGB-D based computer vision systems where such features are typically used. Using a GPU as a general-purpose processor can achieve considerable speed-ups compared with a CPU implementation. In this work, advantageous results are obtained using the GPU to accelerate the computation of a 3D descriptor based on the calculation of 3D semi-local surface patches of partial views. This allows descriptor computation at several points of a scene in real time. The benefits of the accelerated descriptor have been demonstrated in object recognition tasks. Source code will be made publicly available as a contribution to the open source Point Cloud Library.
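The simplest feature the abstract mentions, a surface normal, can be computed from a local surface patch. As a minimal CPU sketch (not the paper's GPU descriptor; in practice libraries such as the Point Cloud Library fit a plane to a full point neighborhood), the normal of three non-collinear neighboring points is the normalized cross product of two edge vectors:

```python
import math

def surface_normal(p1, p2, p3):
    """Unit normal of the plane through three non-collinear 3D points,
    computed as the normalized cross product of two edge vectors."""
    u = tuple(b - a for a, b in zip(p1, p2))  # edge p1 -> p2
    v = tuple(c - a for a, c in zip(p1, p3))  # edge p1 -> p3
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    length = math.sqrt(sum(c * c for c in n))
    if length == 0:
        raise ValueError("points are collinear")
    return tuple(c / length for c in n)
```

For example, three points in the z = 0 plane yield the normal (0, 0, 1); per-point descriptors then aggregate many such local quantities, which is the part the GPU parallelizes.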
Abstract:
This article shows the research carried out by the authors on how the shape of structural reinforced concrete elements treated with electrochemical chloride extraction can affect the efficiency of this process. Given the current use of different anode systems, the present study compares results between conventional anodes based on Ti-RuO2 wire mesh and a cement-based anodic system, namely a graphite-cement paste. One-meter-long reinforced concrete elements, with circular and rectangular sections, were molded to serve as laboratory specimens closely representing authentic structural supports. Results confirm almost equal performance for both types of anode systems when electrochemical chloride extraction is applied to isotropic structural elements. For anisotropic ones, such as rectangular sections with non-uniformly distributed rebar, differences in electrical flow density were detected during the treatment. Those differences were more extreme for the Ti-RuO2 mesh anode system. This particular shape effect is evidenced by the electrochemical chloride extraction efficiencies obtained at different points of the specimens.
New Approaches for Teaching Soil and Rock Mechanics Using Information and Communication Technologies
Abstract:
Soil and rock mechanics are disciplines with a strong conceptual and methodological basis. Initially, when engineering students study these subjects, they have to understand new theoretical phenomena, which are explained through mathematical and/or physical laws (e.g. the consolidation process, water flow through a porous medium). In addition to the study of these phenomena, students have to learn how to carry out estimations of soil and rock parameters in laboratories according to standard tests. Nowadays, information and communication technologies (ICTs) provide a unique opportunity to improve the learning process of students studying the aforementioned subjects. In this paper, we describe our experience of incorporating ICTs into the classical teaching-learning process of soil and rock mechanics and explain in detail how we have successfully developed various initiatives, which, in summary, are: (a) implementation of an online social networking and microblogging service (using Twitter) for gradually sending key concepts to students throughout the semester (gradual learning); (b) detailed online virtual laboratory tests for a delocalized development of lab practices (self-learning); (c) integration of different complementary learning resources (e.g. videos, free software, technical regulations) using an open webpage. The use of these ICT resources as a complement to the classical teaching-learning process has been highly satisfactory for students, who have positively evaluated this new approach.
Abstract:
Camera traps have become a widely used technique for conducting biological inventories, generating a large number of database records of great interest. The main aim of this paper is to describe a new free and open source software (FOSS) application, developed to facilitate the management of camera-trap data originating from a protected Mediterranean area (SE Spain). In the last decade, some other useful alternatives have been proposed, but ours focuses especially on collaborative work and on the importance of the spatial information underpinning common camera trap studies. This FOSS application, namely "Camera Trap Manager" (CTM), has been designed to expedite the processing of pictures on the .NET platform. CTM has a very intuitive user interface, automatic extraction of some image metadata (date, time, moon phase, location, temperature, atmospheric pressure, among others), analytical capabilities (Geographical Information Systems, statistics, charts, among others), and reporting capabilities (ESRI Shapefiles, Microsoft Excel Spreadsheets, PDF reports, among others). Using this application, we have achieved very simple management, fast analysis, and a significant reduction in costs. While we were able to classify an average of 55 pictures per hour manually, CTM has made it possible to process over 1,000 photographs per hour, consequently retrieving a greater amount of data.
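Of the metadata CTM derives, the moon phase is not stored in the image itself but computed from the capture timestamp. The paper does not describe CTM's algorithm; a common approximation, shown here as an illustrative sketch, counts days elapsed since a reference new moon modulo the mean synodic month:

```python
import datetime

SYNODIC_MONTH = 29.530588853  # mean length of a lunation, in days
KNOWN_NEW_MOON = datetime.datetime(2000, 1, 6, 18, 14)  # reference new moon (UTC)

def moon_age(when):
    """Days elapsed in the current lunation (0 = new moon, ~14.77 = full)."""
    days = (when - KNOWN_NEW_MOON).total_seconds() / 86400.0
    return days % SYNODIC_MONTH

def moon_phase_name(when):
    """Map the lunation age to one of eight conventional phase names."""
    names = ["new moon", "waxing crescent", "first quarter", "waxing gibbous",
             "full moon", "waning gibbous", "last quarter", "waning crescent"]
    index = int((moon_age(when) / SYNODIC_MONTH) * 8 + 0.5) % 8
    return names[index]
```

This approximation is accurate to within a day or so, which is sufficient for tagging camera-trap records by phase.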
Abstract:
Beijing is one of the most water-stressed cities in the world. Due to over-exploitation of groundwater, the Beijing region has been suffering from land subsidence since 1935. In this study, the Small Baseline InSAR technique has been employed to process Envisat ASAR images acquired between 2003 and 2010 and TerraSAR-X stripmap images collected from 2010 to 2011 to investigate land subsidence in the Beijing region. The maximum subsidence is seen in the eastern part of Beijing, with a rate greater than 100 mm/year. Comparisons between InSAR- and GPS-derived subsidence rates show an RMS difference of 2.94 mm/year with a mean of 2.41 ± 1.84 mm/year. In addition, a high correlation was observed between InSAR subsidence rate maps derived from two different datasets (i.e., Envisat and TerraSAR-X). These results demonstrate once again that InSAR is a powerful tool for monitoring land subsidence. The InSAR-derived subsidence rate maps have allowed for a comprehensive spatio-temporal analysis to identify the main triggering factors of land subsidence. Some interesting relationships in terms of land subsidence were found with groundwater level, active faults, accumulated soft soil thickness and different aquifer types. Furthermore, a relationship with the distance to pumping wells was also recognized in this work.
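The quoted validation figures (RMS difference of 2.94 mm/year, mean of 2.41 ± 1.84 mm/year) summarize pointwise differences between co-located InSAR and GPS rate estimates. As a hedged sketch of how such statistics are typically computed (not the authors' processing chain):

```python
import math

def rate_differences(insar_rates, gps_rates):
    """RMS, mean, and standard deviation of the differences between two
    co-located subsidence-rate series (e.g. InSAR vs. GPS, in mm/year)."""
    diffs = [a - b for a, b in zip(insar_rates, gps_rates)]
    n = len(diffs)
    rms = math.sqrt(sum(d * d for d in diffs) / n)
    mean = sum(diffs) / n
    std = math.sqrt(sum((d - mean) ** 2 for d in diffs) / n)
    return rms, mean, std
```

A mean offset with a small standard deviation, as reported here, indicates a systematic bias between the two techniques rather than random scatter.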
Abstract:
This layer is a georeferenced raster image of the historic paper map entitled: OKI regional land use : 1975. It was published by OKI Regional Planning Authority in 1975. Scale [ca. 1:5,000]. Covers the Cincinnati Region, Ohio, including Butler, Clermont, Hamilton, and Warren counties, Ohio; Boone, Campbell, and Kenton counties, Kentucky; and Dearborn and Ohio counties, Indiana. The image inside the map neatline is georeferenced to the surface of the earth and fit to the Ohio South State Plane NAD 1983 coordinate system (in Feet) (Fipszone 3402). All map collar and inset information is also available as part of the raster image, including any inset maps, profiles, statistical tables, directories, text, illustrations, index maps, legends, or other information associated with the principal map. This map is colored to show land use categories: Urban residential ; Suburban residential ; Commercial ; Institutional/Service ; Utilities ; Industrial ; Resource extraction ; Recreational/Open space ; Cropland ; Grassland ; Woodland ; Water. It also shows features such as major roads, drainage, administrative and political boundaries, and more. This layer is part of a selection of digitally scanned and georeferenced historic maps from The Harvard Map Collection as part of the Imaging the Urban Environment project. Maps selected for this project represent major urban areas and cities of the world, at various time periods. These maps typically portray both natural and man-made features at a large scale. The selection represents a range of regions, originators, ground condition dates, scales, and purposes.