950 results for Online services using open-source NLP tools
Abstract:
The Junta de Andalucía (regional government of Andalusia), within its Corporate Geographic Information System (SIGC) project, has developed a Digital Street Map of Andalusia (Callejero Digital de Andalucía, CDA) module that gathers the address information of the 770 municipalities of Andalusia. The Callejero Digital de Andalucía is built around the spatial data, a web query application, a search engine (geocoder) and a series of OGC and SOAP services. All of the development is based on free software, and the aim is for it to become the corporate tool for establishing the geoinformation associated with records and postal addresses across the Junta de Andalucía. (...)
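The abstract names a geocoder plus OGC and SOAP services but does not document their interfaces. As a minimal sketch, a client might query such a geocoder over HTTP as below; the endpoint URL and parameter names are hypothetical, not the CDA's actual API.

```python
import requests

# Hypothetical endpoint: the abstract mentions a geocoder but gives no URL
# or request schema, so everything below is illustrative only.
CDA_GEOCODER = "https://example.org/cda/geocoder/search"

def geocode(street, municipality):
    """Ask the (hypothetical) CDA geocoder for candidate coordinates."""
    resp = requests.get(
        CDA_GEOCODER,
        params={"street": street, "municipality": municipality},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. a list of {address, lon, lat, score} matches

if __name__ == "__main__":
    for match in geocode("Calle Sierpes", "Sevilla"):
        print(match)
```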
Abstract:
Free online training resources on using Web 2.0 tools for busy lecturers. Winner of the JISC 'Outstanding ICT initiative of the year' award, the site was commended for its 'commitment to open access to online content': a wealth of openly available multimedia content won the JISC/Times Higher Award. Created by University of Westminster lecturer Russell Stannard, the websites build upon his pioneering work using video to mark students' work. Using screen recording software, Stannard recorded himself walking through various Web 2.0 technologies with a voice-over; the recordings were then uploaded to a website, www.teachertrainingvideos.com. The site quickly proved popular and rapidly built into a bank of over 30 videos.
Abstract:
First conference on Digital Libraries and Repositories: Knowledge Management, Open Access and Latin American Visibility (BIREDIAL). May 9-11, 2011. Bogotá, Colombia.
Abstract:
There is a lot of hype around the Internet of Things, along with talk of 100 billion devices within 10 years' time. The promise of innovative new services and efficiency savings is fueling interest in a wide range of potential applications across many sectors, including smart homes, healthcare, smart grids, smart cities, retail, and smart industry. However, the current reality is one of fragmentation and data silos. W3C is seeking to fix that by exposing IoT platforms through the Web, with shared semantics and data formats as the basis for interoperability. This talk will address the abstractions needed to move from a Web of pages to a Web of things, and introduce the work being done on standards and on open-source projects for a new breed of Web servers, ranging from microcontrollers to cloud-based server farms. Speaker biography - Dave Raggett: Dave has been involved at the heart of web standards since 1992, and has been part of the W3C Team since 1995. As well as working on standards, he likes to dabble with software, and more recently with IoT hardware. He has participated in a wide range of European research projects on behalf of W3C/ERCIM. He currently focuses on Web payments and on realising the potential of the Web of Things as an evolution from the Web of pages. Dave has a doctorate from the University of Oxford. He is a visiting professor at the University of the West of England, and lives in the UK in a small town near Bath.
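As a rough illustration of the "Web servers on microcontrollers" idea, here is a minimal sketch of a device exposed as an HTTP resource that serves its own machine-readable description. The JSON shape is an illustrative assumption, not the actual W3C Web of Things vocabulary.

```python
# Minimal sketch of a "Web thing": a sensor exposed as an HTTP resource that
# publishes a JSON self-description. The JSON layout here is invented for
# illustration; W3C's real Thing Description vocabulary is not reproduced.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

DESCRIPTION = {
    "name": "temperature-sensor",
    "properties": {
        "temperature": {"type": "number", "href": "/properties/temperature"}
    },
}

class ThingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/":
            body = json.dumps(DESCRIPTION)            # the thing describes itself
        elif self.path == "/properties/temperature":
            body = json.dumps({"temperature": 21.5})  # a stub sensor reading
        else:
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    HTTPServer(("", 8000), ThingHandler).serve_forever()
```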
Abstract:
Abstract taken from the publication.
Abstract:
Experience with map services based on the Open Geospatial Consortium (OGC) Web Map Service (WMS) specification has shown that tile caches are necessary to achieve acceptable performance in mass-market applications; however, there is no standard mechanism by which map clients can exploit the availability of such a cache from the information provided by the map server. Until the new WMTS recommendation is sufficiently widely implemented, the most widespread mechanism is OSGeo's WMS-C profile recommendation. To make the definition of maps containing WMS-C services as automatic as possible, the GeoServer map server has been extended to support a map model following the WMC recommendation with some ad hoc extensions. The extension developed for GeoServer broadens its REST API to include WMC support. In this way, when a new map configuration is registered via a WMC document in which certain layers are marked as cached, caching is automatically activated through the GeoWebCache extension. To make use of the new capabilities added to GeoServer, a map client has been developed that detects the existence of cached layers and uses, as appropriate, either the cached services or the traditional WMS services.
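A sketch of how a client might register a WMC map context through the extended GeoServer REST API described above. The resource path, credentials and file name are assumptions: the abstract says the REST API was extended for WMC but does not document the exact resource names.

```python
# Sketch of registering a WMC map context via the extended GeoServer REST API.
# The /wmc endpoint is a hypothetical resource added by the extension; the
# abstract does not give the real path.
import requests

GEOSERVER = "http://localhost:8080/geoserver/rest"
WMC_ENDPOINT = f"{GEOSERVER}/wmc"  # hypothetical

with open("map-context.xml", "rb") as f:  # a WMC document; cached layers flagged inside
    resp = requests.post(
        WMC_ENDPOINT,
        data=f.read(),
        headers={"Content-Type": "application/xml"},
        auth=("admin", "geoserver"),
    )
resp.raise_for_status()
# On success, the server would activate GeoWebCache for the flagged layers.
print("Map context registered:", resp.status_code)
```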
Abstract:
gvSIG Mini is an open-source, end-user mobile Spatial Data Infrastructure (SDI) client application, licensed under the GNU GPL and designed for Java and Android mobile phones, that allows the display of, and navigation over, tiled digital cartography from OGC web services such as WMS(-C) and from services such as OpenStreetMap (OSM), Yahoo Maps and Bing Maps, with tile caching to minimise bandwidth use. gvSIG Mini can access geospatial services such as NameFinder, for searching points of interest, and YOURS (Yet Another OpenStreetMap Routing Service), for route calculation and client-side rendering of vector information. gvSIG Mini also offers GPS positioning. The Android version of gvSIG Mini has some additional features, such as Android location support and the use of the accelerometer for re-centring. This version also makes use of services such as weather forecasting and TweetMe, which allows a location to be shared using the popular social service Twitter. gvSIG Mini can be downloaded and used freely, making it a platform for the development of new solutions and applications in the field of Location Based Services (LBS). gvSIG Mini has been developed by Prodevelop, S.L. It is not an official gvSIG project, but it joins the family through the catalogue of unofficial gvSIG extensions. Phone Cache is an extension running on gvSIG 1.1.2 that generates a tile cache, so that gvSIG Mini for Java can be used in disconnected mode.
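For illustration, OSM-style tile services of the kind gvSIG Mini consumes use the standard "slippy map" addressing scheme (zoom/x/y). A minimal sketch of the tile computation such a client performs; the URL follows OSM's public template and the coordinates are arbitrary.

```python
# Convert WGS84 coordinates to "slippy map" tile indices, the addressing
# scheme used by OSM-style tile servers. Standard formula; any real client
# should honour the tile provider's usage policy.
import math

def deg2tile(lat_deg, lon_deg, zoom):
    """Return the (x, y) tile indices containing a point at a zoom level."""
    lat_rad = math.radians(lat_deg)
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

z = 12
x, y = deg2tile(39.47, -0.38, z)  # Valencia, home of gvSIG
print(f"https://tile.openstreetmap.org/{z}/{x}/{y}.png")
```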
Abstract:
Mediterranean landscapes comprise a complex mosaic of different habitats that vary in the diversity of their floral communities, pollinator communities and pollination services. Using the Greek Island of Lesvos as a model system, we assess the biodiversity value of six common habitats and measure ecosystemic 'health' using pollen grain deposition in three core flowering plants as a measure of pollination services. Three fire-driven habitats were assessed: freshly burnt areas, fully regenerated pine forests and intermediate age scrub; in addition we examined oak woodlands, actively managed olive groves and groves that had been abandoned from agriculture. Oak woodlands, pine forests and managed olive groves had the highest diversity of bees. The habitat characteristics responsible for structuring bee communities were: floral diversity, floral abundance, nectar energy availability and the variety of nectar resources present. Pollination services in two of our plant species, which were pollinated by a limited sub-set of the pollinator community, indicated that pollination levels were highest in the burnt and mature pine habitats. The third species, which was open to all flower visitors, indicated that oak woodlands had the highest levels of pollination from generalist species. Pollination was always more effective in managed olive groves than in abandoned groves. However, the two most common species of bee, the honeybee and a bumblebee, were not the primary pollinators within these habitats. We conclude that the three habitats of greatest overall value for plant-pollinator communities and provision of the healthiest pollination services are pine forests, oak woodland and managed olive groves. We indicate how the highest value habitats may be maintained in a complex landscape to safeguard and enhance pollination function within these habitats and potentially in adjoining agricultural areas.
Abstract:
Measurements of the ionospheric E-region during total solar eclipses have been used to provide information about the evolution of the solar magnetic field and EUV and X-ray emissions from the solar corona and chromosphere. By measuring levels of ionisation during an eclipse and comparing these measurements with an estimate of the unperturbed ionisation levels (such as those made during a control day, where available) it is possible to estimate the percentage of ionising radiation being emitted by the solar corona and chromosphere. Previously unpublished data from the two eclipses presented here are particularly valuable as they provide information that supplements the data published to date. The eclipse of 23 October 1976 over Australia provides information in a data gap that would otherwise have spanned the years 1966 to 1991. The eclipse of 4 December 2002 over Southern Africa is important as it extends the published sequence of measurements. Comparing measurements from eclipses between 1932 and 2002 with the solar magnetic source flux reveals that changes in the solar EUV and X-ray flux lag the open source flux measurements by approximately 1.5 years. We suggest that this unexpected result comes about from changes to the relative size of the limb corona between eclipses, with the lag representing the time taken to populate the coronal field with plasma hot enough to emit the EUV and X-rays ionising our atmosphere.
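The reported ~1.5-year lag is the kind of quantity that can be estimated with a lagged cross-correlation. A minimal sketch on synthetic series follows; the paper's actual data and analysis method are not given in the abstract, so the series and the method choice here are assumptions.

```python
# Sketch: estimating the delay between two series from the peak of their
# cross-correlation. The series are synthetic toys; the abstract reports a
# ~1.5-year lag of solar EUV/X-ray flux behind the open source flux.
import numpy as np

dt = 0.25                                   # years per sample (quarterly)
t = np.arange(1932, 2003, dt)
source = np.sin(2 * np.pi * t / 11.0)       # toy 11-year cycle in source flux
euv = np.sin(2 * np.pi * (t - 1.5) / 11.0)  # same cycle, delayed by 1.5 years

# Full cross-correlation of the demeaned series; the peak gives the best lag.
corr = np.correlate(euv - euv.mean(), source - source.mean(), mode="full")
lag_samples = corr.argmax() - (len(t) - 1)  # positive: euv lags the source flux
print(f"estimated lag: {lag_samples * dt:.2f} years")  # ~1.5
```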
Abstract:
Providing high quality and timely feedback to students is often a challenge for many staff in higher education, as it can be both time-consuming and frustratingly repetitive. From the student perspective, feedback may sometimes be considered unhelpful, confusing and inconsistent, and may not always be provided within a timeframe that is considered to be ‘useful’. The ASSET project, based at the University of Reading, addresses many of these inherent challenges by encouraging the provision of feedback that supports learning, i.e. feedback that contains elements of ‘feed-forward’, is of a high quality and is delivered in a timely manner. In particular, the project exploits the pedagogic benefits of video/audio media within a Web 2.0 context to provide a new, interactive resource, ‘ASSET’, to enhance the feedback experience for both students and staff. A preliminary analysis of both our quantitative and qualitative pedagogic data demonstrates that the ASSET project has instigated change in the ways in which both staff and students think about, deliver, and engage with feedback. For example, data from our online questionnaires and focus groups with staff and students indicate a positive response to the use of video as a medium for delivering feedback to students. In particular, the academic staff engaged in piloting the ASSET resource indicated that i) using video has made them think more, and in some cases differently, about the ways in which they deliver feedback to students and ii) they now see video as an effective means of making feedback more useful and engaging for students. Moreover, the majority of academic staff involved in the project have said they will continue to use video feedback. From the student perspective, 60% of those students whose lecturers used ASSET to provide video feedback said that “receiving video feedback encouraged me to take more notice of the feedback compared with normal methods” and 80% would like their lecturer to continue to use video as a method for providing feedback. An important aim of the project was for it to complement existing University-wide initiatives on feedback and for ASSET to become a ‘model’ resource for staff and students wishing to explore video as a medium for feedback provision. An institutional approach was therefore adopted, and key members of Senior Management, academics, T&L support staff, IT support and Student Representatives were embedded within the project from the start. As with all initiatives of this kind, a major issue is the future sustainability of the ASSET resource, and having had both ‘top-down’ and ‘bottom-up’ support for the project has been extremely beneficial. In association with the project team, the University is currently exploring the creation of an open-source, two-tiered video supply solution and a ‘framework’ (that other HEIs can adopt and/or adapt) to support staff in using video for feedback provision. In this way students and staff will have new opportunities to explore video and to exploit the benefits of this medium for supporting learning.
Abstract:
Background: Since their inception, Twitter and related microblogging systems have provided a rich source of information for researchers and have attracted interest in their affordances and use. Since 2009 PubMed has included 123 journal articles on medicine and Twitter, but no overview exists as to how the field uses Twitter in research. // Objective: This paper aims to identify published work relating to Twitter indexed by PubMed, and then to classify it. This classification will provide a framework in which future researchers will be able to position their work, and provide an understanding of the current reach of research using Twitter in medical disciplines. Limiting the study to papers indexed by PubMed ensures the work provides a reproducible benchmark. // Methods: Papers indexed by PubMed on Twitter and related topics were identified and reviewed. The papers were then qualitatively classified based on each paper’s title and abstract to determine their focus. The Twitter-focused work was studied in detail to determine what data, if any, it was based on, and from this a categorisation of the data set sizes used in the studies was developed. Using open-coded content analysis, additional important categories were also identified, relating to the primary methodology, domain and aspect. // Results: As of 2012, PubMed comprises more than 21 million citations from the biomedical literature, and from these a corpus of 134 potentially Twitter-related papers was identified, eleven of which were subsequently found not to be relevant. There were no papers prior to 2009 relating to microblogging, a term first used in 2006. Of the remaining 123 papers which mentioned Twitter, thirty were focused on Twitter (the others referring to it tangentially). The early Twitter-focused papers introduced the topic and highlighted its potential, without carrying out any form of data analysis. The majority of published papers used analytic techniques to sort through thousands, if not millions, of individual tweets, often depending on automated tools to do so. Our analysis demonstrates that researchers are starting to use knowledge discovery methods and data mining techniques to understand vast quantities of tweets: the study of Twitter is becoming quantitative research. // Conclusions: This work is, to the best of our knowledge, the first overview study of medicine-related research based on Twitter and related microblogging. We have used five dimensions to categorise published medicine-related research on Twitter. This classification provides a framework within which researchers studying the development and use of Twitter within medicine-related research, and those undertaking comparative studies of research relating to Twitter in the area of medicine and beyond, can position and ground their work.
Abstract:
High-density oligonucleotide (oligo) arrays are a powerful tool for transcript profiling. Arrays based on GeneChip® technology are amongst the most widely used, although GeneChip® arrays are currently available for only a small number of plant and animal species. We have therefore developed a method to improve the sensitivity of high-density oligonucleotide arrays when applied to heterologous species, and tested the method by analysing the transcriptome of Brassica oleracea L., a species for which no GeneChip® array is available, using a GeneChip® array designed for Arabidopsis thaliana (L.) Heynh. Genomic DNA from B. oleracea was labelled and hybridised to the ATH1-121501 GeneChip® array. Arabidopsis thaliana probe-pairs that hybridised to the B. oleracea genomic DNA, on the basis of the perfect-match (PM) probe signal, were then selected for subsequent B. oleracea transcriptome analysis using a .cel file parser script to generate probe mask files. The transcriptional response of B. oleracea to a mineral nutrient (phosphorus; P) stress was quantified using probe mask files generated for a wide range of gDNA hybridisation intensity thresholds. An example probe mask file generated with a gDNA hybridisation intensity threshold of 400 removed >68% of the available PM probes from the analysis but retained >96% of the available A. thaliana probe-sets. Ninety-nine of these genes were then identified as significantly regulated under P stress in B. oleracea, including homologues of P stress responsive genes in A. thaliana. Increasing the gDNA hybridisation intensity threshold up to 500 for probe selection increased the sensitivity of the GeneChip® array to detect regulation of gene expression in B. oleracea under P stress by up to 13-fold. Our open-source software to create probe mask files is freely available at http://affymetrix.arabidopsis.info/xspecies/ and may be used to facilitate transcriptomic analyses of a wide range of plant and animal species in the absence of custom arrays.
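A minimal sketch of the probe-selection step described above: keep PM probes whose gDNA hybridisation signal exceeds a threshold, and emit a mask restricting later analysis to those probes. The data structures are illustrative, not the project's actual .cel parser or mask file format.

```python
# Sketch of threshold-based probe selection. Input maps each probe set to its
# PM probe intensities from the genomic DNA hybridisation; output keeps only
# probes above the threshold, dropping probe sets with no passing probes.
def build_probe_mask(pm_intensities, threshold=400):
    """pm_intensities: {probe_set_id: [PM intensities from the gDNA hyb]}.

    Returns {probe_set_id: [indices of probes passing the threshold]}.
    """
    mask = {}
    for probe_set, intensities in pm_intensities.items():
        kept = [i for i, v in enumerate(intensities) if v >= threshold]
        if kept:
            mask[probe_set] = kept
    return mask

# Toy input: two probe sets with three PM probes each (IDs are made up).
example = {"At1g01010_at": [523, 112, 760], "At1g01020_at": [88, 40, 95]}
print(build_probe_mask(example))  # {'At1g01010_at': [0, 2]}
```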
Abstract:
The CHARMe project enables the annotation of climate data with key pieces of supporting information that we term “commentary”. Commentary reflects the experience that has built up in the user community, and can help new or less-expert users (such as consultants, SMEs, experts in other fields) to understand and interpret complex data. In the context of global climate services, the CHARMe system will record, retain and disseminate this commentary on climate datasets, and provide a means for feeding back this experience to the data providers. Based on novel linked data techniques and standards, the project has developed a core system, data model and suite of open-source tools to enable this information to be shared, discovered and exploited by the community.
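A minimal sketch of what a "commentary" record might look like as linked data, here expressed in W3C Web Annotation-style JSON-LD. The vocabulary choice and all URIs are illustrative assumptions; the abstract says CHARMe builds on linked-data standards but its exact data model is not reproduced here.

```python
# Sketch: a commentary record linking a publication to a climate dataset,
# shaped like a W3C Web Annotation. Every URI below is a placeholder.
import json

annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "motivation": "linking",
    "body": "https://doi.org/10.5555/example-paper",  # the supporting document
    "target": "https://example.org/datasets/sst-v2",  # the annotated dataset
    "creator": {"type": "Person", "name": "A. Data-User"},
}

print(json.dumps(annotation, indent=2))
```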
Abstract:
For users of climate services, the ability to quickly determine the datasets that best fit their needs would be invaluable. The volume, variety and complexity of climate data make this judgment difficult. The ambition of CHARMe ("Characterization of metadata to enable high-quality climate services") is to give a wider interdisciplinary community access to a range of supporting information, such as journal articles, technical reports or feedback on previous applications of the data. The capture and discovery of this "commentary" information, often created by data users rather than data providers, and currently not linked to the data themselves, has not been significantly addressed previously. CHARMe applies the principles of Linked Data and open web standards to associate, record, search and publish user-derived annotations in a way that can be read both by users and by automated systems. Tools have been developed within the CHARMe project that enable annotation capability for data delivery systems already in wide use for discovering climate data. In addition, the project has developed advanced tools for exploring data and commentary in innovative ways, including an interactive data explorer and comparator ("CHARMe Maps") and a tool for correlating climate time series with external "significant events" (e.g. instrument failures or large volcanic eruptions) that affect data quality. Although the project focuses on climate science, the concepts are general and could be applied to other fields. All CHARMe system software is open-source, released under a liberal licence, permitting future projects to re-use the source code as they wish.
Abstract:
Geospatial information of many kinds, from topographic maps to scientific data, is increasingly being made available through web mapping services. These allow georeferenced map images to be served from data stores and displayed in websites and geographic information systems, where they can be integrated with other geographic information. The Open Geospatial Consortium’s Web Map Service (WMS) standard has been widely adopted in diverse communities for sharing data in this way. However, current services typically provide little or no information about the quality or accuracy of the data they serve. In this paper we describe the design and implementation of a new “quality-enabled” profile of WMS, which we call “WMS-Q”. The profile describes how information about data quality can be transmitted to the user through WMS. Such information can exist at many levels, from entire datasets to individual measurements, and includes the many different ways in which data uncertainty can be expressed. We also describe proposed extensions to the Symbology Encoding specification, which include provision for visualising uncertainty in raster data in a number of different ways, including contours, shading and bivariate colour maps. Finally, we describe new open-source implementations of the new specifications, which include both clients and servers.
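As an illustration of one visualisation approach named above, here is a minimal sketch of a bivariate colour map in which hue encodes the data value and saturation encodes uncertainty (uncertain pixels fade towards grey). This is generic matplotlib code, not the WMS-Q or Symbology Encoding implementation.

```python
# Sketch of a bivariate colour map: hue carries the data value, saturation
# carries certainty, brightness is held constant. The encoding choices are
# illustrative; other hue/saturation mappings are equally valid.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import hsv_to_rgb

x = np.linspace(0, 1, 200)
data = np.tile(x, (200, 1))             # data value varies left to right
uncert = np.tile(x[:, None], (1, 200))  # uncertainty varies bottom to top

hsv = np.dstack([
    0.7 * (1 - data),         # hue: blue for low values through to red for high
    1 - uncert,               # saturation: certain pixels vivid, uncertain ones grey
    np.full_like(data, 0.9),  # constant brightness (HSV value channel)
])
plt.imshow(hsv_to_rgb(hsv), origin="lower", extent=[0, 1, 0, 1])
plt.xlabel("data value")
plt.ylabel("uncertainty")
plt.title("Bivariate colour map (sketch)")
plt.show()
```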