138 results for web 3.0
Abstract:
The Semantic Web is an extension of the traditional Web in which the meaning of information is well defined, thus allowing better interaction between people and computers. To accomplish its goals, mechanisms are required to make the semantics of Web resources explicit, so that they can be automatically processed by software agents (this semantics being described by means of online ontologies). Nevertheless, issues arise from the semantic heterogeneity that naturally occurs on the Web, namely redundancy and ambiguity. To tackle these issues, we present an approach to discover and represent, in a non-redundant way, the intended meaning of words in Web applications, while taking into account the (often unstructured) context in which they appear. To that end, we have developed novel ontology matching, clustering, and disambiguation techniques. Our work is intended to help bridge the gap between syntax and semantics in the construction of the Semantic Web.
Abstract:
Like other subjects, though perhaps more markedly, mathematics is seeing its credit load greatly reduced in the new degree curricula. It is therefore very useful to offer activities that enable students to acquire competences related to this and other basic sciences. With this purpose, the Educational Innovation Group "Pensamiento Matemático" of the Universidad Politécnica de Madrid offers students an "Aula de Pensamiento Matemático" (Mathematical Thinking Classroom). It presents a series of online activities that train students in various transversal competences, most of them related to mathematical thinking.
Abstract:
Interlinking text documents with Linked Open Data enables the Web of Data to be used as background knowledge within document-oriented applications such as search and faceted browsing. As a step towards interconnecting the Web of Documents with the Web of Data, we developed DBpedia Spotlight, a system for automatically annotating text documents with DBpedia URIs. DBpedia Spotlight allows users to configure the annotations to their specific needs through the DBpedia Ontology and quality measures such as prominence, topical pertinence, contextual ambiguity and disambiguation confidence. We compare our approach with the state of the art in disambiguation, and evaluate our results in light of three baselines and six publicly available annotation systems, demonstrating the competitiveness of our system. DBpedia Spotlight is shared as open source and deployed as a Web Service freely available for public use.
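A minimal sketch of how such an annotation request might look (the endpoint URL reflects the public demo service and, like the parameter values, is an assumption; self-hosted deployments may differ):

```python
import requests

# Assumed public demo endpoint; actual deployments may use a different URL.
SPOTLIGHT_URL = "https://api.dbpedia-spotlight.org/en/annotate"

def annotate(text, confidence=0.5):
    """Ask DBpedia Spotlight to link surface forms in `text` to DBpedia URIs."""
    response = requests.get(
        SPOTLIGHT_URL,
        params={"text": text, "confidence": confidence},
        headers={"Accept": "application/json"},
        timeout=10,
    )
    response.raise_for_status()
    # Each returned resource pairs a surface form with its disambiguated URI.
    return [(r["@surfaceForm"], r["@URI"])
            for r in response.json().get("Resources", [])]

print(annotate("Berlin is the capital of Germany."))
```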
Abstract:
Pinus pinaster is an economically and ecologically important species that is becoming a woody gymnosperm model. Its enormous genome size makes whole-genome sequencing approaches hard to apply. Therefore, the expressed portion of the genome has to be characterised, and the results and annotations have to be stored in dedicated databases.
Abstract:
In this article we first introduce the new environment that frames the substantial change in the nature of the Web as we know it. We adopt a hybrid approach, dissecting in somewhat more detail the technological, socio-economic, administrative and legislative elements that, in our view, may shape the context for a feasible scenario for introducing a next-generation "Governance 2.0" capable of integrating, in a sustainable (ecosystemically viable) manner, technologies and processes that naturally support the transition from the post-industrial information era in which we have been living for some years towards an information society populated by true info-citizens (digital natives, in a growing majority). That society should be only the step prior to the ultimate utopia: the knowledge society, a stage of socio-technical evolution that is expected to be reached by the age-old method of the self-fulfilling prophecy, ignoring a considerably complex current reality.
Abstract:
Nowadays, video and web conferencing systems have become effective tools for communication and collaboration inside organizations. However, although these systems have evolved and now offer rich features (e.g. sharing multimedia and documents), they are still too focused on the moment the meeting takes place. The existing systems provide very few facilities for organizing the meeting, and they do not take advantage of the possibilities the generated content offers once the meeting is finished. In this paper, we analyze the life cycle of a web conference and how existing systems monitor these conferences. Finally, we present our solution, based on our know-how in videoconference management and our experience with these existing systems.
Abstract:
This paper describes the CyberAula 2.0 project, which presents an integrated solution for videoconferencing and lecture recording as a mechanism to support subjects that need to be promoted or discontinued within the framework of the European convergence process. Our solution is made up of a web portal, a videoconferencing tool, and an economical, easily transportable hardware kit. Recording sessions can be exported to SCORM- and LOM-compliant files that can be imported by an LMS. The validation process is currently being carried out in five scenarios at our university that use Moodle to deliver content to students.
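The abstract does not detail the export format, but a SCORM package is essentially a ZIP archive with an imsmanifest.xml at its root describing the content. A hypothetical minimal sketch of packaging a recorded session page (illustrative only; a real export would also ship the SCORM schema files and LOM metadata):

```python
import zipfile

# Minimal SCORM 1.2-style manifest for a single recorded session (illustrative).
MANIFEST = """<?xml version="1.0" encoding="UTF-8"?>
<manifest identifier="session.1" version="1.2"
          xmlns="http://www.imsproject.org/xsd/imscp_rootv1p1p2"
          xmlns:adlcp="http://www.adlnet.org/xsd/adlcp_rootv1p2">
  <organizations default="org">
    <organization identifier="org">
      <title>Recorded lecture</title>
      <item identifier="item1" identifierref="res1">
        <title>Session recording</title>
      </item>
    </organization>
  </organizations>
  <resources>
    <resource identifier="res1" type="webcontent"
              adlcp:scormtype="sco" href="index.html">
      <file href="index.html"/>
    </resource>
  </resources>
</manifest>
"""

with zipfile.ZipFile("session.zip", "w") as pkg:
    pkg.writestr("imsmanifest.xml", MANIFEST)
    pkg.writestr("index.html", "<html><body>Player for the recording</body></html>")
```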
Abstract:
In this article we present a low-cost web videoconferencing system whose goal is to improve communication between Primary Care and Specialized Care, optimizing resources and the quality of care for diseases that are currently highly prevalent. In this case it is used for metabolic problems such as diabetes or thyroid pathologies, although it could be applied to other conditions. The system is based on a free software tool (OpenMeetings) adapted to our needs, to which important features have been added, such as a virtual waiting room and schedule management. eCONSULTA has been installed in the Endocrinology and Nutrition Service of the Hospital de Sabadell and integrated into the medical information system of the Primary Care Centers of the Vallés Occidental region, in the province of Barcelona. At the time of writing, a feasibility and user-satisfaction study is being carried out.
Abstract:
In the context of the Semantic Web, natural language descriptions associated with ontologies have proven to be of major importance, not only to support ontology developers and adopters, but also to assist in tasks such as ontology mapping, information extraction, or natural language generation. In the state of the art we find some attempts to provide guidelines for URI local names in English, and also some disagreement on the use of URIs for describing ontology elements. When trying to extrapolate these ideas to a multilingual scenario, some of these approaches fail to provide a valid solution. On the basis of some real experiences in the translation of ontologies from English into Spanish, we provide a preliminary set of guidelines for naming and labeling ontologies in a multilingual scenario.
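One practice such guidelines point to can be illustrated with rdflib: keep URI local names language-neutral (here an opaque identifier) and attach language-tagged labels instead. The namespace below is invented for illustration:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDFS

# Hypothetical ontology namespace, for illustration only.
EX = Namespace("http://example.org/ontology#")

g = Graph()
g.bind("ex", EX)
# Instead of baking English into the local name, the element carries one
# language-tagged label per language, which tools can select per locale.
g.add((EX.C0001, RDFS.label, Literal("River", lang="en")))
g.add((EX.C0001, RDFS.label, Literal("Río", lang="es")))

print(g.serialize(format="turtle"))
```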
Abstract:
WCAG 2.0 was published in December 2008. It differs from WCAG 1.0 in many respects, including rationale, structure and content. Two years later there are still few tools supporting WCAG 2.0, and none of them fully mirrors the WCAG 2.0 approach organized around principles, guidelines, success criteria, situations and techniques. This paper describes the ongoing development of an update to the Hera-FFX Firefox extension to support WCAG 2.0. The description focuses on the challenges we have found and our resulting decisions.
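To give a flavor of the kind of check such tools automate (this is not Hera-FFX's code, just a standard-library sketch of one success criterion, text alternatives for images):

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flags <img> elements lacking an alt attribute (WCAG 2.0 SC 1.1.1)."""
    def __init__(self):
        super().__init__()
        self.problems = []

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "img" and "alt" not in attributes:
            self.problems.append(attributes.get("src", "<unknown>"))

checker = AltTextChecker()
checker.feed('<img src="logo.png" alt="Company logo"><img src="chart.png">')
print(checker.problems)  # ['chart.png']
```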
End-User Development Success Factors and their Application to Composite Web Development Environments
Abstract:
The Future Internet is expected to be composed of a mesh of interoperable Web services accessed from all over the Web. This approach has not yet caught on since global user-service interaction is still an open issue. Successful composite applications rely on heavyweight service orchestration technologies that raise the bar far above end-user skills. The weakness lies in the abstraction of the underlying service front-end architecture rather than the infrastructure technologies themselves. In our opinion, the best approach is to offer end-to-end composition from user interface to service invocation, as well as an understandable abstraction of both building blocks and a visual composition technique. In this paper we formalize our vision with regard to the next-generation front-end Web technology that will enable integrated access to services, contents and things in the Future Internet. We present a novel reference architecture designed to empower non-technical end users to create and share their own self-service composite applications. A tool implementing this architecture has been developed as part of the European FP7 FAST Project and EzWeb Project, allowing us to validate the rationale behind our approach.
Abstract:
In this paper we report on the results of our experiments on the construction of an opinion ontology. Our aim is to show the benefits of publishing the results of the opinion mining process in the open, on the Web, in a structured form. On the road to achieving this, we attempt to answer the research question of to what extent opinion information can be formalized in a unified way. Furthermore, as part of the evaluation, we experiment with the usage of Semantic Web technologies and show particular use cases that support our claims.
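As a hypothetical sketch of what publishing one mined opinion in structured form could look like (the vocabulary below is invented for illustration and is not necessarily the ontology the paper proposes):

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

# Made-up namespace standing in for an opinion ontology.
OP = Namespace("http://example.org/opinion#")

g = Graph()
opinion = URIRef("http://example.org/opinions/42")
g.add((opinion, RDF.type, OP.Opinion))
g.add((opinion, OP.describesObject, URIRef("http://dbpedia.org/resource/IPhone")))
g.add((opinion, OP.hasPolarity, OP.Positive))
g.add((opinion, OP.polarityValue, Literal(0.8, datatype=XSD.float)))
g.add((opinion, OP.extractedFrom, Literal("Love the screen on this phone!")))

print(g.serialize(format="turtle"))
```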
Abstract:
In spite of the increasing presence of Semantic Web facilities, only a limited number of the resources available on the Internet provide semantic access. Recent initiatives such as the emerging Linked Data Web are providing semantic access to available data by porting existing resources to the Semantic Web using different technologies, such as database-semantic mapping and scraping. Nevertheless, existing scraping solutions are ad hoc, complemented with graphical interfaces to speed up scraper development. This article proposes a generic framework for web scraping based on semantic technologies. The framework is structured in three levels: scraping services, a semantic scraping model, and syntactic scraping. The first level provides an interface through which generic applications or intelligent agents can gather information from the web at a high level. The second level defines a semantic RDF model of the scraping process, in order to provide a declarative approach to the scraping task. Finally, the third level provides an implementation of the RDF scraping model for specific technologies. The work has been validated in a scenario that illustrates its application to mashup technologies.
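The three-level idea (a declarative model interpreted over a syntactic extraction layer) can be caricatured in a few lines. The mapping vocabulary here is invented, and BeautifulSoup stands in for the syntactic scraping level:

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

HTML = """<div class="post"><h2>Title A</h2><span class="author">Ana</span></div>
<div class="post"><h2>Title B</h2><span class="author">Luis</span></div>"""

# Declarative model: RDF-style property -> CSS selector (made-up vocabulary).
SCRAPER_MODEL = {
    "scope": "div.post",
    "fields": {"dc:title": "h2", "dc:creator": "span.author"},
}

def run_scraper(html, model):
    """Interpret the declarative model against a page (syntactic level)."""
    soup = BeautifulSoup(html, "html.parser")
    for node in soup.select(model["scope"]):
        yield {prop: node.select_one(sel).get_text()
               for prop, sel in model["fields"].items()}

for record in run_scraper(HTML, SCRAPER_MODEL):
    print(record)
```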
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major areas, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its best-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that are perhaps not so well known, but on which most other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
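For instance, a basic POS-tagging step of the kind described here takes a few lines with NLTK (an illustrative choice; any tagger would serve):

```python
import nltk

# One-time setup: nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")
tokens = nltk.word_tokenize("The annotation tool tags every word.")
print(nltk.pos_tag(tokens))
# e.g. [('The', 'DT'), ('annotation', 'NN'), ('tool', 'NN'), ('tags', 'VBZ'), ...]
```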
However, linguistic annotation tools still have some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
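As a toy illustration of the combination idea for (2): if several tools annotate the same level, a simple majority vote over their outputs can cancel out some individual errors (a sketch only; real combination schemes are more elaborate):

```python
from collections import Counter

def vote(*tag_sequences):
    """Majority vote per token over the outputs of several POS taggers."""
    return [Counter(tags).most_common(1)[0][0] for tags in zip(*tag_sequences)]

tagger_a = ["DT", "NN", "VBZ"]
tagger_b = ["DT", "VB", "VBZ"]  # the second tagger errs on the second token
tagger_c = ["DT", "NN", "VBZ"]
print(vote(tagger_a, tagger_b, tagger_c))  # ['DT', 'NN', 'VBZ']
```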
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
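The propagation effect is easy to reproduce. NLTK's Lesk word-sense disambiguator, for instance, accepts a POS tag from an upstream tagger, and a wrong tag steers it to the wrong pool of candidate senses (illustrative sketch; requires the WordNet data):

```python
from nltk.wsd import lesk

# One-time setup: nltk.download("wordnet")
sentence = "I went to the bank to deposit money".split()

# A correct upstream POS tag (noun) lets the WSD step search noun senses...
print(lesk(sentence, "bank", pos="n"))
# ...while a faulty upstream tag (verb) restricts it to verb senses, so the
# low-level error is inherited (and magnified) by the high-level tool.
print(lesk(sentence, "bank", pos="v"))
```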
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. Then again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based