928 results for WORLD WIDE WEB (NETWORK INFORMATION SERVICE) - CULTURAL ASPECTS - COLOMBIA
Abstract:
The World Wide Web Consortium, W3C, is known for standards like HTML and CSS, but there is a lot more to it than that: mobile, automotive, publishing, graphics, TV and more. Then there are horizontal issues like privacy, security, accessibility and internationalisation. Many of these assume that there is an underlying data infrastructure to power applications. In this session, W3C's Data Activity Lead, Phil Archer, will describe the overall vision for better use of the Web as a platform for sharing data and how that translates into recent, current and possible future work. What's the difference between using the Web as a data platform and as a glorified USB stick? Why does it matter? And what makes a standard a standard anyway?
Speaker Biography: Phil Archer is Data Activity Lead at W3C, the industry standards body for the World Wide Web, coordinating W3C's work in the Semantic Web and related technologies. He is most closely involved in the Data on the Web Best Practices, Permissions and Obligations Expression, and Spatial Data on the Web Working Groups. His key themes are interoperability through common terminology and URI persistence. Alongside his work at W3C, his career has encompassed broadcasting, teaching, linked data publishing, copywriting and, perhaps incongruously, countryside conservation. The common thread throughout has been a knack for communication, particularly communicating complex technical ideas to a more general audience.
Abstract:
It is estimated that 1 in 5 people will, at some point in their lives, experience a long-term illness or disability that will impact their day-to-day lives. Access to digital information and technologies can be life-changing and is a necessity for full participation in education, work and society. Specialist assistive technologies, such as screen readers, have been available for many years and are now built into operating systems and devices. In addition, web accessibility standards have been compiled and published since the advent of the World Wide Web over two decades ago. However, internet use by people with disabilities continues to lag significantly behind that of people without a disability, and use of assistive technologies remains lower than it should be, with tools often abandoned. In this seminar we will talk about our work to identify digital accessibility challenges: the barriers experienced by those with disabilities and how computer scientists can play a part in removing obstacles to access and ease of use. We will discuss some of our projects focussing on:
• development of assistive technologies for niche groups of users,
• improving accessibility standards to cover a wider range of disabilities,
• creating accessibility training resources for developers and stakeholders,
• embedding accessibility practice within development projects.
Abstract:
The XML-based specification for Scalable Vector Graphics (SVG), sponsored by the World Wide Web Consortium, allows for compact and descriptive vector graphics for the Web. SVG's domain of discourse is that of graphic primitives whose optional attributes express line thickness, fill patterns, text size and so on. These primitives have very different properties from those of traditional document components (e.g. sections, paragraphs etc.) that XML is normally called upon to express. This paper describes a set of three tools for creating SVG, either from first principles or via the conversion of existing formats. The ab initio generation of SVG is effected from a server-side CGI script, using a Perl library of drawing functions; later sections highlight the problems of converting Adobe PostScript and Macromedia's Shockwave format (SWF) into SVG.
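As an illustration of the kind of drawing-function library the paper describes (the actual tools use Perl driven by CGI; this is a minimal Python sketch with invented function names):

    # Minimal sketch: emitting SVG graphic primitives whose attributes
    # carry the styling (line thickness, colour, etc.). Illustrative only;
    # the paper's own library is written in Perl and invoked from CGI.
    def svg_line(x1, y1, x2, y2, width=1, colour="black"):
        return (f'<line x1="{x1}" y1="{y1}" x2="{x2}" y2="{y2}" '
                f'stroke="{colour}" stroke-width="{width}"/>')

    def svg_document(elements, width=200, height=100):
        body = "\n  ".join(elements)
        return (f'<svg xmlns="http://www.w3.org/2000/svg" '
                f'width="{width}" height="{height}">\n  {body}\n</svg>')

    print(svg_document([svg_line(10, 10, 190, 90, width=2, colour="red")]))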
Abstract:
For some years now the Internet and World Wide Web communities have envisaged moving to a next generation of Web technologies by promoting a globally unique, and persistent, identifier for identifying and locating many forms of published objects. These identifiers are called Uniform Resource Names (URNs) and they hold out the prospect of being able to refer to an object by what it is (signified by its URN), rather than by where it is (the current URL technology). One early implementation of URN ideas is the Unicode-based Handle technology, developed at CNRI in Reston, Virginia. The Digital Object Identifier (DOI) is a specific URN naming convention proposed just over 5 years ago and now administered by the International DOI Foundation, founded by a consortium of publishers and based in Washington DC. The DOI is being promoted for managing electronic content and for intellectual rights management of it, either using the published work itself or, increasingly, via metadata descriptors for the work in question. This paper describes the use of the CNRI handle parser to navigate a corpus of papers for the Electronic Publishing journal. These papers are in PDF format and are hosted on our server in Nottingham. For each paper in the corpus a metadata descriptor is prepared for every citation appearing in the References section. The important factor is that the underlying handle is resolved locally in the first instance. In some cases (e.g. cross-citations within the corpus itself and links to known resources elsewhere) the handle can be handed over to CNRI for further resolution. This work shows the encouraging prospect of being able to use persistent URNs not only for intellectual property negotiations but also for search and discovery. In the test domain of this experiment every single resource referred to within a given paper can be resolved, at least to the level of metadata about the referred object. If the Web were to become more fully URN-aware then a vast directed graph of linked resources could be accessed via persistent names. Moreover, if these names delivered embedded metadata when resolved, the way would be open for a new generation of vastly more accurate and intelligent Web search engines.
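For concreteness, a hedged Python sketch of name-to-location resolution through the public Handle proxy REST API at doi.org (the endpoint and the sample DOI are assumptions of this illustration; the paper itself used the CNRI handle parser with local-first resolution):

    # Resolve a persistent name (a DOI-style handle) to its current
    # location(s) via the doi.org Handle proxy REST API.
    import json
    import urllib.request

    def resolve_handle(handle):
        # The proxy returns the handle's typed values; the URL value is
        # "where it is", while the handle itself remains "what it is".
        url = f"https://doi.org/api/handles/{handle}"
        with urllib.request.urlopen(url) as resp:
            record = json.load(resp)
        return [v["data"]["value"] for v in record.get("values", [])
                if v.get("type") == "URL"]

    print(resolve_handle("10.1000/1"))  # sample DOI, not one from the paper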
Abstract:
As the amount of material on the World Wide Web continues to grow, users are discovering that the Web's embedded, hard-coded links are difficult to maintain and update. Hyperlinks need a degree of abstraction in the way they are specified, together with a sound underlying document structure and the property of separability from the documents they are linking. The case is made by studying the advantages of program/data separation in computer system architectures and also by re-examining some selected hypermedia systems that have already implemented separability. The prospects for introducing more abstract links into future versions of HTML and PDF, via emerging standards such as XPath, XPointer, XLink and URN, are briefly discussed.
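The separability argument can be sketched concretely: hold the links in an external linkbase, in the spirit of XLink's out-of-line links, so that documents carry only abstract anchors. All names below are illustrative:

    # Minimal sketch of link/document separability: hyperlinks live
    # outside the documents, keyed by (document, anchor).
    from typing import Optional

    LINKBASE = {
        ("paper.html", "ref-svg"): "https://www.w3.org/TR/SVG11/",
        ("paper.html", "ref-xlink"): "https://www.w3.org/TR/xlink11/",
    }

    def resolve(document: str, anchor: str) -> Optional[str]:
        # Updating a target means editing one linkbase entry,
        # not every document that cites it.
        return LINKBASE.get((document, anchor))

    print(resolve("paper.html", "ref-svg"))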
Abstract:
Adobe's Acrobat software, released in June 1993, is based around a new Portable Document Format (PDF) which offers the possibility of being able to view and exchange electronic documents, independent of the originating software, across a wide variety of supported hardware platforms (PC, Macintosh, Sun UNIX etc.). The fact that Acrobat's imageable objects are rendered with full use of Level 2 PostScript means that the most demanding requirements can be met in terms of high-quality typography and device-independent colour. These qualities will be very desirable components in future multimedia and hypermedia systems. The current capabilities of Acrobat and PDF are described; in particular the presence of hypertext links, bookmarks and yellow sticker annotations (in release 1.0), together with article threads and multimedia plugins in version 2.0. This article also describes the CAJUN project (CD-ROM Acrobat Journals Using Networks), which has been investigating the automated placement of PDF hypertextual features from various front-end text processing systems. CAJUN has also been experimenting with the dissemination of PDF over e-mail, via the World Wide Web and on CD-ROM.
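As a taste of automated placement of such features, a hedged sketch using the open-source reportlab library (a stand-in chosen for illustration; this is not the CAJUN tooling):

    # Place a bookmark and a Web link in a generated PDF.
    from reportlab.pdfgen import canvas

    c = canvas.Canvas("demo.pdf")
    c.bookmarkPage("intro")                              # named destination
    c.addOutlineEntry("Introduction", "intro", level=0)  # bookmark panel entry
    c.drawString(72, 720, "See the journal online")
    c.linkURL("https://www.example.org/", (72, 710, 250, 730), relative=0)
    c.showPage()
    c.save()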
Abstract:
The Autonomous Region of Madeira is a tourist region with an intense and diversified cultural offering, driven by a considerable number of entities. The need to manage all of this offering is becoming ever more pressing. From this need arises the concept of a single cultural agenda: a mechanism that brings together all the cultural activity carried out in the region. The Agenda Cultural da Região Autónoma da Madeira project, called CultuRAM, consists of a web application aimed at regional entities that carry out activities in the cultural domain. These entities may be public or private, provided they are involved in the promotion and publicising of cultural events. The main objective of this content management and publishing platform is to centralise the management and publicising of the cultural activity carried out in the region, positioning itself as a single cultural agenda. This tool is intended to create the conditions needed by the various stakeholders so as to ensure a better cultural offering, both for residents and for visiting tourists. This report describes and documents the research methods and development phases of the project, with emphasis on the presentation and justification of the models and technologies used.
Abstract:
Graphs are powerful tools to describe social, technological and biological networks, with nodes representing agents (people, websites, genes, etc.) and edges (or links) representing relations (or interactions) between agents. Examples of real-world networks include social networks, the World Wide Web, collaboration networks, protein networks, etc. Researchers often model these networks as random graphs. In this dissertation, we study a recently introduced social network model, named the Multiplicative Attribute Graph (MAG) model, which takes into account the randomness of nodal attributes in the process of link formation (i.e., the probability of a link existing between two nodes depends on their attributes). Kim and Leskovec, who defined the model, have claimed that it exhibits some of the properties a real-world social network is expected to have. Focusing on a homogeneous version of this model, we investigate the existence of zero-one laws for graph properties, e.g., the absence of isolated nodes, graph connectivity and the emergence of triangles. We obtain conditions on the parameters of the model so that these properties occur with high or vanishing probability as the number of nodes becomes unboundedly large. In that regime, we also investigate the property of triadic closure and the nodal degree distribution.
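For reference, the link probability in the MAG model as defined by Kim and Leskovec: each node u carries k binary attributes A_1(u), ..., A_k(u), and in the homogeneous case a single 2x2 affinity matrix governs every attribute, giving

    P[u \sim v] = \prod_{i=1}^{k} \Theta\bigl[ A_i(u), A_i(v) \bigr]

so the parameters of interest are k, the attribute distribution and the entries of \Theta.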
Abstract:
The objective of this Final Graduation Project was to formulate a design and construction proposal for an Intranet that would allow the Centro de Documentación "Alvaro Castro Jenkins" of the Banco Central de Costa Rica to manage information resources, services and products of confidential, discretionary and restricted access for the Bank's staff, over a private communications network based on TCP/IP protocols and World Wide Web (WWW) technologies. The implementation of the OVSICORI Information Unit was proposed on the basis of a diagnosis of the infrastructure, technological resources, documentary, financial and human resources, and the needs of actual and potential users. This Final Graduation Project consisted of descriptive research in which a diagnosis was made of the existing information services in the joint library of the Corte Interamericana de Derechos Humanos and the Instituto Interamericano de Derechos Humanos, in order to analyse their relevance and usefulness and to propose a continuous improvement programme for the existing services and products, as well as the design of new information services and products.
Abstract:
"January 20, 1997."
Abstract:
36 p.
Abstract:
Master's dissertation, Language Sciences, Faculdade de Ciências Humanas e Sociais, Universidade do Algarve, 2010
Abstract:
We describe the Joint Effort-Topic (JET) model and the Author Joint Effort-Topic (aJET) model, which estimate the effort required for users to contribute on different topics. We propose to learn word-level effort, taking into account term preference over time, and to use it to set the priors of our models. Since no gold standard can easily be built, we evaluate the models by measuring their ability to validate expected behaviours, such as correlations between user contributions and the associated effort.
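A hedged sketch of that evaluation idea, with entirely illustrative numbers: test whether model-estimated effort rank-correlates with observed contribution counts (for instance negatively, if higher effort deters contributions):

    # Illustrative check: do per-topic effort estimates anti-correlate
    # with per-topic contribution counts? All numbers are made up.
    from scipy.stats import spearmanr

    estimated_effort = [0.9, 0.4, 0.7, 0.2, 0.5]   # from a JET-style model
    contributions    = [3, 21, 8, 40, 15]          # observed user contributions

    rho, p = spearmanr(estimated_effort, contributions)
    print(f"Spearman rho={rho:.2f}, p={p:.3f}")    # expect rho < 0 here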
Abstract:
Things change. Words change, meaning changes and use changes both words and meaning. In information access systems this means concept schemes such as thesauri or classification schemes change. They always have. Concept schemes that have survived have evolved over time, moving from one version, often called an edition, to the next. If we want to manage how words and meanings - and as a consequence use - change in an effective manner, and if we want to be able to search across versions of concept schemes, we have to track these changes. This paper explores how we might expand SKOS, a World Wide Web Consortium (W3C) draft recommendation, in order to do that kind of tracking. The Simple Knowledge Organization System (SKOS) Core Guide is sponsored by the Semantic Web Best Practices and Deployment Working Group. The second draft, edited by Alistair Miles and Dan Brickley, was issued in November 2005. SKOS is a "model for expressing the basic structure and content of concept schemes such as thesauri, classification schemes, subject heading lists, taxonomies, folksonomies, other types of controlled vocabulary and also concept schemes embedded in glossaries and terminologies" in RDF. How SKOS handles versioning in concept schemes is an open issue. The current draft guide suggests using OWL and DCTERMS as mechanisms for concept scheme revision. As it stands, an editor of a concept scheme can make notes or declare in OWL that more than one version exists. This paper adds to the SKOS Core by introducing a tracking system for changes in concept schemes. We call this tracking system vocabulary ontogeny. Ontogeny is a biological term for the development of an organism during its lifetime. Here we use the ontogeny metaphor to describe how vocabularies change over their lifetime. Our purpose here is to create a conceptual mechanism that will track these changes and, in so doing, enhance information retrieval and prevent document loss through versioning, thereby enabling persistent retrieval.
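As a baseline for what the paper extends, a minimal sketch of recording an edition change with the vocabulary SKOS and DCTERMS already provide, built with the rdflib Python library; the URIs and notes are illustrative, and the proposed ontogeny layer would add finer-grained change tracking on top:

    # Express that a 2005 concept replaces its 1999 edition.
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import DCTERMS, RDF, SKOS

    EX = Namespace("http://example.org/vocab/")
    g = Graph()
    c2005 = URIRef(EX["C42-2005"])
    g.add((c2005, RDF.type, SKOS.Concept))
    g.add((c2005, SKOS.prefLabel, Literal("World Wide Web", lang="en")))
    g.add((c2005, DCTERMS.replaces, URIRef(EX["C42-1999"])))  # prior edition
    g.add((c2005, SKOS.changeNote, Literal("Label broadened in 2005 edition.")))
    print(g.serialize(format="turtle"))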
Abstract:
In the Developmental Biology course unit, taught since 2011 to the 2nd year of the Human Biology degree at the Universidade de Évora, an assessment model has been applied that aims to focus learning through the construction of project works carried out, in two phases, by two groups of students. This assessment model was motivated both by the fact that the content of this course unit is extremely complex for more conventional assessment, and by the opportunity it gives students to come closer to the professional context of constructing scientific knowledge through peer interaction. In the first assessment phase, the topic and the composition of the group are chosen by the students, and the phase concludes a little past the middle of the semester with the submission of a summary (review) of the information gathered, a bibliography, and a brief presentation to their colleagues, who thereby gain an overview of the topics to be developed in that academic year. In the second phase this information is handed over to a new group, this time drawn by lot from the class, which must follow a script prepared by the lecturer, based on the previously presented work, in order to prepare a final presentation in a public session. Each topic therefore has two groups of students directly involved, linked by a bond of mutual responsibility that culminates in the final public session. The name proposed for this model, Legato, highlights this interconnection. The final presentations, after any corrections, are made publicly available on the World Wide Web. The role of the classes is gradually to introduce students to the scientific discipline of Developmental Biology, providing them with its concepts, its language and, above all, its characteristic way of reasoning. In two practical sessions students are introduced to the worlds of biological and bibliographic databases (respectively), so that they can carry out a more efficient survey of relevant sources. After 5 years of this assessment practice, with more than 30 of these project works already completed, very satisfactory results have been observed for most topics, even for those (few) that suffered from a lack of commitment by the first-phase group. The public character of the final assessment phase, together with the preparation (by the lecturer) of the second-phase script, helps to guarantee the quality of the final presentations. But the author believes that the main driver of the generally high quality of the results is the bond of mutual responsibility referred to above, as well as a sense of "ownership" of the chosen topics. This assessment model can be adapted to any area of knowledge, since the peer construction of knowledge it is intended to illustrate is essentially universal; it can also be adapted to different stages of training, although it is especially suited to course units of the 1st Cycle of Higher Education.