153 results for hypertext
Abstract:
Online enquiry communities such as Question Answering (Q&A) websites allow people to seek answers to all kinds of questions. With the growing popularity of such platforms, it is important for community managers to constantly monitor the performance of their communities. Although different metrics have been proposed for tracking the evolution of such communities, maturity, the process in which communities become more topic proficient over time, has been largely ignored despite its potential to help in identifying robust communities. In this paper, we interpret community maturity as the proportion of complex questions in a community at a given time. We use the Server Fault (SF) community, a Question Answering (Q&A) community of system administrators, as our case study and perform analysis on question complexity, the level of expertise required to answer a question. We show that question complexity depends on both the length of involvement and the level of contributions of the users who post questions within their community. We extract features relating to askers, answerers, questions and answers, and analyse which features are strongly correlated with question complexity. Although our findings highlight the difficulty of automatically identifying question complexity, we found that complexity is most strongly influenced by both the topical focus and the length of community involvement of askers. Following the identification of question complexity, we define a measure of maturity and analyse the evolution of different topical communities. Our results show that different topical communities show different maturity patterns. Some communities show a high maturity at the beginning while others exhibit a slow maturity rate. Copyright 2013 ACM.
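The maturity measure defined in this abstract (the proportion of complex questions in a topical community at a given time) can be sketched as follows. This is a minimal illustration with invented field names and data, not the paper's actual implementation:

```python
# Hypothetical sketch: community maturity as the proportion of complex
# questions posted for a topic in a given time bucket. The Question
# fields and the sample data below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Question:
    topic: str        # topical community the question belongs to
    month: int        # time bucket in which the question was posted
    is_complex: bool  # whether expert-level knowledge is needed to answer

def maturity(questions, topic, month):
    """Proportion of complex questions for a topic in a time bucket."""
    bucket = [q for q in questions if q.topic == topic and q.month == month]
    if not bucket:
        return 0.0
    return sum(q.is_complex for q in bucket) / len(bucket)

qs = [
    Question("networking", 1, True),
    Question("networking", 1, False),
    Question("networking", 1, True),
    Question("backup", 1, False),
]
print(maturity(qs, "networking", 1))  # 2 of 3 questions are complex
```

Tracking this value per topic over successive time buckets gives the maturity patterns the abstract describes.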
Abstract:
Topic classification (TC) of short text messages offers an effective and fast way to reveal events happening around the world, ranging from those related to Disaster (e.g. Hurricane Sandy) to those related to Violence (e.g. the Egyptian revolution). Previous approaches to TC have mostly focused on exploiting individual knowledge sources (KS) (e.g. DBpedia or Freebase) without considering the graph structures that surround concepts present in KSs when detecting the topics of Tweets. In this paper we introduce a novel approach for harnessing such graph structures from multiple linked KSs, by: (i) building a conceptual representation of the KSs, (ii) leveraging contextual information about concepts by exploiting semantic concept graphs, and (iii) providing a principled way for the combination of KSs. Experiments evaluating our TC classifier in the context of Violence detection (VD) and Emergency Responses (ER) show promising results that significantly outperform various baseline models, including an approach using a single KS without linked data and an approach using only Tweets. Copyright 2013 ACM.
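The combination step (iii) can be illustrated with a toy sketch: each knowledge source maps terms to candidate topic concepts, and evidence is pooled across sources with per-source weights. The sources, weights, and mappings below are invented for demonstration; the paper's actual model exploits richer graph structure:

```python
# Illustrative sketch of combining topic evidence from multiple linked
# knowledge sources (KSs). All data and weights here are assumptions.
def classify(tweet_terms, knowledge_sources, ks_weights):
    """Score each topic by summing weighted concept matches across KSs."""
    scores = {}
    for name, concept_topics in knowledge_sources.items():
        w = ks_weights.get(name, 1.0)
        for term in tweet_terms:
            for topic in concept_topics.get(term, []):
                scores[topic] = scores.get(topic, 0.0) + w
    return max(scores, key=scores.get) if scores else None

ks = {
    "dbpedia":  {"hurricane": ["Disaster"], "protest": ["Violence"]},
    "freebase": {"hurricane": ["Disaster", "Weather"]},
}
weights = {"dbpedia": 1.0, "freebase": 0.5}
print(classify(["hurricane", "flood"], ks, weights))  # prints Disaster
```

Weighting the sources is one simple stand-in for the "principled combination" the abstract mentions.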
Abstract:
With the development of social media tools such as Facebook and Twitter, mainstream media organizations including newspapers and TV media have played an active role in engaging with their audience and strengthening their influence on the recently emerged platforms. In this paper, we analyze the behavior of mainstream media on Twitter and study how they exert their influence to shape public opinion during the UK's 2010 General Election. We first propose an empirical measure to quantify mainstream media bias based on sentiment analysis and show that it correlates better with the actual political bias in the UK media than purely quantitative measures based on media coverage of the various political parties. We then compare the information diffusion patterns from different categories of sources. We found that while mainstream media is good at seeding prominent information cascades, its role in shaping public opinion is being challenged by journalists, since tweets from journalists are more likely to be retweeted, spread faster, and have a longer lifespan compared to tweets from mainstream media. Moreover, the political bias of the journalists is a good indicator of the actual election results. Copyright 2013 ACM.
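A sentiment-based bias measure of the kind described here can be sketched as the signed difference between a source's average sentiment toward two parties. The function, the sentiment values, and the sample tweets below are invented illustrations, not the paper's actual measure:

```python
# Hedged sketch: sentiment-based media bias. For each party, average
# the sentiment of a source's tweets mentioning that party; the signed
# difference indicates leaning. Data and scores are assumptions.
def bias_score(tweets, party_a, party_b):
    """Positive result leans toward party_a; negative toward party_b."""
    def avg_sentiment(party):
        vals = [s for text, s in tweets if party in text]
        return sum(vals) / len(vals) if vals else 0.0
    return avg_sentiment(party_a) - avg_sentiment(party_b)

tweets = [
    ("Labour pledges new funding", 0.6),
    ("Labour manifesto criticised", -0.2),
    ("Conservatives defend record", 0.1),
]
print(round(bias_score(tweets, "Labour", "Conservatives"), 2))  # prints 0.1
```

A score near zero would indicate balanced coverage under this toy definition.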
Abstract:
The value of Question Answering (Q&A) communities is dependent on members of the community finding the questions they are most willing and able to answer. This can be difficult in communities with a high volume of questions. Much previous work has attempted to address this problem by recommending questions similar to those already answered. However, this approach disregards the question selection behaviour of the answerers and how it is affected by factors such as question recency and reputation. In this paper, we identify the parameters that correlate with such behaviour by analysing the users' answering patterns in a Q&A community. We then generate a model to predict which question a user is most likely to answer next. We train Learning to Rank (LTR) models to predict question selections using various user, question and thread feature sets. We show that answering behaviour can be predicted with a high level of success, and highlight the particular features that influence users' question selections.
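The prediction task can be sketched in the spirit of Learning to Rank with a simple pointwise linear scorer over candidate questions. The feature names and weights below are illustrative assumptions; the paper trains proper LTR models over user, question, and thread feature sets:

```python
# Minimal pointwise ranking sketch: score candidate questions with a
# linear model over features and recommend the top-scoring one.
# Features and weights are invented for illustration.
def rank_questions(candidates, weights):
    """Return (question_id, features) pairs sorted by descending score."""
    def score(features):
        return sum(weights.get(name, 0.0) * value
                   for name, value in features.items())
    return sorted(candidates, key=lambda c: score(c[1]), reverse=True)

weights = {"recency": 0.5, "asker_reputation": 0.3, "topic_match": 0.8}
candidates = [
    ("q1", {"recency": 0.9, "asker_reputation": 0.2, "topic_match": 0.1}),
    ("q2", {"recency": 0.4, "asker_reputation": 0.8, "topic_match": 0.9}),
]
print(rank_questions(candidates, weights)[0][0])  # prints q2
```

In a trained LTR model the weights would be learned from observed answering behaviour rather than fixed by hand.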
Abstract:
Apple is a collection of poems that explores the connection between human relationships and the evolution of an identity. Multiple speakers investigate gender and sexuality, plenitude and poverty, atheism and Christianity in order to better understand some of the forces that affect a woman's consciousness. An awareness of perceived dualities, such as self and other, reason and faith, nature and technology, socialization and loneliness, is central to this exploration. The poems employ various forms, such as ultra-talk narratives, lyrical meditations, prose poetry, epistolary poems and hypertext. The variety of structure and form in the collection mirrors the variety of approaches the speakers employ to move closer to and further away from the subjects at hand. The rhetorical posture employed in each poem is directly linked to the speaker's relationship with the audience, which is itself an example of a human relationship affecting the evolution of an identity.
Abstract:
Thanks to the growth, expansion and popularization of the World Wide Web, its technological development has acquired a growing importance in society. The symbiosis between these two environments has fostered a greater social influence on the platform's innovations and a much more practical focus. Our aim in this article is to describe, characterize and analyse the emergence and diffusion of the new hypertext standard that governs the Web: HTML5. At the same time, we explore this process in the light of several theories that bring together technology and society. We devote special attention to the users of the World Wide Web and to their general use of Social Media. We suggest that the development of web standards is influenced by the everyday use of this new kind of technologies and applications.
Abstract:
This seminar consists of two very different research reports by PhD students in WAIS. Hypertext Engineering, Fettling or Tinkering (Mark Anderson): Contributors to a public hypertext such as Wikipedia do not necessarily record their maintenance activities, but some specific hypertext features, such as transclusion, could indicate deliberate editing with a mind to the hypertext's long-term use. The MediaWiki software used to create Wikipedia supports transclusion, a deliberately hypertextual form of content creation which aids long-term consistency. This talk discusses the evidence of the use of hypertext transclusion in Wikipedia, and its implications for the coherence and stability of Wikipedia. Designing a Public Intervention - Towards a Sociotechnical Approach to Web Governance (Faranak Hardcastle): In this talk I introduce a critical and speculative design for a socio-technical intervention, called TATE (Transparency and Accountability Tracking Extension), that aims to enhance transparency and accountability in Online Behavioural Tracking and Advertising mechanisms and practices.
Abstract:
An analysis of the legendary tragedy in three acts published in 1962 by the French playwright Jean Geschwin, in which, through an original re-creation of the myth of Procne and Philomela, the author expresses the worry and unease caused by the two decades of almost uninterrupted bloody wars that were tearing France apart.
Abstract:
Accompanying material: Didactic sequence: working on the concept and characteristics of fungi: field research for the identification of fungi
Abstract:
Two complementary de facto standards for the publication of electronic documents are HTML on the World Wide Web and Adobe's PDF (Portable Document Format) language for use with Acrobat viewers. Both these formats provide support for hypertext features to be embedded within documents. We present a method that allows links and other hypertext material to be kept in an abstract form in separate link databases. The links can then be interpreted or compiled at any stage and applied, in the correct format, to some specific representation such as HTML or PDF. This approach is of great value in keeping hyperlinks relevant, up-to-date and in a form which is independent of the finally delivered electronic document format. Four models are discussed for allowing publishers to insert links into documents at a late stage. The techniques discussed have been implemented using a combination of Acrobat plug-ins, Web servers and Web browsers.
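The core idea of a separate link database, with links compiled into a concrete format at a late stage, can be sketched as follows. The database entries and URLs are invented, and real link services handle far more than literal phrase matching:

```python
# Sketch of the "separate link database" idea: links are stored
# abstractly (source phrase -> target URL) and compiled into a concrete
# format such as HTML at publication time. Entries are illustrative.
link_db = {
    "Portable Document Format": "https://example.org/pdf-spec",
    "hypertext": "https://example.org/hypertext",
}

def compile_links_html(text, db):
    """Rewrite each phrase found in the link database as an HTML anchor."""
    for phrase, url in db.items():
        text = text.replace(phrase, f'<a href="{url}">{phrase}</a>')
    return text

doc = "PDF stands for Portable Document Format."
print(compile_links_html(doc, link_db))
```

A second back end (e.g. one emitting PDF link annotations via an Acrobat plug-in, as the abstract describes) could consume the same database, which is what keeps the links format-independent.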
Abstract:
Adobe's Acrobat software, released in June 1993, is based around a new Portable Document Format (PDF) which offers the possibility of being able to view and exchange electronic documents, independent of the originating software, across a wide variety of supported hardware platforms (PC, Macintosh, Sun UNIX etc.). The fact that Acrobat's imageable objects are rendered with full use of Level 2 PostScript means that the most demanding requirements can be met in terms of high-quality typography and device-independent colour. These qualities will be very desirable components in future multimedia and hypermedia systems. The current capabilities of Acrobat and PDF are described; in particular the presence of hypertext links, bookmarks, and yellow sticker annotations (in release 1.0) together with article threads and multimedia plug-ins in version 2.0. This article also describes the CAJUN project (CD-ROM Acrobat Journals Using Networks), which has been investigating the automated placement of PDF hypertextual features from various front-end text processing systems. CAJUN has also been experimenting with the dissemination of PDF over e-mail, via the World Wide Web and on CD-ROM.
Abstract:
Adobe's Acrobat software, released in June 1993, is based around a new Portable Document Format (PDF) which offers the possibility of being able to view and exchange electronic documents, independent of the originating software, across a wide variety of supported hardware platforms (PC, Macintosh, Sun UNIX etc.). The fact that the imageable objects are rendered with full use of Level 2 PostScript means that the most demanding requirements can be met in terms of high-quality typography, device-independent colour and full page fidelity with respect to the printed version. PDF possesses an internal structure which supports hypertextual features, and a range of file compression options. In a sense PDF establishes a low-level multiplatform machine code for imageable objects, but its notion of hypertext buttons and links is similarly low-level, in that they are anchored to physical locations on fixed pages. However, many other hypertext systems think of links as potentially spanning multiple files, which may in turn be located on various machines scattered across the Internet. The immediate challenge is to bridge the "abstraction gap" between high-level notions of a link and PDF's positionally-anchored low-level view. More specifically, how can Mosaic, WWW and Acrobat/PDF be configured so that the notions of "link" in the various systems work together harmoniously? This paper reviews progress so far on the CAJUN project (CD-ROM Acrobat Journals Using Networks), with particular reference to experiments that have already taken place in disseminating PDF via e-mail, Gopher and FTP. The prospects for integrating Acrobat seamlessly with WWW are then discussed.
Abstract:
Repeat photography (RP) is an efficient, effective and useful method to identify trends of change in landscapes, and has long been used to illustrate long-term landscape change. In the Northeast of Portugal, landscape change is currently driven mostly by agricultural abandonment and by agriculture and energy policy. However, there is a need to monitor changes in the region using a multitemporal and multiscale approach. This project aimed to establish an online repository of oblique digital photography from the region, to be used to register the condition of the landscape as recorded in historical and contemporary photography over time, as well as to support qualitative and quantitative assessment of landscape change using repeat photography techniques and methods. It involved the development of a relational database and a series of web-based services using the PHP: Hypertext Preprocessor language, and the development of an interface, built with Joomla, for uploading and downloading of pictures by users. The repository will make it possible to upload, store, search by location, theme, or date, display, and download pictures for Northeastern Portugal. The website service is devoted to helping researchers quickly obtain the photographs needed to apply RP through a purpose-built search engine. It can be accessed at: http://esa.ipb.pt/digitalandscape/.
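The search-by-metadata facility described above can be sketched with a simple filter over photograph records. The record fields below are assumptions based on the abstract, not the repository's actual schema, and the sketch is in Python rather than the project's PHP:

```python
# Illustrative sketch of searching a photo repository by location,
# theme, or year. Field names and sample records are assumptions.
def search(photos, location=None, theme=None, year=None):
    """Return photo records matching all of the given criteria."""
    results = photos
    if location is not None:
        results = [p for p in results if p["location"] == location]
    if theme is not None:
        results = [p for p in results if p["theme"] == theme]
    if year is not None:
        results = [p for p in results if p["year"] == year]
    return results

photos = [
    {"id": 1, "location": "Bragança", "theme": "agriculture", "year": 1950},
    {"id": 2, "location": "Bragança", "theme": "agriculture", "year": 2010},
    {"id": 3, "location": "Miranda", "theme": "forest", "year": 2010},
]
print([p["id"] for p in search(photos, location="Bragança")])  # prints [1, 2]
```

Pairing an older and a newer photograph of the same location, as in the first two records, is exactly the input repeat photography needs.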
Abstract:
Doctoral thesis, Universidade de Brasília, Faculdade de Educação, Programa de Pós-graduação em Educação, 2016.