296 results for ternary content addressable memory
in Queensland University of Technology - ePrints Archive
Abstract:
Institutions of public memory are increasingly undertaking co-creative media initiatives in which community members create content with the support of institutional expertise and resources. This paper discusses one such initiative: the State Library of Queensland’s ‘Responses to the Apology’, which used a collaborative digital storytelling methodology to co-produce seven short videos capturing individual responses to Prime Minister Kevin Rudd’s 2008 ‘Apology to Australia’s Indigenous Peoples’. In examining this program, we are interested not only in the juxtaposition of ‘ordinary’ responses to an ‘official’ event, but also in how the production and display of these stories might demonstrate a larger mediatisation of public memory.
Abstract:
This chapter sets out the debates about the changing role of audiences in relation to user-created content as they appear in New Media and Cultural Studies. The discussion moves beyond the simple dichotomies between active producers and passive audiences, and draws on empirical evidence, in order to examine those practices that are most ordinary and widespread. Building on the knowledge of television’s role in facilitating public life, and the everyday, affective practices through which it is experienced and used, I focus on the way in which YouTube operates as a site of community, creativity and cultural citizenship; and as an archive of popular cultural memory.
Abstract:
Story Circle is the first collection ever devoted to a comprehensive international study of the digital storytelling movement. Exploring subjects of central importance on the emergent and ever-shifting digital landscape (consumer-generated content, memory grids, the digital storytelling youth movement, and micro-documentary), Story Circle pinpoints who is telling what stories, where, on what terms, and what they look and sound like.
Abstract:
In studies of media industries, too much attention has been paid to providers and firms, too little to consumers and markets. But with user-created content, the question first posed more than a generation ago by the uses and gratifications approach and taken up by semiotics and the active audience tradition (‘what do audiences do with media?’) has resurfaced with renewed force. What’s new is that where this question (of what the media industries and audiences did with each other) used to be individualist and functionalist, now, with the advent of social networks using Web 2.0 affordances, it can be re-posed at the level of systems and populations as well.
Abstract:
Search engines have forever changed the way people access and discover knowledge, allowing information about almost any subject to be quickly and easily retrieved within seconds. As ever more material becomes available electronically, the influence of search engines on our lives will continue to grow. This presents the problem of how to find what information is contained in each search engine, what bias a search engine may have, and how to select the best search engine for a particular information need. This research introduces a new method, search engine content analysis, to solve this problem. Search engine content analysis is a new development of the traditional information retrieval field of collection selection, which deals with general information repositories. Current research in collection selection relies on full access to the collections or on estimates of their size, and collection descriptions are often represented as term occurrence statistics. An automatic ontology learning method is developed for search engine content analysis, which trains an ontology with world knowledge of hundreds of different subjects in a multilevel taxonomy. This ontology is then mined for important classification rules, and these rules are used to perform an extensive analysis of the content of the largest general-purpose Internet search engines in use today. Instead of representing collections as sets of terms, as is common in collection selection, they are represented as sets of subjects, leading to a more robust representation of information and a decrease in synonymy. The ontology-based method was compared with ReDDE (Relevant Document Distribution Estimation), the current state-of-the-art collection selection method, which relies on collection size estimation; the comparison used the standard R-value metric, with encouraging results.
The method was also used to analyse the content of the most popular search engines in use today, including Google and Yahoo, as well as several specialist search engines such as PubMed and that of the U.S. Department of Agriculture. In conclusion, this research shows that the ontology-based method mitigates the need for collection size estimation.
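The core idea of the abstract above — representing a collection as a set of subjects rather than as term occurrence statistics, then ranking collections by subject match — can be illustrated with a minimal sketch. This is not the thesis's implementation: the trained ontology and mined classification rules are replaced here by a toy, hypothetical keyword-to-subject map, and the function names (`classify`, `subject_profile`, `rank_collections`) are invented for illustration.

```python
from collections import Counter

# Hypothetical stand-in for the trained ontology's classification rules.
SUBJECT_KEYWORDS = {
    "medicine": {"gene", "clinical", "protein", "disease"},
    "agriculture": {"crop", "soil", "harvest", "livestock"},
    "computing": {"algorithm", "network", "software", "database"},
}

def classify(doc_terms):
    """Assign a document (a set of terms) to its best-matching subject, or None."""
    scores = {s: len(kw & doc_terms) for s, kw in SUBJECT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

def subject_profile(sampled_docs):
    """Represent a collection as a distribution over subjects, not raw terms."""
    counts = Counter(
        s for doc in sampled_docs if (s := classify(set(doc.split()))) is not None
    )
    total = sum(counts.values()) or 1
    return {s: n / total for s, n in counts.items()}

def rank_collections(query, profiles):
    """Rank collections by the weight of the query's subject in each profile."""
    q_subject = classify(set(query.split()))
    return sorted(profiles, key=lambda c: profiles[c].get(q_subject, 0.0), reverse=True)

# Two toy collections sampled by a handful of documents each.
profiles = {
    "pubmed": subject_profile(["clinical gene disease study", "protein disease"]),
    "usda": subject_profile(["crop soil report", "livestock harvest yield"]),
}
print(rank_collections("gene disease", profiles))  # → ['pubmed', 'usda']
```

Because collections are compared at the subject level, two collections using different vocabulary for the same topic can still be matched, which is the synonymy reduction the abstract refers to; the sketch also needs only sampled documents, not the collection's true size.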