986 results for Participatory Content Creation
Abstract:
The concept of cultural diversity has emerged as an influential one, shaping multiple policy and legal instruments, especially following the adoption of the UNESCO Convention on the Protection and Promotion of the Diversity of Cultural Expressions in 2005. Discussions of its appropriate implementation are, however, profoundly fragmented and often laden with political considerations. This brief paper offers some thoughts on the meaning of cultural diversity and its implementation in the digital networked environment, taking into account the effects of digital media on cultural content creation, distribution and consumption. The paper was meant to be part of a document prepared by a civil society organisation for the 2008 OECD ministerial meeting in Seoul on the future of the Internet.
Abstract:
MPEG-M is a suite of ISO/IEC standards (ISO/IEC 23006) developed under the auspices of the Moving Picture Experts Group (MPEG). MPEG-M, also known as Multimedia Service Platform Technologies (MSPT), specifies a collection of multimedia middleware APIs and elementary services, as well as service aggregation, so that service providers can offer users a plethora of innovative services by extending current IPTV technology toward the seamless integration of personal content creation and distribution, e-commerce, social networks and Internet distribution of digital media.
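The service-aggregation idea is easy to picture in code. What follows is a minimal sketch of composing elementary services into one user-facing composite; the class and service names are hypothetical illustrations of the concept, not the actual ISO/IEC 23006 API.

```python
# Hypothetical sketch of service aggregation in the spirit of MPEG-M's
# Multimedia Service Platform Technologies; names are illustrative only,
# not the actual ISO/IEC 23006 API.

class ElementaryService:
    """A single capability, e.g. storage, licensing, or social posting."""
    def __init__(self, name):
        self.name = name

    def invoke(self, payload):
        # A real middleware would dispatch to a service provider here.
        return f"{self.name}: handled {payload!r}"


class AggregatedService:
    """Chains elementary services into one user-facing service."""
    def __init__(self, *services):
        self.services = services

    def invoke(self, payload):
        return [s.invoke(payload) for s in self.services]


# Example: publish personal content, license it, and share it socially
# as one composite operation built from elementary services.
publish = AggregatedService(
    ElementaryService("store-content"),
    ElementaryService("issue-license"),
    ElementaryService("post-to-social-network"),
)
for line in publish.invoke("holiday-video.mp4"):
    print(line)
```

In MPEG-M terms, a provider would assemble composites like this from standardized elementary services, so that content creation, commerce and social distribution appear to the user as a single service.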
Abstract:
This thesis presents a comprehensive study of the evaluation of the Quality of Experience (QoE) perceived by users of 3D video systems, analyzing the impact of the effects introduced by all the elements of the 3D video processing chain. Accordingly, various subjective assessment tests are presented, specifically designed to evaluate the systems under consideration and taking into account all the perceptual factors related to the 3D visual experience, such as depth perception and visual discomfort.
In particular, a subjective test is presented, based on evaluating typical degradations that may appear during content creation, for instance due to incorrect camera calibration or video processing algorithms (e.g., 2D-to-3D conversion). Moreover, the generation of a high-quality dataset of stereoscopic 3D videos is described; the dataset is freely available to the research community and has already been widely used in work related to 3D video. In addition, an inter-laboratory subjective study is presented that analyzes the impact of coding impairments and representation formats of stereoscopic video. Also, three subjective tests are presented that study the effects of transmission events occurring in Internet Protocol Television (IPTV) networks and adaptive streaming scenarios for 3D video. For these cases, a novel subjective evaluation methodology, called Content-Immersive Evaluation of Transmission Impairments (CIETI), was proposed, designed especially to evaluate transmission events under simulated, realistic home-viewing conditions, in order to obtain more representative conclusions about the visual experience of end users. Finally, two subjective experiments are presented comparing various current 3D displays available on the consumer market and evaluating perceptual factors of Super Multiview Video (SMV) systems, expected to be the future technology for consumer 3D displays thanks to promising glasses-free visualization of 3D content. The work presented in this thesis has provided an understanding of the perceptual and technical factors involved in the processing and visualization of 3D video content, which may be useful in the development of new technologies and approaches for QoE evaluation, both subjective methodologies and objective metrics.
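Subjective tests of this kind are commonly summarized as mean opinion scores (MOS) with confidence intervals. Below is a minimal sketch, assuming 1-5 ratings per processed sequence and a normal-approximation 95% interval in the spirit of ITU-R BT.500-style analysis; the ratings themselves are invented for illustration.

```python
# Minimal MOS computation for a subjective quality test; assumes 1-5
# ratings for one processed video sequence and a normal-approximation
# 95% confidence interval.
import math

def mos_with_ci(ratings, z=1.96):
    n = len(ratings)
    mean = sum(ratings) / n
    # Sample variance (n - 1 denominator), then half-width of the CI.
    var = sum((r - mean) ** 2 for r in ratings) / (n - 1)
    half_width = z * math.sqrt(var / n)
    return mean, half_width

# Hypothetical ratings for one 3D sequence with a calibration degradation.
ratings = [4, 3, 4, 5, 3, 4, 4, 2, 4, 3, 4, 4, 5, 3, 4]
mos, ci = mos_with_ci(ratings)
print(f"MOS = {mos:.2f} +/- {ci:.2f} (95% CI)")
```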
Abstract:
The international perspectives on these issues are especially valuable in an increasingly connected, but still institutionally and administratively diverse, world. The research addressed in several chapters in this volume includes issues around technical standards bodies like EpiDoc and the TEI, engaging with the ways these standards are implemented, documented, taught, used in the process of transcribing and annotating texts, and used to generate publications and as the basis for advanced textual or corpus research. Other chapters focus on various aspects of philological research and content creation, including collaborative or community-driven efforts, and the issues surrounding editorial oversight, curation, maintenance and sustainability of these resources. Research into ancient languages and linguistics, in particular Greek, and the language teaching that is a staple of our discipline, is also discussed in several chapters, in particular the ways in which advanced research methods can lead into language technologies and vice versa, and the ways in which teaching skills can be used for public engagement, and vice versa. A common thread through much of the volume is the importance of open access publication and open source development and distribution of texts, materials, tools and standards, both because of the public good provided by such models (circulating materials often already paid for out of the public purse) and because of the ability to reach non-standard audiences: those who cannot access rich university libraries or afford expensive print volumes. Linked Open Data is another technology that results in wide and free distribution of structured information both within and outside academic circles, and several chapters present academic work that includes ontologies and RDF, either as a direct research output or as an essential part of the communication and knowledge representation. Several chapters focus not on the literary and philological side of classics but on the study of cultural heritage, archaeology, and the material supports on which original textual and artistic material is engraved or otherwise inscribed, addressing the capture and analysis of artefacts in both 2D and 3D, the representation of data through archaeological standards, and the importance of sharing information and expertise among the several domains, both within and outside academia, that study, record and conserve ancient objects. Almost without exception, the authors reflect on the issues of interdisciplinarity and collaboration, the relationship between their research practice and teaching and/or communication with a wider public, and the importance of the role of the academic researcher in contemporary society and in the context of cutting-edge technologies. How research is communicated in a world of instant-access blogging and 140-character micromessaging, and how our expectations of the media affect not only how we publish but how we conduct our research, are questions about which all scholars need to be aware and self-critical.
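As a concrete illustration of the Linked Open Data pattern several chapters touch on, here is a minimal sketch using the rdflib Python library; the inscription, its URI and the metadata values are hypothetical, not drawn from any project in the volume.

```python
# Minimal Linked Open Data sketch with rdflib: describe a hypothetical
# ancient inscription with Dublin Core terms and print the graph as
# Turtle. URIs and metadata values are illustrative only.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS, RDF

EX = Namespace("http://example.org/inscriptions/")

g = Graph()
g.bind("dcterms", DCTERMS)
inscription = EX["IG-0001"]  # hypothetical identifier
g.add((inscription, RDF.type, DCTERMS.PhysicalResource))
g.add((inscription, DCTERMS.title, Literal("Dedication to Apollo")))
g.add((inscription, DCTERMS.language, Literal("grc")))  # ancient Greek
g.add((inscription, DCTERMS.medium, Literal("marble stele")))

print(g.serialize(format="turtle"))
```

Publishing such triples at stable URIs is what lets the structured information circulate freely both inside and outside academic circles.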
Abstract:
How can technical communicators in organizations benefit from wiki technology? This article alerts technical communicators to the possibilities of wiki-based collaborative content creation. It analyzes 32 articles on the use of corporate wikis and compares them against three media choice theories: media richness theory, the theory of media synchronicity, and common ground theory.
Abstract:
This seminar consists of two very different research reports by PhD students in WAIS. Hypertext Engineering, Fettling or Tinkering (Mark Anderson): Contributors to a public hypertext such as Wikipedia do not necessarily record their maintenance activities, but some specific hypertext features, such as transclusion, can indicate deliberate editing with a mind to the hypertext's long-term use. The MediaWiki software used to create Wikipedia supports transclusion, a deliberately hypertextual form of content creation which aids long-term consistency. This talk discusses the evidence of the use of hypertext transclusion in Wikipedia and its implications for the coherence and stability of Wikipedia. Designing a Public Intervention - Towards a Sociotechnical Approach to Web Governance (Faranak Hardcastle): In this talk I introduce a critical and speculative design for a sociotechnical intervention, called TATE (Transparency and Accountability Tracking Extension), that aims to enhance transparency and accountability in online behavioural tracking and advertising mechanisms and practices.
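Evidence of transclusion can be surfaced mechanically from wikitext. A rough sketch follows; since MediaWiki's double-brace syntax covers both template calls and page transclusions, a scan like this over-approximates deliberate transclusion and would need filtering in practice.

```python
# Rough sketch: extract double-brace transclusion/template targets from
# MediaWiki wikitext. Template calls and page transclusions share the
# {{...}} syntax, so this over-approximates true page transclusion.
import re

TRANSCLUSION = re.compile(r"\{\{\s*([^|}]+)")

def transclusion_targets(wikitext):
    return [m.strip() for m in TRANSCLUSION.findall(wikitext)]

sample = "{{Infobox person|name=Ada}} Text. {{:Style guide}} {{cite web|url=...}}"
print(transclusion_targets(sample))
# -> ['Infobox person', ':Style guide', 'cite web']
# Targets beginning with ':' transclude regular pages rather than templates.
```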
Abstract:
Chapter 6 concerns 'Designing and developing digital and blended learning solutions'. However, despite its title, it is not aimed at developing L&D professionals to be technologists (just as Chapter 3 is not aimed at developing L&D professionals to be accounting and financial experts). Chapter 6 is about developing L&D professionals to be technology savvy. In doing so, I adopt a culinary analogy in presenting this chapter, where the most important factors in creating a dish (e.g. blended learning) are the ingredients and the flavour each of them brings. The chapter first explores the typical technologies and technology products that are available for learning and development, i.e. the ingredients. I then introduce the data Format, Interactivity/Immersion, Timing, Content (creation and curation), Connectivity and Administration (FITCCA) framework, which helps L&D professionals look beyond the labels of technologies to identify what a technology offers, its functions and features, analogous to the 'flavours' of the ingredients. The next section discusses some multimedia principles that are important for L&D professionals to consider in designing and developing digital learning solutions. Finally, whilst there are innumerable permutations of blended learning, the last section focuses on the typical emphases in blended learning and how technology may support such blends.
Abstract:
Despite the growing popularity of participatory video as a tool for facilitating youth empowerment, the methodology and impacts of the practice are extremely understudied. This paper describes a study design created to examine youth media methodology and the ethical dilemmas that arose in its attempted implementation. Specifically, elements that added “rigor” to the study (i.e., randomization, pre- and post-measures, and an intensive interview) conflicted with the fundamental tenets of youth participation. The paper concludes with suggestions for studying participatory media methodologies that are more in line with an ethics of participation.
Abstract:
User-generated content (UGC) in the travel industry is the phenomenon studied in this research, which aims to fill the literature gap on the drivers that lead users to write reviews on TripAdvisor. The object of study is relevant from a managerial standpoint, since the motivators that drive users to co-create can shape strategies and be turned into external levers that generate value for brands through content production. From an academic perspective, the goal is to enhance the literature in the field and to fill a gap on the adherence of local culture to UGC given the structural specificities of the industry. The business impact of UGC is supported by the fact that it increases e-commerce conversion rates: research undertaken by Ye, Law, Gu and Chen (2009) states that each 10% increase in traveler review ratings boosts online booking by more than 5%. The literature review builds a theoretical framework of the concepts required to support the TripAdvisor case study methodology. Quantitative and qualitative data compose the methodological approach, gathered through literature review, desk research, an executive interview, and a user survey, and analyzed using factor and cluster analysis to group users with similar drivers towards UGC. Additionally, cultural and country-specific aspects impact user behavior. Since the hospitality industry in Brazil is concentrated in the long tail (92% of hotels in Brazil are independent (Jones Lang LaSalle, 2015, p. 7)), and lesser-known hotels take better advantage of reviews (according to Luca (2011), each one-star increase in a Yelp rating raises independent restaurant revenue by 9%, whereas reviews have no effect on chain restaurants), this dissertation sought to understand UGC in the context of travelers from São Paulo (Brazil) and adopted the case of TripAdvisor to describe the incentives that drive co-creation among the targeted travelers. The outcome is four clusters with different drivers for UGC, which makes it possible to design marketing strategies; the study also concludes that there is great potential to convert current content consumers into producers, that friends-and-family referrals remain important, and that incentives play a meaningful role. Among its conclusions, this study leads to an exploration of the concepts of positive feedback and network effects, a reinforcement of the relevance of UGC for long-tail hotels, the interdependence across content production, consumption and participation, and the role played by technology allied with behavioral analysis in making effective decisions. The adherence of UGC to the hospitality industry also underpins the formulation of the concept in the dissertation title, 'Traveler-Generated Content'.
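The factor-and-cluster pipeline described can be sketched with scikit-learn; the data below is synthetic and purely illustrative, with k = 4 mirroring the four clusters the dissertation reports.

```python
# Sketch of the survey analysis pipeline: factor analysis to reduce
# Likert-scale motivation items to latent factors, then k-means to group
# respondents with similar drivers towards UGC. Data is synthetic;
# k=4 mirrors the four clusters reported in the dissertation.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# 200 hypothetical respondents x 10 Likert items (1-5) on review motives.
responses = rng.integers(1, 6, size=(200, 10)).astype(float)

factors = FactorAnalysis(n_components=3, random_state=0)
scores = factors.fit_transform(responses)  # latent motivation factors

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
labels = kmeans.fit_predict(scores)

for c in range(4):
    print(f"cluster {c}: {np.sum(labels == c)} respondents")
```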
Abstract:
In Brazil, the 1990s were years of institutional achievement in the fields of housing and urban rights, given the incorporation into the 1988 Constitution of the principles of the social function of cities and property, the recognition of tenure rights for slum dwellers, and the direct participation of citizens in the decision-making process of urban policies. These proposals became the pillars of the Urban Reform agenda, which has penetrated the federal government apparatus since the creation of the Ministry of Cities under Lula's administration. The article evaluates the limits of and possibilities for the implementation of this agenda through the analysis of two policies proposed by the Ministry: the National Council of Cities and the campaign for Participatory Master Plans. The approach is based on the organization of the Brazilian State in terms of urban development, its relationship with the political system, and the characteristics of Brazilian democracy.
Abstract:
The general objective of this work was to study the contribution of ERP systems to the quality of managerial accounting information, through the perception of managers of large Brazilian companies. The starting premise was that companies now operate in a global and competitive environment in which information about enterprise performance and the valuation of intangible assets are necessary conditions for survival. This exploratory research is based on a sample of 37 managers of large Brazilian companies. The analysis of the data, treated by means of a qualitative method, showed that the great majority of the companies in the sample (86%) have an ERP implemented, and that this system is used in combination with other application software. Most managers were also satisfied with the information generated with respect to the Time and Content dimensions. However, with regard to the qualitative nature of the information, the ERP made some analyses possible when the Balanced Scorecard was adopted, but it did not yield information capable of estimating the investments made in intangible assets. These results suggest that, in these companies, ERP systems are not adequate to support strategic decisions.
USE AND CONSEQUENCES OF PARTICIPATORY GIS IN A MEXICAN MUNICIPALITY: APPLYING A MULTILEVEL FRAMEWORK
Abstract:
This paper seeks to understand the use and consequences of a Participatory Geographic Information System (PGIS) in a Mexican local community. A multilevel framework was applied, influenced mainly by two theoretical lenses (the structurationist view and the social shaping of technology) and structured along three dimensions (context, process and content) according to a contextualist logic. The results of our study bring two main contributions. The first is the refinement of the theoretical framework so as to better investigate the implementation and use of Information and Communication Technology (ICT) artifacts by local communities for social and environmental purposes. The second is the extension of the existing Information Systems (IS) literature on participatory practices through the identification of important conditions that help mobilize ICT as a tool for empowering local communities.
Abstract:
Video coding technologies have played a major role in the explosion of large-market digital video applications and services. In this context, the very popular MPEG-x and H.26x video coding standards adopted a predictive coding paradigm, where complex encoders exploit data redundancy and irrelevancy to 'control' much simpler decoders. This codec paradigm fits well applications and services such as digital television and video storage, where decoder complexity is critical, but does not match the requirements of emerging applications such as visual sensor networks, where encoder complexity is more critical. The Slepian-Wolf and Wyner-Ziv theorems brought the possibility to develop so-called Wyner-Ziv video codecs, following a different coding paradigm where it is the task of the decoder, and no longer the encoder, to (fully or partly) exploit the video redundancy. Theoretically, Wyner-Ziv video coding does not incur any compression performance penalty relative to the more traditional predictive coding paradigm (at least under certain conditions). In Wyner-Ziv video codecs, the so-called side information, a decoder estimate of the original frame to code, plays a critical role in the overall compression performance. For this reason, much research effort has been invested in the past decade to develop increasingly efficient side information creation methods. The main objective of this paper is to review and evaluate the available side information methods after proposing a classification taxonomy to guide the review, allowing more solid conclusions to be reached and the next relevant research challenges to be better identified. After classifying the side information creation methods into four classes, notably guess, try, hint and learn, the review of the most important techniques in each class, and the evaluation of some of them, leads to the important conclusion that side information creation methods provide better rate-distortion (RD) performance depending on the amount of temporal correlation in each video sequence. It also became clear that the best available Wyner-Ziv video coding solutions are almost systematically based on the learn approach. The best solutions are already able to systematically outperform H.264/AVC Intra, and also the H.264/AVC zero-motion standard solutions for specific types of content.
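As a toy illustration of the guess class, a decoder can estimate the missing Wyner-Ziv frame from its neighboring key frames. Real systems use motion-compensated temporal interpolation, which is considerably more elaborate than this pixel-average sketch.

```python
# Toy 'guess'-style side information: estimate a Wyner-Ziv frame as the
# pixel-wise average of its neighboring key frames. Real codecs use
# motion-compensated temporal interpolation; this only conveys the idea.
import numpy as np

def side_information(prev_key, next_key):
    """Pixel-wise average of the two adjacent decoded key frames."""
    mean = (prev_key.astype(np.uint16) + next_key.astype(np.uint16)) // 2
    return mean.astype(np.uint8)

# Synthetic 8x8 luma blocks standing in for decoded key frames.
rng = np.random.default_rng(0)
frame_t0 = rng.integers(0, 256, (8, 8), dtype=np.uint8)
frame_t2 = rng.integers(0, 256, (8, 8), dtype=np.uint8)

estimate = side_information(frame_t0, frame_t2)
# The decoder would then correct this estimate using parity bits
# received from the encoder.
print(estimate)
```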
Abstract:
With the growing number of mobile platforms available on the market and the constant increase in their computational capacity, the possibility of running applications, and in particular games with high performance requirements, has grown considerably. The video game market thus has an ever larger number of potential customers. In particular, the market for massively multiplayer online (MMO) games has become very attractive to game development companies. These games support a large number of simultaneous players who may be running the game on different platforms and be distributed across an extensive game "world". To encourage exploration of that "world", points of interest that players can explore are distributed across it intelligently. This approach demands substantial effort in planning and building those worlds, consuming time and resources during the development phase. This is a problem for game development companies, and in some cases such costs are impractical for indie teams. This thesis presents an approach to creating worlds for MMO games. Several successful MMO games are studied in order to identify common properties of their worlds. The objective is to create a flexible framework capable of generating worlds whose structures respect sets of rules defined by game designers. So that the approach presented here can be used in several different applications, two main modules were developed. The first, called rule-based-map-generator, contains the logic and operations needed to create worlds. The second, called blocker, is a wrapper around the rule-based-map-generator module that manages communication between server and clients. In summary, the overall goal is to provide a framework that eases the generation of worlds for MMO games, normally a very time-consuming process that significantly increases production cost, through a semi-automatic approach combining the benefits of procedural content generation (PCG) with manually created graphical content.
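A minimal sketch of the rule-based idea follows; the grid representation, the spacing rule and the function names are hypothetical illustrations, not the thesis's actual rule-based-map-generator API.

```python
# Minimal rule-based world-generation sketch: place points of interest
# on a grid subject to a designer-defined rule (here, minimum spacing).
# Representation, rule and names are illustrative only.
import random

def generate_world(width, height, n_poi, min_distance, seed=0):
    rng = random.Random(seed)
    pois = []
    attempts = 0
    while len(pois) < n_poi and attempts < 10_000:
        attempts += 1
        candidate = (rng.randrange(width), rng.randrange(height))
        # Designer rule: points of interest keep a minimum (Manhattan)
        # spacing so exploration is spread across the world, not clustered.
        if all(abs(candidate[0] - x) + abs(candidate[1] - y) >= min_distance
               for x, y in pois):
            pois.append(candidate)
    return pois

world = generate_world(width=64, height=64, n_poi=12, min_distance=10)
print(world)
```

In a framework of this kind, the spacing rule would be one of a set of designer-supplied constraints, and hand-made graphical content would then be attached to the generated structure.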
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.