20 results for user generated content

in Aston University Research Archive


Relevance:

100.00%

Abstract:

Over the past two years there have been several large-scale disasters (the Haitian earthquake, Australian floods, UK riots, and the Japanese earthquake) that have seen wide use of social media for disaster response, often in innovative ways. This paper provides an analysis of the ways in which social media has been used in public-to-public communication and public-to-government organisation communication. It discusses four ways in which disaster response has been changed by social media: 1. Social media appears to be displacing the traditional media as a means of communication with the public during a crisis. In particular, social media influences the way traditional media communication is received and distributed. 2. We propose that user-generated content may provide a new source of information for emergency management agencies during a disaster, but there is uncertainty with regard to the reliability and usefulness of this information. 3. There are also indications that social media provides a means for the public to self-organise in ways that were not previously possible. However, the type and usefulness of self-organisation sometimes works against efforts to mitigate the outcome of the disaster. 4. Social media seems to influence information flow during a disaster. In the past most information flowed in a single direction, from government organisation to public, but social media negates this model. The public can diffuse information with ease, but also expect interaction with government organisations rather than a simple one-way information flow. These changes have implications for the way government organisations communicate with the public during a disaster. The predominant model for explaining this form of communication, the Crisis and Emergency Risk Communication (CERC) model, was developed in 2005, before social media achieved widespread popularity.
We will present a modified form of the CERC model that integrates social media into the disaster communication cycle, and addresses the ways in which social media has changed communication between the public and government organisations during disasters.

Relevance:

80.00%

Abstract:

This chapter investigates the conflicting demands faced by web designers in the development of social e-atmospherics that aim to encourage e-value creation, thus strengthening and prolonging market planning strategies. While recent studies have shown that significant shifts are occurring concerning the importance of user-generated content delivered via social e-communication tools (e.g. blogs), these trends are also creating expectations that social and cultural cues ought to become a greater part of e-atmospherics and e-business strategies. Yet, there is growing evidence that organizations are resisting such efforts, fearing that they will lose control of their e-marketing strategy. This chapter contributes to the theory and literature on online cross-cultural understanding and the impact website designers (meso-level) can have on improving the sustainability of e-business planning, departing from recent studies that focus mainly on firms’ e-business plans (macro-level) or final consumers (micro-level). A second contribution is made with respect to online behavior regarding the advancement of technologies that facilitate the development and shaping of new social e-atmospherics that affect users’ behavior and long-term e-business strategies through the avoidance of traditional, formal decision-making processes and marketing strategy mechanisms implemented by firms. These issues have been highlighted in the literature on the co-production and co-creation of value, which few organizations have thus far integrated in their strategic and pragmatic e-business plans. Drawing upon fifteen online interviews with web designers in the USA, as key non-institutional actors at the meso-level who are shaping what future websites will be like, this chapter analyzes ways in which identifying points of resistance and conflicting demands can lead to engagement with the debate over the online co-creation of value and more sustainable future e-business planning.
A number of points of resistance to the inclusion of more e-social atmospherics are identified, and the implications for web designers’ roles and web design planning are discussed along with the limitations of the study and potential future research for e-business studies.

Relevance:

40.00%

Abstract:

This paper presents the design and results of a task-based user study, based on Information Foraging Theory, on a novel user interaction framework - uInteract - for content-based image retrieval (CBIR). The framework includes a four-factor user interaction model and an interactive interface. The user study involves three focused evaluations, 12 simulated real-life search tasks with different complexity levels, 12 comparative systems and 50 subjects. Information Foraging Theory is applied to the user study design and the quantitative data analysis. The systematic findings have not only shown how effective and easy to use the uInteract framework is, but also illustrated the value of Information Foraging Theory for interpreting user interaction with CBIR. © 2011 Springer-Verlag Berlin Heidelberg.

Relevance:

40.00%

Abstract:

The paper proposes an ISE (Information goal, Search strategy, Evaluation threshold) user classification model based on Information Foraging Theory for understanding user interaction with content-based image retrieval (CBIR). The proposed model is verified by a multiple linear regression analysis based on 50 users' interaction features collected from a task-based user study of interactive CBIR systems. To the best of our knowledge, this is the first principled user classification model in CBIR verified by a formal and systematic quantitative analysis of extensive user interaction data. Copyright 2010 ACM.
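
The verification step described above rests on ordinary multiple linear regression over users' interaction features. As a minimal sketch of that technique (the feature names and data below are illustrative, not the study's actual variables), a linear model can be fitted by solving the normal equations directly:

```python
# Sketch: multiple linear regression via the normal equations (X^T X) b = X^T y.
# Features and data are hypothetical stand-ins for "interaction features".

def fit_linear_regression(X, y):
    """Ordinary least squares. X: rows of features, each with a leading
    1.0 for the intercept. Returns the coefficient vector b."""
    n = len(X[0])
    # Build X^T X and X^T y.
    xtx = [[sum(row[i] * row[j] for row in X) for j in range(n)]
           for i in range(n)]
    xty = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[pivot] = xtx[pivot], xtx[col]
        xty[col], xty[pivot] = xty[pivot], xty[col]
        for r in range(col + 1, n):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, n):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    beta = [0.0] * n
    for r in range(n - 1, -1, -1):
        beta[r] = (xty[r] - sum(xtx[r][c] * beta[c]
                                for c in range(r + 1, n))) / xtx[r][r]
    return beta

# Toy data: [intercept, query_iterations, positive_feedback_ratio],
# generated exactly from time = 10 + 12*iterations - 5*ratio.
X = [[1, 2, 0.9], [1, 5, 0.4], [1, 3, 0.7], [1, 8, 0.2], [1, 4, 0.6]]
y = [29.5, 68.0, 42.5, 105.0, 55.0]
b0, b1, b2 = fit_linear_regression(X, y)
# Recovers the generating coefficients (10, 12, -5) up to rounding.
```

Since the toy responses are exactly linear in the features, the fit recovers the generating coefficients; on real interaction data the residuals and significance of each coefficient would carry the verification.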

Relevance:

40.00%

Abstract:

In order to bridge the “semantic gap”, a number of relevance feedback (RF) mechanisms have been applied to content-based image retrieval (CBIR). However, current RF techniques in most existing CBIR systems still lack satisfactory user interaction, although some work has been done to improve the interaction as well as the search accuracy. In this paper, we propose a four-factor user interaction model and investigate its effects on CBIR by an empirical evaluation. Whilst the model was developed for our research purposes, we believe the model could be adapted to any content-based search system.

Relevance:

40.00%

Abstract:

This paper presents an interactive content-based image retrieval framework, uInteract, for delivering a novel four-factor user interaction model visually. The four-factor user interaction model is an interactive relevance feedback mechanism that we proposed, aiming to improve the interaction between users and the CBIR system and, in turn, users' overall search experience. In this paper, we present how the framework is developed to deliver the four-factor user interaction model, and how the visual interface is designed to support user interaction activities. From our preliminary user evaluation results on the ease of use and usefulness of the proposed framework, we have learnt what users like about the framework and the aspects we could improve in future studies. Whilst the framework is developed for our research purposes, we believe the functionalities could be adapted to any content-based image search framework.

Relevance:

40.00%

Abstract:

The realization of the Semantic Web is constrained by a knowledge acquisition bottleneck, i.e. the problem of how to add RDF mark-up to the millions of ordinary web pages that already exist. Information Extraction (IE) has been proposed as a solution to the annotation bottleneck. In the task-based evaluation reported here, we compared the performance of users without access to annotation, users working with annotations which had been produced from manually constructed knowledge bases, and users working with annotations augmented using IE. We looked at retrieval performance, overlap between retrieved items and the two sets of annotations, and usage of annotation options. Automatically generated annotations were found to add value to the browsing experience in the scenario investigated. Copyright 2005 ACM.

Relevance:

30.00%

Abstract:

The contributions in this research are split into three distinct, but related, areas. The focus of the work is on improving the efficiency of video content distribution in networks that are liable to packet loss, such as the Internet. Initially, the benefits and limitations of content distribution using Forward Error Correction (FEC) in conjunction with the Transmission Control Protocol (TCP) are presented. Since added FEC can be used to reduce the number of retransmissions, the requirement for TCP to deal with any losses is greatly reduced. When real-time applications are needed, delay must be kept to a minimum and retransmissions are undesirable. A balance must therefore be struck between additional bandwidth and delays due to retransmissions. This is followed by the proposal of a hybrid transport, specifically for H.264 encoded video, as a compromise between the delay-prone TCP and the loss-prone UDP. It is argued that the playback quality at the receiver often need not be 100% perfect, provided a certain level is assured. Reliable TCP is used to transmit and guarantee delivery of the most important packets. The delay associated with the proposal is measured, and the potential for use as an alternative to the conventional methods of transporting video by either TCP or UDP alone is demonstrated. Finally, a new objective measurement is investigated for assessing the playback quality of video transported using TCP. A new metric is defined to characterise the quality of playback in terms of its continuity. Using packet traces generated from real TCP connections in a lossy environment, the playback of a video can be simulated, whilst monitoring buffer behaviour to calculate pause intensity values. Subjective tests are conducted to verify the effectiveness of the metric introduced and show that the objective and subjective scores are closely correlated.
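
The buffer-simulation idea behind the continuity metric can be sketched simply: replay a packet/frame arrival trace against a constant-rate player and measure stall time. This is an illustrative simplification; the thesis's exact pause-intensity formula may differ, and the frame timings below are synthetic, not real TCP traces.

```python
# Sketch: simulate playback from an arrival trace and measure stalls.
# "Pause intensity" is approximated here as paused time / playback time,
# an assumption for illustration rather than the thesis's exact metric.

def pause_intensity(arrival_times, frame_duration, startup_frames=5):
    """arrival_times[i] = time frame i is fully received (seconds, sorted).
    Playback consumes one frame every frame_duration seconds, stalling
    whenever the next frame has not yet arrived."""
    clock = arrival_times[startup_frames - 1]   # initial buffering ends
    paused = 0.0
    for t in arrival_times:
        if t > clock:                 # frame not yet here: playback stalls
            paused += t - clock
            clock = t
        clock += frame_duration       # play the frame
    total = clock - arrival_times[startup_frames - 1]
    return paused / total

# Smooth arrivals (10 fps): no stalls after startup buffering.
smooth = [0.1 * i for i in range(1, 51)]
# Lossy connection modelled as a 2-second delivery gap mid-stream.
bursty = smooth[:25] + [t + 2.0 for t in smooth[25:]]

smooth_pi = pause_intensity(smooth, 0.1)   # ~0: continuous playback
bursty_pi = pause_intensity(bursty, 0.1)   # > 0: stall degrades continuity
```

The same loop, driven by traces captured from real TCP connections, is the kind of machinery the metric above relies on; the subjective tests then calibrate how much pause intensity viewers actually tolerate.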

Relevance:

30.00%

Abstract:

A variety of content-based image retrieval systems exist which enable users to perform image retrieval based on colour content - i.e., colour-based image retrieval. For the production of media for use in television and film, colour-based image retrieval is useful for retrieving specifically coloured animations, graphics or videos from large databases (by comparing user queries to the colour content of extracted key frames). It is also useful to graphic artists creating realistic computer-generated imagery (CGI). Unfortunately, current methods for evaluating colour-based image retrieval systems have two major drawbacks. Firstly, the relevance of images retrieved during the task cannot be measured reliably. Secondly, existing methods do not account for the creative design activity known as reflection-in-action. Consequently, the development and application of novel and potentially more effective colour-based image retrieval approaches, better supporting the large number of users creating media for use in television and film productions, is not possible as their efficacy cannot be reliably measured and compared to existing technologies. As a solution to the problem, this paper introduces the Mosaic Test. The Mosaic Test is a user-based evaluation approach in which participants complete an image mosaic of a predetermined target image, using the colour-based image retrieval system that is being evaluated. In this paper, we introduce the Mosaic Test and report on a user evaluation. The findings of the study reveal that the Mosaic Test overcomes the two major drawbacks associated with existing evaluation methods and does not require expert participants. © 2012 Springer Science+Business Media, LLC.
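
The core retrieval step mentioned above, comparing a query's colour content against candidate images, can be sketched with coarse RGB histograms. Histogram intersection is one common similarity choice; the systems evaluated in the paper may use other colour descriptors, and the "images" below are toy pixel lists.

```python
# Sketch: colour-based ranking via joint RGB histograms and
# histogram intersection. An illustrative baseline, not the paper's
# specific retrieval systems.

def colour_histogram(pixels, bins=4):
    """pixels: list of (r, g, b) tuples with channel values 0-255.
    Returns a normalised joint histogram with bins**3 cells."""
    hist = [0.0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1.0
    n = float(len(pixels))
    return [h / n for h in hist]

def intersection(h1, h2):
    """Histogram intersection: 1.0 means identical colour distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# Toy "key frames": mostly-red vs mostly-blue pixel sets.
red_img = [(220, 30, 30)] * 90 + [(30, 30, 220)] * 10
blue_img = [(30, 30, 220)] * 90 + [(220, 30, 30)] * 10
query = [(200, 40, 40)] * 100   # a red query

q_hist = colour_histogram(query)
ranked = sorted([("red_img", red_img), ("blue_img", blue_img)],
                key=lambda kv: intersection(q_hist, colour_histogram(kv[1])),
                reverse=True)
# The red image ranks first for the red query.
```

In a Mosaic Test setting, each tile request plays the role of such a colour query, and the evaluated system's ranking determines which database images the participant sees.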

Relevance:

30.00%

Abstract:

The projected decline in fossil fuel availability, environmental concerns, and security of supply attract increased interest in renewable energy derived from biomass. Fast pyrolysis is a possible thermochemical conversion route for the production of bio-oil, with promising advantages. The purpose of the experiments reported in this thesis was to extend our understanding of the fast pyrolysis process for straw, perennial grasses and hardwoods, and the implications of selective pyrolysis, crop harvest and storage on the thermal decomposition products. To this end, characterisation and laboratory-scale fast pyrolysis were conducted on the available feedstocks, and their products were compared. The variation in light and medium volatile decomposition products was investigated at different pyrolysis temperatures and heating rates, and a comparison of fast and slow pyrolysis products was conducted. Feedstocks from different harvests, storage durations and locations were characterised and compared in terms of their fuel and chemical properties. A range of analytical (e.g. Py-GC-MS and TGA) and processing equipment (0.3 kg/h and 1.0 kg/h fast pyrolysis reactors and 0.15 kg slow pyrolysis reactor) was used. Findings show that the high bio-oil and char heating value, and low water content of willow short rotation coppice (SRC) make this crop attractive for fast pyrolysis processing compared to the other investigated feedstocks in this project. From the analytical sequential investigation of willow SRC, it was found that the volatile product distribution can be tailored to achieve a better final product, by a variation of the heating rate and temperature. Time of harvest was most influential on the fuel properties of miscanthus; overall the late harvest produced the best fuel properties (high HHV, low moisture content, high volatile content, low ash content), and storage of the feedstock reduced the moisture and acid content.

Relevance:

30.00%

Abstract:

A large number of studies have been devoted to modeling the contents and interactions between users on Twitter. In this paper, we propose a method inspired by Social Role Theory (SRT), which assumes that a user behaves differently in different roles in the generation process of Twitter content. We consider the two most distinctive social roles on Twitter: originator and propagator, who respectively post original messages and retweet or forward the messages of others. In addition, we also consider role-specific social interactions, especially implicit interactions between users who share some common interests. All the above elements are integrated into a novel regularized topic model. We evaluate the proposed method on real Twitter data. The results show that our method is more effective than the existing ones which do not distinguish social roles. Copyright 2013 ACM.
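
Before role-specific modelling of the kind described above, a user's messages must be partitioned by role: content they originated versus content they propagated. A minimal preprocessing sketch follows, using the "RT @" convention as the retweet marker; the paper's actual pipeline (and Twitter's retweet metadata) may differ.

```python
# Sketch: split one user's tweets into originator vs propagator content,
# the input partition a role-aware topic model would consume.
# The "RT @" prefix test is an assumption for illustration.

def split_by_role(tweets):
    """Return (originator_content, propagator_content) for one user."""
    original, retweeted = [], []
    for text in tweets:
        if text.startswith("RT @"):
            retweeted.append(text)   # propagated: forwarded from others
        else:
            original.append(text)    # originated: the user's own post
    return original, retweeted

tweets = [
    "Exploring topic models for short text",
    "RT @alice: great tutorial on Gibbs sampling",
    "New results on role-specific priors",
]
orig_posts, prop_posts = split_by_role(tweets)
```

Each partition then gets its own role-driven topic distribution in the regularized model, rather than pooling all of a user's text together.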

Relevance:

30.00%

Abstract:

The Teallach project has adapted model-based user-interface development techniques to the systematic creation of user-interfaces for object-oriented database applications. Model-based approaches aim to provide designers with a more principled approach to user-interface development using a variety of underlying models, and tools which manipulate these models. Here we present the results of the Teallach project, describing the tools developed and the flexible design method supported. Distinctive features of the Teallach system include provision of database-specific constructs, comprehensive facilities for relating the different models, and support for a flexible design method in which models can be constructed and related by designers in different orders and in different ways, to suit their particular design rationales. The system then creates the desired user-interface as an independent, fully functional Java application, with automatically generated help facilities.

Relevance:

30.00%

Abstract:

Learning user interests from online social networks helps to better understand user behaviors and provides useful guidance for designing user-centric applications. Apart from analyzing users' online content, it is also important to consider users' social connections in the social Web. Graph regularization methods have been widely used in various text mining tasks, as they can leverage the graph structure information extracted from data. Previous graph regularization methods operate under the cluster assumption: nearby nodes are more similar, and nodes on the same structure (typically referred to as a cluster or a manifold) are likely to be similar. We argue that learning user interests from complex, sparse, and dynamic social networks should instead be based on the link structure assumption, under which node similarities are evaluated based on the local link structures instead of explicit links between two nodes. We propose a regularization framework based on the relation bipartite graph, which can be constructed from any type of relations. Using Twitter as our case study, we evaluate our proposed framework on social networks built from retweet relations. Both quantitative and qualitative experiments show that our proposed method outperforms a few competitive baselines in learning user interests over a set of predefined topics. It also gives superior results compared to the baselines on retweet prediction and topical authority identification. © 2014 ACM.
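
The link structure assumption above can be illustrated concretely: two users may never link to each other explicitly, yet their local link structures (here, the sets of accounts they retweet) can still make them similar. Jaccard overlap is an illustrative choice, not the paper's exact bipartite-graph formulation, and the accounts below are hypothetical.

```python
# Sketch: similarity from local link structure rather than explicit links.
# Users are compared via the overlap of their retweet targets.

def jaccard(a, b):
    """Jaccard overlap of two sets: 1.0 identical, 0.0 disjoint."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Retweet targets per user (hypothetical data). Note u1 and u2 never
# retweet each other, so no explicit link exists between them.
retweets = {
    "u1": {"news", "nlp_lab", "ml_conf"},
    "u2": {"nlp_lab", "ml_conf", "datasci"},
    "u3": {"cooking", "travel"},
}

sim_12 = jaccard(retweets["u1"], retweets["u2"])  # shared structure
sim_13 = jaccard(retweets["u1"], retweets["u3"])  # disjoint structure
```

A regularizer built on such structural similarities pulls the interest profiles of u1 and u2 together even without a direct edge, which is what the cluster assumption alone would miss on sparse networks.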

Relevance:

30.00%

Abstract:

In this paper, we explore the idea of social role theory (SRT) and propose a novel regularized topic model which incorporates SRT into the generative process of social media content. We assume that a user can play multiple social roles, and each social role serves to fulfil different duties and is associated with a role-driven distribution over latent topics. In particular, we focus on social roles corresponding to the most common social activities on social networks. Our model is instantiated on microblogs (Twitter) and community question-answering (cQA) sites (Yahoo! Answers), where social roles on Twitter include "originators" and "propagators", and roles on cQA are "askers" and "answerers". Both explicit and implicit interactions between users are taken into account and modeled as regularization factors. To evaluate the performance of our proposed method, we have conducted extensive experiments on two Twitter datasets and two cQA datasets. Furthermore, we also consider multi-role modeling for scientific papers, where an author's research expertise area is considered as a social role. A novel application of detecting users' research interests through topical keyword labeling based on the results of our multi-role model has been presented. The evaluation results have shown the feasibility and effectiveness of our model.