824 results for Irony and deception detection in social media
Abstract:
The use of online data is becoming increasingly essential for generating insight in today's research environment. This reflects the much wider range of data available online and the key role that social media now plays in interpersonal communication. However, the process of gaining permission to use social media data for research purposes raises a number of significant issues of compatibility with professional ethics guidelines. This paper critically explores the application of existing informed consent policies to social media research and compares them with the form of consent gained by the social networks themselves, which we label 'uninformed consent'. We argue that, as currently constructed, informed consent carries assumptions about the nature of privacy that are not consistent with the way consumers behave in an online environment. Uninformed consent, on the other hand, relies on asymmetric relationships that are unlikely to succeed in an environment based on co-creation of value. The paper highlights the ethical ambiguity created by current approaches to gaining customer consent and proposes a new conceptual framework based on participative consent that allows greater alignment between consumer privacy and ethical concerns.
Abstract:
DIANA is a coordinated project involving the Natural Language Engineering and Pattern Recognition group (ELiRF) of the Universitat Politècnica de València and the Centre de Llenguatge i Computació group (CLiC) of the Universitat de Barcelona. It is a project of the R&D programme (TIN2012-38603) funded by the Ministerio de Economía y Competitividad. Paolo Rosso coordinates the DIANA project and leads the DIANA-Applications subproject, and M. Antònia Martí leads the DIANA-Constructions subproject.
Abstract:
While organizations strive to leverage the vast information generated daily from social media platforms, and decision makers are keen to identify and exploit its value, the quality of this information remains uncertain. Past research on information quality criteria and evaluation issues in social media is largely disparate, incomparable and lacking any common theoretical basis. To address this gap, this study adapts existing guidelines and exemplars of construct conceptualization in information systems research to deductively define information quality and related criteria in the social media context. Building on a notion of information derived from semiotic theory, this paper suggests a general conceptualization of information quality in the social media context that can be used in future research to develop more context-specific conceptual models.
Abstract:
We introduce ReDites, a system for real-time event detection, tracking, monitoring and visualisation. It is designed to assist Information Analysts in understanding and exploring complex events as they unfold in the world. Events are automatically detected from the Twitter stream. Those that are categorised as security-relevant are then tracked, geolocated, summarised and visualised for the end-user. Furthermore, the system tracks changes in emotions over events, signalling possible flashpoints or abatement. We demonstrate the capabilities of ReDites using an extended use case from the September 2013 Westgate shooting incident. Through an evaluation of system latencies, we also show that enriched events are made available for users to explore within seconds of the event occurring.
Abstract:
Content marketing refers to a marketing format that involves creating and sharing media and publishing content in order to acquire customers. It focuses not on selling, but on communicating with customers and prospects. Today, brands are increasingly becoming publishers in order to keep up with their competition and, more importantly, to retain their base of fans and followers. Content marketing enables companies to engage consumers by publishing engaging, value-filled content. This study investigates whether there is a link between brand engagement and Facebook content marketing practices in the e-commerce industry in Brazil. Based on the literature review, this study defines brand engagement on Facebook as the number of "likes", "comments" and "shares" that a company receives from its fans. These actions reflect the popularity of a brand post and lead to engagement. The author defines a scale in which levels of content marketing practice are developed in order to analyse the Facebook brand posts of an e-commerce company in Brazil. The findings reveal that the most important criterion for the company is the one regarding the picture of the post, which examines whether the photo content is appealing to the audience. Moreover, the higher the level of these criteria in a post, the greater the number of likes, comments and shares the post receives. The time at which a post is published does not play a significant role in determining customer engagement, and the most important factor within a publication is reaching the maximum level on the content marketing scale.
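The engagement measure defined in this abstract (likes + comments + shares per post) can be sketched as a simple count-based score. The field names, sample numbers and optional weighting below are illustrative assumptions, not the author's scale:

```python
# Illustrative engagement score following the abstract's definition:
# likes + comments + shares. Field names and weights are assumptions.

def engagement(post, weights=(1, 1, 1)):
    """Sum a post's interaction counts, optionally weighted."""
    w_like, w_comment, w_share = weights
    return (w_like * post["likes"]
            + w_comment * post["comments"]
            + w_share * post["shares"])

posts = [
    {"id": 1, "likes": 120, "comments": 14, "shares": 9},
    {"id": 2, "likes": 45, "comments": 3, "shares": 1},
]

# Rank posts by engagement to compare content-marketing levels.
ranked = sorted(posts, key=engagement, reverse=True)
```

A weighted variant (e.g. counting shares more heavily than likes) would be a natural extension when comparing posts across content-marketing levels.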
Abstract:
Social media has revolutionised the way in which consumers relate to each other and to brands. The opinions published in social media have an influence on purchase decisions as important as advertising campaigns. Consequently, marketers are dedicating increasing effort and investment to obtaining indicators that measure brand health from the digital content generated by consumers. Given the unstructured nature of social media content, the technology used to process it often implements Artificial Intelligence techniques, such as natural language processing, machine learning and semantic analysis algorithms. This thesis contributes to the state of the art with a model for structuring and integrating the information posted on social media, and a number of techniques whose objectives are the identification of consumers and their psychographic and socio-demographic segmentation. The consumer identification technique is based on the fingerprint of the devices consumers use to browse the Web and is tolerant of the changes that frequently occur in that fingerprint. The psychographic segmentation techniques described infer a consumer's position in the purchase funnel and make it possible to classify opinions according to a series of marketing attributes. Finally, the socio-demographic segmentation techniques infer consumers' place of residence and gender.
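Change-tolerant fingerprint matching of the kind described above can be sketched as attribute-overlap similarity: two fingerprints are treated as the same device if most attributes still agree after a small change such as a browser update. The attribute names and the threshold are illustrative assumptions, not the thesis's technique:

```python
# Minimal sketch of change-tolerant device-fingerprint matching.
# Attributes and the 0.75 threshold are illustrative assumptions.

def similarity(fp_a, fp_b):
    """Fraction of shared attributes with equal values."""
    keys = set(fp_a) & set(fp_b)
    if not keys:
        return 0.0
    return sum(fp_a[k] == fp_b[k] for k in keys) / len(keys)

def same_device(fp_a, fp_b, threshold=0.75):
    """Match if most attributes agree, so a single change
    (e.g. a browser upgrade) does not break identification."""
    return similarity(fp_a, fp_b) >= threshold

old = {"ua": "Firefox/102", "tz": "UTC+1", "lang": "es-ES", "screen": "1920x1080"}
new = {"ua": "Firefox/103", "tz": "UTC+1", "lang": "es-ES", "screen": "1920x1080"}
```

Here the user-agent changed but the other three attributes still match, so `same_device(old, new)` holds.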
Abstract:
Social streams have proven to be the most up-to-date and inclusive source of information on current events. In this paper we propose a novel probabilistic modelling framework, called the violence detection model (VDM), which enables the identification of text containing violent content and the extraction of violence-related topics from social media data. The proposed VDM model does not require any labelled corpora for training; instead, it only needs the incorporation of word prior knowledge capturing whether a word indicates violence or not. We propose a novel approach to deriving word prior knowledge using a relative entropy measurement of words, based on the intuition that low-entropy words are indicative of semantically coherent topics and therefore more informative, while high-entropy words are used across more diverse topics and are therefore less informative. Our proposed VDM model has been evaluated on the TREC Microblog 2011 dataset to identify topics related to violence. Experimental results show that deriving word priors using our proposed relative entropy method is more effective than the widely used information gain method. Moreover, VDM achieves higher violence classification results and produces more coherent violence-related topics compared to a few competitive baselines.
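The entropy intuition behind the word priors can be illustrated on a toy corpus: compute each word's entropy over the documents it appears in, and treat low-entropy (concentrated) words as more informative candidates. The corpus below is invented for illustration and this is not the paper's exact prior-derivation procedure:

```python
import math

def word_entropy(word, docs):
    """Entropy of a word's count distribution over documents:
    low entropy = concentrated usage (more informative),
    high entropy = diffuse usage (less informative)."""
    counts = [doc.count(word) for doc in docs]
    total = sum(counts)
    if total == 0:
        return 0.0
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

docs = [
    ["riot", "police", "riot", "clash"],
    ["match", "goal", "police"],
    ["riot", "clash", "police"],
]

# "riot" concentrates in violence-related documents, while "police"
# spreads evenly across all three and so carries higher entropy.
h_riot = word_entropy("riot", docs)
h_police = word_entropy("police", docs)
```

A prior-derivation step would then rank words by entropy and keep the low-entropy ones as violence-indicative (or non-indicative) seed words.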
Abstract:
Topic classification (TC) of short text messages offers an effective and fast way to reveal events happening around the world, ranging from those related to disaster (e.g. Hurricane Sandy) to those related to violence (e.g. the Egyptian revolution). Previous approaches to TC have mostly focused on exploiting individual knowledge sources (KS) (e.g. DBpedia or Freebase) without considering the graph structures that surround concepts present in KSs when detecting the topics of Tweets. In this paper we introduce a novel approach for harnessing such graph structures from multiple linked KSs, by: (i) building a conceptual representation of the KSs, (ii) leveraging contextual information about concepts by exploiting semantic concept graphs, and (iii) providing a principled way of combining KSs. Experiments evaluating our TC classifier in the context of Violence detection (VD) and Emergency Responses (ER) show promising results that significantly outperform various baseline models, including an approach using a single KS without linked data and an approach using only Tweets. Copyright 2013 ACM.
Abstract:
Uncertainty text detection is important for many social-media-based applications, since more and more users rely on social media platforms (e.g., Twitter, Facebook, etc.) as an information source and produce or derive interpretations based on them. However, existing uncertainty cues are ineffective in the social media context because of its specific characteristics. In this paper, we propose a variant annotation scheme for uncertainty identification and construct the first uncertainty corpus based on tweets. We then conduct experiments on the generated tweet corpus to study the effectiveness of different types of features for uncertainty text identification. © 2013 Association for Computational Linguistics.
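A minimal cue-lexicon baseline for this task can be sketched as follows; the cue list and the single-rule classifier are illustrative assumptions, far simpler than the annotation scheme and feature types studied in the paper:

```python
# Toy cue-lexicon baseline for uncertainty detection in tweets.
# The cue list and matching rule are illustrative assumptions.

UNCERTAINTY_CUES = {"maybe", "might", "perhaps", "possibly",
                    "allegedly", "rumor", "reportedly", "iirc"}

def is_uncertain(tweet):
    """Flag a tweet as uncertain if it contains any cue word."""
    tokens = {t.strip(".,!?").lower() for t in tweet.split()}
    return bool(tokens & UNCERTAINTY_CUES)
```

Such a lexicon baseline is exactly the kind of approach the abstract argues is ineffective on social media text, which motivates the tweet-specific corpus and feature study.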
Abstract:
Master's dissertation, Language Sciences, Faculdade de Ciências Humanas e Sociais, Universidade do Algarve, 2014
Abstract:
This paper outlines a method for studying online activity using both qualitative and quantitative methods: topical network analysis. A topical network refers to "the collection of sites commenting on a particular event or issue, and the links between them" (Highfield, Kirchhoff, & Nicolai, 2011, p. 341). The approach complements the analysis of large datasets, enabling the examination and comparison of different discussions as a means of improving our understanding of the uses of social media and other forms of online communication. Developed for an analysis of political blogging, the method also has wider applications for other social media websites such as Twitter.
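A topical network of the kind defined above can be represented as a directed graph of sites and the links between them. The site names and links below are invented for illustration, and in-degree is used here only as a crude stand-in for the qualitative analysis the paper describes:

```python
from collections import defaultdict

# Toy topical network: sites commenting on one issue, and who links to whom.
# Site names and links are invented for illustration.
links = [
    ("blog-a", "news-1"), ("blog-a", "blog-b"),
    ("blog-b", "news-1"), ("blog-c", "blog-a"),
]

outgoing = defaultdict(set)
incoming = defaultdict(set)
for src, dst in links:
    outgoing[src].add(dst)
    incoming[dst].add(src)

# In-degree as a crude centrality: which site anchors the discussion?
in_degree = {site: len(srcs) for site, srcs in incoming.items()}
hub = max(in_degree, key=in_degree.get)
```

Comparing such graphs across events or issues is what allows the qualitative examination of different discussions mentioned in the abstract.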
Abstract:
Social Media (SM) is increasingly being integrated with business information in decision making. Unique characteristics of social media (e.g. wide accessibility, permanence, global audience, recentness, and ease of use) raise new issues with information quality (IQ), quite different from traditional considerations of IQ in information systems (IS) evaluation. This paper presents a preliminary conceptual model of information quality in social media (IQnSM) derived through directed content analysis and employing characteristics of analytic theory in the study protocol. Based on the notion of 'fitness for use', IQnSM is highly use- and user-centric and is defined as "the degree to which information is suitable for doing a specified task by a specific user, in a certain context". IQnSM is operationalised hierarchically, formed by three dimensions (18 measures): intrinsic quality, contextual quality and representational quality. A research plan for empirically validating the model is proposed.
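The hierarchical structure described above (measures rolling up into three dimensions, which roll up into one score) can be sketched as a two-level aggregation. The concrete measure names, scores and the equal weighting are assumptions for illustration; the paper's 18 measures are not reproduced here:

```python
# Sketch of a hierarchical information-quality score: measures roll up
# into the three dimensions named in the abstract, then into one score.
# Measure names, values and equal weighting are illustrative assumptions.

dimensions = {
    "intrinsic":        {"accuracy": 0.9, "objectivity": 0.7},
    "contextual":       {"relevance": 0.8, "timeliness": 0.6},
    "representational": {"clarity": 0.8, "consistency": 1.0},
}

def dimension_score(measures):
    """Average the measure scores within one dimension."""
    return sum(measures.values()) / len(measures)

def iq_score(dims):
    """Average the dimension scores into an overall IQ score."""
    return sum(dimension_score(m) for m in dims.values()) / len(dims)
```

Because the model is use- and user-centric, a validated version would likely weight measures per task and user context rather than averaging uniformly.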
Abstract:
With the spread of social media websites on the internet, and the huge number of users participating in and generating vast amounts of content on these websites, the need for personalisation has increased dramatically to the point of becoming a necessity. One of the major issues in personalisation is building user profiles, which depend on many elements, such as the data used, the application domain they aim to serve, the representation method and the construction methodology. Recently, this area of research has been a focus for many researchers, and hence the number of proposed methods is growing very quickly. This survey discusses the available user modelling techniques for social media websites, highlights the weaknesses and strengths of these methods, and provides a vision for future work on user modelling in social media websites.
Abstract:
For the first decade of its existence, the concept of citizen journalism has described an approach which was seen as a broadening of the participant base in journalistic processes, but still involved only a comparatively small subset of overall society – for the most part, citizen journalists were news enthusiasts and “political junkies” (Coleman, 2006) who, as some exasperated professional journalists put it, “wouldn’t get a job at a real newspaper” (The Australian, 2007), but nonetheless followed many of the same journalistic principles. The investment – if not of money, then at least of time and effort – involved in setting up a blog or participating in a citizen journalism Website remained substantial enough to prevent the majority of Internet users from engaging in citizen journalist activities to any significant extent; what emerged in the form of news blogs and citizen journalism sites was a new online elite which for some time challenged the hegemony of the existing journalistic elite, but gradually also merged with it. The mass adoption of next-generation social media platforms such as Facebook and Twitter, however, has led to the emergence of a new wave of quasi-journalistic user activities which now much more closely resemble the “random acts of journalism” which JD Lasica envisaged in 2003. Social media are not exclusively or even predominantly used for citizen journalism; instead, citizen journalism is now simply a by-product of user communities engaging in exchanges about the topics which interest them, or tracking emerging stories and events as they happen. 
Such platforms – and especially Twitter with its system of ad hoc hashtags that enable the rapid exchange of information about issues of interest – provide spaces for users to come together to “work the story” through a process of collaborative gatewatching (Bruns, 2005), content curation, and information evaluation which takes place in real time and brings together everyday users, domain experts, journalists, and potentially even the subjects of the story themselves. Compared to the spaces of news blogs and citizen journalism sites, but also of conventional online news Websites, which are controlled by their respective operators and inherently position user engagement as a secondary activity to content publication, these social media spaces are centred around user interaction, providing a third-party space in which everyday as well as institutional users, laypeople as well as experts converge without being able to control the exchange. Drawing on a number of recent examples, this article will argue that this results in a new dynamic of interaction and enables the emergence of a more broadly-based, decentralised, second wave of citizen engagement in journalistic processes.