264 results for social media data
Abstract:
Cities accumulate and distribute vast sets of digital information. Many decision-making and planning processes in councils, local governments and organisations are based on both real-time and historical data. Until recently, only a small, carefully selected subset of this information has been released to the public – usually for specific purposes (e.g. train timetables or the release of planning applications through websites, to name just a few). This situation is, however, changing rapidly. Regulatory frameworks, such as the Freedom of Information legislation in the US, the UK, the European Union and many other countries, guarantee public access to data held by the state. One of the results of this legislation and of changing attitudes towards open data has been the widespread release of public information as part of recent Government 2.0 initiatives. This includes the creation of public data catalogues such as data.gov (US), data.gov.uk (UK) and data.gov.au (Australia) at federal government level, and datasf.org (San Francisco) and data.london.gov.uk (London) at municipal level. The release of this data has opened up the possibility of a wide range of future applications and services which are now the subject of intensified research efforts. Previous research endeavours have explored the creation of specialised tools to aid decision-making by urban citizens, councils and other stakeholders (Calabrese, Kloeckl & Ratti, 2008; Paulos, Honicky & Hooker, 2009). While these initiatives represent an important step towards open data, they too often result in mere collections of data repositories. Proprietary database formats and the lack of an open application programming interface (API) limit the full potential that could be achieved by allowing these data sets to be cross-queried. Our research, presented in this paper, looks beyond the pure release of data. It is concerned with three essential questions: First, how can data from different sources be integrated into a consistent framework and made accessible? Second, how can ordinary citizens be supported in easily composing data from different sources in order to address their specific problems? Third, what interfaces make it easy for citizens to interact with data in an urban environment, and how can such data be accessed and collected?
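Purely as an illustration of the cross-querying idea raised in this abstract (not part of the original study), the sketch below joins two openly published data sets once they are exposed through machine-readable APIs; the endpoint URLs and field names are hypothetical placeholders.

```python
import requests
import pandas as pd

# Hypothetical open-data endpoints -- real catalogues such as data.gov.uk
# expose comparable JSON APIs, but these URLs and fields are invented.
PLANNING_URL = "https://example-council.gov/api/planning-applications.json"
TRANSPORT_URL = "https://example-council.gov/api/train-timetables.json"

def fetch(url):
    """Download a JSON list of records and load it into a DataFrame."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return pd.DataFrame(response.json())

planning = fetch(PLANNING_URL)      # assumed columns: suburb, application_id, status
transport = fetch(TRANSPORT_URL)    # assumed columns: suburb, station, services_per_hour

# Cross-query: planning activity joined with transport provision per suburb.
combined = planning.merge(transport, on="suburb", how="inner")
print(combined.groupby("suburb").size().sort_values(ascending=False).head())
```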
Abstract:
During the course of several natural disasters in recent years, Twitter has been found to play an important role as an additional medium for many-to-many crisis communication. Emergency services are successfully using Twitter to inform the public about current developments, and are increasingly also attempting to source first-hand situational information from Twitter feeds (such as relevant hashtags). The further study of the uses of Twitter during natural disasters relies, however, on the development of flexible and reliable research infrastructure for tracking and analysing Twitter feeds at scale and in close to real time. This article outlines two approaches to the development of such infrastructure: one which builds on the readily available open source platform yourTwapperKeeper to provide a low-cost, simple, and basic solution; and one which establishes a more powerful and flexible framework by drawing on highly scalable, state-of-the-art technology.
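As an illustration of the kind of capture infrastructure discussed here (not the authors' actual implementation), the sketch below streams JSON records matching a hashtag from a placeholder endpoint and appends them to a line-delimited archive for later offline analysis; the endpoint URL, tracking parameter and token are all hypothetical.

```python
import json
import requests

# Hypothetical streaming endpoint and credentials -- placeholders only;
# a real deployment would use the platform's documented streaming API.
STREAM_URL = "https://stream.example.com/tweets?track=%23qldfloods"
TOKEN = "REPLACE_WITH_ACCESS_TOKEN"

def archive_stream(outfile="qldfloods.jsonl"):
    """Append each streamed tweet as one JSON line in a local archive."""
    headers = {"Authorization": f"Bearer {TOKEN}"}
    with requests.get(STREAM_URL, headers=headers, stream=True, timeout=90) as resp:
        resp.raise_for_status()
        with open(outfile, "a", encoding="utf-8") as fh:
            for line in resp.iter_lines():
                if not line:          # skip keep-alive newlines
                    continue
                tweet = json.loads(line)
                fh.write(json.dumps(tweet) + "\n")

if __name__ == "__main__":
    archive_stream()
```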
Abstract:
The use of the internet for political purposes is not new; however, the introduction of social media tools has opened new avenues for political activists. In an era where social media has been credited as playing a critical role in the success of revolutions (Earl & Kimport, 2011; Papic & Noonan, 2011; Wooley, Limperos & Beth, 2010), governments, law enforcement and intelligence agencies need to develop a deeper understanding of the broader capabilities of this emerging social and political environment. This can be achieved by increasing their online presence and through the application of proactive social media strategies to identify and manage potential threats. Analysis of the current literature shows a gap in the research regarding the connection between the theoretical understanding and practical implications of social media when exploited by political activists, and the efficacy of existing strategies designed to manage this growing challenge. This paper explores these issues by looking specifically at the use of three popular social media tools: Facebook, Twitter and YouTube. Through the examination of recent political protests in Iran, the UK and Egypt from 2009 to 2011, and of research into the use of these three social media tools by political groups, the authors discuss inherent weaknesses in online political movements and strategies for law enforcement and intelligence agencies to monitor these activities.
Abstract:
Our paper approaches Twitter through the lens of “platform politics” (Gillespie, 2010), focusing in particular on controversies around user data access, ownership, and control. We characterise different actors in the Twitter data ecosystem: private and institutional end users of Twitter, commercial data resellers such as Gnip and DataSift, data scientists, and finally Twitter, Inc. itself; and describe their conflicting interests. We furthermore study Twitter’s Terms of Service and application programming interface (API) as material instantiations of regulatory instruments used by the platform provider, and argue for a stronger promotion of data rights and literacy to strengthen the position of end users.
Abstract:
With the explosion of Web 2.0 applications such as blogs, social and professional networks, and various other types of social media, the rich online information and various new sources of knowledge flood users and hence pose a great challenge in terms of information overload. It is critical to use intelligent agent software systems to assist users in finding the right information from an abundance of Web data. Recommender systems can help users deal with the information overload problem efficiently by suggesting items (e.g., information and products) that match users’ personal interests. Recommender technology has been successfully employed in many applications, such as recommending films, music and books. The purpose of this report is to give an overview of existing technologies for building personalized recommender systems in social networking environments, and to propose a research direction for addressing the user profiling and cold start problems by exploiting user-generated content newly available in Web 2.0.
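As a hedged illustration of the direction proposed here (not the report's own system), the sketch below builds a minimal content-based profile from a new user's posts and ranks candidate items by textual similarity, one common way to soften the cold-start problem; the sample texts and item names are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented example data: posts written by a new user, and candidate items.
user_posts = ["loved the new sci-fi novel about terraforming mars",
              "looking for documentaries on space exploration"]
items = {"Film: The Martian": "astronaut stranded on mars survival science",
         "Book: Cooking Basics": "recipes kitchen techniques for beginners",
         "Series: Cosmos": "documentary space exploration universe science"}

# Build one TF-IDF space over the user's own content plus the item descriptions.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([" ".join(user_posts)] + list(items.values()))

# Rank items by similarity to the aggregated user profile (row 0).
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
for name, score in sorted(zip(items, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {name}")
```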
Abstract:
Young adults represent the largest group of first-time donors to the Australian Red Cross Blood Service, but they are also the least loyal group and often do not return after their first donation. At the same time, many young people use the internet and various forms of social media on a daily basis. Web- and mobile-based technological practices and communication patterns change the way that young people interact with one another, with their families, and with their communities. Combining these two points of departure, this study seeks to identify best practices for employing mobile apps and social media in order to enhance the loyalty rates of young blood donors. The findings reported in this paper are based on a qualitative approach, presenting a nuanced understanding of the different factors that motivate young people to donate blood in the first place, as well as the obstacles or issues that prevent them from returning. The paper discusses work in progress with a view to informing the development of interactive prototypes trialling three categories of features: personal services (such as scheduling); social media (such as sharing the donation experience with friends to raise awareness); and data visualisations (such as local blood inventory levels). We discuss our translation of research findings into design implications.
Abstract:
Now as in earlier periods of acute change in the media environment, new disciplinary articulations are producing new methods for media and communication research. At the same time, established media and communication studies methods are being recombined, reconfigured, and remediated alongside their objects of study. This special issue of JOBEM seeks to explore the conceptual, political, and practical aspects of emerging methods for digital media research. It does so at the conjuncture of a number of important contemporary trends: the rise of a ‘‘third wave’’ of the Digital Humanities and the ‘‘computational turn’’ (Berry, 2011) associated with natively digital objects and the methods for studying them; the apparently ubiquitous Big Data paradigm, with its various manifestations across academia, business, and government, which brings with it a rapidly increasing interest in social media communication and online ‘‘behavior’’ from the ‘‘hard’’ sciences; along with the multisited, embodied, and emplaced nature of everyday digital media practice.
Abstract:
Organizations increasingly make use of social media in order to compete for customer awareness and to improve the quality of their goods and services. Multiple techniques of social media analysis are already in use. Nevertheless, theoretical underpinnings and a sound research agenda are still unavailable in this field at the present time. In order to contribute to setting up such an agenda, we introduce digital social signal processing (DSSP) as a new research stream in IS that requires multi-faceted investigations. Our DSSP concept is founded upon a set of four sequential activities: sensing digital social signals that are emitted by individuals on social media; decoding online data of social media in order to reconstruct digital social signals; matching the signals with consumers’ life events; and configuring individualized goods and service offerings tailored to the individual needs of customers. We further contribute by tying together loose ends of different research areas, in order to frame DSSP as a field for further investigation. We conclude by developing a research agenda.
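Purely to illustrate the four sequential DSSP activities as a processing pipeline (the paper defines a concept, not an implementation), the following sketch chains the four steps over an invented social media post; all keywords, offers and rules are hypothetical.

```python
# Hypothetical sketch of the four DSSP activities as a simple pipeline.
LIFE_EVENT_KEYWORDS = {"engaged": "engagement", "new job": "career change",
                       "moving house": "relocation"}
OFFERS = {"engagement": "wedding planning service",
          "career change": "professional wardrobe package",
          "relocation": "removalist and utilities bundle"}

def sense(posts):
    """Sensing: collect raw digital social signals emitted on social media."""
    return [p.lower() for p in posts]

def decode(raw_posts):
    """Decoding: reconstruct signals, here by spotting life-event keywords."""
    return [kw for p in raw_posts for kw in LIFE_EVENT_KEYWORDS if kw in p]

def match(signals):
    """Matching: map decoded signals onto consumer life events."""
    return [LIFE_EVENT_KEYWORDS[s] for s in signals]

def configure(life_events):
    """Configuring: tailor individualized offerings to the detected events."""
    return [OFFERS[e] for e in life_events]

posts = ["We just got engaged!", "Also moving house next month."]
print(configure(match(decode(sense(posts)))))
# -> ['wedding planning service', 'removalist and utilities bundle']
```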
Abstract:
Collaboration between faculty and librarians is an important topic of discussion and research among academic librarians. These partnerships between faculty and librarians are vital for enabling students to become lifelong learners through their information literacy education. This research developed an understanding of academic collaborators by analyzing a community college faculty's teaching social networks. A teaching social network, an original term generated in this study, is comprised of the communications that influence faculty when they design and deliver their courses. The communication may be formal (e.g., through scholarly journals and professional development activities) or informal (e.g., through personal communication) across the elements of their network. Examples of the elements of a teaching social network may be department faculty, administration, librarians, professional development, and students. This research asked 'What is the nature of faculty's teaching social networks and what are the implications for librarians?' This study moves forward the existing research on collaboration, information literacy, and social network analysis. It provides both faculty and librarians with added insight into their existing and potential relationships. This research was undertaken using mixed methods: social network analysis was the quantitative data collection methodology and interviews were the qualitative technique. For the social network analysis data, a survey was sent to full-time faculty at Las Positas College, a community college in California. The survey gathered the data and described the teaching social networks of faculty with respect to their teaching methods and the content they taught. Semi-structured interviews were conducted following the survey with a sub-set of survey respondents to understand why specific elements were included in their teaching social networks and to learn of ways for librarians to become an integral part of these networks. The majority of the faculty respondents were moderately influenced by the elements of their network, although with respect to content taught the majority were only weakly influenced by these elements. The elements with the most influence on both teaching methods and content taught were students, department faculty, professional development, and former graduate professors and coursework. The elements with the least influence on both aspects were public or academic librarians, and social media. The most popular roles for the elements were conversations about teaching, sharing ideas, tips for teaching, insights into teaching, suggestions for ways of teaching, and how to engage students. Librarians weakly influenced faculty in both their teaching methods and the content they taught. The motivating factors for collaboration with librarians were that students learned how to research, students' research projects improved, faculty saved time by having librarians provide the instruction to students, and faculty built strong working relationships with librarians. The challenges of collaborating with librarians were inadequate teaching techniques used when librarians taught research orientations and a lack of time. Ways librarians can become more integral in faculty's teaching social networks included: more workshops for faculty, more proactive interaction with faculty, and more one-on-one training sessions for faculty.
Some of the recommendations for librarians from this study were to develop a strong rapport with faculty, to build information literacy services from the point of view of the faculty rather than from the librarian's perspective, to use staff development funding to attend conferences and workshops that improve their teaching, to develop more training sessions for faculty, to increase marketing of the librarians' instructional services, and to seek grant opportunities to increase funding for the library. In addition, librarians and faculty should review their definitions of information literacy and move from a skills-based interpretation to a learning process.
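As an illustrative aside (not part of the study), a weighted directed graph like the one sketched below is one way such survey data on influence could be analysed; the node names echo the network elements described above, while the weights are invented.

```python
import networkx as nx

# Invented influence weights (1 = weak, 2 = moderate, 3 = strong) from
# network elements to a faculty member's teaching practice.
edges = [("students", "faculty", 3), ("department faculty", "faculty", 3),
         ("professional development", "faculty", 2),
         ("librarians", "faculty", 1), ("social media", "faculty", 1)]

graph = nx.DiGraph()
graph.add_weighted_edges_from(edges)

# Rank elements by the total influence they exert (weighted out-degree).
influence = sorted(graph.out_degree(weight="weight"),
                   key=lambda pair: pair[1], reverse=True)
for element, weight in influence:
    if element != "faculty":
        print(f"{element}: {weight}")
```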
Abstract:
Large communities built around social media on the Internet offer an opportunity to augment analytical customer relationship management (CRM) strategies. The purpose of this paper is to provide direction to advance the conceptual design of business intelligence (BI) systems for implementing CRM strategies. After introducing social CRM and social BI as emerging fields of research, the authors match CRM strategies with a re-engineered conceptual data model of Facebook in order to illustrate the strategic value of these data. Subsequently, the authors design a multi-dimensional data model for social BI and demonstrate its applicability by designing management reports in a retail scenario. Building on the service blueprinting framework, the authors propose a structured research agenda for the emerging field of social BI.
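To make the idea of a multi-dimensional social BI report more concrete, here is a minimal, hypothetical pandas sketch (not the authors' data model) that aggregates invented Facebook-style engagement records along product and time dimensions, the kind of management report a retail scenario might call for.

```python
import pandas as pd

# Invented engagement records in a retail scenario: each row is one user
# interaction with a post about a product category.
records = pd.DataFrame({
    "month": ["2013-01", "2013-01", "2013-02", "2013-02", "2013-02"],
    "product_category": ["shoes", "shoes", "shoes", "apparel", "apparel"],
    "interaction": ["like", "comment", "like", "share", "comment"],
    "sentiment": [0.8, -0.2, 0.6, 0.9, 0.1],
})

# A simple two-dimensional report: interaction counts and mean sentiment
# per product category and month (the dimensions of this toy model).
report = (records
          .groupby(["product_category", "month"])
          .agg(interactions=("interaction", "count"),
               avg_sentiment=("sentiment", "mean")))
print(report)
```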
Abstract:
The ability to identify and assess user engagement with transmedia productions is vital to the success of individual projects and the sustainability of this mode of media production as a whole. It is essential that industry players have access to tools and methodologies that offer the most complete and accurate picture of how audiences/users engage with their productions and which assets generate the most valuable returns on investment. Drawing upon research conducted with Hoodlum Entertainment, a Brisbane-based transmedia producer, this project involved an initial assessment of the way engagement tends to be understood, why standard web analytics tools are ill-suited to measuring it, how a customised tool could offer solutions, and why this question of measuring engagement is so vital to the future of transmedia as a sustainable industry. Working with data provided by Hoodlum Entertainment and Foxtel Marketing, the outcome of the study was a prototype for a custom data visualisation tool that allowed access, manipulation and presentation of user engagement data, both historic and predictive. The prototyped interfaces demonstrate how the visualisation tool would collect and organise data specific to multiplatform projects by aggregating data across a number of platform reporting tools. Such a tool is designed to encompass not only platforms developed by the transmedia producer but also sites developed by fans. This visualisation tool accounted for multiplatform experience projects whose top level is comprised of people, platforms and content. People include characters, actors, audience, distributors and creators. Platforms include television, Facebook and other relevant social networks, literature, cinema and other media that might be included in the multiplatform experience. Content refers to discrete media texts employed within the platform, such as a tweet, a YouTube video, a Facebook post, an email, a television episode, etc. Core content is produced by the creators of the multiplatform experience to advance the narrative, while complementary content generated by audience members offers further contributions to the experience. Equally important is the timing with which the components of the experience are introduced and how they interact with and impact upon each other. By combining, filtering and sorting these elements in multiple ways, we can better understand the value of certain components of a project. This also offers insights into the relationship between the timing of the release of components and the user activity associated with them, which further highlights the efficacy (or, indeed, failure) of assets as catalysts for engagement. In collaboration with Hoodlum we have developed a number of design scenarios experimenting with the ways in which data can be visualised and manipulated to tell a more refined story about the value of user engagement with certain project components and activities. This experimentation will serve as the basis for future research.
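As a hypothetical sketch of the people/platforms/content data model described above (not Hoodlum's actual tool), the snippet below defines minimal asset records and aggregates invented engagement events per content asset alongside its release time.

```python
from dataclasses import dataclass
from datetime import datetime
from collections import Counter

@dataclass
class ContentItem:
    """A discrete media text within the multiplatform experience."""
    title: str
    platform: str      # e.g. television, Twitter, Facebook
    released: datetime
    core: bool         # core (creator-made) vs complementary (audience-made)

# Invented assets and engagement events (asset title, event timestamp).
assets = [ContentItem("Episode 1", "television", datetime(2013, 3, 1, 19, 30), True),
          ContentItem("Teaser tweet", "Twitter", datetime(2013, 2, 28, 9, 0), True),
          ContentItem("Fan recap post", "Facebook", datetime(2013, 3, 2, 10, 0), False)]
events = [("Episode 1", datetime(2013, 3, 1, 20, 5)),
          ("Teaser tweet", datetime(2013, 2, 28, 9, 30)),
          ("Episode 1", datetime(2013, 3, 1, 20, 40)),
          ("Fan recap post", datetime(2013, 3, 2, 11, 15))]

# Engagement per asset, paired with its release time -- the kind of summary
# a visualisation tool could plot to relate timing and activity.
counts = Counter(title for title, _ in events)
for asset in assets:
    print(f"{asset.title} ({asset.platform}, released {asset.released:%Y-%m-%d}): "
          f"{counts[asset.title]} interactions")
```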
Abstract:
Public health research consistently demonstrates the salience of neighbourhood as a determinant of both health-related behaviours and outcomes across the human life course. This paper will report on the findings from a mixed-methods Brisbane-based study that explores how mothers with primary school children from both high and low socioeconomic suburbs use the local urban environment for the purpose of physical activity. Firstly, we demonstrate findings from an innovative methodology using the geographic information systems (GIS) embedded in social media platforms on mobile phones to track locations, resource-use, distances travelled, and modes of transport of the families in real-time; and secondly, we report on qualitative data that provides insight into reasons for differential use of the environment by both groups. Spatial/mapping and statistical data showed that while the mothers from both groups demonstrated similar daily routines, the mothers from the high SEP suburb engaged in increased levels of physical activity, travelled less frequently and less distance by car, and walked more for transport. The qualitative data revealed differences in the psychosocial processes and characteristics of the households and neighbourhoods of the respective groups, with mothers in the lower SEP suburb reporting more stress, higher conflict, and lower quality relationships with neighbours.
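For illustration only (the study used GIS features embedded in social media platforms rather than custom code), the sketch below shows one standard way distances travelled could be derived from logged GPS coordinates using the haversine formula; the coordinates are invented.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two GPS fixes."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# Invented trace of location check-ins over one morning (latitude, longitude).
trace = [(-27.4698, 153.0251), (-27.4745, 153.0300), (-27.4810, 153.0386)]

total = sum(haversine_km(*a, *b) for a, b in zip(trace, trace[1:]))
print(f"Distance travelled: {total:.2f} km")
```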
Abstract:
With the growing size and variety of social media files on the web, it is becoming critical to efficiently organize them into clusters for further processing. This paper presents a novel scalable constrained document clustering method that harnesses the power of search engines capable of dealing with large text data. Instead of calculating the distance between a document and all of the clusters’ centroids, a neighborhood of best cluster candidates is chosen using a document ranking scheme. To make the method faster and less memory-dependent, in-memory and in-database processing are combined in a semi-incremental manner. The method has been extensively tested in the social event detection application. Empirical analysis shows that the proposed method is efficient in both computation and memory usage while producing notable accuracy.
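As a loose, hypothetical sketch of the candidate-selection idea (not the paper's actual method), the code below ranks cluster centroids by similarity for each incoming document and assigns it within a small top-k neighbourhood rather than comparing against every centroid; the sample texts and seed centroids are invented.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = ["flood relief concert announced in brisbane",
             "brisbane concert lineup revealed for flood relief",
             "tech conference keynote on machine learning",
             "keynote speakers announced for tech conference"]

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(documents)

# Invented seeding: the first document of each "event" becomes a cluster centroid.
centroids = doc_vectors[[0, 2]].toarray()

def assign(doc_vector, centroids, k=1):
    """Rank centroids by similarity and assign within the top-k neighbourhood."""
    sims = cosine_similarity(doc_vector, centroids).ravel()
    candidates = np.argsort(sims)[::-1][:k]   # neighbourhood of best candidates
    return int(candidates[0]), sims[candidates[0]]

for i in range(1, len(documents)):
    cluster, sim = assign(doc_vectors[i], centroids)
    print(f"doc {i} -> cluster {cluster} (similarity {sim:.2f})")
```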
Abstract:
It might still sound strange to dedicate an entire journal issue exclusively to a single internet platform. But it is not the company Twitter Inc. that draws our attention; this issue is not about a platform and its features and services. It is about its users and the ways in which they interact with one another via the platform, about the situations that motivate people to share their thoughts publicly, using Twitter as a means to reach out to one another. And it is about the digital traces people leave behind when interacting with Twitter, and most of all about the ways in which these traces – as a new type of research data – can also enable new types of research questions and insights.
Abstract:
Metaphors are a common instrument of human cognition, activated when seeking to make sense of novel and abstract phenomena. In this article we assess some of the values and assumptions encoded in the framing of the term big data, drawing on the framework of conceptual metaphor. We first discuss the terms data and big data and the meanings historically attached to them by different usage communities and then proceed with a discourse analysis of Internet news items about big data. We conclude by characterizing two recurrent framings of the concept: as a natural force to be controlled and as a resource to be consumed.