25 results for Tweet contextualization
Abstract:
Building on innovative frameworks for analysing and visualising the tweet data available from Twitter, developed by the authors, this paper examines the patterns of tweeting activity that occurred during and after the February 2011 Christchurch earthquake. Local and global responses to the disaster were organised around the pre-existing hashtag #eqnz, which averaged some 100 tweets per minute in the hours following the earthquake. The paper identifies the key contributors to the #eqnz network and shows the key themes of their messages. Emerging from this analysis is a more detailed understanding of Twitter and other social media as key elements in the overall ecology of the media forms used for crisis communication. Such uses point to the importance of social media both as a tool for affected communities to self-organise their disaster response and recovery activities, and as a tool for emergency management services to disseminate key information and receive updates from local communities.
Abstract:
Big data is big news in almost every sector, including crisis communication. However, not everyone has access to big data, and even those who do often lack the tools needed to analyze and cross-reference such large data sets. This paper therefore looks at patterns in the small data sets we are able to collect with our current tools, to understand whether we can find actionable information in what we already have. We analyzed 164,390 tweets collected during the 2011 earthquake to find out what type of location-specific information people mention in their tweets and when they mention it. Based on our analysis, we find that even a small data set, with far less data than a big data set, can be useful for quickly identifying priority disaster-specific areas.
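The kind of small-data analysis this abstract describes, scanning tweet text for place names and bucketing mentions over time, can be emulated in a few lines. This is a minimal sketch under stated assumptions: the gazetteer of place names, the input format of `(timestamp, text)` pairs, and hourly bucketing are all illustrative choices, not the authors' actual method.

```python
from collections import Counter
from datetime import datetime

# Hypothetical gazetteer of place names to look for in tweet text.
PLACES = ["christchurch", "lyttelton", "sumner", "cathedral square"]

def location_mentions(tweets):
    """Count place-name mentions per hour from (timestamp, text) pairs."""
    counts = Counter()
    for ts, text in tweets:
        hour = ts.replace(minute=0, second=0, microsecond=0)
        lowered = text.lower()
        for place in PLACES:
            if place in lowered:
                counts[(hour, place)] += 1
    return counts

# Illustrative input, not real data from the study.
tweets = [
    (datetime(2011, 2, 22, 12, 55), "Major damage near Cathedral Square #eqnz"),
    (datetime(2011, 2, 22, 13, 10), "Roads blocked in Lyttelton, avoid the tunnel"),
    (datetime(2011, 2, 22, 13, 40), "Lyttelton port closed #eqnz"),
]
counts = location_mentions(tweets)
```

Sorting the resulting counts by hour would surface which locations dominate the conversation at each stage of the event, which is the "priority areas" signal the abstract refers to.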
Abstract:
The ability to identify and assess user engagement with transmedia productions is vital to the success of individual projects and the sustainability of this mode of media production as a whole. It is essential that industry players have access to tools and methodologies that offer the most complete and accurate picture of how audiences/users engage with their productions and which assets generate the most valuable returns on investment. Drawing upon research conducted with Hoodlum Entertainment, a Brisbane-based transmedia producer, this project involved an initial assessment of the way engagement tends to be understood, why standard web analytics tools are ill-suited to measuring it, how a customised tool could offer solutions, and why this question of measuring engagement is so vital to the future of transmedia as a sustainable industry. Working with data provided by Hoodlum Entertainment and Foxtel Marketing, the outcome of the study was a prototype for a custom data visualisation tool that allowed access, manipulation and presentation of user engagement data, both historic and predictive. The prototyped interfaces demonstrate how the visualisation tool would collect and organise data specific to multiplatform projects by aggregating data across a number of platform reporting tools. Such a tool is designed to encompass not only platforms developed by the transmedia producer but also sites developed by fans. This visualisation tool accounted for multiplatform experience projects whose top level comprises people, platforms and content. People include characters, actors, audience, distributors and creators. Platforms include television, Facebook and other relevant social networks, literature, cinema and other media that might be included in the multiplatform experience. Content refers to discrete media texts employed within the platform, such as a tweet, a YouTube video, a Facebook post, an email, a television episode, etc.
Core content is produced by the creators of multiplatform experiences to advance the narrative, while complementary content generated by audience members offers further contributions to the experience. Equally important is the timing with which the components of the experience are introduced and how they interact with and impact upon each other. By being able to combine, filter and sort these elements in multiple ways, we can better understand the value of certain components of a project. It also offers insights into the relationship between the timing of the release of components and user activity associated with them, which further highlights the efficacy (or, indeed, failure) of assets as catalysts for engagement. In collaboration with Hoodlum we have developed a number of design scenarios experimenting with the ways in which data can be visualised and manipulated to tell a more refined story about the value of user engagement with certain project components and activities. This experimentation will serve as the basis for future research.
Abstract:
Big Data presents many challenges related to volume, whether one is interested in studying past datasets or, even more problematically, attempting to work with live streams of data. The most obvious challenge, in a ‘noisy’ environment such as contemporary social media, is to collect the pertinent information, be that information for a specific study, tweets which can inform emergency services or other responders to an ongoing crisis, or signals that give an advantage to those involved in prediction markets. Often, such a process is iterative, with keywords and hashtags changing with the passage of time, and both collection and analytic methodologies need to be continually adapted to respond to this changing information. While many of the data sets collected and analyzed are pre-formed, that is, built around a particular keyword, hashtag, or set of authors, they still contain a large volume of information, much of which is unnecessary for the current purpose and/or potentially useful for future projects. Accordingly, this panel considers methods for separating and combining data to optimize big data research and report findings to stakeholders. The first paper considers possible coding mechanisms for incoming tweets during a crisis, taking a large stream of incoming tweets and selecting which of those need to be immediately placed in front of responders, for manual filtering and possible action. The paper suggests two solutions for this: content analysis and user profiling. In the former, aspects of each tweet are assigned a score to assess its likely relationship to the topic at hand and the urgency of the information, whilst the latter attempts to identify those users who are either serving as amplifiers of information or are known as authoritative sources.
Through these techniques, the information contained in a large dataset could be filtered down to match the expected capacity of emergency responders, and knowledge of the core keywords or hashtags relating to the current event is constantly refined for future data collection. The second paper is also concerned with identifying significant tweets, but in this case tweets relevant to a particular prediction market: tennis betting. As increasing numbers of professional sportsmen and sportswomen create Twitter accounts to communicate with their fans, information is being shared regarding injuries, form and emotions which has the potential to impact on future results. As has already been demonstrated with leading US sports, such information is extremely valuable. Tennis, as with American Football (NFL) and Baseball (MLB), has paid subscription services which manually filter incoming news sources, including tweets, for information valuable to gamblers, gambling operators, and fantasy sports players. However, whilst such services remain niche operations, much of the value of the information is lost by the time it reaches one of them. The paper thus considers how information could be filtered from Twitter user lists and hashtag or keyword monitoring, assessing the value of the source, the information, and the prediction markets to which it may relate. The third paper examines methods for collecting Twitter data and following changes in an ongoing, dynamic social movement, such as the Occupy Wall Street movement. It involves the development of technical infrastructure to collect the tweets and make them available for exploration and analysis. A strategy to respond to changes in the social movement is also required, or the resulting tweets will only reflect the discussions and strategies the movement used at the time the keyword list was created; in this sense, keyword creation is part strategy and part art.
In this paper we describe strategies for the creation of a social media archive, specifically of tweets related to the Occupy Wall Street movement, and methods for continuing to adapt data collection strategies as the movement’s presence on Twitter changes over time. We also discuss the opportunities and methods for extracting smaller slices of data from an archive of social media data to support a multitude of research projects in multiple fields of study. The common theme amongst these papers is that of constructing a data set, filtering it for a specific purpose, and then using the resulting information to aid in future data collection. The intention is that, through the papers presented and the subsequent discussion, the panel will inform the wider research community not only on the objectives and limitations of data collection, live analytics, and filtering, but also on current and in-development methodologies that could be adopted by those working with such datasets, and how such approaches could be customized depending on the project stakeholders.
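The content-analysis coding described for the first paper, scoring incoming tweets on topical relevance and urgency so that only the highest-priority ones reach responders, might be sketched as follows. The keyword lists, weights, and threshold here are illustrative assumptions, not the panel's actual coding scheme; a real system would refine them iteratively as the event's vocabulary shifts.

```python
# Illustrative keyword weights; not the scheme used by the panel.
TOPIC_TERMS = {"#eqnz": 2.0, "earthquake": 1.5, "aftershock": 1.5}
URGENCY_TERMS = {"trapped": 3.0, "injured": 3.0, "help": 2.0, "collapsed": 2.0}

def score_tweet(text):
    """Return (topic_score, urgency_score) for one tweet."""
    lowered = text.lower()
    topic = sum(w for term, w in TOPIC_TERMS.items() if term in lowered)
    urgency = sum(w for term, w in URGENCY_TERMS.items() if term in lowered)
    return topic, urgency

def triage(tweets, threshold=4.0):
    """Keep only tweets scored highly enough to place before responders."""
    return [t for t in tweets if sum(score_tweet(t)) >= threshold]

# Illustrative stream, not real data.
stream = [
    "Person trapped in collapsed building on Madras St #eqnz",
    "Thinking of everyone in chch today",
    "Aftershock just hit, no damage here",
]
urgent = triage(stream)
```

User profiling, the panel's second suggested solution, would add a per-author weight (amplifier or authoritative source) to the same combined score; the threshold is what lets the output volume be matched to responder capacity.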
Abstract:
The occasional ArtsHub article asking spectators to show respect for the stage by switching all devices off notwithstanding, in the last few years we have witnessed a clear push across most theatre companies to make more use of social media as a means by which spectators might respond to a performance. Mainstage companies, as well as contemporary companies, are asking us to turn on, tune in and tweet our impressions of a show to them, to each other, and to the masses – sometimes during the show, sometimes after the show, and sometimes without having seen the show. In this paper, I investigate the relationship between theatre, spectatorship and social media, tracing the transition from print platforms, in which expert critics were responsible for determining audience response, to today’s online platforms, in which everybody is responsible for debating responses. Is the tendency to invite spectators to comment via social media before, during, or after a show the advance in audience engagement, entertainment and empowerment many hail it to be? Is it a return to a more democratised past in which theatres were active, interactive and at times downright rowdy, and the word of the published critic had yet to take over from the word of the average punter? Is it delivering distinctive shifts in theatre and theatrical meaning-making? Or is it simply a good way to get spectators to write about a work they are no longer watching – an advance in the marketing of the work rather than in the active, interactive aesthetic of the work? In this paper, I consider what the performance of spectatorship on social media tells us about theatre, spectatorship and meaning-making. I use initial findings about the distinctive dramaturgies, conflicts and powerplays that characterise debates about performance and performance culture on social media to reflect on the potentially productive relationship between theatre, social media, spectatorship, and meaning-making.
I suggest that the distinctive patterns of engagement displayed on social media platforms – including, in many cases, remediation rather than translation, adaptation or transformation of prior engagement practices – have a lot to tell us about how spectators and spectator groups negotiate for the power to provide the dominant interpretation of a work.
Abstract:
Solving indeterminate algebraic equations in integers is a classic topic in the mathematics curricula across grades. At the undergraduate level, the study of solutions of non-linear equations of this kind can be motivated by the use of technology. This article shows how the unity of geometric contextualization and spreadsheet-based amplification of this topic can provide a discovery experience for prospective secondary teachers and information technology students. Such experience can be extended to include a transition from a computationally driven conjecturing to a formal proof based on a number of simple yet useful techniques.
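The computationally driven conjecturing the article describes can be emulated outside a spreadsheet as well. As the abstract does not fix a particular equation, the choice of Pythagorean triples (x² + y² = z²) below is only an assumed example of a non-linear indeterminate equation; the brute-force tabulation mirrors what a spreadsheet amplification of the topic would produce.

```python
def integer_solutions(n):
    """Brute-force search for x^2 + y^2 = z^2 with 1 <= x <= y <= z <= n,
    mirroring the kind of tabulation a spreadsheet would support."""
    return [(x, y, z)
            for x in range(1, n + 1)
            for y in range(x, n + 1)
            for z in range(y, n + 1)
            if x * x + y * y == z * z]

triples = integer_solutions(20)
```

From a table like this, students can conjecture patterns (for instance, that every triple has an even leg) and then make the transition the article describes, from computational evidence to a formal proof.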
Abstract:
This chapter focuses on teacher education for high-poverty schools in Australia and suggests that a contextualization of poverty is an important step in identifying solutions to the persistent gaps in how teachers are prepared to teach in schools where they can make a lasting difference. Understanding how poverty looks different between and within countries provides a reminder of the complexities of disadvantage. Similarities exist within OECD countries; however, differences are also evident. This is something that initial teacher education (ITE) solutions need to take into account. While Australia has a history of initiatives designed to address teacher education for high-poverty schools, this chapter provides a particular snapshot of Australia’s National Exceptional Teachers for Disadvantaged Schools (NETDS) program, a large-scale national partnership between universities and Departments of Education, which is partially supported by philanthropic funding.
Abstract:
We investigate the extent and nature of the use of Twitter for financial reporting by ASX-listed companies. We consider 199 financial-information-related tweets from 14 ASX-listed companies’ Twitter accounts. A thematic analysis of these tweets shows ‘Earnings’ and ‘Operational Performance’ to be the most discussed financial reporting themes. Further, a comparison across industry sectors reveals that listed companies from various industries show different patterns of financial reporting on Twitter. The examination of tweet sentiment also indicates a reporting bias, as listed companies are more willing to disclose positive financial information in their tweets.
Developing standardized methods to assess cost of healthy and unhealthy (current) diets in Australia
Abstract:
Unhealthy diets contribute at least 14% to Australia's disease burden and are driven by ‘obesogenic’ food environments. Compliance with dietary recommendations is particularly poor amongst disadvantaged populations, including low socioeconomic groups, those living in rural/remote areas, and Aboriginal and Torres Strait Islander peoples. The perception that healthy foods are expensive is a key barrier to healthy choices and a major determinant of diet-related health inequities. Available state/regional/local data (limited and non-comparable) suggest that, despite basic healthy foods not incurring GST, the cost of healthy food is higher and has increased more rapidly than that of unhealthy food over the last 15 years in Australia. However, there were no nationally standardised tools or protocols to benchmark, compare or monitor food prices and affordability in Australia. Globally, we are leading work to develop and test approaches to assess the price differential of healthy and less-healthy (current) diets under the food price module of the International Network for Food and Obesity/non-communicable diseases (NCDs) Research, Monitoring and Action Support (INFORMAS). This presentation describes contextualization of the INFORMAS approach to develop standardised Australian tools, survey protocols, and data collection and analysis systems. The ‘healthy diet basket’ was based on the Australian Foundation Diet.[1] The ‘current diet basket’, and the specific items included in each basket, were based on recent national dietary survey data.[2] Data collection methods were piloted. The final tools and protocols were then applied to measure the price and affordability of healthy and less healthy (current) diets of different household groups in diverse communities across the nation. We have compared results for different geographical locations/population subgroups in Australia and assessed these against international INFORMAS benchmarks.
The results inform the development of policy and practice, including those relevant to mooted changes to the GST base, to promote nutrition and healthy weight and prevent chronic disease in Australia.
Abstract:
This paper provides a framework for understanding Twitter as a historical source. We address digital humanities scholars to enable the transfer of concepts from traditional source criticism to new media formats, and to encourage the preservation of Twitter as a cultural artifact. Twitter has established itself as a key social media platform that plays an important role in public, real-time conversation. Twitter is also unique in that its content is being archived by a public institution (the Library of Congress). In this paper we show that we must assume that much of the contextual information beyond the pure tweet texts is already lost, and we propose additional objectives for preservation.