80 results for Big Pizza


Relevance:

20.00%

Publisher:

Abstract:

Big Data presents many challenges related to volume, whether one is interested in studying past datasets or, even more problematically, attempting to work with live streams of data. The most obvious challenge, in a ‘noisy’ environment such as contemporary social media, is to collect the pertinent information, be it information for a specific study, tweets that can inform emergency services or other responders to an ongoing crisis, or data that can give an advantage to those involved in prediction markets. Often, such a process is iterative, with keywords and hashtags changing with the passage of time, and both collection and analytic methodologies need to be continually adapted in response. While many of the datasets collected and analyzed are preformed (that is, built around a particular keyword, hashtag, or set of authors), they still contain a large volume of information, much of which is unnecessary for the current purpose and/or potentially useful for future projects. Accordingly, this panel considers methods for separating and combining data to optimize big data research and to report findings to stakeholders.

The first paper considers possible coding mechanisms for incoming tweets during a crisis: taking a large stream of incoming tweets and selecting those which need to be placed immediately in front of responders for manual filtering and possible action. The paper suggests two solutions, content analysis and user profiling. In the former, aspects of a tweet are assigned a score to assess its likely relationship to the topic at hand and the urgency of the information, whilst the latter attempts to identify those users who either serve as amplifiers of information or are known as authoritative sources. Through these techniques, the information contained in a large dataset can be filtered down to match the expected capacity of emergency responders, and knowledge of the core keywords or hashtags relating to the current event is constantly refined for future data collection.

The second paper is also concerned with identifying significant tweets, in this case tweets relevant to a particular prediction market: tennis betting. As increasing numbers of professional sportsmen and women create Twitter accounts to communicate with their fans, information is being shared regarding injuries, form, and emotions which has the potential to affect future results. As has already been demonstrated with leading US sports, such information is extremely valuable. Tennis, like American Football (NFL) and Baseball (MLB), has paid subscription services which manually filter incoming news sources, including tweets, for information valuable to gamblers, gambling operators, and fantasy sports players. However, whilst such services remain niche operations, much of the value of the information is lost by the time it reaches one of them. The paper thus considers how information could be filtered from Twitter user lists and hashtag or keyword monitoring, assessing the value of the source, the information, and the prediction markets to which it may relate.

The third paper examines methods for collecting Twitter data and following changes in an ongoing, dynamic social movement, such as the Occupy Wall Street movement. It involves the development of technical infrastructure to collect tweets and make them available for exploration and analysis. A strategy for responding to changes in the social movement is also required, or the resulting tweets will only reflect the discussions and strategies the movement used at the time the keyword list was created; in this sense, keyword creation is part strategy and part art. In this paper we describe strategies for creating a social media archive, specifically of tweets related to the Occupy Wall Street movement, and methods for continually adapting data collection as the movement’s presence on Twitter changes over time. We also discuss opportunities and methods for extracting smaller slices of data from an archive of social media data to support a multitude of research projects in multiple fields of study.

The common theme amongst these papers is that of constructing a dataset, filtering it for a specific purpose, and then using the resulting information to aid future data collection. The intention is that, through the papers presented and the subsequent discussion, the panel will inform the wider research community not only about the objectives and limitations of data collection, live analytics, and filtering, but also about current and in-development methodologies that could be adopted by those working with such datasets, and how such approaches could be customized depending on the project stakeholders.
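To make the triage idea concrete, the sketch below combines a content-analysis score (weighted urgency keywords) with a user-profiling score (known authoritative accounts plus an amplification bonus). The keyword weights, account names, and threshold are hypothetical illustrations, not the coding scheme the panel proposes.

```python
# Hypothetical keyword weights and authority list -- illustrative only,
# not the panel's actual coding mechanism.
URGENCY_TERMS = {"trapped": 3.0, "injured": 3.0, "fire": 2.0, "help": 1.5}
AUTHORITATIVE_USERS = {"qpsmedia", "redcross"}

def triage_score(text: str, author: str, followers: int) -> float:
    """Combine content analysis (urgency terms) with user profiling
    (authority and amplification) into a single triage score."""
    words = (w.strip("#.,!?").lower() for w in text.split())
    content = sum(URGENCY_TERMS.get(w, 0.0) for w in words)
    authority = 5.0 if author.lower() in AUTHORITATIVE_USERS else 0.0
    amplification = min(followers / 10_000, 2.0)  # cap the amplifier bonus
    return content + authority + amplification

# Tweets scoring above a threshold are queued for manual review by responders.
tweets = [("Two people trapped near the bridge, send help", "QPSmedia", 50_000),
          ("Lovely weather today", "someone", 120)]
for text, author, followers in tweets:
    flagged = triage_score(text, author, followers) >= 4.0
    print(flagged, text)
```

A real deployment would tune such weights against coded historical data and feed newly observed keywords back into the collection query, as the abstract describes.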

Relevance:

20.00%

Publisher:

Abstract:

Modern health information systems can generate several exabytes of patient data, the so-called "Health Big Data", per year. Many health managers and experts believe that, with these data, it is possible to discover useful knowledge that improves health policies, increases patient safety, and eliminates redundancies and unnecessary costs. The objective of this paper is to discuss the characteristics of Health Big Data as well as the challenges and solutions for health Big Data Analytics (BDA), the process of extracting knowledge from sets of Health Big Data, and to design and evaluate a pipelined framework for use as a guideline/reference in health BDA.
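As a rough illustration of what a pipelined analytics framework implies, the sketch below chains independent stages so that each consumes the previous stage's output; the stage names and toy records are assumptions for illustration, not the framework designed in the paper.

```python
from functools import reduce

def ingest(records):
    # Drop records that failed to load.
    return [r for r in records if r is not None]

def clean(records):
    # Remove empty or missing fields.
    return [{k: v for k, v in r.items() if v not in ("", None)} for r in records]

def analyze(records):
    # Extract a simple aggregate as the pipeline's "knowledge" output.
    ages = [r["age"] for r in records if "age" in r]
    return {"patients": len(records),
            "mean_age": sum(ages) / len(ages) if ages else None}

def pipeline(data, stages):
    """Apply each stage in order, feeding its output to the next."""
    return reduce(lambda acc, stage: stage(acc), stages, data)

print(pipeline([{"age": 40, "note": ""}, None, {"age": 60}],
               [ingest, clean, analyze]))
# {'patients': 2, 'mean_age': 50.0}
```

The benefit of the pipelined structure is that stages can be swapped, reordered, or parallelized without rewriting the whole analysis.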

Relevance:

20.00%

Publisher:

Abstract:

Environmental monitoring is becoming critical as human activity and climate change place greater pressures on biodiversity, leading to an increasing need for data to inform decisions. Acoustic sensors can help collect data across large areas for extended periods, making them attractive for environmental monitoring. However, managing and analysing large volumes of environmental acoustic data is a great challenge and is consequently hindering the effective utilization of the big datasets collected. This paper presents an overview of our current techniques for collecting, storing, and analysing large volumes of acoustic data efficiently, accurately, and cost-effectively.

Relevance:

20.00%

Publisher:

Abstract:

Enterprises, both public and private, have rapidly begun exploiting the benefits of enterprise resource planning (ERP) combined with business analytics and "open datasets", which are often outside the control of the enterprise, to gain further efficiencies, build new service operations, and increase business activity. In many cases, these business activities are based around relevant software systems hosted in a "cloud computing" environment. "Garbage in, garbage out" (GIGO) is a term dating from the 1960s, long used to describe the problems of unqualified dependence on information systems. A more pertinent variation arose somewhat later: "garbage in, gospel out", signifying that with large-scale information systems such as ERP, and with the use of open datasets in a cloud environment, verifying the authenticity of the datasets used may be almost impossible, resulting in dependence upon questionable results. Illicit dataset "impersonation" becomes a reality. At the same time, the ability to audit such results may be an important requirement, particularly in the public sector. This paper discusses the need to enhance identity, reliability, authenticity, and audit services, including naming and addressing services, in this emerging environment, and analyses some current technologies that may be appropriate. However, severe limitations in addressing these requirements have been identified, and the paper proposes further research in the area.
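One basic countermeasure against dataset "impersonation" is to verify a cryptographic digest of the dataset against a value published through a trusted channel. The sketch below shows this with SHA-256; the existence of such a trusted publication channel is exactly the kind of naming and authenticity service whose absence the paper highlights, so treat this as an assumption rather than a solution.

```python
import hashlib

def dataset_digest(path: str) -> str:
    """Stream a dataset file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_dataset(path: str, published_digest: str) -> bool:
    """Reject 'garbage in' before it can become 'gospel out'."""
    return dataset_digest(path) == published_digest
```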

Relevance:

20.00%

Publisher:

Abstract:

The Design Minds ‘The Big Picture’ Toolkit was one of six K7-12 secondary school design toolkits commissioned by the State Library of Queensland (SLQ) Asia Pacific Design Library (APDL) to facilitate the delivery of the Stage 1 launch of its Design Minds online platform (www.designminds.org.au), a partnership initiative with Queensland Government Arts Queensland and the Smithsonian Cooper-Hewitt National Design Museum, on June 29, 2012. Design Minds toolkits are practical guides, underpinned by a combination of one to three of the Design Minds model phases of ‘Inquire’, ‘Ideate’ and ‘Implement’ (supported at each stage by structured reflection), designed to enhance existing school curriculum and empower students with real-life design exercises within the classroom environment. Toolkits directly identify links to NAPLAN, National Curriculum, C2C, and Professional Standards benchmarks, as well as the capabilities of successful and creative 21st-century citizens that they seek to engender in students through design thinking.

Inspired by the Unlimited: Designing for the Asia Pacific Generation Workshop 2010 (http://eprints.qut.edu.au/47762/), this toolkit explores, through three distinct exercises, ‘design for the other 90%’, addressing tools and approaches to diverse and changing social, cultural, technological, and environmental challenges. The toolkit challenges students to be active agents for change and to think creatively and optimistically about solutions to future global issues that deliver social, economic, and environmental benefits. More generally, it aims to build awareness in young people of the role of design in society and the value of design thinking skills in generating strategies to solve basic to complex systemic challenges, as well as to inspire post-secondary pathways and idea generation for education. The toolkit encourages students and teachers to develop sketching, making, communication, presentation, and collaboration skills to improve their design process, as well as to pursue further inquiry (background research) to enhance the ideation exercises.

Exercise 1 focuses on the ‘Inquire’ phase, Exercise 2 on the ‘Inquire’ and ‘Ideate’ phases, and Exercise 3 on the ‘Implement’ phase. Depending on the intensity of the focus, the unit of work could be developed over a 4-5 week program (approximately 4-6 x 60 minute lessons/workshops) or as smaller workshops treated as discrete learning experiences. The toolkit is available for public download from http://designminds.org.au/the-big-picture/ on the Design Minds website.

Relevance:

20.00%

Publisher:

Abstract:

Urban space has the potential to shape people's experience and understanding of the city and of the culture of a place. In some respects, murals and allied forms of wall art occupy the intersection of street art and public art, engaging with, and sometimes transforming, the urban space in which they exist and those who use it. While murals are often conceived as a more ‘permanent’ form of painted art, there has been a trend in recent years towards more deliberately transient forms of wall art, such as washed-wall murals and reverse graffiti. These varying forms of public wall art are embedded within the fabric of urban space and history. This paper explores the intersection of public space, public art, and public memory in a mural project in the Irish city of Cork. Focussing on the washed-wall murals of Cork's historic Shandon district, we explore the sympathetic and synergetic relationship of this wall art with the heritage architecture of the built environment, and the murals as an expression of and for the local community, past and present. Through the Shandon Big Wash Up murals we reflect on the function of participatory public art as an explicit act of urban citizenship, one which works to support community-led re-enchantment of the city through a reconnection with its past.

Relevance:

20.00%

Publisher:

Abstract:

Gel dosimetry and plastic chemical dosimeters such as Presage™ are capable of mapping dose distributions in three dimensions with very high accuracy. Combined with their near tissue equivalence, one would expect that after several decades of development they would be the dosimeter of choice; however, they have not achieved widespread clinical use. This presentation will include a brief description and history of developments in gels and 3D plastics for dosimetry, their limitations and advantages, and their role in the future.

Relevance:

20.00%

Publisher:

Abstract:

Big data is certainly the buzz term in executive networking circles at the moment. Heralded by management consultancies and research organisations alike as the next big thing in business efficiency, it is shooting up the Gartner hype cycle to the giddy heights of the peak of inflated expectations, before it tumbles down into the trough of disillusionment.

Relevance:

20.00%

Publisher:

Abstract:

One cannot help but be impressed by the inroads that digital oilfield technologies have made into the exploration and production (E&P) industry in the past decade. Today’s production systems can be monitored by “smart” sensors that allow engineers to observe almost any aspect of performance in real time. Our understanding of how reservoirs are behaving has improved considerably since the dawn of this revolution, and the industry has been able to move away from point answers to more holistic “big picture” integrated solutions. Indeed, the industry has already reaped the rewards of many of these kinds of investments. Many billions of dollars of value have been delivered by this heightened awareness of what is going on within our assets and the world around them (Van Den Berg et al. 2010).

Relevance:

20.00%

Publisher:

Abstract:

Metaphors are a common instrument of human cognition, activated when seeking to make sense of novel and abstract phenomena. In this article we assess some of the values and assumptions encoded in the framing of the term big data, drawing on the framework of conceptual metaphor. We first discuss the terms data and big data and the meanings historically attached to them by different usage communities and then proceed with a discourse analysis of Internet news items about big data. We conclude by characterizing two recurrent framings of the concept: as a natural force to be controlled and as a resource to be consumed.

Relevance:

20.00%

Publisher:

Abstract:

The basic principles and equations of elementary finance are developed, based on the concept of compound interest. The five quantities of interest in such problems are the present value, the future value, the amount of the periodic payment, the number of periods, and the rate of interest per period. We consider three distinct means of computing each of these five quantities in Excel 2007: (i) using algebraic equations, (ii) using a recursive schedule together with the Goal Seek facility, and (iii) using Excel's intrinsic financial functions. The paper is intended to be used as the basis for a lesson plan and contains many examples and solved problems. Comment is made on the relative difficulty of each approach, and a prominent theme is the systematic use of more than one method to increase student understanding and build confidence in the answer obtained. Full instructions for building each type of model are given, and a complete set of examples and solutions may be downloaded (Examples.xlsx and Solutions.xlsx).
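For reference, the algebraic identity underlying all three approaches links the five quantities as FV = PV(1+i)^n + PMT((1+i)^n - 1)/i for end-of-period payments. The short function below evaluates it directly; this is the standard time-value-of-money formula (with Excel's FV() returning the negative of this value under its cash-flow sign convention), not code taken from the paper.

```python
def future_value(pv: float, pmt: float, rate: float, nper: int) -> float:
    """Future value of a present sum plus an ordinary annuity:
    FV = PV*(1+i)^n + PMT*((1+i)^n - 1)/i  (payments at period end)."""
    growth = (1 + rate) ** nper
    return pv * growth + pmt * (growth - 1) / rate

# Example: $1,000 now plus $100 at the end of each month for 5 years
# at 6% p.a., compounded monthly.
print(round(future_value(1000, 100, 0.06 / 12, 60), 2))
```

Each of the other four quantities can be recovered from the same identity, either algebraically or numerically (as Goal Seek does).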

Relevance:

20.00%

Publisher:

Abstract:

What does the future look like for music festivals in Australia? This article examines the decline of the large festivals that have grown to dominate the scene in Australia in the last twenty years, and the rise of small, specialized festivals that offer a boutique experience.

Relevance:

20.00%

Publisher:

Abstract:

In recent years, increasing focus has been placed on making good business decisions using the products of data analysis. With the advent of the Big Data phenomenon, this is more apparent than ever before. But the question is: how can organizations trust decisions made on the basis of results obtained from the analysis of untrusted data? What is needed is assurance that the data and datasets informing these decisions have not been tainted by an outside agency. This study proposes enabling the authentication of datasets by extending the RESTful architectural scheme to include authentication parameters, while operating within a larger holistic security framework or architectural model compliant with legislation.
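A minimal sketch of what extending a RESTful request with authentication parameters could look like is given below: the request parameters plus a dataset digest and timestamp are canonicalized and signed with an HMAC. The parameter names, the pre-shared key, and the canonicalization rule are all assumptions for illustration, not the scheme the study proposes.

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET_KEY = b"shared-secret"  # hypothetical pre-shared key

def sign_request(params: dict, dataset_digest: str) -> dict:
    """Attach authentication parameters binding the request to a
    specific dataset version and to a holder of the key."""
    signed = dict(params, digest=dataset_digest, ts=str(int(time.time())))
    canonical = urlencode(sorted(signed.items()))
    signed["sig"] = hmac.new(SECRET_KEY, canonical.encode(),
                             hashlib.sha256).hexdigest()
    return signed

def verify_request(params: dict) -> bool:
    """Recompute the HMAC server-side and compare in constant time."""
    params = dict(params)
    claimed = params.pop("sig", "")
    canonical = urlencode(sorted(params.items()))
    expected = hmac.new(SECRET_KEY, canonical.encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)

req = sign_request({"dataset": "health2013"}, "ab12...")  # placeholder digest
print(verify_request(req))  # True
```

In practice, key distribution and the legislative-compliance framing would sit in the larger security architecture the paper refers to.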

Relevance:

20.00%

Publisher:

Abstract:

Twitter is a particularly useful source of social media data: with the Twitter API (the application programming interface, which provides structured access to communication data in standardized formats), researchers can, with a little effort and sufficient technical resources, build very large archives of publicly disseminated tweets on specific topics, areas of interest, or events. In principle, the API delivers very long lists of hundreds, thousands, or millions of tweets together with their metadata; these data can then be extracted, combined, and visualized in a wide variety of ways in order to understand the dynamics of social media communication. This research is frequently built around long-established research questions, but is generally conducted at a hitherto unprecedented scale. The projects of media and communication scholars such as Papacharissi and de Fatima Oliveira (2012), Wood and Baughman (2012), or Lotan et al. (2011) – to name just a handful of recent examples – are fundamentally built on Twitter datasets which now routinely comprise millions of tweets and their associated metadata, captured according to a wide variety of criteria. What all these cases have in common, however, is the need to break new methodological ground in processing and analyzing such large datasets of mediated social interaction.
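As a small illustration of the kind of extraction step described here, the sketch below counts the most frequent hashtags in a line-delimited JSON archive of tweets; the file name is hypothetical, and only the standard "text" field of a tweet object is assumed.

```python
import json
from collections import Counter

def top_hashtags(archive_path: str, n: int = 10):
    """Count hashtags in a line-delimited JSON archive of tweets."""
    counts = Counter()
    with open(archive_path, encoding="utf-8") as f:
        for line in f:
            tweet = json.loads(line)
            for word in tweet.get("text", "").split():
                if word.startswith("#"):
                    counts[word.lower().rstrip(".,!?:;")] += 1
    return counts.most_common(n)

print(top_hashtags("tweets.jsonl"))  # hypothetical archive file
```

Aggregating such counts over time windows is one simple way to visualize the communicative dynamics the passage mentions.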