93 results for analytics


Relevance:

10.00%

Publisher:

Abstract:

The concept of big data has already outpaced traditional data management efforts in almost all industries. In other instances it has obtained promising results that derive value from the large-scale integration and analysis of heterogeneous data sources, for example genomic and proteomic information. Big data analytics has become increasingly important for describing the data sets and analytical techniques in software applications that are very large and complex, owing to its significant advantages, including better business decisions, cost reduction, and the delivery of new products and services [1]. In a similar context, the health community has experienced not only more complex and larger data content, but also information systems that contain a large number of data sources with interrelated and interconnected data attributes. This has resulted in challenging and highly dynamic environments, leading to the creation of big data with its innumerable complexities, for instance the sharing of information while meeting the security requirements expected by stakeholders. Compared with other sectors, big data analysis in the health sector is still in its early stages. Key challenges include accommodating the volume, velocity and variety of healthcare data in the current deluge of exponential growth. Given this complexity, it is understood that while data storage and accessibility are technically manageable, applying Information Accountability measures to healthcare big data might be a practical solution in support of information security, privacy and traceability. Transparency is one important measure that can demonstrate integrity, a vital factor in healthcare services. Clarity about performance expectations is another Information Accountability measure, necessary to avoid data ambiguity, controversy about interpretation and, finally, liability [2]. According to current studies [3], Electronic Health Records (EHR) are key information resources for big data analysis and are also composed of varied co-created values. Common healthcare information originates from, and is used by, different actors and groups, which facilitates understanding of its relationship to other data sources. Consequently, healthcare services often operate as an integrated service bundle. Although a critical requirement for healthcare services and analytics, a comprehensive set of guidelines for adopting EHR to fulfil big data analysis requirements is difficult to find. As a remedy, this research therefore focuses on a systematic approach containing comprehensive guidelines for the data that must be provided to apply and evaluate big data analysis until the decision-making requirements for improving the quality of healthcare services are fulfilled. We believe this approach would subsequently improve quality of life.

Relevance:

10.00%

Publisher:

Abstract:

As technological capabilities for capturing, aggregating, and processing large quantities of data continue to improve, the question becomes how to utilise these resources effectively. Whenever automatic methods fail, it is necessary to rely on human background knowledge, intuition, and deliberation. This creates demand for data exploration interfaces that support the analytical process, allowing users to absorb and derive knowledge from data. Such interfaces have historically been designed for experts. However, existing research has shown promise in involving a broader range of users who act as citizen scientists, which places high demands on usability. Visualisation is one of the most effective analytical tools for humans processing abstract information. Our research focuses on the development of interfaces to support collaborative, community-led inquiry into data, which we refer to as Participatory Data Analytics. The development of data exploration interfaces to support independent investigations by local communities into topics of interest to them presents a unique set of challenges, which we discuss in this paper. We present our preliminary work towards suitable high-level abstractions and interaction concepts that allow users to construct and tailor visualisations to their own needs.

Relevance:

10.00%

Publisher:

Abstract:

This paper proposes the Clinical Pathway Analysis Method (CPAM) approach that enables the extraction of valuable organisational and medical information on past clinical pathway executions from the event logs of healthcare information systems. The method deals with the complexity of real-world clinical pathways by introducing a perspective-based segmentation of the date-stamped event log. CPAM enables the clinical pathway analyst to effectively and efficiently acquire a profound insight into the clinical pathways. By comparing the specific medical conditions of patients with the factors used for characterising the different clinical pathway variants, the medical expert can identify the best therapeutic option. Process mining-based analytics enables the acquisition of valuable insights into clinical pathways, based on the complete audit traces of previous clinical pathway instances. Additionally, the methodology is suited to assess guideline compliance and analyse adverse events. Finally, the methodology provides support for eliciting tacit knowledge and providing treatment selection assistance.
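
A minimal sketch of the perspective-based segmentation idea (not CPAM itself): a date-stamped event log is split into sub-logs according to a chosen perspective attribute, so each pathway variant can be inspected separately. The column names (case_id, activity, timestamp, perspective) and the example events are hypothetical.

```python
import pandas as pd

# Hypothetical date-stamped event log; column names and values are illustrative only.
events = pd.DataFrame({
    "case_id":     ["p1", "p1", "p2", "p2", "p3"],
    "activity":    ["admission", "surgery", "admission", "radiotherapy", "admission"],
    "timestamp":   pd.to_datetime(["2015-01-02", "2015-01-05",
                                   "2015-02-01", "2015-02-09", "2015-03-11"]),
    "perspective": ["surgical", "surgical", "oncological", "oncological", "surgical"],
})

# Perspective-based segmentation: one sub-log per perspective value,
# ordered by case and time so pathway variants can be analysed separately.
sub_logs = {
    name: grp.sort_values(["case_id", "timestamp"])
    for name, grp in events.groupby("perspective")
}

for name, log in sub_logs.items():
    print(name, "->", len(log), "events,", log["case_id"].nunique(), "cases")
```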

Relevance:

10.00%

Publisher:

Abstract:

Acoustic recordings play an increasingly important role in monitoring terrestrial environments. However, due to rapid advances in technology, ecologists are accumulating more audio than they can listen to. Our approach to this big-data challenge is to visualize the content of long-duration audio recordings by calculating acoustic indices. These are statistics which describe the temporal-spectral distribution of acoustic energy and reflect content of ecological interest. We combine spectral indices to produce false-color spectrogram images. These not only reveal acoustic content but also facilitate navigation. An additional analytic challenge is to find appropriate descriptors to summarize the content of 24-hour recordings, so that it becomes possible to monitor long-term changes in the acoustic environment at a single location and to compare the acoustic environments of different locations. We describe a 24-hour ‘acoustic-fingerprint’ which shows some preliminary promise.
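
A minimal sketch of the false-color idea under simplifying assumptions: three per-minute spectral index matrices (random placeholders here) are normalised and mapped to the red, green and blue channels of one image. The specific indices and normalisation used by the authors are not reproduced.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
freq_bins, minutes = 256, 1440          # one 24-hour recording at 1-minute resolution

# Placeholder spectral indices; in practice these are computed from the audio.
index_a = rng.random((freq_bins, minutes))   # e.g. acoustic complexity (assumed)
index_b = rng.random((freq_bins, minutes))   # e.g. temporal entropy (assumed)
index_c = rng.random((freq_bins, minutes))   # e.g. background noise (assumed)

def normalise(x):
    """Rescale an index matrix to [0, 1] for use as a colour channel."""
    return (x - x.min()) / (x.max() - x.min() + 1e-9)

# Stack the three normalised indices as RGB to form a false-color spectrogram.
false_color = np.dstack([normalise(index_a), normalise(index_b), normalise(index_c)])

plt.imshow(false_color, aspect="auto", origin="lower")
plt.xlabel("time (minutes)")
plt.ylabel("frequency bin")
plt.title("False-color index spectrogram (placeholder data)")
plt.show()
```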

Relevance:

10.00%

Publisher:

Abstract:

Big Data and predictive analytics have received significant attention from the media and academic literature throughout the past few years, and it is likely that these emerging technologies will materially impact the mining sector. This short communication argues, however, that these technological forces will probably unfold differently in the mining industry than they have in many other sectors because of significant differences in the marginal cost of data capture and storage. To this end, we offer a brief overview of what Big Data and predictive analytics are, and explain how they are bringing about changes in a broad range of sectors. We discuss the “N=all” approach to data collection being promoted by many consultants and technology vendors in the marketplace but, by considering the economic and technical realities of data acquisition and storage, we then explain why an “n ≪ all” data collection strategy probably makes more sense for the mining sector. Finally, towards shaping the industry’s policies with regard to technology-related investments in this area, we conclude by putting forward a conceptual model for leveraging Big Data tools and analytical techniques that is a more appropriate fit for the mining sector.

Relevance:

10.00%

Publisher:

Abstract:

This paper is the second in a two-part series that maps continuities and ruptures in conceptions of power and traces their effects in educational discourse on 'the child'. It delineates two post-Newtonian intellectual trajectories through which concepts of 'power' arrived at the theorization of 'the child': the paradoxical bio-physical inscriptions of human-ness that accompanied mechanistic worldviews and the explanations for social motion in political philosophy. The intersection of pedagogical theories with 'the child' and 'power' is further traced from the latter 1800s to the present, where a Foucaultian analytics of power-as-effects is reconsidered in regard to histories of motion. The analysis culminates in an examination of post-Newtonian (dis)continuities in the theorization of power, suggesting some productive paradoxes that inhabit turn of the 21st-century conceptualizations of the social.

Relevance:

10.00%

Publisher:

Abstract:

"In Perpetual Motion is an "historical choreography" of power, pedagogy, and the child from the 1600s to the early 1900s. It breaks new ground by historicizing the analytics of power and motion that have interpenetrated renditions of the young. Through a detailed examination of the works of John Locke, Jean-Jacques Rousseau, Johann Herbart, and G. Stanley Hall, this book maps the discursive shifts through which the child was given a unique nature, inscribed in relation to reason, imbued with an effectible interiority, and subjected to theories of power and motion. The book illustrates how developmentalist visions took hold in U.S. public school debates. It documents how particular theories of power became submerged and taken for granted as essences inside the human subject. In Perpetual Motion studiously challenges views of power as in or of the gaze, tracing how different analytics of power have been used to theorize what gazing could notice."--BOOK JACKET.

Relevance:

10.00%

Publisher:

Abstract:

"Rereading the historical record indicates that it is no longer so easy to argue that history is simply prior to its forms. Since the mid-1990s a new wave of research has formed around wider debates in the humanities and social sciences, such as decentering the subject, new analytics of power, reconsideration of one-dimensional time and three-dimensional space, attention to beyond-archival sources, alterity, Otherness, the invisible, and more. In addition, broader and contradictory impulses around the question of the nation - transnational, post-national, proto-national, and neo-national movements – have unearthed a new series of problematics and focused scholarly attention on traveling discourses, national imaginaries, and less formal processes of socialization, bonding, and subjectification. New Curriculum History challenges prior occlusions in the field, building upon and departing from previous waves of scholarship, extending the focus beyond the insularity of public schooling, the traditional framework of the self-contained nation-state, and the psychology of the schooled individual. Drawing on global studies, historical sociology, postcolonial studies, critical race theory, visual culture theory, disability studies, psychoanalytics, Cambridge school structuralisms, poststructuralisms, and infra- and transnational approaches the volume holds together not despite but because of differences and incommensurabilities in rereading historical records. Audience: Scholars and students in curriculum studies, history, education, philosophy, and cultural studies will be interested in these chapters for their methodological range, their innovations and their deterritorializations."--publisher website

Relevance:

10.00%

Publisher:

Abstract:

Many websites presently provide the facility for users to rate item quality based on their opinion. These ratings are later used to produce item reputation scores. The majority of websites apply the mean method to aggregate user ratings. This method is very simple and is not considered an accurate aggregator. Many methods have been proposed to help aggregators produce more accurate reputation scores. In the majority of proposed methods the authors use extra information about the rating providers or about the context (e.g. time) in which the rating was given. However, this information is not always available. In such cases these methods fall back on the mean method or other simple alternatives. In this paper, we propose a novel reputation model that generates more accurate item reputation scores based on the collected ratings alone. Our proposed model embeds previously disregarded statistical properties of a given rating dataset in order to enhance the accuracy of the generated reputation scores. In more detail, we use the Beta distribution to produce weights for ratings and aggregate the ratings using the weighted mean method. Experiments show that the proposed model outperforms current state-of-the-art models.
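
The following sketch illustrates the general idea of weighting ratings with a Beta density before taking a weighted mean. The parameter estimation shown (method of moments on ratings rescaled to the open interval (0, 1)) is an assumption made for illustration, not necessarily the estimator used in the paper.

```python
import numpy as np
from scipy.stats import beta

def beta_weighted_reputation(ratings, r_min=1, r_max=5):
    """Aggregate ratings into a reputation score using Beta-density weights
    and a weighted mean (illustrative sketch, not the paper's exact model)."""
    r = np.asarray(ratings, dtype=float)

    # Rescale ratings to (0, 1) so a Beta density can be evaluated on them.
    eps = 1e-3
    x = np.clip((r - r_min) / (r_max - r_min), eps, 1 - eps)

    # Method-of-moments estimates of the Beta parameters (assumed choice).
    m, v = x.mean(), x.var()
    if v <= 0:                      # all ratings identical: fall back to the plain mean
        return float(r.mean())
    common = m * (1 - m) / v - 1
    a, b = m * common, (1 - m) * common

    # Weight each rating by the fitted density and take the weighted mean.
    w = beta.pdf(x, a, b)
    return float(np.average(r, weights=w))

# The lone low rating is weighted less than the majority, so the score
# ends up above the plain mean of 4.0.
print(beta_weighted_reputation([5, 5, 4, 5, 1]))
```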

Relevance:

10.00%

Publisher:

Abstract:

This chapter imports Michel Callon’s model of the ‘hybrid forum’ (Callon et al, 2009, p. 18) into social media research, arguing that certain kinds of hashtag publics can be mapped onto this model. It explores this idea of the hashtag as hybrid forum through the worked example of #agchatoz—a hashtag used as both ‘meetup’ organizer for Australian farmers and other stakeholders in Australian agriculture, and as a topic marker for general discussion of related issues. Applying the principles and techniques of digital methods (Rogers, 2013), we employ a standard suite of analytics to a longitudinal dataset of #agchatoz tweets. The results are used not only to describe various elements and dynamics of this hashtag, but also to experiment with the articulation of such approaches with the theoretical model of the hybrid forum, as well as exploring the ways that controversies animate and transform such forums as part of the emergence and cross-pollination of issue publics.

Relevance:

10.00%

Publisher:

Abstract:

Social media analytics is a rapidly developing field of research at present: new, powerful ‘big data’ research methods draw on the Application Programming Interfaces (APIs) of social media platforms. Twitter has proven to be a particularly productive space for such methods development, initially due to the explicit support and encouragement of Twitter, Inc. However, because of the growing commercialisation of Twitter data, and the increasing API restrictions imposed by Twitter, Inc., researchers are now facing a considerably less welcoming environment, and are forced to find additional funding for paid data access, or to bend or break the rules of the Twitter API. This article considers the increasingly precarious nature of ‘big data’ Twitter research, and flags the potential consequences of this shift for academic scholarship.

Relevance:

10.00%

Publisher:

Abstract:

The 2008 US election has been heralded as the first presidential election of the social media era, but took place at a time when social media were still in a state of comparative infancy; so much so that the most important platform was not Facebook or Twitter, but the purpose-built campaign site my.barackobama.com, which became the central vehicle for the most successful electoral fundraising campaign in American history. By 2012, the social media landscape had changed: Facebook and, to a somewhat lesser extent, Twitter are now well-established as the leading social media platforms in the United States, and were used extensively by the campaign organisations of both candidates. As third-party spaces controlled by independent commercial entities, however, their use necessarily differs from that of home-grown, party-controlled sites: from the point of view of the platform itself, a @BarackObama or @MittRomney is technically no different from any other account, except for the very high follower count and an exceptional volume of @mentions. In spite of the significant social media experience which Democrat and Republican campaign strategists had already accumulated during the 2008 campaign, therefore, the translation of such experience to the use of Facebook and Twitter in their 2012 incarnations still required a substantial amount of new work, experimentation, and evaluation. This chapter examines the Twitter strategies of the leading accounts operated by both campaign headquarters: the ‘personal’ candidate accounts @BarackObama and @MittRomney as well as @JoeBiden and @PaulRyanVP, and the campaign accounts @Obama2012 and @TeamRomney. Drawing on datasets which capture all tweets from and at these accounts during the final months of the campaign (from early September 2012 to the immediate aftermath of the election night), we reconstruct the campaigns’ approaches to using Twitter for electioneering from the quantitative and qualitative patterns of their activities, and explore the resonance which these accounts have found with the wider Twitter userbase. A particular focus of our investigation in this context will be on the tweeting styles of these accounts: the mixture of original messages, @replies, and retweets, and the level and nature of engagement with everyday Twitter followers. We will examine whether the accounts chose to respond (by @replying) to the messages of support or criticism which were directed at them, whether they retweeted any such messages (and whether there was any preferential retweeting of influential or – alternatively – demonstratively ordinary users), and/or whether they were used mainly to broadcast and disseminate prepared campaign messages. Our analysis will highlight any significant differences between the accounts we examine, trace changes in style over the course of the final campaign months, and correlate such stylistic differences with the respective electoral positioning of the candidates. Further, we examine the use of these accounts during moments of heightened attention (such as the presidential and vice-presidential debates, or in the context of controversies such as that caused by the publication of the Romney “47%” video; additional case studies may emerge over the remainder of the campaign) to explore how they were used to present or defend key talking points, and exploit or avert damage from campaign gaffes. 
A complementary analysis of the messages directed at the campaign accounts (in the form of @replies or retweets) will also provide further evidence for the extent to which these talking points were picked up and disseminated by the wider Twitter population. Finally, we also explore the use of external materials (links to articles, images, videos, and other content on the campaign sites themselves, in the mainstream media, or on other platforms) by the campaign accounts, and the resonance which these materials had with the wider follower base of these accounts. This provides an indication of the integration of Twitter into the overall campaigning process, by highlighting how the platform was used as a means of encouraging the viral spread of campaign propaganda (such as advertising materials) or of directing user attention towards favourable media coverage. By building on comprehensive, large datasets of Twitter activity (as of early October, our combined datasets comprise some 3.8 million tweets) which we process and analyse using custom-designed social media analytics tools, and by using our initial quantitative analysis to guide further qualitative evaluation of Twitter activity around these campaign accounts, we are able to provide an in-depth picture of the use of Twitter in political campaigning during the 2012 US election, offering detailed new insights into social media use in contemporary elections. This analysis will then also be able to serve as a touchstone for the analysis of social media use in subsequent elections, in the USA as well as in other developed nations where Twitter and other social media platforms are utilised in electioneering.
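
A minimal, hypothetical sketch of one element of such an analysis: classifying each tweet as a retweet, an @reply, or an original message and tallying the mixture per account. The record fields, placeholder tweet texts, and the string-prefix heuristics are illustrative assumptions, not the authors' actual tooling.

```python
from collections import Counter

# Hypothetical tweet records; real datasets would be collected via the Twitter API.
tweets = [
    {"account": "@BarackObama", "text": "RT @Obama2012: example retweeted message"},
    {"account": "@BarackObama", "text": "example original campaign message"},
    {"account": "@MittRomney",  "text": "@PaulRyanVP example reply message"},
    {"account": "@MittRomney",  "text": "example original campaign message"},
]

def tweet_style(text):
    """Very rough heuristic classification of a tweet's style."""
    if text.startswith("RT @"):
        return "retweet"
    if text.startswith("@"):
        return "@reply"
    return "original"

# Tally the mixture of original messages, @replies, and retweets per account.
styles = Counter((t["account"], tweet_style(t["text"])) for t in tweets)

for (account, style), n in sorted(styles.items()):
    print(f"{account:15s} {style:9s} {n}")
```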

Relevance:

10.00%

Publisher:

Abstract:

Acoustic recordings play an increasingly important role in monitoring terrestrial and aquatic environments. However, rapid advances in technology make it possible to accumulate thousands of hours of recordings, more than ecologists can ever listen to. Our approach to this big-data challenge is to visualize the content of long-duration audio recordings on multiple scales, from minutes, hours, days to years. The visualization should facilitate navigation and yield ecologically meaningful information prior to listening to the audio. To construct images, we calculate acoustic indices, statistics that describe the distribution of acoustic energy and reflect content of ecological interest. We combine various indices to produce false-color spectrogram images that reveal acoustic content and facilitate navigation. The technical challenge we investigate in this work is how to navigate recordings that are days or even months in duration. We introduce a method of zooming through multiple temporal scales, analogous to Google Maps. However, the “landscape” to be navigated is not geographical and not therefore intrinsically visual, but rather a graphical representation of the underlying audio. We describe solutions to navigating spectrograms that range over three orders of magnitude of temporal scale. We make three sets of observations: 1. We determine that at least ten intermediate scale steps are required to zoom over three orders of magnitude of temporal scale; 2. We determine that three different visual representations are required to cover the range of temporal scales; 3. We present a solution to the problem of maintaining visual continuity when stepping between different visual representations. Finally, we demonstrate the utility of the approach with four case studies.
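
A minimal sketch of one ingredient of such zooming: building a pyramid of progressively coarser temporal resolutions from a single index matrix by averaging blocks of time columns, analogous to map tile levels. The scale factors and the block-averaging aggregation are assumptions for illustration, not the authors' exact rendering pipeline.

```python
import numpy as np

def coarsen_time(index_matrix, factor):
    """Average blocks of `factor` adjacent time columns to produce a coarser view."""
    freq_bins, steps = index_matrix.shape
    usable = steps - (steps % factor)                   # drop any incomplete final block
    blocks = index_matrix[:, :usable].reshape(freq_bins, usable // factor, factor)
    return blocks.mean(axis=2)

# Placeholder index matrix: 256 frequency bins x 30 days of 1-minute values.
rng = np.random.default_rng(1)
minute_scale = rng.random((256, 30 * 24 * 60))

# Roughly a dozen zoom levels from 1-minute up to 1-day resolution per pixel,
# echoing the need for many intermediate scale steps when zooming.
zoom_factors = [1, 2, 4, 8, 15, 30, 60, 120, 240, 480, 960, 1440]
pyramid = {f: coarsen_time(minute_scale, f) for f in zoom_factors}

for f, view in pyramid.items():
    print(f"1 px = {f:4d} min -> image width {view.shape[1]} px")
```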

Relevance:

10.00%

Publisher:

Abstract:

Background: The past decade has seen a rapid change in the climate system with an increased risk of extreme weather events. On and following the 3rd of January 2013, Tasmania experienced three catastrophic bushfires, which led to the evacuation of several communities, the loss of many properties, and a financial cost of approximately AUD$80 million.

Objective: To explore the impacts of the 2012/2013 Tasmanian bushfires on community pharmacies.

Method: Qualitative research methods were employed, using semi-structured telephone interviews with a purposive sample of seven Tasmanian pharmacists. The interviews were recorded and transcribed, and two different methods were used to analyse the text. The first method utilised Leximancer® text analytics software to provide a bird’s-eye view of the conceptual structure of the text. The second method involved manual, open and axial coding, conducted independently by the two researchers for inter-rater reliability, to identify key themes in the discourse.

Results: Two main themes were identified - ‘people’ and ‘supply’ - from which six key concepts were derived. The six concepts were ‘patients’, ‘pharmacists’, ‘local doctor’, ‘pharmacy operations’, ‘disaster management planning’, and ‘emergency supply regulation’.

Conclusion: This study identified challenges faced by community pharmacists during the Tasmanian bushfires. Interviewees highlighted the need for both the Tasmanian State Government and the Australian Federal Government to recognise the important primary care role that community pharmacists play during natural disasters, and therefore to involve pharmacists in disaster management planning. They called for greater support and guidance for community pharmacists from regulatory and other government bodies during these events. Their comments highlighted the need for a review of Tasmania’s three-day emergency supply regulation, which allows pharmacists to provide a three-day supply of a patient’s medication without a doctor’s prescription in an emergency situation.

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we present the results of an exploratory study that examined the problem of automating the content analysis of student online discussion transcripts. We looked at the problem of coding discussion transcripts for the levels of cognitive presence, one of the three main constructs in the Community of Inquiry (CoI) model of distance education. Using Coh-Metrix and LIWC features, together with a set of custom features developed to capture discussion context, we developed a random forest classification system that achieved 70.3% classification accuracy and 0.63 Cohen's kappa, which is significantly higher than the values reported in previous studies. Besides the improvement in classification accuracy, the developed system is also less sensitive to overfitting, as it uses only 205 classification features, around 100 times fewer than similar systems based on bag-of-words features. We also provide an overview of the classification features most indicative of the different phases of cognitive presence, which gives additional insight into the nature of the cognitive presence learning cycle. Overall, our results show the great potential of the proposed approach, with the added benefit of providing further characterization of the cognitive presence coding scheme.
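
A hedged sketch of the general classification setup described: a random forest over a compact feature matrix, evaluated out-of-fold with accuracy and Cohen's kappa. The feature values and labels below are synthetic placeholders; the actual Coh-Metrix, LIWC and contextual features, the dataset size, and the tuning choices are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(42)

# Synthetic stand-ins: 1000 messages x 205 features (the compact feature set
# mentioned in the abstract), labelled with 5 placeholder classes.
X = rng.normal(size=(1000, 205))
y = rng.integers(0, 5, size=1000)

clf = RandomForestClassifier(n_estimators=100, random_state=42)

# Cross-validated predictions so accuracy and kappa are computed out-of-fold.
y_pred = cross_val_predict(clf, X, y, cv=10)

print(f"accuracy:      {accuracy_score(y, y_pred):.3f}")
print(f"Cohen's kappa: {cohen_kappa_score(y, y_pred):.3f}")
```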