900 results for Google Analytics
Abstract:
OBJECTIVE: Those with mental illness are at increased risk of physical health problems. The current study aimed to examine the information available online to the Australian public about the increased risk and consequences of physical illness in those with mental health problems and the services available to address these co-morbidities. METHODS: A structured online search was conducted with the search engine Google Australia (www.google.com.au) using generic search terms 'mental health information Australia', 'mental illness information Australia', 'depression', 'anxiety', and 'psychosis'. The direct content of websites was examined for information on the physical co-morbidities of mental illness. All external links on high-profile websites [the first five websites retrieved under each search term (n = 25)] were examined for information pertaining to physical health. RESULTS: Only 4.2% of websites informing the public about mental health contained direct content information about the increased risk of physical co-morbidities. The Australian Government's Department of Health and Ageing site did not contain any information. Of the high-profile websites, 62% had external links to resources about physical health and 55% had recommendations or resources for physical health. Most recommendations were generic. CONCLUSIONS: Relative to the seriousness of this problem, there is a paucity of information available to the public about the increased physical health risks associated with mental illness. Improved public awareness is the starting point of addressing this health inequity.
Abstract:
The Internet of Things facilitates the identification, digitization, and control of physical objects. However, it is the availability of cost-effective sensors, mobile smart devices, scalable cloud infrastructure, and advanced analytics that has consumerized the Internet of Things. The accessibility of digital representations of things has transformative potential and provides entirely new affordances for organizations and their ecosystems across most industries.
Abstract:
"The collection contributes to transnational whiteness debates through theoretically informed readings of historical and contemporary texts by established and emerging scholars in the field of critical whiteness studies. From a wide range of disciplinary perspectives, the book traces continuity and change in the cultural production of white virtue within texts, from the proud colonial moment through to neoliberalism and the global war on terror in the twenty-first century. Read together, these chapters convey a complex understanding of how transnational whiteness travels and manifests itself within different political and cultural contexts. Some chapters address political, legal and constitutional aspects of whiteness while others explore media representations and popular cultural texts and practices. The book also contains valuable historical studies documenting how whiteness is insinuated within the texts produced, circulated and reproduced in specific cultural and national locations."--Google eBook
Abstract:
Big Data presents many challenges related to volume, whether one is interested in studying past datasets or, even more problematically, attempting to work with live streams of data. The most obvious challenge, in a ‘noisy’ environment such as contemporary social media, is to collect the pertinent information: whether that is information for a specific study, tweets that can inform emergency services or other responders during an ongoing crisis, or signals that give an advantage to those involved in prediction markets. Often, such a process is iterative, with keywords and hashtags changing with the passage of time, and both collection and analytic methodologies need to be continually adapted to respond to this changing information. While many of the data sets collected and analyzed are preformed, that is, they are built around a particular keyword, hashtag, or set of authors, they still contain a large volume of information, much of which is unnecessary for the current purpose and/or potentially useful for future projects. Accordingly, this panel considers methods for separating and combining data to optimize big data research and report findings to stakeholders. The first paper considers possible coding mechanisms for incoming tweets during a crisis, taking a large stream of incoming tweets and selecting which of those need to be immediately placed in front of responders, for manual filtering and possible action. The paper suggests two solutions for this: content analysis and user profiling. In the former case, aspects of the tweet are assigned a score to assess its likely relationship to the topic at hand, and the urgency of the information, whilst the latter attempts to identify those users who are either serving as amplifiers of information or are known as an authoritative source.
Through these techniques, the information contained in a large dataset could be filtered down to match the expected capacity of emergency responders, and knowledge as to the core keywords or hashtags relating to the current event is constantly refined for future data collection. The second paper is also concerned with identifying significant tweets, but in this case tweets relevant to a particular prediction market: tennis betting. As increasing numbers of professional sportsmen and women create Twitter accounts to communicate with their fans, information is being shared regarding injuries, form and emotions that has the potential to impact future results. As has already been demonstrated with leading US sports, such information is extremely valuable. Tennis, as with American Football (NFL) and Baseball (MLB), has paid subscription services which manually filter incoming news sources, including tweets, for information valuable to gamblers, gambling operators, and fantasy sports players. However, whilst such services are still niche operations, much of the value of information is lost by the time it reaches one of these services. The paper thus considers how information could be filtered from Twitter user lists and hashtag or keyword monitoring, assessing the value of the source, the information, and the prediction markets to which it may relate. The third paper examines methods for collecting Twitter data and following changes in an ongoing, dynamic social movement, such as the Occupy Wall Street movement. It involves the development of technical infrastructure to collect and make the tweets available for exploration and analysis. A strategy to respond to changes in the social movement is also required, or the resulting tweets will only reflect the discussions and strategies the movement used at the time the keyword list was created; in a way, keyword creation is part strategy and part art.
In this paper we describe strategies for the creation of a social media archive, specifically tweets related to the Occupy Wall Street movement, and methods for continuing to adapt data collection strategies as the movement’s presence in Twitter changes over time. We also discuss the opportunities and methods to extract smaller slices of data from an archive of social media data to support a multitude of research projects in multiple fields of study. The common theme amongst these papers is that of constructing a data set, filtering it for a specific purpose, and then using the resulting information to aid in future data collection. The intention is that through the papers presented, and subsequent discussion, the panel will inform the wider research community not only on the objectives and limitations of data collection, live analytics, and filtering, but also on current and in-development methodologies that could be adopted by those working with such datasets, and how such approaches could be customized depending on the project stakeholders.
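The content-analysis approach described in the first paper, where aspects of a tweet are scored for topical relevance and urgency before being routed to responders, could be sketched roughly as follows. The keyword lists, weights, and alert threshold here are illustrative assumptions, not the panel's actual coding scheme:

```python
# Minimal sketch of content-analysis scoring for incoming crisis tweets.
# Keywords, weights, and the alert threshold are illustrative assumptions.

URGENCY_TERMS = {"trapped": 3.0, "help": 2.0, "flooding": 1.5, "evacuate": 2.5}
TOPIC_TERMS = {"#qldfloods": 2.0, "brisbane": 1.0, "river": 0.5}
ALERT_THRESHOLD = 3.0  # tweets scoring at or above this go to responders

def score_tweet(text):
    """Assign a combined topical-relevance and urgency score to a tweet."""
    words = text.lower().split()
    urgency = sum(w for term, w in URGENCY_TERMS.items() if term in words)
    topic = sum(w for term, w in TOPIC_TERMS.items() if term in words)
    return urgency + topic

def filter_for_responders(tweets):
    """Keep only tweets likely to need immediate manual review."""
    return [t for t in tweets if score_tweet(t) >= ALERT_THRESHOLD]

tweets = [
    "family trapped on roof please help #qldfloods",
    "thoughts with everyone in brisbane today",
]
print(filter_for_responders(tweets))  # only the first tweet passes
```

In practice the term weights themselves would be among the things iteratively refined as an event unfolds, as the abstract notes for keyword lists generally.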
Abstract:
Objective The 2010–2011 Queensland floods resulted in the most deaths from a single flood event in Australia since 1916. This article analyses the information on these deaths for comparison with those from previous floods in modern Australia in an attempt to identify factors that have contributed to those deaths. Haddon's Matrix, originally designed for prevention of road trauma, offers a framework for understanding the interplay between contributing factors and helps facilitate a clearer understanding of the varied strategies required to ensure people's safety for particular flood types. Methods Public reports and flood-relevant literature were searched using key words ‘flood’, ‘fatality’, ‘mortality’, ‘death’, ‘injury’ and ‘victim’ through Google Scholar, PubMed, ProQuest and EBSCO. Data relating to reported deaths during the 2010–2011 Queensland floods, and relevant data on previous Australian flood fatalities (1997–2009), were collected from these available sources. These sources were also used to identify contributing factors. Results There were 33 deaths directly attributed to the event, of which 54.5% were swept away in a flash flood on 10 January 2011. A further 15.1% of fatalities were caused by inappropriate behaviours. This differs from previous floods in modern Australia, where over 90% of deaths were related to the choices made by individuals. There is no single reason why people drown in floods, but rather a complex interplay of factors. Conclusions The present study and its integration of research findings and conceptual frameworks might assist governments and communities to develop policies and strategies to prevent flood injury and fatalities.
Abstract:
Talk of Big Data seems to be everywhere. Indeed, the apparently value-free concept of ‘data’ has seen a spectacular broadening of popular interest, shifting from the dry terminology of labcoat-wearing scientists to the buzzword du jour of marketers. In the business world, data is increasingly framed as an economic asset of critical importance, a commodity on a par with scarce natural resources (Backaitis, 2012; Rotella, 2012). It is social media that has most visibly brought the Big Data moment to media and communication studies, and beyond it, to the social sciences and humanities. Social media data is one of the most important areas of the rapidly growing data market (Manovich, 2012; Steele, 2011). Massive valuations are attached to companies that directly collect and profit from social media data, such as Facebook and Twitter, as well as to resellers and analytics companies like Gnip and DataSift. The expectation attached to the business models of these companies is that their privileged access to data and the resulting valuable insights into the minds of consumers and voters will make them irreplaceable in the future. Analysts and consultants argue that advanced statistical techniques will allow the detection of ongoing communicative events (natural disasters, political uprisings) and the reliable prediction of future ones (electoral choices, consumption)...
Abstract:
Part travelogue, part flight of fancy, this paper recounts a coastline stroll from Maroubra Beach to Bondi in Sydney’s eastern suburbs. The author as ‘travel guide’ points out features of potential interest to two visiting criminological colleagues as they ‘pass by’ scenery of great beauty shadowed by acts of spectacular violence. The everyday acts of walking and talking while passing through a ‘landscape’ serve to constitute a criminology of everyday life, illustrating the way in which a consciousness of crime, crime sites, analyses and theories permeates the ways a ‘tourist trail’ might be experienced and seen, myths made and histories forged. The walk starts with the unseen lines of penal force radiating from Long Bay Gaol, before skirting through surfing and its regulation; the ‘brotherhood’ of the BRA Boys; the Hines killing and the politics of self defence; the shark arm case, the Virgin Mary and the Bali bombing memorial at Coogee; zones of the beach and Jock Young’s Vertigo at Bronte and Tamarama; before finishing at the Marks Park ‘badlands’ at Bondi, scene of a series of mostly unsolved and unpunished homophobic killings, giving rise to reflections on ‘ungrievable lives’, memory, mourning and forgetting.
Abstract:
The Australian Law Reform Commission’s Final Report, Copyright and the Digital Economy, recommends the introduction of a flexible fair use provision. In doing so, it has sought to develop a technology-neutral approach to copyright that is adaptive to new technologies and which promotes innovation.
Abstract:
Several websites utilise a rule-based recommendation system, which generates choices based on a series of questionnaires, to recommend products to users. This approach carries a high risk of customer attrition, and the bottleneck is the questionnaire set. If the questioning process is too long, complex or tedious, users are likely to quit the questionnaire before a product is recommended to them. If the questioning process is too short, the users' intentions cannot be gathered. Commonly used feature selection methods do not provide a satisfactory solution. We propose a novel process combining clustering, decision trees and association rule mining for a group-oriented question reduction process. The question set is reduced according to common properties that are shared by a specific group of users. When applied to a real-world website, the proposed combined method outperforms methods where the reduction of questions is done only by using association rule mining or only by observing the distribution within the group.
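The core idea of group-oriented question reduction, skipping questions whose answers are already shared by most users in a group, might be sketched as follows. The grouping step here is a trivial stand-in for the clustering, decision-tree and association-rule pipeline the abstract actually proposes, and all data are made up:

```python
# Sketch: questions where one answer reaches a support threshold within a
# user group can be pre-filled for that group and removed from the
# questionnaire. Group membership is assumed given; thresholds are illustrative.

from collections import Counter

def reducible_questions(group_answers, support=0.9):
    """Return {question: dominant_answer} for questions where one answer
    reaches the given support within the group, so they can be skipped."""
    n = len(group_answers)
    reducible = {}
    for q in group_answers[0]:
        counts = Counter(user[q] for user in group_answers)
        answer, freq = counts.most_common(1)[0]
        if freq / n >= support:
            reducible[q] = answer
    return reducible

# Answers from one (hypothetical) user group.
group = [
    {"budget": "low", "brand": "A", "colour": "red"},
    {"budget": "low", "brand": "A", "colour": "blue"},
    {"budget": "low", "brand": "B", "colour": "red"},
]
print(reducible_questions(group, support=0.9))  # {'budget': 'low'}
```

Lowering the support threshold trades questionnaire length against recommendation accuracy, which is the tension the abstract identifies.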
Abstract:
Twitter is the focus of much research attention, both in traditional academic circles and in commercial market and media research, as analytics give increasing insight into the performance of the platform in areas as diverse as political communication, crisis management, television audiencing and other industries. While methods for tracking Twitter keywords and hashtags have developed apace and are well documented, the make-up of the Twitter user base and its evolution over time have been less understood to date. Recent research efforts have taken advantage of functionality provided by Twitter's Application Programming Interface to develop methodologies to extract information that allows us to understand the growth of Twitter, its geographic spread and the processes by which particular Twitter users have attracted followers. From politicians to sporting teams, and from YouTube personalities to reality television stars, this technique enables us to gain an understanding of what prompts users to follow others on Twitter. This article outlines how we came upon this approach, describes the method we adopted to produce accession graphs and discusses their use in Twitter research. It also addresses the wider ethical implications of social network analytics, particularly in the context of a detailed study of the Twitter user base.
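One building block for accession graphs of the kind described above could rely on the fact that Twitter "snowflake" IDs issued since late 2010 embed a millisecond timestamp, and that the follower-list API endpoint has generally returned IDs most-recent-first. This is a hedged sketch of that derivation, not the authors' actual method; the epoch constant is Twitter's published snowflake epoch, and the sample IDs are made up:

```python
# Sketch: recover timestamps embedded in Twitter snowflake IDs and pair them
# with list position to produce (position, time) points for an accession graph.
# Assumes follower IDs arrive most-recent-first, as the API has typically done.

from datetime import datetime, timezone

TWITTER_EPOCH_MS = 1288834974657  # snowflake epoch, 4 November 2010

def snowflake_to_datetime(snowflake_id):
    """Recover the creation time embedded in a snowflake ID (UTC)."""
    ms = (snowflake_id >> 22) + TWITTER_EPOCH_MS
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

def accession_points(follower_ids):
    """(position, timestamp) pairs; reversing most-recent-first order gives
    accession order from earliest follower to latest."""
    ordered = list(reversed(follower_ids))
    return [(i, snowflake_to_datetime(fid)) for i, fid in enumerate(ordered)]

sample_ids = [1500000000000000000, 900000000000000000]  # illustrative only
for pos, created in accession_points(sample_ids):
    print(pos, created.isoformat())
```

Note that a snowflake timestamp dates the follower's account creation, not the moment of following, which is one reason list ordering matters to the technique.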
Abstract:
Enterprise resource planning (ERP) systems are rapidly being combined with “big data” analytics processes and publicly available “open data sets”, which are usually outside the arena of the enterprise, to expand activity through better service to current clients as well as identifying new opportunities. Moreover, these activities are now largely based around relevant software systems hosted in a “cloud computing” environment. However, the over 50-year-old phrase related to mistrust in computer systems, namely “garbage in, garbage out” or “GIGO”, is used to describe problems of unqualified and unquestioning dependency on information systems. A more relevant GIGO interpretation arose sometime later, namely “garbage in, gospel out”, signifying that with large-scale information systems based around ERP and open datasets as well as “big data” analytics, particularly in a cloud environment, the ability to verify the authenticity and integrity of the data sets used may be almost impossible. In turn, this may easily result in decision making based upon questionable results which are unverifiable. Illicit “impersonation” of and modifications to legitimate data sets may become a reality, while at the same time the ability to audit any derived results of analysis may be an important requirement, particularly in the public sector. The pressing need for enhancement of identity, reliability, authenticity and audit services, including naming and addressing services, in this emerging environment is discussed in this paper. Some appropriate technologies currently being offered are also examined. However, severe limitations in addressing the problems identified are found, and the paper proposes further necessary research work for the area.
(Note: This paper is based on an earlier unpublished paper/presentation “Identity, Addressing, Authenticity and Audit Requirements for Trust in ERP, Analytics and Big/Open Data in a ‘Cloud’ Computing Environment: A Review and Proposal” presented to the Department of Accounting and IT, College of Management, National Chung Chen University, 20 November 2013.)
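One concrete measure implied by the integrity and audit concerns above is to record cryptographic digests of ingested datasets, so later analyses can demonstrate they ran against unmodified data. This is a minimal sketch of that idea, not a technology examined in the paper; file names and contents are illustrative:

```python
# Sketch of a dataset-integrity manifest: record SHA-256 digests at ingestion
# time, then verify them before analysis or audit. Names/data are made up.

import hashlib
import json

def dataset_digest(data):
    """SHA-256 digest of a dataset's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def build_manifest(datasets):
    """JSON manifest mapping dataset names to digests, for audit trails."""
    manifest = {name: dataset_digest(blob) for name, blob in datasets.items()}
    return json.dumps(manifest, sort_keys=True)

def verify(manifest_json, name, data):
    """Check that a dataset still matches its recorded digest."""
    manifest = json.loads(manifest_json)
    return manifest.get(name) == dataset_digest(data)

manifest = build_manifest({"open_data.csv": b"id,value\n1,42\n"})
print(verify(manifest, "open_data.csv", b"id,value\n1,42\n"))  # True
print(verify(manifest, "open_data.csv", b"id,value\n1,43\n"))  # False
```

A digest manifest detects modification but not impersonation of the original source, which is why the paper's broader call for identity and naming services goes further than hashing alone.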
Abstract:
Failures on rolling element bearings usually originate from cracks that are detectable even in their early stage of propagation by properly analyzing vibration signals measured in the proximity of the bearing. Due to micro-slipping in the roller-races contact, damage-induced vibration signals belong to the family of quasi-periodic signals with a strong second-order cyclostationary component. Cyclic coherence and its integrated form are widely considered the most suitable tools for bearing fault diagnostics, and their theoretical bases have already been consolidated. This paper presents how to correctly set the parameters of the cyclostationary analysis tool to be implemented in an automatable algorithm. In the first part of the paper, some general guidelines are provided for this specific application. These considerations are then verified by applying cyclostationary tools to data collected in an experimental campaign on a specific test-rig.
Abstract:
This article analyses ‘performance government’ as an emergent form of rule in advanced liberal democracies. It discloses how teachers and school leaders in Australia are being governed by the practices of performance government which centre on the recently established Australian Institute for Teaching and School Leadership (AITSL) and are given direction by two major strategies implicit within the exercise of this form of power: activation and regulation. Through an ‘analytics of government’ of these practices, the article unravels the new configurations of corporatized expert and academic knowledge—and their attendant methods of application—by which the self-governing capacities of teachers and school leaders are being activated and regulated in ways that seek to optimize the performance of these professionals. The article concludes by outlining some of the dangers of performance government for the professional freedom of educators and school leaders.
Abstract:
This tutorial primarily focuses on the technical challenges surrounding the design and implementation of Accountable-eHealth (AeH) systems. The potential benefits of shared eHealth records systems are promising for the future of improved healthcare; however, their uptake is hindered by concerns over the privacy and security of patient information. In the current eHealth environment, there are competing requirements between healthcare consumers (i.e. patients) and healthcare professionals (HCPs). While consumers want control over their information, healthcare professionals want access to as much information as required in order to make well-informed decisions. This conflict is evident in the review of Australia's PCEHR system. Accountable-eHealth systems aim to balance these concerns by implementing Information Accountability (IA) mechanisms. AeH systems create an eHealth environment where health information is available to the right person at the right time without rigid barriers, whilst empowering the consumers with information control and transparency, thus enabling the creation of shared eHealth records that can be useful to both patients and HCPs. In this half-day tutorial, we will discuss and describe the technical challenges surrounding the implementation of AeH systems and the solutions we have devised. A prototype AeH system will be used to demonstrate the functionality of AeH systems and illustrate some of the proposed solutions. The topics that will be covered include: designing for usability in AeH systems, the privacy and security of audit mechanisms, providing for diversity of users, the scalability of AeH systems, and finally the challenges of enabling research and Big Data analytics on shared eHealth records while ensuring accountability and privacy are maintained.
Abstract:
Building healthcare resilience is an important step towards creating more resilient communities that can better cope with future disasters. To date, however, there appears to be little literature on how the concept of healthcare resilience should be defined and operationalized within a conceptual framework. This article aims to build a comprehensive healthcare disaster management approach guided by the concept of resilience. Methods: Google and major health electronic databases were searched to retrieve critical relevant publications. A total of 61 related publications were included, to provide a comprehensive overview of theories and definitions relevant to disaster resilience. Results and Discussion: Resilience is an inherent and adaptive capacity to cope with future uncertainty, through multiple strategies with all-hazards approaches, in an attempt to achieve a positive outcome with linkage and cooperation. Healthcare resilience can be defined as the capability of healthcare organisations to resist, absorb, and respond to the shock of disasters while maintaining the most essential functions, then recover to their original state or adapt to a new state. It can be assessed by criteria, namely robustness, redundancy and resourcefulness, and a complex of key dimensions, namely vulnerability and safety, disaster resources and preparedness, continuity of essential health services, and recovery and adaptation. Conclusions: This new concept places healthcare organisations’ disaster capabilities, management tasks, activities and disaster outcomes together into a comprehensive whole view, using an integrated approach and establishing achievable goals.