991 results for digitization, statistics, Google Analytics
Abstract:
The South Australian Supreme Court this week found that Google is legally responsible when its search results link to defamatory content on the web. In this long-running case, Dr Janice Duffy has been trying for more than six years to clear her name and remove links to defamatory material that appear when people search for her on Google. The main culprit is the US-based website Ripoff Reports, where people have posted negative reviews of Dr Duffy. Under United States law, defamation is very hard to prove, and US websites are not liable for comments made by their users. Since it was not possible to get harmful or abusive comments removed at the source, Dr Duffy instead asked Google to remove the links from its search results. Google removed some of these links, but only from its Australian domain (google.com.au), and it left many of them active. This latest court decision is a big win for Dr Duffy. The court found that once Google was alerted to the defamatory material, it was under an obligation to censor its search results and prevent further harm to Dr Duffy's reputation.
Abstract:
Many statistical forecast systems are available to interested users. To be useful for decision-making, these systems must be based on evidence of underlying mechanisms. Once causal connections between the mechanism and its statistical manifestation have been firmly established, the forecasts must also provide some quantitative evidence of 'quality'. However, the quality of statistical climate forecast systems (forecast quality) is an ill-defined and frequently misunderstood property. Often, providers and users of such forecast systems are unclear about what 'quality' entails and how to measure it, leading to confusion and misinformation. Here we present a generic framework to quantify aspects of forecast quality using an inferential approach to calculate nominal significance levels (p-values), which can be obtained either by directly applying non-parametric statistical tests such as Kruskal-Wallis (KW) or Kolmogorov-Smirnov (KS) or by using Monte Carlo methods (in the case of forecast skill scores). Once converted to p-values, these forecast quality measures provide a means to objectively evaluate and compare temporal and spatial patterns of forecast quality across datasets and forecast systems. Our analysis demonstrates the importance of providing p-values rather than adopting some arbitrarily chosen significance level such as p < 0.05 or p < 0.01, which is still common practice. This is illustrated by applying non-parametric tests (KW and KS) and skill scoring methods (LEPS and RPSS) to the 5-phase Southern Oscillation Index classification system using historical rainfall data from Australia, the Republic of South Africa and India. The selection of quality measures is based solely on their common use and does not constitute endorsement. We found that non-parametric statistical tests can be adequate proxies for skill measures such as LEPS or RPSS. The framework can be implemented anywhere, regardless of dataset, forecast system or quality measure. Eventually such inferential evidence should be complemented by descriptive statistical methods in order to fully assist in operational risk management.
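The inferential machinery this abstract describes is straightforward to sketch. Below is a minimal illustration, assuming rainfall observations split into two hypothetical forecast phases; the array names, distributions and the placeholder skill measure are all assumptions for illustration, not the authors' data or scoring methods. The non-parametric tests yield p-values directly, and a Monte Carlo permutation gives a p-value for an arbitrary skill score.

```python
import numpy as np
from scipy.stats import kruskal, ks_2samp

rng = np.random.default_rng(42)

# Hypothetical monthly rainfall samples (mm) for two forecast phases.
phase_a = rng.gamma(shape=2.0, scale=30.0, size=200)
phase_b = rng.gamma(shape=2.2, scale=32.0, size=200)

# Non-parametric tests return p-values directly.
kw_stat, kw_p = kruskal(phase_a, phase_b)
ks_stat, ks_p = ks_2samp(phase_a, phase_b)

# Monte Carlo p-value for a skill score: compare the observed score
# against a null distribution obtained by shuffling phase labels.
def score(a, b):
    # Placeholder "skill" measure, not LEPS or RPSS.
    return abs(np.median(a) - np.median(b))

observed = score(phase_a, phase_b)
pooled = np.concatenate([phase_a, phase_b])
null_scores = []
for _ in range(10_000):
    rng.shuffle(pooled)
    null_scores.append(score(pooled[:200], pooled[200:]))
mc_p = np.mean(np.array(null_scores) >= observed)

print(f"KW p={kw_p:.4f}  KS p={ks_p:.4f}  Monte Carlo p={mc_p:.4f}")
```

Reporting the p-values themselves, as the abstract argues, lets users apply their own evidence thresholds instead of inheriting an arbitrary cut-off.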
Abstract:
Climate variability and change are risk factors for climate-sensitive activities such as agriculture. Managing these risks requires "climate knowledge", i.e. a sound understanding of the causes and consequences of climate variability and knowledge of potential management options that are suitable in light of the climatic risks posed. Often such information about prognostic variables (e.g. yield, rainfall, run-off) is provided in probabilistic terms (e.g. via cumulative distribution functions, CDFs), whereby the quantitative assessment of alternative management options is based on such CDFs. Sound statistical approaches are needed to assess whether differences between such CDFs are intrinsic features of system dynamics or chance events (i.e. to quantify the evidence against an appropriate null hypothesis). Statistical procedures that rely on such a hypothesis-testing framework are referred to as "inferential statistics", in contrast to descriptive statistics (e.g. mean, median, variance of population samples, skill scores). Here we report on extensions of some existing inferential techniques that provide more relevant and adequate information for decision making under uncertainty.
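As a purely illustrative sketch of comparing two such CDFs inferentially, the following permutation test uses the maximum vertical distance between two empirical CDFs as its test statistic. The yield distributions and sample sizes are hypothetical, and this is a generic technique rather than the authors' specific extension.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated yields (t/ha) under two management options.
option_a = rng.normal(loc=3.1, scale=0.6, size=150)
option_b = rng.normal(loc=3.3, scale=0.6, size=150)

def ks_distance(a, b):
    """Maximum vertical distance between two empirical CDFs."""
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

observed = ks_distance(option_a, option_b)

# Permutation test: under the null hypothesis the option labels are
# exchangeable, so shuffling them samples the null distribution.
pooled = np.concatenate([option_a, option_b])
exceed = 0
n_iter = 5000
for _ in range(n_iter):
    rng.shuffle(pooled)
    if ks_distance(pooled[:150], pooled[150:]) >= observed:
        exceed += 1

print(f"KS distance={observed:.3f}, permutation p={exceed / n_iter:.4f}")
```

A small p-value here would suggest the difference between the two CDFs reflects system dynamics rather than sampling noise.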
Abstract:
The National Health Interview Survey - Disability supplement (NHIS-D) provides information that can be used to understand myriad topics related to health and disability. The survey provides comprehensive information on multiple disability conceptualizations that can be identified using information about health conditions (both physical and mental), activity limitations, and service receipt (e.g. SSI, SSDI, Vocational Rehabilitation). This provides flexibility for researchers in defining populations of interest. This paper provides a description of the data available in the NHIS-D and information on how the data can be used to better understand the lives of people with disabilities.
Abstract:
In 2008, a collaborative partnership between Google and academia launched the Google Online Marketing Challenge (hereinafter Google Challenge), perhaps the world's largest in-class competition for higher education students. In just two years, almost 20,000 students from 58 countries participated in the Google Challenge. The Challenge gives undergraduate and graduate students hands-on experience with the world's fastest-growing advertising mechanism, search engine advertising. Funded by Google, students develop an advertising campaign for a small to medium-sized enterprise and manage the campaign over three consecutive weeks using the Google AdWords platform. This article explores the Challenge as an innovative pedagogical tool for marketing educators. Based on the experiences of three instructors in Australia, Canada and the United States, this case study discusses the opportunities and challenges of integrating this dynamic problem-based learning approach into the classroom.
Abstract:
Management of the commercial harvest of kangaroos relies on quotas set annually as a proportion of regular estimates of population size. Surveys to generate these estimates are expensive and, in the larger states, logistically difficult; a cheaper alternative is desirable. Rainfall is a disappointingly poor predictor of kangaroo rate of increase in many areas, but harvest statistics (sex ratio, carcass weight, skin size and animals shot per unit time) potentially offer cost-effective indirect monitoring of population abundance (and therefore trend) and status (i.e. under- or overharvest). Furthermore, because harvest data are collected continuously and throughout the harvested areas, they offer the promise of more intensive and more representative coverage of harvest areas than aerial surveys do. To be useful, harvest statistics would need to have a close and known relationship with either population size or harvest rate. We assessed this using long-term (11-22 years) data for three kangaroo species (Macropus rufus, M. giganteus and M. fuliginosus) and common wallaroos (M. robustus) across South Australia, New South Wales and Queensland. Regional variation in kangaroo body size, population composition, shooter efficiency and selectivity required separate analyses in different regions. Two approaches were taken. First, monthly harvest statistics were modelled as a function of a number of explanatory variables, including kangaroo density, harvest rate and rainfall. Second, density and harvest rate were modelled as a function of harvest statistics. Both approaches incorporated a correlated error structure. Many but not all regions had relationships with sufficient precision to be useful for indirect monitoring. However, there was no single relationship that could be applied across an entire state or across species. Combined with rainfall-driven population models and applied at a regional level, these relationships could be used to reduce the frequency of aerial surveys without compromising decisions about harvest management.
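A minimal sketch of the first modelling approach, under assumptions of ours rather than the paper's (hypothetical monthly series, an AR(1) error structure and made-up coefficients): regress a harvest statistic on kangaroo density and rainfall while allowing for correlated errors, here via statsmodels' GLSAR.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120  # ten years of hypothetical monthly records

# Hypothetical explanatory variables.
density = rng.uniform(5.0, 25.0, size=n)    # kangaroos per km^2
rainfall = rng.gamma(2.0, 20.0, size=n)     # mm per month

# Hypothetical response (mean carcass weight, kg) with AR(1) noise,
# mimicking the correlated error structure the abstract mentions.
noise = np.zeros(n)
for t in range(1, n):
    noise[t] = 0.6 * noise[t - 1] + rng.normal(0.0, 1.0)
carcass_weight = 20.0 + 0.3 * density + 0.02 * rainfall + noise

X = sm.add_constant(np.column_stack([density, rainfall]))

# GLSAR fits a regression with AR(p) errors; rho=1 requests an AR(1)
# term whose coefficient is re-estimated at each iteration.
model = sm.GLSAR(carcass_weight, X, rho=1)
results = model.iterative_fit(maxiter=10)
print(results.params)
```

The second approach would simply swap the roles, with density or harvest rate as the response and harvest statistics as predictors.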
Abstract:
The simultaneous state and parameter estimation problem for a linear discrete-time system with unknown noise statistics is treated as a large-scale optimization problem. The a posteriori probability density function is maximized directly with respect to the states and parameters subject to the constraint of the system dynamics. The resulting optimization problem is too large for any of the standard non-linear programming techniques, and hence a hierarchical optimization approach is proposed. It turns out that the states can be computed at the first level for given noise and system parameters. These, in turn, are to be modified at the second level. The states are computed from a large system of linear equations, and two solution methods are considered for solving these equations, limiting the horizon to a suitable length. The resulting algorithm is a filter-smoother, suitable for off-line as well as on-line state estimation for given noise and system parameters. The second-level problem is split into two: one for modifying the noise statistics and the other for modifying the system parameters. An adaptive relaxation technique is proposed for modifying the noise statistics, and a modified Gauss-Newton technique is used to adjust the system parameters.
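The first-level computation of states for given noise and system parameters is, in spirit, a linear filtering problem. The sketch below is a generic Kalman filter in numpy rather than the paper's hierarchical filter-smoother; the system matrices and the toy scalar example are assumptions for illustration.

```python
import numpy as np

def kalman_filter(y, A, C, Q, R, x0, P0):
    """Kalman filter for x[t+1] = A x[t] + w,  y[t] = C x[t] + v,
    with w ~ N(0, Q), v ~ N(0, R).  Returns the filtered state means."""
    x, P = x0, P0
    states = []
    for yt in y:
        # Predict one step ahead.
        x = A @ x
        P = A @ P @ A.T + Q
        # Update with the current measurement.
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ (yt - C @ x)
        P = P - K @ C @ P
        states.append(x)
    return np.array(states)

# Toy scalar example: noisy observations of a slowly drifting state.
rng = np.random.default_rng(3)
A = np.array([[1.0]]); C = np.array([[1.0]])
Q = np.array([[0.01]]); R = np.array([[1.0]])
truth = np.cumsum(rng.normal(0.0, 0.1, size=100))
y = truth.reshape(-1, 1) + rng.normal(0.0, 1.0, size=(100, 1))
filtered = kalman_filter(y, A, C, Q, R, x0=np.zeros(1), P0=np.eye(1))
```

In the paper's scheme, a second level would then adjust Q, R and the system parameters and re-run this first-level computation.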
Abstract:
A very general and numerically quite robust algorithm has been proposed by Sastry and Gauvrit (1980) for system identification. The present paper takes it up and examines its performance on a real test example. The example considered is the lateral dynamics of an aircraft. This is used as a vehicle for demonstrating the performance of various aspects of the algorithm in several possible modes.
Abstract:
Efforts to combine quantum theory with general relativity have been extensive and marked by several successes. One field where progress has lately been made is the study of noncommutative quantum field theories, which arise as a low-energy limit in certain string theories. The idea of noncommutativity comes naturally when combining these two extremes and has profound implications for results widely accepted in traditional, commutative, theories. In this work I review the status of one of the most important connections in physics, the spin-statistics relation. The relation is deeply ingrained in our reality in that it gives us the structure of the periodic table and is of crucial importance for the stability of all matter. The dramatic effects of noncommutativity of space-time coordinates, mainly the loss of Lorentz invariance, call the spin-statistics relation into question. The spin-statistics theorem is first presented in its traditional setting, with a clarifying proof starting from minimal requirements. Next the notion of noncommutativity is introduced and its implications studied. The discussion is essentially based on twisted Poincaré symmetry, the space-time symmetry of noncommutative quantum field theory. The controversial issue of microcausality in noncommutative quantum field theory is settled by showing for the first time that the light-wedge microcausality condition is compatible with twisted Poincaré symmetry. The spin-statistics relation is considered both from the point of view of braided statistics and in the traditional Lagrangian formulation of Pauli, with the conclusion that Pauli's age-old theorem withstands even this test, dramatic as it is for the whole structure of space-time.
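For concreteness, the noncommutativity of space-time coordinates discussed above is conventionally encoded in a constant commutator for the coordinate operators, with products of fields deformed by the Moyal star product:

```latex
% Coordinate noncommutativity and the Moyal star product
\begin{align*}
  [\hat{x}^{\mu}, \hat{x}^{\nu}] &= i\,\theta^{\mu\nu},
    \qquad \theta^{\mu\nu} = -\theta^{\nu\mu}\ \text{constant},\\
  (f \star g)(x) &= f(x)\,
    \exp\!\left(\frac{i}{2}\,\overleftarrow{\partial}_{\mu}\,
    \theta^{\mu\nu}\,\overrightarrow{\partial}_{\nu}\right) g(x)
    = f g + \frac{i}{2}\,\theta^{\mu\nu}\,(\partial_{\mu} f)(\partial_{\nu} g)
    + \mathcal{O}(\theta^{2}).
\end{align*}
```

Twisting the Poincaré algebra so that its action is compatible with this star product is what allows the symmetry, and with it questions such as spin-statistics and microcausality, to be posed in the noncommutative setting.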
Abstract:
Bringing a social interaction approach to children's geographies to investigate how children accomplish place in their everyday lives, we draw on ethnomethodological and conversation analytic approaches that recognize children's competence to manipulate their social and digital worlds. An investigation of preschool-aged children engaged with Google Earth™ shows how they both claimed and displayed technological understandings and practices, such as maneuvering the mouse and screen, and referenced place through relationships with local landmarks and familiar settings such as their school. At times, the children's competing agendas required orientation to each other's ideas and shared negotiation to come to resolution. A focus on children's use of digital technologies as they make meaning of the world around them makes possible new understandings of place within the geographies of childhood and education.
Abstract:
An ongoing challenge for Learning Analytics research has been the scalable derivation of user interaction data from multiple technologies. The complexities associated with this challenge are increasing as educators embrace an ever-growing number of social and content-related technologies. The Experience API (xAPI), alongside the development of user-specific record stores, has been touted as a means to address this challenge, but a number of subtle considerations must be made when using xAPI in Learning Analytics. This paper provides a general overview of the complexities and challenges of using xAPI in a general systemic analytics solution, the Connected Learning Analytics (CLA) toolkit. The importance of design is emphasised, as is the notion of common vocabularies and xAPI Recipes. Early decisions about vocabularies and structural relationships between statements can serve to either facilitate or handicap later analytics solutions. The CLA toolkit case study provides us with a way of examining both the strengths and the weaknesses of the current xAPI specification, and we conclude with a proposal for how xAPI might be improved by using JSON-LD to formalise Recipes in a machine-readable form.
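To make the vocabulary and recipe considerations concrete, here is a minimal xAPI statement shaped as a Python dict. The learner, activity IDs and recipe URI are hypothetical; the actor/verb/object structure and the ADL verb ID follow the xAPI specification.

```python
# A minimal xAPI statement: actor, verb, object, plus context.
# The learner, activity IDs and recipe URI below are hypothetical.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Example Learner",
        "mbox": "mailto:learner@example.edu",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/commented",
        "display": {"en-US": "commented"},
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.edu/forum/thread/42",
        "definition": {"name": {"en-US": "Discussion thread"}},
    },
    "context": {
        # Agreeing on a recipe/vocabulary URI up front is what lets a
        # later analytics layer group statements from different tools.
        "extensions": {
            "http://example.edu/recipes/discussion-post": "1.0",
        },
    },
}
```

The paper's point about early design decisions is visible here: if two tools emit different verbs or object structures for the same learning interaction, downstream analytics must reconcile them after the fact.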
Abstract:
This demonstration introduces the Connected Learning Analytics (CLA) Toolkit. The CLA toolkit harvests data about student participation in specified learning activities across standard social media environments, and presents information about the nature and quality of the learning interactions.
Abstract:
This thesis presents novel modelling applications for environmental geospatial data using remote sensing, GIS and statistical modelling techniques. The work falls into four main themes: (i) developing advanced geospatial databases. Paper (I) demonstrates the creation of a geospatial database for the Glanville fritillary butterfly (Melitaea cinxia) in the Åland Islands, south-western Finland; (ii) analysing species diversity and distribution using GIS techniques. Paper (II) presents a diversity and geographical distribution analysis for Scopulini moths at a world-wide scale; (iii) studying spatiotemporal forest cover change. Paper (III) presents a study of exotic and indigenous tree cover change detection in the Taita Hills, Kenya, using airborne imagery and GIS analysis techniques; (iv) exploring predictive modelling techniques using geospatial data. In Paper (IV) human population occurrence and abundance in the Taita Hills highlands were predicted using the generalized additive modelling (GAM) technique. Paper (V) presents techniques to enhance fire prediction and burned area estimation at a regional scale in East Caprivi, Namibia. Paper (VI) compares eight state-of-the-art predictive modelling methods to improve fire prediction, burned area estimation and fire risk mapping in East Caprivi, Namibia. The results in Paper (I) showed that geospatial data can be managed effectively using advanced relational database management systems. Metapopulation data for the Melitaea cinxia butterfly were successfully combined with GPS-delimited habitat patch information and climatic data. Using the geospatial database, spatial analyses were successfully conducted at the habitat patch level or at coarser analysis scales. Moreover, this study showed that, at a large scale, spatially correlated weather conditions are one of the primary causes of spatially correlated changes in Melitaea cinxia population sizes. In Paper (II) the description, diversity and distribution of Scopulini moths were analysed at a world-wide scale, and GIS techniques were used for the first time for Scopulini moth geographical distribution analysis. This study revealed that Scopulini moths have a cosmopolitan distribution. The majority of the species have been described from the low latitudes, with sub-Saharan Africa being the hot spot of species diversity; however, the taxonomical effort has been uneven among biogeographical regions. Paper (III) showed that forest cover change can be analysed in great detail using modern airborne imagery techniques and historical aerial photographs. However, when spatiotemporal forest cover change is studied, care has to be taken in co-registration and image interpretation when historical black-and-white aerial photography is used. In Paper (IV) human population distribution and abundance could be modelled with fairly good results using geospatial predictors and non-Gaussian predictive modelling techniques. Moreover, a land cover layer is not necessarily needed as a predictor, because first- and second-order image texture measurements derived from satellite imagery had more power to explain the variation in dwelling unit occurrence and abundance. Paper (V) showed that the generalized linear model (GLM) is a suitable technique for fire occurrence prediction and for burned area estimation. GLM-based burned area estimations were found to be superior to the existing MODIS burned area product (MCD45A1). However, spatial autocorrelation of fires has to be taken into account when using the GLM technique for fire occurrence prediction. Paper (VI) showed that novel statistical predictive modelling techniques can be used to improve fire prediction, burned area estimation and fire risk mapping at a regional scale. However, some noticeable variation between the different predictive modelling techniques for fire occurrence prediction and burned area estimation existed.
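As a sketch of the GLM approach to fire occurrence prediction, a binomial GLM in statsmodels looks like the following. This is a generic illustration with hypothetical predictors and simulated data, not the thesis's fitted models, and it omits the spatial autocorrelation handling the abstract flags as important.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 500  # hypothetical grid cells

# Hypothetical geospatial predictors per cell.
ndvi = rng.uniform(0.1, 0.8, size=n)        # vegetation greenness
dist_road = rng.exponential(5.0, size=n)    # km to nearest road
rain_prev = rng.gamma(2.0, 25.0, size=n)    # previous-season rainfall (mm)

# Hypothetical binary response: did the cell burn this season?
linpred = 1.5 - 3.0 * ndvi - 0.1 * dist_road - 0.01 * rain_prev
burned = rng.binomial(1, 1.0 / (1.0 + np.exp(-linpred)))

# Binomial GLM (logistic link by default) for fire occurrence.
X = sm.add_constant(np.column_stack([ndvi, dist_road, rain_prev]))
model = sm.GLM(burned, X, family=sm.families.Binomial())
results = model.fit()
print(results.summary())

# The fitted probabilities could be mapped back onto the grid to
# produce a fire risk surface for the study region.
```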
Abstract:
- BACKGROUND Chronic diseases are increasing worldwide and have become a significant burden to those affected. Disease-specific education programs have demonstrated improved outcomes, although people forget information quickly or memorize it incorrectly. The teach-back method was introduced in an attempt to reinforce education to patients. To date, the evidence regarding the effectiveness of health education employing the teach-back method in improving care has not been reviewed systematically. - OBJECTIVES This systematic review examined the evidence on using the teach-back method in health education programs for improving adherence and self-management of people with chronic disease. - INCLUSION CRITERIA Types of participants: Adults aged 18 years and over with one or more chronic diseases. Types of intervention: All types of interventions that included the teach-back method in an education program for people with chronic diseases. The comparator was chronic disease education programs that did not involve the teach-back method. Types of studies: Randomized and non-randomized controlled trials, cohort studies, before-after studies and case-control studies. Types of outcomes: The outcomes of interest were adherence, self-management, disease-specific knowledge, readmission, knowledge retention, self-efficacy and quality of life. - SEARCH STRATEGY Searches were conducted in the CINAHL, MEDLINE, EMBASE, Cochrane CENTRAL, Web of Science, ProQuest Nursing and Allied Health Source, and Google Scholar databases. Search terms were combined by AND or OR in search strings. Reference lists of included articles were also searched for further potential references. - METHODOLOGICAL QUALITY Two reviewers conducted quality appraisal of papers using the Joanna Briggs Institute Meta-Analysis of Statistics Assessment and Review Instrument. - DATA EXTRACTION Data were extracted using the Joanna Briggs Institute Meta-Analysis of Statistics Assessment and Review Instrument data extraction instruments. - DATA SYNTHESIS There was significant heterogeneity in the selected studies, hence a meta-analysis was not possible and the results are presented in narrative form. - RESULTS Of the 21 articles retrieved in full, 12 on the use of the teach-back method met the inclusion criteria and were selected for analysis. Four studies confirmed improved disease-specific knowledge in intervention participants. One study showed a statistically significant improvement in adherence to medication and diet among patients with type 2 diabetes in the intervention group compared to the control group (p < 0.001). Two studies found statistically significant improvements in self-efficacy (p = 0.0026 and p < 0.001) in the intervention groups. One study examined quality of life in heart failure patients, but quality of life did not improve with the intervention (p = 0.59). Five studies found a reduction in readmission rates and hospitalization, but these reductions were not always statistically significant. Two studies showed improvement in daily weighing among heart failure participants, and in adherence to diet, exercise and foot care among those with type 2 diabetes. - CONCLUSION Overall, the teach-back method showed positive effects across a wide range of health care outcomes, although these were not always statistically significant. Studies in this systematic review revealed improved outcomes in disease-specific knowledge, adherence, self-efficacy and inhaler technique. A positive but inconsistent trend was also seen in improved self-care and reduced hospital readmission rates. There was limited evidence of improvement in quality of life or disease-related knowledge retention.