56 results for predictive accuracy

in Deakin Research Online - Australia


Relevance: 100.00%

Abstract:

Objectives: To describe an alternate approach for the calculation of sensitivity and specificity when analyzing the accuracy of screening tools, which can be used when standard calculations may be inappropriate. SensitivityER (ER denoting event rate) is the number of events correctly predicted, divided by the total number of events. SpecificityER is the amount of time that study participants are predicted to be event negative, divided by the total amount of participant observed time. Variance estimates for these statistics are constructed by bootstrap resampling, taking into account event dependence.
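The two statistics defined above are straightforward to compute. The sketch below uses made-up patient records (not study data) and a cluster bootstrap that resamples whole patients, so that dependence between a patient's repeated events is preserved as the abstract describes:

```python
import random

# Hypothetical patient records: total observed days, days the screening tool
# flagged the patient "at risk", number of fall events, and how many of those
# events occurred while flagged. All values are illustrative, not study data.
records = [
    {"observed_days": 30, "at_risk_days": 10, "events": 2, "events_flagged": 2},
    {"observed_days": 14, "at_risk_days": 14, "events": 1, "events_flagged": 1},
    {"observed_days": 60, "at_risk_days": 0,  "events": 1, "events_flagged": 0},
    {"observed_days": 45, "at_risk_days": 5,  "events": 0, "events_flagged": 0},
]

def sensitivity_er(recs):
    # Events correctly predicted divided by total events.
    return sum(r["events_flagged"] for r in recs) / sum(r["events"] for r in recs)

def specificity_er(recs):
    # Time predicted event-negative divided by total observed time.
    negative = sum(r["observed_days"] - r["at_risk_days"] for r in recs)
    return negative / sum(r["observed_days"] for r in recs)

def bootstrap_se(recs, stat, n_boot=500, seed=1):
    # Resample whole patients (not individual events) so that dependence
    # between a patient's repeated events is preserved.
    rng = random.Random(seed)
    vals = []
    for _ in range(n_boot):
        sample = [rng.choice(recs) for _ in recs]
        if sum(r["events"] for r in sample) == 0:
            continue  # statistic undefined when no events are drawn
        vals.append(stat(sample))
    mean = sum(vals) / len(vals)
    return (sum((v - mean) ** 2 for v in vals) / (len(vals) - 1)) ** 0.5

print(sensitivity_er(records))            # 3 correctly predicted of 4 events
print(round(specificity_er(records), 3))  # 120 event-negative days of 149
print(round(bootstrap_se(records, sensitivity_er), 3))
```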

Methods: Standard and alternate approaches for calculating sensitivity and specificity were applied to hospital falls risk screening tool data. In this application, the outcome of interest was a recurrent event; there were multiple applications of the screening tool and delays in screening tool completion; and patients' follow-up durations were unequal.

Results: Application of sensitivityER and specificityER to these data not only provided a clearer description of the screening tool's overall accuracy, but also allowed examination of accuracy over time, accuracy in predicting specific event numbers, and evaluation of the added value that screening tool reapplications may have.

Conclusion: SensitivityER and specificityER provide a valuable approach to screening tool evaluation in the clinical setting.

Relevance: 100.00%

Abstract:

Introduction: Fall risk screening tools are frequently used as a part of falls prevention programs in hospitals. Design-related bias in evaluations of tool predictive accuracy could lead to overoptimistic results, which would then contribute to program failure in practice.

Methods: A systematic review was undertaken. Two blinded reviewers classified the methodology of relevant publications using a four-point classification system adapted from multiple sources. The association between study design classification and reported results was examined using linear regression with clustering based on screening tool and robust variance estimates, with point estimates of the Youden Index (= sensitivity + specificity - 1) as the dependent variable. Meta-analysis was then performed, pooling data from prospective studies.
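The Youden Index used as the dependent variable is a one-line calculation; a minimal illustration with hypothetical sensitivity/specificity values and 2x2 screening counts:

```python
def youden_index(sensitivity, specificity):
    # J = sensitivity + specificity - 1: 0 for a chance-level tool,
    # 1 for a perfect one.
    return sensitivity + specificity - 1

def youden_from_counts(tp, fn, tn, fp):
    # The same index computed from a 2x2 table of screening outcomes.
    return tp / (tp + fn) + tn / (tn + fp) - 1

print(round(youden_index(0.72, 0.68), 2))        # -> 0.4
print(round(youden_from_counts(36, 14, 68, 32), 2))
```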

Results: Thirty-five publications met inclusion criteria, containing 51 evaluations of fall risk screening tools. Twenty evaluations were classified as retrospective validation evaluations, 11 as prospective (temporal) validation evaluations, and 20 as prospective (external) validation evaluations. Retrospective evaluations had significantly higher Youden Indices (point estimate [95% confidence interval]: 0.22 [0.11, 0.33]). Pooled Youden Indices from prospective evaluations demonstrated the STRATIFY, Morse Falls Scale, and nursing staff clinical judgment to have comparable accuracy.

Discussion: Practitioners should exercise caution when comparing the validity of fall risk assessment tools where evaluation has been limited to retrospective study designs. Heterogeneity between studies indicates that the Morse Falls Scale and STRATIFY may still be useful in particular settings, but widespread adoption of either is unlikely to generate benefits significantly greater than those of nursing staff clinical judgment.

Relevance: 100.00%

Abstract:

Blood biochemistry attributes form an important class of tests, routinely collected several times per year for many patients with diabetes. The objective of this study is to investigate the role of blood biochemistry in improving the predictive accuracy of the diagnosis of cardiac autonomic neuropathy (CAN) progression. Blood biochemistry contributes to CAN, and so it is a causative factor that can provide additional power for the diagnosis of CAN, especially in the absence of a complete set of Ewing tests. We introduce automated iterative multitier ensembles (AIME) and investigate their performance in comparison to base classifiers and standard ensemble classifiers for blood biochemistry attributes. AIME incorporate diverse ensembles into several tiers simultaneously and combine them into one automatically generated integrated system, so that one ensemble acts as an integral part of another. We carried out extensive experimental analysis using large datasets from the diabetes screening research initiative (DiScRi) project. The results of our experiments show that several blood biochemistry attributes can be used to supplement the Ewing battery for the detection of CAN in situations where one or more of the Ewing tests cannot be completed because of the individual difficulties faced by each patient in performing the tests. The results show that AIME provide higher accuracy as a multitier CAN classification paradigm. The best predictive accuracy, 99.57%, was obtained by the AIME combining DECORATE on the top tier with bagging on the middle tier, based on random forest. Practitioners can use these findings to increase the accuracy of CAN diagnosis.
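AIME itself is not publicly specified beyond the description above, so the following is only a toy sketch of the tiered idea (one majority-vote ensemble serving as a base learner inside another), using invented data and simple threshold stumps rather than DECORATE, bagging on random forests, or the DiScRi datasets:

```python
import random

# Toy data: label is 1 when the feature sum exceeds a threshold.
rng = random.Random(42)
X = [[rng.random() for _ in range(3)] for _ in range(200)]
y = [1 if sum(x) > 1.5 else 0 for x in X]

def train_stump(X, y):
    # Weak learner: best single-feature threshold classifier.
    best = None
    for j in range(len(X[0])):
        for t in [i / 10 for i in range(1, 10)]:
            acc = sum((x[j] > t) == bool(lab) for x, lab in zip(X, y)) / len(y)
            if best is None or acc > best[0]:
                best = (acc, j, t)
    _, j, t = best
    return lambda x: int(x[j] > t)

def bagged_ensemble(X, y, n=7):
    # Middle tier: majority vote over stumps trained on bootstrap resamples.
    models = []
    for _ in range(n):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        models.append(train_stump([X[i] for i in idx], [y[i] for i in idx]))
    return lambda x: int(sum(m(x) for m in models) >= (n + 1) // 2)

def two_tier(X, y, n_outer=5):
    # Top tier: treats each bagged ensemble as a single base learner and
    # combines them by majority vote, so one ensemble acts as an integral
    # part of another, which is the core of the multitier idea.
    outers = [bagged_ensemble(X, y) for _ in range(n_outer)]
    return lambda x: int(sum(m(x) for m in outers) >= (n_outer + 1) // 2)

model = two_tier(X, y)
acc = sum(model(x) == lab for x, lab in zip(X, y)) / len(y)
print(round(acc, 2))  # training accuracy of the two-tier ensemble
```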

Relevance: 60.00%

Abstract:

Objectives: To outline the development, structure, data assumptions, and application of an Australian economic model for stroke (Model of Resource Utilization, Costs, and Outcomes for Stroke [MORUCOS]).

Methods: The model has a linked spreadsheet format with four modules to describe the disease burden and treatment pathways, estimate prevalence-based and incidence-based costs, and derive life expectancy and quality of life consequences. The model uses patient-level, community-based stroke cohort data and macro-level simulations. An interventions module allows options for change to be consistently evaluated by modifying aspects of the other modules. To date, model validation has included sensitivity testing, face validity, and peer review; further validation of technical and predictive accuracy is needed. The generic pathway model was assessed by comparison with a stroke subtypes (ischemic, hemorrhagic, or undetermined) approach and used to determine the relative cost-effectiveness of four interventions.

Results: The generic pathway model produced lower costs than the subtypes version (total average first-year costs per case: AUD$15,117 versus AUD$17,786, respectively). Optimal evidence-based uptake of anticoagulation therapy for primary and secondary stroke prevention and of intravenous thrombolytic therapy within 3 hours of stroke were more cost-effective than current practice (base year, 1997).

Conclusions: MORUCOS is transparent and flexible in describing Australian stroke care and can effectively be used to systematically evaluate a range of different interventions. Adjusting results to account for stroke subtypes, as they influence cost estimates, could enhance the generic model.

Relevance: 60.00%

Abstract:

Objectives: Risk assessments provided to judicial decision makers as a part of the current generation of legislation for protecting the public from sexual offenders can have a profound impact on the rights of individual offenders. This article will identify some of the human rights issues inherent in using the current assessment procedures to formulate and communicate risk as a forensic expert in cases involving civil commitment, preventive detention, extended supervision, or special conditions of parole.

Method: Based on the current professional literature and applied experience in legal proceedings under community protection laws in the United States and New Zealand, potential threats to the rights of offenders are identified. Central to these considerations are issues of the accuracy of current risk assessment measures, communicating the findings of risk assessment appropriately to the court, and the availability of competent forensic mental health professionals to carry out these functions. The role of the forensic expert is discussed in light of the competing demands of protecting individual human rights and community protection.

Conclusion: Actuarial risk assessment represents the best practice for informing judicial decision makers in cases involving sex offenders, yet these measures currently demonstrate substantial limitations in predictive accuracy when applied to individual offenders. These limitations must be clearly articulated when reporting risk assessment findings. Sufficient risk assessment expertise should be available to provide a balanced application of community protection laws.

Relevance: 60.00%

Abstract:

One of the fundamental machine learning tasks is that of predictive classification. Given that organisations collect an ever increasing amount of data, predictive classification methods must be able to effectively and efficiently handle large amounts of data. However, it is understood that present requirements push existing algorithms to, and sometimes beyond, their limits since many classification prediction algorithms were designed when currently common data set sizes were beyond imagination. This has led to a significant amount of research into ways of making classification learning algorithms more effective and efficient. Although substantial progress has been made, a number of key questions have not been answered. This dissertation investigates two of these key questions. The first is whether different types of algorithms to those currently employed are required when using large data sets. This is answered by analysis of the way in which the bias plus variance decomposition of predictive classification error changes as training set size is increased. Experiments find that larger training sets require different types of algorithms to those currently used. Some insight into the characteristics of suitable algorithms is provided, and this may provide some direction for the development of future classification prediction algorithms which are specifically designed for use with large data sets. The second question investigated is that of the role of sampling in machine learning with large data sets. Sampling has long been used as a means of avoiding the need to scale up algorithms to suit the size of the data set by scaling down the size of the data sets to suit the algorithm. However, the costs of performing sampling have not been widely explored. Two popular sampling methods are compared with learning from all available data in terms of predictive accuracy, model complexity, and execution time. 
The comparison shows that sub-sampling generally produces models with accuracy close to, and sometimes greater than, that obtainable from learning with all available data. This result suggests that it may be possible to develop algorithms that take advantage of the sub-sampling methodology to reduce the time required to infer a model while sacrificing little if any accuracy. Methods of improving effective and efficient learning via sampling are also investigated, and new sampling methodologies proposed. These methodologies include using a varying proportion of instances to determine the next inference step and using a statistical calculation at each inference step to determine sufficient sample size. Experiments show that using a statistical calculation of sample size can not only substantially reduce execution time but can do so with only a small loss, and occasional gain, in accuracy. One of the common uses of sampling is in the construction of learning curves. Learning curves are often used to attempt to determine the optimal training size which will maximally reduce execution time while not being detrimental to accuracy. An analysis of the performance of methods for detecting convergence of learning curves is performed, with the focus of the analysis on methods that calculate the gradient of the tangent to the curve. Given that such methods can be susceptible to local accuracy plateaus, an investigation into the frequency of local plateaus is also performed. It is shown that local accuracy plateaus are a common occurrence, and that ensuring a small loss of accuracy often results in greater computational cost than learning from all available data. These results cast doubt over the applicability of gradient-of-tangent methods for detecting convergence, and over the viability of learning curves for reducing execution time in general.
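The progressive-sampling idea investigated here can be sketched as follows: grow the sample geometrically and stop when the learning curve's tangent gradient falls below a tolerance. The learner below is simulated by an assumed accuracy curve (not any algorithm from the thesis), and the noise term illustrates how a local accuracy plateau can trigger premature convergence:

```python
import math
import random

rng = random.Random(7)

def train_and_score(n):
    # Simulated learning curve: accuracy rises toward ~0.90 as the sample
    # grows, with small noise standing in for evaluation variance. A real
    # use would train the actual learner on n instances and score it.
    return 0.90 - 0.35 / math.sqrt(n) + rng.gauss(0, 0.005)

def progressive_sampling(start=100, growth=2, tol=0.002, max_n=100_000):
    # Double the sample each step; stop when the accuracy gain per doubling
    # (an approximation of the tangent gradient) drops below tol. Noise can
    # make this fire early at a local plateau, as the thesis observes.
    sizes, scores = [start], [train_and_score(start)]
    while sizes[-1] * growth <= max_n:
        n = sizes[-1] * growth
        sizes.append(n)
        scores.append(train_and_score(n))
        if scores[-1] - scores[-2] < tol:
            break
    return sizes, scores

sizes, scores = progressive_sampling()
print(sizes[-1], round(scores[-1], 3))  # sample size at convergence, accuracy
```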

Relevance: 60.00%

Abstract:

This thesis is concerned with the development of a funding mechanism, the Student Resource Index, which has been designed to resolve a number of difficulties which emerged following the introduction of integration or inclusion as an alternative means of providing educational support to students with disabilities in the Australian State of Victoria. Prior to 1984, the year in which the major integration or inclusion initiatives were introduced, the great majority of students with disabilities were educated in segregated special schools, however, by 1992 the integration initiatives had been successful in including within regular classes approximately half of the students in receipt of additional educational assistance on the basis of disability. The success of the integration program brought with it a number of administrative and financial problems which were the subject of three government enquiries. Central to these difficulties was the development of a dual system of special education provision. On one hand, additional resources were provided for the students attending segregated special schools by means of weighted student ratios, with one teacher being provided for each six students attending a special school. On the other hand, the requirements of individual students integrated into regular schools were assessed by school-based committees on the basis of their perceived extra educational needs. The major criticism of this dual system of special education funding was that it created inequities in the distribution of resources both between the systems and also within the systems. For example, three students with equivalent needs, one of whom attended a special school and two of whom attended different regular schools could each be funded at substantially differing levels. 
The solution to these inequities of funding was seen to be in the development of a needs-based funding device which encompassed all students in receipt of additional disability-related educational support. The Student Resource Index developed in this thesis is a set of behavioural descriptors designed to assess degree of additional educational need across a number of disability domains. These domains include hearing, vision, communication, health, co-ordination (manual and mobility), intellectual capacity and behaviour. The completed Student Resource Index provides a profile of the students’ needs across all of these domains and as such addresses the multiple nature of many disabling conditions. The Student Resource Index was validated in terms of its capacity to predict the ‘known’ membership, that is, the type of special school that some 1200 students in the sample currently attended. The decision to use the existing special school populations as the criterion against which the Student Resource Index was validated was based on the premise that the differing resource levels of these schools had been historically determined by expert opinion, industrial negotiation and reference to other special education systems as the most reliable estimate of the enrolled students’ needs. When discriminant function analysis was applied to some 178 students attending one school for students with mild intellectual disability and one facility for students with moderate to severe intellectual disability, the Student Resource Index was successful in predicting the student's known school in 92 percent of cases. An analysis of those students (8 percent) for whom the Student Resource Index had failed to predict the known school enrolment revealed that 13 students had, for a variety of reasons, been inappropriately placed in these settings. When these students were removed from the sample, the predictive accuracy of the Student Resource Index was raised to 96 percent of the sample.
By comparison, the domains of the Vineland Adaptive Behaviour Scale accurately predicted known enrolments of 76 percent of the sample. By way of replication, discriminant function analysis was then applied to the Student Resource Index profiles of 518 students attending Day Special Schools (Mild Intellectual Disability) and 287 students attending Special Developmental Schools (Moderate to Severe Intellectual Disability). In this case, the Student Resource Index profiles were successful in predicting the known enrolments of 85 percent of students. When a third group was added, 147 students attending Day Special Schools for students with physical disabilities, the Student Resource Index predicted known enrolments in 80 percent of cases. The addition of a fourth group of 116 students attending Day Special Schools (Hearing Impaired) to the discriminant analysis led to a small reduction in predictive accuracy from 80 percent to 78 percent of the sample. A final analysis which included students attending a School for the Deaf-Blind, a Hospital School and a Social and Behavioural Unit was successful in predicting known enrolments in 71 percent of the 1114 students in the sample. For reasons which are expanded upon within the thesis it was concluded that the Student Resource Index, when used in conjunction with discriminant function analysis, was capable of isolating four distinct groups on the basis of their additional educational needs. If the historically determined and varied funding levels provided to these groups, inherent in the cash equivalent of the staffing ratios of Day Special Schools (Mild Intellectual Disability), Special Development Schools (Moderate to Severe Intellectual Disability), Day Special Schools (Physical Disability) and Day Special Schools (Hearing Impairment), are accepted as reasonable reflections of these students’ needs, these funding levels can be translated into funding bands.
These funding bands can then be applied to students in segregated or inclusive placements. The thesis demonstrates that a new applicant for funding can be introduced into the existing database and, by the use of discriminant function analysis, be allocated to one of the four groups. The analysis is in effect saying that this new student’s profile of educational needs has more in common with Group A than with the members of Groups B, C, or D. The student would then be funded at Group A level. It is immaterial from a funding point of view whether the student decides to attend a segregated or inclusive setting. The thesis then examines the impact of the introduction of Student Resource Index based funding upon the current funding of the special schools in one of the major metropolitan regions. Overall, such an initiative would lead to a reduction of 1.54 percent of the total funding accruing to the region’s special schools.
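The allocation step can be illustrated with a nearest-centroid rule, a simplified stand-in for discriminant function analysis (which additionally weights domains by their discriminating power). The profiles and groups below are invented, not the thesis data:

```python
# Hypothetical Student Resource Index profiles: scores on a few need
# domains (e.g. communication, mobility, intellectual capacity).
groups = {
    "A": [[1, 1, 2], [2, 1, 1], [1, 2, 2]],   # lower-need group
    "B": [[4, 3, 4], [3, 4, 4], [4, 4, 3]],   # mid-need group
    "C": [[6, 6, 5], [5, 6, 6], [6, 5, 6]],   # higher-need group
}

def centroid(profiles):
    # Mean profile of a group, one value per need domain.
    return [sum(col) / len(profiles) for col in zip(*profiles)]

def allocate(profile):
    # Assign the new student to the group whose centroid is closest in
    # squared Euclidean distance: the group their profile has most in
    # common with, which then determines the funding band.
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(profile, c))
    return min(groups, key=lambda g: dist(centroid(groups[g])))

new_student = [3, 4, 3]
print(allocate(new_student))  # the group this profile most resembles
```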

Relevance: 60.00%

Abstract:

This thesis presents Relation Based Modelling as an extension to the Feature Based Modelling approach to student modelling. Relation Based Modelling dynamically creates new terms allowing the instructional designer to specify a set of primitives and operators from which the modelling system will create the necessary elements. Focal modelling is a new technique devised to manipulate and coordinate the addition of new terms. The thesis presents an evaluation of student modelling systems based on predictive accuracy.

Relevance: 60.00%

Abstract:

This article reports findings from a series of empirical studies investigating whether poor release planning might contribute to sex offender recidivism. A coding protocol was developed to measure the comprehensiveness of release planning, which included items relating to accommodation, employment, pro-social support, community-based treatment, and the Good Lives Model (T. Ward & C.A. Stewart, 2003) secondary goods. The protocol was retrospectively applied to groups of recidivist and non-recidivist child molesters, matched on static risk level and time since release. As predicted, overall release planning was significantly poorer for recidivists compared to non-recidivists. The accommodation, employment, and social support items combined to best predict recidivism, with predictive accuracy comparable to that obtained using static risk models. Results highlighted the importance of release planning in efforts to reduce sex offender recidivism. Implications for policy makers and community members are briefly discussed.

Relevance: 60.00%

Abstract:

Purpose – The purpose of this paper is to put forward an innovative approach for reducing the variation between Type I and Type II errors in the context of ratio-based modeling of corporate collapse, without compromising the accuracy of the predictive model. Its contribution to the literature lies in resolving the problematic trade-off between predictive accuracy and variations between the two types of errors.

Design/methodology/approach – The methodological approach in this paper – called MCCCRA – utilizes a novel multi-classification matrix based on a combination of correlation and regression analysis, with the former being subject to optimisation criteria. In order to ascertain its accuracy in signaling collapse, MCCCRA is empirically tested against multiple discriminant analysis (MDA).

Findings –
Based on a data sample of 899 US publicly listed companies, the empirical results indicate that in addition to a high level of accuracy in signaling collapse, MCCCRA generates lower variability between Type I and Type II errors when compared to MDA.

Originality/value –
Although correlation and regression analysis are long-standing statistical tools, the optimisation constraints that are applied to the correlations are unique. Moreover, the multi-classification matrix is a first in signaling collapse. By providing economic insight into more stable financial modeling, these innovations make an original contribution to the literature.
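The trade-off the paper addresses can be made concrete by computing the two error rates and the variation between them from a set of hypothetical classifications (note that which error is labelled "Type I" varies between papers in this literature):

```python
def error_rates(y_true, y_pred):
    # In the corporate-collapse literature a Type I error is usually a
    # collapsed firm classified as healthy, and a Type II error a healthy
    # firm classified as collapsed (conventions vary between papers).
    collapsed = [p for t, p in zip(y_true, y_pred) if t == 1]
    healthy = [p for t, p in zip(y_true, y_pred) if t == 0]
    type1 = sum(p == 0 for p in collapsed) / len(collapsed)
    type2 = sum(p == 1 for p in healthy) / len(healthy)
    return type1, type2

# 1 = collapsed, 0 = healthy; the labels below are invented.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1]

t1, t2 = error_rates(y_true, y_pred)
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
variation = abs(t1 - t2)  # the quantity MCCCRA is designed to shrink
print(t1, round(t2, 3), accuracy, round(variation, 3))
```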

Relevance: 60.00%

Abstract:

Research question: A major barrier to retaining existing customers is the difficulty in knowing who is most at risk of leaving (or ‘churning’). Given the strategic and financial importance of season ticket holders (STH) to professional sport teams, this paper examines the effectiveness of a range of variables in identifying the STH who are most likely to churn.
Research methods: A longitudinal field study was undertaken to reflect actual conditions. Survey data were collected from a professional sport team's STH prior to the conclusion of the season. Actual renewal data were then tracked from team records the following season. This work was replicated across five professional sport teams from the Australian Football League, with renewal predictions made and tracked for over 10,000 STH.
Results and findings: The results suggest that the ‘Juster’ Scale – a simple, one-item purchase probability measure – is an effective identifier of those most at risk of churning, more than 3 months in advance. When combined with ticket utilization and tenure measures, predictive accuracy improves markedly, to the point where these three measures can be used to provide an effective early warning system for managers.
Implications: Whilst there is a tendency to view STH as highly loyal, these data reinforce the importance of actively managing all customers to reduce churn. Despite their commitment, STH do churn, but those most likely to can be predicted by examining their patterns of behaviour in the current season. Efforts to retain STH need to shift their focus from transactional value assessments.
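A sketch of how the three measures might combine into an early-warning score. The logistic weights and holder records below are invented for illustration, not estimated from the study's data:

```python
import math

# Hypothetical STH records: Juster score (0-10 stated renewal probability),
# proportion of home games attended, and years as a season ticket holder.
holders = [
    {"id": 1, "juster": 9, "utilization": 0.9, "tenure": 12},
    {"id": 2, "juster": 3, "utilization": 0.2, "tenure": 1},
    {"id": 3, "juster": 6, "utilization": 0.5, "tenure": 4},
    {"id": 4, "juster": 2, "utilization": 0.6, "tenure": 2},
]

def churn_risk(h, w=(-0.6, -2.0, -0.15), b=4.0):
    # Illustrative logistic score of churn probability: higher Juster score,
    # utilization, and tenure all lower the risk. The weights are made up.
    z = b + w[0] * h["juster"] + w[1] * h["utilization"] + w[2] * h["tenure"]
    return 1 / (1 + math.exp(-z))

# Rank holders most-at-risk first, as an early warning list for managers.
at_risk = sorted(holders, key=churn_risk, reverse=True)
print([h["id"] for h in at_risk])
```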

Relevance: 60.00%

Abstract:

The superior characteristics of high photon flux and diffraction-limited spatial resolution achieved by synchrotron-FTIR microspectroscopy allowed molecular characterization of individual live thraustochytrids. Principal component analysis revealed distinct separation of the single live cell spectra into their corresponding strains, comprised of new Australasian thraustochytrids (AMCQS5-5 and S7) and standard cultures (AH-2 and S31). Unsupervised hierarchical cluster analysis (UHCA) indicated close similarities between S7 and AH-7 strains, with AMCQS5-5 being distinctly different. UHCA correlation conformed well to the fatty acid profiles, indicating the type of fatty acids as a critical factor in chemotaxonomic discrimination of these thraustochytrids and also revealing the distinctively high polyunsaturated fatty acid content as key identity of AMCQS5-5. Partial least squares discriminant analysis using cross-validation approach between two replicate datasets was demonstrated to be a powerful classification method leading to models of high robustness and 100% predictive accuracy for strain identification. The results emphasized the exceptional S-FTIR capability to perform real-time in vivo measurement of single live cells directly within their original medium, providing unique information on cell variability among the population of each isolate and evidence of spontaneous lipid peroxidation that could lead to deeper understanding of lipid production and oxidation in thraustochytrids for single-cell oil development.

Relevance: 60.00%

Abstract:

This book offers a way to predict which brand a buyer will purchase. It looks at brand performance within a product category and tests it in different countries with very different cultures. Following the Predictive Brand Choice (PBC) model, this book seeks to predict a consumer’s loyalty and choice. Results have shown that PBC can achieve a high level of predictive accuracy, in excess of 70% in mature markets. This accuracy holds even in the face of price competition from a less preferred brand. PBC uses a prospective predicting method which does not have to rely on a brand’s past performance or a customer’s purchase history for prediction. Choice data is gathered in the retail setting – at the point of sale. The Strategy of Global Branding and Brand Equity presents survey data and quantitative analyses that prove the method described to be practical, useful and implementable for both researchers and practitioners of commercial brand strategies.

Relevance: 60.00%

Abstract:

 Aim: The purpose of this study was to create predictive species distribution models (SDMs) for temperate reef-associated fish species densities and fish assemblage diversity and richness to aid in marine conservation and spatial planning. Location: California, USA. Methods: Using generalized additive models, we associated fish species densities and assemblage characteristics with seafloor structure, giant kelp biomass and wave climate and used these associations to predict the distribution and assemblage structure across the study area. We tested the accuracy of these predicted extrapolations using an independent data set. The SDMs were also used to estimate larger scale abundances to compare with other estimates of species abundance (uniform density extrapolation over rocky reef and density extrapolations taking into account variations in geomorphic structure). Results: The SDMs successfully modelled the species-habitat relationships of seven rocky reef-associated fish species and showed that species' densities differed in their relationships with environmental variables. The predictive accuracy of the SDMs ranged from 0.26 to 0.60 (Pearson's r correlation between observed and predicted density values). The SDMs created for the fish assemblage-level variables had higher prediction accuracies with Pearson's r values of 0.61 for diversity and 0.71 for richness. The comparisons of the different methods for extrapolating species densities over a single marine protected area varied greatly in their abundance estimates with the uniform extrapolation (density values extrapolated evenly over the rocky reef) always estimating much greater abundances. The other two methods, which took into account variation in the geomorphic structure of the reef, provided much lower abundance estimates. 
Main conclusions: Species distribution models that combine geomorphic, oceanographic and biogenic habitat variables can reliably predict spatial patterns of species density and assemblage attributes of temperate reef fishes at spatial scales of 50 m. Thus, SDMs show great promise for informing spatial and ecosystem-based approaches to conservation and fisheries management.
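The predictive-accuracy metric reported here (Pearson's r between observed and predicted densities) can be computed directly; the validation-site densities below are invented:

```python
import math

def pearson_r(obs, pred):
    # Pearson correlation: covariance of the two series divided by the
    # product of their standard deviations.
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    return cov / (so * sp)

# Hypothetical observed vs SDM-predicted fish densities at validation sites.
observed = [0.2, 1.1, 0.6, 2.3, 0.9, 1.8]
predicted = [0.4, 0.9, 0.8, 1.9, 1.0, 1.5]

r = pearson_r(observed, predicted)
print(round(r, 2))  # closer to 1.0 means better predictive accuracy
```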