982 results for predictive accuracy


Relevance: 60.00%

Abstract:

Case-Based Reasoning is a methodology for problem solving based on past experiences. This methodology tries to solve a new problem by retrieving and adapting previously known solutions of similar problems. However, retrieved solutions, in general, require adaptations in order to be applied to new contexts. One of the major challenges in Case-Based Reasoning is the development of an efficient methodology for case adaptation. The most widely used form of adaptation employs hand-coded adaptation rules, which demands a significant knowledge acquisition and engineering effort. An alternative for overcoming the difficulties associated with acquiring case adaptation knowledge has been the use of hybrid approaches and automatic learning algorithms to acquire that knowledge. We investigate the use of hybrid approaches for case adaptation employing Machine Learning algorithms. The investigated approaches automatically learn adaptation knowledge from a case base and apply it to adapt retrieved solutions. In order to verify the potential of the proposed approaches, they are experimentally compared with individual Machine Learning techniques. The results obtained indicate that these approaches are an efficient means of acquiring case adaptation knowledge. They show that the combination of Instance-Based Learning and Inductive Learning paradigms and the use of a data set of adaptation patterns yield adaptations of the retrieved solutions with high predictive accuracy.
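A minimal sketch of this kind of hybrid adaptation, with entirely hypothetical numeric cases (the thesis's own representations and algorithms are not reproduced here): an instance-based step retrieves the nearest case, and an inductive step, a decision tree trained on adaptation patterns (differences between pairs of problems and their solutions), corrects the retrieved solution.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeRegressor

# Hypothetical case base: problem descriptions X with known solutions y.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 4))
y = 2.0 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(0, 0.1, 200)

# Instance-based step: index the case base for retrieval.
retriever = NearestNeighbors(n_neighbors=1).fit(X)

# Inductive step: learn adaptation patterns, i.e. how differences between
# problems map to differences between their solutions.
pairs = rng.integers(0, len(X), size=(2000, 2))
delta_problem = X[pairs[:, 0]] - X[pairs[:, 1]]
delta_solution = y[pairs[:, 0]] - y[pairs[:, 1]]
adapter = DecisionTreeRegressor(max_depth=6).fit(delta_problem, delta_solution)

def solve(new_problem):
    """Retrieve the most similar case, then adapt its solution."""
    _, nearest = retriever.kneighbors(new_problem.reshape(1, -1))
    retrieved = nearest[0, 0]
    correction = adapter.predict((new_problem - X[retrieved]).reshape(1, -1))[0]
    return y[retrieved] + correction

print(solve(np.array([5.0, 1.0, 3.0, 7.0])))
```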

Relevance: 60.00%

Abstract:

Objectives: To outline the development, structure, data assumptions, and application of an Australian economic model for stroke (Model of Resource Utilization, Costs, and Outcomes for Stroke [MORUCOS]). Methods: The model has a linked spreadsheet format with four modules to describe the disease burden and treatment pathways, estimate prevalence-based and incidence-based costs, and derive life expectancy and quality of life consequences. The model uses patient-level, community-based stroke cohort data and macro-level simulations. An interventions module allows options for change to be consistently evaluated by modifying aspects of the other modules. To date, model validation has included sensitivity testing, face validity, and peer review. Further validation of technical and predictive accuracy is needed. The generic pathway model was assessed by comparison with a stroke subtypes (ischemic, hemorrhagic, or undetermined) approach and used to determine the relative cost-effectiveness of four interventions. Results: The generic pathway model produced lower costs compared with a subtypes version (total average first-year costs/case AUD$15,117 versus AUD$17,786, respectively). Optimal evidence-based uptake of anticoagulation therapy for primary and secondary stroke prevention and of intravenous thrombolytic therapy within 3 hours of stroke were more cost-effective than current practice (base year, 1997). Conclusions: MORUCOS is transparent and flexible in describing Australian stroke care and can effectively be used to systematically evaluate a range of different interventions. Adjusting results to account for stroke subtypes, as they influence cost estimates, could enhance the generic model.

Relevance: 60.00%

Abstract:

Objectives: Risk assessments provided to judicial decision makers as a part of the current generation of legislation for protecting the public from sexual offenders can have a profound impact on the rights of individual offenders. This article will identify some of the human rights issues inherent in using the current assessment procedures to formulate and communicate risk as a forensic expert in cases involving civil commitment, preventive detention, extended supervision, or special conditions of parole. Method: Based on the current professional literature and applied experience in legal proceedings under community protection laws in the United States and New Zealand, potential threats to the rights of offenders are identified. Central to these considerations are issues of the accuracy of current risk assessment measures, communicating the findings of risk assessment appropriately to the court, and the availability of competent forensic mental health professionals in carrying out these functions. The role of the forensic expert is discussed in light of the competing demands of protecting individual human rights and community protection. Conclusion: Actuarial risk assessment represents the best practice for informing judicial decision makers in cases involving sex offenders, yet these measures currently demonstrate substantial limitations in predictive accuracy when applied to individual offenders. These limitations must be clearly articulated when reporting risk assessment findings. Sufficient risk assessment expertise should be available to provide a balanced application of community protection laws.

Relevance: 60.00%

Abstract:

One of the fundamental machine learning tasks is that of predictive classification. Given that organisations collect an ever increasing amount of data, predictive classification methods must be able to handle large amounts of data effectively and efficiently. However, present requirements push existing algorithms to, and sometimes beyond, their limits, since many classification prediction algorithms were designed when currently common data set sizes were beyond imagination. This has led to a significant amount of research into ways of making classification learning algorithms more effective and efficient. Although substantial progress has been made, a number of key questions have not been answered. This dissertation investigates two of these key questions. The first is whether different types of algorithms to those currently employed are required when using large data sets. This is answered by analysis of the way in which the bias plus variance decomposition of predictive classification error changes as training set size is increased. Experiments find that larger training sets require different types of algorithms to those currently used. Some insight into the characteristics of suitable algorithms is provided, and this may provide some direction for the development of future classification prediction algorithms which are specifically designed for use with large data sets. The second question investigated is that of the role of sampling in machine learning with large data sets. Sampling has long been used as a means of avoiding the need to scale up algorithms to suit the size of the data set by scaling down the size of the data sets to suit the algorithm. However, the costs of performing sampling have not been widely explored. Two popular sampling methods are compared with learning from all available data in terms of predictive accuracy, model complexity, and execution time. The comparison shows that sub-sampling generally produces models with accuracy close to, and sometimes greater than, that obtainable from learning with all available data. This result suggests that it may be possible to develop algorithms that take advantage of the sub-sampling methodology to reduce the time required to infer a model while sacrificing little if any accuracy. Methods of improving effective and efficient learning via sampling are also investigated, and new sampling methodologies are proposed. These methodologies include using a varying proportion of instances to determine the next inference step and using a statistical calculation at each inference step to determine sufficient sample size. Experiments show that using a statistical calculation of sample size can not only substantially reduce execution time but can do so with only a small loss, and occasional gain, in accuracy. One of the common uses of sampling is in the construction of learning curves. Learning curves are often used to attempt to determine the optimal training set size which will maximally reduce execution time while not being detrimental to accuracy. An analysis of the performance of methods for detection of convergence of learning curves is performed, with the focus of the analysis on methods that calculate the gradient of the tangent to the curve. Given that such methods can be susceptible to local accuracy plateaus, an investigation into the frequency of local plateaus is also performed.
It is shown that local accuracy plateaus are a common occurrence, and that ensuring only a small loss of accuracy often results in greater computational cost than learning from all available data. These results cast doubt on the applicability of gradient-of-tangent methods for detecting convergence, and on the viability of learning curves for reducing execution time in general.
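A minimal sketch of the convergence-detection idea discussed above, with an off-the-shelf classifier and synthetic data standing in for the dissertation's experiments: the training sample is grown geometrically and growth stops once the gradient of the tangent to the accuracy-versus-size curve falls below a threshold. Thresholds, step sizes and data are illustrative only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=50_000, n_features=20, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

sizes, accuracies = [], []
n, gradient_threshold = 500, 1e-5
while n <= len(X_pool):
    model = GaussianNB().fit(X_pool[:n], y_pool[:n])
    sizes.append(n)
    accuracies.append(model.score(X_test, y_test))
    if len(sizes) >= 2:
        # Gradient of the tangent to the learning curve at the latest point.
        gradient = (accuracies[-1] - accuracies[-2]) / (sizes[-1] - sizes[-2])
        if abs(gradient) < gradient_threshold:
            break  # apparent convergence -- possibly only a local accuracy plateau
    n *= 2

print(f"stopped at n={sizes[-1]}, hold-out accuracy={accuracies[-1]:.3f}")
```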

Relevance: 60.00%

Abstract:

This thesis is concerned with the development of a funding mechanism, the Student Resource Index, which has been designed to resolve a number of difficulties which emerged following the introduction of integration or inclusion as an alternative means of providing educational support to students with disabilities in the Australian State of Victoria. Prior to 1984, the year in which the major integration or inclusion initiatives were introduced, the great majority of students with disabilities were educated in segregated special schools; however, by 1992 the integration initiatives had been successful in including within regular classes approximately half of the students in receipt of additional educational assistance on the basis of disability. The success of the integration program brought with it a number of administrative and financial problems which were the subject of three government enquiries. Central to these difficulties was the development of a dual system of special education provision. On one hand, additional resources were provided for the students attending segregated special schools by means of weighted student ratios, with one teacher being provided for each six students attending a special school. On the other hand, the requirements of individual students integrated into regular schools were assessed by school-based committees on the basis of their perceived extra educational needs. The major criticism of this dual system of special education funding was that it created inequities in the distribution of resources both between the systems and also within the systems. For example, three students with equivalent needs, one of whom attended a special school and two of whom attended different regular schools, could each be funded at substantially differing levels. The solution to these inequities of funding was seen to be in the development of a needs-based funding device which encompassed all students in receipt of additional disability-related educational support. The Student Resource Index developed in this thesis is a set of behavioural descriptors designed to assess the degree of additional educational need across a number of disability domains. These domains include hearing, vision, communication, health, co-ordination (manual and mobility), intellectual capacity and behaviour. The completed Student Resource Index provides a profile of the students’ needs across all of these domains and as such addresses the multiple nature of many disabling conditions. The Student Resource Index was validated in terms of its capacity to predict the ‘known’ membership, that is, the type of special school which some 1200 students in the sample currently attended. The decision to use the existing special school populations as the criterion against which the Student Resource Index was validated was based on the premise that the differing resource levels of these schools, having been historically determined by expert opinion, industrial negotiation and reference to other special education systems, constituted the most reliable estimate of the enrolled students’ needs. When discriminant function analysis was applied to some 178 students attending one school for students with mild intellectual disability and one facility for students with moderate to severe intellectual disability, the Student Resource Index was successful in predicting the students’ known school in 92 percent of cases.
An analysis of those students (8 percent) whose known school enrolment the Student Resource Index had failed to predict revealed that 13 students had, for a variety of reasons, been inappropriately placed in these settings. When these students were removed from the sample the predictive accuracy of the Student Resource Index was raised to 96 percent of the sample. By comparison, the domains of the Vineland Adaptive Behaviour Scale accurately predicted known enrolments for 76 percent of the sample. By way of replication, discriminant function analysis was then applied to the Student Resource Index profiles of 518 students attending Day Special Schools (Mild Intellectual Disability) and 287 students attending Special Developmental Schools (Moderate to Severe Intellectual Disability). In this case, the Student Resource Index profiles were successful in predicting the known enrolments of 85 percent of students. When a third group was added, 147 students attending Day Special Schools for students with physical disabilities, the Student Resource Index predicted known enrolments in 80 percent of cases. The addition of a fourth group of 116 students attending Day Special Schools (Hearing Impaired) to the discriminant analysis led to a small reduction in predictive accuracy from 80 percent to 78 percent of the sample. A final analysis which included students attending a School for the Deaf-Blind, a Hospital School and a Social and Behavioural Unit was successful in predicting known enrolments in 71 percent of the 1114 students in the sample. For reasons which are expanded upon within the thesis it was concluded that the Student Resource Index, when used in conjunction with discriminant function analysis, was capable of isolating four distinct groups on the basis of their additional educational needs. If the historically determined and varied funding levels provided to these groups, inherent in the cash equivalent of the staffing ratios of Day Special Schools (Mild Intellectual Disability), Special Developmental Schools (Moderate to Severe Intellectual Disability), Day Special Schools (Physical Disability) and Day Special Schools (Hearing Impairment), are accepted as reasonable reflections of these students’ needs, these funding levels can be translated into funding bands. These funding bands can then be applied to students in segregated or inclusive placements. The thesis demonstrates that a new applicant for funding can be introduced into the existing database and, by the use of discriminant function analysis, be allocated to one of the four groups. The analysis is in effect saying that this new student’s profile of educational needs has more in common with Group A than with the members of Groups B, C, or D. The student would then be funded at Group A level. It is immaterial from a funding point of view whether the student decides to attend a segregated or inclusive setting. The thesis then examines the impact of the introduction of Student Resource Index based funding upon the current funding of the special schools in one of the major metropolitan regions. Overall, such an initiative would lead to a reduction of 1.54 percent of the total funding accruing to the region’s special schools.
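A minimal sketch of the discriminant-function step described above, with hypothetical profiles standing in for the Student Resource Index data: a linear discriminant model is fitted to need-domain scores with known school placements as labels, the share of correctly predicted placements is reported, and a new applicant is allocated to the most similar group.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in for Student Resource Index profiles: rows are students,
# columns are scores on eight need domains, labels are known school types.
rng = np.random.default_rng(1)
groups = {0: (2.0, 0.8), 1: (4.0, 0.8), 2: (6.0, 0.8), 3: (3.0, 1.5)}
X = np.vstack([rng.normal(mu, sd, size=(300, 8)) for mu, sd in groups.values()])
y = np.repeat(list(groups.keys()), 300)

lda = LinearDiscriminantAnalysis()
accuracy = cross_val_score(lda, X, y, cv=5).mean()
print(f"share of correctly predicted placements (cross-validated): {accuracy:.1%}")

# Allocating a new applicant to the existing group their profile most resembles.
new_profile = rng.normal(4.0, 1.0, size=(1, 8))
print("allocated funding group:", lda.fit(X, y).predict(new_profile)[0])
```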

Relevance: 60.00%

Abstract:

This thesis presents Relation Based Modelling as an extension to the Feature Based Modelling approach to student modelling. Relation Based Modelling dynamically creates new terms, allowing the instructional designer to specify a set of primitives and operators from which the modelling system creates the necessary elements. Focal modelling is a new technique devised to manipulate and coordinate the addition of new terms. The thesis presents an evaluation of student modelling systems based on predictive accuracy.

Relevance: 60.00%

Abstract:

This article reports findings from a series of empirical studies investigating whether poor release planning might contribute to sex offender recidivism. A coding protocol was developed to measure the comprehensiveness of release planning, which included items relating to accommodation, employment, pro-social support, community-based treatment, and the Good Lives Model (T. Ward & C.A. Stewart, 2003) secondary goods. The protocol was retrospectively applied to groups of recidivist and non-recidivist child molesters, matched on static risk level and time since release. As predicted, overall release planning was significantly poorer for recidivists compared to non-recidivists. The accommodation, employment, and social support items combined to best predict recidivism, with predictive accuracy comparable to that obtained using static risk models. The results highlight the importance of release planning in efforts to reduce sex offender recidivism. Implications for policy makers and community members are briefly discussed.
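Purely as an illustration of the kind of comparison reported above (the study's coding protocol and data are not reproduced), the sketch below scores three hypothetical release-planning items, combines them in a logistic model, and measures predictive accuracy with the area under the ROC curve, the usual basis for comparison against static risk scores. All values are simulated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical coding-protocol items (1 = poor planning in that area) and a
# hypothetical outcome; no real offender data is used.
rng = np.random.default_rng(2)
n = 400
planning = rng.integers(0, 2, size=(n, 3))       # accommodation, employment, support
latent = planning.sum(axis=1) * 0.6 + rng.normal(0, 1, n)
recidivated = (latent > np.median(latent)).astype(int)

model = LogisticRegression().fit(planning, recidivated)
auc = roc_auc_score(recidivated, model.predict_proba(planning)[:, 1])
print(f"AUC of the combined planning items: {auc:.2f}")
```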

Relevance: 60.00%

Abstract:

Purpose – The purpose of this paper is to put forward an innovative approach for reducing the variation between Type I and Type II errors in the context of ratio-based modeling of corporate collapse, without compromising the accuracy of the predictive model. Its contribution to the literature lies in resolving the problematic trade-off between predictive accuracy and variations between the two types of errors.

Design/methodology/approach – The methodological approach in this paper – called MCCCRA – utilizes a novel multi-classification matrix based on a combination of correlation and regression analysis, with the former being subject to optimisation criteria. In order to ascertain its accuracy in signaling collapse, MCCCRA is empirically tested against multiple discriminant analysis (MDA).

Findings – Based on a data sample of 899 US publicly listed companies, the empirical results indicate that in addition to a high level of accuracy in signaling collapse, MCCCRA generates lower variability between Type I and Type II errors when compared to MDA.

Originality/value – Although correlation and regression analysis are long-standing statistical tools, the optimisation constraints that are applied to the correlations are unique. Moreover, the multi-classification matrix is a first in signaling collapse. By providing economic insight into more stable financial modeling, these innovations make an original contribution to the literature.
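As a worked illustration of the two error types whose variation the paper targets (not the MCCCRA procedure itself), the snippet below computes Type I and Type II error rates from a set of hypothetical collapse predictions and reports the gap between them.

```python
import numpy as np

# Hypothetical outcomes: 1 = company collapsed, 0 = survived,
# together with hypothetical model predictions for the same companies.
actual    = np.array([1, 1, 1, 0, 0, 0, 0, 1, 0, 0])
predicted = np.array([1, 0, 1, 0, 1, 0, 0, 1, 0, 0])

# Type I error: a collapsed company classified as healthy (missed failure).
type_1 = np.mean(predicted[actual == 1] == 0)
# Type II error: a healthy company classified as collapsing (false alarm).
type_2 = np.mean(predicted[actual == 0] == 1)

print(f"Type I error rate:  {type_1:.2f}")
print(f"Type II error rate: {type_2:.2f}")
print(f"variation between the two error types: {abs(type_1 - type_2):.2f}")
```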

Relevance: 60.00%

Abstract:

Research question: A major barrier to retaining existing customers is the difficulty in knowing who is most at risk of leaving (or ‘churning’). Given the strategic and financial importance of season ticket holders (STH) to professional sport teams, this paper examines the effectiveness of a range of variables in identifying the STH who are most likely to churn.
Research methods: A longitudinal field study was undertaken to reflect actual conditions. Survey data from a professional sport team's STH were collected prior to the conclusion of the season. Actual renewal data were then tracked from team records the following season. This work was replicated across five professional sport teams from the Australian Football League, with renewal predictions made and tracked for over 10,000 STH.
Results and findings: The results suggest that the ‘Juster’ Scale – a simple, one-item purchase probability measure – is an effective identifier of those most at risk of churning, more than 3 months in advance. When combined with ticket utilization and tenure measures, predictive accuracy improves markedly, to the point where these three measures can be used to provide an effective early warning system for managers.
Implications: Whilst there is a tendency to view STH as highly loyal, these data reinforce the importance of actively managing all customers to reduce churn. Despite their commitment, STH do churn, but those most likely to do so can be identified by examining their patterns of behaviour in the current season. Efforts to retain STH need to shift their focus away from transactional value assessments.
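A minimal sketch of an early-warning model in the spirit of the findings above, with entirely hypothetical data: a Juster-scale purchase-probability score, ticket utilization and tenure are combined in a logistic model, and the season ticket holders with the highest predicted churn risk are flagged. Variable names, coefficients and thresholds are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical STH records: Juster score (0-10), share of games attended,
# and years held; churned = did not renew the following season.
rng = np.random.default_rng(3)
n = 1000
juster = rng.integers(0, 11, n)
utilization = rng.uniform(0, 1, n)
tenure = rng.integers(1, 20, n)
logit = -0.5 * juster - 2.0 * utilization - 0.1 * tenure + 2.5
churned = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([juster, utilization, tenure])
model = LogisticRegression().fit(X, churned)

# Flag the 10% of holders with the highest predicted churn risk for retention effort.
risk = model.predict_proba(X)[:, 1]
at_risk = np.argsort(risk)[-n // 10:]
print(f"flagged {len(at_risk)} season ticket holders; highest risk = {risk.max():.2f}")
```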

Relevance: 60.00%

Abstract:

The superior characteristics of high photon flux and diffraction-limited spatial resolution achieved by synchrotron-FTIR microspectroscopy allowed molecular characterization of individual live thraustochytrids. Principal component analysis revealed distinct separation of the single live cell spectra into their corresponding strains, comprising new Australasian thraustochytrids (AMCQS5-5 and S7) and standard cultures (AH-2 and S31). Unsupervised hierarchical cluster analysis (UHCA) indicated close similarities between S7 and AH-7 strains, with AMCQS5-5 being distinctly different. The UHCA correlation conformed well to the fatty acid profiles, indicating the type of fatty acids as a critical factor in chemotaxonomic discrimination of these thraustochytrids and also revealing the distinctively high polyunsaturated fatty acid content as a key identifying feature of AMCQS5-5. Partial least squares discriminant analysis using a cross-validation approach between two replicate datasets was demonstrated to be a powerful classification method, leading to models of high robustness and 100% predictive accuracy for strain identification. The results emphasized the exceptional S-FTIR capability to perform real-time in vivo measurement of single live cells directly within their original medium, providing unique information on cell variability among the population of each isolate and evidence of spontaneous lipid peroxidation that could lead to deeper understanding of lipid production and oxidation in thraustochytrids for single-cell oil development.
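A minimal sketch of a PLS-DA classification step of the kind described, with synthetic spectra standing in for the synchrotron-FTIR measurements: partial least squares regression is fitted against one-hot strain labels on one replicate dataset and validated on a second, and the share of correctly identified spectra is reported. Strain structure, noise levels and component count are all illustrative assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def make_replicate(rng, centres, n_per_strain=30, noise=0.3):
    """Synthetic stand-in for single-cell spectra of several strains."""
    X, y = [], []
    for strain, centre in enumerate(centres):
        X.append(centre + rng.normal(0, noise, (n_per_strain, centre.size)))
        y.extend([strain] * n_per_strain)
    return np.vstack(X), np.array(y)

rng = np.random.default_rng(4)
centres = rng.normal(0, 1, (4, 200))             # strain-specific mean "spectra"
X_train, y_train = make_replicate(rng, centres)  # first replicate dataset
X_test, y_test = make_replicate(rng, centres)    # second replicate dataset

# PLS-DA: regress one-hot strain membership on the spectra, classify by argmax.
Y_train = np.eye(4)[y_train]
plsda = PLSRegression(n_components=5).fit(X_train, Y_train)
predicted = plsda.predict(X_test).argmax(axis=1)
print(f"strain identification accuracy: {(predicted == y_test).mean():.1%}")
```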

Relevance: 60.00%

Abstract:

This book offers a way to predict which brand a buyer will purchase. It looks at brand performance within a product category and tests it in different countries with very different cultures. Following the Predictive Brand Choice (PBC) model, this book seeks to predict a consumer’s loyalty and choice. Results have shown that PBC can achieve a high level of predictive accuracy, in excess of 70% in mature markets. This accuracy holds even in the face of price competition from a less preferred brand. PBC uses a prospective predicting method which does not have to rely on a brand’s past performance or a customer’s purchase history for prediction. Choice data is gathered in the retail setting – at the point of sale. The Strategy of Global Branding and Brand Equity presents survey data and quantitative analyses that prove the method described to be practical, useful and implementable for both researchers and practitioners of commercial brand strategies.

Relevance: 60.00%

Abstract:

Aim: The purpose of this study was to create predictive species distribution models (SDMs) for temperate reef-associated fish species densities and fish assemblage diversity and richness to aid in marine conservation and spatial planning. Location: California, USA. Methods: Using generalized additive models, we associated fish species densities and assemblage characteristics with seafloor structure, giant kelp biomass and wave climate and used these associations to predict the distribution and assemblage structure across the study area. We tested the accuracy of these predicted extrapolations using an independent data set. The SDMs were also used to estimate larger scale abundances to compare with other estimates of species abundance (uniform density extrapolation over rocky reef and density extrapolations taking into account variations in geomorphic structure). Results: The SDMs successfully modelled the species-habitat relationships of seven rocky reef-associated fish species and showed that species' densities differed in their relationships with environmental variables. The predictive accuracy of the SDMs ranged from 0.26 to 0.60 (Pearson's r correlation between observed and predicted density values). The SDMs created for the fish assemblage-level variables had higher prediction accuracies, with Pearson's r values of 0.61 for diversity and 0.71 for richness. The comparisons of the different methods for extrapolating species densities over a single marine protected area varied greatly in their abundance estimates, with the uniform extrapolation (density values extrapolated evenly over the rocky reef) always estimating much greater abundances. The other two methods, which took into account variation in the geomorphic structure of the reef, provided much lower abundance estimates. Main conclusions: Species distribution models that combine geomorphic, oceanographic and biogenic habitat variables can reliably predict spatial patterns of species density and assemblage attributes of temperate reef fishes at spatial scales of 50 m. Thus, SDMs show great promise for informing spatial and ecosystem-based approaches to conservation and fisheries management.
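A minimal sketch of the accuracy check described above (Pearson's r between observed and predicted densities on an independent data set), with synthetic habitat covariates and a generic regressor standing in for the study's generalized additive models; covariate names and effect sizes are invented for illustration.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical covariates per site: seafloor relief, giant kelp biomass, wave exposure.
rng = np.random.default_rng(5)

def sample_sites(n):
    X = rng.uniform(0, 1, size=(n, 3))
    density = np.exp(1.5 * X[:, 0] + 0.8 * X[:, 1] - 1.0 * X[:, 2]) + rng.gamma(1.0, 0.2, n)
    return X, density

X_survey, y_survey = sample_sites(400)   # data used to build the model
X_indep, y_indep = sample_sites(150)     # independent validation data set

sdm = GradientBoostingRegressor(random_state=0).fit(X_survey, y_survey)
r, _ = pearsonr(y_indep, sdm.predict(X_indep))
print(f"Pearson's r, observed vs predicted density: {r:.2f}")
```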

Relevance: 60.00%

Abstract:

This study evaluates, for the Brazilian case, one of the most important properties expected of a core inflation measure: being a good predictor of future headline inflation. To this end, two models built from monthly IPCA data and six VAR models, one for each of the core measures calculated by the Central Bank of Brazil, were used as benchmarks for comparison. Forecast performance was assessed by comparing mean squared errors and by applying the Diebold-Mariano (1995) methodology for model comparison. The results indicate that, by the criteria used in this study, the current set of core measures calculated by the Central Bank does not possess this desired property.
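A minimal sketch of the Diebold-Mariano comparison mentioned above, written directly from its definition for one-step-ahead forecasts under squared-error loss (no small-sample correction, and autocorrelation in the loss differential is ignored); the inflation series and the two competing forecasts are simulated placeholders.

```python
import numpy as np
from scipy import stats

def diebold_mariano(actual, forecast_a, forecast_b):
    """DM statistic for equal predictive accuracy, squared-error loss, h = 1.
    Positive values indicate that forecast_a has the larger average loss."""
    d = (actual - forecast_a) ** 2 - (actual - forecast_b) ** 2
    dm = d.mean() / np.sqrt(d.var(ddof=1) / len(d))
    p_value = 2 * stats.norm.sf(abs(dm))
    return dm, p_value

rng = np.random.default_rng(6)
inflation = rng.normal(0.5, 0.3, 120)                  # hypothetical monthly inflation
forecast_core = inflation + rng.normal(0, 0.25, 120)   # core-measure-based forecast
forecast_ipca = inflation + rng.normal(0, 0.20, 120)   # IPCA-based benchmark forecast

dm, p = diebold_mariano(inflation, forecast_core, forecast_ipca)
print(f"DM statistic = {dm:.2f}, p-value = {p:.3f}")
```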

Relevance: 60.00%

Abstract:

The first essay, "Determinants of Credit Expansion in Brazil", analyzes the determinants of credit using an extensive bank-level panel dataset. The Brazilian economy experienced a major boost in leverage in the first decade of the 2000s as a result of a set of factors ranging from macroeconomic stability to the abundant liquidity in international financial markets before 2008, together with a set of deliberate decisions taken by President Lula to expand credit, boost consumption and gain political support from the lower social strata. The relevant conclusions of our investigation are that credit expansion relied on the reduction of the monetary policy rate, that international financial markets are an important source of funds, and that payroll-guaranteed credit and investment-grade status affected credit supply positively. We were not able to confirm the importance of financial inclusion efforts. The importance of financial sector soundness indicators for credit conditions cannot be underestimated. These results raise questions over the sustainability of this expansion process and over financial stability in the future. The second essay, "Public Credit, Monetary Policy and Financial Stability", discusses the role of public credit. The supply of public credit in Brazil successfully served to relaunch the economy after the Lehman Brothers demise. It was later transformed into a driver of economic growth as well as a regulation device to force private banks to reduce interest rates. We argue that the use of public funds to finance economic growth has three important drawbacks: it generates inflation, induces higher loan rates and may induce financial instability. An additional effect is the prevention of market credit solutions. This study contributes to the understanding of the costs and benefits of credit as a fiscal policy tool. The third essay, "Bayesian Forecasting of Interest Rates: Do Priors Matter?", discusses the choice of priors when forecasting short-term interest rates. Central banks that commit to an inflation-targeting monetary regime are bound to respond to spikes in inflation expectations and to a widening output gap in a clear and transparent way by abiding by a Taylor rule. There are various reports of central banks being more responsive to inflationary than to deflationary shocks, rendering the monetary policy response non-linear. Besides that, there is no guarantee that coefficients remain stable over time. Central banks may switch to a dual-target regime to consider deviations from inflation and the output gap. The estimation of a Taylor rule may therefore have to consider a non-linear model with time-varying parameters. This essay uses Bayesian forecasting methods to predict short-term interest rates. We take two different approaches: from a theoretical perspective we focus on an augmented version of the Taylor rule and include the real exchange rate, the credit-to-GDP and the net public debt-to-GDP ratios. We also take an "atheoretic" approach based on the Expectations Theory of the Term Structure to model short-term interest rates. The selection of priors is particularly relevant for predictive accuracy, yet, ideally, forecasting models should require as little a priori expert insight as possible. We present recent developments in prior selection; in particular, we propose the use of hierarchical hyper-g priors for better forecasting in a framework that can be easily extended to other key macroeconomic indicators.
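As a minimal illustration of why the prior matters in this setting, the sketch below uses a Zellner g-prior (a simpler relative of the hierarchical hyper-g priors proposed in the essay) on a linear Taylor-rule-style regression: with a zero prior mean, the posterior mean of the coefficients is the OLS estimate shrunk by g/(1+g), so the prior scale directly moves the interest-rate forecast. Regressors, coefficients and data are simulated.

```python
import numpy as np

rng = np.random.default_rng(7)
T = 200
# Hypothetical Taylor-rule regressors: inflation gap, output gap, real exchange rate.
X = rng.normal(0, 1, size=(T, 3))
beta_true = np.array([1.5, 0.5, -0.2])
rate = X @ beta_true + rng.normal(0, 0.5, T)    # short-term interest rate

beta_ols = np.linalg.lstsq(X, rate, rcond=None)[0]
x_next = np.array([0.4, -0.1, 0.2])             # next period's regressors

# Zellner g-prior with zero prior mean: posterior mean shrinks OLS toward zero.
for g in (1.0, 10.0, 100.0):
    beta_post = (g / (1.0 + g)) * beta_ols
    print(f"g = {g:>5}: forecast rate = {x_next @ beta_post:+.3f}")
print(f"  OLS    : forecast rate = {x_next @ beta_ols:+.3f}")
```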

Relevance: 60.00%

Abstract:

Patterns of species interactions affect the dynamics of food webs. An important component of species interactions that is rarely considered with respect to food webs is the strengths of interactions, which may affect both structure and dynamics. In natural systems, these strengths are variable, and can be quantified as probability distributions. We examined how variation in strengths of interactions can be described hierarchically, and how this variation impacts the structure of species interactions in predator-prey networks, both of which are important components of ecological food webs. The stable isotope ratios of predator and prey species may be particularly useful for quantifying this variability, and we show how these data can be used to build probabilistic predator-prey networks. Moreover, the distribution of variation in strengths among interactions can be estimated from a limited number of observations. This distribution informs network structure, especially the key role of dietary specialization, which may be useful for predicting structural properties in systems that are difficult to observe. Finally, using three mammalian predator-prey networks (two African and one Canadian) quantified from stable isotope data, we show that exclusion of link-strength variability results in biased estimates of nestedness and modularity within food webs, whereas the inclusion of body size constraints only marginally increases the predictive accuracy of the isotope-based network. We find that modularity is the consequence of strong link strengths in both African systems, while nestedness is not significantly present in any of the three predator-prey networks.
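A minimal sketch of the kind of sensitivity noted above, with a small hypothetical predator-prey network: modularity is computed once using the link strengths (for example, dietary proportions estimated from stable isotope mixing models) and once after discarding them, so the effect of ignoring link-strength information can be seen directly. The species, weights and community-detection settings are illustrative only.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Hypothetical predator-prey links with interaction strengths.
links = [
    ("lion", "zebra", 0.7), ("lion", "wildebeest", 0.3),
    ("hyena", "wildebeest", 0.8), ("hyena", "zebra", 0.2),
    ("lynx", "hare", 0.9), ("lynx", "squirrel", 0.1),
    ("marten", "squirrel", 0.85), ("marten", "hare", 0.15),
]

G = nx.Graph()
G.add_weighted_edges_from(links)

def network_modularity(graph, weight):
    """Detect communities and score modularity, with or without link weights."""
    communities = greedy_modularity_communities(graph, weight=weight)
    return modularity(graph, communities, weight=weight)

print(f"modularity using link strengths:    {network_modularity(G, 'weight'):.3f}")
print(f"modularity ignoring link strengths: {network_modularity(G, None):.3f}")
```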