131 results for Predictive modelling

in University of Queensland eSpace - Australia


Relevance: 60.00%

Abstract:

This review reflects the state of the art in the study of contact and dynamic phenomena occurring in cold roll forming. The importance of taking these phenomena into account is underscored by the significant machine time and tooling costs incurred in replacing worn-out forming rolls and adjusting equipment in cold roll forming. Predictive modelling of the tool wear caused by contact and dynamic phenomena can reduce production losses in this technological process.

Relevance: 60.00%

Abstract:

Data mining is the process of identifying valid, implicit, previously unknown, potentially useful and understandable information in large databases. It is an important step in the process of knowledge discovery in databases (Olaru & Wehenkel, 1999). In a data mining process, input data can be structured, semi-structured or unstructured, and can take the form of text, categorical values or numerical values. One of the important characteristics of data mining is its ability to deal with data that are large in volume, distributed, time-variant, noisy and high-dimensional. A large number of data mining algorithms have been developed for different applications. For example, association rule mining is useful for market basket problems, clustering algorithms can discover trends in unsupervised learning problems, classification algorithms can be applied to decision-making problems, and sequential and time series mining algorithms can be used for event prediction, fault detection and other supervised learning problems (Vapnik, 1999). Classification is among the most important tasks in data mining, particularly in applications to engineering fields. Together with regression, classification is mainly used for predictive modelling. A number of classification algorithms are in practical use. According to Sebastiani (2002), the main classification algorithms can be categorized as: decision tree and rule-based approaches such as C4.5 (Quinlan, 1996); probabilistic methods such as the Bayesian classifier (Lewis, 1998); on-line methods such as Winnow (Littlestone, 1988) and CVFDT (Hulten, 2001); neural network methods (Rumelhart, Hinton & Williams, 1986); and example-based methods such as k-nearest neighbors (Duda & Hart, 1973) and SVM (Cortes & Vapnik, 1995). Other important techniques for classification tasks include associative classification (Liu et al., 1998) and ensemble classification (Tumer, 1996).
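As an illustration of the example-based methods listed above, here is a minimal k-nearest-neighbour classifier. It is a generic sketch, not code from any of the cited works; the toy data and function name are hypothetical.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    points, using Euclidean distance."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical two-class toy data: one cluster near (0, 0), one near (5, 5).
train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]
print(knn_predict(train, (0.5, 0.5)))  # prints "a"
print(knn_predict(train, (5.5, 5.5)))  # prints "b"
```

The same voting scheme extends directly to more features or classes; only the distance function and k need choosing.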

Relevance: 60.00%

Abstract:

Promiscuous T-cell epitopes make ideal targets for vaccine development. We report here a computational system, multipred, for the prediction of peptide binding to the HLA-A2 supertype. It combines a novel representation of peptide/MHC interactions with a hidden Markov model as the prediction algorithm. multipred is both sensitive and specific, and demonstrates high accuracy of peptide-binding predictions for the HLA-A*0201, *0204 and *0205 alleles, good accuracy for the *0206 allele, and marginal accuracy for the *0203 allele. multipred replaces the earlier requirement for an individual prediction model for each HLA allelic variant and simplifies the computational aspects of peptide-binding prediction. Preliminary testing indicates that multipred can predict peptide binding to HLA-A2 supertype molecules with high accuracy, including those allelic variants for which no experimental binding data are currently available.

Relevance: 60.00%

Abstract:

1. Cluster analysis of reference sites with similar biota is the initial step in creating the River Invertebrate Prediction and Classification System (RIVPACS) and similar river bioassessment models such as the Australian River Assessment System (AUSRIVAS). This paper describes and tests an alternative prediction method, Assessment by Nearest Neighbour Analysis (ANNA), based on the same philosophy as RIVPACS and AUSRIVAS but without the grouping step that some view as artificial. 2. The steps in creating ANNA models are: (i) weighting the predictor variables using a multivariate approach analogous to principal axis correlations; (ii) calculating the weighted Euclidean distance from a test site to the reference sites based on the environmental predictors; (iii) predicting the faunal composition based on the nearest reference sites; and (iv) calculating an observed/expected (O/E) ratio analogous to that of RIVPACS/AUSRIVAS. 3. The paper compares AUSRIVAS and ANNA models on 17 datasets representing a variety of habitats and seasons. First, it examines each model's regressions of observed versus expected number of taxa, including the r(2), intercept and slope. Second, the two models' assessments of 79 test sites in New Zealand are compared. Third, the models are compared on test and presumed reference sites along a known trace metal gradient. Fourth, ANNA models are evaluated for Western Australia, a geographically distinct region of Australia. The comparisons demonstrate that ANNA and AUSRIVAS are generally equivalent in performance, although ANNA is potentially more robust for the O versus E regressions and potentially more accurate on the trace metal gradient sites. 4. The ANNA method is recommended for use in the bioassessment of rivers, at least for corroborating the results of the well-established AUSRIVAS- and RIVPACS-type models, if not to replace them.

Relevance: 30.00%

Abstract:

The linear relationship between work accomplished (W-lim) and time to exhaustion (t-lim) can be described by the equation W-lim = a + CP.t-lim. Critical power (CP) is the slope of this line and is thought to represent a maximum rate of ATP synthesis without exhaustion, presumably an inherent characteristic of the aerobic energy system. The present investigation determined whether the choice of predictive tests would elicit significant differences in the estimated CP. Ten female physical education students completed, in random order and on consecutive days, five all-out predictive tests at preselected constant power outputs. Predictive tests were performed on an electrically braked cycle ergometer, with power loadings individually chosen so as to induce fatigue within approximately 1-10 min. CP was derived by fitting the linear W-lim versus t-lim regression and calculated three ways: 1) using the first, third and fifth W-lim-t-lim coordinates (I-135); 2) using coordinates from the three highest power outputs (I-123; mean t-lim = 68-193 s); and 3) using coordinates from the three lowest power outputs (I-345; mean t-lim = 193-485 s). Repeated measures ANOVA revealed that CP(I-123) (201.0 +/- 37.9 W) > CP(I-135) (176.1 +/- 27.6 W) > CP(I-345) (164.0 +/- 22.8 W) (P < 0.05). When the three sets of data were used to fit the hyperbolic power versus t-lim regression, statistically significant differences between each CP were also found (P < 0.05). The shorter the predictive trials, the greater the slope of the W-lim-t-lim regression, possibly because of the greater influence of 'aerobic inertia' on these trials. This may explain why CP has failed to represent a maximal, sustainable work rate. The present findings suggest that if CP is to represent the highest power output that an individual can maintain for a very long time without fatigue, then CP should be calculated over a range of predictive tests in which the influence of aerobic inertia is minimised.
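The fit of W-lim = a + CP.t-lim described above is ordinary linear regression, with CP as the slope. A minimal sketch follows; the trial times and work values are hypothetical (constructed to lie exactly on a line), not the study's measurements.

```python
def critical_power(t_lim, w_lim):
    """Least-squares fit of W-lim = a + CP * t-lim.
    Returns (a, CP): intercept a and slope CP (critical power)."""
    n = len(t_lim)
    mt, mw = sum(t_lim) / n, sum(w_lim) / n
    cp = (sum((t - mt) * (w - mw) for t, w in zip(t_lim, w_lim))
          / sum((t - mt) ** 2 for t in t_lim))
    return mw - cp * mt, cp

# Hypothetical predictive-trial data: times to exhaustion (s) and total
# work (J), constructed to lie exactly on W = 10000 + 200 t.
t = [100, 200, 300, 400, 500]
w = [30000, 50000, 70000, 90000, 110000]
a, cp = critical_power(t, w)
print(cp)  # slope = critical power, 200.0 W
print(a)   # intercept, 10000.0 J
```

Fitting the same line to only the shortest or only the longest trials, as in the I-123 and I-345 calculations, simply means passing the corresponding subsets of coordinates to the same function.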

Relevance: 30.00%

Abstract:

1. Although population viability analysis (PVA) is widely employed, forecasts from PVA models are rarely tested. This study in a fragmented forest in southern Australia contrasted field data on patch occupancy and abundance for the arboreal marsupial greater glider Petauroides volans with predictions from a generic spatially explicit PVA model. This work represents one of the first landscape-scale tests of its type. 2. Initially we contrasted field data from a set of eucalypt forest patches totalling 437 ha with a naive null model in which forecasts of patch occupancy were made assuming no fragmentation effects, based simply on remnant area and densities measured in nearby unfragmented forest. The naive null model predicted a total of approximately 170 greater gliders, considerably more than the true count (n = 81). 3. Congruence was then examined between the field data and predictions from the PVA under several metapopulation modelling scenarios. The metapopulation models performed better than the naive null model. Logistic regression showed highly significant positive relationships between predicted and actual patch occupancy for the four scenarios (P = 0.001-0.006). When the model-derived probability of patch occupancy was high (0.50-0.75, 0.75-1.00), there was greater congruence between actual patch occupancy and the predicted probability of occupancy. 4. For many patches, probability distribution functions indicated that model predictions of animal abundance in a given patch were not outside those expected by chance. However, for some patches the model either substantially over-predicted or under-predicted actual abundance. Some important processes that influence the distribution and abundance of the greater glider, such as inter-patch dispersal, may not have been adequately modelled. 5. Additional landscape-scale tests of PVA models, on a wider range of species, are required to further assess the predictions made using these tools. This will help determine the taxa for which predictions are and are not accurate, and give insights for improving models for applied conservation management.
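The naive null model in point 2 is simply remnant area multiplied by unfragmented-forest density. The sketch below uses an invented patch set; only the 437 ha total, the roughly 170-animal prediction and the true count of 81 come from the abstract.

```python
def naive_null_abundances(patch_areas_ha, density_per_ha):
    """Naive null model: predicted abundance per patch is remnant area
    times the density measured in nearby unfragmented forest, assuming
    no fragmentation effects."""
    return [area * density_per_ha for area in patch_areas_ha]

# Invented patch areas summing to the study's 437 ha; the density is
# back-calculated so the null-model total matches the reported ~170.
patches = [120, 95, 80, 70, 42, 30]  # ha, total 437
density = 170 / 437                  # greater gliders per ha (illustrative)
predicted = naive_null_abundances(patches, density)
print(round(sum(predicted)))  # ~170 predicted, versus 81 actually counted
```

The gap between the null-model total and the field count is what motivates the fragmentation-aware metapopulation scenarios in point 3.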

Relevance: 30.00%

Abstract:

Many granulation plants operate well below design capacity, suffering from high recycle rates and even periodic instabilities. This behaviour cannot be fully predicted using present models. The main objective of this paper is to provide an overview of the current status of model development for granulation processes and to suggest future directions for research and development. The end-use of the models is the optimal design and control of granulation plants using improved predictions of process dynamics. The development of novel models involving mechanistically based structural switching methods is proposed, and a number of guidelines are given for the selection of control-relevant model structures.

Relevance: 30.00%

Abstract:

This paper proposes a template for modelling complex datasets that integrates traditional statistical modelling approaches with more recent advances in statistics and modelling through an exploratory framework. Our approach builds on the well-known and long-standing tradition of 'good practice in statistics' by establishing a comprehensive framework for modelling that focuses on exploration, prediction, interpretation and reliability assessment, the last being a relatively new idea that allows individual assessment of predictions. The integrated framework comprises two stages. The first involves the use of exploratory methods to visually understand the data and identify a parsimonious set of explanatory variables. The second encompasses a two-step modelling process, in which non-parametric methods such as decision trees and generalized additive models are used to identify important variables and their relationship with the response before a final predictive model is considered. We focus on fitting the predictive model using parametric, non-parametric and Bayesian approaches. The paper is motivated by a medical problem in which interest centres on developing a risk stratification system for the morbidity of 1,710 cardiac patients, given a suite of demographic, clinical and preoperative variables. Although the methods are applied specifically to this case study, they can be used in any field, irrespective of the type of response.

Relevance: 30.00%

Abstract:

Promiscuous human leukocyte antigen (HLA)-binding peptides are ideal targets for vaccine development. Existing computational models for the prediction of promiscuous peptides use hidden Markov models and artificial neural networks as prediction algorithms. We report a system based on support vector machines that outperforms previously published methods. Preliminary testing showed that it can predict peptides binding to HLA-A2 and -A3 supertype molecules with excellent accuracy, even for molecules for which no binding data are currently available.

Relevance: 30.00%

Abstract:

Computer modelling promises to be an important tool for analysing and predicting interactions between trees within mixed-species forest plantations. This study explored the use of an individual-based mechanistic model as a predictive tool for designing mixed-species plantations of Australian tropical trees. The 'spatially explicit individual-based forest simulator' (SeXI-FS) modelling system was used to describe the spatial interaction of individual tree crowns within a binary mixed-species experiment. The three-dimensional model was developed and verified with field data from three forest tree species grown in tropical Australia. The model predicted the interactions within monocultures and binary mixtures of Flindersia brayleyana, Eucalyptus pellita and Elaeocarpus grandis, accounting for an average of 42% of the growth variation exhibited by the species in different treatments. The model requires only structural dimensions and shade tolerance as species parameters. By modelling interactions in existing tree mixtures, the model predicted both increases and reductions in the growth of mixtures (up to +/- 50% of stem volume at 7 years) compared to monocultures. This modelling approach may be useful for designing mixed tree plantations.

Relevance: 30.00%

Abstract:

Broccoli is a vegetable crop of increasing importance in Australia, particularly in south-east Queensland, and farmers need to maintain a regular supply of good-quality broccoli to meet the expanding market. A predictive model of ontogeny incorporating climatic data, including frost risk, would enable farmers to predict harvest maturity date and select appropriate cultivar and sowing-date combinations. To develop procedures for predicting ontogeny, yield and quality, field studies using three cultivars, 'Fiesta', 'Greenbelt' and 'Marathon', were sown on eight dates from 11 March to 22 May 1997 and grown under natural and extended (16 h) photoperiods at the University of Queensland, Gatton Campus. Cultivar, rather than environment, mainly determined the head quality attributes of head shape and branching angle. Yield and quality were not influenced by photoperiod. A better understanding of genotype and environment interactions will help farmers optimise yield and quality by matching cultivars with time of sowing. The estimated base and optimum temperatures for broccoli development were 0 °C and 20 °C, respectively, and were consistent across cultivars, but thermal time requirements for phenological intervals were cultivar-specific. Differences between cultivars in the thermal time requirement from floral initiation to harvest maturity were small and of little importance, but differences in the thermal time requirement from emergence to floral initiation were large. Sensitivity to photoperiod and solar radiation was low in the three cultivars used. This research has produced models to assist broccoli farmers with crop scheduling and cultivar selection in south-east Queensland.
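The thermal-time scheme described (base temperature 0 °C, optimum 20 °C, cultivar-specific degree-day requirements for phenological intervals) can be sketched as follows. The daily temperature trace and the 150 degree-day requirement are hypothetical, not values from the study.

```python
def thermal_time(daily_mean_temps, t_base=0.0, t_opt=20.0):
    """Accumulate degree-days with the estimated broccoli cardinal
    temperatures: each day contributes its mean temperature clamped
    between t_base (0 degC) and t_opt (20 degC), minus t_base."""
    return sum(min(max(t, t_base), t_opt) - t_base for t in daily_mean_temps)

def day_requirement_met(daily_mean_temps, requirement_dd):
    """First day on which a cultivar-specific thermal-time requirement
    (e.g. emergence to floral initiation) is reached, or None."""
    total = 0.0
    for day, t in enumerate(daily_mean_temps, start=1):
        total += min(max(t, 0.0), 20.0)
        if total >= requirement_dd:
            return day
    return None

# Hypothetical daily mean temperatures (degC) and a hypothetical
# 150 degree-day requirement; neither is from the study.
temps = [18, 22, 15, 25, 10, 19, 21, 17, 16, 20]
print(thermal_time(temps))                  # 175.0 degree-days
print(day_requirement_met(temps * 3, 150))  # day 9
```

Swapping in a cultivar's own degree-day requirements gives the cultivar-by-sowing-date maturity predictions the abstract describes.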

Relevance: 30.00%

Abstract:

New tools derived from advances in molecular biology have not been widely adopted in plant breeding because of the inability to connect information at the gene level to the phenotype in a manner that is useful for selection. We explore whether a crop growth and development modelling framework can link phenotype complexity to underlying genetic systems in a way that strengthens molecular breeding strategies. We use gene-to-phenotype simulation studies on sorghum to consider the value to marker-assisted selection of intrinsically stable QTLs that might be generated by physiological dissection of complex traits. The consequences for grain yield of genetic variation in four key adaptive traits (phenology, osmotic adjustment, transpiration efficiency, and staygreen) were simulated for a diverse set of environments by placing the known extent of genetic variation in the context of the physiological determinants framework of a crop growth and development model. It was assumed that the three to five genes associated with each trait had two alleles per locus, acting in an additive manner. The effects on average simulated yield generated by differing combinations of positive alleles for the traits varied with environment type. The full matrix of simulated phenotypes, which consisted of 547 location-season combinations and 4235 genotypic expression states, was analysed for genetic and environmental effects. The analysis was conducted in stages with gradually increasing understanding of gene-to-phenotype relationships, as would arise from physiological dissection and modelling. It was found that environmental characterisation and physiological knowledge helped to explain and unravel gene and environment context dependencies. We simulated a marker-assisted selection (MAS) breeding strategy based on the analyses of gene effects. When marker scores were allocated on the basis of the contribution of gene effects to yield in a single environment, the rate of yield gain over all environments with breeding cycle diverged widely, depending on the environment chosen for the QTL analysis. It was suggested that knowledge resulting from trait physiology and modelling would overcome this dependency by identifying stable QTLs. The improved predictive power would increase the utility of the QTLs in MAS. Developing and implementing this gene-to-phenotype capability in crop improvement requires enhanced attention to phenotyping, ecophysiological modelling, and validation studies to test the stability of candidate QTLs.
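The additive genetic model described above (a few biallelic loci per trait, alleles acting additively) can be sketched by enumerating genotypic states. The per-allele yield effect and linear yield mapping are hypothetical, and this simplified enumeration gives 2^12 = 4096 states with three loci per trait, rather than the study's 4235 genotypic expression states, which come from its full gene-to-phenotype model.

```python
from itertools import product

def genotype_yields(loci_per_trait, effect_per_allele):
    """Enumerate genotypic expression states for biallelic loci acting
    additively, mapping each state to a yield effect. The linear yield
    mapping and per-allele effect are illustrative assumptions."""
    n_loci = sum(loci_per_trait.values())
    return {alleles: sum(alleles) * effect_per_allele
            for alleles in product((0, 1), repeat=n_loci)}

# Four adaptive traits as in the study, here with three loci each.
traits = {"phenology": 3, "osmotic_adjustment": 3,
          "transpiration_efficiency": 3, "staygreen": 3}
y = genotype_yields(traits, effect_per_allele=25.0)
print(len(y))           # 4096 states (2**12)
print(max(y.values()))  # 300.0: all positive alleles present
```

In the study, the yield of each state is not a fixed linear function but the output of the crop model run across environments, which is what produces the gene-by-environment context dependencies discussed.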

Relevance: 20.00%

Abstract:

A large number of models have been derived from the two-parameter Weibull distribution; these are referred to as Weibull models. They exhibit a wide range of shapes for the density and hazard functions, which makes them suitable for modelling complex failure data sets. WPP and IWPP plots allow one to determine in a systematic manner whether one or more of these models are suitable for modelling a given data set. This paper deals with this topic.
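A WPP (Weibull probability paper) plot of the kind discussed can be sketched as follows: under the transform y = ln(-ln(1 - F)) versus x = ln(t), data from a two-parameter Weibull distribution fall on a straight line whose slope estimates the shape parameter. This is a generic illustration of the plotting technique, not the paper's treatment of the derived models; the median-rank approximation and synthetic sample are standard assumptions.

```python
import math
import random

def wpp_points(failure_times):
    """WPP transform of a failure-time sample: (ln t, ln(-ln(1 - F)))
    for each ordered observation, with the empirical CDF F approximated
    by Bernard's median ranks."""
    ts = sorted(failure_times)
    n = len(ts)
    return [(math.log(t), math.log(-math.log(1.0 - (i - 0.3) / (n + 0.4))))
            for i, t in enumerate(ts, start=1)]

def ls_slope(points):
    """Least-squares slope through the WPP points."""
    xs, ys = zip(*points)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Synthetic sample from Weibull(shape=2, scale=100) by inverse transform.
random.seed(1)
sample = [100.0 * (-math.log(1.0 - random.random())) ** 0.5
          for _ in range(200)]
slope = ls_slope(wpp_points(sample))
print(round(slope, 2))  # close to the true shape parameter, 2
```

Departures from linearity on this plot (curvature, kinks) are what signal that a derived Weibull model, rather than the basic two-parameter form, may be needed.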

Relevance: 20.00%

Abstract:

This paper proposes some variants of Temporal Defeasible Logic (TDL) for reasoning about normative modifications. These variants make it possible to distinguish cases in which, for example, a modification at some time changes legal rules while their conclusions persist afterwards, from cases in which the conclusions are blocked as well.