120 results for model validation
Abstract:
We report the first steps of a collaborative project between the University of Queensland, Polyflow, Michelin, SK Chemicals, and RMIT University on the simulation, validation and application of a recently introduced constitutive model designed to describe branched polymers. Whereas much progress has been made in predicting the complex flow behaviour of many polymers, in particular linear ones, it often proves difficult to predict shear thinning and extensional strain hardening behaviour simultaneously using traditional constitutive models. Recently, a new viscoelastic model based on molecular topology was proposed by McLeish and Larson (1998). We explore the predictive power of a differential multi-mode version of this pom-pom model for the flow behaviour of two commercial polymer melts: a (long-chain branched) low-density polyethylene (LDPE) and a (linear) high-density polyethylene (HDPE). The model responses are compared to elongational recovery experiments published by Langouche and Debbaut (1999), and to start-up of simple shear flow and stress relaxation after simple and reverse step strain experiments carried out in our laboratory.
Abstract:
A detailed analysis procedure is described for evaluating rates of volumetric change in brain structures based on structural magnetic resonance (MR) images. In this procedure, a series of image processing tools have been employed to address the problems encountered in measuring rates of change based on structural MR images. These tools include an algorithm for intensity non-uniformity correction, a robust algorithm for three-dimensional image registration with sub-voxel precision and an algorithm for brain tissue segmentation. A unique feature of the procedure, however, is the use of a fractional volume model that has been developed to provide a quantitative measure of the partial volume effect. With this model, the fractional constituent tissue volumes are evaluated for voxels at the tissue boundary that manifest the partial volume effect, thus allowing tissue boundaries to be defined at a sub-voxel level and in an automated fashion. Validation studies are presented on key algorithms including segmentation and registration. An overall assessment of the method is provided through the evaluation of the rates of brain atrophy in a group of normal elderly subjects, for whom the rate of brain atrophy due to normal aging is predictably small. An application of the method is given in Part II, where the rates of brain atrophy in various brain regions are studied in relation to normal aging and Alzheimer's disease. (C) 2002 Elsevier Science Inc. All rights reserved.
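As an illustration only (the paper's exact formulation is not reproduced here), a two-tissue fractional volume model commonly treats a boundary voxel's intensity as a linear mixture of the pure-tissue intensities, from which the tissue fraction can be recovered:

```latex
% Illustrative two-tissue partial-volume model (assumed schematic form)
% I_v      : observed intensity of a boundary voxel
% I_A, I_B : mean intensities of the two pure tissues on either side of the boundary
% f        : fractional volume of tissue A within the voxel
\[
  I_v = f\, I_A + (1 - f)\, I_B,
  \qquad
  f = \frac{I_v - I_B}{I_A - I_B}, \quad 0 \le f \le 1 .
\]
```

Summing such fractions over boundary voxels contributes sub-voxel volume to each tissue class, which is what allows boundaries to be located below the voxel scale.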
Abstract:
Evaluation of the performance of the APACHE III (Acute Physiology and Chronic Health Evaluation) ICU (intensive care unit) and hospital mortality models at the Princess Alexandra Hospital, Brisbane is reported. Prospective collection of demographic, diagnostic, physiological, laboratory, admission and discharge data of 5681 consecutive eligible admissions (1 January 1995 to 1 January 2000) was conducted at the Princess Alexandra Hospital, a metropolitan Australian tertiary referral medical/surgical adult ICU. ROC (receiver operating characteristic) curve areas for the APACHE III ICU mortality and hospital mortality models demonstrated excellent discrimination. Observed ICU mortality (9.1%) was significantly overestimated by the APACHE III model adjusted for hospital characteristics (10.1%), but did not significantly differ from the prediction of the generic APACHE III model (8.6%). In contrast, observed hospital mortality (14.8%) agreed well with the prediction of the APACHE III model adjusted for hospital characteristics (14.6%), but was significantly underestimated by the unadjusted APACHE III model (13.2%). Calibration curves and goodness-of-fit analysis using Hosmer-Lemeshow statistics demonstrated that calibration was good for the unadjusted APACHE III ICU mortality model and for the APACHE III hospital mortality model adjusted for hospital characteristics. Post hoc analysis revealed a declining annual SMR (standardized mortality ratio) during the study period. This trend was present in each of the non-surgical, emergency and elective surgical diagnostic groups, and the change was temporally related to increased specialist staffing levels. This study demonstrates that the APACHE III model performs well on independent assessment in an Australian hospital. Changes observed in annual SMR using such a validated model support a hypothesis of improved survival outcomes over 1995-1999.
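For context, the SMR referred to above is conventionally the ratio of observed to expected deaths, where expected deaths are obtained by summing the model-predicted death probabilities over admissions. A minimal sketch, assuming per-admission APACHE III predicted probabilities are available:

```python
def standardized_mortality_ratio(observed_deaths, predicted_probs):
    """SMR = observed deaths / expected deaths.

    observed_deaths : count of deaths actually observed in the cohort
    predicted_probs : iterable of model-predicted death probabilities,
                      one per admission (e.g., from APACHE III)
    """
    expected_deaths = sum(predicted_probs)
    return observed_deaths / expected_deaths

# Hypothetical example: 148 observed deaths against predicted probabilities
# summing to 146 gives an SMR of roughly 1.01.
```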
Abstract:
Today, the standard approach for the kinetic analysis of dynamic PET studies is compartment models, in which the tracer and its metabolites are confined to a few well-mixed compartments. We examine whether the standard model is suitable for modern PET data or whether theories including more physiologic realism can advance the interpretation of dynamic PET data. A more detailed microvascular theory is developed for intravascular tracers in single-capillary and multiple-capillary systems. The microvascular models, which account for concentration gradients in capillaries, are validated and compared with the standard model in a pig liver study. Methods: Eight pigs underwent a 5-min dynamic PET study after O-15-carbon monoxide inhalation. Throughout each experiment, hepatic arterial blood and portal venous blood were sampled, and flow was measured with transit-time flow meters. The hepatic dual-inlet concentration was calculated as the flow-weighted inlet concentration. Dynamic PET data were analyzed with a traditional single-compartment model and 2 microvascular models. Results: Microvascular models provided a better fit of the tissue activity of an intravascular tracer than did the compartment model. In particular, the early dynamic phase after a tracer bolus injection was much improved. The regional hepatic blood flow estimates provided by the microvascular models (1.3 +/- 0.3 mL min(-1) mL(-1) for the single-capillary model and 1.14 +/- 0.14 mL min(-1) mL(-1) for the multiple-capillary model) (mean +/- SEM mL of blood min(-1) mL of liver tissue(-1)) were in agreement with the total blood flow measured by flow meters and normalized to liver weight (1.03 +/- 0.12 mL min(-1) mL(-1)). Conclusion: Compared with the standard compartment model, the 2 microvascular models provide a superior description of tissue activity after an intravascular tracer bolus injection. The microvascular models include only parameters with a clear-cut physiologic interpretation and are applicable to capillary beds in any organ. In this study, the microvascular models were validated for the liver and provided quantitative regional flow estimates in agreement with flow measurements.
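The flow-weighted dual-inlet concentration described above can be written explicitly; a sketch with assumed notation, where F_a and F_p are the hepatic arterial and portal venous flows and C_a(t) and C_p(t) the sampled concentrations:

```latex
% Flow-weighted hepatic dual-inlet concentration (notation assumed for illustration)
\[
  C_{\mathrm{in}}(t) \;=\; \frac{F_a\, C_a(t) \;+\; F_p\, C_p(t)}{F_a + F_p},
\]
% with F_a and F_p taken from the transit-time flow meter measurements.
```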
Abstract:
Predictions of flow patterns in a 600-mm scale model SAG mill made using four classes of discrete element method (DEM) models are compared to experimental photographs. The accuracy of the various models is assessed using quantitative data on shoulder, toe and vortex center positions taken from ensembles of both experimental and simulation results. These detailed comparisons reveal the strengths and weaknesses of the various models for simulating mills and allow the effect of different modelling assumptions to be quantitatively evaluated. In particular, very close agreement is demonstrated between the full 3D model (including the end wall effects) and the experiments. It is also demonstrated that the traditional two-dimensional circular-particle DEM model under-predicts the shoulder, toe and vortex center positions by around 10 degrees, and also under-predicts the power draw. The effect of particle shape and the dimensionality of the model are also assessed, with particle shape predominantly affecting the shoulder position while the dimensionality of the model affects mainly the toe position. Crown Copyright (C) 2003 Published by Elsevier Science B.V. All rights reserved.
Abstract:
Glycogen-accumulating organisms (GAO) have the potential to compete directly with polyphosphate-accumulating organisms (PAO) in EBPR systems, as both are able to take up VFA anaerobically and grow on the intracellular storage products aerobically. Under anaerobic conditions GAO hydrolyse glycogen to gain energy and reducing equivalents to take up VFA and to synthesise polyhydroxyalkanoate (PHA). In the subsequent aerobic stage, PHA is oxidised to gain energy for glycogen replenishment (from PHA) and for cell growth. This article describes a complete anaerobic and aerobic model for GAO based on the understanding of their metabolic pathways. The anaerobic model has been developed and reported previously, while the aerobic metabolic model was developed in this study. It is based on the assumption that acetyl-CoA and propionyl-CoA go through the catabolic and anabolic processes independently. Experimental validation shows that the integrated model can predict the anaerobic and aerobic results very well. It was found in this study that at pH 7 the maximum acetate uptake rate of GAO was slower than that reported for PAO in the anaerobic stage. On the other hand, the net biomass production per C-mol acetate added is about 9% higher for GAO than for PAO. This would indicate that PAO and GAO each have certain competitive advantages during different parts of the anaerobic/aerobic process cycle. (C) 2002 Wiley Periodicals, Inc.
Abstract:
Despite its widespread use, the Coale-Demeny model life table system does not capture the extensive variation in age-specific mortality patterns observed in contemporary populations, particularly those of the countries of Eastern Europe and populations affected by HIV/AIDS. Although relational mortality models, such as the Brass logit system, can identify these variations, these models show systematic bias in their predictive ability as mortality levels depart from the standard. We propose a modification of the two-parameter Brass relational model. The modified model incorporates two additional age-specific correction factors (gamma(x) and theta(x)) based on mortality levels among children and adults, relative to the standard. Tests of predictive validity show deviations in age-specific mortality rates predicted by the proposed system to be 30-50 per cent lower than those predicted by the Coale-Demeny system and 15-40 per cent lower than those predicted using the original Brass system. The modified logit system is a two-parameter system, parameterized using values of l(5) and l(60).
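For reference, the two-parameter Brass system on which the modification builds maps a standard survivorship schedule l_s(x) to a predicted schedule through the logit transform; the schematic form below of the modified system is an assumed illustration only, with gamma(x) and theta(x) scaled by child and adult mortality relative to the standard (the exact weighting is that of the proposed system and is not reproduced here):

```latex
% Standard Brass two-parameter relational logit system
\[
  Y(x) = \alpha + \beta\, Y_s(x),
  \qquad
  Y(x) = \tfrac{1}{2}\,\ln\!\left(\frac{1 - l(x)}{l(x)}\right).
\]
% Schematic form of the proposed modification (illustrative only):
\[
  Y(x) = \alpha + \beta\, Y_s(x) + \gamma(x)\, c + \theta(x)\, a,
\]
% where c and a index child (l(5)) and adult (l(60)) mortality of the
% population relative to the standard schedule.
```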
Abstract:
Orthotopic liver retransplantation (re-OLT) is highly controversial. The objectives of this study were to determine the validity of a recently developed United Network for Organ Sharing (UNOS) multivariate model using an independent cohort of patients undergoing re-OLT outside the United States, to determine whether incorporation of other variables that were incomplete in the UNOS registry would provide additional prognostic information, to develop new models combining data sets from both cohorts, and to evaluate the validity of the model for end-stage liver disease (MELD) in patients undergoing re-OLT. Two hundred eighty-one adult patients undergoing re-OLT (between 1986 and 1999) at 6 foreign transplant centers comprised the validation cohort. We found good agreement between actual survival and predicted survival in the validation cohort; 1-year patient survival rates in the low-, intermediate-, and high-risk groups (as assigned by the original UNOS model) were 72%, 68%, and 36%, respectively (P < .0001). In the patients for whom the international normalized ratio (INR) of prothrombin time was available, MELD correlated with outcome following re-OLT; the median MELD scores for patients surviving at least 90 days compared with those dying within 90 days were 20.75 versus 25.9, respectively (P = .004). Utilizing both patient cohorts (n = 979), a new model, based on recipient age, total serum bilirubin, creatinine, and interval to re-OLT, was constructed (whole model χ(2) = 105, P < .0001). Using the c-statistic with 30-day, 90-day, 1-year, and 3-year mortality as the end points, the area under the receiver operating characteristic (ROC) curves for 4 different models were compared. In conclusion, prospective validation and use of these models as adjuncts to clinical decision making in the management of patients being considered for re-OLT are warranted.
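For context, the MELD score referred to above is conventionally computed from serum bilirubin, creatinine and the INR; a commonly cited standard form (not specific to this study) is:

```latex
% Conventional MELD score (standard published form, shown for reference only)
\[
  \mathrm{MELD} = 3.78\,\ln(\text{bilirubin, mg/dL})
                + 9.57\,\ln(\text{creatinine, mg/dL})
                + 11.2\,\ln(\mathrm{INR})
                + 6.43 .
\]
```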
Abstract:
Mineral processing plants use two main processes: comminution and separation. The objective of the comminution process is to break complex particles consisting of numerous minerals into smaller, simpler particles where individual particles consist primarily of only one mineral. The process in which the mineral composition distribution in particles changes due to breakage is called 'liberation'. The purpose of separation is to separate particles consisting of valuable mineral from those containing nonvaluable mineral. The energy required to break particles to fine sizes is expensive, and therefore the mineral processing engineer must design the circuit so that the breakage of liberated particles is reduced in favour of breaking composite particles. In order to effectively optimize a circuit through simulation it is necessary to predict how the mineral composition distributions change due to comminution. Such a model is called a 'liberation model for comminution'. It was generally considered that such a model should incorporate information about the ore, such as the texture. However, the relationship between the feed and product particles can be estimated using a probability method, with the probability being defined as the probability that a feed particle of a particular composition and size will form a particular product particle of a particular size and composition. The model is based on maximizing the entropy of the probability subject to mass constraints and a composition constraint. Not only does this methodology allow a liberation model to be developed for binary particles, but also for particles consisting of many minerals. Results from applying the model to real plant ore are presented. A laboratory ball mill was used to break particles. The results from this experiment were used to estimate the kernel which represents the relationship between parent and progeny particles. A second feed, consisting primarily of heavy particles subsampled from the main ore, was then ground through the same mill. The results from the first experiment were used to predict the product of the second experiment. The agreement between the predicted results and the actual results is very good. More extensive validation is therefore recommended to fully evaluate the method. (C) 2003 Elsevier Ltd. All rights reserved.
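The entropy-maximization step can be stated generically; the sketch below uses illustrative notation (p_{ij} being the probability that a feed particle in composition class i produces a progeny particle in composition class j at a given size) and is not the paper's exact constraint set:

```latex
% Generic maximum-entropy formulation of the liberation kernel (illustrative)
\[
  \max_{p_{ij}} \; H = -\sum_{i,j} p_{ij}\,\ln p_{ij}
  \qquad \text{subject to} \qquad
  \sum_{j} p_{ij} = 1 \;\; \forall i,
\]
% plus linear constraints expressing conservation of mass and of mineral
% composition between the parent (feed) and progeny (product) populations;
% solving with Lagrange multipliers yields an exponential-family kernel.
```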
Abstract:
Objective: The Temptation and Restraint Inventory (TRI) is commonly used to measure drinking restraint in relation to problem drinking behavior. However, as yet the TRI has not been validated in a clinical group with alcohol dependence. Method: Male (n = 111) and female (n = 57) inpatients with DSM-IV diagnosed alcohol dependence completed the TRI and measures of problem drinking severity, including the Alcohol Dependence Scale and the quantity, frequency and week total of alcohol consumed. Results: The factor structure of the TRI was replicated in the alcohol dependent sample. Cognitive Emotional Preoccupation (CEP), one of the two higher order factors of the TRI, demonstrated sound predictive power toward all dependence severity indices. The other higher order factor, Cognitive Behavioral Control (CBC), was related to frequency of drinking. There was limited support for the CEP/CBC interactional model of drinking restraint. Conclusions: Although the construct validity of the TRI was sound, the measure appears more useful in understanding the development, maintenance and severity of alcohol-related problems in nondependent drinkers. The TRI may show promise in detecting either continuous drinking or heavy episodic type dependent drinkers.
Abstract:
Aims: The aims of this study are to develop and validate a measure to screen for a range of gambling-related cognitions (GRC) in gamblers. Design and participants: A total of 968 volunteers were recruited from a community-based population. They were divided randomly into two groups. Principal axis factoring with varimax rotation was performed on group one and confirmatory factor analysis (CFA) was used on group two to confirm the best-fitting solution. Measurements: The Gambling Related Cognition Scale (GRCS) was developed for this study and the South Oaks Gambling Screen (SOGS), the Motivation Towards Gambling Scale (MTGS) and the Depression Anxiety Stress Scale (DASS-21) were used for validation. Findings: Exploratory factor analysis performed using half the sample indicated five factors, which included interpretative control/bias (GRCS-IB), illusion of control (GRCS-IC), predictive control (GRCS-PC), gambling-related expectancies (GRCS-GE) and a perceived inability to stop gambling (GRCS-IS). These accounted for 70% of the total variance. Using the other half of the sample, CFA confirmed that the five-factor solution fitted the data most effectively. Cronbach's alpha coefficients for the factors ranged from 0.77 to 0.91, and 0.93 for the overall scale. Conclusions: This paper demonstrated that the 23-item GRCS has good psychometric properties and thus is a useful instrument for identifying GRC among non-clinical gamblers. It provides the first step towards devising/adapting similar tools for problem gamblers as well as developing more specialized instruments to assess particular domains of GRC.
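The internal-consistency coefficients quoted above are Cronbach's alpha, whose standard definition for a k-item scale is:

```latex
% Cronbach's alpha for a k-item scale (standard definition)
\[
  \alpha = \frac{k}{k-1}
           \left( 1 - \frac{\sum_{i=1}^{k} \sigma^2_i}{\sigma^2_{\mathrm{total}}} \right),
\]
% where \sigma^2_i is the variance of item i and \sigma^2_{total} is the
% variance of the total scale score across respondents.
```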
Abstract:
The urge to gamble is a physiological, psychological, or emotional motivational state, often associated with continued gambling. The authors developed and validated the 6-item Gambling Urge Questionnaire (GUS), which was based on the 8-item Alcohol Urge Questionnaire (M. J. Bohn, D. D. Krahn, & B. A. Staehler, 1995), using 968 community-based participants. Exploratory factor analysis using half of the sample indicated a 1-factor solution that accounted for 55.18% of the total variance. This was confirmed using confirmatory factor analysis with the other half of the sample. The GUS had a Cronbach's alpha coefficient of .81. Concurrent, predictive, and criterion-related validity of the GUS were good, suggesting that the GUS is a valid and reliable instrument for assessing gambling urges among nonclinical gamblers.
Abstract:
Sorghum is the main dryland summer crop in NE Australia and a number of agricultural businesses would benefit from an ability to forecast production likelihood at regional scale. In this study we sought to develop a simple agro-climatic modelling approach for predicting shire (statistical local area) sorghum yield. Actual shire yield data, available for the period 1983-1997 from the Australian Bureau of Statistics, were used to train the model. Shire yield was related to a water stress index (SI) that was derived from the agro-climatic model. The model involved a simple fallow and crop water balance that was driven by climate data available at recording stations within each shire. Parameters defining the soil water holding capacity, maximum number of sowings (MXNS) in any year, planting rainfall requirement, and critical period for stress during the crop cycle were optimised as part of the model fitting procedure. Cross-validated correlations (CVR) ranged from 0.5 to 0.9 at shire scale. When aggregated to regional and national scales, 78-84% of the annual variation in sorghum yield was explained. The model was used to examine trends in sorghum productivity and the approach to using it in an operational forecasting system was outlined. (c) 2005 Elsevier B.V. All rights reserved.
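A minimal sketch of the kind of bucket water balance and stress index described above is given below; all parameter names and values are hypothetical and the study's fitted parameters (soil capacity, planting rainfall requirement, critical window) are not reproduced:

```python
def stress_index(rain, evap_demand, capacity=150.0, init_sw=50.0):
    """Toy single-bucket fallow/crop water balance (illustrative only).

    rain, evap_demand : daily rainfall and potential evapotranspiration (mm)
    capacity          : plant-available water holding capacity of the soil (mm)
    Returns a supply/demand index in [0, 1]: 1 = no water limitation,
    0 = complete water stress, averaged over the period supplied.
    """
    sw = init_sw
    daily_ratio = []
    for p, et0 in zip(rain, evap_demand):
        sw = min(sw + p, capacity)       # infiltration; excess beyond capacity is lost
        supply = min(sw, et0)            # water extractable to meet today's demand
        sw -= supply
        daily_ratio.append(supply / et0 if et0 > 0 else 1.0)
    return sum(daily_ratio) / len(daily_ratio)
```

Shire yield would then be regressed on such an index, with the soil, sowing and critical-period parameters treated as quantities to be optimised during model fitting, as described in the study.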
A simulation model of cereal-legume intercropping systems for semi-arid regions I. Model development
Abstract:
Cereal-legume intercropping plays an important role in subsistence food production in developing countries, especially in situations of limited water resources. Crop simulation can be used to assess risk for intercrop productivity over time and space. In this study, a simple model for intercropping was developed for cereal and legume growth and yield, under semi-arid conditions. The model is based on radiation interception and use, and incorporates a water stress factor. Total dry matter and yield are functions of photosynthetically active radiation (PAR), the fraction of radiation intercepted and radiation use efficiency (RUE). One of two PAR sub-models was used to estimate PAR from solar radiation; either PAR is 50% of solar radiation or the ratio of PAR to solar radiation (PAR/SR) is a function of the clearness index (K-T). The fraction of radiation intercepted was calculated either based on Beer's Law with crop extinction coefficients (K) from field experiments or from previous reports. RUE was calculated as a function of available soil water to a depth of 900 mm (ASW). Either the soil water balance method or the decay curve approach was used to determine ASW. Thus, two alternatives for each of three factors, i.e., PAR/SR, K and ASW, were considered, giving eight possible models (2^3 combinations). The model calibration and validation were carried out with maize-bean intercropping systems using data collected in a semi-arid region (Bloemfontein, Free State, South Africa) during seven growing seasons (1996/1997-2002/2003). The combination of PAR estimated from the clearness index, a crop extinction coefficient from the field experiment and the decay curve model gave the most reasonable and acceptable result. The intercrop model developed in this study is simple, so this modelling approach can be employed to develop other cereal-legume intercrop models for semi-arid regions. (c) 2004 Elsevier B.V. All rights reserved.
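The core relationships named above (Beer's law interception and radiation use efficiency modified by soil water) take a standard form; the schematic version below assumes generic notation, with the stress response f(ASW) and all coefficients left as quantities fitted in the study:

```latex
% Schematic radiation-interception growth model (standard relationships, illustrative)
\[
  \Delta\mathrm{DM} = \mathrm{RUE} \times f_{\mathrm{PAR}} \times \mathrm{PAR},
  \qquad
  f_{\mathrm{PAR}} = 1 - e^{-K\,\mathrm{LAI}},
  \qquad
  \mathrm{RUE} = \mathrm{RUE}_{\max}\, f(\mathrm{ASW}),
\]
% with PAR taken either as 0.5 x solar radiation or from a PAR/SR ratio
% expressed as a function of the clearness index K_T.
```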
Abstract:
The schema of an information system can significantly impact the ability of end users to efficiently and effectively retrieve the information they need. Obtaining quickly the appropriate data increases the likelihood that an organization will make good decisions and respond adeptly to challenges. This research presents and validates a methodology for evaluating, ex ante, the relative desirability of alternative instantiations of a model of data. In contrast to prior research, each instantiation is based on a different formal theory. This research theorizes that the instantiation that yields the lowest weighted average query complexity for a representative sample of information requests is the most desirable instantiation for end-user queries. The theory was validated by an experiment that compared end-user performance using an instantiation of a data structure based on the relational model of data with performance using the corresponding instantiation of the data structure based on the object-relational model of data. Complexity was measured using three different Halstead metrics: program length, difficulty, and effort. For a representative sample of queries, the average complexity using each instantiation was calculated. As theorized, end users querying the instantiation with the lower average complexity made fewer semantic errors, i.e., were more effective at composing queries. (c) 2005 Elsevier B.V. All rights reserved.
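The three Halstead metrics used as complexity measures have standard definitions in terms of operator and operand counts; a brief sketch follows (the experiment's actual counting rules for the query languages are not reproduced):

```python
import math

def halstead_metrics(n1, n2, N1, N2):
    """Standard Halstead metrics for a program or query.

    n1, n2 : number of distinct operators and distinct operands
    N1, N2 : total occurrences of operators and operands
    Returns (length, difficulty, effort), the three measures used for the
    complexity comparison.
    """
    length = N1 + N2                         # program length N
    vocabulary = n1 + n2                     # vocabulary n
    volume = length * math.log2(vocabulary)  # volume V = N * log2(n)
    difficulty = (n1 / 2) * (N2 / n2)        # difficulty D
    effort = difficulty * volume             # effort E = D * V
    return length, difficulty, effort
```

Averaging such measures over a representative sample of queries gives the weighted average complexity on which the two schema instantiations are compared.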