190 results for Predictive model


Relevance: 30.00%

Publisher:

Abstract:

In this second counterpoint article, we refute the claims of Landy, Locke, and Conte, and make the more specific case for our perspective, which is that ability-based models of emotional intelligence have value to add in the domain of organizational psychology. In this article, we address remaining issues, such as general concerns about the tenor and tone of the debates on this topic, a tendency for detractors to collapse across emotional intelligence models when reviewing the evidence and making judgments, and a subsequent penchant to thereby discount all models, including the ability-based one, as lacking validity. We specifically refute the following three claims from our critics with the most recent empirically based evidence: (1) emotional intelligence is dominated by opportunistic academics-turned-consultants who have amassed much fame and fortune based on a concept that is shabby science at best; (2) the measurement of emotional intelligence is grounded in unstable, psychometrically flawed instruments, which have not demonstrated appropriate discriminant and predictive validity to warrant/justify their use; and (3) there is weak empirical evidence that emotional intelligence is related to anything of importance in organizations. We thus end with an overview of the empirical evidence supporting the role of emotional intelligence in organizational and social behavior.

Relevance: 30.00%

Publisher:

Abstract:

Previous work has identified several shortcomings in the ability of four spring wheat and one barley model to simulate crop processes and resource utilization. This can have important implications when such models are used within systems models where final soil water and nitrogen conditions of one crop define the starting conditions of the following crop. In an attempt to overcome these limitations and to reconcile a range of modelling approaches, existing model components that worked demonstrably well were combined with new components for aspects where existing capabilities were inadequate. This resulted in the Integrated Wheat Model (I_WHEAT), which was developed as a module of the cropping systems model APSIM. To increase predictive capability of the model, process detail was reduced, where possible, by replacing groups of processes with conservative, biologically meaningful parameters. I_WHEAT does not contain a soil water or soil nitrogen balance. These are present as other modules of APSIM. In I_WHEAT, yield is simulated using a linear increase in harvest index whereby nitrogen or water limitations can lead to early termination of grain filling and hence cessation of harvest index increase. Dry matter increase is calculated either from the amount of intercepted radiation and radiation conversion efficiency or from the amount of water transpired and transpiration efficiency, depending on the most limiting resource. Leaf area and tiller formation are calculated from thermal time and a cultivar-specific phyllochron interval. Nitrogen limitation first reduces leaf area and then affects radiation conversion efficiency as it becomes more severe. Water or nitrogen limitations result in reduced leaf expansion, accelerated leaf senescence or tiller death. This reduces the radiation load on the crop canopy (i.e. demand for water) and can make nitrogen available for translocation to other organs. Sensitive feedbacks between light interception and dry matter accumulation are avoided by having environmental effects acting directly on leaf area development, rather than via biomass production. This makes the model more stable across environments without losing the interactions between the different external influences. When comparing model output with models tested previously using data from a wide range of agro-climatic conditions, yield and biomass predictions were equal to the best of those models, but improvements could be demonstrated for simulating leaf area dynamics in response to water and nitrogen supply, kernel nitrogen content, and total water and nitrogen use. I_WHEAT does not require calibration for any of the environments tested. Further model improvement should concentrate on improving phenology simulations, a more thorough derivation of coefficients to describe leaf area development and a better quantification of some processes related to nitrogen dynamics. (C) 1998 Elsevier Science B.V.
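
As a rough illustration of the supply-limited growth logic described above, the sketch below takes daily dry matter gain as the minimum of a radiation-limited and a water-limited estimate, with a capped linear harvest index. The function names and parameter values are invented for illustration and are not I_WHEAT or APSIM source code.

    # Hypothetical sketch of the dual-limitation biomass logic described for I_WHEAT.
    # Parameter names and values are illustrative assumptions, not APSIM code.

    def daily_biomass_gain(intercepted_radiation_mj,   # MJ/m2/day
                           transpiration_mm,           # mm/day
                           rue=1.2,                    # g biomass per MJ intercepted (assumed)
                           te_coefficient=4.5,         # g biomass per mm water per kPa (assumed)
                           vpd_kpa=1.5):               # daytime vapour pressure deficit (assumed)
        """Return g/m2/day as the minimum of radiation- and water-limited growth."""
        radiation_limited = intercepted_radiation_mj * rue
        water_limited = transpiration_mm * te_coefficient / vpd_kpa
        return min(radiation_limited, water_limited)

    def harvest_index(days_since_anthesis, dhi_dt=0.018, hi_max=0.55):
        """Linear harvest-index increase, capped at hi_max; stress-induced termination of
        grain filling simply freezes days_since_anthesis in the calling code."""
        return min(hi_max, dhi_dt * days_since_anthesis)

    # Example: a well-watered day (radiation-limited) vs a droughted day (water-limited)
    print(daily_biomass_gain(8.0, 6.0))   # 9.6 g/m2
    print(daily_biomass_gain(8.0, 1.5))   # 4.5 g/m2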

Relevance: 30.00%

Publisher:

Abstract:

The Montreal Process indicators are intended to provide a common framework for assessing and reviewing progress toward sustainable forest management. The potential of a combined geometrical-optical/spectral mixture analysis model was assessed for mapping the Montreal Process age class and successional age indicators at a regional scale using Landsat Thematic Mapper data. The project location is an area of eucalyptus forest in Emu Creek State Forest, Southeast Queensland, Australia. A quantitative model was applied that relates the spectral reflectance of a forest to the illumination geometry, the slope and aspect of the terrain surface, and the size, shape, and density of the trees and their canopies. Inversion of this model necessitated the use of spectral mixture analysis to recover subpixel information on the fractional extent of ground scene elements (such as sunlit canopy, shaded canopy, sunlit background, and shaded background). Results obtained from a sensitivity analysis allowed improved allocation of resources to maximize the predictive accuracy of the model. It was found that modeled estimates of crown cover projection, canopy size, and tree densities had significant agreement with field and air photo-interpreted estimates. However, the accuracy of the successional stage classification was limited. The results obtained highlight the potential for future integration of high and moderate spatial resolution imaging sensors for monitoring forest structure and condition. (C) Elsevier Science Inc., 2000.
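
The subpixel unmixing step referred to above is, in its linear form, a small constrained least-squares problem solved per pixel. The sketch below shows the idea with invented endmember spectra and an invented pixel spectrum; it is not the study's data or code.

    # Illustrative linear spectral mixture analysis; endmember reflectances are invented.
    import numpy as np
    from scipy.optimize import nnls

    # Columns: sunlit canopy, shaded canopy, sunlit background, shaded background.
    # Rows: reflectance in six Landsat TM reflective bands (assumed values).
    endmembers = np.array([
        [0.04, 0.02, 0.10, 0.05],
        [0.06, 0.03, 0.14, 0.07],
        [0.05, 0.02, 0.20, 0.09],
        [0.45, 0.18, 0.30, 0.12],
        [0.22, 0.09, 0.35, 0.15],
        [0.12, 0.05, 0.28, 0.11],
    ])

    pixel = np.array([0.10, 0.13, 0.14, 0.33, 0.25, 0.17])  # observed pixel spectrum (assumed)

    # Non-negative least squares; an appended row of ones softly enforces sum-to-one.
    A = np.vstack([endmembers, np.ones((1, 4))])
    b = np.append(pixel, 1.0)
    fractions, residual = nnls(A, b)
    print(dict(zip(["sunlit_canopy", "shaded_canopy", "sunlit_bg", "shaded_bg"],
                   fractions.round(3))))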

Relevance: 30.00%

Publisher:

Abstract:

The Agricultural Production Systems sIMulator, APSIM, is a cropping system modelling environment that simulates the dynamics of soil-plant-management interactions within a single crop or a cropping system. Adaptation of previously developed crop models has resulted in multiple crop modules in APSIM, which have low scientific transparency and code efficiency. A generic crop model template (GCROP) has been developed to capture unifying physiological principles across crops (plant types) and to provide modular and efficient code for crop modelling. It comprises a standard crop interface to the APSIM engine, a generic crop model structure, a crop process library, and well-structured crop parameter files. The process library contains the major science underpinning the crop models and incorporates generic routines based on physiological principles for growth and development processes that are common across crops. It allows APSIM to simulate different crops using the same set of computer code. The generic model structure and parameter files provide an easy way to test, modify, exchange and compare modelling approaches at process level without necessitating changes in the code. The standard interface generalises the model inputs and outputs, and utilises a standard protocol to communicate with other APSIM modules through the APSIM engine. The crop template serves as a convenient means to test new insights and compare approaches to component modelling, while maintaining a focus on predictive capability. This paper describes and discusses the scientific basis, the design, implementation and future development of the crop template in APSIM. On this basis, we argue that the combination of good software engineering with sound crop science can enhance the rate of advance in crop modelling. Crown Copyright (C) 2002 Published by Elsevier Science B.V. All rights reserved.
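
The design described above, one shared process library with crops distinguished only by parameter sets, can be caricatured in a few lines of Python. The class, parameter names and values below are invented for illustration and do not correspond to APSIM's actual code or API.

    # Hypothetical sketch of a generic crop template: one set of process routines,
    # different crops supplied purely as parameter sets. Not APSIM source code.

    WHEAT_PARAMS = {"phyllochron": 95.0, "rue": 1.24, "base_temp": 0.0}    # assumed values
    SORGHUM_PARAMS = {"phyllochron": 66.0, "rue": 1.60, "base_temp": 8.0}  # assumed values

    class GenericCrop:
        """Generic crop module: process code is shared, parameters are per-crop."""
        def __init__(self, params):
            self.p = params
            self.thermal_time = 0.0
            self.leaf_number = 0.0
            self.biomass = 0.0

        def step(self, mean_temp, intercepted_radiation):
            """One daily time step using generic thermal-time and RUE routines."""
            dtt = max(0.0, mean_temp - self.p["base_temp"])
            self.thermal_time += dtt
            self.leaf_number = self.thermal_time / self.p["phyllochron"]
            self.biomass += intercepted_radiation * self.p["rue"]

    wheat = GenericCrop(WHEAT_PARAMS)
    sorghum = GenericCrop(SORGHUM_PARAMS)
    for day in range(30):
        wheat.step(mean_temp=15.0, intercepted_radiation=6.0)
        sorghum.step(mean_temp=25.0, intercepted_radiation=8.0)
    print(round(wheat.leaf_number, 1), round(sorghum.leaf_number, 1))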

Relevance: 30.00%

Publisher:

Abstract:

We report the first steps of a collaborative project between the University of Queensland, Polyflow, Michelin, SK Chemicals, and RMIT University on the simulation, validation and application of a recently introduced constitutive model designed to describe branched polymers. Whereas much progress has been made on predicting the complex flow behaviour of many - in particular linear - polymers, it sometimes appears difficult to predict simultaneously shear thinning and extensional strain hardening behaviour using traditional constitutive models. Recently a new viscoelastic model based on molecular topology was proposed by McLeish and Larson (1998). We explore the predictive power of a differential multi-mode version of the pom-pom model for the flow behaviour of two commercial polymer melts: a (long-chain branched) low-density polyethylene (LDPE) and a (linear) high-density polyethylene (HDPE). The model responses are compared to elongational recovery experiments published by Langouche and Debbaut (1999), and to start-up of simple shear flow and stress relaxation after simple and reverse step strain experiments carried out in our laboratory.
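
For orientation, the single-mode differential pom-pom equations are commonly written in roughly the following schematic form (notation and details vary between formulations; see McLeish and Larson (1998) for the exact model):

    \frac{D\mathbf{A}}{Dt} = \boldsymbol{\kappa}\cdot\mathbf{A} + \mathbf{A}\cdot\boldsymbol{\kappa}^{T} - \frac{1}{\lambda_{b}}\left(\mathbf{A} - \tfrac{1}{3}\mathbf{I}\right), \qquad \mathbf{S} = \frac{\mathbf{A}}{\operatorname{tr}\mathbf{A}}

    \frac{D\lambda}{Dt} = \lambda\,(\boldsymbol{\kappa}:\mathbf{S}) - \frac{1}{\lambda_{s}}\,(\lambda - 1), \qquad \lambda \le q

    \boldsymbol{\sigma} = \sum_{i} 3\,G_{i}\,\lambda_{i}^{2}\,\mathbf{S}_{i}

Here S is the backbone orientation tensor (obtained from the auxiliary tensor A), lambda the backbone stretch capped at the number of dangling arms q, kappa the velocity gradient, and lambda_b and lambda_s the orientation and stretch relaxation times; the sum runs over the modes of the multi-mode version fitted to melts such as the LDPE and HDPE studied here.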

Relevance: 30.00%

Publisher:

Abstract:

Background/Aims: Insulin resistance and systemic hypertension are predictors of advanced fibrosis in obese patients with non-alcoholic fatty liver disease (NAFLD). Genetic factors may also be important. We hypothesize that high angiotensinogen (AT) and transforming growth factor-beta1 (TGF-beta1) producing genotypes increase the risk of liver fibrosis in obese subjects with NAFLD. Methods: One hundred and five of 130 consecutive severely obese patients having a liver biopsy at the time of laparoscopic obesity surgery agreed to have genotype analysis. Influence of specific genotype or combination of genotypes on the stage of hepatic fibrosis was assessed after controlling for known risk factors. Results: There was no fibrosis in 70 (67%), stages 1-2 in 21 (20%) and stages 3-4 fibrosis in 14 (13%) of subjects. There was no relationship between either high AT or TGF-beta1 producing genotypes alone and hepatic fibrosis after controlling for confounding factors. However, advanced hepatic fibrosis occurred in five of 13 subjects (odds ratio 5.7, 95% confidence interval 1.5-21.2, P = 0.005) who inherited both high AT and TGF-beta1 producing polymorphisms. Conclusions: The combination of high AT and TGF-beta1 producing polymorphisms is associated with advanced hepatic fibrosis in obese patients with NAFLD. These findings support the hypothesis that angiotensin II stimulated TGF-beta1 production may promote hepatic fibrosis. (C) 2003 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.
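
For illustration, the reported odds ratio can be approximately reproduced from the counts quoted in the abstract if one assumes the comparison group is the remaining 92 subjects, of whom about 9 had stage 3-4 fibrosis. The 2x2 cell counts below are therefore reconstructed assumptions, not reported data.

    # Reconstructed 2x2 table (assumption): 5/13 advanced fibrosis with both high-producing
    # polymorphisms vs roughly 9/92 in the remaining subjects (14 advanced cases in total).
    import math

    a, b = 5, 13 - 5      # both polymorphisms: advanced fibrosis yes / no
    c, d = 9, 92 - 9      # remaining subjects: advanced fibrosis yes / no

    odds_ratio = (a / b) / (c / d)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
    print(f"OR = {odds_ratio:.1f}, 95% CI {lo:.1f}-{hi:.1f}")
    # roughly 5.8 (1.6-21.4), close to the reported 5.7 (1.5-21.2)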

Relevance: 30.00%

Publisher:

Abstract:

Use of nonlinear parameter estimation techniques is now commonplace in ground water model calibration. However, there is still ample room for further development of these techniques in order to enable them to extract more information from calibration datasets, to more thoroughly explore the uncertainty associated with model predictions, and to make them easier to implement in various modeling contexts. This paper describes the use of pilot points as a methodology for spatial hydraulic property characterization. When used in conjunction with nonlinear parameter estimation software that incorporates advanced regularization functionality (such as PEST), use of pilot points can add a great deal of flexibility to the calibration process at the same time as it makes this process easier to implement. Pilot points can be used either as a substitute for zones of piecewise parameter uniformity, or in conjunction with such zones. In either case, they allow the disposition of areas of high and low hydraulic property value to be inferred through the calibration process, without the need for the modeler to guess the geometry of such areas prior to estimating the parameters that pertain to them. Pilot points and regularization can also be used as an adjunct to geostatistically based stochastic parameterization methods. Using the techniques described herein, a series of hydraulic property fields can be generated, all of which recognize the stochastic characterization of an area at the same time that they satisfy the constraints imposed on hydraulic property values by the need to ensure that model outputs match field measurements. Model predictions can then be made using all of these fields as a mechanism for exploring predictive uncertainty.
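
In its simplest form, the pilot-point idea is to estimate property values at a modest number of scattered locations and interpolate them to every model cell, so that the adjustable parameters are the pilot-point values rather than cell-by-cell properties. A minimal sketch follows, using inverse-distance weighting in place of the kriging typically used with PEST; coordinates and values are invented.

    # Minimal pilot-point parameterization sketch: interpolate log-K from pilot points
    # to a model grid. Inverse-distance weighting is used here for brevity; PEST-style
    # workflows typically use kriging. All coordinates and values are invented.
    import numpy as np

    pilot_xy = np.array([[100., 200.], [400., 150.], [250., 450.], [480., 420.]])
    pilot_logK = np.array([-4.0, -3.2, -5.1, -3.8])   # log10 hydraulic conductivity (m/s)

    def interpolate_field(nx=50, ny=50, dx=10.0, power=2.0):
        xs = (np.arange(nx) + 0.5) * dx
        ys = (np.arange(ny) + 0.5) * dx
        gx, gy = np.meshgrid(xs, ys)
        field = np.zeros_like(gx)
        for j in range(ny):
            for i in range(nx):
                d = np.hypot(pilot_xy[:, 0] - gx[j, i], pilot_xy[:, 1] - gy[j, i])
                w = 1.0 / np.maximum(d, 1e-6) ** power
                field[j, i] = np.sum(w * pilot_logK) / np.sum(w)
        return field

    logK_field = interpolate_field()
    print(logK_field.shape, logK_field.min().round(2), logK_field.max().round(2))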

Relevance: 30.00%

Publisher:

Abstract:

Aims: [1] To quantify the random and predictable components of variability for aminoglycoside clearance and volume of distribution; [2] to investigate models for predicting aminoglycoside clearance in patients with low serum creatinine concentrations; [3] to evaluate the predictive performance of initial dosing strategies for achieving an aminoglycoside target concentration. Methods: Aminoglycoside demographic, dosing and concentration data were collected from 697 adult patients (>= 20 years old) as part of standard clinical care using a target concentration intervention approach for dose individualization. It was assumed that aminoglycoside clearance had a renal and a nonrenal component, with the renal component being linearly related to predicted creatinine clearance. Results: A two-compartment pharmacokinetic model best described the aminoglycoside data. The addition of weight, age, sex and serum creatinine as covariates reduced the random component of between-subject variability (BSVR) in clearance (CL) from 94% to 36% of population parameter variability (PPV). The final pharmacokinetic parameter estimates for the model with the best predictive performance were: CL, 4.7 L/h per 70 kg; intercompartmental clearance (CLic), 1 L/h per 70 kg; volume of central compartment (V1), 19.5 L per 70 kg; volume of peripheral compartment (V2), 11.2 L per 70 kg. Conclusions: Using a fixed dose of aminoglycoside will place 35% of typical patients within 80-125% of their required dose. Covariate-guided predictions increase this proportion up to 61%. However, because we have shown that random within-subject variability (WSVR) in clearance is less than safe and effective variability (SEV), target concentration intervention can potentially achieve safe and effective doses in 90% of patients.
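
As a sketch of how such covariate relationships feed an initial dose calculation, the snippet below scales the reported typical clearance by body size and predicted creatinine clearance, then converts it to a dose for a target AUC. The allometric exponent, renal fraction, Cockcroft-Gault formula and target value are conventional assumptions for illustration, not the published model.

    # Illustrative use of population PK estimates for covariate-guided initial dosing.
    # The typical clearance value comes from the abstract; the scaling rules are
    # conventional assumptions for this sketch, not the final published model.

    def cockcroft_gault(age_yr, weight_kg, scr_umol_l, female):
        cl_cr = (140 - age_yr) * weight_kg / (0.815 * scr_umol_l)   # ml/min
        return cl_cr * (0.85 if female else 1.0)

    def predicted_clearance(age_yr, weight_kg, scr_umol_l, female,
                            cl_pop=4.7, clcr_typical=100.0, renal_fraction=0.9):
        """Scale the population clearance (L/h per 70 kg) by size and renal function."""
        size = (weight_kg / 70.0) ** 0.75
        clcr = cockcroft_gault(age_yr, weight_kg, scr_umol_l, female)
        renal = cl_pop * renal_fraction * (clcr / clcr_typical)
        nonrenal = cl_pop * (1.0 - renal_fraction)
        return (renal + nonrenal) * size          # L/h

    def dose_for_target_auc(cl_l_per_h, target_auc=80.0):
        """Dose (mg) so that AUC over the dosing interval equals dose / CL."""
        return target_auc * cl_l_per_h

    cl = predicted_clearance(age_yr=65, weight_kg=80, scr_umol_l=110, female=False)
    print(round(cl, 1), "L/h ->", round(dose_for_target_auc(cl)), "mg")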

Relevance: 30.00%

Publisher:

Abstract:

Predictive testing is one of the new genetic technologies which, in conjunction with developing fields such as pharmacogenomics, promises many benefits for preventive and population health. Understanding how individuals appraise and make genetic test decisions is increasingly relevant as the technology expands. Lay understandings of genetic risk and test decision-making, located within holistic life frameworks including family or kin relationships, may vary considerably from clinical representations of these phenomena. The predictive test for Huntington's disease (HD), whilst specific to a single-gene, serious, mature-onset but currently untreatable disorder, is regarded as a model in this context. This paper reports upon a qualitative Australian study which investigated predictive test decision-making by individuals at risk for HD, the contexts of their decisions and the appraisals which underpinned them. In-depth interviews were conducted in Australia with 16 individuals at 50% risk for HD, with variation across testing decisions, gender, age and selected characteristics. Findings suggested predictive testing was regarded as a significant life decision with important implications for self and others, while the right not to know genetic status was staunchly and unanimously defended. Multiple contexts of reference were identified within which test decisions were located, including intra- and inter-personal frameworks, family history and experience of HD, and temporality. Participants used two main criteria in appraising test options: perceived value of, or need for, the test information, for self and/or significant others, and degree to which such information could be tolerated and managed, short and long-term, by self and/or others. Selected moral and ethical considerations involved in decision-making are examined, as well as the clinical and socio-political contexts in which predictive testing is located. The paper argues that psychosocial vulnerabilities generated by the availability of testing technologies and exacerbated by policy imperatives towards individual responsibility and self-governance should be addressed at broader societal levels. (C) 2003 Elsevier Science Ltd. All rights reserved.

Relevance: 30.00%

Publisher:

Abstract:

In this paper, we assess the relative performance of the direct valuation method and industry multiplier models using 41 435 firm-quarter Value Line observations over an 11-year (1990–2000) period. Results from both pricing-error and return-prediction analyses indicate that direct valuation yields lower percentage pricing errors and greater return prediction ability than the forward price to aggregated forecasted earnings multiplier model. However, a simple hybrid combination of these two methods leads to more accurate intrinsic value estimates, compared to either method used in isolation. It would appear that fundamental analysis could benefit from using one approach as a check on the other.
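
The pricing-error metric that underlies such comparisons is simply the signed difference between an intrinsic-value estimate and price, scaled by price. A minimal sketch with invented numbers follows; the paper's actual hybrid weighting is not assumed here.

    # Generic percentage pricing error and a naive 50/50 hybrid of two value estimates.
    # All numbers are invented; the paper's specific hybrid weighting is not assumed.

    def pct_pricing_error(intrinsic_value, market_price):
        return (intrinsic_value - market_price) / market_price

    direct_value, multiple_value, price = 42.0, 55.0, 50.0   # invented example
    hybrid_value = 0.5 * direct_value + 0.5 * multiple_value

    for name, v in [("direct", direct_value), ("multiplier", multiple_value),
                    ("hybrid", hybrid_value)]:
        print(name, f"{pct_pricing_error(v, price):+.1%}")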

Relevance: 30.00%

Publisher:

Abstract:

Background: Intravenous (IV) fluid administration is an integral component of clinical care. Errors in administration can cause detrimental patient outcomes and increase healthcare costs, although little is known about medication administration errors associated with continuous IV infusions. Objectives: (1) To ascertain the prevalence of medication administration errors for continuous IV infusions and identify the variables that caused them. (2) To quantify the probability of errors by fitting a logistic regression model to the data. Methods: A prospective study was conducted on three surgical wards at a teaching hospital in Australia. All study participants received continuous infusions of IV fluids. Parenteral nutrition and non-electrolyte containing intermittent drug infusions (such as antibiotics) were excluded. Medication administration errors and contributing variables were documented using a direct observational approach. Results: Six hundred and eighty-seven observations were made, with 124 (18.0%) having at least one medication administration error. The most common error observed was wrong administration rate. The median deviation from the prescribed rate was 247 ml/h (interquartile range 275 to +33.8 ml/h). Errors were more likely to occur if an IV infusion control device was not used and as the duration of the infusion increased. Conclusions: Administration errors involving continuous IV infusions occur frequently. They could be reduced by more common use of IV infusion control devices and regular checking of administration rates.
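
A minimal sketch of the kind of logistic regression described, fitted here to simulated observations with hypothetical predictors for infusion duration and pump use; this is not the study's data or final model.

    # Illustrative logistic model of administration-error probability as a function of
    # infusion duration and whether an infusion control device was used.
    # The data below are simulated, not the study's observations.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 687
    duration_h = rng.uniform(1, 24, n)                  # hours since infusion started
    pump_used = rng.integers(0, 2, n)                   # 1 = control device in use
    logit = -2.5 + 0.08 * duration_h - 1.0 * pump_used  # assumed "true" coefficients
    error = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X = sm.add_constant(np.column_stack([duration_h, pump_used]))
    fit = sm.Logit(error, X).fit(disp=False)
    print(fit.params)                                   # intercept, duration, pump effects
    print(np.exp(fit.params[1:]))                       # odds ratios per hour / for pump use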

Relevance: 30.00%

Publisher:

Abstract:

Predictive genetic testing for serious, mature-onset genetic illness represents a unique context in health decision making. This article presents findings from an exploratory qualitative Australian-based study into the decision making of individuals at risk for Huntington's disease (HD) with regard to predictive genetic testing. Sixteen in-depth interviews were conducted with a range of at-risk individuals. Data analysis revealed four discrete decision-making positions rather than a 'to test' or 'not to test' dichotomy. A conceptual dimension of (non-)openness and (non-)engagement characterized the various decisions. Processes of decision making and a concept of 'test readiness' were identified. Findings from this research, while not generalizable, are discussed in relation to theoretical frameworks and stage models of health decision making, as well as possible clinical implications.

Relevance: 30.00%

Publisher:

Abstract:

The aim of this study was to ascertain the most suitable dosing schedule for gentamicin in patients receiving hemodialysis. We developed a model to describe the concentration-time course of gentamicin in patients receiving hemodialysis. Using the model, an optimal dosing schedule was evaluated. Various dosing regimens were compared in their ability to achieve maximum concentration (C-max >= 8 mg/L) and area under the concentration-time curve (AUC >= 70 mg·h/L and <= 120 mg·h/L per 24 hours). The model was evaluated by comparing model predictions against real data collected retrospectively. Simulations from the model confirmed the benefits of predialysis dosing. The mean optimal dose was 230 mg administered immediately before dialysis. The model was found to have good predictive performance when simulated data were compared to data observed in real patients. In summary, a model was developed that describes gentamicin pharmacokinetics in patients receiving hemodialysis. Predialysis dosing provided a superior pharmacokinetic profile to postdialysis dosing.
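
The pre- versus post-dialysis comparison can be illustrated with a toy one-compartment simulation; the volume, clearances and schedule below are invented for the sketch and are not the published model's estimates.

    # Toy one-compartment gentamicin simulation comparing a dose given immediately
    # before a 4-hour dialysis session with the same dose given after it.
    # All parameter values are assumptions for illustration only.
    import numpy as np

    V = 20.0            # L, volume of distribution (assumed)
    cl_body = 0.5       # L/h, residual (interdialytic) clearance (assumed)
    cl_dialysis = 6.0   # L/h, additional clearance during dialysis (assumed)
    dose = 230.0        # mg
    dt = 0.05           # h

    def simulate(dose_time, dialysis_start=0.0, dialysis_hours=4.0, horizon=24.0):
        times = np.arange(0.0, horizon, dt)
        amount, conc = 0.0, []
        for t in times:
            if abs(t - dose_time) < dt / 2:
                amount += dose                      # IV bolus (treated as instantaneous)
            on_dialysis = dialysis_start <= t < dialysis_start + dialysis_hours
            cl = cl_body + (cl_dialysis if on_dialysis else 0.0)
            amount -= cl / V * amount * dt          # first-order elimination
            conc.append(amount / V)
        conc = np.array(conc)
        return conc.max(), np.trapz(conc, times)

    for label, t_dose in [("pre-dialysis", 0.0), ("post-dialysis", 4.0)]:
        cmax, auc24 = simulate(t_dose)
        print(f"{label}: Cmax {cmax:.1f} mg/L, AUC0-24 {auc24:.0f} mg*h/L")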

Relevance: 30.00%

Publisher:

Abstract:

The Gauss-Marquardt-Levenberg (GML) method of computer-based parameter estimation, in common with other gradient-based approaches, suffers from the drawback that it may become trapped in local objective function minima, and thus report optimized parameter values that are not, in fact, optimized at all. This can seriously degrade its utility in the calibration of watershed models where local optima abound. Nevertheless, the method also has advantages, chief among these being its model-run efficiency, and its ability to report useful information on parameter sensitivities and covariances as a by-product of its use. It is also easily adapted to maintain this efficiency in the face of potential numerical problems (that adversely affect all parameter estimation methodologies) caused by parameter insensitivity and/or parameter correlation. The present paper presents two algorithmic enhancements to the GML method that retain its strengths, but which overcome its weaknesses in the face of local optima. Using the first of these methods an intelligent search for better parameter sets is conducted in parameter subspaces of decreasing dimensionality when progress of the parameter estimation process is slowed either by numerical instability incurred through problem ill-posedness, or when a local objective function minimum is encountered. The second methodology minimizes the chance of successive GML parameter estimation runs finding the same objective function minimum by starting successive runs at points that are maximally removed from previous parameter trajectories. As well as enhancing the ability of a GML-based method to find the global objective function minimum, the latter technique can also be used to find the locations of many non-global optima (should they exist) in parameter space. This can provide a useful means of inquiring into the well-posedness of a parameter estimation problem, and for detecting the presence of bimodal parameter and predictive probability distributions. The new methodologies are demonstrated by calibrating a Hydrological Simulation Program-FORTRAN (HSPF) model against a time series of daily flows. Comparison with the SCE-UA method in this calibration context demonstrates a high level of comparative model run efficiency for the new method. (c) 2006 Elsevier B.V. All rights reserved.
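
The damped update that these enhancements wrap around is the standard Gauss-Marquardt-Levenberg step; a bare-bones sketch on a toy exponential-decay fitting problem follows (not PEST or HSPF code).

    # Bare-bones Gauss-Marquardt-Levenberg iteration on a toy nonlinear least-squares
    # problem, to illustrate the damped update that the paper's enhancements build on.
    import numpy as np

    t = np.linspace(0.0, 10.0, 40)
    true_p = np.array([2.0, 0.3])
    obs = true_p[0] * np.exp(-true_p[1] * t)             # synthetic, noise-free observations

    def model(p):
        return p[0] * np.exp(-p[1] * t)

    def jacobian(p):
        return np.column_stack([np.exp(-p[1] * t), -p[0] * t * np.exp(-p[1] * t)])

    p, lam = np.array([1.0, 1.0]), 1e-2                  # starting values, Marquardt lambda
    for _ in range(50):
        r = obs - model(p)
        J = jacobian(p)
        H = J.T @ J + lam * np.diag(np.diag(J.T @ J))    # Marquardt damping of the normal matrix
        step = np.linalg.solve(H, J.T @ r)
        if np.sum((obs - model(p + step)) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam * 0.5                 # accept step, relax damping
        else:
            lam *= 10.0                                  # reject step, increase damping
    print(p)   # approaches [2.0, 0.3] from this starting point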

Relevance: 30.00%

Publisher:

Abstract:

Calibration of a groundwater model requires that hydraulic properties be estimated throughout a model domain. This generally constitutes an underdetermined inverse problem, for which a solution can only be found when some kind of regularization device is included in the inversion process. Inclusion of regularization in the calibration process can be implicit, for example through the use of zones of constant parameter value, or explicit, for example through solution of a constrained minimization problem in which parameters are made to respect preferred values, or preferred relationships, to the degree necessary for a unique solution to be obtained. The cost of uniqueness is this: no matter which regularization methodology is employed, the inevitable consequence of its use is a loss of detail in the calibrated field. This, in turn, can lead to erroneous predictions made by a model that is ostensibly well calibrated. Information made available as a by-product of the regularized inversion process allows the reasons for this loss of detail to be better understood. In particular, it is easily demonstrated that the estimated value for a hydraulic property at any point within a model domain is, in fact, a weighted average of the true hydraulic property over a much larger area. This averaging process causes loss of resolution in the estimated field. Where hydraulic conductivity is the hydraulic property being estimated, high averaging weights exist in areas that are strategically disposed with respect to measurement wells, while other areas may contribute very little to the estimated hydraulic conductivity at any point within the model domain, thus possibly making the detection of hydraulic conductivity anomalies in these latter areas almost impossible. A study of the post-calibration parameter field covariance matrix allows further insights into the loss of system detail incurred through the calibration process to be gained. A comparison of pre- and post-calibration parameter covariance matrices shows that the latter often possess a much smaller spectral bandwidth than the former. It is also demonstrated that, as an inevitable consequence of the fact that a calibrated model cannot replicate every detail of the true system, model-to-measurement residuals can show a high degree of spatial correlation, a fact which must be taken into account when assessing these residuals either qualitatively, or quantitatively in the exploration of model predictive uncertainty. These principles are demonstrated using a synthetic case in which spatial parameter definition is based on pilot points, and calibration is implemented using both zones of piecewise constancy and constrained minimization regularization. (C) 2005 Elsevier Ltd. All rights reserved.
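
The statement that an estimated property value is a weighted average of the true property over a larger area can be made explicit through the resolution matrix of a generic Tikhonov-regularized least-squares inversion (schematic notation, not the paper's own):

    \hat{\mathbf{p}} = \left(\mathbf{J}^{T}\mathbf{Q}\mathbf{J} + \beta^{2}\mathbf{T}^{T}\mathbf{T}\right)^{-1}\mathbf{J}^{T}\mathbf{Q}\,\mathbf{h}_{\mathrm{obs}}

    E[\hat{\mathbf{p}}] = \mathbf{R}\,\mathbf{p}_{\mathrm{true}}, \qquad \mathbf{R} = \left(\mathbf{J}^{T}\mathbf{Q}\mathbf{J} + \beta^{2}\mathbf{T}^{T}\mathbf{T}\right)^{-1}\mathbf{J}^{T}\mathbf{Q}\mathbf{J}

Here J is the sensitivity (Jacobian) matrix, Q the observation weight matrix, T the regularization operator and beta its weight; row i of the resolution matrix R holds the averaging weights through which the true parameter field maps onto the i-th estimated parameter, and the departure of R from the identity matrix expresses the loss of detail discussed above.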