118 results for Predictive Analytics
Abstract:
The linear relationship between work accomplished (W-lim) and time to exhaustion (t-lim) can be described by the equation W-lim = a + CP·t-lim. Critical power (CP) is the slope of this line and is thought to represent a maximum rate of ATP synthesis without exhaustion, presumably an inherent characteristic of the aerobic energy system. The present investigation determined whether the choice of predictive tests would elicit significant differences in the estimated CP. Ten female physical education students completed, in random order and on consecutive days, five all-out predictive tests at preselected constant power outputs. Predictive tests were performed on an electrically braked cycle ergometer, and power loadings were individually chosen so as to induce fatigue within approximately 1-10 min. CP was derived by fitting the linear W-lim-t-lim regression and calculated three ways: 1) using the first, third and fifth W-lim-t-lim coordinates (I-135); 2) using coordinates from the three highest power outputs (I-123; mean t-lim = 68-193 s); and 3) using coordinates from the three lowest power outputs (I-345; mean t-lim = 193-485 s). Repeated measures ANOVA revealed that CP(I-123) (201.0 +/- 37.9 W) > CP(I-135) (176.1 +/- 27.6 W) > CP(I-345) (164.0 +/- 22.8 W) (P < 0.05). When the three sets of data were used to fit the hyperbolic power-t-lim regression, statistically significant differences between each CP were also found (P < 0.05). The shorter the predictive trials, the greater the slope of the W-lim-t-lim regression, possibly because of the greater influence of 'aerobic inertia' on these trials. This may explain why CP has failed to represent a maximal, sustainable work rate. The present findings suggest that if CP is to represent the highest power output that an individual can maintain for a very long time without fatigue, then CP should be calculated over a range of predictive tests in which the influence of aerobic inertia is minimised.
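Estimating CP from the linear W-lim-t-lim regression takes only a few lines; the trial data below are invented for illustration (a noise-free line with assumed CP = 180 W and intercept a = 12 000 J), not values from the study:

```python
import numpy as np

# Hypothetical predictive-trial data: time to exhaustion t_lim (s) and
# total work accomplished W_lim (J) for five constant-power trials.
t_lim = np.array([70.0, 130.0, 190.0, 320.0, 480.0])
cp_true, a_true = 180.0, 12000.0        # assumed CP (W) and intercept a (J)
w_lim = a_true + cp_true * t_lim        # W_lim = a + CP * t_lim

# CP is the slope of the linear W_lim-t_lim regression.
cp_est, a_est = np.polyfit(t_lim, w_lim, 1)
print(round(cp_est, 1), round(a_est, 1))  # recovers 180.0 and 12000.0
```

Fitting the same regression to different subsets of trials (as with I-123 versus I-345 in the study) would yield different slopes whenever the short trials deviate from the linear model.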
Abstract:
1. Although population viability analysis (PVA) is widely employed, forecasts from PVA models are rarely tested. This study in a fragmented forest in southern Australia contrasted field data on patch occupancy and abundance for the arboreal marsupial greater glider Petauroides volans with predictions from a generic spatially explicit PVA model. This work represents one of the first landscape-scale tests of its type. 2. Initially we contrasted field data from a set of eucalypt forest patches totalling 437 ha with a naive null model in which forecasts of patch occupancy were made, assuming no fragmentation effects and based simply on remnant area and measured densities derived from nearby unfragmented forest. The naive null model predicted an average total of approximately 170 greater gliders, considerably greater than the true count (n = 81). 3. Congruence was examined between field data and predictions from PVA under several metapopulation modelling scenarios. The metapopulation models performed better than the naive null model. Logistic regression showed highly significant positive relationships between predicted and actual patch occupancy for the four scenarios (P = 0.001-0.006). When the model-derived probability of patch occupancy was high (0.50-0.75, 0.75-1.00), there was greater congruence between actual patch occupancy and the predicted probability of occupancy. 4. For many patches, probability distribution functions indicated that model predictions for animal abundance in a given patch were not outside those expected by chance. However, for some patches the model either substantially over-predicted or under-predicted actual abundance. Some important processes, such as inter-patch dispersal, that influence the distribution and abundance of the greater glider may not have been adequately modelled. 5. Additional landscape-scale tests of PVA models, on a wider range of species, are required to assess further predictions made using these tools. This will help determine those taxa for which predictions are and are not accurate and give insights for improving models for applied conservation management.
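The naive null model described above reduces to density times area. In the sketch below, the per-hectare density is back-calculated from the abstract's totals (~170 gliders over 437 ha) and the individual patch areas are hypothetical:

```python
# Naive null model: predicted abundance per patch is simply the density
# measured in nearby unfragmented forest times patch area, ignoring all
# fragmentation effects.
DENSITY = 170.0 / 437.0  # gliders per ha, implied by the abstract's totals

def naive_prediction(patch_area_ha):
    return DENSITY * patch_area_ha

# Hypothetical patch areas chosen to sum to the reported 437 ha.
patches = [120.0, 85.0, 60.0, 172.0]
total = sum(naive_prediction(a) for a in patches)
print(round(total))  # ~170 predicted, versus the actual count of 81
```

The gap between the predicted ~170 and the observed 81 animals is exactly the fragmentation effect the null model omits.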
Abstract:
In this study we present a novel automated strategy for predicting infarct evolution, based on MR diffusion and perfusion images acquired in the acute stage of stroke. The validity of this methodology was tested on new patient data, including data acquired from an independent stroke clinic. Regions-of-interest (ROIs) defining the initial diffusion lesion and tissue with abnormal hemodynamic function, as defined by the mean transit time (MTT) abnormality, were automatically extracted from DWI/PI maps. Quantitative measures of cerebral blood flow (CBF) and volume (CBV), along with ratio measures defined relative to the contralateral hemisphere (r(a)CBF and r(a)CBV), were calculated for the MTT ROIs. A parametric normal classifier algorithm incorporating these measures was used to predict infarct growth. The mean r(a)CBF and r(a)CBV values for eventually infarcted MTT tissue were 0.70 +/- 0.19 and 1.20 +/- 0.36; for recovered tissue the mean values were 0.99 +/- 0.25 and 1.87 +/- 0.71, respectively. There was a significant difference between these two regions for both measures.
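A parametric normal classifier of the kind described can be sketched directly from the reported class-conditional means and SDs; the equal class priors and the assumption that the two features are independent are simplifications for illustration, not details stated in the abstract:

```python
import math

# Class-conditional statistics (mean, SD) from the abstract for
# (raCBF, raCBV) in eventually infarcted versus recovered MTT tissue.
STATS = {
    "infarct":   {"raCBF": (0.70, 0.19), "raCBV": (1.20, 0.36)},
    "recovered": {"raCBF": (0.99, 0.25), "raCBV": (1.87, 0.71)},
}

def gauss_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def classify(racbf, racbv):
    # Parametric normal classifier with equal priors; features treated
    # as independent (naive-Bayes style), which is a simplification.
    scores = {
        cls: gauss_pdf(racbf, *s["raCBF"]) * gauss_pdf(racbv, *s["raCBV"])
        for cls, s in STATS.items()
    }
    return max(scores, key=scores.get)

print(classify(0.65, 1.1))  # low flow/volume ratios -> "infarct"
print(classify(1.00, 1.9))  # near-normal ratios    -> "recovered"
```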
Abstract:
Objective: (1) To establish an incidence figure for dysphagia in a population of pediatric traumatic brain injury (TBI) cases; (2) to provide descriptive data on the admitting characteristics, patterns of resolution, and outcomes of children with and without dysphagia after TBI; and (3) to identify any factors present at admission that may predict dysphagia. Participants: A total of 1,145 children consecutively admitted to an acute care setting for traumatic brain injury between July 1995 and July 2000. Main outcome measure: Medical parameters relating to dysphagia based on medical chart review. Results: (1) A dysphagia incidence figure of 5.3% across all pediatric head injury admissions, with incidence figures of 68% for severe TBI, 15% for moderate TBI, and only 1% for mild brain injury. (2) Statistically significant differences were found between the dysphagic and nondysphagic subgroups on the variables of length of stay, length of ventilation, Glasgow Coma Scale (GCS), computed tomography classification, duration of speech pathology intervention, supplemental feeding duration, duration until initiation of oral intake (DIOF), duration to total oral intake (DTOF), and period of time from the initiation of intake until achievement of total oral intake (DI-TOF). (3) Significant predictive factors for dysphagia included GCS < 8.5 and a ventilation period in excess of 1.5 days. Conclusion: The provision of incidence data and predictive factors for dysphagia will enable clinicians in acute care settings to allocate resources necessary to deal with the predicted number of dysphagia cases in a pediatric population, and assist in predicting patients who are at risk for dysphagia following TBI. These data will aid early detection of patients with swallowing dysfunction, in turn facilitating effective medical and speech pathology intervention and reducing medical complications such as aspiration pneumonia.
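The reported predictive factors translate directly into an admission-time screening rule. The thresholds below come from the abstract; combining the two factors with a logical OR is an assumption made for illustration:

```python
# Admission-time dysphagia screen based on the abstract's predictive
# factors: GCS < 8.5 or ventilation longer than 1.5 days.
def at_risk_for_dysphagia(gcs, ventilation_days):
    """Flag a pediatric TBI admission for dysphagia follow-up."""
    return gcs < 8.5 or ventilation_days > 1.5

print(at_risk_for_dysphagia(gcs=6, ventilation_days=0))   # True
print(at_risk_for_dysphagia(gcs=13, ventilation_days=1))  # False
```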
Abstract:
The use of a fitted parameter watershed model to address water quantity and quality management issues requires that it be calibrated under a wide range of hydrologic conditions. However, rarely does model calibration result in a unique parameter set. Parameter nonuniqueness can lead to predictive nonuniqueness. The extent of model predictive uncertainty should be investigated if management decisions are to be based on model projections. Using models built for four neighboring watersheds in the Neuse River Basin of North Carolina, the application of the automated parameter optimization software PEST in conjunction with the Hydrologic Simulation Program Fortran (HSPF) is demonstrated. Parameter nonuniqueness is illustrated, and a method is presented for calculating many different sets of parameters, all of which acceptably calibrate a watershed model. A regularization methodology is discussed in which models for similar watersheds can be calibrated simultaneously. Using this method, parameter differences between watershed models can be minimized while maintaining fit between model outputs and field observations. In recognition of the fact that parameter nonuniqueness and predictive uncertainty are inherent to the modeling process, PEST's nonlinear predictive analysis functionality is then used to explore the extent of model predictive uncertainty.
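Parameter nonuniqueness of the kind PEST must contend with is easy to demonstrate with a toy model (not HSPF): when the output depends on two parameters only through their product, every parameter pair with the same product calibrates equally well against the observations:

```python
import numpy as np

# Toy watershed-style response: output depends on parameters k1 and k2
# only through their product, so the product is identifiable but the
# individual parameters are not.
t = np.linspace(0.0, 5.0, 50)

def model(k1, k2):
    return np.exp(-k1 * k2 * t)

obs = model(6.0, 0.1)  # synthetic "field observations"

# Three different parameter sets, all with k1 * k2 = 0.6, all of which
# acceptably "calibrate" the model (sum of squared errors ~ 0).
fits = {}
for k1, k2 in [(6.0, 0.1), (3.0, 0.2), (1.0, 0.6)]:
    fits[(k1, k2)] = float(np.sum((model(k1, k2) - obs) ** 2))

print(fits)  # every pair fits essentially perfectly
```

Predictions that depend only on the product are robust; predictions that depend on k1 or k2 individually inherit the full nonuniqueness, which is what predictive analysis quantifies.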
Abstract:
Background/Aims: Insulin resistance and systemic hypertension are predictors of advanced fibrosis in obese patients with non-alcoholic fatty liver disease (NAFLD). Genetic factors may also be important. We hypothesize that high angiotensinogen (AT) and transforming growth factor-beta1 (TGF-beta1) producing genotypes increase the risk of liver fibrosis in obese subjects with NAFLD. Methods: One hundred and five of 130 consecutive severely obese patients having a liver biopsy at the time of laparoscopic obesity surgery agreed to have genotype analysis. Influence of specific genotype or combination of genotypes on the stage of hepatic fibrosis was assessed after controlling for known risk factors. Results: There was no fibrosis in 70 (67%), stages 1-2 in 21 (20%) and stages 3-4 fibrosis in 14 (13%) of subjects. There was no relationship between either high AT or TGF-beta1 producing genotypes alone and hepatic fibrosis after controlling for confounding factors. However, advanced hepatic fibrosis occurred in five of 13 subjects (odds ratio 5.7, 95% confidence interval 1.5-21.2, P = 0.005) who inherited both high AT and TGF-beta1 producing polymorphisms. Conclusions: The combination of high AT and TGF-beta1 producing polymorphisms is associated with advanced hepatic fibrosis in obese patients with NAFLD. These findings support the hypothesis that angiotensin II stimulated TGF-beta1 production may promote hepatic fibrosis.
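The reported odds ratio can be approximately reconstructed from the 2x2 table implied by the abstract's counts (105 subjects, 14 with advanced fibrosis, 13 carrying both high-producing polymorphisms, 5 of whom had advanced fibrosis); this assumes the 5.7 figure is the crude, unadjusted odds ratio:

```python
# Reconstruct the implied 2x2 table and compute the crude odds ratio.
exposed_cases, exposed_total = 5, 13   # both polymorphisms, advanced fibrosis
cases_total, subjects_total = 14, 105  # all advanced fibrosis, all subjects

exposed_controls = exposed_total - exposed_cases                          # 8
unexposed_cases = cases_total - exposed_cases                             # 9
unexposed_controls = (subjects_total - exposed_total) - unexposed_cases   # 83

odds_ratio = (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)
print(round(odds_ratio, 1))  # ~5.8, consistent with the reported 5.7
```

The small discrepancy from the published 5.7 is expected: the study's estimate was obtained after controlling for known risk factors, whereas this is the raw cross-product ratio.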
Abstract:
Aims: (1) To quantify the random and predictable components of variability for aminoglycoside clearance and volume of distribution; (2) to investigate models for predicting aminoglycoside clearance in patients with low serum creatinine concentrations; (3) to evaluate the predictive performance of initial dosing strategies for achieving an aminoglycoside target concentration. Methods: Aminoglycoside demographic, dosing and concentration data were collected from 697 adult patients (>= 20 years old) as part of standard clinical care, using a target concentration intervention approach for dose individualization. It was assumed that aminoglycoside clearance had a renal and a nonrenal component, with the renal component being linearly related to predicted creatinine clearance. Results: A two-compartment pharmacokinetic model best described the aminoglycoside data. The addition of weight, age, sex and serum creatinine as covariates reduced the random component of between-subject variability (BSVR) in clearance (CL) from 94% to 36% of population parameter variability (PPV). The final pharmacokinetic parameter estimates for the model with the best predictive performance were: CL, 4.7 l h(-1) 70 kg(-1); intercompartmental clearance (CLic), 1 l h(-1) 70 kg(-1); volume of central compartment (V-1), 19.5 l 70 kg(-1); volume of peripheral compartment (V-2), 11.2 l 70 kg(-1). Conclusions: A fixed dose of aminoglycoside will place 35% of typical patients within 80-125% of their required dose; covariate-guided predictions increase this to 61%. However, because we have shown that random within-subject variability (WSVR) in clearance is less than safe and effective variability (SEV), target concentration intervention can potentially achieve safe and effective doses in 90% of patients.
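A covariate-guided maintenance dose can be sketched from the reported population clearance. The allometric weight exponent of 0.75 and the target concentration below are common pharmacokinetic conventions used here for illustration only, not values from the study:

```python
# Population clearance from the final model: 4.7 l/h for a 70 kg patient.
CL_POP = 4.7

def clearance(weight_kg, allometric_exp=0.75):
    # Allometric weight scaling is an illustrative assumption, not the
    # study's exact covariate model (which also used age, sex and
    # serum creatinine).
    return CL_POP * (weight_kg / 70.0) ** allometric_exp

def maintenance_dose_rate(weight_kg, c_target_mg_per_l):
    # At steady state, dose rate = clearance * target concentration.
    return clearance(weight_kg) * c_target_mg_per_l  # mg/h

print(round(clearance(70.0), 2))                   # 4.7 l/h
print(round(maintenance_dose_rate(70.0, 2.0), 2))  # 9.4 mg/h
```

Target concentration intervention then refines this covariate-based starting dose using each patient's measured concentrations.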
Abstract:
Predictive testing is one of the new genetic technologies which, in conjunction with developing fields such as pharmacogenomics, promises many benefits for preventive and population health. Understanding how individuals appraise and make genetic test decisions is increasingly relevant as the technology expands. Lay understandings of genetic risk and test decision-making, located within holistic life frameworks including family or kin relationships, may vary considerably from clinical representations of these phenomena. The predictive test for Huntington's disease (HD), whilst specific to a single-gene, serious, mature-onset but currently untreatable disorder, is regarded as a model in this context. This paper reports upon a qualitative Australian study which investigated predictive test decision-making by individuals at risk for HD, the contexts of their decisions and the appraisals which underpinned them. In-depth interviews were conducted in Australia with 16 individuals at 50% risk for HD, with variation across testing decisions, gender, age and selected characteristics. Findings suggested predictive testing was regarded as a significant life decision with important implications for self and others, while the right not to know genetic status was staunchly and unanimously defended. Multiple contexts of reference were identified within which test decisions were located, including intra- and inter-personal frameworks, family history and experience of HD, and temporality. Participants used two main criteria in appraising test options: perceived value of, or need for, the test information, for self and/or significant others, and the degree to which such information could be tolerated and managed, short and long-term, by self and/or others. Selected moral and ethical considerations involved in decision-making are examined, as well as the clinical and socio-political contexts in which predictive testing is located.
The paper argues that psychosocial vulnerabilities generated by the availability of testing technologies and exacerbated by policy imperatives towards individual responsibility and self-governance should be addressed at broader societal levels.