931 results for PERFORMANCE PREDICTION
Abstract:
OBJECTIVE: The aim of this study was to determine whether V˙O(2) kinetics and, specifically, the time constants of transitions from rest to heavy (τ(p)H) and severe (τ(p)S) exercise intensities are related to middle-distance swimming performance. DESIGN: Fourteen highly trained male swimmers (mean ± SD: 20.5 ± 3.0 yr; 75.4 ± 12.4 kg; 1.80 ± 0.07 m) performed a discontinuous incremental test, as well as square-wave transitions at heavy and severe swimming intensities, to determine V˙O(2) kinetics parameters using two exponential functions. METHODS: All tests involved front-crawl swimming with breath-by-breath analysis using the Aquatrainer swimming snorkel. Endurance performance was recorded as the time taken to complete a 400 m freestyle swim within an official competition (T400), one month from the date of the other tests. RESULTS: T400 (mean ± SD: 251.4 ± 12.4 s) was significantly correlated with τ(p)H (15.8 ± 4.8 s; r = 0.62; p = 0.02) and τ(p)S (15.8 ± 4.7 s; r = 0.61; p = 0.02). The best single predictor of 400 m freestyle time, out of the variables assessed, was the velocity at V˙O(2max) (vV˙O(2max)), which accounted for 80% of the variation in performance between swimmers. However, τ(p)H and V˙O(2max) were also found to influence the prediction of T400 when included in a regression model involving respiratory parameters only. CONCLUSIONS: Faster kinetics during the primary phase of the V˙O(2) response are associated with better performance during middle-distance swimming. However, vV˙O(2max) appears to be a better predictor of T400.
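The square-wave transitions in this study were fitted with exponential functions; a minimal sketch of the primary-phase (mono-exponential) component is below. All parameter values are illustrative assumptions except the 15.8 s time constant, which is the mean τ(p) reported in the abstract.

```python
import math

def vo2_primary_phase(t, baseline, amplitude, time_delay, tau):
    """Primary-phase VO2 response (L/min) to a step change in work rate:
    a delayed mono-exponential, i.e. one term of a two-exponential fit."""
    if t < time_delay:
        return baseline
    return baseline + amplitude * (1 - math.exp(-(t - time_delay) / tau))

# Illustrative values; only tau = 15.8 s is taken from the reported results
print(round(vo2_primary_phase(t=60, baseline=0.5, amplitude=3.0,
                              time_delay=12, tau=15.8), 2))  # 3.36
```

A smaller τ means the curve approaches its asymptote sooner, which is the "faster kinetics" the authors associate with better performance.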
Abstract:
In a series of three experiments, participants made inferences about which of a pair of objects scored higher on a criterion. The first experiment was designed to contrast the prediction of Probabilistic Mental Model theory (Gigerenzer, Hoffrage, & Kleinbölting, 1991) concerning sampling procedure with the hard-easy effect. The experiment failed to support the theory's prediction that a particular pair of randomly sampled item sets would differ in percentage correct; but the observation that German participants performed practically as well on comparisons between U.S. cities (many of which they did not even recognize) as on comparisons between German cities (about which they knew much more) ultimately led to the formulation of the recognition heuristic. Experiment 2 was a second, this time successful, attempt to unconfound item difficulty and sampling procedure. In Experiment 3, participants' knowledge and recognition of each city was elicited, and how often this could be used to make an inference was manipulated. Choices were consistent with the recognition heuristic in about 80% of the cases when it discriminated and people had no additional knowledge about the recognized city (and in about 90% when they had such knowledge). The frequency with which the heuristic could be used affected the percentage correct, mean confidence, and overconfidence as predicted. The size of the reference class, which was also manipulated, modified these effects in meaningful and theoretically important ways.
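The recognition heuristic reduces to a simple decision rule: when exactly one of the two objects is recognized, infer that the recognized one scores higher; otherwise the heuristic does not discriminate. A minimal sketch (the city names are hypothetical examples, not the study's items):

```python
def recognition_heuristic(a, b, recognized):
    """Infer which of two objects scores higher on the criterion.
    Returns the recognized object when exactly one is recognized;
    returns None when the heuristic does not discriminate."""
    ra, rb = a in recognized, b in recognized
    if ra and not rb:
        return a
    if rb and not ra:
        return b
    return None  # both or neither recognized: guess or use other knowledge

# Hypothetical example: a participant who recognizes only a few U.S. cities
recognized = {"New York", "Chicago", "Los Angeles"}
print(recognition_heuristic("Chicago", "Chula Vista", recognized))  # Chicago
print(recognition_heuristic("New York", "Chicago", recognized))     # None
```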
Abstract:
Aim This study used data from temperate forest communities to assess: (1) five different stepwise selection methods with generalized additive models, (2) the effect of weighting absences to ensure a prevalence of 0.5, (3) the effect of limiting absences beyond the environmental envelope defined by presences, (4) four different methods for incorporating spatial autocorrelation, and (5) the effect of integrating an interaction factor defined by a regression tree on the residuals of an initial environmental model. Location State of Vaud, western Switzerland. Methods Generalized additive models (GAMs) were fitted using the grasp package (generalized regression analysis and spatial predictions, http://www.cscf.ch/grasp). Results Model selection based on cross-validation appeared to be the best compromise between model stability and performance (parsimony) among the five methods tested. Weighting absences returned models that perform better than models fitted with the original sample prevalence. This appeared to be mainly due to the impact of very low prevalence values on evaluation statistics. Removing zeroes beyond the range of presences on main environmental gradients changed the set of selected predictors, and potentially their response curve shape. Moreover, removing zeroes slightly improved model performance and stability when compared with the baseline model on the same data set. Incorporating a spatial trend predictor improved model performance and stability significantly. Even better models were obtained when including local spatial autocorrelation. A novel approach to include interactions proved to be an efficient way to account for interactions between all predictors at once. 
Main conclusions Models and spatial predictions of 18 forest communities were significantly improved by using either: (1) cross-validation as a model selection method, (2) weighted absences, (3) limited absences, (4) predictors accounting for spatial autocorrelation, or (5) a factor variable accounting for interactions between all predictors. The final choice of model strategy should depend on the nature of the available data and the specific study aims. Statistical evaluation is useful in searching for the best modelling practice. However, one should not neglect to consider the shapes and interpretability of response curves, as well as the resulting spatial predictions in the final assessment.
Abstract:
Introduction: The original and modified Wells scores are widely used prediction rules for pre-test probability assessment of deep vein thrombosis (DVT). The objective of this study was to compare the predictive performance of both Wells scores in unselected patients with clinical suspicion of DVT. Methods: Consecutive inpatients and outpatients with a clinical suspicion of DVT were prospectively enrolled. Pre-test DVT probability (low/intermediate/high) was determined using both scores. Patients with a non-high probability based on the original Wells score underwent D-dimer measurement. Patients with D-dimers <500 μg/L did not undergo further testing, and treatment was withheld. All others underwent complete lower-limb compression ultrasound, and those diagnosed with DVT were anticoagulated. The primary study outcome was objectively confirmed symptomatic venous thromboembolism within 3 months of enrollment. Results: 298 patients with suspected DVT were included. Of these, 82 (27.5%) had DVT, and 46 of them were proximal. Compared to the modified score, the original Wells score classified a higher proportion of patients as low-risk (53 vs 48%; p<0.01) and a lower proportion as high-risk (17 vs 15%; p=0.02); the prevalence of proximal DVT in each category was similar with both scores (7-8% low, 16-19% intermediate, 36-37% high). The area under the receiver operating characteristic curve for proximal DVT detection was similar for both scores, but both performed poorly in predicting isolated distal DVT and DVT in inpatients. Conclusion: The study demonstrates that both Wells scores perform equally well in proximal DVT pre-test probability prediction. Neither score appears to be particularly useful in hospitalized patients or those with isolated distal DVT. (C) 2011 Elsevier Ltd. All rights reserved.
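The diagnostic pathway described in the Methods can be sketched as a small decision function. This is a simplification for illustration only, not clinical guidance; the category labels and return strings are our own naming, not from the study.

```python
def dvt_workup(wells_category, d_dimer_ug_per_l=None):
    """Sketch of the study's diagnostic pathway.
    wells_category: 'low', 'intermediate', or 'high' pre-test probability
    (original Wells score). Non-high patients get a D-dimer test; a result
    below 500 ug/L rules out DVT without imaging, otherwise (and for all
    high-probability patients) compression ultrasound is performed."""
    if wells_category != "high":
        if d_dimer_ug_per_l is None:
            return "measure D-dimer"
        if d_dimer_ug_per_l < 500:
            return "DVT excluded, no further testing"
    return "compression ultrasound"

print(dvt_workup("low", 320))  # DVT excluded, no further testing
print(dvt_workup("high"))      # compression ultrasound
```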
Abstract:
OBJECTIVES: Therapeutic hypothermia and pharmacological sedation may influence outcome prediction after cardiac arrest. The use of a multimodal approach, including clinical examination, electroencephalography, somatosensory-evoked potentials, and serum neuron-specific enolase, is recommended; however, no study has examined the comparative performance of these predictors or addressed their optimal combination. DESIGN: Prospective cohort study. SETTING: Adult ICU of an academic hospital. PATIENTS: One hundred thirty-four consecutive adults treated with therapeutic hypothermia after cardiac arrest. MEASUREMENTS AND MAIN RESULTS: Variables related to the cardiac arrest (cardiac rhythm, time to return of spontaneous circulation), clinical examination (brainstem reflexes and myoclonus), electroencephalography reactivity during therapeutic hypothermia, somatosensory-evoked potentials, and serum neuron-specific enolase were recorded. Models to predict clinical outcome at 3 months (assessed using the Cerebral Performance Categories: 5 = death; 3-5 = poor recovery) were evaluated using ordinal logistic regression and receiver operating characteristic curves. Seventy-two patients (54%) had a poor outcome (of whom 62 died), and 62 had a good outcome. Multivariable ordinal logistic regression identified absence of electroencephalography reactivity (p < 0.001), incomplete recovery of brainstem reflexes in normothermia (p = 0.013), and neuron-specific enolase higher than 33 μg/L (p = 0.029), but not somatosensory-evoked potentials, as independent predictors of poor outcome. The combination of clinical examination, electroencephalography reactivity, and neuron-specific enolase yielded the best predictive performance (receiver operating characteristic areas: 0.89 for mortality and 0.88 for poor outcome), with 100% positive predictive value. Addition of somatosensory-evoked potentials to this model did not improve prognostic accuracy.
CONCLUSIONS: Combination of clinical examination, electroencephalography reactivity, and serum neuron-specific enolase offers the best outcome predictive performance for prognostication of early postanoxic coma, whereas somatosensory-evoked potentials do not add any complementary information. Although prognostication of poor outcome seems excellent, future studies are needed to further improve prediction of good prognosis, which still remains inaccurate.
Abstract:
Overall introduction.- Longitudinal studies have been designed to investigate prospectively, from their beginning, the pathway leading from health to frailty and to disability. Knowledge about determinants of healthy ageing and health behaviour (resources), as well as risks of functional decline, is required to propose appropriate preventative interventions. The functional status of older people is important considering clinical outcome in general, healthcare need, and mortality. Part I.- Results and interventions from LUCAS (Longitudinal Urban Cohort Ageing Study). Authors.- J. Anders, U. Dapp, L. Neumann, F. Pröfener, C. Minder, S. Golgert, A. Daubmann, K. Wegscheider, W. von Renteln-Kruse. Methods.- The LUCAS core project is a longitudinal cohort of urban community-dwelling people 60 years and older, recruited in 2000/2001. Further LUCAS projects are cross-sectional comparative and interventional studies (RCT). Results.- The emphasis will be on geriatric medical care in a population-based approach, discussing different forms of access, too (Dapp et al. BMC Geriatrics 2012, 12:35; http://www.biomedcentral.com/1471-2318/12/35): - longitudinal data from the LUCAS urban cohort (n = 3,326) will be presented covering 10 years of observation, including the prediction of functional decline, need of nursing care, and mortality using a self-administered screening tool; - interventions to prevent functional decline focus on first (pre-clinical) signs of pre-frailty before entering the frailty cascade ("Active Health Promotion in Old Age", "geriatric mobility centre") or disability ("home visits"). Conclusions.- The LUCAS research consortium was established to study particular aspects of functional competence and its changes with ageing, to detect pre-clinical signs of functional decline, and to address questions of how to maintain functional competence and prevent adverse outcomes in different settings.
The multidimensional database allows the exploration of several further questions. Gait performance was examined using the GAITRite® system. Supported by the Federal Ministry for Education and Research (BMBF Funding No. 01ET1002A). Part II.- Selected results from the Lausanne cohort 65+ (Lc65+) Study (Switzerland). Authors.- Prof Santos-Eggimann Brigitte, Dr Seematter-Bagnoud Laurence, Prof Büla Christophe, Dr Rochat Stéphane. Methods.- The Lc65+ cohort was launched in 2004 with the random selection of 3054 eligible individuals aged 65 to 70 (birth years 1934-1938) in the non-institutionalized population of Lausanne (Switzerland). Results.- Information is collected about life-course social and health-related events, socio-economic, medical and psychosocial dimensions, lifestyle habits, limitations in activities of daily living, mobility impairments, and falls. Gait performance is objectively measured using body-fixed sensors. Frailty is assessed using Fried's frailty phenotype. Follow-up consists of annual self-completed questionnaires, as well as physical examination and physical and mental performance tests every three years. - Lausanne cohort 65+ (Lc65+): design and longitudinal outcomes. The baseline data collection was completed among 1422 participants in 2004-2005 through self-completed questionnaires, face-to-face interviews, physical examination, and tests of mental and physical performance. Information about institutionalization, self-reported health services utilization, and death is also assessed. An additional random sample (n = 1525) of subjects aged 65-70 was recruited in 2009 (birth years 1939-1943). - Lecture no. 4: alcohol intake and gait parameters: prevalent and longitudinal associations in the Lc65+ study. The association between alcohol intake and gait performance was investigated.
Batch effect confounding leads to strong bias in performance estimates obtained by cross-validation.
Abstract:
BACKGROUND: With the large amount of biological data that is currently publicly available, many investigators combine multiple data sets to increase the sample size and potentially also the power of their analyses. However, technical differences ("batch effects") as well as differences in sample composition between the data sets may significantly affect the ability to draw generalizable conclusions from such studies. FOCUS: The current study focuses on the construction of classifiers, and the use of cross-validation to estimate their performance. In particular, we investigate the impact of batch effects and differences in sample composition between batches on the accuracy of the classification performance estimate obtained via cross-validation. The focus on estimation bias is a main difference compared to previous studies, which have mostly focused on the predictive performance and how it relates to the presence of batch effects. DATA: We work on simulated data sets. To have realistic intensity distributions, we use real gene expression data as the basis for our simulation. Random samples from this expression matrix are selected and assigned to group 1 (e.g., 'control') or group 2 (e.g., 'treated'). We introduce batch effects and select some features to be differentially expressed between the two groups. We consider several scenarios for our study, most importantly different levels of confounding between groups and batch effects. METHODS: We focus on well-known classifiers: logistic regression, Support Vector Machines (SVM), k-nearest neighbors (kNN) and Random Forests (RF). Feature selection is performed with the Wilcoxon test or the lasso. Parameter tuning and feature selection, as well as the estimation of the prediction performance of each classifier, is performed within a nested cross-validation scheme. The estimated classification performance is then compared to what is obtained when applying the classifier to independent data.
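The nested cross-validation scheme mentioned in the Methods (parameter tuning on inner folds only, performance estimated on outer folds) can be sketched in plain Python with a toy 1-D kNN classifier. The data, parameter grid, and fold counts below are illustrative assumptions, not the study's setup.

```python
import random

def kfold_indices(n, k):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation."""
    idx = list(range(n))
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

def knn_predict(X_train, y_train, x, k):
    """1-D k-nearest-neighbour majority vote."""
    nearest = sorted(range(len(X_train)), key=lambda i: abs(X_train[i] - x))[:k]
    votes = [y_train[i] for i in nearest]
    return max(set(votes), key=votes.count)

def nested_cv(X, y, outer_k=5, inner_k=3, ks=(1, 3, 5)):
    """Tune k using inner folds of the outer-training data only, then score
    once on the held-out outer fold, so tuning does not bias the estimate."""
    outer_scores = []
    for tr, te in kfold_indices(len(X), outer_k):
        best_k, best_acc = ks[0], -1.0
        for k in ks:  # inner loop: model selection
            accs = []
            for itr, ite in kfold_indices(len(tr), inner_k):
                Xi = [X[tr[i]] for i in itr]
                yi = [y[tr[i]] for i in itr]
                hits = [knn_predict(Xi, yi, X[tr[i]], k) == y[tr[i]] for i in ite]
                accs.append(sum(hits) / len(hits))
            if sum(accs) / len(accs) > best_acc:
                best_k, best_acc = k, sum(accs) / len(accs)
        Xo = [X[i] for i in tr]
        yo = [y[i] for i in tr]
        hits = [knn_predict(Xo, yo, X[i], best_k) == y[i] for i in te]
        outer_scores.append(sum(hits) / len(hits))
    return sum(outer_scores) / len(outer_scores)

# Toy data: two well-separated 1-D classes
random.seed(0)
X = [random.gauss(0, 1) for _ in range(30)] + [random.gauss(10, 1) for _ in range(30)]
y = [0] * 30 + [1] * 30
print(nested_cv(X, y))
```

The point the paper makes is that this estimate can still be badly biased when batch membership is confounded with the class labels, because the batch signal leaks into every fold.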
Abstract:
The control and prediction of wastewater treatment plants poses an important goal: to avoid breaking the environmental balance by always keeping the system in stable operating conditions. It is known that qualitative information, coming from microscopic examinations and subjective remarks, has a deep influence on the activated sludge process, in particular on the total amount of effluent suspended solids, one of the measures of overall plant performance. The search for an input-output model of this variable and the prediction of sudden increases (bulking episodes) is thus a central concern to ensure the fulfillment of current discharge limitations. Unfortunately, the strong interrelation between variables, their heterogeneity, and the very high amount of missing information make the use of traditional techniques difficult, or even impossible. Through the combined use of several methods, mainly rough set theory and artificial neural networks, reasonable prediction models are found, which also serve to show the differing importance of the variables and provide insight into the process dynamics.
Abstract:
Visible and near-infrared (vis-NIR) spectroscopy is widely used to detect soil properties. The objective of this study was to evaluate the combined effect of moisture content (MC) and the modeling algorithm on prediction of soil organic carbon (SOC) and pH. Partial least squares (PLS) regression and artificial neural network (ANN) models of SOC and pH at different MC levels were compared in terms of prediction efficiency. A total of 270 soil samples were used. Before spectral measurement, dry soil samples were weighed to determine the amount of water to be added by weight to achieve the specified gravimetric MC levels of 5, 10, 15, 20, and 25%. A fiber-optic vis-NIR spectrophotometer (350-2500 nm) was used to measure spectra of soil samples in the diffuse reflectance mode. Spectral preprocessing and PLS regression were carried out using Unscrambler® software. Statistica® software was used for ANN modeling. The best prediction result for SOC was obtained using the ANN (RMSEP = 0.82% and RPD = 4.23) for soil samples with 25% MC. The best prediction results for pH were obtained with PLS for dry soil samples (RMSEP = 0.65 and RPD = 1.68) and soil samples with 10% MC (RMSEP = 0.61 and RPD = 1.71). Whereas the ANN showed better performance for SOC prediction at all MC levels, PLS showed better predictive accuracy for pH at all MC levels except 25% MC. Therefore, based on the data set used in the current study, the ANN is recommended for the analysis of SOC at all MC levels, whereas PLS is recommended for the analysis of pH at MC levels below 20%.
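The two evaluation statistics quoted above are directly computable: RMSEP is the root mean squared error of prediction, and RPD is the standard deviation of the reference values divided by RMSEP. A minimal sketch with made-up numbers (not the study's data):

```python
import statistics

def rmsep(y_true, y_pred):
    """Root mean squared error of prediction."""
    n = len(y_true)
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n) ** 0.5

def rpd(y_true, y_pred):
    """Ratio of performance to deviation: SD of reference values / RMSEP.
    Higher is better; values above ~2 are commonly read as useful models."""
    return statistics.stdev(y_true) / rmsep(y_true, y_pred)

# Made-up reference values and predictions, each off by 0.5
y_true = [0, 2, 4, 6]
y_pred = [0.5, 1.5, 4.5, 5.5]
print(rmsep(y_true, y_pred), round(rpd(y_true, y_pred), 2))  # 0.5 5.16
```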
Abstract:
SUMMARY: A top scoring pair (TSP) classifier consists of a pair of variables whose relative ordering can be used to accurately predict the class label of a sample. This classification rule has the advantage of being easily interpretable and more robust against technical variations in data, such as those due to different microarray platforms. Here we describe a parallel implementation of this classifier which significantly reduces the training time, and a number of extensions, including a multi-class approach, which have the potential to improve classification performance. AVAILABILITY AND IMPLEMENTATION: Full C++ source code and the R package Rgtsp are freely available from http://lausanne.isb-sib.ch/~vpopovic/research/. The implementation relies on existing OpenMP libraries.
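The TSP rule itself is compact: pick the feature pair whose relative ordering differs most between the two classes, then classify a new sample by that ordering. A minimal sketch on toy data, assuming the standard TSP score (this is not the paper's parallel C++ implementation):

```python
from itertools import combinations

def tsp_train(X, y):
    """Pick the pair (i, j) maximizing |P(x_i < x_j | class 0) -
    P(x_i < x_j | class 1)|. X is samples x features, y has labels 0/1."""
    def p_less(i, j, cls):
        rows = [x for x, lab in zip(X, y) if lab == cls]
        return sum(x[i] < x[j] for x in rows) / len(rows)

    best, best_score = None, -1.0
    for i, j in combinations(range(len(X[0])), 2):
        score = abs(p_less(i, j, 0) - p_less(i, j, 1))
        if score > best_score:
            best, best_score = (i, j), score
    i, j = best
    # orient the rule: the class more likely to show x_i < x_j is
    # predicted whenever a new sample shows x_i < x_j
    cls_if_less = 0 if p_less(i, j, 0) > p_less(i, j, 1) else 1
    return i, j, cls_if_less

def tsp_predict(model, x):
    i, j, cls_if_less = model
    return cls_if_less if x[i] < x[j] else 1 - cls_if_less

# Toy data: feature 0 < feature 1 in class 0, reversed in class 1
X = [[1, 2, 5], [0, 3, 1], [2, 4, 0], [5, 1, 2], [4, 0, 3], [3, 2, 9]]
y = [0, 0, 0, 1, 1, 1]
model = tsp_train(X, y)
print([tsp_predict(model, x) for x in X])  # [0, 0, 0, 1, 1, 1]
```

Because the rule depends only on the ordering of two values within a sample, it survives monotone transformations of the data, which is the robustness to platform differences the abstract refers to.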
Abstract:
Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations, and it is the focus of attention for research in diverse fields such as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system. Multiple studies suggest that observers use face information to infer personality characteristics. Interactive computer systems are trying to take advantage of these findings and apply them to increase the natural aspect of interaction and to improve their performance. Here, we experimentally test whether automatic prediction of facial trait judgments (e.g. dominance) can be made using the full appearance information of the face, and whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information, and a structural model constructed from the relations among facial salient points. State-of-the-art machine learning methods are applied to a) derive a facial trait judgment model from training data and b) predict a facial trait value for any face. Furthermore, we address the issue of whether there are specific structural relations among facial points that predict perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that a) prediction of perception of facial traits is learnable by both holistic and structural approaches; b) the most reliable prediction of facial trait judgments is obtained by certain types of holistic descriptions of the face appearance; and c) for some traits, such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions.
Abstract:
Aims: Plasma concentrations of imatinib differ largely between patients despite the same dosage, owing to large inter-individual variability in pharmacokinetic (PK) parameters. As the drug concentration at the end of the dosage interval (Cmin) correlates with treatment response and tolerability, monitoring of Cmin is suggested for therapeutic drug monitoring (TDM) of imatinib. Due to logistic difficulties, random sampling during the dosage interval is, however, often performed in clinical practice, thus rendering the respective results not informative regarding Cmin values. Objectives: (I) To extrapolate randomly measured imatinib concentrations to more informative Cmin values using classical Bayesian forecasting. (II) To extend the classical Bayesian method to account for correlation between PK parameters. (III) To evaluate the predictive performance of both methods. Methods: 31 paired blood samples (random and trough levels) were obtained from 19 cancer patients under imatinib. Two Bayesian maximum a posteriori (MAP) methods were implemented: (A) a classical method ignoring correlation between PK parameters, and (B) an extended one accounting for correlation. Both methods were applied to estimate individual PK parameters, conditional on random observations and covariate-adjusted priors from a population PK model. The PK parameter estimates were used to calculate trough levels. Relative prediction errors (PE) were analyzed to evaluate accuracy (one-sample t-test) and to compare precision between the methods (F-test to compare variances). Results: Both Bayesian MAP methods allowed non-biased predictions of individual Cmin compared to observations: (A) -7% mean PE (95% CI -18 to 4%, p = 0.15) and (B) -4% mean PE (95% CI -18 to 10%, p = 0.69). Relative standard deviations of actual observations from predictions were 22% (A) and 30% (B), i.e. comparable to the intra-individual variability reported.
Precision was not improved by taking into account correlation between PK parameters (p = 0.22). Conclusion: Clinical interpretation of randomly measured imatinib concentrations can be assisted by Bayesian extrapolation to maximum likelihood Cmin. Classical Bayesian estimation can be applied for TDM without the need to include correlation between PK parameters. Both methods could be adapted in the future to evaluate other individual pharmacokinetic measures correlated with clinical outcomes, such as the area under the curve (AUC).
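Bayesian MAP extrapolation of a trough from a single random sample can be sketched with a mono-exponential decline, log-normal priors, and a brute-force grid search over the posterior. Every numeric value below is a toy assumption for illustration, not an imatinib population estimate, and a real implementation would use a full population PK model and a proper optimizer.

```python
import math

def map_trough(t_obs, c_obs, tau=24.0,
               ke_pop=0.05, ke_omega=0.3,    # toy log-normal prior on ke (1/h)
               c0_pop=3000.0, c0_omega=0.3,  # toy prior on start concentration
               sigma=0.15):                  # toy proportional residual error
    """MAP extrapolation of Cmin from one random sample (t_obs hours after
    dosing, concentration c_obs), assuming C(t) = C0 * exp(-ke * t) within
    the dosing interval. Minimizes the usual MAP objective: squared
    log-residual plus squared log-deviations from the prior means."""
    best, best_obj = None, float("inf")
    for ke in (ke_pop * math.exp(0.01 * s) for s in range(-100, 101)):
        for c0 in [c0_pop * math.exp(0.01 * s) for s in range(-100, 101)]:
            pred = c0 * math.exp(-ke * t_obs)
            obj = (math.log(c_obs / pred) / sigma) ** 2 \
                + (math.log(ke / ke_pop) / ke_omega) ** 2 \
                + (math.log(c0 / c0_pop) / c0_omega) ** 2
            if obj < best_obj:
                best, best_obj = (ke, c0), obj
    ke, c0 = best
    return c0 * math.exp(-ke * tau)  # extrapolated Cmin at end of interval

# A random sample drawn 6 h after dosing; toy numbers throughout
print(round(map_trough(6.0, 2222.5), 1))
```

Accounting for correlation between parameters (method B in the abstract) would replace the two independent prior terms with a joint penalty using the prior covariance matrix.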
Abstract:
This report is one of two products for this project, the other being a design guide. This report describes test results and comparative analysis from 16 different portland cement concrete (PCC) pavement sites on local city and county roads in Iowa. At each site the surface conditions of the pavement (i.e., crack survey) and foundation layer strength, stiffness, and hydraulic conductivity properties were documented. The field test results were used to calculate in situ parameters used in pavement design per SUDAS and AASHTO (1993) design methodologies. Overall, the results of this study demonstrate how in situ and lab testing can be used to assess the support conditions and design values for pavement foundation layers and how the measurements compare to the assumed design values. The measurements show that in Iowa, a wide range of pavement conditions and foundation layer support values exist. The calculated design input values for the test sites (modulus of subgrade reaction, coefficient of drainage, and loss of support) were found to be different than typically assumed. This finding was true for the full range of materials tested. The findings of this study support the recommendation to incorporate field testing as part of the process to field verify pavement design values and to consider the foundation as a design element in the pavement system. Recommendations are provided in the form of a simple matrix for alternative foundation treatment options if the existing foundation materials do not meet the design intent. The PCI prediction model developed from multivariate analysis in this study demonstrated a link between pavement foundation conditions and PCI. The model analysis shows that by measuring properties of the pavement foundation, the engineer will be able to predict long-term performance with higher reliability than by considering age alone.
This prediction can be used as motivation to then control the engineering properties of the pavement foundation for new or re-constructed PCC pavements to achieve some desired level of performance (i.e., PCI) with time.
Abstract:
We present a machine learning approach to modeling bowing control parameter contours in violin performance. Using accurate sensing techniques, we obtain relevant timbre-related bowing control parameters such as bow transversal velocity, bow pressing force, and bow-bridge distance of each performed note. Each performed note is represented by a curve parameter vector, and a number of note classes are defined. The principal components of the data represented by the set of curve parameter vectors are obtained for each class. Once curve parameter vectors are expressed in the new space defined by the principal components, we train a model based on inductive logic programming, able to predict curve parameter vectors used for rendering bowing controls. We evaluate the prediction results and show the potential of the model by predicting bowing control parameter contours from an annotated input score.
Abstract:
BACKGROUND AND PURPOSE: Several prognostic scores have been developed to predict the risk of symptomatic intracranial hemorrhage (sICH) after ischemic stroke thrombolysis. We compared the performance of these scores in a multicenter cohort. METHODS: We merged prospectively collected data of consecutive patients with ischemic stroke who received intravenous thrombolysis in 7 stroke centers. We identified and evaluated 6 scores that can provide an estimate of the risk of sICH in hyperacute settings: MSS (Multicenter Stroke Survey); HAT (Hemorrhage After Thrombolysis); SEDAN (blood sugar, early infarct signs, [hyper]dense cerebral artery sign, age, NIH Stroke Scale); GRASPS (glucose at presentation, race [Asian], age, sex [male], systolic blood pressure at presentation, and severity of stroke at presentation [NIH Stroke Scale]); SITS (Safe Implementation of Thrombolysis in Stroke); and the SPAN (stroke prognostication using age and NIH Stroke Scale)-100 positive index. We included only patients with available variables for all scores. We calculated the area under the receiver operating characteristic curve (AUC-ROC) and also performed logistic regression and the Hosmer-Lemeshow test. RESULTS: The final cohort comprised 3012 eligible patients, of whom 221 (7.3%) had sICH per the National Institute of Neurological Disorders and Stroke criteria, 141 (4.7%) per the European Cooperative Acute Stroke Study II criteria, and 86 (2.9%) per the Safe Implementation of Thrombolysis in Stroke criteria. The performance of the scores, assessed with AUC-ROC for predicting European Cooperative Acute Stroke Study II sICH, was: MSS, 0.63 (95% confidence interval, 0.58-0.68); HAT, 0.65 (0.60-0.70); SEDAN, 0.70 (0.66-0.73); GRASPS, 0.67 (0.62-0.72); SITS, 0.64 (0.59-0.69); and SPAN-100 positive index, 0.56 (0.50-0.61). SEDAN had significantly higher AUC-ROC values compared with all other scores, except for GRASPS, where the difference was nonsignificant. SPAN-100 performed significantly worse compared with the other scores.
The discriminative ranking of the scores was the same for the National Institute of Neurological Disorders and Stroke and the Safe Implementation of Thrombolysis in Stroke definitions, with SEDAN performing best, GRASPS second, and SPAN-100 worst. CONCLUSIONS: SPAN-100 had the worst predictive power, and SEDAN consistently the highest. However, none of the scores had better than moderate performance.
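AUC-ROC values like those reported above can be computed from the rank-sum (Mann-Whitney) identity: the AUC equals the probability that a randomly chosen positive case is scored above a randomly chosen negative case. A minimal sketch with hypothetical risk scores (not the study's data):

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney identity:
    fraction of positive/negative pairs ranked correctly, counting
    ties as half a win."""
    pos = [s for lab, s in zip(labels, scores) if lab == 1]
    neg = [s for lab, s in zip(labels, scores) if lab == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical scores for 4 patients with sICH (1) and 4 without (0)
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [9, 7, 6, 3, 5, 4, 2, 1]
print(auc(labels, scores))  # 0.875
```

On this identity, an AUC of 0.56 (SPAN-100) means the score ranks a hemorrhage case above a non-hemorrhage case only 56% of the time, barely better than chance, which is why the authors call even the best score only moderate.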