998 results for SCORE TESTS
Abstract:
BACKGROUND: Outcome following foot and ankle surgery can be assessed by disease- and region-specific scores. Many scoring systems exist, making comparison among studies difficult. The present study focused on outcome measures for a common foot and ankle abnormality and compared the results obtained by 2 disease-specific and 2 body region-specific scores. METHODS: We reviewed 41 patients who underwent lateral ankle ligament reconstruction. Four outcome scales were administered simultaneously: the Cumberland Ankle Instability Tool (CAIT) and the Chronic Ankle Instability Scale (CAIS), which are disease-specific, and the American Orthopaedic Foot & Ankle Society (AOFAS) hindfoot scale and the Foot and Ankle Ability Measure (FAAM), which are both body region-specific. The degree of correlation between scores was assessed by Pearson's correlation coefficient. Nonparametric tests, the Kruskal-Wallis test and the Mann-Whitney test for pairwise comparison of the scores, were performed. RESULTS: A significant difference (P < .005) was observed between the CAIS and the AOFAS score (P = .0002), between the CAIS and the FAAM 1 (P = .0001), and between the CAIT and the AOFAS score (P = .0003). CONCLUSIONS: This study compared the performance of 4 disease- and body region-specific scoring systems. We demonstrated a correlation between the 4 administered scoring systems and notable differences between the results given by each of them. Disease-specific scores appeared more accurate than body region-specific scores. A strong correlation between the AOFAS score and the other scales was observed. The FAAM seemed a good compromise because it offered the possibility to evaluate the patient according to his or her own functional demand. CLINICAL RELEVANCE: The present study contributes to the development of more critical and accurate outcome assessment methods in foot and ankle surgery.
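As a rough illustration of the statistical approach named in this abstract (Pearson correlation between scales plus a Kruskal-Wallis test and pairwise Mann-Whitney comparisons), the minimal Python/SciPy sketch below shows how such an analysis could be set up. The score arrays are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of Pearson correlations and nonparametric score comparisons;
# the four score arrays below are hypothetical placeholders, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_patients = 41  # sample size reported in the abstract

# Hypothetical per-patient scores, each on a 0-100 range for comparability.
scales = {
    "CAIT": rng.uniform(50, 100, n_patients),
    "CAIS": rng.uniform(50, 100, n_patients),
    "AOFAS": rng.uniform(50, 100, n_patients),
    "FAAM": rng.uniform(50, 100, n_patients),
}
names = list(scales)

# Pairwise Pearson correlations between the four scales.
for i, a in enumerate(names):
    for b in names[i + 1:]:
        r, p = stats.pearsonr(scales[a], scales[b])
        print(f"Pearson {a} vs {b}: r={r:.2f}, p={p:.4f}")

# Global Kruskal-Wallis test, then pairwise Mann-Whitney comparisons.
h, p_kw = stats.kruskal(*scales.values())
print(f"Kruskal-Wallis: H={h:.2f}, p={p_kw:.4f}")
for i, a in enumerate(names):
    for b in names[i + 1:]:
        u, p_mw = stats.mannwhitneyu(scales[a], scales[b])
        print(f"Mann-Whitney {a} vs {b}: U={u:.1f}, p={p_mw:.4f}")
```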
Abstract:
With the trend in molecular epidemiology towards both genome-wide association studies and complex modelling, the need for large sample sizes to detect small effects and to allow for the estimation of many parameters within a model continues to increase. Unfortunately, most methods of association analysis have been restricted to either a family-based or a case-control design, preventing the synthesis of data from multiple studies. Transmission disequilibrium-type methods for detecting linkage disequilibrium from family data were developed as an effective way of preventing the detection of association due to population stratification. Because these methods condition on parental genotype, however, they have precluded the joint analysis of family and case-control data, although methods for case-control data may not protect against population stratification and do not allow for familial correlations. We present here an extension of a family-based association analysis method for continuous traits that simultaneously tests for, and if necessary controls for, population stratification. We further extend this method to analyse binary traits (and therefore family and case-control data together) and to accurately estimate genetic effects in the population, even when using an ascertained family sample. Finally, we present the power of this binary extension for both family-only and joint family and case-control data, and demonstrate the accuracy of the association parameter and variance components in an ascertained family sample.
Abstract:
Cyst-based ecotoxicological tests are simple and low-cost methods for assessing acute toxicity. Nevertheless, only a few comparative studies on their sensitivity are known. In the present study, the suitability of two freshwater anostracan species, Streptocephalus rubricaudatus and S. texanus, was assessed. The impact of 16 priority pollutants (4 heavy metals, 11 organic compounds, and 1 organometallic compound) on these two species, as well as on Artemia salina (Artoxkit M), Daphnia magna (International Organization for Standardization 6341), and S. proboscideus (Streptoxkit F), was assessed. For indicative comparison, bioassays using Brachionus calyciflorus (Rotoxkit F) and Photobacterium phosphoreum (Microtox) were also performed. For heavy metals (K2Cr2O7, Cd2+, Zn2+, Cu2+), the sensitivity of the two studied Streptocephalus species was slightly higher than that of D. magna and significantly higher than that of the marine species A. salina. For organic and organometallic micropollutants [phenol, 3,5-dichlorophenol, pentachlorophenol (PCP), hydroquinone, linear alkylbenzene sulfonate, sodium dodecyl sulfate, tributylphosphate, dimethylphthalate, atrazine, lindane, malathion, tributyltin chloride (TBT-Cl)], the sensitivity of the 4 anostracan species was of the same order of magnitude as that of D. magna. Artemia salina was slightly less sensitive to some organic compounds (PCP, hydroquinone, TBT-Cl). The sensitivity of S. rubricaudatus to organic solvents was low. On the other hand, this anostracan was quite sensitive to NaCl; thus, its use is restricted to freshwater samples. The evaluation of the overall practicability of these two tests confirms that cyst-based freshwater anostracans may be used to perform low-cost tests with a sensitivity comparable to that of D. magna (24-h immobilization test).
Abstract:
This study aimed to compare two maximal incremental tests of different durations [a maximal incremental ramp test of short duration (8-12 min) (STest) and a maximal incremental test of longer duration (20-25 min) (LTest)] to investigate whether an LTest accurately assesses aerobic fitness in class II and III obese men. Twenty obese men (BMI ≥35 kg·m-2) without secondary pathologies (mean±SE; 36.7±1.9 yr; 41.8±0.7 kg·m-2) completed an STest (warm-up: 40 W; increment: 20 W·min-1) and an LTest [warm-up: 20% of the peak power output (PPO) reached during the STest; increment: 10% PPO every 5 min until 70% PPO was reached or until the respiratory exchange ratio reached 1.0, followed by 15 W·min-1 until exhaustion] on a cycle ergometer to assess the peak oxygen uptake (VO2peak) and peak heart rate (HRpeak) of each test. There were no significant differences in VO2peak (STest: 3.1±0.1 L·min-1; LTest: 3.0±0.1 L·min-1) or HRpeak (STest: 174±4 bpm; LTest: 173±4 bpm) between the two tests. Bland-Altman plot analyses showed good agreement, and Pearson product-moment and intra-class correlation coefficients showed a strong correlation between the two tests for VO2peak (r=0.81 for both; p≤0.001) and HRpeak (r=0.95 for both; p≤0.001). VO2peak and HRpeak assessments were not compromised by test duration in class II and III obese men. Therefore, we suggest that the LTest is a feasible test that accurately assesses aerobic fitness and may allow for the exercise intensity prescription and individualization that will lead to improved therapeutic approaches in treating obesity and severe obesity.
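To make the agreement analysis mentioned above concrete, the sketch below computes the core Bland-Altman quantities (mean bias and 95% limits of agreement) plus a Pearson correlation for paired measurements from two tests. The paired VO2peak values are simulated placeholders, not the study's measurements.

```python
# Minimal Bland-Altman sketch for agreement between two maximal tests
# (STest vs LTest); the paired measurements below are hypothetical.
import numpy as np

def bland_altman(a, b):
    """Return mean bias and 95% limits of agreement for paired measurements."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired VO2peak values (L·min-1) from the two protocols.
vo2_stest = np.array([3.2, 2.9, 3.4, 3.0, 2.8, 3.1, 3.3, 2.7])
vo2_ltest = np.array([3.1, 3.0, 3.3, 2.9, 2.9, 3.0, 3.2, 2.8])

bias, (lo, hi) = bland_altman(vo2_stest, vo2_ltest)
print(f"bias={bias:.2f} L/min, limits of agreement=({lo:.2f}, {hi:.2f})")

# Pearson correlation between the two tests, as reported in the abstract.
r = np.corrcoef(vo2_stest, vo2_ltest)[0, 1]
print(f"Pearson r={r:.2f}")
```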
Abstract:
The application of organic wastes to agricultural soils is not risk-free and can affect soil invertebrates. Ecotoxicological tests based on the behavioral avoidance of earthworms and springtails were performed to evaluate the effects of different fertilization strategies on soil quality and on its habitat function for soil organisms. These tests were performed in soils treated with: i) slurry and chemical fertilizers, according to the conventional fertilization management of the region; ii) conventional fertilization + sludge; and iii) an unfertilized reference soil. Both fertilization strategies contributed to soil acidity mitigation and caused no increase in soil heavy metal content. Avoidance test results showed no negative effects of these strategies on soil organisms compared with the reference soil. However, the results of the two fertilization managements differed: springtails did not avoid soils fertilized with dairy sludge in any of the tested combinations, whereas earthworms avoided soils treated with sludge as of May 2004 (DS1) when compared with conventional fertilization. Possibly, the behavioral avoidance response of earthworms is more sensitive to soil properties (other than texture, organic matter and heavy metal content) than that of springtails.
Abstract:
BACKGROUND AND PURPOSE: The DRAGON score predicts functional outcome in the hyperacute phase of intravenous thrombolysis treatment of ischemic stroke patients. We aimed to validate the score in a large multicenter cohort in anterior and posterior circulation. METHODS: Prospectively collected data of consecutive ischemic stroke patients who received intravenous thrombolysis in 12 stroke centers were merged (n=5471). We excluded patients lacking data necessary to calculate the score and patients with missing 3-month modified Rankin scale scores. The final cohort comprised 4519 eligible patients. We assessed the performance of the DRAGON score with area under the receiver operating characteristic curve in the whole cohort for both good (modified Rankin scale score, 0-2) and miserable (modified Rankin scale score, 5-6) outcomes. RESULTS: Area under the receiver operating characteristic curve was 0.84 (0.82-0.85) for miserable outcome and 0.82 (0.80-0.83) for good outcome. Proportions of patients with good outcome were 96%, 93%, 78%, and 0% for 0 to 1, 2, 3, and 8 to 10 score points, respectively. Proportions of patients with miserable outcome were 0%, 2%, 4%, 89%, and 97% for 0 to 1, 2, 3, 8, and 9 to 10 points, respectively. When tested separately for anterior and posterior circulation, there was no difference in performance (P=0.55); areas under the receiver operating characteristic curve were 0.84 (0.83-0.86) and 0.82 (0.78-0.87), respectively. No sex-related difference in performance was observed (P=0.25). CONCLUSIONS: The DRAGON score showed very good performance in the large merged cohort in both anterior and posterior circulation strokes. The DRAGON score provides rapid estimation of patient prognosis and supports clinical decision-making in the hyperacute phase of stroke care (eg, when invasive add-on strategies are considered).
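For readers less familiar with the validation metric used above, the sketch below shows one generic way to compute an area under the receiver operating characteristic curve for a point-based score against a dichotomized outcome. The score values and outcomes are simulated placeholders, not the DRAGON cohort.

```python
# Sketch: AUC of a 0-10 point score against a binary (poor) outcome;
# the scores and outcome labels below are simulated placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 500
score = rng.integers(0, 11, n)               # hypothetical 0-10 point scores
# Simulated outcome: higher scores are more likely to have a poor outcome.
p_poor = 1 / (1 + np.exp(-(score - 5)))
poor_outcome = rng.random(n) < p_poor

auc = roc_auc_score(poor_outcome, score)
print(f"AUC for poor outcome: {auc:.2f}")
```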
Abstract:
BACKGROUND: We aimed to assess the value of a structured clinical assessment and genetic testing for refining the diagnosis of abacavir hypersensitivity reactions (ABC-HSRs) in a routine clinical setting. METHODS: We performed a diagnostic reassessment using a structured patient chart review in individuals who had stopped ABC because of suspected HSR. Two HIV physicians blinded to the human leukocyte antigen (HLA) typing results independently classified these individuals on a scale between 3 (ABC-HSR highly likely) and -3 (ABC-HSR highly unlikely). Scoring was based on symptoms, onset of symptoms and comedication use. Patients were classified as clinically likely (mean score ≥2), uncertain (mean score ≥ -1 and ≤ 1) and unlikely (mean score ≤ -2). HLA typing was performed using sequence-based methods. RESULTS: Of the 131 reassessed individuals, 27 (21%) were classified as likely, 43 (33%) as unlikely and 61 (47%) as uncertain ABC-HSR. Of the 131 individuals with suspected ABC-HSR, 31% were HLA-B*5701-positive compared with 1% of 140 ABC-tolerant controls (P < 0.001). The HLA-B*5701 carriage rate was higher in individuals with likely ABC-HSR than in those with uncertain or unlikely ABC-HSR (78%, 30% and 5%, respectively; P < 0.001). Only six (7%) HLA-B*5701-negative individuals were classified as likely HSR after reassessment. CONCLUSIONS: HLA-B*5701 carriage is highly predictive of clinically diagnosed ABC-HSR. The high proportion of HLA-B*5701-negative individuals with minor symptoms among individuals with suspected HSR indicates overdiagnosis of ABC-HSR in the era preceding genetic screening. A structured clinical assessment and genetic testing could reduce the rate of inappropriate ABC discontinuation and identify individuals at high risk for ABC-HSR.
Abstract:
BACKGROUND: No prior studies have identified which patients with deep vein thrombosis in the lower limbs are at a low risk for adverse events within the first week of therapy. METHODS: We used data from the Registro Informatizado de la Enfermedad TromboEmbólica (RIETE) to identify patients at low risk for the composite outcome of pulmonary embolism, major bleeding, or death within the first week. We built a prognostic score and compared it with the decision to treat patients at home. RESULTS: As of December 2013, 15,280 outpatients with deep vein thrombosis had been enrolled. Overall, 5164 patients (34%) were treated at home. Of these, 12 (0.23%) had pulmonary embolism, 8 (0.15%) bled, and 4 (0.08%) died. On multivariable analysis, chronic heart failure, recent immobility, recent bleeding, cancer, renal insufficiency, and abnormal platelet count independently predicted the risk for the composite outcome. Among 11,430 patients (75%) considered to be at low risk, 15 (0.13%) suffered pulmonary embolism, 22 (0.19%) bled, and 8 (0.07%) died. The C-statistic was 0.61 (95% confidence interval [CI], 0.57-0.65) for the decision to treat patients at home and 0.76 (95% CI, 0.72-0.79) for the score (P = .003). Net reclassification improvement was 41% (P < .001). Integrated discrimination improvement was 0.034 for the score and 0.015 for the clinical decision (P < .001). CONCLUSIONS: Using 6 easily available variables, we identified outpatients with deep vein thrombosis at low risk for adverse events within the first week. These data may help to safely treat more patients at home. This score, however, should be validated.
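As a generic illustration of how a multivariable prognostic score of this kind can be derived and its discrimination (C-statistic) checked, the sketch below fits a logistic model on six binary predictors and reports the in-sample AUC. The predictor names mirror those listed in the abstract, but all data are simulated placeholders, not RIETE data.

```python
# Sketch: logistic model on six binary predictors plus its C-statistic (AUC);
# all data below are simulated placeholders, not registry data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 2000
predictors = ["heart_failure", "immobility", "recent_bleeding",
              "cancer", "renal_insufficiency", "abnormal_platelets"]
X = rng.binomial(1, 0.15, size=(n, len(predictors)))
# Simulated composite outcome driven by the predictors plus noise.
logit = -4 + X @ np.array([0.8, 0.6, 1.0, 0.9, 0.7, 0.5])
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print("coefficients:", dict(zip(predictors, np.round(model.coef_[0], 2))))
print(f"C-statistic (in-sample): {auc:.2f}")
```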
Abstract:
The aim of this study was to determine potential relationships between anthropometric parameters and athletic performance, with special consideration of repeated-sprint ability (RSA). Sixteen players of the senior male Qatar national soccer team performed a series of anthropometric and physical tests including countermovement jumps without (CMJ) and with free arms (CMJwA), a straight-line 20 m sprint, an RSA test (6 × 35 m with 10 s recovery) and an incremental field test. Significant (P < 0.05) relationships occurred between the muscle-to-bone ratio and both CMJ heights (r ranging from 0.56 to 0.69), as well as with all RSA-related variables (r < -0.53 for sprinting times and r = 0.54 for maximal sprinting speed), with the exception of the sprint decrement score (Sdec). The sum of six skinfolds and the adipose mass index showed large correlations with Sdec (r = 0.68, P < 0.01 and r = 0.55, P < 0.05, respectively) but not with total time (TT; r = 0.44 and 0.33, P > 0.05, respectively) or any standard athletic tests. Multiple regression analyses indicated that mid-thigh muscle cross-sectional area, the adipose index, straight-line 20 m time, maximal sprinting speed and CMJwA are the strongest predictors of Sdec (r(2) = 0.89) and TT (r(2) = 0.95) during our RSA test. In the Qatar national soccer team, players' power-related qualities and RSA are associated with a high muscular profile and low adiposity. This supports the relevance of explosive power for soccer players and the major importance of neuromuscular qualities in determining RSA.
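A minimal sketch of the kind of multiple regression described above (several anthropometric and performance predictors regressed on an RSA outcome), using simulated placeholder data rather than the squad's measurements:

```python
# Sketch: multiple linear regression of an RSA outcome on several predictors;
# the data are simulated placeholders, not the study's measurements.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 16  # squad size reported in the abstract
# Hypothetical predictors: thigh CSA, adiposity index, 20 m time, max speed, CMJwA.
X = rng.normal(size=(n, 5))
# Simulated sprint-decrement outcome loosely driven by the predictors.
sdec = X @ np.array([0.4, 0.5, 0.3, -0.4, -0.3]) + rng.normal(0, 0.3, n)

model = LinearRegression().fit(X, sdec)
print("R^2 =", round(model.score(X, sdec), 2))
print("coefficients:", np.round(model.coef_, 2))
```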
Abstract:
ABSTRACT: Pharmacogenetic tests and therapeutic drug monitoring may considerably improve the pharmacotherapy of depression. The aim of this study was to evaluate the relationship between the efficacy of mirtazapine (MIR) and the steady-state plasma concentrations of its enantiomers and metabolites in moderately to severely depressed patients, taking their pharmacogenetic status into account. Inpatients and outpatients (n = 45; mean age, 51 years; range, 19-79 years) with a major depressive episode received MIR for 8 weeks (30 mg/d on days 1-14 and 30-45 mg/d on days 15-56). Mirtazapine treatment resulted in a significant improvement in the mean Hamilton Depression Rating Scale total score at the end of the study (P < 0.0001). There was no evidence of a significant plasma concentration-clinical effectiveness relationship for any pharmacokinetic parameter. The enantiomers of MIR and its hydroxylated (OH-MIR) and demethylated (DMIR) metabolites in plasma samples on days 14 and 56 were influenced by sex and age. Nonsmokers (n = 28) had higher mean MIR plasma levels than smokers (n = 17): S(+)-enantiomer of MIR, 9.4 (SD, 3.9) versus 6.2 (SD, 5.5) ng/mL (P = 0.005); R(-)-enantiomer of MIR, 24.4 (SD, 6.5) versus 18.5 (SD, 4.1) ng/mL (P = 0.003). Only in nonsmokers did plasma levels of the S(+)-enantiomer of MIR and its metabolites depend on the CYP2D6 genotype; the high CYP1A2 activity seen in smokers therefore seems to mask the influence of the CYP2D6 genotype. In patients with the CYP2B6 *6/*6 genotype (n = 8), S-OH-MIR concentrations were higher than those in the other patients (n = 37). Although it is not known whether S-OH-MIR contributes to the therapeutic effect of MIR, the reduction in Hamilton scores was significantly (P = 0.016) more pronounced in the CYP2B6 *6/*6-genotyped patients at the end of the study. The role of CYP2B6 in the metabolism and effectiveness of MIR should be further investigated.
Abstract:
The cropping system influences the interception of water by plants, water storage in depressions on the soil surface, water infiltration into the soil, and runoff. The aim of this study was to quantify some hydrological processes under no-tillage cropping systems at the edge of a slope, in 2009 and 2010, in a Humic Dystrudept soil, with the following treatments: corn, soybean, and common bean grown alone, and corn intercropped with common bean. Treatments consisted of four simulated rainfall tests applied at different times, with a planned intensity of 64 mm h-1 and a duration of 90 min. The first test was applied 18 days after sowing, and the others at 39, 75 and 120 days after the first test. The timing of the simulated rainfall and the stage of the crop cycle affected the soil water content prior to the rain, the time runoff began and its peak flow and, thus, the surface hydrological processes. The depth of runoff and the depth of water intercepted by the crop + infiltrated into the soil + stored on the soil surface were affected by the cropping systems and by the rainfall applied at different times. The corn crop was the most effective treatment for controlling runoff, with a water loss ratio of 0.38, equivalent to 75% of the water loss ratio exhibited by common bean (0.51), the least effective treatment relative to the others. Total water loss by runoff decreased linearly with an increase in the time runoff began, regardless of the treatment; however, gravimetric soil water content increased linearly from the beginning to the end of the rainfall.
Abstract:
BACKGROUND: Chest pain can be caused by various conditions, with life-threatening cardiac disease being of greatest concern. Prediction scores to rule out coronary artery disease have been developed for use in emergency settings. We developed and validated a simple prediction rule for use in primary care. METHODS: We conducted a cross-sectional diagnostic study in 74 primary care practices in Germany. Primary care physicians recruited all consecutive patients who presented with chest pain (n = 1249) and recorded symptoms and findings for each patient (derivation cohort). An independent expert panel reviewed follow-up data obtained at six weeks and six months on symptoms, investigations, hospital admissions and medications to determine the presence or absence of coronary artery disease. Adjusted odds ratios of relevant variables were used to develop a prediction rule. We calculated measures of diagnostic accuracy for different cut-off values of the prediction score using data derived from another prospective primary care study (validation cohort). RESULTS: The prediction rule contained five determinants (age/sex, known vascular disease, patient assumes pain is of cardiac origin, pain is worse during exercise, and pain is not reproducible by palpation), with the score ranging from 0 to 5 points. The area under the receiver operating characteristic curve was 0.87 (95% confidence interval [CI] 0.83-0.91) for the derivation cohort and 0.90 (95% CI 0.87-0.93) for the validation cohort. The best overall discrimination was with a cut-off value of 3 (positive result, 3-5 points; negative result, ≤2 points), which had a sensitivity of 87.1% (95% CI 79.9%-94.2%) and a specificity of 80.8% (95% CI 77.6%-83.9%). INTERPRETATION: The prediction rule for coronary artery disease in primary care proved to be robust in the validation cohort. It can help to rule out coronary artery disease in patients presenting with chest pain in primary care.
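To make the cut-off evaluation above concrete, the sketch below computes sensitivity and specificity for a 0-5 point score dichotomized at ≥3 points. The scores and disease labels are simulated placeholders, not the derivation or validation cohorts.

```python
# Sketch: sensitivity and specificity of a 0-5 point score at a >=3 cut-off;
# the scores and disease labels below are simulated placeholders.
import numpy as np

rng = np.random.default_rng(4)
n = 1000
score = rng.integers(0, 6, n)        # hypothetical 0-5 point scores
# Simulated coronary artery disease status, more likely at higher scores.
cad = rng.random(n) < score / 6

positive = score >= 3                # cut-off evaluated in the abstract
tp = np.sum(positive & cad)
fn = np.sum(~positive & cad)
tn = np.sum(~positive & ~cad)
fp = np.sum(positive & ~cad)

print(f"sensitivity = {tp / (tp + fn):.1%}")
print(f"specificity = {tn / (tn + fp):.1%}")
```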