939 results for receiver operating characteristic curve


Relevance: 100.00%

Publisher:

Abstract:

BACKGROUND: The Outpatient Bleeding Risk Index (OBRI) and the Kuijer, RIETE and Kearon scores are clinical prognostic scores for bleeding in patients receiving oral anticoagulants for venous thromboembolism (VTE). We prospectively compared the performance of these scores in elderly patients with VTE. METHODS: In a prospective multicenter Swiss cohort study, we studied 663 patients aged ≥ 65 years with acute VTE. The outcome was a first major bleeding at 90 days. We classified patients into three categories of bleeding risk (low, intermediate and high) according to each score and dichotomized patients as high vs. low or intermediate risk. We calculated the area under the receiver-operating characteristic (ROC) curve, positive predictive values and likelihood ratios for each score. RESULTS: Overall, 28 out of 663 patients (4.2%, 95% confidence interval [CI] 2.8-6.0%) had a first major bleeding within 90 days. According to different scores, the rate of major bleeding varied from 1.9% to 2.1% in low-risk, from 4.2% to 5.0% in intermediate-risk and from 3.1% to 6.6% in high-risk patients. The discriminative power of the scores was poor to moderate, with areas under the ROC curve ranging from 0.49 to 0.60 (P = 0.21). The positive predictive values and positive likelihood ratios were low and varied from 3.1% to 6.6% and from 0.72 to 1.59, respectively. CONCLUSION: In elderly patients with VTE, existing bleeding risk scores do not have sufficient accuracy and power to discriminate between patients with VTE who are at a high risk of short-term major bleeding and those who are not.
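The positive predictive values and positive likelihood ratios reported above come from a standard 2x2 cross-tabulation of risk classification against observed bleeding. A minimal Python sketch, using hypothetical counts rather than the study's data:

```python
def ppv_and_lr_positive(tp, fp, fn, tn):
    """Positive predictive value and positive likelihood ratio from a
    2x2 table of predicted high risk vs. observed major bleeding."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    lr_plus = sensitivity / (1 - specificity)
    return ppv, lr_plus

# Hypothetical counts for illustration only (not the cohort's data):
ppv, lr = ppv_and_lr_positive(tp=10, fp=140, fn=18, tn=495)
```

With a 4% event rate, even a moderately specific score yields a small PPV and a likelihood ratio close to 1, which is the pattern the study reports.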


OBJECTIVES: The aim of this study was to evaluate new electrocardiographic (ECG) criteria for discriminating between incomplete right bundle branch block (RBBB) and the Brugada types 2 and 3 ECG patterns. BACKGROUND: Brugada syndrome can manifest as either the type 2 or type 3 pattern. The latter should be distinguished from incomplete RBBB, present in 3% of the population. METHODS: Thirty-eight patients with either type 2 or type 3 Brugada pattern who were referred for an antiarrhythmic drug challenge (AAD) were included. Before AAD, 2 angles were measured from ECG leads V1 and/or V2 showing incomplete RBBB: 1) α, the angle between a vertical line and the downslope of the r'-wave, and 2) β, the angle between the upslope of the S-wave and the downslope of the r'-wave. Baseline angle values, alone or combined with QRS duration, were compared between patients with negative and positive results on AAD. Receiver-operating characteristic curves were constructed to identify optimal discriminative cutoff values. RESULTS: The mean β angle was significantly smaller in the 14 patients with negative results on AAD than in the 24 patients with positive results on AAD (36 ± 20° vs. 62 ± 20°, p < 0.01). Its optimal cutoff value was 58°, which yielded a positive predictive value of 73% and a negative predictive value of 87% for conversion to a type 1 pattern on AAD; α was slightly less sensitive and specific than β. Combining the angles with QRS duration tended to improve discrimination. CONCLUSIONS: In patients with suspected Brugada syndrome, simple ECG criteria can enable discrimination between incomplete RBBB and types 2 and 3 Brugada patterns.
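Optimal discriminative cutoffs on a ROC curve are commonly chosen by maximizing Youden's J (sensitivity + specificity - 1). A minimal sketch with made-up angle values; the 58° cutoff in the study came from its own 38 patients, not from this toy data:

```python
def best_cutoff_youden(values_pos, values_neg):
    """Pick the threshold maximizing Youden's J = sensitivity + specificity - 1.
    'Positive' cases are those expected to show larger values."""
    best_j, best_t = -1.0, None
    for t in sorted(set(values_pos) | set(values_neg)):
        sens = sum(v >= t for v in values_pos) / len(values_pos)
        spec = sum(v < t for v in values_neg) / len(values_neg)
        j = sens + spec - 1
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

# Hypothetical beta angles (degrees): AAD-positive vs. AAD-negative patients
pos = [45, 58, 62, 70, 80, 85]
neg = [20, 25, 30, 36, 40, 55]
cutoff, j = best_cutoff_youden(pos, neg)
```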


PURPOSE: Early-onset sepsis (EOS) is one of the main causes for the admission of newborns to the neonatal intensive care unit. However, traditional infection markers are poor diagnostic markers of EOS. Pancreatic stone protein (PSP) is a promising sepsis marker in adults. The aim of this study was to investigate whether determining PSP improves the diagnosis of EOS in comparison with other infection markers. METHODS: This was a prospective multicentre study involving 137 infants with a gestational age of >34 weeks who were admitted with suspected EOS. PSP, procalcitonin (PCT), soluble human triggering receptor expressed on myeloid cells-1 (sTREM-1), macrophage migration inhibitory factor (MIF) and C-reactive protein (CRP) were measured at admission. Receiver-operating characteristic (ROC) curve analysis was performed. RESULTS: The level of PSP in infected infants was significantly higher than that in uninfected ones (median 11.3 vs. 7.5 ng/ml, respectively; p = 0.001). The ROC area under the curve was 0.69 [95 % confidence interval (CI) 0.59-0.80; p < 0.001] for PSP, 0.77 (95 % CI 0.66-0.87; p < 0.001) for PCT, 0.66 (95 % CI 0.55-0.77; p = 0.006) for CRP, 0.62 (0.51-0.73; p = 0.055) for sTREM-1 and 0.54 (0.41-0.67; p = 0.54) for MIF. PSP independently of PCT predicted EOS (p < 0.001), and the use of both markers concomitantly significantly increased the ability to diagnose EOS. A bioscore combining PSP (>9 ng/ml) and PCT (>2 ng/ml) was the best predictor of EOS (0.83; 95 % CI 0.74-0.93; p < 0.001) and resulted in a negative predictive value of 100 % and a positive predictive value of 71 %. CONCLUSIONS: In this prospective study, the diagnostic performance of PSP and PCT was superior to that of traditional markers and a combination bioscore improved the diagnosis of sepsis. Our findings suggest that PSP is a valuable biomarker in combination with PCT in EOS.
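A two-marker bioscore of the kind described (PSP > 9 ng/ml, PCT > 2 ng/ml) can be expressed as the count of markers above their thresholds; the predictive values then follow from a cross-tabulation. A sketch with hypothetical patients:

```python
def bioscore(psp, pct, psp_cut=9.0, pct_cut=2.0):
    """Two-marker bioscore: number of markers above threshold (0, 1, or 2)."""
    return int(psp > psp_cut) + int(pct > pct_cut)

def npv_ppv(scores, infected, positive_if=lambda s: s >= 1):
    """NPV and PPV when a score meeting positive_if is called test-positive."""
    tp = sum(1 for s, y in zip(scores, infected) if positive_if(s) and y)
    fp = sum(1 for s, y in zip(scores, infected) if positive_if(s) and not y)
    tn = sum(1 for s, y in zip(scores, infected) if not positive_if(s) and not y)
    fn = sum(1 for s, y in zip(scores, infected) if not positive_if(s) and y)
    return tn / (tn + fn), tp / (tp + fp)
```

A 100% NPV, as reported, means no infected infant fell below both thresholds; that property makes the bioscore useful for ruling out EOS.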


BACKGROUND: The diagnosis of hypertension in children is difficult because of the multiple sex-, age-, and height-specific thresholds to define elevated blood pressure (BP). Blood pressure-to-height ratio (BPHR) has been proposed to facilitate the identification of elevated BP in children. OBJECTIVE: We assessed the performance of BPHR at a single screening visit to identify children with hypertension, that is, sustained elevated BP. METHOD: In a school-based study conducted in Switzerland, BP was measured at up to three visits in 5207 children. Children were considered hypertensive if BP was elevated at all three visits. Sensitivity, specificity, negative predictive value (NPV), and positive predictive value (PPV) for the identification of hypertension were assessed for different thresholds of BPHR. The ability of BPHR at a single screening visit to discriminate children with and without hypertension was evaluated with receiver operating characteristic (ROC) curve analyses. RESULTS: The prevalence of systolic/diastolic hypertension was 2.2%. Systolic BPHR identified hypertension better than diastolic BPHR (area under the ROC curve: 0.95 vs. 0.84). The best performance was obtained with a systolic BPHR threshold set at 0.80 mmHg/cm (sensitivity: 98%; specificity: 85%; PPV: 12%; and NPV: 100%) and a diastolic BPHR threshold set at 0.45 mmHg/cm (sensitivity: 79%; specificity: 70%; PPV: 5%; and NPV: 99%). The PPV was higher among tall or overweight children. CONCLUSION: BPHR at a single screening visit performed well in identifying hypertension in children, although the low prevalence of hypertension led to a low PPV.
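The link between low prevalence and low PPV is Bayes' rule. Plugging in the abstract's own systolic figures (sensitivity 98%, specificity 85%, prevalence 2.2%) reproduces a PPV near the reported 12%:

```python
def ppv_from_prevalence(sens, spec, prev):
    """P(disease | positive test) via Bayes' rule."""
    return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

# Systolic BPHR threshold of 0.80 mmHg/cm at 2.2% prevalence:
ppv = ppv_from_prevalence(0.98, 0.85, 0.022)  # approx. 0.128
```

The same formula shows why the PPV rises in subgroups (tall or overweight children) where the prevalence of sustained hypertension is higher.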


Introduction: Responses to external stimuli are typically investigated by averaging peri-stimulus electroencephalography (EEG) epochs in order to derive event-related potentials (ERPs) across the electrode montage, under the assumption that signals that are related to the external stimulus are fixed in time across trials. We demonstrate the applicability of a single-trial model based on patterns of scalp topographies (De Lucia et al, 2007) that can be used for ERP analysis at the single-subject level. The model is able to classify new trials (or groups of trials) with minimal a priori hypotheses, using information derived from a training dataset. The features used for the classification (the topography of responses and their latency) can be neurophysiologically interpreted, because a difference in scalp topography indicates a different configuration of brain generators. An above-chance classification accuracy on test datasets implicitly demonstrates the suitability of this model for EEG data. Methods: The data analyzed in this study were acquired from two separate visual evoked potential (VEP) experiments. The first entailed passive presentation of checkerboard stimuli to each of the four visual quadrants (hereafter, "Checkerboard Experiment") (Plomp et al, submitted). The second entailed active discrimination of novel versus repeated line drawings of common objects (hereafter, "Priming Experiment") (Murray et al, 2004). Four subjects per experiment were analyzed, using approximately 200 trials per experimental condition. These trials were randomly separated into training (90%) and testing (10%) datasets in 10 independent shuffles. To perform the ERP analysis, we estimated the statistical distribution of voltage topographies by a Mixture of Gaussians (MofGs), which reduces our original dataset to a small number of representative voltage topographies.
We then statistically evaluated the degree of presence of these template maps across trials, and whether and when this differed across experimental conditions. Based on these differences, single trials or sets of a few single trials were classified as belonging to one or the other experimental condition. Classification performance was assessed using the Receiver Operating Characteristic (ROC) curve. Results: For the Checkerboard Experiment, contrasts entailed left vs. right visual field presentations for upper and lower quadrants, separately. The average posterior probabilities, indicating the presence of the computed template maps in time and across trials, revealed significant differences starting at ~60-70 ms post-stimulus. The average ROC curve area across all four subjects was 0.80 and 0.85 for upper and lower quadrants, respectively, and was in all cases significantly higher than chance (unpaired t-test, p<0.0001). In the Priming Experiment, we contrasted initial versus repeated presentations of visual object stimuli. Their posterior probabilities revealed significant differences, which started at 250 ms post-stimulus onset. The classification accuracy rates with single-trial test data were at chance level. We therefore considered sub-averages based on five single trials. We found that for three out of four subjects, classification rates were significantly above chance level (unpaired t-test, p<0.0001). Conclusions: The main advantage of the present approach is that it is based on topographic features that are readily interpretable along neurophysiologic lines. As these maps were previously normalized by the overall strength of the field potential on the scalp, a change in their presence across trials and between conditions necessarily reflects a change in the underlying generator configurations. The temporal periods of statistical difference between conditions were estimated for each training dataset for ten shuffles of the data.
Across the ten shuffles and in both experiments, we observed a high level of consistency in the temporal periods over which the two conditions differed. With this method we are able to analyze ERPs at the single-subject level, providing a novel tool for comparing normal electrophysiological responses with single cases that cannot be considered part of any cohort of subjects. This aspect promises to have a strong impact on both basic and clinical research.
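The ROC curve areas reported above can be computed directly from per-trial classifier scores via the Mann-Whitney interpretation of the AUC: the probability that a randomly chosen trial from one condition outranks one from the other (ties counting one half). A minimal sketch with illustrative scores:

```python
def auc_mann_whitney(pos_scores, neg_scores):
    """AUC as the probability that a random positive-condition trial scores
    higher than a random negative-condition trial (ties count 0.5)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Illustrative per-trial scores for two conditions:
auc = auc_mann_whitney([0.9, 0.8, 0.7, 0.4], [0.6, 0.5, 0.3, 0.2])  # 0.875
```

Chance level is 0.5, so the reported areas of 0.80 and 0.85 correspond to a substantial per-trial separation between conditions.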


Aim This study used data from temperate forest communities to assess: (1) five different stepwise selection methods with generalized additive models, (2) the effect of weighting absences to ensure a prevalence of 0.5, (3) the effect of limiting absences beyond the environmental envelope defined by presences, (4) four different methods for incorporating spatial autocorrelation, and (5) the effect of integrating an interaction factor defined by a regression tree on the residuals of an initial environmental model. Location State of Vaud, western Switzerland. Methods Generalized additive models (GAMs) were fitted using the grasp package (generalized regression analysis and spatial predictions, http://www.cscf.ch/grasp). Results Model selection based on cross-validation appeared to be the best compromise between model stability and performance (parsimony) among the five methods tested. Weighting absences returned models that perform better than models fitted with the original sample prevalence. This appeared to be mainly due to the impact of very low prevalence values on evaluation statistics. Removing zeroes beyond the range of presences on main environmental gradients changed the set of selected predictors, and potentially their response curve shape. Moreover, removing zeroes slightly improved model performance and stability when compared with the baseline model on the same data set. Incorporating a spatial trend predictor improved model performance and stability significantly. Even better models were obtained when including local spatial autocorrelation. A novel approach to include interactions proved to be an efficient way to account for interactions between all predictors at once. 
Main conclusions Models and spatial predictions of 18 forest communities were significantly improved by using either: (1) cross-validation as a model selection method, (2) weighted absences, (3) limited absences, (4) predictors accounting for spatial autocorrelation, or (5) a factor variable accounting for interactions between all predictors. The final choice of model strategy should depend on the nature of the available data and the specific study aims. Statistical evaluation is useful in searching for the best modelling practice. However, one should not neglect to consider the shapes and interpretability of response curves, as well as the resulting spatial predictions in the final assessment.
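Weighting absences so that the effective prevalence is 0.5, as in point (2), amounts to scaling each absence record by the presence/absence ratio. A minimal sketch of the idea (record-level 0/1 data assumed):

```python
def weights_for_equal_prevalence(y):
    """y: list of 0/1 presence records. Returns one weight per record so that
    the summed presence weight equals the summed absence weight
    (i.e. weighted prevalence = 0.5). Presences keep weight 1."""
    n_pres = sum(y)
    n_abs = len(y) - n_pres
    w_abs = n_pres / n_abs
    return [1.0 if yi == 1 else w_abs for yi in y]

# 2 presences, 6 absences: each absence gets weight 1/3
w = weights_for_equal_prevalence([1, 1, 0, 0, 0, 0, 0, 0])
```

This is why the authors note the gain is mainly an artefact of evaluation statistics at very low prevalence: the weighting changes the balance seen by the fitting routine, not the information content of the data.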


BACKGROUND: Prognosis prediction for resected primary colon cancer is based on the Tumour Node Metastasis (TNM) staging system. We investigated if four well-documented gene expression risk scores can improve patient stratification. METHODS: Microarray-based versions of the risk scores were applied to a large independent cohort of 688 stage II/III tumors from the PETACC-3 trial. Prognostic value for relapse-free survival (RFS), survival after relapse (SAR), and overall survival (OS) was assessed by regression analysis. Improvement over a reference prognostic model was assessed with the area under the curve (AUC) of receiver operating characteristic (ROC) curves. All statistical tests were two-sided, except for the AUC increase. RESULTS: All four risk scores (RSs) showed a statistically significant association (single-test, P < .0167) with OS or RFS in univariate models, but with HRs below 1.38 per interquartile range. Three scores were predictors of shorter RFS, one of shorter SAR. Each RS could only marginally improve an RFS or OS model with the known factors T-stage, N-stage, and microsatellite instability (MSI) status (AUC gains < 0.025 units). The pairwise interscore correlation was never high (maximal Spearman correlation = 0.563). A combined score showed a trend to higher prognostic value and a higher AUC increase for OS (HR = 1.74, 95% confidence interval [CI] = 1.44 to 2.10, P < .001, AUC from 0.6918 to 0.7321) and RFS (HR = 1.56, 95% CI = 1.33 to 1.84, P < .001, AUC from 0.6723 to 0.6945) than any single score. CONCLUSIONS: The four tested gene expression-based risk scores provide prognostic information but contribute only marginally to improving models based on established risk factors. A combination of the risk scores might provide more robust information. Predictors of RFS and SAR might need to be different.


In France and Finland, farmer's lung disease (FLD), a hypersensitivity pneumonitis common in agricultural areas, is mainly caused by Eurotium species. The presence of antibodies in patients' serum is an important criterion for diagnosis. Our study aimed to improve the serological diagnosis of FLD by using common fungal particles that pollute the farm environment as antigens. Fungal particles of the Eurotium species were observed in handled hay. A strain of Eurotium amstelodami was grown in vitro using selected culture media, and antigen extracts from sexual (ascospores), asexual (conidia), and vegetative (hyphae) forms were made. Antigens were tested by enzyme-linked immunosorbent assay (ELISA), which was used to test for immunoglobulin G antibodies from the sera of 17 FLD patients, 40 healthy exposed farmers, and 20 nonexposed controls. The antigens were compared by receiver operating characteristic analysis, and a threshold was then established. The ascospores contained in asci enclosed within cleistothecia were present in 38% of the hay blades observed; conidial heads of Aspergillus were less prevalent. The same protocol was followed to make the three antigen extracts. A comparison of the results for FLD patients and exposed controls showed the area under the curve to be 0.850 for the ascospore antigen, 0.731 for the conidia, and 0.690 for the hyphae. The cutoffs that we determined, taking the standard deviation of the measures into account, showed 67% sensitivity and 92% specificity with the ascospore antigen. In conclusion, the serological diagnosis of FLD by ELISA was improved by the adjunction of ascospore antigen.


OBJECTIVE: This study aimed to assess the impact of individual comorbid conditions as well as the weight assignment, predictive properties and discriminating power of the Charlson Comorbidity Index (CCI) on outcome in patients with acute coronary syndrome (ACS). METHODS: A prospective multicentre observational study (AMIS Plus Registry) from 69 Swiss hospitals with 29 620 ACS patients enrolled from 2002 to 2012. The main outcome measures were in-hospital and 1-year follow-up mortality. RESULTS: Of the patients, 27% were female (age 72.1 ± 12.6 years) and 73% were male (age 64.2 ± 12.9 years). Comorbidities were present in 46.8%, and these patients were less likely to receive guideline-recommended drug therapy and reperfusion. Heart failure (adjusted OR 1.88; 95% CI 1.57 to 2.25), metastatic tumours (OR 2.25; 95% CI 1.60 to 3.19), renal diseases (OR 1.84; 95% CI 1.60 to 2.11) and diabetes (OR 1.35; 95% CI 1.19 to 1.54) were strong predictors of in-hospital mortality. In this population, the CCI weighted the history of prior myocardial infarction higher (1 instead of -0.4, 95% CI -1.2 to 0.3 points) but heart failure (1 instead of 3.7, 95% CI 2.6 to 4.7) and renal disease (2 instead of 3.5, 95% CI 2.7 to 4.4) lower than the benchmark, where all comorbidities, age and gender were used as predictors. However, the model with CCI and age had identical discrimination to this benchmark (areas under the receiver operating characteristic curves were both 0.76). CONCLUSIONS: Comorbidities greatly influenced clinical presentation, therapies received and the outcome of patients admitted with ACS. Heart failure, diabetes, renal disease or metastatic tumours had a major impact on mortality. The CCI seems to be an appropriate prognostic indicator for in-hospital and 1-year outcomes in ACS patients. ClinicalTrials.gov Identifier: NCT01305785.
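For reference, the CCI is a weighted sum over a patient's comorbid conditions. A sketch using the standard published weights for a few of the conditions the study highlights (the study then contrasts such fixed weights with weights re-estimated on its own ACS cohort):

```python
# Standard published Charlson weights for a subset of conditions;
# the full index covers 17+ conditions, omitted here for brevity.
CCI_WEIGHTS = {
    "myocardial_infarction": 1,
    "heart_failure": 1,
    "renal_disease": 2,
    "diabetes": 1,
    "metastatic_tumour": 6,
}

def charlson_score(conditions):
    """Sum the weights of the patient's comorbid conditions."""
    return sum(CCI_WEIGHTS[c] for c in conditions)

# A patient with heart failure and renal disease scores 1 + 2 = 3
score = charlson_score(["heart_failure", "renal_disease"])
```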


OBJECTIVE: To better assess the diagnosis of an infection in patients presenting at an emergency department with peripheral blood leukocytosis (>10 × 10^9 cells/l) on laboratory testing. METHODS: We prospectively evaluated serum procalcitonin concentration (PCT), C-reactive protein (CRP), and erythrocyte sedimentation rate (ESR). Patients were divided into two groups according to their final diagnosis: patients with infection and those without infection. PCT, CRP, and ESR were compared between these groups. Sensitivity, specificity, positive predictive values, negative predictive values, receiver operating characteristic curves, and areas under the curves were calculated for each biological measurement. RESULTS: Out of 173 patients, 99 (57%) had a final diagnosis of systemic infection. If a cutoff point of 0.5 ng/ml is considered, procalcitonin concentration had a sensitivity of 0.57, a specificity of 0.85, a negative predictive value of 0.59, and a positive predictive value of 0.84 for the diagnosis of a systemic infection. Adding CRP or ESR to PCT gave no more information (p=0.84). CONCLUSIONS: Only about half of the patients attending the emergency department with leukocytosis were suffering from an infection. Determination of the procalcitonin level may be useful for these patients, particularly in the case of a value higher than 0.5 ng/ml.


ABSTRACT: BACKGROUND: Chest wall syndrome (CWS), the main cause of chest pain in primary care practice, is most often an exclusion diagnosis. We developed and evaluated a clinical prediction rule for CWS. METHODS: Data from a multicenter clinical cohort of consecutive primary care patients with chest pain were used (59 general practitioners, 672 patients). A final diagnosis was determined after 12 months of follow-up. We used the literature and bivariate analyses to identify candidate predictors, and multivariate logistic regression was used to develop a clinical prediction rule for CWS. We used data from a German cohort (n = 1212) for external validation. RESULTS: From bivariate analyses, we identified six variables characterizing CWS: thoracic pain (neither retrosternal nor oppressive), stabbing, well localized pain, no history of coronary heart disease, absence of general practitioner's concern, and pain reproducible by palpation. This last variable accounted for 2 points in the clinical prediction rule, the others for 1 point each; the total score ranged from 0 to 7 points. The area under the receiver operating characteristic (ROC) curve was 0.80 (95% confidence interval 0.76-0.83) in the derivation cohort (specificity: 89%; sensitivity: 45%; cut-off set at 6 points). Among all patients presenting with CWS (n = 284), 71% (n = 201) had pain reproducible by palpation and 45% (n = 127) were correctly diagnosed. For a subset (n = 43) of these correctly classified CWS patients, 65 additional investigations (30 electrocardiograms, 16 thoracic radiographies, 10 laboratory tests, eight specialist referrals, one thoracic computed tomography) had been performed to achieve diagnosis. False positives (n = 41) included three patients with stable angina (1.8% of all positives). In external validation, the area under the ROC curve was 0.76 (95% confidence interval 0.73-0.79), with a sensitivity of 22% and a specificity of 93%.
CONCLUSIONS: This CWS score offers a useful complement to the usual process of diagnosing CWS by exclusion. Indeed, for the 127 patients presenting with CWS and correctly classified by our clinical prediction rule, 65 additional tests and exams could have been avoided. However, the reproduction of chest pain by palpation, the most important characteristic for diagnosing CWS, is not pathognomonic.
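The scoring rule described above maps directly to a lookup table; the six items and their points are taken from the abstract itself (the item names below are paraphrased identifiers, not the authors' wording):

```python
# Six predictors from the derivation cohort; pain reproducible by
# palpation scores 2 points, the others 1 point each (total 0-7).
CWS_ITEMS = {
    "thoracic_pain_neither_retrosternal_nor_oppressive": 1,
    "stabbing_pain": 1,
    "well_localized_pain": 1,
    "no_history_of_coronary_heart_disease": 1,
    "absence_of_gp_concern": 1,
    "pain_reproducible_by_palpation": 2,
}

def cws_score(findings):
    """Sum points for the findings present; a score >= 6 (the derivation
    cut-off) suggests chest wall syndrome."""
    return sum(CWS_ITEMS[f] for f in findings)
```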


BACKGROUND: We investigated changes in biomarkers of liver disease in HIV-HCV-coinfected individuals during successful combination antiretroviral therapy (cART) compared to changes in biomarker levels during untreated HIV infection and to HIV-monoinfected individuals. METHODS: Non-invasive biomarkers of liver disease (hyaluronic acid [HYA], aspartate aminotransferase-to-platelet ratio index [APRI], Fibrosis-4 [FIB-4] index and cytokeratin-18 [CK-18]) were correlated with liver histology in 49 HIV-HCV-coinfected patients. Changes in biomarkers over time were then assessed longitudinally in HIV-HCV-coinfected patients during successful cART (n=58), during untreated HIV-infection (n=59), and in HIV-monoinfected individuals (n=17). The median follow-up time was 3.4 years on cART. All analyses were conducted before starting HCV treatment. RESULTS: Non-invasive biomarkers of liver disease correlated significantly with the histological METAVIR stage (P<0.002 for all comparisons). The mean ± SD area under the receiver operating characteristic (AUROC) curve values for advanced fibrosis (≥F3 METAVIR) for HYA, APRI, FIB-4 and CK-18 were 0.86 ± 0.05, 0.84 ± 0.08, 0.80 ± 0.09 and 0.81 ± 0.07, respectively. HYA, APRI and CK-18 levels were higher in HIV-HCV-coinfected compared to HIV-monoinfected patients (P<0.01). In the first year on cART, APRI and FIB-4 scores decreased (-35% and -33%, respectively; P=0.1), mainly due to the reversion of HIV-induced thrombocytopaenia, whereas HYA and CK-18 levels remained unchanged. During long-term cART, there were only small changes (<5%) in median biomarker levels. Median biomarker levels changed <3% during untreated HIV-infection. Overall, 3 patients died from end-stage liver disease, and 10 from other causes. CONCLUSIONS: Biomarkers of liver disease highly correlated with fibrosis in HIV-HCV-coinfected individuals and did not change significantly during successful cART.
These findings suggest a slower than expected liver disease progression in many HIV-HCV-coinfected individuals, at least during successful cART.
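Two of the indices used above have simple closed forms, which also explains why reversing HIV-induced thrombocytopaenia lowers them: both place the platelet count in the denominator. Standard formulas, sketched with AST/ALT in IU/l and platelets in 10^9/l (values below are illustrative, not the cohort's):

```python
import math

def apri(ast, ast_uln, platelets):
    """APRI = (AST / upper limit of normal) x 100 / platelet count (10^9/l)."""
    return (ast / ast_uln) * 100 / platelets

def fib4(age, ast, alt, platelets):
    """FIB-4 = age x AST / (platelet count x sqrt(ALT))."""
    return age * ast / (platelets * math.sqrt(alt))

# Illustrative patient: AST 80 IU/l (ULN 40), ALT 64 IU/l,
# platelets 100 x 10^9/l, age 50
a = apri(80, 40, 100)        # 2.0
f = fib4(50, 80, 64, 100)    # 5.0
```

As platelets recover on cART (say from 100 to 200), both indices fall, even with unchanged transaminases.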


Objective: To evaluate the agreement between multislice CT (MSCT) and intravascular ultrasound (IVUS) in assessing the in-stent lumen diameters and lumen areas of left main coronary artery (LMCA) stents. Design: Prospective, observational single centre study. Setting: A single tertiary referral centre. Patients: Consecutive patients with LMCA stenting, excluding patients with atrial fibrillation and chronic renal failure. Interventions: MSCT and IVUS imaging at 9-12 months of follow-up were performed for all patients. Main outcome measures: Agreement between MSCT and IVUS minimum luminal area (MLA) and minimum luminal diameter (MLD). A receiver operating characteristic (ROC) curve was plotted to find the MSCT cut-off point for diagnosing binary restenosis equivalent to 6 mm2 by IVUS. Results: 52 patients were analysed. Passing-Bablok regression analysis obtained a β coefficient of 0.786 (0.586 to 1.071) for MLA and 1.250 (0.936 to 1.667) for MLD, ruling out proportional bias. The α coefficient was −3.588 (−8.686 to −0.178) for MLA and −1.713 (−3.583 to −0.257) for MLD, indicating an underestimation trend of MSCT. The ROC curve identified an MLA ≤4.7 mm2 as the best threshold to assess in-stent restenosis by MSCT. Conclusions: Agreement between MSCT and IVUS to assess in-stent MLA and MLD for LMCA stenting is good. An MLA of 4.7 mm2 by MSCT is the best threshold to assess binary restenosis. MSCT imaging can be considered in selected patients to assess LMCA in-stent restenosis.
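Passing-Bablok regression, used above to test for systematic (α) and proportional (β) bias between MSCT and IVUS, is a rank-based fit closely related to the median-of-pairwise-slopes (Theil-Sen) estimator. A simplified Theil-Sen sketch of the idea; the full Passing-Bablok procedure additionally shifts the slope index to correct for ties and negative slopes, which this sketch omits:

```python
def theil_sen(xs, ys):
    """Median-of-pairwise-slopes regression: slope is the median of all
    pairwise slopes, intercept the median residual at that slope."""
    slopes = sorted((ys[j] - ys[i]) / (xs[j] - xs[i])
                    for i in range(len(xs))
                    for j in range(i + 1, len(xs))
                    if xs[j] != xs[i])
    m = len(slopes)
    beta = slopes[m // 2] if m % 2 else 0.5 * (slopes[m // 2 - 1] + slopes[m // 2])
    alpha = sorted(y - beta * x for x, y in zip(xs, ys))[len(xs) // 2]
    return alpha, beta

# Exact line y = 2x + 1 is recovered exactly
alpha, beta = theil_sen([1, 2, 3, 4], [3, 5, 7, 9])
```

In method-comparison terms, a β confidence interval containing 1 argues against proportional bias, and an α interval containing 0 against constant bias; here the β intervals contain 1 but the α intervals sit below 0, matching the reported underestimation trend of MSCT.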


We analysed the relationship between changes in land cover patterns and the Eurasian otter occurrence over the course of about 20 years (1985-2006) using multi-temporal Species Distribution Models (SDMs). The study area includes five river catchments covering most of the otter's Italian range. Land cover and topographic data were used as proxies of the ecological requirements of the otter within a 300-m buffer around river courses. We used species presence, pseudo-absence data, and environmental predictors to build past (1985) and current (2006) SDMs by applying an ensemble procedure through the BIOMOD modelling package. The performance of each model was evaluated by measuring the area under the curve (AUC) of the receiver-operating characteristic (ROC). Multi-temporal analyses of species distribution and land cover maps were performed by comparing the maps produced for 1985 and 2006. The ensemble procedure provided a good overall modelling accuracy, revealing that elevation and slope affected the otter's distribution in the past; in contrast, land cover predictors, such as cultivations and forests, were more important in the present period. During the transition period, 20.5% of the area became suitable, with 76% of the new otter presence data being located in these newly available areas. The multi-temporal analysis suggested that the quality of otter habitat improved in the last 20 years owing to the expansion of forests and to the reduction of cultivated fields in riparian belts. The evidence presented here stresses the great potential of riverine habitat restoration and environmental management for the future expansion of the otter in Italy.


BACKGROUND: Minimal change disease (MCD) and focal segmental glomerulosclerosis (FSGS) are the most common causes of idiopathic nephrotic syndrome (INS). We have evaluated the reliability of urinary neutrophil-gelatinase-associated lipocalin (uNGAL), urinary alpha1-microglobulin (uα1M) and urinary N-acetyl-beta-D-glucosaminidase (uβNAG) as markers for differentiating MCD from FSGS. We have also evaluated whether these proteins are associated with INS relapses or with the glomerular filtration rate (GFR). METHODS: The patient cohort comprised 35 children with MCD and nine with FSGS; 19 healthy age-matched children were included in the study as controls. Of the 35 patients, 28 were in remission (21 MCD, 7 FSGS) and 16 were in relapse (14 MCD, 2 FSGS). The prognostic accuracies of these proteins were assessed by receiver operating characteristic (ROC) curve analyses. RESULTS: The level of uNGAL, indexed or not to urinary creatinine (uCreat), was significantly different between children with INS and healthy children (p = 0.02), between healthy children and those with FSGS (p = 0.007) and between children with MCD and those with FSGS (p = 0.01). It was not significantly correlated with proteinuria or GFR levels. The ROC curve analysis showed that a cut-off value of 17 ng/mg for the uNGAL/uCreat ratio could be used to distinguish MCD from FSGS with a sensitivity of 0.77 and specificity of 0.78. uβNAG was not significantly different in patients with MCD and those with FSGS (p = 0.86). Only uα1M, indexed or not to uCreat, was significantly (p < 0.001) higher for patients in relapse compared to those in remission. CONCLUSIONS: Our results indicate that in our patient cohort uNGAL was a reliable biomarker for differentiating MCD from FSGS independently of proteinuria or GFR levels.