939 results for receiver operating characteristic curve
Abstract:
Despite its high incidence, the etiology of patellofemoral pain remains unclear. No prior study has compared surface electromyography frequency-domain parameters with the time-domain variables classically used to analyze patellofemoral pain. Thirty-one women with patellofemoral pain and twenty-eight pain-free women were recruited. Each participant was asked to descend a seven-step staircase, and data from five successful trials were collected. During the task, vastus medialis and vastus lateralis muscle activity was monitored by surface electromyography. The data were processed and analyzed as four frequency-domain variables (median frequency and low, medium and high frequency bands) and three time-domain variables (automatic, cross-correlation and visual onset between the vastus medialis and vastus lateralis muscles). Reliability analyses, Receiver Operating Characteristic curves and regression models were performed. The medium frequency band was the most reliable variable and differed between groups for both muscles; it also demonstrated the best sensitivity and specificity values: 72% and 69% for the vastus medialis and 68% and 62% for the vastus lateralis, respectively. The frequency-domain variables predicted the pain of individuals with patellofemoral pain better than the time-domain variables, explaining 26% for the vastus medialis and 20% for the vastus lateralis, against only 7% for the time-domain variables. The frequency-domain parameters presented greater reliability, diagnostic accuracy and capacity to predict pain than the time-domain variables during stair descent and might be a useful tool for diagnosing individuals with patellofemoral pain.
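The sensitivity and specificity figures above come from dichotomizing an electromyographic variable at a cut-off. A minimal sketch of that computation, using made-up medium-frequency-band values rather than the study's data:

```python
def sens_spec(values, labels, cutoff):
    """Classify value >= cutoff as positive (pain group).

    labels: 1 = patellofemoral pain group, 0 = pain-free group.
    Returns (sensitivity, specificity).
    """
    tp = sum(1 for v, y in zip(values, labels) if y == 1 and v >= cutoff)
    fn = sum(1 for v, y in zip(values, labels) if y == 1 and v < cutoff)
    tn = sum(1 for v, y in zip(values, labels) if y == 0 and v < cutoff)
    fp = sum(1 for v, y in zip(values, labels) if y == 0 and v >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

# Toy medium-frequency-band values (arbitrary units), not the study's data
values = [0.9, 0.8, 0.5, 0.6, 0.7, 0.4, 0.3, 0.2]
labels = [1,   1,   1,   1,   0,   0,   0,   0]
sens, spec = sens_spec(values, labels, cutoff=0.55)   # 0.75 and 0.75 here
```

Sweeping the cutoff over all observed values and plotting sensitivity against 1 − specificity traces the ROC curve itself.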
Abstract:
Background The application and better understanding of traditional and new breast tumor biomarkers and prognostic factors are increasing because they can identify individuals at high risk of breast cancer, who may benefit from preventive interventions. Biomarkers can also make it possible for physicians to design an individualized treatment for each patient. Previous studies showed that trace elements (TEs) determined by X-Ray Fluorescence (XRF) techniques are found in significantly higher concentrations in neoplastic breast tissues (malignant and benign) than in normal tissues. The aim of this work was to evaluate the potential of TEs, determined by the Energy Dispersive X-Ray Fluorescence (EDXRF) technique, as biomarkers and prognostic factors in breast cancer. Methods Using EDXRF, we determined Ca, Fe, Cu, and Zn trace element concentrations in 106 samples of normal and breast cancer tissues. Cut-off values for each TE were determined through Receiver Operating Characteristic (ROC) analysis of the TE distributions. These values were used to set positive or negative expression, which was subsequently correlated with clinical prognostic factors through Fisher's exact test and the chi-square test. Kaplan-Meier survival curves were also evaluated to assess the effect of TE expression on overall patient survival. Results Concentrations of TEs are higher in neoplastic tissues (malignant and benign) than in normal tissues. ROC analysis showed that TEs can be considered tumor biomarkers because, after establishing a cut-off value, it was possible to classify different tissues as normal or neoplastic, as well as different types of cancer. TE expression was found to be statistically correlated with age and menstrual status.
The survival curves estimated by the Kaplan-Meier method showed that patients with positive expression for Cu presented poor overall survival (p < 0.001). Conclusions This study suggests that TE expression has great potential as a tumor biomarker, since it proved to be an effective tool for distinguishing different types of breast tissue and for discriminating malignant from benign tumors. The expression of all TEs was found to be statistically correlated with well-known prognostic factors for breast cancer. Copper also showed a statistical correlation with overall survival.
Abstract:
Background: Lynch syndrome (LS) is the most common form of inherited predisposition to colorectal cancer (CRC), accounting for 2-5% of all CRC. LS is an autosomal dominant disease characterized by mutations in the mismatch repair genes mutL homolog 1 (MLH1), mutS homolog 2 (MSH2), postmeiotic segregation increased 1 (PMS1), postmeiotic segregation increased 2 (PMS2) and mutS homolog 6 (MSH6). Mutation risk prediction models can be incorporated into clinical practice, facilitating the decision-making process and identifying individuals for molecular investigation. This is extremely important in countries with limited economic resources. This study aims to evaluate the sensitivity and specificity of five predictive models for germline mutations in repair genes in a sample of individuals with suspected Lynch syndrome. Methods: Blood samples from 88 patients were analyzed by sequencing the MLH1, MSH2 and MSH6 genes. The probability of detecting a mutation was calculated using the PREMM, Barnetson, MMRpro, Wijnen and Myriad models. To evaluate the sensitivity and specificity of the models, receiver operating characteristic curves were constructed. Results: Among the 88 patients included in this analysis, 31 mutations were identified: 16 in the MSH2 gene, 15 in the MLH1 gene, and no pathogenic mutations in the MSH6 gene. The AUCs for the PREMM (0.846), Barnetson (0.850), MMRpro (0.821) and Wijnen (0.807) models did not differ significantly. The Myriad model presented a lower AUC (0.704) than the four other models. Considering thresholds of >= 5%, the models' sensitivity varied between 1 (Myriad) and 0.87 (Wijnen), and specificity ranged from 0 (Myriad) to 0.38 (Barnetson). Conclusions: The Barnetson, PREMM, MMRpro and Wijnen models present similar AUCs. The AUC of the Myriad model is statistically inferior to the four other models.
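The AUCs compared above can be computed without explicitly tracing each ROC curve: the AUC equals the probability that a randomly chosen mutation carrier receives a higher model score than a randomly chosen non-carrier (the Mann-Whitney construction). A sketch with hypothetical scores, not the study's model outputs:

```python
def auc(carrier_scores, noncarrier_scores):
    """AUC via the Mann-Whitney construction: the probability that a
    randomly chosen carrier gets a higher predicted risk than a
    randomly chosen non-carrier (ties count one half)."""
    wins = ties = 0
    for c in carrier_scores:
        for n in noncarrier_scores:
            if c > n:
                wins += 1
            elif c == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(carrier_scores) * len(noncarrier_scores))

# Hypothetical predicted mutation probabilities
carriers    = [0.9, 0.7, 0.4]
noncarriers = [0.5, 0.3, 0.2]
model_auc = auc(carriers, noncarriers)   # 8 of 9 pairs ranked correctly
```

Running this pairwise comparison once per model gives directly comparable AUCs like those reported for PREMM, Barnetson, MMRpro, Wijnen and Myriad.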
Abstract:
Background: Cryptococcus neoformans causes meningitis and disseminated infection in healthy individuals, but more commonly in hosts with defective immune responses. Cell-mediated immunity is an important component of the immune response to a great variety of infections, including yeast infections. We aimed to evaluate a lymphocyte transformation assay specific to Cryptococcus neoformans in order to identify immunodeficiency associated with neurocryptococcosis (NCC) as a primary cause of the mycosis. Methods: Healthy volunteers, poultry growers, and HIV-seronegative patients with neurocryptococcosis were tested for cellular immune response. Cryptococcal meningitis was diagnosed by India ink staining of cerebrospinal fluid and a cryptococcal antigen test (Immunomycol-Inc, SP, Brazil). Isolated peripheral blood mononuclear cells were stimulated with C. neoformans antigen, C. albicans antigen, and pokeweed mitogen. The amount of 3H-thymidine incorporated was assessed, and the results were expressed as stimulation index (SI) and log SI, sensitivity, specificity, and cut-off value (receiver operating characteristic curve). We applied unpaired Student t tests to compare data and considered differences significant at p<0.05. Results: The lymphocyte transformation assay showed low capacity, with all stimuli, for classifying patients as responders and non-responders. The response to heat-killed antigen in patients with neurocryptococcosis was not affected by CD4+ T cell count, and the intensity of the response did not correlate with the clinical evolution of neurocryptococcosis. Conclusion: Response to the lymphocyte transformation assay should be analyzed based on a normal range and using more than one stimulator. The use of a cut-off value to classify patients with neurocryptococcosis is inadequate. Statistical analysis should be based on the log transformation of SI. A more purified antigen for evaluating the specific response to C. neoformans is needed.
Abstract:
Objective: To determine the accuracy of the Timed Up and Go Test (TUGT) for screening the risk of falls among community-dwelling elderly individuals. Method: This is a prospective cohort study of 63 community-dwelling elderly individuals, sampled randomly by lots without replacement and stratified proportionally by gender. Individuals who reported Parkinson's disease, a history of transient ischemic attack or stroke, who scored below the education-adjusted expected value on the Mini Mental State Exam, who used a wheelchair, or who reported a single fall in the previous six months were excluded. The TUGT, a mobility test, was the measure of interest, and the occurrence of falls was the outcome. The performance of basic activities of daily living (ADL) and instrumental activities of daily living (IADL) was determined through the Older American Resources and Services questionnaire, and socio-demographic and clinical data were collected through additional questionnaires. Receiver Operating Characteristic curves were used to analyze the sensitivity and specificity of the TUGT. Results: Elderly individuals who fell had greater difficulties in ADL and IADL (p<0.01) and a slower performance on the TUGT (p=0.02). No differences were found in socio-demographic and clinical characteristics between fallers and non-fallers. Weighing the different sensitivities and specificities, the best predictive value for discriminating elderly individuals who fell was 12.47 seconds [(RR = 3.2) 95% CI: 1.3-7.7]. Conclusions: The TUGT proved to be an accurate measure for screening the risk of falls among elderly individuals. Although different from the values reported in the international literature, the 12.47-second cutoff point seems to be a better predictive value for Brazilian elderly individuals.
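A cut-off such as the 12.47-second value above is typically chosen by scanning candidate thresholds along the ROC curve and maximizing Youden's J (sensitivity + specificity − 1). A sketch with invented TUGT times, not the study's data:

```python
def best_cutoff(times, fell):
    """Return the TUGT cut-off (seconds) maximizing Youden's J.

    Times at or above the cut-off screen positive for fall risk.
    fell: True for participants who fell during follow-up.
    """
    best_j, best_c = -1.0, None
    for c in sorted(set(times)):
        tp = sum(1 for t, f in zip(times, fell) if f and t >= c)
        fn = sum(1 for t, f in zip(times, fell) if f and t < c)
        tn = sum(1 for t, f in zip(times, fell) if not f and t < c)
        fp = sum(1 for t, f in zip(times, fell) if not f and t >= c)
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if j > best_j:
            best_j, best_c = j, c
    return best_c

# Invented data: fallers tend to be slower on the TUGT
times = [14.0, 13.1, 12.5, 11.0, 10.2, 9.8]
fell  = [True, True, True, False, False, False]
cutoff = best_cutoff(times, fell)   # 12.5 s separates the groups here
```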
Abstract:
Objective To evaluate changes in tissue perfusion parameters in dogs with severe sepsis/septic shock in response to goal-directed hemodynamic optimization in the ICU, and their relation to outcome. Design Prospective observational study. Setting ICU of a veterinary university medical center. Animals Thirty dogs with severe sepsis or septic shock caused by pyometra who underwent surgery and were admitted to the ICU. Measurements and Main Results Severe sepsis was defined as the presence of sepsis plus sepsis-induced dysfunction of one or more organs. Septic shock was defined as the presence of severe sepsis plus hypotension not reversed with fluid resuscitation. After the presumptive diagnosis of sepsis secondary to pyometra, blood samples were collected and clinical findings were recorded. Volume resuscitation with 0.9% saline solution and antimicrobial therapy were initiated. Following abdominal ultrasonography and confirmation of increased uterine volume, dogs underwent corrective surgery. After surgery, the animals were admitted to the ICU, where resuscitation was guided by clinical parameters, central venous oxygen saturation (ScvO2), lactate, and base deficit. ScvO2, lactate, and base deficit on ICU admission were each independently related to death (P = 0.001, P = 0.030, and P < 0.001, respectively). ScvO2 and base deficit were found to be the best discriminators between survivors and nonsurvivors as assessed by receiver operating characteristic curve analysis. Conclusion Our study suggests that ScvO2 and base deficit are useful in predicting the prognosis of dogs with severe sepsis and septic shock; animals with a higher ScvO2 and lower base deficit at ICU admission have a lower probability of death.
Abstract:
The autoregressive (AR) estimator, a non-parametric method, is used to analyze functional magnetic resonance imaging (fMRI) data. The same method has been used successfully in several other kinds of time-series analysis. It uses only the available experimental data points to estimate the most plausible power spectrum compatible with them, with no assumptions about non-measured points. The time series obtained from fMRI block-paradigm data is analyzed by the AR method to determine the active brain regions involved in the processing of a given stimulus. This method is considerably more reliable than the fast Fourier transform or the parametric methods. The time series corresponding to each image pixel is analyzed with the AR estimator and the corresponding poles are obtained. The pole distribution gives the shape of the power spectrum, and pixels with poles at the stimulation frequency are considered active regions. The method was applied to simulated and real data; its superiority is shown by receiver operating characteristic curves obtained from the simulated data.
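The pole-based detection described above can be sketched as follows: fit AR coefficients via the Yule-Walker equations, obtain the poles as roots of the characteristic polynomial, and read off their frequencies. This is an illustrative reconstruction with synthetic data, not the authors' implementation:

```python
import numpy as np

def ar_pole_freqs(x, order, fs):
    """Fit an AR(order) model by the Yule-Walker equations and return the
    frequencies (Hz) of its poles; a pole near the stimulation frequency
    would flag the pixel as active."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    # biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])          # AR coefficients
    poles = np.roots(np.concatenate(([1.0], -a)))   # roots of z^p - a1 z^(p-1) - ...
    return np.angle(poles) * fs / (2 * np.pi)

# Synthetic "pixel" time series: block-paradigm response at 0.05 Hz,
# sampled at 0.5 Hz, plus noise (all values invented for illustration)
fs = 0.5
t = np.arange(200) / fs
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 0.05 * t) + 0.1 * rng.standard_normal(200)
freqs = ar_pole_freqs(x, order=4, fs=fs)
# a conjugate pole pair should sit near the 0.05 Hz stimulation frequency
```

Repeating this per pixel and thresholding the distance between the nearest pole and the stimulation frequency yields the activation map evaluated by the ROC curves.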
Abstract:
Abstract Background Smear-negative pulmonary tuberculosis (SNPT) accounts for 30% of pulmonary tuberculosis cases reported yearly in Brazil. This study aimed to develop a prediction model for SNPT for outpatients in areas with scarce resources. Methods The study enrolled 551 patients with clinical-radiological suspicion of SNPT in Rio de Janeiro, Brazil. The original data were divided into two equivalent samples for generation and validation of the prediction models. Symptoms, physical signs and chest X-rays were used to construct logistic regression and classification and regression tree models. From the logistic regression, we generated a clinical and radiological prediction score. The area under the receiver operating characteristic curve, sensitivity, and specificity were used to evaluate the models' performance in both the generation and validation samples. Results It was possible to generate predictive models for SNPT with sensitivity ranging from 64% to 71% and specificity ranging from 58% to 76%. Conclusion The results suggest that these models might be useful as screening tools for estimating the risk of SNPT, optimizing the utilization of more expensive tests and avoiding the costs of unnecessary anti-tuberculosis treatment. These models might be cost-effective tools in a health care network with a hierarchical distribution of scarce resources.
Abstract:
Arterial pressure-based cardiac output monitors (APCOs) are increasingly used as alternatives to thermodilution. Validation of these evolving technologies in high-risk surgery is still ongoing. In liver transplantation, FloTrac-Vigileo (Edwards Lifesciences) has limited correlation with thermodilution, whereas LiDCO Plus (LiDCO Ltd.) has not been tested intraoperatively. Our goal was to directly compare the 2 proprietary APCO algorithms as alternatives to pulmonary artery catheter thermodilution in orthotopic liver transplantation (OLT). The cardiac index (CI) was measured simultaneously in 20 OLT patients at prospectively defined surgical landmarks with the LiDCO Plus monitor (CI(L)) and the FloTrac-Vigileo monitor (CI(V)). LiDCO Plus was calibrated according to the manufacturer's instructions. FloTrac-Vigileo did not require calibration. The reference CI was derived from pulmonary artery catheter intermittent thermodilution (CI(TD)). CI(V)-CI(TD) bias ranged from -1.38 (95% confidence interval = -2.02 to -0.75 L/minute/m(2), P = 0.02) to -2.51 L/minute/m(2) (95% confidence interval = -3.36 to -1.65 L/minute/m(2), P < 0.001), and CI(L)-CI(TD) bias ranged from -0.65 (95% confidence interval = -1.29 to -0.01 L/minute/m(2), P = 0.047) to -1.48 L/minute/m(2) (95% confidence interval = -2.37 to -0.60 L/minute/m(2), P < 0.01). For both APCOs, bias to CI(TD) was correlated with the systemic vascular resistance index, with a stronger dependence for FloTrac-Vigileo. The capability of the APCOs for tracking changes in CI(TD) was assessed with a 4-quadrant plot for directional changes and with receiver operating characteristic curves for specificity and sensitivity. The performance of both APCOs was poor in detecting increases and fair in detecting decreases in CI(TD). In conclusion, the calibrated and uncalibrated APCOs perform differently during OLT. 
Although the calibrated APCO is less influenced by changes in the systemic vascular resistance, neither device can be used interchangeably with thermodilution to monitor cardiac output during liver transplantation.
Abstract:
The prognostic relevance of a quantitative intracoronary occlusive electrocardiographic (ECG) ST-segment shift and its determinants have not been investigated in humans. In 765 patients with chronic stable coronary artery disease, the following simultaneous quantitative measurements were obtained during a 1-minute coronary balloon occlusion: intracoronary ECG ST-segment shift (recorded by angioplasty guidewire), mean aortic pressure, mean distal coronary pressure, and mean central venous pressure (CVP). The collateral flow index (CFI) was calculated as follows: (mean distal coronary pressure minus CVP)/(mean aortic pressure minus CVP). During an average follow-up of 50 ± 34 months, the cumulative all-cause mortality rate was significantly lower in the group with an ST-segment shift <0.1 mV (n = 89) than in the group with an ST-segment shift ≥0.1 mV (n = 676, p = 0.0211). Factors independently related to an intracoronary occlusive ECG ST-segment shift <0.1 mV (r(2) = 0.189, p <0.0001) were high CFI (p <0.0001), intracoronary occlusive RR interval (p = 0.0467), right coronary artery as the ischemic region (p <0.0001), and absence of arterial hypertension (p = 0.0132). "High" CFI according to receiver operating characteristic analysis was ≥0.217 (area under the receiver operating characteristic curve 0.647, p <0.0001). In conclusion, the absence of an ECG ST-segment shift during brief coronary occlusion in patients with chronic coronary artery disease conveys decreased mortality and is directly influenced by a well-developed collateral supply to the right versus left coronary ischemic region and by the absence of systemic hypertension in the patient's history.
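The collateral flow index defined above is a simple pressure ratio; a sketch with hypothetical pressures (the mmHg values below are invented for illustration):

```python
def collateral_flow_index(p_distal, p_aortic, cvp):
    """CFI = (mean distal coronary pressure - CVP) /
             (mean aortic pressure - CVP)."""
    return (p_distal - cvp) / (p_aortic - cvp)

# Hypothetical occlusive measurements: distal 28, aortic 92, CVP 8 mmHg
cfi = collateral_flow_index(28.0, 92.0, 8.0)      # 20/84, about 0.24
high_collateral = cfi >= 0.217   # the "high CFI" threshold from the ROC analysis
```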
Abstract:
Background: In most patients with chronic heart failure (CHF), endurance training improves exercise capacity. However, some patients do not respond favourably. The purpose of this study was to explore the reasons for non-response and to determine their predictive value. Methods: We studied a cohort of 120 consecutive CHF patients with sinus rhythm (mean age 57 ± 12 years, ejection fraction 29.3 ± 9.9%, peak VO2 17.3 ± 5.1 ml/min/kg) participating in a 3-month outpatient cardiac rehabilitation programme. Responders were defined as subjects who improved peak VO2 by more than 5%, work load by more than 10%, or VE/VCO2 slope by more than 5%. Subjects who did not fulfil at least one of these criteria were characterized as non-responders. Multivariate regression analyses were performed to identify parameters predictive of a response. Receiver operating characteristic (ROC) analyses were performed for predictive parameters to identify thresholds for response or non-response. Results: Multivariate regression analyses revealed heart rate (HR) reserve, HR recovery at 1 min, and peak HR as significant predictors of a positive training response. ROC curves revealed the optimal thresholds separating responders from non-responders at less than 30 bpm for HR reserve, less than 6 bpm for HR recovery and less than 101 bpm for peak HR. Conclusions: The presence of impaired chronotropic competence is a major predictor of poor training response in CHF patients with sinus rhythm.
Abstract:
BACKGROUND: Congestive heart failure (CHF) is a major public health problem. The use of B-type natriuretic peptide (BNP) tests shows promising diagnostic accuracy. Herein, we summarize the evidence on the accuracy of BNP tests in the diagnosis of CHF and compare the performance of rapid enzyme-linked immunosorbent assay (ELISA) and standard radioimmunosorbent assay (RIA) tests. METHODS: We searched electronic databases and the reference lists of included studies, and we contacted experts. Data were extracted on the study population, the type of test used, and methods. Receiver operating characteristic (ROC) plots and summary ROC curves were produced and negative likelihood ratios pooled. Random-effect meta-analysis and metaregression were used to combine data and explore sources of between-study heterogeneity. RESULTS: Nineteen studies describing 22 patient populations (9 ELISA and 13 RIA) and 9093 patients were included. The diagnosis of CHF was verified by echocardiography, radionuclide scan, or echocardiography combined with clinical criteria. The pooled negative likelihood ratio overall from random-effect meta-analysis was 0.18 (95% confidence interval [CI], 0.13-0.23). It was lower for the ELISA test (0.12; 95% CI, 0.09-0.16) than for the RIA test (0.23; 95% CI, 0.16-0.32). For a pretest probability of 20%, which is typical for patients with suspected CHF in primary care, a negative result of the ELISA test would produce a posttest probability of 2.9%; a negative RIA test, a posttest probability of 5.4%. CONCLUSIONS: The use of BNP tests to rule out CHF in primary care settings could reduce demand for echocardiography. The advantages of rapid ELISA tests need to be balanced against their higher cost.
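The posttest probabilities quoted above follow from Bayes' rule in odds form. This sketch reproduces the 2.9% and 5.4% figures from the stated 20% pretest probability and the pooled negative likelihood ratios:

```python
def posttest_probability(pretest_p, likelihood_ratio):
    """Convert a pretest probability to a posttest probability via odds:
    odds = p / (1 - p); posttest odds = pretest odds * LR."""
    odds = pretest_p / (1 - pretest_p) * likelihood_ratio
    return odds / (1 + odds)

# Figures from the abstract: 20% pretest probability of CHF in primary care
elisa = posttest_probability(0.20, 0.12)   # negative ELISA: about 2.9%
ria   = posttest_probability(0.20, 0.23)   # negative RIA:   about 5.4%
```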
Abstract:
OBJECTIVES: To validate the Probability of Repeated Admission (Pra) questionnaire, a widely used self-administered tool for predicting future healthcare use in older persons, in three European healthcare systems. DESIGN: Prospective study with 1-year follow-up. SETTING: Hamburg, Germany; London, United Kingdom; Canton of Solothurn, Switzerland. PARTICIPANTS: Nine thousand seven hundred thirteen independently living community-dwelling people aged 65 and older. MEASUREMENTS: Self-administered eight-item Pra questionnaire at baseline. Self-reported number of hospital admissions and physician visits during 1 year of follow-up. RESULTS: In the combined sample, areas under the receiver operating characteristic curves (AUCs) were 0.64 (95% confidence interval (CI)=0.62-0.66) for the prediction of one or more hospital admissions and 0.68 (95% CI=0.66-0.69) for the prediction of more than six physician visits during the following year. AUCs were similar between sites. In comparison, prediction models based on a person's age and sex alone exhibited poor predictive validity.
Abstract:
A marker that is strongly associated with outcome (or disease) is often assumed to be effective for classifying individuals according to their current or future outcome. However, for this to be true, the associated odds ratio must be of a magnitude rarely seen in epidemiological studies. An illustration of the relationship between odds ratios and receiver operating characteristic (ROC) curves shows, for example, that a marker with an odds ratio as high as 3 is in fact a very poor classification tool. If a marker identifies 10 percent of controls as positive (false positives) and has an odds ratio of 3, then it will only correctly identify 25 percent of cases as positive (true positives). Moreover, the authors illustrate that a single measure of association such as an odds ratio does not meaningfully describe a marker’s ability to classify subjects. Appropriate statistical methods for assessing and reporting the classification power of a marker are described. The serious pitfalls of using more traditional methods based on parameters in logistic regression models are illustrated.
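The 10%-false-positive / 25%-true-positive example above can be checked directly: an odds ratio links the odds of a positive marker among cases to the odds among controls, so the true-positive rate follows from the false-positive rate and the odds ratio:

```python
def tpr_from_odds_ratio(fpr, odds_ratio):
    """True-positive rate implied by a false-positive rate and an odds
    ratio: odds(TPR) = OR * odds(FPR)."""
    odds = odds_ratio * fpr / (1 - fpr)
    return odds / (1 + odds)

# The text's example: 10% of controls positive, odds ratio of 3
tpr = tpr_from_odds_ratio(0.10, 3)   # 0.25: only 25% of cases flagged
```

Sweeping the false-positive rate from 0 to 1 at a fixed odds ratio traces the entire ROC curve implied by that odds ratio, which is the illustration the authors use.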