962 results for Diagnosis of Health Situation
Abstract:
BACKGROUND: In clinical practice, the high-dose ACTH stimulation test (HDT) is frequently used in the assessment of adrenal insufficiency (AI). However, there is uncertainty regarding the optimal time points and number of blood samplings. The present study compared the utility of a single cortisol value taken either 30 or 60 minutes after ACTH stimulation with the traditional interpretation of the HDT. METHODS: Retrospective analysis of 73 HDTs performed at a single tertiary endocrine centre. Serum cortisol was measured at baseline and at 30 and 60 minutes after intravenous administration of 250 µg synthetic ACTH1-24. AI was defined as a stimulated cortisol level <550 nmol/l. RESULTS: Twenty patients (27.4%) showed an insufficient rise in serum cortisol using traditional HDT criteria and were diagnosed with AI. Ten individuals showed insufficient cortisol values after 30 minutes that rose to sufficient levels at 60 minutes. All patients with an insufficient cortisol response after 60 minutes also had an insufficient result after 30 minutes. The cortisol value taken after 30 minutes did not add incremental diagnostic value in any of the cases under investigation compared with the 60-minute sample. CONCLUSIONS: Based on the findings of the present analysis, the utility of a cortisol measurement 30 minutes after high-dose ACTH injection was low and did not add incremental diagnostic value to a single measurement after 60 minutes.
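The HDT interpretation described above reduces to a simple threshold rule on the stimulated cortisol values. A minimal sketch in Python using the 550 nmol/l cut-off from the abstract; the function name, input format and peak-based interpretation are illustrative assumptions, not the authors' analysis code.

```python
def classify_hdt(cortisol_30min_nmol_l, cortisol_60min_nmol_l, cutoff=550.0):
    """Classify a high-dose ACTH stimulation test (HDT).

    Adrenal insufficiency (AI) is taken here as a peak stimulated cortisol
    below the cutoff (550 nmol/l in the abstract).  Illustrative only,
    not the study's own analysis code.
    """
    peak = max(cortisol_30min_nmol_l, cortisol_60min_nmol_l)
    return "adrenal insufficiency" if peak < cutoff else "sufficient response"

# Example: insufficient at 30 min but sufficient at 60 min is still classified
# as sufficient, matching the abstract's point that the 30-minute sample adds
# no incremental diagnostic value over the 60-minute sample.
print(classify_hdt(480, 600))  # sufficient response
```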
Abstract:
BACKGROUND Little is known about follow-up care attendance of adolescent survivors of childhood cancer, or about which factors foster or hinder attendance. Attending follow-up care is especially important for adolescent survivors to allow a successful transition into adult care. We aimed to (i) describe the proportion of adolescent survivors attending follow-up care; (ii) describe adolescents' health beliefs; and (iii) identify the association of health beliefs, demographic, and medical factors with follow-up care attendance. PROCEDURE Of 696 contacted adolescent survivors diagnosed with cancer at ≤16 years of age, ≥5 years after diagnosis, and aged 16-21 years at the time of the study, 465 (66.8%) completed the Swiss Childhood Cancer Survivor Study questionnaire. We assessed follow-up care attendance and health beliefs, and extracted demographic and medical information from the Swiss Childhood Cancer Registry. Cross-sectional data were analyzed using descriptive statistics and logistic regression models. RESULTS Overall, 56% of survivors reported attending follow-up care. Most survivors (80%) rated their susceptibility to late effects as low and believed that follow-up care may detect and prevent late effects (92%). Few (13%) believed that follow-up care is not necessary. Two health beliefs were associated with follow-up care attendance (perceived benefits: odds ratio [OR]: 1.56; 95% confidence interval [CI]: 1.07-2.27; perceived barriers: OR: 0.70; 95% CI: 0.50-1.00). CONCLUSIONS We show that health beliefs are associated with actual follow-up care attendance of adolescent survivors of childhood cancer. A successful model of health promotion in adolescent survivors should, therefore, highlight the benefits of and address the barriers to follow-up care in order to keep adolescent survivors in follow-up care.
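The odds ratios and confidence intervals reported above are the standard output of a logistic regression of attendance on health-belief scores. A minimal sketch of that computation with statsmodels; the variable names and the toy data frame are assumptions for illustration, not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative data frame: one row per survivor (not the study data).
df = pd.DataFrame({
    "attends_followup":   [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1],
    "perceived_benefits": [4, 2, 5, 3, 2, 4, 5, 1, 4, 3, 3, 5],
    "perceived_barriers": [1, 4, 2, 3, 5, 2, 2, 5, 1, 4, 3, 1],
})

X = sm.add_constant(df[["perceived_benefits", "perceived_barriers"]])
fit = sm.Logit(df["attends_followup"], X).fit(disp=0)

# Odds ratios and 95% confidence intervals are the exponentiated coefficients.
odds_ratios = np.exp(fit.params).rename("OR")
conf_int = np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([odds_ratios, conf_int], axis=1))
```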
Abstract:
The present report describes a real-time PCR-based procedure to reliably determine the quantity of Leishmania amastigotes in relation to the amount of host tissue in histological skin sections from canine and equine cases of cutaneous leishmaniasis. The novel diagnostic Leishmania PCR has a detection limit of <0.02 amastigotes per μg tissue, which corresponds well to the detection limit of immunohistochemistry and is far lower than that of conventional histology. Our results emphasise the importance of PCR as a complement to routine histology in cutaneous leishmaniasis cases, particularly in laboratories in which no immunohistochemical assay is available.
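Quantification in a real-time PCR assay of this kind is typically done by interpolating sample Ct values on a standard curve and normalising to the amount of host tissue. A minimal sketch under that assumption; the slope, intercept and function name are placeholders, since the paper's exact calibration and normalisation strategy are not given here.

```python
def amastigotes_per_ug(ct_value, tissue_ug, slope=-3.32, intercept=38.0):
    """Estimate Leishmania amastigote load per microgram of host tissue.

    Assumes a log-linear standard curve Ct = slope * log10(quantity) + intercept
    built from serial dilutions of a known amastigote standard.  Slope and
    intercept are placeholder values, not the assay's actual calibration.
    """
    log10_quantity = (ct_value - intercept) / slope
    amastigotes = 10 ** log10_quantity
    return amastigotes / tissue_ug

# Example: a Ct of 30 measured in a section containing 50 ug of tissue.
print(f"{amastigotes_per_ug(30.0, 50.0):.3f} amastigotes per ug tissue")
```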
Abstract:
BACKGROUND Arthroscopy is considered the "gold standard" for the diagnosis of traumatic intra-articular knee lesions. However, recent developments in magnetic resonance imaging (MRI) now offer good opportunities for the indirect assessment of the integrity and structural changes of the knee articular cartilage. The aim of this study was to investigate whether cartilage-specific sequences on a 3-Tesla MRI provide accurate assessment for the detection of cartilage defects. METHODS A 3-Tesla (3-T) MRI combined with three-dimensional double-echo steady-state (3D-DESS) cartilage-specific sequences was performed on 210 patients with knee pain prior to knee arthroscopy. Sensitivity, specificity, and positive and negative predictive values of MRI were calculated using the arthroscopic findings of cartilaginous lesions as the reference standard. Lesions were classified using the modified Outerbridge classification. RESULTS For the 210 patients (1260 cartilage surfaces: patella, trochlea, medial femoral condyle, medial tibia, lateral femoral condyle, lateral tibia) evaluated, the sensitivities, specificities, positive predictive values, and negative predictive values of 3-T MRI were 83.3, 99.8, 84.4, and 99.8%, respectively, for the detection of grade IV lesions; 74.1, 99.6, 85.2, and 99.3%, respectively, for grade III lesions; 67.9, 99.2, 76.6, and 98.2%, respectively, for grade II lesions; and 8.8, 99.5, 80, and 92%, respectively, for grade I lesions. CONCLUSIONS For grade III and IV lesions, 3-T MRI combined with 3D-DESS cartilage-specific sequences represents an accurate diagnostic tool. For grade II lesions, the technique demonstrates moderate sensitivity, while for grade I lesions the sensitivity is too limited to provide a reliable diagnosis compared with knee arthroscopy.
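The per-grade sensitivity, specificity, PPV and NPV values reported above all derive from a 2×2 table of MRI readings against the arthroscopic reference. A minimal sketch of that calculation; the counts in the example are arbitrary, not the study's data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table
    (e.g., MRI reading vs. arthroscopy as the reference standard)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Arbitrary illustrative counts over 1260 cartilage surfaces, not the study's data.
print(diagnostic_metrics(tp=40, fp=8, fn=8, tn=1204))
```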
Abstract:
A common debate among dermatopathologists is whether prior knowledge of the clinical picture of melanocytic skin neoplasms may introduce a potential bias into the histopathologic examination. Histologic slides from 99 melanocytic skin neoplasms were circulated among 10 clinical dermatologists, all of them formally trained and board-certified dermatopathologists: 5 dermatopathologists had clinical images available only after a 'blind' examination (Group 1); the other 5 had clinical images available before microscopic examination (Group 2). Data from the two groups were compared regarding 'consensus' (a diagnosis in agreement by ≥4 dermatopathologists/group), chance-corrected interobserver agreement (Fleiss' kappa) and level of diagnostic confidence (LDC: an arbitrary 1-5 scale indicating 'increasing reliability' of any given diagnosis). Compared with Group 1 dermatopathologists, Group 2 achieved consensus in fewer cases (84 vs. 90) but had a higher kappa value (0.74 vs. 0.69) and a greater mean LDC value (4.57 vs. 4.32). The same consensus diagnosis was reached by the two groups in 81/99 cases. Spitzoid neoplasms were the most frequently controversial lesions for both groups. The histopathologic interpretation of melanocytic neoplasms does not appear to be biased by knowledge of the clinical picture before histopathologic examination.
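Chance-corrected interobserver agreement among the five dermatopathologists in each group was quantified with Fleiss' kappa. A minimal sketch of that computation using statsmodels; the toy ratings matrix is an assumption for illustration, not the study data.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Toy data: 6 cases (rows) rated by 5 dermatopathologists (columns);
# diagnostic categories are coded as integers.  Not the study data.
ratings = np.array([
    [0, 0, 0, 0, 1],
    [2, 2, 2, 2, 2],
    [1, 1, 0, 1, 1],
    [0, 0, 0, 0, 0],
    [2, 1, 2, 2, 2],
    [1, 1, 1, 0, 1],
])

# Convert the cases-by-raters matrix into a cases-by-categories count table.
table, _ = aggregate_raters(ratings)
print(f"Fleiss' kappa: {fleiss_kappa(table):.2f}")
```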
Abstract:
AIM To evaluate the diagnostic value (sensitivity, specificity) of positron emission mammography (PEM) in a single-site, non-interventional study using the maximum PEM uptake value (PUVmax). PATIENTS, METHODS In a single-site, non-interventional study, 108 patients (107 women, 1 man) with a total of 151 suspected lesions were scanned with a PEM Flex Solo II (Naviscan) at 90 min p.i. with 3.5 MBq 18F-FDG per kg of body weight. In this ROI (region of interest)-based analysis, the maximum PEM uptake value (PUVmax) was determined in tumours (PUVmax(tumour)), in benign lesions (PUVmax(normal breast)) and in healthy tissue on the contralateral side (PUVmax(contralateral breast)). These values were compared. In addition, the ratios PUVmax(tumour)/PUVmax(contralateral breast) and PUVmax(normal breast)/PUVmax(contralateral breast) were compared. The image data were interpreted independently by two experienced nuclear medicine physicians and compared with histology in cases of suspected carcinoma. RESULTS Based on a criterion of PUVmax > 1.9, 31 out of 151 lesions in the patient cohort were found to be malignant (21%). A mean PUVmax(tumour) of 3.78 ± 2.47 was identified in malignant tumours, while a mean PUVmax(normal breast) of 1.17 ± 0.37 was reported in the glandular tissue of the healthy breast, the difference being statistically significant (p < 0.001). Similarly, the mean ratio between tumour and healthy glandular tissue in breast cancer patients (3.15 ± 1.58) was significantly higher than the ratio for benign lesions (1.17 ± 0.41, p < 0.001). CONCLUSION PEM is capable of differentiating breast tumours from benign lesions with 100% sensitivity and a high specificity of 96% when a threshold of PUVmax > 1.9 is applied.
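The significance of the difference between PUVmax in malignant tumours (3.78 ± 2.47) and in healthy glandular tissue (1.17 ± 0.37) can be checked from the summary statistics alone, assuming a two-sample comparison such as Welch's t-test (the abstract does not state which test was used, and the group sizes below are assumptions).

```python
from scipy.stats import ttest_ind_from_stats

# Means and SDs taken from the abstract; group sizes are assumed for
# illustration (31 malignant lesions per the abstract; the healthy-tissue
# n is hypothetical).
t_stat, p_value = ttest_ind_from_stats(
    mean1=3.78, std1=2.47, nobs1=31,
    mean2=1.17, std2=0.37, nobs2=120,
    equal_var=False,  # Welch's t-test
)
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")
```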
Abstract:
CONTEXT The polyuria-polydipsia syndrome comprises primary polydipsia (PP) and central and nephrogenic diabetes insipidus (DI). Correctly discriminating these entities is mandatory, given that inadequate treatment causes serious complications. The diagnostic "gold standard" is the water deprivation test with assessment of arginine vasopressin (AVP) activity. However, test interpretation and AVP measurement are challenging. OBJECTIVE The objective was to evaluate the accuracy of copeptin, a stable peptide stoichiometrically cosecreted with AVP, in the differential diagnosis of the polyuria-polydipsia syndrome. DESIGN, SETTING, AND PATIENTS This was a prospective multicenter observational cohort study, conducted at four Swiss or German tertiary referral centers, of adults >18 years old with a history of polyuria and polydipsia. MEASUREMENTS A standardized combined water deprivation/3% saline infusion test was performed and terminated when serum sodium exceeded 147 mmol/L. Circulating copeptin and AVP levels were measured regularly throughout the test. Final diagnosis was based on the water deprivation/saline infusion test results, clinical information, and the treatment response. RESULTS Fifty-five patients were enrolled (11 with complete central DI, 16 with partial central DI, 18 with PP, and 10 with nephrogenic DI). Without prior thirsting, a single baseline copeptin level >21.4 pmol/L differentiated nephrogenic DI from other etiologies with 100% sensitivity and specificity, rendering water deprivation testing unnecessary in such cases. A stimulated copeptin >4.9 pmol/L (at sodium levels >147 mmol/L) differentiated between patients with PP and patients with partial central DI with 94.0% specificity and 94.4% sensitivity. A stimulated AVP >1.8 pg/mL differentiated between the same categories with 93.0% specificity and 83.0% sensitivity. LIMITATION This study was limited by incorporation bias from including AVP levels as a diagnostic criterion. CONCLUSION Copeptin is a promising new tool in the differential diagnosis of the polyuria-polydipsia syndrome and a valid surrogate marker for AVP. Primary Funding Sources: Swiss National Science Foundation, University of Basel.
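The copeptin cut-offs reported above amount to a simple two-step rule. A minimal sketch of that logic in Python, using only the thresholds given in the abstract; it illustrates the published cut-offs and is not a validated clinical decision tool, and the function and argument names are assumptions.

```python
def copeptin_workup(baseline_copeptin_pmol_l, stimulated_copeptin_pmol_l=None,
                    serum_sodium_mmol_l=None):
    """Two-step rule based on the cut-offs reported in the abstract.

    Illustrative only; not a validated clinical decision tool.
    """
    # Step 1: a baseline copeptin >21.4 pmol/L (without prior thirsting)
    # indicates nephrogenic DI, making water deprivation testing unnecessary.
    if baseline_copeptin_pmol_l > 21.4:
        return "nephrogenic diabetes insipidus"
    # Step 2: stimulated copeptin, interpretable once serum sodium >147 mmol/L.
    if stimulated_copeptin_pmol_l is None or serum_sodium_mmol_l is None:
        return "water deprivation/saline infusion test required"
    if serum_sodium_mmol_l <= 147:
        return "osmotic stimulus insufficient for interpretation"
    # Stimulated copeptin >4.9 pmol/L favours primary polydipsia over
    # partial central DI (94.0% specificity, 94.4% sensitivity per the abstract).
    return ("primary polydipsia" if stimulated_copeptin_pmol_l > 4.9
            else "partial central diabetes insipidus")

print(copeptin_workup(25.0))            # nephrogenic diabetes insipidus
print(copeptin_workup(3.0, 6.1, 149))   # primary polydipsia
```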
Abstract:
Primary ciliary dyskinesia (PCD) is a rare, heterogeneous, recessive genetic disorder of motile cilia, leading to chronic upper and lower respiratory symptoms. Prevalence is estimated at around 1:10,000, but many patients remain undiagnosed, while others receive the label incorrectly. Proper diagnosis is complicated by the fact that the key symptoms, such as wet cough, chronic rhinitis and recurrent upper and lower respiratory infection, are common and nonspecific. There is no single gold standard test to diagnose PCD. Presently, the diagnosis is made in patients with a compatible medical history and physical examination by a demanding combination of tests including nasal nitric oxide, high-speed video microscopy, transmission electron microscopy, genetics, and ciliary culture. These tests are costly and need sophisticated equipment and experienced staff, restricting their use to highly specialised centres. Therefore, it would be desirable to have a screening test for identifying those patients who should undergo detailed diagnostic testing. Three recent studies focused on potential screening tools: one paper assessed the validity of nasal nitric oxide for screening, and two studies developed new symptom-based screening tools. These simple tools are welcome, and will hopefully remind physicians whom to refer for definitive testing. However, they have been developed in tertiary care settings, where 10 to 50% of tested patients have PCD. Sensitivity and specificity of the tools are reasonable, but positive and negative predictive values may be poor in primary or secondary care settings. While these studies take an important step forward towards an earlier diagnosis of PCD, more remains to be done before we have tools tailored to different health care settings.
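The caveat about predictive values holds because PPV and NPV depend on pre-test prevalence as well as on sensitivity and specificity. A minimal sketch of that relationship via Bayes' theorem; the sensitivity and specificity values used are placeholders, not those of any of the cited screening tools.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """PPV and NPV of a test applied at a given pre-test prevalence."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    tn = specificity * (1 - prevalence)
    fn = (1 - sensitivity) * prevalence
    return tp / (tp + fp), tn / (tn + fn)

# Placeholder test characteristics; compare a tertiary-care setting
# (roughly 10-50% prevalence per the text) with a primary-care setting.
for prevalence in (0.30, 0.01):
    ppv, npv = predictive_values(sensitivity=0.90, specificity=0.85,
                                 prevalence=prevalence)
    print(f"prevalence {prevalence:.0%}: PPV {ppv:.2f}, NPV {npv:.3f}")
```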
Abstract:
BACKGROUND Survival after diagnosis is a fundamental concern in cancer epidemiology. In resource-rich settings, ambient clinical databases, municipal data and cancer registries make survival estimation in real-world populations relatively straightforward. In resource-poor settings, given the deficiencies in a variety of health-related data systems, it is less clear how well we can determine cancer survival from ambient data. METHODS We addressed this issue in sub-Saharan Africa for Kaposi's sarcoma (KS), a cancer for which incidence has exploded with the HIV epidemic but for which survival in the region may be changing with the recent advent of antiretroviral therapy (ART). From 33 primary care HIV clinics in Kenya, Uganda, Malawi, Nigeria and Cameroon participating in the International Epidemiologic Databases to Evaluate AIDS (IeDEA) Consortia in 2009-2012, we identified 1328 adults with newly diagnosed KS. Patients were evaluated from KS diagnosis until death, transfer to another facility or database closure. RESULTS Nominally, 22% of patients were estimated to be dead by 2 years, but this estimate was clouded by a 45% cumulative loss to follow-up, with unknown vital status, by 2 years. After adjustment for site and CD4 count, age <30 years and male sex were independently associated with becoming lost. CONCLUSIONS In this community-based sample of patients diagnosed with KS in sub-Saharan Africa, almost half were lost to follow-up by 2 years. This precluded accurate estimation of survival. Until we either generally strengthen data systems or implement cancer-specific enhancements (e.g., tracking of patients lost to follow-up) in the region, insights from cancer epidemiology will be limited.
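The 2-year mortality figure above is the kind of estimate a Kaplan-Meier analysis produces, with patients lost to follow-up treated as censored at their last visit. A minimal sketch with the lifelines library; the toy follow-up times are illustrative, not IeDEA data, and heavy censoring (as in the cohort described) makes such an estimate unreliable.

```python
from lifelines import KaplanMeierFitter

# Toy data: follow-up time in months and whether death was observed (1)
# or the patient was censored, e.g. lost to follow-up (0).  Not IeDEA data.
durations      = [3, 24, 7, 24, 12, 5, 24, 2, 18, 24, 9, 24]
death_observed = [1, 0,  1, 0,  0,  0, 0,  1, 1,  0,  0, 0]

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=death_observed)

# Estimated probability of death by 2 years (24 months).
print(f"Estimated 2-year mortality: {1 - kmf.predict(24):.0%}")
```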
Abstract:
Objective. To determine the accuracy of the urine protein:creatinine ratio (pr:cr) in predicting 300 mg of protein in a 24-hour urine collection in pregnant patients with suspected preeclampsia. Methods. A systematic review was performed. Articles were identified through electronic databases, and relevant citations were found by hand searching of textbooks and review articles. Included studies evaluated patients for suspected preeclampsia with both a 24-hour urine sample and a pr:cr. Only English-language articles were included. Studies of patients with chronic illness such as chronic hypertension, diabetes mellitus or renal impairment were excluded from the review. Two researchers extracted accuracy data for the pr:cr relative to a gold standard of 300 mg of protein in a 24-hour sample, as well as population and study characteristics. The data were analyzed and summarized in tabular and graphical form. Results. Sixteen studies were identified, and only three studies, with 510 total patients, met our inclusion criteria. The studies evaluated different cut-points for positivity of the pr:cr, from 130 mg/g to 700 mg/g. Sensitivities and specificities for a pr:cr of 130-150 mg/g were 90-93% and 33-65%, respectively; for a pr:cr of 300 mg/g, 81-95% and 52-80%, respectively; and for a pr:cr of 600-700 mg/g, 85-87% and 96-97%, respectively. Conclusion. The value of a random pr:cr to exclude preeclampsia is limited because even low levels of pr:cr (130-150 mg/g) may miss up to 10% of patients with significant proteinuria. A pr:cr of more than 600 mg/g may obviate a 24-hour collection.
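The spot pr:cr itself is a simple ratio of urine protein to urine creatinine, usually expressed as mg of protein per g of creatinine, which is then compared against cut-points like those above. A minimal sketch under the assumption that both analytes are reported in mg/dL; the function name and the cut-point interpretation are for illustration only, not clinical guidance.

```python
def protein_creatinine_ratio(urine_protein_mg_dl, urine_creatinine_mg_dl):
    """Spot urine protein:creatinine ratio in mg protein per g creatinine,
    assuming both analytes are reported in mg/dL."""
    return urine_protein_mg_dl / urine_creatinine_mg_dl * 1000.0

ratio = protein_creatinine_ratio(urine_protein_mg_dl=45, urine_creatinine_mg_dl=110)

# Illustrative interpretation against the review's cut-points (mg/g); not clinical advice.
if ratio < 130:
    print(f"{ratio:.0f} mg/g: below the most sensitive cut-point studied")
elif ratio > 600:
    print(f"{ratio:.0f} mg/g: above the high-specificity cut-point; 24-hour collection may be unnecessary")
else:
    print(f"{ratio:.0f} mg/g: indeterminate; confirm with a 24-hour urine collection")
```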
Abstract:
Sexually transmitted infections (STIs) are a major public health problem, and controlling their spread is a priority. According to the World Health Organization (WHO), 340 million new cases of treatable STIs occur yearly around the world among 15–49 year olds (1). Infection with STIs can lead to several complications such as pelvic inflammatory disease (PID), cervical cancer, infertility, ectopic pregnancy, and even death (1). Additionally, STIs and associated complications are among the top disease types for which healthcare is sought in developing nations (1), and according to the UNAIDS report, there is a strong connection between STIs and the sexual spread of HIV infection (2). In fact, it is estimated that the presence of an untreated STI can increase the likelihood of contracting and spreading HIV by a factor of up to 10 (2). In addition, developing countries are poorer in resources and lack inexpensive and precise diagnostic laboratory tests for STIs, thereby exacerbating the problem. Thus, the WHO recommends syndromic management of STIs for delivering care where laboratory testing is scarce or unavailable (1). This approach uses a simple algorithm to help healthcare workers recognize symptoms and signs so as to provide treatment for the likely cause of the syndrome. Furthermore, according to the WHO, syndromic management offers immediate and valid treatment compared with clinical diagnosis, and it is also more cost-effective for some syndromes than laboratory testing (1). Even though the vaginal discharge syndrome has been shown to have low specificity for gonorrhea and chlamydia and can lead to overtreatment (1), this remains the recommended way to manage STIs in developing nations. Thus, the purpose of this paper is to address the following questions: is syndromic management working to lower the STI burden in developing nations? How effective is it, and should it still be recommended? To answer these questions, a systematic literature review was conducted to evaluate the current effectiveness of syndromic management in developing nations. This review examined articles published over the past 5 years that compared syndromic management to laboratory testing and reported sensitivity, specificity, and positive predictive value data. Focusing mainly on the vaginal discharge, urethral discharge, and genital ulcer algorithms, it was seen that although syndromic management is more effective in diagnosing and treating urethral discharge and genital ulcer syndromes in men, there remains an urgent need to revise the WHO recommendations for managing STIs in developing nations. Current studies have continued to show decreased specificity, sensitivity and positive predictive values for the vaginal discharge syndrome, and high rates of asymptomatic infections, together with healthcare workers neglecting to follow guidelines, limit the usefulness of syndromic management. Furthermore, though advocated as cost-effective by the WHO, the approach incurs costs from treating uninfected people. Rather than improving this system, it is recommended that the development of better, less expensive point-of-care and rapid diagnostic test kits become the focus of STI diagnosis and treatment in developing nations.