43 results for Online and off-line diagnosis and monitoring methods
Abstract:
OBJECTIVES: To assess health care utilisation for patients co-infected with TB and HIV (TB-HIV), and to develop a weighted health care index (HCI) score based on commonly used interventions and compare it with patient outcome. METHODS: A total of 1061 HIV patients diagnosed with TB in four regions, Central/Northern, Southern and Eastern Europe and Argentina, between January 2004 and December 2006 were enrolled in the TB-HIV study. A weighted HCI score (range 0–5) was developed from independent prognostic factors identified in multivariable Cox models; the final score included performance of TB drug susceptibility testing (DST), an initial TB regimen containing a rifamycin, isoniazid and pyrazinamide, and start of combination antiretroviral treatment (cART). RESULTS: The mean HCI score was highest in Central/Northern Europe (3.2, 95%CI 3.1–3.3) and lowest in Eastern Europe (1.6, 95%CI 1.5–1.7). The cumulative probability of death 1 year after TB diagnosis decreased from 39% (95%CI 31–48) among patients with an HCI score of 0, to 9% (95%CI 6–13) among those with a score of ≥4. In an adjusted Cox model, a 1-unit increase in the HCI score was associated with 27% reduced mortality (relative hazard 0.73, 95%CI 0.64–0.84). CONCLUSIONS: Our results suggest that DST, standard anti-tuberculosis treatment and early cART may improve outcome for TB-HIV patients. The proposed HCI score provides a tool for future research and monitoring of the management of TB-HIV patients. The highest HCI score may serve as a benchmark to assess TB-HIV management, encouraging continuous health care improvement.
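The additive index described above can be sketched as a simple weighted sum over the three interventions. The component weights below are illustrative assumptions chosen only to match the reported 0–5 range; the study derived its weights from multivariable Cox models, which are not reproduced here.

```python
# Minimal sketch of a weighted health care index (HCI) with
# HYPOTHETICAL weights (the actual study weights are not given here).
WEIGHTS = {
    "dst_performed": 1,     # TB drug susceptibility testing
    "standard_regimen": 2,  # rifamycin + isoniazid + pyrazinamide
    "cart_started": 2,      # combination antiretroviral treatment
}

def hci_score(interventions):
    """Sum the weights of the interventions a patient actually received."""
    return sum(WEIGHTS[name] for name, done in interventions.items() if done)

# A patient who received DST and a standard regimen, but no cART:
patient = {"dst_performed": True, "standard_regimen": True, "cart_started": False}
print(hci_score(patient))  # → 3
```

The point of such a score is that each missing intervention lowers it, so a low score flags incomplete management rather than any single omission.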
Abstract:
The aim of this study was to evaluate the reliability of the cardiothoracic ratio (CTR) in postmortem computed tomography (PMCT) and to assess a CTR threshold for the diagnosis of cardiomegaly based on the weight of the heart at autopsy. PMCT data of 170 deceased human adults were retrospectively evaluated by two blinded radiologists. The CTR was measured on axial computed tomography images and the actual cardiac weight was measured at autopsy. Inter-rater reliability, sensitivity, and specificity were calculated. Receiver operating characteristic curves were calculated to assess enlarged heart weight by CTR. The autopsy definition of cardiomegaly was based on normal values of the Zeek method (within one or two SD) and the Smith method (within the given range). Intra-class correlation coefficients demonstrated excellent agreement (0.983) regarding CTR measurements. In 105/170 (62 %) cases the CTR in PMCT was >0.5, indicating enlarged heart weight, according to clinical references. The mean heart weight measured in autopsy was 405 ± 105 g. As a result, 114/170 (67 %) cases were interpreted as having enlarged heart weights according to the normal values of Zeek within one SD, while 97/170 (57 %) were within two SD. 100/170 (59 %) were assessed as enlarged according to Smith's normal values. The sensitivity/specificity of the 0.5 cut-off of the CTR for the diagnosis of enlarged heart weight was 78/71 % (Zeek one SD), 74/55 % (Zeek two SD), and 76/59 % (Smith), respectively. The discriminative power between normal heart weight and cardiomegaly was 79, 73, and 74 % for the Zeek (1SD/2SD) and Smith methods, respectively. Changing the CTR threshold to 0.57 resulted in a minimum specificity of 95 % for all three definitions of cardiomegaly. With a CTR threshold of 0.57, cardiomegaly can be identified with a very high specificity. This may be useful if PMCT is used by forensic pathologists as a screening tool for medico-legal autopsies.
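The threshold trade-off reported above (0.5 vs. 0.57) can be sketched by computing sensitivity and specificity against the autopsy reference standard. The CTR values and cardiomegaly labels below are made-up illustrative data, not measurements from the study.

```python
# Sketch: sensitivity/specificity of a CTR cut-off against an autopsy
# reference. Data below are HYPOTHETICAL, for illustration only.

def sens_spec(ctr_values, has_cardiomegaly, threshold):
    """Classify CTR > threshold as test-positive; compare with autopsy truth."""
    pairs = list(zip(ctr_values, has_cardiomegaly))
    tp = sum(1 for c, d in pairs if c > threshold and d)
    fn = sum(1 for c, d in pairs if c <= threshold and d)
    tn = sum(1 for c, d in pairs if c <= threshold and not d)
    fp = sum(1 for c, d in pairs if c > threshold and not d)
    return tp / (tp + fn), tn / (tn + fp)

ctr = [0.45, 0.48, 0.52, 0.55, 0.58, 0.61, 0.49, 0.63]
cardiomegaly = [False, False, False, True, True, True, False, True]

# Raising the threshold trades sensitivity for specificity,
# as with the 0.5 -> 0.57 change described above.
print(sens_spec(ctr, cardiomegaly, 0.5))   # → (1.0, 0.75)
print(sens_spec(ctr, cardiomegaly, 0.57))  # → (0.75, 1.0)
```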
Abstract:
BACKGROUND AND AIMS The structured IBD Ahead 'Optimised Monitoring' programme was designed to obtain the opinion, insight and advice of gastroenterologists on optimising the monitoring of Crohn's disease activity in four settings: (1) assessment at diagnosis, (2) monitoring in symptomatic patients, (3) monitoring in asymptomatic patients, and (4) postoperative follow-up. For each of these settings, four monitoring methods were discussed: (a) symptom assessment, (b) endoscopy, (c) laboratory markers, and (d) imaging. Based on a literature search and expert opinion compiled during an international consensus meeting, recommendations were given to answer the question 'which diagnostic method, when, and how often'. The International IBD Ahead Expert Panel advised tailoring this guidance to the healthcare system and the specific prerequisites of each country. The IBD Ahead Swiss National Steering Committee proposes best-practice recommendations adapted for Switzerland. METHODS The IBD Ahead Steering Committee identified key questions and provided the Swiss Expert Panel with a structured literature review. The expert panel agreed on a set of statements. During an international expert meeting the consolidated outcome of the national meetings was merged into final statements agreed by the participating International and National Steering Committee members - the IBD Ahead 'Optimized Monitoring' Consensus. RESULTS A systematic assessment of symptoms, endoscopy findings, and laboratory markers, with special emphasis on faecal calprotectin, is deemed necessary even in symptom-free patients. The choice of recommended imaging methods is adapted to the specific situation in Switzerland and highlights the importance of ultrasonography and magnetic resonance imaging besides endoscopy.
CONCLUSION The recommendations stress the importance of monitoring disease activity on a regular basis and by objective parameters, such as faecal calprotectin and endoscopy with detailed documentation of findings. Physicians should not rely on symptoms only and adapt the monitoring schedule and choice of options to individual situations.
Abstract:
OBJECTIVES To compare longitudinal patterns of health care utilization and quality of care for other health conditions between breast cancer-surviving older women and a matched cohort without breast cancer. DESIGN Prospective five-year longitudinal comparison of cases and matched controls. SUBJECTS Newly identified breast cancer patients recruited during 1997–1999 from four geographic regions (Los Angeles, CA; Minnesota; North Carolina; and Rhode Island; N = 422) were matched by age, race, baseline comorbidity and zip code location with up to four non-breast-cancer controls (N = 1,656). OUTCOMES Survival; numbers of hospitalized days and physician visits; total inpatient and outpatient Medicare payments; guideline monitoring for patients with cardiovascular disease and diabetes, and bone density testing and colorectal cancer screening. RESULTS Five-year survival was similar for cases and controls (80% and 82%, respectively; p = 0.18). In the first follow-up year, comorbidity burden and health care utilization were higher for cases (p < 0.01), with most differences diminishing over time. However, the number of physician visits was higher for cases (p < 0.01) in every year, driven partly by more cancer and surgical specialist visits. Cases and controls adhered similarly to recommended bone density testing, and monitoring of cardiovascular disease and diabetes; adherence to recommended colorectal cancer screening was better among cases. CONCLUSION Breast cancer survivors’ health care utilization and disease burden return to pre-diagnosis levels after one year, yet their greater use of outpatient care persists at least five years. Quality of care for other chronic health problems is similar for cases and controls.
Abstract:
Background Vasopressin is one of the most important physiological stress and shock hormones. Copeptin, a stable vasopressin precursor, is a promising sepsis marker in adults. In contrast, its involvement in neonatal diseases remains unknown. The aim of this study was to establish copeptin concentrations in neonates in different stress states such as sepsis, chorioamnionitis and asphyxia. Methods Copeptin cord blood concentration was determined using the BRAHMS kryptor assay. Neonates with early-onset sepsis (EOS, n = 30), chorioamnionitis (n = 33) and asphyxia (n = 25) were compared to a control group of preterm and term (n = 155) neonates. Results Median copeptin concentration in cord blood was 36 pmol/l, ranging from undetectable to 5498 pmol/l (IQR 7–419). Copeptin cord blood concentrations were non-normally distributed and increased with gestational age (p < 0.0001). Neonates born after vaginal compared to cesarean delivery had elevated copeptin levels (p < 0.0001). Copeptin correlated strongly with umbilical artery pH (Spearman's Rho -0.50, p < 0.0001), umbilical artery base excess (Rho -0.67, p < 0.0001) and with lactate at NICU admission (Rho 0.54, p < 0.0001). No difference was found when comparing copeptin cord blood concentrations between neonates with EOS and controls (multivariate p = 0.30). The highest copeptin concentrations were found in neonates with asphyxia (median 993 pmol/l). Receiver-operating-characteristic curve analysis showed that copeptin cord blood concentrations were strongly associated with asphyxia: the area under the curve was 0.91 (95%-CI 0.87-0.96, p < 0.0001). A cut-off of 400 pmol/l had a sensitivity of 92% and a specificity of 82% for asphyxia as defined in this study. Conclusions Copeptin concentrations were strongly related to factors associated with perinatal stress such as birth acidosis, asphyxia and vaginal delivery. In contrast, copeptin appears to be unsuitable for the diagnosis of EOS.
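The area under the ROC curve reported above has a useful rank interpretation: it equals the probability that a randomly chosen case has a higher marker value than a randomly chosen control (the Mann-Whitney statistic). A sketch of that computation follows; the copeptin values are invented for illustration, not data from the study.

```python
# Sketch: AUC as the Mann-Whitney probability that a case outranks a
# control. All marker values below are HYPOTHETICAL.

def auc(case_values, control_values):
    """Fraction of case/control pairs where the case value is higher
    (ties count as half a win)."""
    wins = 0.0
    for x in case_values:
        for y in control_values:
            if x > y:
                wins += 1.0
            elif x == y:
                wins += 0.5
    return wins / (len(case_values) * len(control_values))

asphyxia = [850, 993, 1200, 400, 2100]  # pmol/l, hypothetical
controls = [12, 36, 90, 419, 55, 7]     # pmol/l, hypothetical

print(auc(asphyxia, controls))  # → 0.9666... (29 of 30 pairs)
```

An AUC near 1 means the marker separates the groups almost perfectly, which is why a single cut-off (400 pmol/l above) can achieve both high sensitivity and high specificity.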
Abstract:
The clinical manifestations of anti-cancer drug associated cardiac side effects are diverse and can range from acutely induced cardiac arrhythmias to QT interval prolongation, changes in coronary vasomotion with consecutive myocardial ischemia, myocarditis, pericarditis, severe contractile dysfunction, and potentially fatal heart failure. The pathophysiology of these adverse effects is similarly heterogeneous, and the identification of potential mechanisms is frequently difficult since the majority of cancer patients are not only treated with a multitude of cancer drugs but may also be exposed to potentially cardiotoxic radiation therapy. Some of the targets inhibited by new anti-cancer drugs also appear to be important for the maintenance of cellular homeostasis in normal tissue, in particular during exposure to cytotoxic chemotherapy. Even if acute chemotherapy-induced myocardial damage is only moderate, the process of myocardial remodeling can lead to progressive myocardial dysfunction over years and eventually induce heart failure. The tools for diagnosing anti-cancer drug associated cardiotoxicity and monitoring patients during chemotherapy include invasive and noninvasive techniques as well as laboratory investigations; most are validated only for anthracycline-induced cardiotoxicity and, more recently, for trastuzumab-associated cardiac dysfunction.
Abstract:
Research in autophagy continues to accelerate,(1) and as a result many new scientists are entering the field. Accordingly, it is important to establish a standard set of criteria for monitoring macroautophagy in different organisms. Recent reviews have described the range of assays that have been used for this purpose.(2,3) There are many useful and convenient methods that can be used to monitor macroautophagy in yeast, but relatively few in other model systems, and there is much confusion regarding acceptable methods to measure macroautophagy in higher eukaryotes. A key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers of autophagosomes versus those that measure flux through the autophagy pathway; thus, a block in macroautophagy that results in autophagosome accumulation needs to be differentiated from fully functional autophagy that includes delivery to, and degradation within, lysosomes (in most higher eukaryotes) or the vacuole (in plants and fungi). Here, we present a set of guidelines for the selection and interpretation of the methods that can be used by investigators who are attempting to examine macroautophagy and related processes, as well as by reviewers who need to provide realistic and reasonable critiques of papers that investigate these processes. This set of guidelines is not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to verify an autophagic response.
Abstract:
BACKGROUND: The provision of highly active antiretroviral therapy (HAART) in resource-limited settings follows a public health approach, which is characterised by a limited number of regimens and the standardisation of clinical and laboratory monitoring. In industrialized countries doctors prescribe from the full range of available antiretroviral drugs, supported by resistance testing and frequent laboratory monitoring. We compared virologic response, changes to first-line regimens, and mortality in HIV-infected patients starting HAART in South Africa and Switzerland. METHODS AND FINDINGS: We analysed data from the Swiss HIV Cohort Study and two HAART programmes in townships of Cape Town, South Africa. We included treatment-naïve patients aged 16 y or older who had started treatment with at least three drugs since 2001, and excluded intravenous drug users. Data from a total of 2,348 patients from South Africa and 1,016 patients from the Swiss HIV Cohort Study were analysed. Median baseline CD4+ T cell counts were 80 cells/µl in South Africa and 204 cells/µl in Switzerland. In South Africa, patients started with one of four first-line regimens, which was subsequently changed in 514 patients (22%). In Switzerland, 36 first-line regimens were used initially, and these were changed in 539 patients (53%). In most patients HIV-1 RNA was suppressed to 500 copies/ml or less within one year: 96% (95% confidence interval [CI] 95%-97%) in South Africa and 96% (94%-97%) in Switzerland, and 26% (22%-29%) and 27% (24%-31%), respectively, developed viral rebound within two years. Mortality was higher in South Africa than in Switzerland during the first months of HAART: adjusted hazard ratios were 5.90 (95% CI 1.81-19.2) during months 1-3 and 1.77 (0.90-3.50) during months 4-24. CONCLUSIONS: Compared to the highly individualised approach in Switzerland, programmatic HAART in South Africa resulted in similar virologic outcomes, with relatively few changes to initial regimens.
Further innovation and resources are required in South Africa to both achieve more timely access to HAART and improve the prognosis of patients who start HAART with advanced disease.
Abstract:
OBJECTIVES It is still debated if pre-existing minority drug-resistant HIV-1 variants (MVs) affect the virological outcomes of first-line NNRTI-containing ART. METHODS This Europe-wide case-control study included ART-naive subjects infected with drug-susceptible HIV-1 as revealed by population sequencing, who achieved virological suppression on first-line ART including one NNRTI. Cases experienced virological failure and controls were subjects from the same cohort whose viraemia remained suppressed at a matched time since initiation of ART. Blinded, centralized 454 pyrosequencing with parallel bioinformatic analysis in two laboratories was used to identify MVs in the 1%-25% frequency range. ORs of virological failure according to MV detection were estimated by logistic regression. RESULTS Two hundred and sixty samples (76 cases and 184 controls), mostly subtype B (73.5%), were used for the analysis. Identical MVs were detected in the two laboratories. 31.6% of cases and 16.8% of controls harboured pre-existing MVs. Detection of at least one MV versus no MVs was associated with an increased risk of virological failure (OR = 2.75, 95% CI = 1.35-5.60, P = 0.005); similar associations were observed for at least one MV versus no NRTI MVs (OR = 2.27, 95% CI = 0.76-6.77, P = 0.140) and at least one MV versus no NNRTI MVs (OR = 2.41, 95% CI = 1.12-5.18, P = 0.024). A dose-effect relationship between virological failure and mutational load was found. CONCLUSIONS Pre-existing MVs more than double the risk of virological failure to first-line NNRTI-based ART.
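The association reported above can be illustrated with a crude odds ratio from a 2x2 table. The counts below are approximations implied by the reported percentages (31.6% of 76 cases, 16.8% of 184 controls); the crude estimate does not reproduce the study's OR of 2.75, which came from logistic regression on matched data.

```python
# Sketch: crude odds ratio for virological failure by minority-variant
# (MV) detection. Counts are APPROXIMATE, back-calculated from the
# reported percentages for illustration only.

def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Cross-product ratio of a 2x2 table."""
    return (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)

# ~31.6% of 76 cases with MVs -> 24; ~16.8% of 184 controls -> 31.
crude_or = odds_ratio(24, 52, 31, 153)
print(round(crude_or, 2))  # → 2.28 (crude; the adjusted OR was 2.75)
```

The gap between the crude and adjusted estimates is expected: the regression model accounts for the matched design and covariates, which a raw cross-product ratio ignores.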
Abstract:
BACKGROUND The levels of chlamydia control activities, including primary prevention, effective case management with partner management, and surveillance, were assessed in 2012 across countries in the European Union and European Economic Area (EU/EEA) in a survey initiated by the European Centre for Disease Prevention and Control (ECDC), and the findings were compared with those from a similar survey in 2007. METHODS Experts in the 30 EU/EEA countries were invited to respond to an online questionnaire; 28 countries responded, of which 25 participated in both the 2007 and 2012 surveys. Analyses focused on 13 indicators of chlamydia prevention and control activities; countries were assigned to one of five categories of chlamydia control. RESULTS In 2012, more countries than in 2007 reported availability of national chlamydia case management guidelines (80% vs. 68%), opportunistic chlamydia testing (68% vs. 44%) and consistent use of nucleic acid amplification tests (64% vs. 36%). The number of countries reporting having a national sexually transmitted infection control strategy or a surveillance system for chlamydia did not change notably. In 2012, most countries (18/25, 72%) had implemented primary prevention activities and case management guidelines addressing partner management, compared with 44% (11/25) of countries in 2007. CONCLUSION Overall, chlamydia control activities in EU/EEA countries strengthened between 2007 and 2012. Several countries still need to develop essential chlamydia control activities, whereas others may strengthen implementation and monitoring of existing activities.