905 results for Observational

Relevance: 20.00%

Publisher:

Abstract:

Noncommunicable diseases (NCDs) account for a growing burden of morbidity and mortality among people living with HIV in low- and middle-income countries (LMICs). HIV infection and antiretroviral therapy interact with NCD risk factors in complex ways, and research into this "web of causation" has so far been largely based on data from high-income countries. However, improving the understanding, treatment, and prevention of NCDs in LMICs requires region-specific evidence. Priority research areas include: (1) defining the burden of NCDs among people living with HIV, (2) understanding the impact of modifiable risk factors, (3) evaluating effective and efficient care strategies at individual and health systems levels, and (4) evaluating cost-effective prevention strategies. Meeting these needs will require observational data, both to inform the design of randomized trials and to replace trials that would be unethical or infeasible. Focusing on Sub-Saharan Africa, we discuss data resources currently available to inform this effort and consider key limitations and methodological challenges. Existing data resources often lack population-based samples; HIV-negative, HIV-positive, and antiretroviral therapy-naive comparison groups; and measurements of key NCD risk factors and outcomes. Other challenges include loss to follow-up, competing risk of death, incomplete outcome ascertainment and measurement of factors affecting clinical decision making, and the need to control for (time-dependent) confounding. We review these challenges and discuss strategies for overcoming them through augmented data collection and appropriate analysis. We conclude with recommendations to improve the quality of data and analyses available to inform the response to HIV and NCD comorbidity in LMICs.

Abstract:

BACKGROUND International travel contributes to the worldwide spread of multidrug-resistant Gram-negative bacteria. Rates of travel-related faecal colonization with extended-spectrum β-lactamase (ESBL)-producing Enterobacteriaceae vary by destination. Travellers returning from the Indian subcontinent in particular show high colonization rates. So far, nothing is known about region-specific risk factors for becoming colonized. METHODS An observational prospective multicentre cohort study investigated travellers to South Asia. Before and after travelling, rectal swabs were screened for third-generation cephalosporin- and carbapenem-resistant Enterobacteriaceae. Participants completed questionnaires to identify risk factors for becoming colonized. Covariates were assessed univariately, followed by a multivariate regression. RESULTS One hundred and seventy persons were enrolled, the largest data set on travellers to the Indian subcontinent so far. The overall acquired colonization rate with ESBL-producing Escherichia coli was 69.4% (95% CI 62.1-75.9%), being highest in travellers returning from India (86.8%; 95% CI 78.5-95.0%) and lowest in travellers returning from Sri Lanka (34.7%; 95% CI 22.9-48.7%). Associated risk factors were travel destination, length of stay, visiting friends and relatives, and eating ice cream and pastry. CONCLUSIONS High colonization rates with ESBL-producing Enterobacteriaceae were found in travellers returning from South Asia. Although individual risk factors were identified, a more common source, i.e. an environmental one, appears to explain the high colonization rates better.
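The reported overall rate and its confidence interval can be checked with a Wilson score interval for a binomial proportion; a minimal sketch, assuming the 69.4% corresponds to 118 of 170 travellers (counts back-calculated, not stated in the abstract):

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# 118/170 colonized overall (counts back-calculated from the abstract)
lo, hi = wilson_ci(118, 170)
print(f"{118/170:.1%} (95% CI {lo:.1%}-{hi:.1%})")  # → 69.4% (95% CI 62.1%-75.8%)
```

This lands very close to the published 62.1-75.9%; the small difference in the upper bound suggests the authors used a slightly different interval method.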

Abstract:

OBJECTIVE This retrospective observational pilot study examined differences in peri-implant bone level changes (ΔIBL) between two similar implant types differing only in the surface texture of the neck. The hypothesis tested was that ΔIBL would be greater with machined-neck implants than with grooved-neck implants. METHOD AND MATERIALS Forty patients were enrolled: n = 20 implants with a machined neck (group 1) and n = 20 implants with a rough, grooved neck (group 2), all placed in the posterior mandible. Radiographs were obtained after loading (at 3 to 9 months) and at 12 to 18 months after implant insertion. A case number (sample size) calculation with respect to ΔIBL was conducted. Groups were compared using a Brunner-Langer model, the Mann-Whitney test, the Wilcoxon signed rank test, and linear model analysis. RESULTS After the 12- to 18-month observation period, mean ΔIBL was -1.11 ± 0.92 mm in group 1 and -1.25 ± 1.23 mm in group 2. ΔIBL depended significantly on time (P < .001), but not on group. In both groups, mean marginal ΔIBL was significantly less than -1.5 mm. Only insertion depth had a significant influence on the amount of peri-implant bone loss (P = .013). The case number estimate for testing a difference between groups 1 and 2 with a power of 90% revealed a required sample size of 1,032 subjects per group. CONCLUSION ΔIBL values indicated that both implant designs fulfilled implant success criteria, and the modification of implant neck texture had no significant influence on ΔIBL.
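For intuition about why such a small group difference demands a huge sample, a textbook normal-approximation two-sample sample-size formula applied to the reported means and SDs gives a figure of the same order as the study's 1,032 per group; the exact number differs because the study derived its estimate from a different (nonparametric) model:

```python
import math

def n_per_group(delta, sd1, sd2):
    """Per-group n for a two-sample comparison, normal approximation,
    two-sided alpha = 0.05 and power = 0.90."""
    z_a, z_b = 1.95996, 1.28155        # z for alpha/2 = 0.025 and beta = 0.10
    sigma2 = (sd1**2 + sd2**2) / 2     # pooled variance
    return math.ceil(2 * (z_a + z_b)**2 * sigma2 / delta**2)

# Reported group means -1.11 vs -1.25 mm (delta = 0.14 mm), SDs 0.92 and 1.23
print(n_per_group(delta=0.14, sd1=0.92, sd2=1.23))  # → 1265
```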

Abstract:

BACKGROUND Acute cough is a common problem in general practice and is often caused by a self-limiting viral infection. Nonetheless, antibiotics are often prescribed in this situation, which may lead to unnecessary side effects and, even worse, the worldwide development of antibiotic-resistant microorganisms. This study assessed the role of point-of-care C-reactive protein (CRP) testing and other predictors of antibiotic prescription in patients who present with acute cough in general practice. METHODS Patient characteristics, symptoms, signs, and laboratory and X-ray findings from 348 patients presenting to 39 general practitioners (GPs) with acute cough, as well as characteristics of the GPs themselves, were recorded by fourth-year medical students during their three-week clerkships in general practice. Characteristics of patients and clinicians who were and were not prescribed antibiotics were compared using a mixed-effects model. RESULTS Of the 315 patients included in the study, 22% were prescribed antibiotics. The two groups of patients, those prescribed antibiotics and those treated symptomatically, differed significantly in age, demand for antibiotics, days of cough, rhinitis, lung auscultation, haemoglobin level, white blood cell count, CRP level, and the GP's license to self-dispense antibiotics. After regression analysis, only the CRP level, the white blood cell count and the duration of symptoms remained statistically significant predictors of antibiotic prescription. CONCLUSIONS The antibiotic prescription rate of 22% in adult patients with acute cough in the Swiss primary care setting is low compared with other countries. GPs appear to use point-of-care CRP testing, in addition to the duration of clinical symptoms, to help them decide whether or not to prescribe antibiotics.

Abstract:

OBJECTIVE To evaluate the initiation of and response to tumor necrosis factor (TNF) inhibitors for axial spondyloarthritis (axSpA) in private rheumatology practices versus academic centers. METHODS We compared newly initiated TNF inhibition for axSpA in 363 patients enrolled in private practices with 100 patients recruited in 6 university hospitals within the Swiss Clinical Quality Management (SCQM) cohort. RESULTS All patients had been treated with ≥ 1 nonsteroidal antiinflammatory drug and > 70% of patients had a baseline Bath Ankylosing Spondylitis Disease Activity Index (BASDAI) ≥ 4 before anti-TNF agent initiation. The proportion of patients with nonradiographic axSpA (nr-axSpA) treated with TNF inhibitors was higher in hospitals versus private practices (30.4% vs 18.7%, p = 0.02). The burden of disease as assessed by patient-reported outcomes at baseline was slightly higher in the hospital setting. Mean levels (± SD) of the Ankylosing Spondylitis Disease Activity Score were, however, virtually identical in private practices and academic centers (3.4 ± 1.0 vs 3.4 ± 0.9, p = 0.68). An Assessment of SpondyloArthritis international Society (ASAS40) response at 1 year was reached for ankylosing spondylitis in 51.7% in private practices and 52.9% in university hospitals (p = 1.0) and for nr-axSpA in 27.5% versus 25.0%, respectively (p = 1.0). CONCLUSION With the exception of a lower proportion of patients with nr-axSpA newly treated with anti-TNF agents in private practices in comparison to academic centers, adherence to ASAS treatment recommendations for TNF inhibition was equally high, and similar response rates to TNF blockers were achieved in both clinical settings.

Abstract:

OBJECTIVE The primary aim of the study was to evaluate whether rheumatoid arthritis (RA) patients considered to be in remission according to clinical criteria sets still had persisting ultrasound (US) synovitis. We further intended to evaluate the capacity of our US score to discriminate between patients with clinically active disease and those in remission. METHODS This is an observational study nested within the Swiss Clinical Quality Management in Rheumatic Diseases (SCQM) rheumatoid arthritis cohort. A validated US score (the SONAR score), based on semi-quantitative B-mode and power Doppler (PwD) grading, was applied by rheumatologists in different clinical settings as part of the regular clinical workup. To define clinically relevant synovitis, the same score was applied to 38 healthy controls and the 90th percentile was used as the cut-off for 'relevant' synovitis. RESULTS Three hundred and seven patients had at least one US examination and concomitant clinical information on disease activity. More than a third of patients in both DAS28 and ACR/EULAR remission showed significant gray scale synovitis (P = 0.01 and 0.0002, respectively) and PwD activity (P = 0.005 and 0.0005, respectively) when compared to controls. The capacity of US to discriminate between the two clinical remission groups and patients with active disease was only moderate. CONCLUSION This observational study confirms that many patients considered to be in clinical remission according to the DAS and ACR/EULAR definitions still have residual synovitis on US. The prognostic significance of US synovitis and the exact place of US in patients reaching clinical remission need to be evaluated further.
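The cut-off definition (scores above the 90th percentile of healthy-control scores) can be sketched with Python's statistics module; the 38 control scores below are invented for illustration:

```python
import statistics

# Hypothetical scores for 38 healthy controls (values invented; the study's
# actual control data are not given in the abstract)
control_scores = [0, 0, 0, 1, 1, 1, 1, 2, 2, 2,
                  2, 2, 3, 3, 3, 3, 3, 3, 4, 4,
                  4, 4, 4, 5, 5, 5, 5, 6, 6, 6,
                  7, 7, 7, 8, 8, 9, 10, 12]

# 90th percentile of the healthy-control distribution as cut-off:
# quantiles(n=10) yields 9 cut points; the last one is the 90th percentile
cutoff = statistics.quantiles(control_scores, n=10, method="inclusive")[-1]
print(f"'Relevant' synovitis: score > {cutoff}")
```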

Abstract:

OBJECTIVE To evaluate the correlation between clinical measures of disease activity and an ultrasound (US) scoring system for synovitis applied by many different ultrasonographers in a daily routine care setting within the Swiss registry for RA (SCQM), and further to determine the sensitivity to change of this US score. METHODS One hundred and eight Swiss rheumatologists were trained in performing the Swiss Sonography in Arthritis and Rheumatism (SONAR) score. US B-mode and power Doppler (PwD) scores were correlated with the DAS28 and compared between the clinical categories in a cross-sectional cohort of patients. In patients with a second US (longitudinal cohort), we investigated whether change in US score correlated with change in DAS and evaluated the responsiveness of both methods. RESULTS In the cross-sectional cohort of 536 patients, correlation between the B-mode score and DAS28 was significant but modest (Pearson coefficient r = 0.41, P < 0.0001). The same was true for the PwD score (r = 0.41, P < 0.0001). In the longitudinal cohort of 183 patients we also found a significant correlation of change in B-mode and in PwD score with change in DAS28 (r = 0.54, P < 0.0001 and r = 0.46, P < 0.0001, respectively). Both methods of evaluation (DAS and US) showed similar responsiveness according to the standardized response mean (SRM). CONCLUSIONS The SONAR score is practicable and was applied by many rheumatologists in daily routine care after initial training. It demonstrates significant correlations with both the degree of and the change in disease activity as measured by the DAS. At the level of the individual patient, however, many discrepancies and overlapping results exist.
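The standardized response mean used here to compare responsiveness is simply the mean change divided by the standard deviation of the changes; a sketch with invented DAS28 change scores:

```python
import statistics

def srm(changes):
    """Standardized response mean: mean change / SD of the changes."""
    return statistics.mean(changes) / statistics.stdev(changes)

# Hypothetical DAS28 changes between two visits (values invented)
das28_change = [-1.2, -0.8, -1.5, 0.2, -0.5, -1.0, -0.3, -0.9]
print(f"SRM = {srm(das28_change):.2f}")  # → SRM = -1.40
```

An |SRM| around 0.8 or above is conventionally read as large responsiveness; comparing SRMs of the DAS and the US score is how the abstract's "similar responsiveness" claim is quantified.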

Abstract:

BACKGROUND After cardiac surgery with cardiopulmonary bypass (CPB), acquired coagulopathy often leads to post-CPB bleeding. Though multifactorial in origin, this coagulopathy is often aggravated by deficient fibrinogen levels. OBJECTIVE To assess whether laboratory and thrombelastometric testing on CPB can predict plasma fibrinogen immediately after CPB weaning. PATIENTS / METHODS This prospective study in 110 patients undergoing major cardiovascular surgery at risk of post-CPB bleeding compared fibrinogen level (Clauss method) and function (fibrin-specific thrombelastometry) in order to study the predictability of their course early after termination of CPB. Linear regression analysis and receiver operating characteristics were used to determine correlations and predictive accuracy. RESULTS Quantitative estimation of post-CPB Clauss fibrinogen from on-CPB fibrinogen was feasible with small bias (+0.19 g/l), but with poor precision and a percentage error >30%. A clinically useful alternative approach was developed by using on-CPB A10 to predict a Clauss fibrinogen range of interest instead of a discrete level. An on-CPB A10 ≤10 mm identified patients with a post-CPB Clauss fibrinogen of ≤1.5 g/l with a sensitivity of 0.99 and a positive predictive value of 0.60; it also identified those with a post-CPB Clauss fibrinogen of ≥2.0 g/l with a specificity of 0.83. CONCLUSIONS When measured on CPB prior to weaning, a FIBTEM A10 ≤10 mm is an early alert for post-CPB fibrinogen levels below or within the substitution range (1.5-2.0 g/l) recommended in case of post-CPB coagulopathic bleeding. This helps to minimize the delay to data-based hemostatic management after weaning from CPB.
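Why a test with sensitivity 0.99 can still have a positive predictive value of only 0.60 follows from Bayes' theorem: PPV depends on specificity and prevalence, not sensitivity alone. A sketch with an assumed specificity and prevalence (both invented; the abstract does not report them for this threshold):

```python
def ppv(sens, spec, prev):
    """Positive predictive value via Bayes' theorem."""
    tp = sens * prev                 # true-positive mass
    fp = (1 - spec) * (1 - prev)    # false-positive mass
    return tp / (tp + fp)

# Illustration: with the reported sensitivity of 0.99, an assumed specificity
# of 0.72 and prevalence of 30% (both invented) already pull PPV down to ~0.60
print(f"PPV = {ppv(0.99, 0.72, 0.30):.2f}")
```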

Abstract:

BACKGROUND Sepsis continues to be a major cause of death, disability, and health-care expenditure worldwide. Despite evidence suggesting that host genetics can influence sepsis outcomes, no specific loci have yet been convincingly replicated. The aim of this study was to identify genetic variants that influence sepsis survival. METHODS We did a genome-wide association study in three independent cohorts of white adult patients admitted to intensive care units with sepsis, severe sepsis, or septic shock (as defined by the International Consensus Criteria) due to pneumonia or intra-abdominal infection (cohorts 1-3, n=2534 patients). The primary outcome was 28-day survival. Results for the cohort of patients with sepsis due to pneumonia were combined in a meta-analysis of 1553 patients from all three cohorts, of whom 359 died within 28 days of admission to the intensive care unit. The most significantly associated single nucleotide polymorphisms (SNPs) were genotyped in a further 538 white patients with sepsis due to pneumonia (cohort 4), of whom 106 died. FINDINGS In the genome-wide meta-analysis of three independent pneumonia cohorts (cohorts 1-3), common variants in the FER gene were strongly associated with survival (p=9·7 × 10(-8)). Further genotyping of the top associated SNP (rs4957796) in the additional cohort (cohort 4) resulted in a combined p value of 5·6 × 10(-8) (odds ratio 0·56, 95% CI 0·45-0·69). In a time-to-event analysis, each allele reduced the mortality over 28 days by 44% (hazard ratio for death 0·56, 95% CI 0·45-0·69; likelihood ratio test p=3·4 × 10(-9), after adjustment for age and stratification by cohort). Mortality was 9·5% in patients carrying the CC genotype, 15·2% in those carrying the TC genotype, and 25·3% in those carrying the TT genotype. No significant genetic associations were identified when patients with sepsis due to pneumonia and intra-abdominal infection were combined.
INTERPRETATION We have identified common variants in the FER gene that associate with a reduced risk of death from sepsis due to pneumonia. The FER gene and associated molecular pathways are potential novel targets for therapy or prevention and candidates for the development of biomarkers for risk stratification. FUNDING European Commission and the Wellcome Trust.
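The genotype-specific mortality figures are roughly consistent with a multiplicative per-allele effect; a quick crude check (the paper's hazard ratio of 0.56 comes from an adjusted time-to-event model, not from these raw proportions):

```python
# 28-day mortality by rs4957796 genotype, as reported in the abstract
mortality = {"TT": 0.253, "TC": 0.152, "CC": 0.095}

# Crude risk ratio per additional C allele
rr_tc = mortality["TC"] / mortality["TT"]   # TT → TC
rr_cc = mortality["CC"] / mortality["TC"]   # TC → CC
print(f"TT→TC: {rr_tc:.2f}, TC→CC: {rr_cc:.2f}")
```

Both crude steps come out near 0.60, in the same range as the adjusted per-allele hazard ratio of 0.56.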

Abstract:

BACKGROUND General practitioners (GPs) are in the best position to suspect dementia. The Mini-Mental State Examination (MMSE) and the Clock Drawing Test (CDT) are widely used. Additional neurological tests may increase the accuracy of diagnosis. We aimed to evaluate the ability of a Short Smell Test (SST) and the Palmo-Mental Reflex (PMR) to detect dementia in patients whose MMSE and CDT are normal, but who show signs of cognitive dysfunction. METHODS This was a 3.5-year cross-sectional observational study in the Memory Clinic of the University Department of Geriatrics in Bern, Switzerland. Participating patients with normal MMSE (>26 points) and CDT (>5 points) were referred by GPs because they suspected dementia. All were examined according to a standardized protocol. Diagnosis of dementia was based on DSM-IV TR criteria. We assessed whether the SST and PMR accurately detected dementia. RESULTS In our cohort, 154 patients suspected of dementia had normal MMSE and CDT test results. Of these, 17 (11%) were demented. If the SST or PMR was abnormal, sensitivity was 71% (95% CI 44-90%) and specificity 64% (95% CI 55-72%) for detecting dementia. If both tests were abnormal, sensitivity was 24% (95% CI 7-50%), but specificity increased to 93% (95% CI 88-97%). CONCLUSION Patients suspected of dementia, but with normal MMSE and CDT results, may benefit if the SST and PMR are added as diagnostic tools. If both the SST and PMR are abnormal, this is a red flag to investigate these patients further, despite their negative neuropsychological screening results.
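The reported accuracies can be reconstructed from 2x2 tables using counts back-calculated from the abstract's percentages (17 demented, 137 non-demented; the individual cell counts are approximate, not stated in the abstract):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from 2x2 counts."""
    return tp / (tp + fn), tn / (tn + fp)

# "SST or PMR abnormal": ~12/17 demented detected, ~88/137 correctly negative
sens, spec = sens_spec(tp=12, fn=5, tn=88, fp=49)
print(f"either abnormal: sensitivity {sens:.0%}, specificity {spec:.0%}")

# "both abnormal": ~4/17 demented detected, ~127/137 correctly negative
sens2, spec2 = sens_spec(tp=4, fn=13, tn=127, fp=10)
print(f"both abnormal: sensitivity {sens2:.0%}, specificity {spec2:.0%}")
```

This reproduces the reported 71%/64% and 24%/93%, and makes the trade-off explicit: requiring both tests to be abnormal sacrifices sensitivity for specificity.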

Abstract:

OBJECTIVES Respondent-driven sampling (RDS) is a new data collection methodology used to estimate characteristics of hard-to-reach groups, such as the HIV prevalence in drug users. Many national public health systems and international organizations rely on RDS data. However, RDS reporting quality and available reporting guidelines are inadequate. We carried out a systematic review of RDS studies and present Strengthening the Reporting of Observational Studies in Epidemiology for RDS Studies (STROBE-RDS), a checklist of essential items to present in RDS publications, justified by an explanation and elaboration document. STUDY DESIGN AND SETTING We searched the MEDLINE (1970-2013), EMBASE (1974-2013), and Global Health (1910-2013) databases to assess the number and geographical distribution of published RDS studies. STROBE-RDS was developed based on STROBE guidelines, following Guidance for Developers of Health Research Reporting Guidelines. RESULTS RDS has been used in over 460 studies from 69 countries, including the USA (151 studies), China (70), and India (32). STROBE-RDS includes modifications to 12 of the 22 items on the STROBE checklist. The two key areas that required modification concerned the selection of participants and statistical analysis of the sample. CONCLUSION STROBE-RDS seeks to enhance the transparency and utility of research using RDS. If widely adopted, STROBE-RDS should improve global infectious diseases public health decision making.
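A common RDS prevalence estimator, the Volz-Heckathorn (RDS-II) estimator, weights each respondent by the inverse of their reported network degree to correct for the oversampling of well-connected people; a toy sketch with invented data (the abstract does not specify which estimator the reviewed studies used):

```python
# Toy RDS sample: (hiv_positive, reported network degree); values invented
respondents = [
    (True, 10), (False, 5), (True, 20), (False, 8),
    (False, 2), (True, 4), (False, 10), (False, 5),
]

# RDS-II: inverse-degree-weighted proportion
num = sum(1 / d for pos, d in respondents if pos)
den = sum(1 / d for pos, d in respondents)
print(f"RDS-II prevalence estimate: {num / den:.1%}")
```

Here the weighted estimate (~26%) is well below the naive sample proportion (3/8 = 37.5%), because the positives in this toy sample report higher degrees and are therefore down-weighted.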

Abstract:

OBJECTIVES The aim of this study was to quantify loss to follow-up (LTFU) in HIV care after delivery and to identify risk factors for LTFU, and implications for HIV disease progression and subsequent pregnancies. METHODS We used data on pregnancies within the Swiss HIV Cohort Study from 1996 to 2011. A delayed clinical visit was defined as > 180 days and LTFU as no visit for > 365 days after delivery. Logistic regression analysis was used to identify risk factors for LTFU. RESULTS A total of 695 pregnancies in 580 women were included in the study, of which 115 (17%) were subsequent pregnancies. Median maternal age was 32 years (IQR 28-36 years) and 104 (15%) women reported any history of injecting drug use (IDU). Overall, 233 of 695 (34%) women had a delayed visit in the year after delivery and 84 (12%) women were lost to follow-up. Being lost to follow-up was significantly associated with a history of IDU [adjusted odds ratio (aOR) 2.79; 95% confidence interval (CI) 1.32-5.88; P = 0.007] and not achieving an undetectable HIV viral load (VL) at delivery (aOR 2.42; 95% CI 1.21-4.85; P = 0.017) after adjusting for maternal age, ethnicity and being on antiretroviral therapy (ART) at conception. Forty-three of 84 (55%) women returned to care after LTFU. Of the 41 returning women with an available CD4 count, half (20 of 41) had a CD4 count < 350 cells/μL and 15% (6 of 41) had a count < 200 cells/μL at their return. CONCLUSIONS A history of IDU and detectable HIV VL at delivery were associated with LTFU. Effective strategies are warranted to retain women in care beyond pregnancy and to avoid CD4 cell count decline. ART continuation should be advised especially if a subsequent pregnancy is planned.
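To get an intuition for the adjusted odds ratio of 2.79, one can crudely convert it to a risk on the odds scale; this ignores the covariate adjustment and uses the overall 12% LTFU rate as the baseline, so it is only illustrative:

```python
def apply_or(p0, oratio):
    """Risk implied by applying an odds ratio to a baseline risk."""
    odds = p0 / (1 - p0) * oratio   # baseline odds scaled by the OR
    return odds / (1 + odds)        # back to a probability

# Crude illustration: aOR for history of IDU (2.79) applied to the
# overall 12% LTFU rate (ignores adjustment, so only a rough reading)
print(f"implied LTFU risk with IDU history: {apply_or(0.12, 2.79):.1%}")
```

Roughly, the odds ratio corresponds to LTFU risk rising from about 12% to about 28% in this crude reading.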

Abstract:

BACKGROUND Chronic postsurgical pain (CPSP) is an important clinical problem. Prospective studies of the incidence, characteristics and risk factors of CPSP are needed. OBJECTIVES The objective of this study was to evaluate the incidence and risk factors of CPSP. DESIGN A multicentre, prospective, observational trial. SETTING Twenty-one hospitals in 11 European countries. PATIENTS Three thousand one hundred and twenty patients undergoing surgery and enrolled in the European registry PAIN OUT. MAIN OUTCOME MEASURES Pain-related outcome was evaluated on the first postoperative day (D1) using a standardised pain outcome questionnaire. Review at 6 and 12 months via e-mail or telephone interview used the Brief Pain Inventory (BPI) and the DN4 (Douleur Neuropathique four questions). The primary endpoint was the incidence of moderate to severe CPSP (numeric rating scale, NRS ≥3/10) at 12 months. RESULTS Complete data were available for 1044 patients at 6 months and for 889 patients at 12 months. At 12 months, the incidence of moderate to severe CPSP was 11.8% (95% CI 9.7 to 13.9) and of severe pain (NRS ≥6) 2.2% (95% CI 1.2 to 3.3). Signs of neuropathic pain were recorded in 35.4% (95% CI 23.9 to 48.3) and 57.1% (95% CI 30.7 to 83.4) of patients with moderate and severe CPSP, respectively. Functional impairment (BPI) at 6 and 12 months increased with the severity of CPSP (P < 0.01) and the presence of neuropathic characteristics (P < 0.001). Multivariate analysis identified orthopaedic surgery, preoperative chronic pain and percentage of time in severe pain on D1 as risk factors. A 10% increase in the percentage of time in severe pain was associated with a 30% increase in CPSP incidence at 12 months. CONCLUSION The collection of data on CPSP was feasible within the European registry PAIN OUT. The incidence of moderate to severe CPSP at 12 months was 11.8%. Functional impairment was associated with CPSP severity and neuropathic characteristics.
Risk factors for CPSP in the present study were chronic preoperative pain, orthopaedic surgery and percentage of time in severe pain on D1. TRIAL REGISTRATION Clinicaltrials.gov identifier: NCT01467102.
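If the reported association is read as a multiplicative 1.30 factor per 10-point increase in time in severe pain (an assumption; the abstract does not state the functional form of the model), the implied effect compounds quickly:

```python
# Illustrative extrapolation: each 10-point increase in "% time in severe
# pain" on D1 multiplies CPSP incidence by 1.30 (multiplicativity assumed,
# not stated in the abstract)
per_10pt = 1.30
for step in (10, 20, 30):
    factor = per_10pt ** (step / 10)
    print(f"+{step} points: x{factor:.2f} CPSP incidence")
```

Under that assumption, a patient spending 30 more percentage points of D1 in severe pain would carry roughly 2.2 times the CPSP incidence.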

Abstract:

BACKGROUND Recommendations have differed nationally and internationally with respect to the best time to start antiretroviral therapy (ART). We compared the effectiveness of three strategies for initiation of ART in high-income countries for HIV-positive individuals who do not have AIDS: immediate initiation, initiation at a CD4 count less than 500 cells per μL, and initiation at a CD4 count less than 350 cells per μL. METHODS We used data from the HIV-CAUSAL Collaboration of cohort studies in Europe and the USA. We included 55 826 individuals aged 18 years or older who were diagnosed with HIV-1 infection between January, 2000, and September, 2013, had not started ART, did not have AIDS, and had CD4 count and HIV-RNA viral load measurements within 6 months of HIV diagnosis. We estimated relative risks of death and of death or AIDS-defining illness, mean survival time, the proportion of individuals in need of ART, and the proportion of individuals with HIV-RNA viral load less than 50 copies per mL, as would have been recorded under each ART initiation strategy 7 years after HIV diagnosis. We used the parametric g-formula to adjust for baseline and time-varying confounders. FINDINGS Median CD4 count at diagnosis of HIV infection was 376 cells per μL (IQR 222-551). Compared with immediate initiation, the estimated relative risk of death was 1·02 (95% CI 1·01-1·02) when ART was started at a CD4 count less than 500 cells per μL, and 1·06 (1·04-1·08) with initiation at a CD4 count less than 350 cells per μL. Corresponding estimates for death or AIDS-defining illness were 1·06 (1·06-1·07) and 1·20 (1·17-1·23), respectively. Compared with immediate initiation, the mean survival time at 7 years with a strategy of initiation at a CD4 count less than 500 cells per μL was 2 days shorter (95% CI 1-2) and at a CD4 count less than 350 cells per μL was 5 days shorter (4-6).
7 years after diagnosis of HIV, 100%, 98·7% (95% CI 98·6-98·7), and 92·6% (92·2-92·9) of individuals would have been in need of ART with immediate initiation, initiation at a CD4 count less than 500 cells per μL, and initiation at a CD4 count less than 350 cells per μL, respectively. Corresponding proportions of individuals with HIV-RNA viral load less than 50 copies per mL at 7 years were 87·3% (87·3-88·6), 87·4% (87·4-88·6), and 83·8% (83·6-84·9). INTERPRETATION The benefits of immediate initiation of ART, such as prolonged survival and AIDS-free survival and increased virological suppression, were small in this high-income setting with relatively low CD4 count at HIV diagnosis. The estimated beneficial effect on AIDS is less than in recently reported randomised trials. Increasing rates of HIV testing might be as important as a policy of early initiation of ART. FUNDING National Institutes of Health.
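The parametric g-formula standardizes the outcome risk under each strategy over the confounder distribution; a toy point-treatment version with invented numbers (the study's implementation additionally models time-varying confounders, which is far more involved):

```python
# Toy g-formula for a point treatment: standardize P(Y=1 | A=a, C=c) over
# the marginal distribution of confounder C. All numbers are invented.
p_c = {"low CD4": 0.4, "high CD4": 0.6}     # P(C=c)
p_y = {                                      # P(Y=1 | A=a, C=c)
    ("immediate", "low CD4"): 0.10,
    ("immediate", "high CD4"): 0.02,
    ("deferred", "low CD4"): 0.14,
    ("deferred", "high CD4"): 0.03,
}

def standardized_risk(a):
    """Risk under strategy a, averaged over the confounder distribution."""
    return sum(p_y[(a, c)] * w for c, w in p_c.items())

rr = standardized_risk("deferred") / standardized_risk("immediate")
print(f"standardized risk ratio (deferred vs immediate): {rr:.2f}")
```

The same standardization idea, applied iteratively over follow-up time with parametric models for the confounders, is what lets the study estimate what each initiation strategy would have produced in the full cohort.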