126 results for Observational recording
Abstract:
BACKGROUND Acute cough is a common problem in general practice and is often caused by a self-limiting viral infection. Nonetheless, antibiotics are often prescribed in this situation, which may lead to unnecessary side effects and, even worse, the worldwide development of antibiotic-resistant microorganisms. This study assessed the role of point-of-care C-reactive protein (CRP) testing and other predictors of antibiotic prescription in patients who present with acute cough in general practice. METHODS Patient characteristics, symptoms, signs, and laboratory and X-ray findings from 348 patients presenting to 39 general practitioners (GPs) with acute cough, as well as the GPs themselves, were recorded by fourth-year medical students during their three-week clerkships in general practice. Patient and clinician characteristics of those prescribed and those not prescribed antibiotics were compared using a mixed-effects model. RESULTS Of 315 patients included in the study, 22% were prescribed antibiotics. The two groups of patients, those prescribed antibiotics and those treated symptomatically, differed significantly in age, demand for antibiotics, days of cough, rhinitis, lung auscultation, haemoglobin level, white blood cell count, CRP level and the GP's license to self-dispense antibiotics. After regression analysis, only the CRP level, the white blood cell count and the duration of symptoms remained statistically significant predictors of antibiotic prescription. CONCLUSIONS The antibiotic prescription rate of 22% in adult patients with acute cough in the Swiss primary care setting is low compared to other countries. GPs appear to use point-of-care CRP testing in addition to the duration of clinical symptoms to help them decide whether or not to prescribe antibiotics.
Abstract:
OBJECTIVE To evaluate the initiation of and response to tumor necrosis factor (TNF) inhibitors for axial spondyloarthritis (axSpA) in private rheumatology practices versus academic centers. METHODS We compared newly initiated TNF inhibition for axSpA in 363 patients enrolled in private practices with 100 patients recruited in 6 university hospitals within the Swiss Clinical Quality Management (SCQM) cohort. RESULTS All patients had been treated with ≥ 1 nonsteroidal antiinflammatory drug and > 70% of patients had a baseline Bath Ankylosing Spondylitis Disease Activity Index (BASDAI) ≥ 4 before anti-TNF agent initiation. The proportion of patients with nonradiographic axSpA (nr-axSpA) treated with TNF inhibitors was higher in hospitals versus private practices (30.4% vs 18.7%, p = 0.02). The burden of disease as assessed by patient-reported outcomes at baseline was slightly higher in the hospital setting. Mean levels (± SD) of the Ankylosing Spondylitis Disease Activity Score were, however, virtually identical in private practices and academic centers (3.4 ± 1.0 vs 3.4 ± 0.9, p = 0.68). An Assessment of SpondyloArthritis international Society (ASAS40) response at 1 year was reached for ankylosing spondylitis in 51.7% in private practices and 52.9% in university hospitals (p = 1.0) and for nr-axSpA in 27.5% versus 25.0%, respectively (p = 1.0). CONCLUSION With the exception of a lower proportion of patients with nr-axSpA newly treated with anti-TNF agents in private practices in comparison to academic centers, adherence to ASAS treatment recommendations for TNF inhibition was equally high, and similar response rates to TNF blockers were achieved in both clinical settings.
Abstract:
OBJECTIVE The primary aim of the study was to evaluate whether rheumatoid arthritis (RA) patients considered to be in remission according to clinical criteria sets still had persisting ultrasound (US) synovitis. We further intended to evaluate the capacity of our US score to discriminate between patients with clinically active disease and those in remission. METHODS This is an observational study nested within the Swiss Clinical Quality Management in Rheumatic Diseases (SCQM) rheumatoid arthritis cohort. A validated US score (the SONAR score), based on semi-quantitative B-mode and power Doppler (PwD) grading, was applied by rheumatologists in different clinical settings as part of the regular clinical workup. To define clinically relevant synovitis, the same score was applied to 38 healthy controls and the 90th percentile was used as the cut-off for 'relevant' synovitis. RESULTS Three hundred and seven patients had at least one US examination and concomitant clinical information on disease activity. More than a third of patients in both DAS28 and ACR/EULAR remission showed significant gray-scale synovitis (P=0.01 and 0.0002, respectively) and PwD activity (P=0.005 and 0.0005, respectively) when compared to controls. The capacity of US to discriminate between the two clinical remission groups and patients with active disease was only moderate. CONCLUSION This observational study confirms that many patients considered to be in clinical remission according to the DAS and ACR/EULAR definitions still have residual synovitis on US. The prognostic significance of US synovitis and the exact place of US in patients reaching clinical remission need to be further evaluated.
Abstract:
OBJECTIVE To evaluate the correlation between clinical measures of disease activity and an ultrasound (US) scoring system for synovitis applied by many different ultrasonographers in a daily routine care setting within the Swiss registry for RA (SCQM), and further to determine the sensitivity to change of this US score. METHODS One hundred and eight Swiss rheumatologists were trained in performing the Swiss Sonography in Arthritis and Rheumatism (SONAR) score. US B-mode and power Doppler (PwD) scores were correlated with DAS28 and compared between the clinical categories in a cross-sectional cohort of patients. In patients with a second US (longitudinal cohort), we investigated whether change in US score correlated with change in DAS28 and evaluated the responsiveness of both methods. RESULTS In the cross-sectional cohort of 536 patients, the correlation between the B-mode score and DAS28 was significant but modest (Pearson coefficient r = 0.41, P < 0.0001). The same was true for the PwD score (r = 0.41, P < 0.0001). In the longitudinal cohort of 183 patients we also found significant correlations between change in B-mode and PwD scores and change in DAS28 (r = 0.54, P < 0.0001 and r = 0.46, P < 0.0001, respectively). Both methods of evaluation (DAS and US) showed similar responsiveness according to the standardized response mean (SRM). CONCLUSIONS The SONAR score is practicable and was applied by many rheumatologists in daily routine care after initial training. It demonstrates significant correlations with both the degree of and the change in disease activity as measured by DAS28. At the level of the individual patient, however, the US score shows many discrepancies, and overlapping results exist.
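The responsiveness comparison above rests on the standardized response mean, which is simply the mean change divided by the standard deviation of the changes. A minimal sketch with hypothetical paired DAS28 values (illustrative numbers, not the SCQM cohort's data):

```python
import numpy as np

# Standardized response mean (SRM) = mean change / SD of change.
# Hypothetical paired DAS28 scores before and after treatment.
before = np.array([5.1, 4.8, 6.0, 3.9, 5.5, 4.2])
after  = np.array([3.2, 4.0, 4.1, 3.5, 3.8, 3.9])

change = after - before
srm = change.mean() / change.std(ddof=1)   # negative here: scores improved
print(f"SRM = {srm:.2f}")
```

An |SRM| of roughly 0.8 or more is conventionally read as large responsiveness; computing it for both DAS28 and the US score on the same patients is what allows the "similar responsiveness" comparison reported above.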
Abstract:
This paper introduces an area- and power-efficient approach for compressive recording of cortical signals used in an implantable system prior to transmission. Recent research on compressive sensing has shown promising results for sub-Nyquist sampling of sparse biological signals. Still, any large-scale implementation of this technique faces critical issues caused by the increased hardware intensity. The cost of implementing compressive sensing in a multichannel system in terms of area usage can be significantly higher than a conventional data acquisition system without compression. To tackle this issue, a new multichannel compressive sensing scheme which exploits the spatial sparsity of the signals recorded from the electrodes of the sensor array is proposed. The analysis shows that using this method, the power efficiency is preserved to a great extent while the area overhead is significantly reduced, resulting in an improved power-area product. The proposed circuit architecture is implemented in a UMC 0.18 μm CMOS technology. Extensive performance analysis and design optimization have been done, resulting in a low-noise, compact and power-efficient implementation. The results of simulations and subsequent reconstructions show the possibility of recovering fourfold compressed intracranial EEG signals with an SNR as high as 21.8 dB, while consuming 10.5 μW of power within an effective area of 250 μm × 250 μm per channel.
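The sub-Nyquist acquisition and reconstruction described above can be illustrated with a minimal compressive-sensing sketch. The dimensions, the Gaussian sensing matrix and the orthogonal-matching-pursuit recovery below are illustrative assumptions, not the paper's on-chip architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 128, 32, 4            # signal length, measurements, sparsity (illustrative)

# Synthetic k-sparse signal (real iEEG is only approximately sparse in some basis)
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
y = Phi @ x                                      # fourfold-compressed measurements

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedy recovery of a k-sparse signal."""
    residual, idx = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        # Pick the column most correlated with the current residual
        idx.append(int(np.argmax(np.abs(Phi.T @ residual))))
        # Re-fit all selected coefficients by least squares
        coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        residual = y - Phi[:, idx] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(Phi, y, k)
snr_db = 10 * np.log10(np.sum(x**2) / np.sum((x - x_hat)**2))
print(f"reconstruction SNR: {snr_db:.1f} dB")
```

With fourfold compression (32 measurements for a length-128 signal), an exactly 4-sparse signal is typically recovered almost perfectly; real intracranial EEG is only approximately sparse, which is why the paper reports a finite SNR around 21.8 dB.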
Abstract:
BACKGROUND After cardiac surgery with cardiopulmonary bypass (CPB), acquired coagulopathy often leads to post-CPB bleeding. Though multifactorial in origin, this coagulopathy is often aggravated by deficient fibrinogen levels. OBJECTIVE To assess whether laboratory and thrombelastometric testing on CPB can predict plasma fibrinogen immediately after CPB weaning. PATIENTS / METHODS This prospective study in 110 patients undergoing major cardiovascular surgery at risk of post-CPB bleeding compares fibrinogen level (Clauss method) and function (fibrin-specific thrombelastometry) in order to study the predictability of their course early after termination of CPB. Linear regression analysis and receiver operating characteristics were used to determine correlations and predictive accuracy. RESULTS Quantitative estimation of post-CPB Clauss fibrinogen from on-CPB fibrinogen was feasible with small bias (+0.19 g/l), but with poor precision and a percentage of error >30%. A clinically useful alternative approach was developed by using on-CPB A10 to predict a Clauss fibrinogen range of interest instead of a discrete level. An on-CPB A10 ≤10 mm identified patients with a post-CPB Clauss fibrinogen of ≤1.5 g/l with a sensitivity of 0.99 and a positive predictive value of 0.60; it also identified those without a post-CPB Clauss fibrinogen <2.0 g/l with a specificity of 0.83. CONCLUSIONS When measured on CPB prior to weaning, a FIBTEM A10 ≤10 mm is an early alert for post-CPB fibrinogen levels below or within the substitution range (1.5-2.0 g/l) recommended in case of post-CPB coagulopathic bleeding. This helps to minimize the delay to data-based hemostatic management after weaning from CPB.
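The agreement statistics quoted above (bias, percentage of error) follow Bland-Altman-style logic: the bias is the mean of the paired differences, and the percentage error relates the 1.96 SD limits of agreement to the mean measured value. A minimal sketch with hypothetical paired fibrinogen values, not the study's measurements:

```python
import numpy as np

# Hypothetical paired fibrinogen values (g/l): on-CPB estimate vs post-CPB Clauss
on_cpb = np.array([1.2, 1.8, 2.1, 2.6, 3.0, 1.5, 2.4, 1.9])
post   = np.array([1.4, 2.0, 2.2, 2.9, 3.3, 1.6, 2.7, 2.0])

diff = on_cpb - post
bias = diff.mean()                      # systematic offset between the two methods
loa_half = 1.96 * diff.std(ddof=1)      # half-width of the limits of agreement
pct_error = 100 * loa_half / post.mean()  # percentage error relative to the reference

print(f"bias = {bias:+.2f} g/l, percentage error = {pct_error:.0f}%")
```

In the study's terms, a small bias with a percentage error above 30% is exactly the pattern that makes point prediction of a discrete post-CPB level unreliable, motivating the range-based A10 cut-off instead.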
Abstract:
BACKGROUND Sepsis continues to be a major cause of death, disability, and health-care expenditure worldwide. Despite evidence suggesting that host genetics can influence sepsis outcomes, no specific loci have yet been convincingly replicated. The aim of this study was to identify genetic variants that influence sepsis survival. METHODS We did a genome-wide association study in three independent cohorts of white adult patients admitted to intensive care units with sepsis, severe sepsis, or septic shock (as defined by the International Consensus Criteria) due to pneumonia or intra-abdominal infection (cohorts 1-3, n=2534 patients). The primary outcome was 28-day survival. Results for the cohort of patients with sepsis due to pneumonia were combined in a meta-analysis of 1553 patients from all three cohorts, of whom 359 died within 28 days of admission to the intensive care unit. The most significantly associated single nucleotide polymorphisms (SNPs) were genotyped in a further 538 white patients with sepsis due to pneumonia (cohort 4), of whom 106 died. FINDINGS In the genome-wide meta-analysis of three independent pneumonia cohorts (cohorts 1-3), common variants in the FER gene were strongly associated with survival (p=9·7 × 10⁻⁸). Further genotyping of the top associated SNP (rs4957796) in the additional cohort (cohort 4) resulted in a combined p value of 5·6 × 10⁻⁸ (odds ratio 0·56, 95% CI 0·45-0·69). In a time-to-event analysis, each C allele reduced the mortality over 28 days by 44% (hazard ratio for death 0·56, 95% CI 0·45-0·69; likelihood ratio test p=3·4 × 10⁻⁹, after adjustment for age and stratification by cohort). Mortality was 9·5% in patients carrying the CC genotype, 15·2% in those carrying the TC genotype, and 25·3% in those carrying the TT genotype. No significant genetic associations were identified when patients with sepsis due to pneumonia and intra-abdominal infection were combined.
INTERPRETATION We have identified common variants in the FER gene that are associated with a reduced risk of death from sepsis due to pneumonia. The FER gene and its associated molecular pathways are potential novel targets for therapy or prevention and candidates for the development of biomarkers for risk stratification. FUNDING European Commission and the Wellcome Trust.
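As a plausibility check, the per-allele effect can be approximated from the genotype-specific mortality figures reported above. The quick odds computation below is an illustration only, not the study's adjusted time-to-event model:

```python
# 28-day mortality by rs4957796 genotype, as reported in the abstract
mortality = {"TT": 0.253, "TC": 0.152, "CC": 0.095}

def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

# Crude odds ratio for each additional C allele
or_tc_vs_tt = odds(mortality["TC"]) / odds(mortality["TT"])
or_cc_vs_tc = odds(mortality["CC"]) / odds(mortality["TC"])
print(f"odds ratio per added C allele: {or_tc_vs_tt:.2f}, {or_cc_vs_tc:.2f}")
```

Both crude ratios land near the reported odds ratio of 0·56, consistent with an approximately multiplicative (per-allele) protective effect of the C allele.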
Abstract:
BACKGROUND General practitioners (GPs) are in the best position to suspect dementia. The Mini-Mental State Examination (MMSE) and the Clock Drawing Test (CDT) are widely used screening instruments. Additional neurological tests may increase the accuracy of diagnosis. We aimed to evaluate the ability of a Short Smell Test (SST) and the Palmo-Mental Reflex (PMR) to detect dementia in patients whose MMSE and CDT are normal, but who show signs of cognitive dysfunction. METHODS This was a 3.5-year cross-sectional observational study in the Memory Clinic of the University Department of Geriatrics in Bern, Switzerland. Participating patients with a normal MMSE (>26 points) and CDT (>5 points) were referred by GPs who suspected dementia. All were examined according to a standardized protocol. Diagnosis of dementia was based on DSM-IV TR criteria. We assessed whether SST and PMR accurately detected dementia. RESULTS In our cohort, 154 patients suspected of dementia had normal MMSE and CDT test results. Of these, 17 (11%) were demented. If either SST or PMR was abnormal, sensitivity was 71% (95% CI 44-90%) and specificity 64% (95% CI 55-72%) for detecting dementia. If both tests were abnormal, sensitivity was 24% (95% CI 7-50%), but specificity increased to 93% (95% CI 88-97%). CONCLUSION Patients suspected of dementia, but with normal MMSE and CDT results, may benefit if SST and PMR are added as diagnostic tools. If both SST and PMR are abnormal, this is a red flag to investigate these patients further, despite their negative neuropsychological screening results.
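The accuracy figures above come from a standard 2×2 diagnostic table. The cell counts below are approximate reconstructions from the reported totals and percentages (17 demented of 154; sensitivity ≈71% and specificity ≈64% for "either test abnormal"), not published raw data:

```python
# 2x2 diagnostic table: "SST or PMR abnormal" vs DSM-IV dementia diagnosis.
# Counts reconstructed approximately from the abstract's percentages.
tp, fn = 12, 5      # demented patients: test abnormal / test normal
fp, tn = 49, 88     # non-demented patients: test abnormal / test normal

sensitivity = tp / (tp + fn)   # proportion of demented patients detected
specificity = tn / (tn + fp)   # proportion of non-demented correctly negative
ppv = tp / (tp + fp)           # probability of dementia given an abnormal test

print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}, PPV {ppv:.0%}")
```

The low positive predictive value that falls out of these counts reflects the low dementia prevalence (11%) in this pre-screened group, which is why the authors frame the combined abnormal result as a "red flag" for further work-up rather than a diagnosis.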
Abstract:
OBJECTIVES Respondent-driven sampling (RDS) is a new data collection methodology used to estimate characteristics of hard-to-reach groups, such as the HIV prevalence in drug users. Many national public health systems and international organizations rely on RDS data. However, RDS reporting quality and available reporting guidelines are inadequate. We carried out a systematic review of RDS studies and present Strengthening the Reporting of Observational Studies in Epidemiology for RDS Studies (STROBE-RDS), a checklist of essential items to present in RDS publications, justified by an explanation and elaboration document. STUDY DESIGN AND SETTING We searched the MEDLINE (1970-2013), EMBASE (1974-2013), and Global Health (1910-2013) databases to assess the number and geographical distribution of published RDS studies. STROBE-RDS was developed based on STROBE guidelines, following Guidance for Developers of Health Research Reporting Guidelines. RESULTS RDS has been used in over 460 studies from 69 countries, including the USA (151 studies), China (70), and India (32). STROBE-RDS includes modifications to 12 of the 22 items on the STROBE checklist. The two key areas that required modification concerned the selection of participants and statistical analysis of the sample. CONCLUSION STROBE-RDS seeks to enhance the transparency and utility of research using RDS. If widely adopted, STROBE-RDS should improve global infectious diseases public health decision making.
Abstract:
OBJECTIVES The aim of this study was to quantify loss to follow-up (LTFU) in HIV care after delivery, to identify risk factors for LTFU, and to assess implications for HIV disease progression and subsequent pregnancies. METHODS We used data on pregnancies within the Swiss HIV Cohort Study from 1996 to 2011. A delayed clinical visit was defined as no visit for > 180 days and LTFU as no visit for > 365 days after delivery. Logistic regression analysis was used to identify risk factors for LTFU. RESULTS A total of 695 pregnancies in 580 women were included in the study, of which 115 (17%) were subsequent pregnancies. Median maternal age was 32 years (IQR 28-36 years) and 104 (15%) women reported a history of injecting drug use (IDU). Overall, 233 of 695 (34%) women had a delayed visit in the year after delivery and 84 (12%) women were lost to follow-up. Being lost to follow-up was significantly associated with a history of IDU [adjusted odds ratio (aOR) 2.79; 95% confidence interval (CI) 1.32-5.88; P = 0.007] and with not achieving an undetectable HIV viral load (VL) at delivery (aOR 2.42; 95% CI 1.21-4.85; P = 0.017), after adjusting for maternal age, ethnicity and being on antiretroviral therapy (ART) at conception. Forty-three of the 84 (55%) women returned to care after LTFU. Of the 41 returning women with an available CD4 count, half (20) had a CD4 count < 350 cells/μL and 15% (6) had a CD4 count < 200 cells/μL at their return. CONCLUSIONS A history of IDU and a detectable HIV VL at delivery were associated with LTFU. Effective strategies are warranted to retain women in care beyond pregnancy and to avoid CD4 cell count decline. ART continuation should be advised, especially if a subsequent pregnancy is planned.
Abstract:
BACKGROUND Chronic postsurgical pain (CPSP) is an important clinical problem. Prospective studies of the incidence, characteristics and risk factors of CPSP are needed. OBJECTIVES The objective of this study was to evaluate the incidence and risk factors of CPSP. DESIGN A multicentre, prospective, observational trial. SETTING Twenty-one hospitals in 11 European countries. PATIENTS Three thousand one hundred and twenty patients undergoing surgery and enrolled in the European registry PAIN OUT. MAIN OUTCOME MEASURES Pain-related outcome was evaluated on the first postoperative day (D1) using a standardised pain outcome questionnaire. Review at 6 and 12 months via e-mail or telephone interview used the Brief Pain Inventory (BPI) and the DN4 (Douleur Neuropathique four questions). The primary endpoint was the incidence of moderate to severe CPSP (numeric rating scale, NRS ≥3/10) at 12 months. RESULTS Complete data were available for 1044 patients at 6 months and for 889 patients at 12 months. At 12 months, the incidence of moderate to severe CPSP was 11.8% (95% CI 9.7 to 13.9) and of severe pain (NRS ≥6) 2.2% (95% CI 1.2 to 3.3). Signs of neuropathic pain were recorded in 35.4% (95% CI 23.9 to 48.3) and 57.1% (95% CI 30.7 to 83.4) of patients with moderate and severe CPSP, respectively. Functional impairment (BPI) at 6 and 12 months increased with the severity of CPSP (P < 0.01) and the presence of neuropathic characteristics (P < 0.001). Multivariate analysis identified orthopaedic surgery, preoperative chronic pain and percentage of time in severe pain on D1 as risk factors. A 10% increase in the percentage of time in severe pain was associated with a 30% increase in CPSP incidence at 12 months. CONCLUSION The collection of data on CPSP was feasible within the European registry PAIN OUT. The incidence of moderate to severe CPSP at 12 months was 11.8%. Functional impairment was associated with CPSP severity and neuropathic characteristics.
Risk factors for CPSP in the present study were chronic preoperative pain, orthopaedic surgery and percentage of time in severe pain on D1. TRIAL REGISTRATION Clinicaltrials.gov identifier: NCT01467102.
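Treating the reported dose-response (about 30% more CPSP per 10% more time in severe pain on D1) as an odds ratio of roughly 1.3 per 10-point step, which is an assumption since the abstract does not state the model scale, the effect extrapolates multiplicatively on the log-odds scale:

```python
import math

OR_PER_10 = 1.30                     # assumed odds ratio per 10% of time in severe pain
beta = math.log(OR_PER_10) / 10.0    # log-odds slope per single percentage point

def odds_multiplier(delta_pct):
    """Multiplicative change in CPSP odds for a change in % time in severe pain."""
    return math.exp(beta * delta_pct)

for d in (10, 20, 50):
    print(f"+{d}% time in severe pain -> odds x {odds_multiplier(d):.2f}")
```

Under this assumption, a patient spending 50 percentage points more of D1 in severe pain would carry roughly 1.3⁵ ≈ 3.7 times the odds of moderate to severe CPSP at 12 months, which illustrates why time in severe pain on D1 is a clinically meaningful early risk factor.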
Abstract:
BACKGROUND Recommendations have differed nationally and internationally with respect to the best time to start antiretroviral therapy (ART). We compared the effectiveness of three strategies for initiation of ART in high-income countries for HIV-positive individuals who do not have AIDS: immediate initiation, initiation at a CD4 count less than 500 cells per μL, and initiation at a CD4 count less than 350 cells per μL. METHODS We used data from the HIV-CAUSAL Collaboration of cohort studies in Europe and the USA. We included 55 826 individuals aged 18 years or older who were diagnosed with HIV-1 infection between January, 2000, and September, 2013, had not started ART, did not have AIDS, and had CD4 count and HIV-RNA viral load measurements within 6 months of HIV diagnosis. We estimated relative risks of death and of death or AIDS-defining illness, mean survival time, the proportion of individuals in need of ART, and the proportion of individuals with an HIV-RNA viral load less than 50 copies per mL, as would have been recorded under each ART initiation strategy 7 years after HIV diagnosis. We used the parametric g-formula to adjust for baseline and time-varying confounders. FINDINGS Median CD4 count at diagnosis of HIV infection was 376 cells per μL (IQR 222-551). Compared with immediate initiation, the estimated relative risk of death was 1·02 (95% CI 1·01-1·02) when ART was started at a CD4 count less than 500 cells per μL, and 1·06 (1·04-1·08) with initiation at a CD4 count less than 350 cells per μL. Corresponding estimates for death or AIDS-defining illness were 1·06 (1·06-1·07) and 1·20 (1·17-1·23), respectively. Compared with immediate initiation, the mean survival time at 7 years with a strategy of initiation at a CD4 count less than 500 cells per μL was 2 days shorter (95% CI 1-2) and with initiation at a CD4 count less than 350 cells per μL was 5 days shorter (4-6).
7 years after diagnosis of HIV, 100%, 98·7% (95% CI 98·6-98·7), and 92·6% (92·2-92·9) of individuals would have been in need of ART with immediate initiation, initiation at a CD4 count less than 500 cells per μL, and initiation at a CD4 count less than 350 cells per μL, respectively. Corresponding proportions of individuals with HIV-RNA viral load less than 50 copies per mL at 7 years were 87·3% (87·3-88·6), 87·4% (87·4-88·6), and 83·8% (83·6-84·9). INTERPRETATION The benefits of immediate initiation of ART, such as prolonged survival and AIDS-free survival and increased virological suppression, were small in this high-income setting with relatively low CD4 count at HIV diagnosis. The estimated beneficial effect on AIDS is less than in recently reported randomised trials. Increasing rates of HIV testing might be as important as a policy of early initiation of ART. FUNDING National Institutes of Health.