Abstract:
AIM To assess the prevalence of vascular dementia, mixed dementia and Alzheimer's disease in patients with atrial fibrillation, and to evaluate the accuracy of the Hachinski ischemic score for these subtypes of dementia. METHODS A nested case-control study was carried out. A total of 103 of 784 consecutive patients evaluated for cognitive status at the Ambulatory Geriatric Clinic had a diagnosis of atrial fibrillation. Controls without atrial fibrillation were randomly selected from the remaining 681 patients using 1:2 matching for sex, age and education. RESULTS The prevalence of vascular dementia was twice as high in patients with atrial fibrillation as in controls (21.4% vs 10.7%, P = 0.024). Alzheimer's disease was also more frequent in the group with atrial fibrillation (12.6% vs 7.3%, P = 0.046), whereas mixed dementia had a similar distribution. The Hachinski ischemic score discriminated poorly between dementia subtypes, with misclassification rates between 46% (95% CI 28-66) and 70% (95% CI 55-83). In patients with atrial fibrillation, these rates ranged from 55% (95% CI 32-77) to 69% (95% CI 39-91). In patients in whom the diagnosis of dementia was excluded, the Hachinski ischemic score suggested the presence of vascular dementia in 11% and mixed dementia in 30%. CONCLUSIONS Vascular dementia and Alzheimer's disease, but not mixed dementia, are more prevalent in patients with atrial fibrillation. The discriminative accuracy of the Hachinski ischemic score for dementia subtypes in atrial fibrillation is poor, with a significant proportion of misclassifications.
Primary prophylaxis for venous thromboembolism in ambulatory cancer patients receiving chemotherapy.
Abstract:
BACKGROUND Venous thromboembolism (VTE) often complicates the clinical course of cancer. The risk is further increased by chemotherapy, but the safety and efficacy of primary thromboprophylaxis in cancer patients treated with chemotherapy are uncertain. This is an update of a review first published in February 2012. OBJECTIVES To assess the efficacy and safety of primary thromboprophylaxis for VTE in ambulatory cancer patients receiving chemotherapy compared with placebo or no thromboprophylaxis. SEARCH METHODS For this update, the Cochrane Peripheral Vascular Diseases Group Trials Search Co-ordinator searched the Specialised Register (last searched May 2013), CENTRAL (2013, Issue 5), and clinical trials registries (up to June 2013). SELECTION CRITERIA Randomised controlled trials (RCTs) comparing any oral or parenteral anticoagulant or mechanical intervention to no intervention or placebo, or comparing two different anticoagulants. DATA COLLECTION AND ANALYSIS Data were extracted on methodological quality, patients, interventions, and outcomes, including symptomatic VTE and major bleeding as the primary effectiveness and safety outcomes, respectively. MAIN RESULTS We identified 12 additional RCTs (6323 patients) in the updated search, so this update considered 21 trials with a total of 9861 patients, all evaluating pharmacological interventions and performed mainly in patients with advanced cancer. Overall, the risk of bias varied from low to high. One large trial of 3212 patients found a 64% (risk ratio (RR) 0.36, 95% confidence interval (CI) 0.22 to 0.60) reduction of symptomatic VTE with the ultra-low molecular weight heparin (uLMWH) semuloparin relative to placebo, with no apparent difference in major bleeding (RR 1.05, 95% CI 0.55 to 2.00).
LMWH, when compared with inactive control, significantly reduced the incidence of symptomatic VTE (RR 0.53, 95% CI 0.38 to 0.75; no heterogeneity, Tau² = 0) with similar rates of major bleeding events (RR 1.30, 95% CI 0.75 to 2.23). In patients with multiple myeloma, LMWH was associated with a significant reduction in symptomatic VTE when compared with the vitamin K antagonist warfarin (RR 0.33, 95% CI 0.14 to 0.83), while the difference between LMWH and aspirin was not statistically significant (RR 0.51, 95% CI 0.22 to 1.17). No major bleeding was observed in patients treated with LMWH or warfarin, and major bleeding occurred in less than 1% of those treated with aspirin. Only one study evaluated unfractionated heparin against inactive control; it found an incidence of major bleeding of 1% in both study groups but did not report on VTE. When compared with placebo, warfarin was associated with a reduction of symptomatic VTE that was not statistically significant (RR 0.15, 95% CI 0.02 to 1.20). Antithrombin, evaluated in one study involving paediatric patients, had no significant effect on VTE or major bleeding when compared with inactive control. The new oral factor Xa inhibitor apixaban was evaluated in a phase II dose-finding study that suggested a promising low rate of major bleeding (2.1% versus 3.3%) and symptomatic VTE (1.1% versus 10%) in comparison with placebo. AUTHORS' CONCLUSIONS In this update, we confirmed that primary thromboprophylaxis with LMWH significantly reduced the incidence of symptomatic VTE in ambulatory cancer patients treated with chemotherapy. In addition, the uLMWH semuloparin significantly reduced the incidence of symptomatic VTE. However, the broad confidence intervals around the estimates for major bleeding suggest caution in the use of anticoagulation and mandate additional studies to determine the risk-to-benefit ratio of anticoagulants in this setting.
Despite the encouraging results of this review, routine prophylaxis in ambulatory cancer patients cannot be recommended before safety issues are adequately addressed.
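The effect measures in this review are risk ratios with confidence intervals computed on the log scale. As a minimal sketch of how such an estimate is derived from raw trial counts (the counts below are hypothetical and are not taken from any included trial):

```python
import math

def risk_ratio(events_a, n_a, events_b, n_b, z=1.96):
    """Risk ratio of group A vs group B with a 95% CI on the log scale,
    using the standard delta-method standard error for log(RR)."""
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts: 20/1600 VTE events with prophylaxis vs 55/1580 with placebo
rr, lo, hi = risk_ratio(20, 1600, 55, 1580)
```

A pooled meta-analytic estimate would additionally weight each trial (e.g. by inverse variance); the sketch covers only a single two-arm comparison.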
Abstract:
BACKGROUND Evidence-based guidelines are needed to guide effective long-term follow-up (LTFU) of childhood cancer survivors (CCS) at risk of late adverse effects (LAEs). We aimed to ascertain the use of LTFU guidelines throughout Europe, and seek views on the need for pan-European LTFU guidelines. PROCEDURES One expert clinician from each of 44 European countries was invited to participate in an online survey. Information was sought regarding the use and content of LTFU guidelines in the respondent's centre and country, and their views about developing pan-European LTFU guidelines. RESULTS Thirty-one countries (70%) responded, including 24 of 26 full EU countries (92%). LTFU guidelines were implemented nationally in 17 countries (55%). All guidelines included recommendations about physical LAEs, specific risk groups and frequency of surveillance, and the majority about psychosocial LAEs (70%), and healthy lifestyle promotion (65%). A minority of guidelines described recommendations about transition to age-appropriate LTFU services (22%), where LTFU should be performed (22%) and by whom (30%). Most respondents (94%) agreed on the need for pan-European LTFU guidelines, specifically including recommendations about surveillance for specific physical LAEs (97%), action to be taken if a specific LAE is detected (90%), minimum requirements for LTFU (93%), transition and health promotion (both 87%). CONCLUSIONS Guidelines are not universally used throughout Europe. However, there is strong support for developing pan-European LTFU guidelines for CCS. PanCareSurFup (www.pancare.eu) will collaborate with partners to develop such guidelines, including recommendations for hitherto relatively neglected topics, such as minimum LTFU requirements, transition and health promotion.
Abstract:
INTRODUCTION According to reports from observational databases, classic AIDS-defining opportunistic infections (ADOIs) occur in patients with CD4 counts above 500/µL on and off cART. Adjudication of these events is usually not performed. However, ADOIs are often used as endpoints, for example, in analyses on when to start cART. MATERIALS AND METHODS In the Swiss HIV Cohort Study (SHCS) database, we identified 91 cases of ADOIs that occurred from 1996 onwards in patients with the nearest CD4 count >500/µL. Cases of tuberculosis and recurrent bacterial pneumonia were excluded, as they also occur in non-immunocompromised patients. Chart review was performed in 82 cases; in 50 cases we identified CD4 counts from six months before until one month after the ADOI and had chart review material allowing an in-depth review. In these 50 cases, we assessed whether (1) the ADOI fulfilled the SHCS diagnostic criteria (www.shcs.ch), and (2) HIV infection with CD4 >500/µL was the main immune-compromising condition causing the ADOI. Adjudication of cases was done by two experienced clinicians who had to agree on the interpretation. RESULTS More than 13,000 participants were followed in the SHCS in the period of interest. Twenty-four (48%) of the 50 chart-reviewed patients with ADOI and CD4 >500/µL had an HIV RNA <400 copies/mL at the time of the ADOI. In the 50 cases, candida oesophagitis was the most frequent ADOI, occurring in 30 patients (60%), followed by pneumocystis pneumonia and chronic ulcerative HSV disease (Table 1). Overall, chronic HIV infection with a CD4 count >500/µL was the likely explanation for the ADOI in only seven cases (14%).
Other reasons (Table 1) were ADOIs occurring during primary HIV infection in 5 (10%) cases, unmasking IRIS in 1 (2%) case, chronic HIV infection with CD4 counts <500/µL near the ADOI in 13 (26%) cases, a diagnosis not meeting SHCS diagnostic criteria in 7 (14%) cases and, most importantly, other additional immune-compromising conditions such as immunosuppressive drugs in 14 (34%). CONCLUSIONS In patients with CD4 counts >500/µL, chronic HIV infection is the cause of ADOIs in only a minority of cases. Other immune-compromising conditions are more likely explanations in one-third of the patients, especially in cases of candida oesophagitis. ADOIs in HIV patients with high CD4 counts should be used as endpoints in studies based on observational databases only with great caution.
Abstract:
OBJECTIVES: Inequalities and inequities in health are an important public health concern. In Switzerland, mortality in the general population varies according to the socio-economic position (SEP) of neighbourhoods. We examined the influence of neighbourhood SEP on presentation and outcomes in HIV-positive individuals in the era of combination antiretroviral therapy (cART). METHODS: The neighbourhood SEP of patients followed in the Swiss HIV Cohort Study (SHCS) 2000-2013 was obtained on the basis of 2000 census data on the 50 nearest households (education and occupation of the household head, rent, mean number of persons per room). We used Cox and logistic regression models to examine the probability of late presentation, virologic response to cART, loss to follow-up and death across quintiles of neighbourhood SEP. RESULTS: A total of 4489 SHCS participants were included. Presentation with advanced disease [CD4 cell count <200 cells/μl or AIDS] and with AIDS was less common in neighbourhoods of higher SEP: the age- and sex-adjusted odds ratio (OR) comparing the highest with the lowest quintile of SEP was 0.71 [95% confidence interval (95% CI) 0.58-0.87] and 0.59 (95% CI 0.45-0.77), respectively. An undetectable viral load at 6 months of cART was more common in the highest than in the lowest quintile (OR 1.52; 95% CI 1.14-2.04). Loss to follow-up, mortality and causes of death were not associated with neighbourhood SEP. CONCLUSION: Late presentation was more common, and virologic response to cART less common, in HIV-positive individuals living in neighbourhoods of lower SEP, but in contrast to the general population, there was no clear trend for mortality.
Abstract:
BACKGROUND The risk of Kaposi sarcoma (KS) among HIV-infected persons on antiretroviral therapy (ART) is not well defined in resource-limited settings. We studied KS incidence rates and associated risk factors in children and adults on ART in Southern Africa. METHODS We included patient data of 6 ART programs in Botswana, South Africa, Zambia, and Zimbabwe. We estimated KS incidence rates in patients on ART, measuring time from 30 days after ART initiation to KS diagnosis, last follow-up visit, or death. We assessed risk factors (age, sex, calendar year, WHO stage, tuberculosis, and CD4 counts) using Cox models. FINDINGS We analyzed data from 173,245 patients (61% female, 8% children aged <16 years) who started ART between 2004 and 2010. Five hundred and sixty-four incident cases were diagnosed during 343,927 person-years (pys). The overall KS incidence rate was 164/100,000 pys [95% confidence interval (CI): 151 to 178]. The incidence rate was highest 30-90 days after ART initiation (413/100,000 pys; 95% CI: 342 to 497) and declined thereafter [86/100,000 pys (95% CI: 71 to 105), >2 years after ART initiation]. Male sex [adjusted hazard ratio (HR): 1.34; 95% CI: 1.12 to 1.61], low current CD4 counts (≥500 versus <50 cells/μL, adjusted HR: 0.36; 95% CI: 0.23 to 0.55), and age (5-9 years versus 30-39 years, adjusted HR: 0.20; 95% CI: 0.05 to 0.79) were relevant risk factors for developing KS. INTERPRETATION Despite ART, KS risk in HIV-infected persons in Southern Africa remains high. Early HIV testing and maintaining high CD4 counts are needed to further reduce KS-related morbidity and mortality.
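The overall rate reported above can be reproduced from the stated counts (564 incident cases over 343,927 person-years). A minimal sketch, using a log-scale normal approximation for the Poisson rate (the CI method is an assumption; the authors may have used a different one):

```python
import math

def incidence_rate(events, person_years, per=100_000, z=1.96):
    """Crude incidence rate per `per` person-years with a 95% CI computed
    on the log scale, where SE of log(rate) is 1/sqrt(events)."""
    rate = events / person_years
    se_log = 1 / math.sqrt(events)
    lo = rate * math.exp(-z * se_log)
    hi = rate * math.exp(z * se_log)
    return rate * per, lo * per, hi * per

# Counts reported in the abstract
rate, lo, hi = incidence_rate(564, 343_927)
```

With these inputs the sketch returns roughly 164 (151 to 178) per 100,000 person-years, matching the published figures.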
Abstract:
In several studies of antiretroviral treatment (ART) programs for persons with human immunodeficiency virus infection, investigators have reported a higher rate of loss to follow-up (LTFU) among patients initiating ART in recent years than among patients who initiated ART during earlier time periods. This finding is frequently interpreted as reflecting deterioration of patient retention in the face of increasing patient loads. However, in this paper we demonstrate by simulation that transient gaps in follow-up can lead to bias when standard survival analysis techniques are applied. We created a simulated cohort of patients with different dates of ART initiation. Rates of ART interruption, ART resumption, and mortality were assumed to remain constant over time, but when we applied a standard definition of LTFU, the simulated probability of being classified LTFU at a particular ART duration was substantially higher in recently enrolled cohorts. This suggests that much of the apparent trend towards increased LTFU may be attributed to bias caused by transient interruptions in care. Alternative statistical techniques need to be used when analyzing predictors of LTFU: for example, using "prospective" definitions of LTFU in place of "retrospective" definitions. Similar considerations may apply when analyzing predictors of LTFU from treatment programs for other chronic diseases.
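A stripped-down version of the kind of simulation described above illustrates the bias. Patients interrupt and resume care at constant monthly rates; a patient is retrospectively classified LTFU if the last visit was six or more months before database closure. All rates, the gap definition, and the cohort sizes below are illustrative assumptions, not values from the paper, and mortality is omitted for brevity:

```python
import random

random.seed(1)

def ltfu_by_12_months(months_to_closure, n=5000, p_interrupt=0.03,
                      p_resume=0.10, gap_def=6):
    """Fraction of a cohort classified LTFU within its first 12 months of
    ART when the database closes `months_to_closure` months after ART start.
    LTFU is assigned retrospectively: no visit within `gap_def` months of
    closure, with the LTFU 'date' set to the month of the last visit."""
    ltfu = 0
    for _ in range(n):
        in_care, last_visit = True, 0
        for month in range(1, months_to_closure + 1):
            if in_care:
                last_visit = month                # a visit happens this month
                if random.random() < p_interrupt:
                    in_care = False               # transient interruption
            elif random.random() < p_resume:
                in_care = True                    # returns to care
        if months_to_closure - last_visit >= gap_def and last_visit <= 12:
            ltfu += 1
    return ltfu / n

recent = ltfu_by_12_months(12)  # cohort starting ART 1 year before closure
early = ltfu_by_12_months(48)   # cohort starting ART 4 years before closure
```

Although both cohorts follow identical rates, `recent` comes out several times larger than `early`: in the older cohort, patients in a transient gap are later observed to return, so they are not counted as LTFU at 12 months of ART.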
In the aftermath of medical error: Caring for patients, family, and the healthcare workers involved
Abstract:
Medical errors, in particular those resulting in harm, are a serious situation for patients ("first victims") and the healthcare workers involved ("second victims") and can have long-lasting and distressing consequences. To prevent a second traumatization, appropriate and empathic interaction with all persons involved is essential, in addition to error analysis. Patients share a nearly universal, broad preference for complete disclosure of incidents, regardless of age, gender, or education. This includes the personal, timely and unambiguous disclosure of the adverse event, information relating to the event, its causes and consequences, and an apology and sincere expression of regret. While the majority of healthcare professionals generally support an honest and open disclosure of adverse events, they also face various barriers which impede disclosure (e.g., fear of legal consequences). Despite its essential importance, disclosure of adverse events in practice occurs in ways that are rarely acceptable to patients and their families. The staff involved often experience acute distress and an intense emotional response to the event, which may become chronic and increase the risk of depression, burnout and post-traumatic stress disorders. Communication with peers is vital for coping constructively with harmful errors. Survey studies among healthcare workers show, however, that they often do not receive sufficient individual and institutional support. Healthcare organizations should prepare for medical errors and harmful events and implement a communication plan and a support system that covers the requirements and different needs of patients and the staff involved.
Abstract:
A person suffers severe brain damage in an accident and is from then on barely recognizable. She suffers from memory gaps, and her personality has changed profoundly as a result of the incident. This change is so pronounced that her friends describe her as a "different person". Is this description accurate? How can we know whether the old person has merely changed, or whether the old person has ceased to exist and a genuinely new person has come into being? This is the problem of personal identity. The problem is not merely an exceptional phenomenon; it arises in numerous other situations, for example in cases of dissociative identity disorder, severe Alzheimer's disease, and after certain neurosurgical interventions. This dissertation examines the relevance of the problem of personal identity for criminal law. Drawing on case examples, court decisions, and the philosophical literature on personal identity, it develops a criminal-law solution applicable to various problem cases in the areas of general criminal attribution, errors in criminal law, advance directives, and further problem constellations. Swiss criminal law and criminal procedure are taken into account. The proposed solution implies that in certain particularly difficult cases of doubt about personal identity, the accused must be acquitted in accordance with the principle in dubio pro reo.
Abstract:
Despite its economic growth, Burkina Faso remains one of the poorest countries in the world. In its cities, between 20 and 30% of people under 30 have no genuinely paid work. Many of them live in a reversed intergenerational contract, housed and fed by their parents. This climate of constant precarity and daily uncertainty gives rise to specific forms of fantasy and action. Interviews with young men and women in Bobo-Dioulasso, conducted repeatedly over three years (a longitudinal study), shed light on the conditions that facilitate action in situations of daily uncertainty.
Abstract:
BACKGROUND Pathogenic bacteria are often asymptomatically carried in the nasopharynx. Bacterial carriage can be reduced by vaccination and has been used as an alternative endpoint to clinical disease in randomised controlled trials (RCTs). Vaccine efficacy (VE) is usually calculated as 1 minus a measure of effect. Estimates of vaccine efficacy from cross-sectional carriage data collected in RCTs are usually based on prevalence odds ratios (PORs) and prevalence ratios (PRs), but it is unclear when these should be measured. METHODS We developed dynamic compartmental transmission models simulating RCTs of a vaccine against a carried pathogen to investigate how VE can best be estimated from cross-sectional carriage data, at which time carriage should optimally be assessed, and to which factors this timing is most sensitive. In the models, vaccine could change carriage acquisition and clearance rates (leaky vaccine); values for these effects were explicitly defined (f_acq, 1/f_dur). POR and PR were calculated from model outputs. Models differed in infection source: other participants or external sources unaffected by the trial. Simulations using multiple vaccine doses were compared to empirical data. RESULTS The combined VE against acquisition and duration calculated using POR (VE-hat_acq.dur = (1 - POR) × 100) best estimates the true VE (VE_acq.dur = (1 - f_acq × f_dur) × 100) for leaky vaccines in most scenarios. The mean duration of carriage was the most important factor determining the time until VE-hat_acq.dur first approximates VE_acq.dur: if the mean duration of carriage is 1-1.5 months, up to 4 months are needed; if the mean duration is 2-3 months, up to 8 months are needed. Minor differences were seen between models with different infection sources. In RCTs with shorter intervals between vaccine doses, it takes longer after the last dose until VE-hat_acq.dur approximates VE_acq.dur.
CONCLUSION The timing of sample collection should be considered when interpreting vaccine efficacy against bacterial carriage measured in RCTs.
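The POR-based estimator discussed above, VE-hat = (1 - POR) × 100, is straightforward to compute from a single cross-sectional carriage sample. A minimal sketch with hypothetical counts (not from the study's simulations):

```python
def vaccine_efficacy_from_carriage(carriers_vacc, n_vacc, carriers_ctrl, n_ctrl):
    """VE estimate (1 - POR) x 100 from cross-sectional carriage counts,
    where POR is the prevalence odds ratio of vaccinated vs control."""
    odds_vacc = carriers_vacc / (n_vacc - carriers_vacc)
    odds_ctrl = carriers_ctrl / (n_ctrl - carriers_ctrl)
    por = odds_vacc / odds_ctrl
    return (1 - por) * 100

# Hypothetical counts: 30/500 vaccinated carriers vs 80/500 control carriers
ve = vaccine_efficacy_from_carriage(30, 500, 80, 500)
```

The study's point is that this estimate only approximates the true VE_acq.dur = (1 - f_acq × f_dur) × 100 once carriage has had time to equilibrate after vaccination, so the sampling time matters as much as the arithmetic.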
Abstract:
BACKGROUND Recently, two simple clinical scores were published to predict survival in trauma patients. Both scores may successfully guide major trauma triage, but neither has been independently validated in a hospital setting. METHODS This is a cohort study with 30-day mortality as the primary outcome to validate two new trauma scores, the Mechanism, Glasgow Coma Scale (GCS), Age, and Pressure (MGAP) score and the GCS, Age, and Pressure (GAP) score, using data from the UK Trauma Audit and Research Network. First, we assessed discrimination, using the area under the receiver operating characteristic (ROC) curve, and calibration, comparing mortality rates with those originally published. Second, we calculated sensitivity, specificity, predictive values, and likelihood ratios for prognostic score performance. Third, we propose new cutoffs for the risk categories. RESULTS A total of 79,807 adult (≥16 years) major trauma patients (2000-2010) were included; 5,474 (6.9%) died. Mean (SD) age was 51.5 (22.4) years, median GCS score was 15 (interquartile range, 15-15), and median Injury Severity Score (ISS) was 9 (interquartile range, 9-16). More than 50% of the patients had a low-risk GAP or MGAP score (1% mortality). With regard to discrimination, areas under the ROC curve were 87.2% for the GAP score (95% confidence interval, 86.7-87.7) and 86.8% for the MGAP score (95% confidence interval, 86.2-87.3). With regard to calibration, 2,390 (3.3%), 1,900 (28.5%), and 1,184 (72.2%) patients died in the low-, medium-, and high-risk GAP categories, respectively. In the low- and medium-risk groups, these rates were almost double those previously published. For MGAP, 1,861 (2.8%), 1,455 (15.2%), and 2,158 (58.6%) patients died in the low-, medium-, and high-risk categories, consonant with the results originally published. Reclassifying score point cutoffs improved likelihood ratios, sensitivity and specificity, as well as areas under the ROC curve.
CONCLUSION We found both scores to be valid triage tools for stratifying emergency department patients according to their risk of death. MGAP calibrated better, but GAP slightly improved discrimination. The newly proposed cutoffs differentiate risk classification better and may therefore facilitate hospital resource allocation. LEVEL OF EVIDENCE Prognostic study, level II.
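The performance measures named in this abstract (sensitivity, specificity, likelihood ratios) all derive from a 2×2 table of the dichotomised score against 30-day mortality. A minimal sketch with hypothetical counts (not from the UK Trauma Audit and Research Network data):

```python
def triage_performance(tp, fp, fn, tn):
    """Sensitivity, specificity and likelihood ratios from a 2x2 table of a
    binary 'high-risk' classification against the outcome (death)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec)   # LR+: how much a positive result raises the odds
    lr_neg = (1 - sens) / spec   # LR-: how much a negative result lowers the odds
    return sens, spec, lr_pos, lr_neg

# Hypothetical counts: 500 deaths and 9,500 survivors, 1,300 flagged high risk
sens, spec, lr_pos, lr_neg = triage_performance(tp=400, fp=900, fn=100, tn=8600)
```

Shifting the score cutoff moves counts between the columns of this table, which is exactly how the reclassified cutoffs in the study trade sensitivity against specificity.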
Abstract:
Suppression of cyclic activity in cattle is often desired in alpine farming and for feedlot cattle not intended for breeding. A cattle-specific anti-GnRH vaccination (Bopriva, Zoetis Australia Ltd., West Ryde, Australia) is approved for use in heifers and bulls in New Zealand, Australia, Mexico, Brazil, Argentina, Turkey, and Peru. Eleven healthy, cyclic Swiss Fleckvieh cows were included in the study and vaccinated twice with Bopriva 4wk apart. Injection site, rectal body temperature, and heart and respiratory rates were recorded before and 3d following each vaccination. Blood samples were taken weekly for progesterone and estrogen analysis and to determine GnRH antibody titer. Ovaries were examined weekly, using ultrasound to count the number of follicles and identify the presence of a corpus luteum. Thirty weeks after the first vaccination, the cows were subjected to a controlled internal drug-releasing device-based Select-Synch treatment. The GnRH antibody titers increased after the second vaccination and peaked 2wk later. Estrogen levels were not influenced by vaccination, and progesterone level decreased in 7 of 11 cows up to 3wk after the second vaccination and remained low for 10 to 15wk following the second vaccination. The number of class I follicles (diameter ≤5mm) was not influenced by vaccination, whereas the number of class II follicles (diameter 6-9mm) decreased between 7 and 16wk after the first vaccination. Class III follicles (diameter >9mm) were totally absent during this period in most cows. The median period until recurrence of class III follicles was 78d from the day of the second vaccination (95% confidence interval: 60-92d). After vaccination, all cows showed swelling and pain at the injection site, and these reactions subsided within 2wk. Body temperature and heart and respiratory rates increased after the first and second vaccinations and returned to normal values within 2d of each vaccination. 
The cows in our study were not observed to display estrus behavior until 30wk after the first vaccination. Therefore, a Select-Synch protocol was initiated at that time. Ten cows became pregnant after the first insemination (the remaining cow was reinseminated once before pregnancy was confirmed). Bopriva induced reliable and reversible suppression of reproductive cyclicity for more than 2mo. The best practical predictor of the length of the anestrus period was the absence of class III follicles.
Abstract:
BACKGROUND The Cochrane risk of bias (RoB) tool has been widely embraced by the systematic review community, but several studies have reported that its reliability is low. We aim to investigate whether training of raters, including objective and standardized instructions on how to assess risk of bias, can improve the reliability of this tool. We describe the methods that will be used in this investigation and present an intensive standardized training package for risk of bias assessment that could be used by contributors to the Cochrane Collaboration and other reviewers. METHODS/DESIGN This is a pilot study. We will first perform a systematic literature review to identify randomized clinical trials (RCTs) that will be used for risk of bias assessment. Using the identified RCTs, we will then do a randomized experiment, where raters will be allocated to two different training schemes: minimal training and intensive standardized training. We will calculate the chance-corrected weighted Kappa with 95% confidence intervals to quantify within- and between-group Kappa agreement for each of the domains of the risk of bias tool. To calculate between-group Kappa agreement, we will use risk of bias assessments from pairs of raters after resolution of disagreements. Between-group Kappa agreement will quantify the agreement between the risk of bias assessment of raters in the training groups and the risk of bias assessment of experienced raters. To compare agreement of raters under different training conditions, we will calculate differences between Kappa values with 95% confidence intervals. DISCUSSION This study will investigate whether the reliability of the risk of bias tool can be improved by training raters using standardized instructions for risk of bias assessment. One group of inexperienced raters will receive intensive training on risk of bias assessment and the other will receive minimal training. 
By including a control group with minimal training, we will attempt to mimic what many review authors commonly have to do, that is, conduct risk of bias assessments in RCTs without much formal training or standardized instructions. If our results indicate that intensive standardized training does improve the reliability of the RoB tool, our study is likely to help improve the quality of risk of bias assessments, which is a central component of evidence synthesis.
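The chance-corrected agreement statistic at the heart of this design can be sketched for the unweighted case (the study itself specifies a weighted Kappa with confidence intervals; the toy ratings below are invented for illustration):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Unweighted Cohen's kappa: observed agreement corrected for the
    agreement expected by chance from each rater's marginal frequencies."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    p_obs = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    categories = set(freq_a) | set(freq_b)
    p_exp = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)

# Two raters judging 10 trials as low/unclear/high risk of bias (toy data)
a = ["low", "low", "high", "unclear", "low", "high", "low", "unclear", "high", "low"]
b = ["low", "high", "high", "unclear", "low", "high", "low", "low", "high", "low"]
kappa = cohens_kappa(a, b)
```

A weighted Kappa would additionally penalise low-vs-high disagreements more than low-vs-unclear ones by assigning partial credit in `p_obs` and `p_exp`; the chance-correction logic is the same.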