Abstract:
Juvenile dermatomyositis (JDM) is an immune-mediated inflammatory disease affecting the microvasculature of skin and muscle. CD4+CD25+FOXP3+ regulatory T cells (Tregs) are key regulators of immune homeostasis. A role for Tregs in JDM pathogenesis has not yet been established. Here, we explored Treg presence and function in peripheral blood and muscle of JDM patients. We analyzed the number, phenotype and function of Tregs in blood from JDM patients by flow cytometry and in vitro suppression assays, in comparison to healthy controls and disease controls (Duchenne's muscular dystrophy). The presence of Tregs in muscle was analyzed by immunohistochemistry. Overall, Treg percentages in peripheral blood of JDM patients were similar to those of both control groups. Muscle biopsies of new-onset JDM patients showed increased T-cell infiltration compared to Duchenne's muscular dystrophy. In both JDM and Duchenne's muscular dystrophy, the proportion of FOXP3+ T cells in muscle was increased compared to JDM peripheral blood. Interestingly, JDM is not a self-remitting disease, suggesting that the high proportion of Tregs in inflamed muscle does not suppress inflammation. In line with this, peripheral blood Tregs of active JDM patients were less capable of suppressing effector T-cell activation in vitro, compared to Tregs of JDM patients in clinical remission. These data show a functional impairment of Tregs in a proportion of patients with active disease, and suggest a regulatory role for Tregs in JDM inflammation.
Abstract:
The following is a brief statement of the 2003 European Society of Hypertension (ESH)-European Society of Cardiology (ESC) guidelines for the management of arterial hypertension. The continuous relationship between the level of blood pressure and cardiovascular risk makes the definition of hypertension arbitrary. Since risk factors cluster in hypertensive individuals, risk stratification should be made, and decisions about management should not be based on blood pressure alone, but also according to the presence or absence of other risk factors, target organ damage, diabetes, and cardiovascular or renal damage, as well as on other aspects of the patient's personal, medical and social situation. Blood pressure values measured in the doctor's office or the clinic should commonly be used as the reference. Ambulatory blood pressure monitoring may have clinical value when considerable variability of office blood pressure is found over the same or different visits, high office blood pressure is measured in subjects otherwise at low global cardiovascular risk, there is marked discrepancy between blood pressure values measured in the office and at home, resistance to drug treatment is suspected, or research is involved. Secondary hypertension should always be investigated. The primary goal of treatment of patients with high blood pressure is to achieve the maximum reduction in long-term total risk of cardiovascular morbidity and mortality. This requires treatment of all the reversible factors identified, including smoking, dyslipidemia, or diabetes, and the appropriate management of associated clinical conditions, as well as treatment of the raised blood pressure per se. On the basis of current evidence from trials, it can be recommended that blood pressure, both systolic and diastolic, be intensively lowered at least below 140/90 mmHg, and to definitely lower values if tolerated, in all hypertensive patients, and below 130/80 mmHg in diabetics. Lifestyle measures should be instituted whenever appropriate in all patients, including subjects with high-normal blood pressure and patients who require drug treatment. Their purpose is to lower blood pressure and to control other risk factors and clinical conditions present. In most, if not all, hypertensive patients, therapy should be started gradually, and target blood pressure achieved progressively over several weeks. To reach target blood pressure, it is likely that a large proportion of patients will require combination therapy with more than one agent. The main benefits of antihypertensive therapy are due to lowering of blood pressure per se. There is also evidence that specific drug classes may differ in some effects or in special groups of patients. The choice of drugs will be influenced by many factors, including the patient's previous experience with antihypertensive agents, cost of drugs, risk profile, presence or absence of target organ damage, clinical cardiovascular or renal disease or diabetes, and the patient's preference.
Abstract:
OBJECTIVE: To evaluate the public health impact of statin prescribing strategies based on the Justification for the Use of Statins in Primary Prevention: An Intervention Trial Evaluating Rosuvastatin Study (JUPITER). METHODS: We studied 2268 adults aged 35-75 years without cardiovascular disease in a population-based study in Switzerland in 2003-2006. We assessed eligibility for statins according to the Adult Treatment Panel III (ATPIII) guidelines, and by adding "strict" (hs-CRP ≥2.0 mg/L and LDL-cholesterol <3.4 mmol/L) and "extended" (hs-CRP ≥2.0 mg/L alone) JUPITER-like criteria. We estimated the proportion of CHD deaths potentially prevented over 10 years in the Swiss population. RESULTS: Fifteen percent were already taking statins, 42% were eligible by ATPIII guidelines, 53% by adding "strict", and 62% by adding "extended" criteria, with a total of 19% newly eligible. The number needed to treat with statins to avoid one CHD death over 10 years was 38 for ATPIII, 84 for "strict" and 92 for "extended" JUPITER-like criteria. ATPIII would prevent 17% of CHD deaths, compared with 20% for ATPIII + "strict" and 23% for ATPIII + "extended" criteria (+6%). CONCLUSION: Implementing JUPITER-like strategies would make statin prescribing for primary prevention more common and less efficient than it is with current guidelines.
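A side note on the number-needed-to-treat figures above: NNT is the reciprocal of the absolute risk reduction over the same time horizon. The sketch below is purely illustrative; the abstract reports the resulting NNTs (38, 84, 92) but not the underlying absolute risks, so the inputs are hypothetical.

```python
def number_needed_to_treat(baseline_10y_risk: float, relative_risk_reduction: float) -> float:
    """NNT = 1 / absolute risk reduction (both risks over the same 10-year horizon)."""
    absolute_risk_reduction = baseline_10y_risk * relative_risk_reduction
    return 1.0 / absolute_risk_reduction

# Hypothetical example: a 5% 10-year CHD death risk and a 50% relative risk
# reduction give an NNT of 40, i.e. the same order of magnitude as the ATPIII
# figure reported above.
print(number_needed_to_treat(0.05, 0.50))  # -> 40.0
```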
Abstract:
BACKGROUND: The outcome of diffuse large B-cell lymphoma has been substantially improved by the addition of the anti-CD20 monoclonal antibody rituximab to chemotherapy regimens. We aimed to assess, in patients aged 18-59 years, the potential survival benefit provided by a dose-intensive immunochemotherapy regimen plus rituximab compared with standard treatment plus rituximab. METHODS: We did an open-label randomised trial comparing dose-intensive rituximab, doxorubicin, cyclophosphamide, vindesine, bleomycin, and prednisone (R-ACVBP) with subsequent consolidation versus standard rituximab, doxorubicin, cyclophosphamide, vincristine, and prednisone (R-CHOP). Random assignment was done with a computer-assisted randomisation-allocation sequence with a block size of four. Patients were aged 18-59 years with untreated diffuse large B-cell lymphoma and an age-adjusted International Prognostic Index equal to 1. Our primary endpoint was event-free survival. Our analyses of efficacy and safety were of the intention-to-treat population. This study is registered with ClinicalTrials.gov, number NCT00140595. FINDINGS: One patient withdrew consent before treatment and 54 did not complete treatment. After a median follow-up of 44 months, our 3-year estimate of event-free survival was 81% (95% CI 75-86) in the R-ACVBP group and 67% (59-73) in the R-CHOP group (hazard ratio [HR] 0·56, 95% CI 0·38-0·83; p=0·0035). 3-year estimates of progression-free survival (87% [95% CI 81-91] vs 73% [66-79]; HR 0·48 [0·30-0·76]; p=0·0015) and overall survival (92% [87-95] vs 84% [77-89]; HR 0·44 [0·28-0·81]; p=0·0071) were also increased in the R-ACVBP group. 82 (42%) of 196 patients in the R-ACVBP group experienced a serious adverse event compared with 28 (15%) of 183 in the R-CHOP group. Grade 3-4 haematological toxic effects were more common in the R-ACVBP group, with a higher proportion of patients experiencing a febrile neutropenic episode (38% [75 of 196] vs 9% [16 of 183]). INTERPRETATION: Compared with standard R-CHOP, intensified immunochemotherapy with R-ACVBP significantly improves survival of patients aged 18-59 years with diffuse large B-cell lymphoma and low-intermediate risk according to the International Prognostic Index. Haematological toxic effects of the intensive regimen were raised but manageable. FUNDING: Groupe d'Etudes des Lymphomes de l'Adulte and Amgen.
Abstract:
Aim We examined whether species occurrences are primarily limited by physiological tolerance in the abiotically more stressful end of climatic gradients (the asymmetric abiotic stress limitation (AASL) hypothesis) and the geographical predictions of this hypothesis: abiotic stress mainly determines upper-latitudinal and upper-altitudinal species range limits, and the importance of abiotic stress for these range limits increases the further northwards and upwards a species occurs. Location Europe and the Swiss Alps. Methods The AASL hypothesis predicts that species have skewed responses to climatic gradients, with a steep decline towards the more stressful conditions. Based on presence-absence data we examined the shape of plant species responses (measured as probability of occurrence) along three climatic gradients across latitudes in Europe (1577 species) and altitudes in the Swiss Alps (284 species) using Huisman-Olff-Fresco, generalized linear and generalized additive models. Results We found that almost half of the species from Europe and one-third from the Swiss Alps showed responses consistent with the predictions of the AASL hypothesis. Cold temperatures and a short growing season seemed to determine the upper-latitudinal and upper-altitudinal range limits of up to one-third of the species, while drought provided an important constraint at lower-latitudinal range limits for up to one-fifth of the species. We found a biome-dependent influence of abiotic stress and no clear support for abiotic stress as a stronger upper range-limit determinant for species with higher latitudinal and altitudinal distributions. However, the overall influence of climate as a range-limit determinant increased with latitude. Main conclusions Our results support the AASL hypothesis for almost half of the studied species, and suggest that temperature-related stress controls the upper-latitudinal and upper-altitudinal range limits of a large proportion of these species, while other factors including drought stress may be important at the lower range limits.
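As a rough illustration of how such response curves are fitted to presence-absence data, the sketch below uses a logistic regression with linear and quadratic terms along a simulated temperature gradient. All data are made up, and the quadratic model only produces symmetric unimodal curves; testing the AASL prediction of skewed responses requires the Huisman-Olff-Fresco or generalized additive models used in the study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
temperature = rng.uniform(-5, 20, size=500)                      # hypothetical climatic gradient
p_true = 1 / (1 + np.exp(-(2 - 0.05 * (temperature - 8) ** 2)))  # simulated occurrence probability
presence = rng.binomial(1, p_true)                               # presence-absence observations

# Logistic regression with linear and quadratic temperature terms
X = sm.add_constant(np.column_stack([temperature, temperature ** 2]))
fit = sm.GLM(presence, X, family=sm.families.Binomial()).fit()
print(fit.params)  # intercept, linear and quadratic coefficients
```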
Abstract:
BACKGROUND: Frailty is an indicator of health status in old age. Its frequency has been described mainly for North America; comparable data from other countries are lacking. Here we report on the prevalence of frailty in 10 European countries included in a population-based survey. METHODS: Cross-sectional analysis of 18,227 randomly selected community-dwelling individuals 50 years of age and older, enrolled in the Survey of Health, Aging and Retirement in Europe (SHARE) in 2004. Complete data for assessing a frailty phenotype (exhaustion, shrinking, weakness, slowness, and low physical activity) were available for 16,584 participants. Prevalences of frailty and prefrailty were estimated for individuals 50-64 years and 65 years of age and older from each country. The latter group was analyzed further after excluding disabled individuals. We estimated country effects in this subset using multivariate logistic regression models, controlling first for age and gender, and then for demographics and education. RESULTS: The proportion of participants with frailty (three to five criteria) or prefrailty (one to two criteria) was higher in southern than in northern Europe. International differences in the prevalences of frailty and prefrailty for the 65 years and older group persisted after excluding the disabled. Demographic characteristics did not account for international differences; however, education was associated with frailty. Controlling for education, age and gender diminished the effects of residing in Italy and Spain. CONCLUSIONS: A higher prevalence of frailty in southern countries is consistent with previous findings of a north-south gradient for other health indicators in SHARE. Our data suggest that socioeconomic factors like education contribute to these differences in frailty and prefrailty.
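A minimal sketch of the frailty phenotype classification described above. The five criteria names are taken from the abstract; the SHARE-specific cut-offs used to dichotomize each criterion are not reproduced here, so the input is assumed to be already dichotomized.

```python
CRITERIA = ("exhaustion", "shrinking", "weakness", "slowness", "low_physical_activity")

def frailty_status(criteria_met: dict) -> str:
    """Frail = 3-5 criteria, prefrail = 1-2, non-frail = 0."""
    n = sum(bool(criteria_met.get(c, False)) for c in CRITERIA)
    if n >= 3:
        return "frail"
    if n >= 1:
        return "prefrail"
    return "non-frail"

# Example: a participant meeting two criteria is classified as prefrail
print(frailty_status({"exhaustion": True, "slowness": True}))  # -> "prefrail"
```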
Abstract:
Background: The DEFUSE (n=74) and EPITHET (n=101) studies have in common that a baseline MRI was obtained prior to treatment (tPA in DEFUSE; tPA or placebo in EPITHET) in the 3-6 hour time window. There were, however, important methodological differences between the studies. A standardized reanalysis of pooled data was undertaken to determine the effect of these differences on baseline characteristics and study outcomes. Methods: To standardize the studies: 1) the DWI and PWI source images were reprocessed and segmented using automated image processing software (RAPID); 2) patients were categorized according to their baseline MRI profile as either Target Mismatch (PWI/DWI ratio ≥1.8 and an absolute mismatch ≥15 mL), Malignant (DWI or severely delayed PWI lesion ≥100 mL), or No Mismatch; 3) favorable clinical response was defined as an NIHSS score of 0-1 or a ≥8-point improvement on the NIHSS at day 90. Results: Prior to standardization there was no difference in the proportion of Target Mismatch patients between EPITHET and DEFUSE (54% vs 49%, p=0.6), but the EPITHET study had more patients with the Malignant profile than DEFUSE (35% vs 9%, p<0.01) and fewer patients with No Mismatch (11% vs 42%, p<0.01). These differences in baseline MRI profiles between EPITHET and DEFUSE were largely eliminated by standardized processing of PWI and DWI images with RAPID software (Target Mismatch 49% vs 48%; Malignant 15% vs 8%; No Mismatch 36% vs 25%; p=NS for all comparisons). Reperfusion was strongly associated with a favorable clinical response in mismatch patients (figure). This relationship was not affected by the standardization procedures (pooled odds ratio of 8.8 based on original data and 6.6 based on standardized data). Conclusion: Standardization of image analysis procedures in acute stroke is important because non-standardized techniques introduce significant variability in DWI and PWI imaging characteristics. Despite methodological differences, the DEFUSE and EPITHET studies show a consistent and robust association between reperfusion and favorable clinical response in Target Mismatch patients regardless of standardization. These data support an RCT of iv tPA in the 3-6 hour time window for Target Mismatch patients identified using RAPID.
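The baseline MRI profiles can be summarized as a small rule set. The sketch below is an illustration only: the ratio, absolute-mismatch and 100 mL thresholds follow the definitions reconstructed from the abstract, whereas the precedence of the Malignant profile over Target Mismatch and the exact Tmax thresholds for the lesion volumes are assumptions not stated in this text.

```python
def mri_profile(dwi_ml: float, pwi_ml: float, severe_delay_ml: float) -> str:
    """Assign a baseline MRI profile from lesion volumes (all in mL).

    dwi_ml          -- DWI lesion volume
    pwi_ml          -- perfusion lesion volume used for the mismatch ratio
    severe_delay_ml -- severely delayed perfusion lesion volume
    """
    # Assumed precedence: the Malignant profile is checked first.
    if dwi_ml >= 100 or severe_delay_ml >= 100:
        return "Malignant"
    ratio = pwi_ml / dwi_ml if dwi_ml > 0 else float("inf")
    if ratio >= 1.8 and (pwi_ml - dwi_ml) >= 15:
        return "Target Mismatch"
    return "No Mismatch"

# Example: a small DWI lesion with a much larger perfusion lesion
print(mri_profile(dwi_ml=20, pwi_ml=80, severe_delay_ml=10))  # -> "Target Mismatch"
```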
Abstract:
1. Costs of reproduction lie at the core of basic ecological and evolutionary theories, and their existence is commonly invoked to explain adaptive processes. Despite their sheer importance, empirical evidence for the existence and quantification of costs of reproduction in tree species comes mostly from correlational studies, while more comprehensive approaches remain missing. Manipulative experiments are a preferred approach to studying costs of reproduction, as they allow controlling for otherwise inherent confounding factors such as size or genetic background. 2. Here, we conducted a manipulative experiment in a Pinus halepensis common garden, removing developing cones from a group of trees and comparing growth and reproduction after treatment with a control group. We also estimated phenotypic and genetic correlations between reproductive and vegetative traits. 3. Manipulated trees grew slightly more than control trees just after treatment, with only a transient, marginally non-significant difference. By contrast, larger differences were observed for the number of female cones initiated 1 year after treatment, with 70% more cones in the manipulated group. Phenotypic and genetic correlations showed that smaller trees invested a higher proportion of their resources in reproduction compared with larger trees, which could be interpreted as indirect evidence of costs of reproduction. 4. Synthesis. This research showed a high impact of current reproduction on reproductive potential, even when it was not significant for vegetative growth. This has strong implications for how we understand adaptive strategies in forest trees and should encourage further interest in their still poorly known reproductive life-history traits.
Abstract:
OBJECTIVES: Persons from sub-Saharan Africa (SSA) are increasingly enrolled in the Swiss HIV Cohort Study (SHCS). Cohorts from other European countries have shown higher rates of viral failure among their SSA participants. We analyzed long-term outcomes of SSA versus North Western European participants. DESIGN: We analyzed data of the SHCS, a nationwide prospective cohort study of HIV-infected adults at 7 sites in Switzerland. METHODS: SSA and North Western European participants were included if their first treatment combination consisted of at least 3 antiretroviral drugs (cART), if they had at least 1 follow-up visit, did not report active injecting drug use, and did not start cART with CD4 counts >200 cells per microliter during pregnancy. Early viral response, CD4 cell recovery, viral failure, adherence, discontinuation from the SHCS, new AIDS-defining events, and survival were analyzed using linear regression and Cox proportional hazards models. RESULTS: The proportion of participants from SSA within the SHCS increased from 2.6% (<1995) to 20.8% (2005-2009). Of 4656 included participants, 808 (17.4%) were from SSA. Early viral response (6 months) and the rate of viral failure in an intent-to-stay-on-cART approach were similar. However, SSA participants had a higher risk of viral failure on cART (adjusted hazard ratio: 2.03, 95% confidence interval: 1.50 to 2.75). Self-reported adherence was inferior among SSA participants. There was no increase in AIDS-defining events or mortality in SSA participants. CONCLUSIONS: Increased attention must be given to factors negatively influencing adherence to cART in participants from SSA to ensure equal long-term results on cART.
Abstract:
STUDY DESIGN: Retrospective radiologic study on a prospective patient cohort. OBJECTIVE: To devise a qualitative grading of lumbar spinal stenosis (LSS) and to study its reliability and clinical relevance. SUMMARY OF BACKGROUND DATA: Radiologic stenosis is commonly assessed by measuring the dural sac cross-sectional area (DSCA). Great variation is observed, though, in the surface areas recorded in symptomatic and asymptomatic individuals. METHODS: We describe a 7-grade classification based on the morphology of the dural sac as observed on T2 axial magnetic resonance images, based on the rootlet/cerebrospinal fluid ratio. Grades A and B show cerebrospinal fluid presence while grades C and D show none at all. The grading was applied to magnetic resonance images of 95 subjects divided into 3 groups as follows: 37 symptomatic LSS surgically treated patients; 31 symptomatic LSS conservatively treated patients (average follow-up, 2.5 and 3.1 years); and 27 low back pain (LBP) sufferers. DSCA was also digitally measured. We studied intra- and interobserver reliability, distribution of grades, the relation between morphologic grading and DSCA, as well as the relation between grades, DSCA, and the Oswestry Disability Index. RESULTS: Average intra- and interobserver agreement was substantial and moderate, respectively (k = 0.65 and 0.44), whereas both were substantial for physicians working in the study-originating unit. Surgical patients had the smallest DSCA. A larger proportion of C and D grades was observed in the surgical group. Surface measurements resulted in overdiagnosis of stenosis in 35 patients and underdiagnosis in 12. No relation could be found between stenosis grade or DSCA and baseline Oswestry Disability Index or surgical result. C and D grade patients were more likely to fail conservative treatment, whereas grade A and B patients were less likely to warrant surgery. CONCLUSION: The grading defines stenosis in different subjects than surface measurements alone. Since it mainly considers impingement of neural tissue, it might be a more appropriate clinical and research tool, as well as carrying prognostic value.
Abstract:
Genetic variants influence the risk of developing certain diseases or give rise to differences in drug response. Recent progress in cost-effective, high-throughput genome-wide techniques, such as microarrays measuring Single Nucleotide Polymorphisms (SNPs), has facilitated genotyping of large clinical and population cohorts. Combining the massive genotypic data with measurements of phenotypic traits allows for the determination of genetic differences that explain, at least in part, the phenotypic variation within a population. So far, models combining the most significant variants can only explain a small fraction of the variance, indicating the limitations of current models. In particular, researchers have only begun to address the possibility of interactions between genotypes and the environment. Elucidating the contributions of such interactions is a difficult task because of the large number of genetic as well as possible environmental factors. In this thesis, I worked on several projects within this context. My first and main project was the identification of possible SNP-environment interactions, where the phenotypes were serum lipid levels of patients from the Swiss HIV Cohort Study (SHCS) treated with antiretroviral therapy. Here the genotypes consisted of a limited set of SNPs in candidate genes relevant for lipid transport and metabolism. The environmental variables were the specific combinations of drugs given to each patient over the treatment period. My work explored bioinformatic and statistical approaches to relate patients' lipid responses to these SNPs, drugs and, importantly, their interactions. The goal of this project was to improve our understanding and to explore the possibility of predicting dyslipidemia, a well-known adverse drug reaction of antiretroviral therapy. Specifically, I quantified how much of the variance in lipid profiles could be explained by the host genetic variants, the administered drugs and SNP-drug interactions, and assessed the predictive power of these features on lipid responses. Using cross-validation stratified by patients, we could not validate our hypothesis that models that select a subset of SNP-drug interactions in a principled way have better predictive power than control models using "random" subsets. Nevertheless, all models tested that contained SNP and/or drug terms exhibited significant predictive power (as compared to a random predictor) and explained a sizable proportion of variance in the patient-stratified cross-validation context. Importantly, the model containing stepwise-selected SNP terms showed a higher capacity to predict triglyceride levels than a model containing randomly selected SNPs. Dyslipidemia is a complex trait for which many factors remain to be discovered, and are thus missing from the data, possibly explaining the limitations of our analysis. In particular, the interactions of drugs with SNPs selected from the set of candidate genes likely have small effect sizes, which we were unable to detect in a sample of the present size (<800 patients). In the second part of my thesis, I performed genome-wide association studies within the Cohorte Lausannoise (CoLaus). I have been involved in several international projects to identify SNPs that are associated with various traits, such as serum calcium, body mass index, two-hour glucose levels, as well as metabolic syndrome and its components. These phenotypes are all related to major human health issues, such as cardiovascular disease.
I applied statistical methods to detect new variants associated with these phenotypes, contributing to the identification of new genetic loci that may lead to new insights into the genetic basis of these traits. This kind of research will lead to a better understanding of the mechanisms underlying these pathologies, a better evaluation of disease risk, the identification of new therapeutic leads and may ultimately lead to the realization of "personalized" medicine.
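As an illustration of the patient-stratified cross-validation mentioned above, the sketch below (not the thesis code) uses scikit-learn's GroupKFold so that observations from the same patient never appear in both the training and the test folds. The data, dimensions and the plain linear model are hypothetical stand-ins for the SNP, drug and interaction terms and the lipid outcome.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)
n_obs, n_features = 800, 20
X = rng.normal(size=(n_obs, n_features))        # e.g. SNP, drug and SNP-drug interaction terms
y = rng.normal(size=n_obs)                      # e.g. triglyceride levels
patient_id = rng.integers(0, 200, size=n_obs)   # several measurements per patient

# Folds are split by patient, so the model is always evaluated on unseen patients.
scores = cross_val_score(LinearRegression(), X, y,
                         groups=patient_id,
                         cv=GroupKFold(n_splits=5),
                         scoring="r2")
print(scores.mean())
```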
Abstract:
OBJECTIVES: Many patients may believe that HIV screening is included in routine preoperative work-ups. We examined what proportion of patients undergoing preoperative blood testing believed that they had been tested for HIV. METHODS: All patients hospitalized for elective orthopaedic surgery between January and December 2007 were contacted and asked to participate in a 15-min computer-assisted telephone interview (n = 1330). The primary outcome was to determine which preoperative tests patients believed had been performed, from a choice of glucose, clotting, HIV serology and cholesterol, and what percentage of patients interpreted the lack of result communication as a normal or negative test. The proportion of patients agreeable to HIV screening prior to future surgery was also determined. RESULTS: A total of 991 patients (75%) completed the questionnaire. Three hundred and seventy-five of these 991 patients (38%) incorrectly believed that they had been tested for HIV preoperatively. Younger patients were significantly more likely to believe that an HIV test had been performed (mean age 46 vs. 50 years for those who did not believe that an HIV test had been performed; P < 0.0001). Of the patients who believed that a test had been performed but received no result, 96% interpreted the lack of a result as a negative HIV test. Over 80% of patients surveyed stated that they would agree to routine HIV screening prior to future surgery. A higher acceptance rate was associated with younger age (mean age 47 years for those who would agree vs. 56 years for those who would not; P < 0.0001) and male sex (P < 0.009). CONCLUSIONS: Many patients believe that a preoperative blood test routinely screens for HIV. The incorrect assumption that a lack of result communication indicates a negative test may contribute to delays in HIV diagnoses.
Abstract:
OBJECTIVE: Critically ill patients are at high risk of malnutrition. Insufficient nutritional support still remains a widespread problem despite guidelines. The aim of this study was to measure the clinical impact of a two-step interdisciplinary quality nutrition program. DESIGN: Prospective interventional study over three periods (A, baseline; B and C, intervention periods). SETTING: Mixed intensive care unit within a university hospital. PATIENTS: Five hundred seventy-two patients (age 59 ± 17 yrs) requiring >72 hrs of intensive care unit treatment. INTERVENTION: Two-step quality program: 1) bottom-up implementation of a feeding guideline; and 2) additional presence of an intensive care unit dietitian. The nutrition protocol was based on the European guidelines. MEASUREMENTS AND MAIN RESULTS: Anthropometric data, intensive care unit severity scores, energy delivery, cumulated energy balance (daily, day 7, and discharge), feeding route (enteral, parenteral, combined, none-oral), length of intensive care unit and hospital stay, and mortality were collected. Altogether 5800 intensive care unit days were analyzed. Patients in period A were healthier, with lower Simplified Acute Physiology Score values and a lower proportion of "rapidly fatal" McCabe scores. Energy delivery and balance increased gradually: the impact was particularly marked on the cumulated energy deficit at day 7, which improved from -5870 kcal to -3950 kcal (p < .001). Feeding technique changed significantly, with a progressive increase in days with nutrition therapy (A: 59% of days, B: 69%, C: 71%, p < .001); use of enteral nutrition increased from A to B (stable in C), and days on combined and parenteral nutrition increased progressively. Oral energy intakes were low (mean 385 kcal/day, 6 kcal/kg/day). Hospital mortality increased with severity of condition in periods B and C. CONCLUSION: A bottom-up protocol improved nutritional support. The presence of the intensive care unit dietitian provided significant additional progress, related to early introduction and route of feeding, and achieved an overall better early energy balance.
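A minimal sketch of the cumulated energy balance referred to above: the daily difference between delivered and prescribed energy, summed over the first days of the intensive care unit stay. The daily values below are hypothetical; the abstract reports only the cumulated day-7 deficits.

```python
def cumulated_energy_balance(delivered_kcal, prescribed_kcal, up_to_day=7):
    """Sum of daily (delivered - prescribed) energy over the first `up_to_day` days."""
    return sum(d - p for d, p in zip(delivered_kcal[:up_to_day], prescribed_kcal[:up_to_day]))

# Hypothetical week: 1200 kcal delivered against a 2000 kcal daily target
print(cumulated_energy_balance([1200] * 7, [2000] * 7))  # -> -5600 kcal
```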
Abstract:
Sex determination can be purely genetic (as in mammals and birds), purely environmental (as in many reptiles), or genetic but reversible by environmental factors during a sensitive period in life, as in many fish and amphibians (Wallace et al. 1999; Baroiller et al. 2009a; Stelkens & Wedekind 2010). Such environmental sex reversal (ESR) can be induced, for example, by temperature changes or by exposure to hormone-active substances. ESR has long been recognized as a means to produce more profitable single-sex cultures in fish farms (Cnaani & Levavi-Sivan 2009), but we know very little about its prevalence in the wild. Obviously, induced feminization or masculinization may immediately distort population sex ratios, and distorted sex ratios are indeed reported from some amphibian and fish populations (Olsen et al. 2006; Alho et al. 2008; Brykov et al. 2008). However, sex ratios can also be skewed by, for example, segregation distorters or sex-specific mortality. Demonstrating ESR in the wild therefore requires the identification of sex-linked genetic markers (in the absence of heteromorphic sex chromosomes) followed by comparison of genotypes and phenotypes, or experimental crosses with individuals that seem sex reversed, followed by sexing of offspring after rearing under non-ESR conditions and at low mortality. In this issue, Alho et al. (2010) investigate the role of ESR in the common frog (Rana temporaria), in a population that has a distorted adult sex ratio. They developed new sex-linked microsatellite markers and tested wild-caught male and female adults for potential mismatches between phenotype and genotype. They found a significant proportion of phenotypic males with a female genotype, suggesting environmental masculinization, here with a prevalence of 9%. The authors then tested whether XX males naturally reproduce with XX females. They collected egg clutches and found that some indeed had a primary sex ratio of 100% daughters. Other clutches seemed to result from multi-male fertilizations in which at least one male had the female genotype. These results suggest that sex-reversed individuals affect the sex ratio in the following generation. But how relevant is ESR if its prevalence is rather low, and what are the implications of successful reproduction of sex-reversed individuals in the wild?
Abstract:
Introduction: Within the framework of the «Programme cantonal Diabète», we aimed to collect data to 1) describe the population of diabetic patients in the canton of Vaud, and 2) assess the quality of their care. Methods: A cross-sectional study was conducted in the fall of 2011. Out of 140 randomly selected community pharmacies registered in the canton of Vaud, 56 agreed to participate in patient recruitment. Non-institutionalized adult diabetic patients (disease duration >12 months) visiting a pharmacy with a prescription for oral anti-diabetic drugs, insulin, glycemic strips or a glucose meter were eligible. Patients not residing in the canton of Vaud, not speaking and understanding French well enough, or presenting obvious cognitive impairment, as well as women with gestational diabetes, were excluded. Using a self-administered questionnaire, data were collected on patients' characteristics and diabetes, as well as on various process (e.g. recommended annual screenings) and outcome quality-of-care indicators. Descriptive analyses were performed. Results: A total of 406 patients with diabetes participated. Mean age was 64 years, 41% were women and 63% were married. Patients reported type 1, type 2 and other types of diabetes in 13%, 69% and 19% of cases, respectively. They were treated with oral anti-diabetic drugs, insulin or both in 50%, 23% and 27% of the cases. Half of the patients did not report any diabetes-related complication. Glucose self-monitoring was reported by 82% of the patients. Of those who were aware of HbA1C (n = 218), 98% reported at least one HbA1C check during the last 12 months. During that same time frame, 97% and 95% reported at least one blood pressure and weight measurement, respectively; 94% reported having had a cholesterol check; and 74%, 68% and 64% had eye, foot and urine screening, respectively. 62% of the patients had been immunized against influenza. At least 76% of the patients had a minimum of 5 of the 7 described process indicators performed during the last 12 months. Among patients who knew the value (n = 145), mean HbA1C was 7.4% (SD 1.2). Conclusion: This study targeting community-based diabetic patients shows that while routine clinical and laboratory tests were performed annually in the vast majority of patients, foot and urine screening, as well as influenza immunization, were less often reported by patients. The proportion of patients with diabetes having had at least 5 out of the 7 annual screenings performed was nevertheless very high.