79 results for Risk interval measure
Abstract:
Background: The identification of pre-clinical microvascular damage in hypertension by non-invasive techniques has proved frustrating for clinicians. This proof of concept study investigated whether entropy, a novel summary measure for characterizing blood velocity waveforms, is altered in participants with hypertension and may therefore be useful in risk stratification.
Methods: Doppler ultrasound waveforms were obtained from the carotid and retrobulbar circulation in 42 participants with uncomplicated grade 1 hypertension (mean systolic/diastolic blood pressure (BP) 142/92 mmHg), and 26 healthy controls (mean systolic/diastolic BP 116/69 mmHg). Mean wavelet entropy was derived from flow-velocity data and compared with traditional haemodynamic measures of microvascular function, namely the resistive and pulsatility indices.
Results: Entropy was significantly higher in control participants in the central retinal artery (CRA) (differential mean 0.11 (standard error 0.05 cm s-1), CI 0.009 to 0.219, p = 0.017) and ophthalmic artery (0.12 (0.05), CI 0.004 to 0.215, p = 0.04). In comparison, the resistive index (0.12 (0.05), CI 0.005 to 0.226, p = 0.029) and pulsatility index (0.96 (0.38), CI 0.19 to 1.72, p = 0.015) showed significant differences between groups in the CRA alone. Regression analysis indicated that entropy was significantly influenced by age and systolic blood pressure (r values 0.4-0.6). None of the measures were significantly altered in the larger conduit vessel.
Conclusion: This is the first application of entropy to human blood velocity waveform analysis and shows that this new technique has the ability to discriminate health from early hypertensive disease, thereby promoting the early identification of cardiovascular disease in a young hypertensive population.
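The abstract does not specify how wavelet entropy was computed; as a rough illustration of the general technique, the sketch below derives a Shannon entropy over relative wavelet energies with PyWavelets (the wavelet family, decomposition depth and synthetic waveform are assumptions, not the authors' settings). For reference, the resistive index is (peak systolic velocity - end-diastolic velocity) / peak systolic velocity, and the pulsatility index divides the same difference by the time-averaged mean velocity.

```python
# Minimal sketch: wavelet (Shannon) entropy of a Doppler velocity waveform.
# The wavelet family, decomposition depth and the synthetic waveform are
# illustrative assumptions; they are not taken from the study.
import numpy as np
import pywt

def wavelet_entropy(velocity, wavelet="db4", level=4):
    """Shannon entropy of the relative wavelet energy across decomposition levels."""
    coeffs = pywt.wavedec(velocity, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / energies.sum()          # relative energy per level
    p = p[p > 0]                           # avoid log(0)
    return -np.sum(p * np.log(p))

# Example with a synthetic pulsatile waveform (two cardiac cycles, 100 Hz)
t = np.linspace(0, 2, 200)
velocity = 20 + 10 * np.maximum(np.sin(2 * np.pi * 1.2 * t), 0)
print(wavelet_entropy(velocity))
```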
Abstract:
Background: The Prenatal Distress Questionnaire (PDQ) is a short measure designed to assess specific worries and concerns related to pregnancy. The aim of this study was to confirm the factor structure of the PDQ in a group of pregnant women with a small for gestational age infant (< 10th centile).
Methods: The first PDQ assessment for each of 337 pregnant women participating in the Prospective Observational Trial to Optimise paediatric health (PORTO) study was analysed. All women enrolled in the study were identified as having a small for gestational age foetus (< 10th centile), thus representing an 'elevated risk' group. Data were analysed using confirmatory factor analysis (CFA). Three models of the PDQ were evaluated and compared in the current study: a theoretical uni-dimensional measurement model, a bi-dimensional model, and a three-factor model solution.
Results: The three-factor model offered the best fit to the data while maintaining sound theoretical grounds (χ2 (51 df) = 128.52; CFI = 0.97; TLI = 0.96; RMSEA = 0.07). Factor 1 contained items reflecting concerns about birth and the baby, factor 2 concerns about physical symptoms and body image, and factor 3 concerns about emotions and relationships.
Conclusions: CFA confirmed that the three-factor model provided the best fit, with the items in each factor reflecting the findings of an earlier exploratory data analysis. © 2013 Society for Reproductive and Infant Psychology.
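The fit indices quoted above (CFI, TLI, RMSEA) can be reproduced from the model and baseline chi-square statistics; the sketch below shows the standard formulas in Python. The baseline (null-model) chi-square is not reported in the abstract, so the value used here is an assumed placeholder chosen only for illustration.

```python
# Minimal sketch: CFA fit indices (RMSEA, CFI, TLI) from model and baseline
# chi-square statistics.  The baseline (null-model) values are placeholders,
# not figures from the study.
import math

def fit_indices(chi2_m, df_m, chi2_0, df_0, n):
    rmsea = math.sqrt(max(chi2_m - df_m, 0) / (df_m * (n - 1)))
    cfi = 1 - max(chi2_m - df_m, 0) / max(chi2_0 - df_0, chi2_m - df_m, 1e-12)
    tli = ((chi2_0 / df_0) - (chi2_m / df_m)) / ((chi2_0 / df_0) - 1)
    return rmsea, cfi, tli

# Reported three-factor model: chi2(51) = 128.52, N = 337; baseline values assumed.
print(fit_indices(chi2_m=128.52, df_m=51, chi2_0=2600.0, df_0=66, n=337))
```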
Abstract:
Background: Multidimensional rehabilitation programmes (MDRPs) have developed in response to the growing number of people living with and surviving cancer. MDRPs comprise a physical component and a psychosocial component. Studies of the effectiveness of these programmes have not been reviewed and synthesised.
Objectives: To conduct a systematic review of studies examining the effectiveness of MDRPs in terms of maintaining or improving the physical and psychosocial well-being of adult cancer survivors.
Search methods: We conducted electronic searches in the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, CINAHL and PsycINFO up to February 2012.
Selection criteria: Selection criteria focused on randomised controlled trials (RCTs) of multidimensional interventions for adult cancer survivors. Interventions had to include a physical component and a psychosocial component and to have been carried out on two or more occasions following completion of primary cancer treatment. Outcomes had to be assessed using validated measures of physical health and psychosocial well-being. Non-English language papers were included.
Data collection and analysis: Pairs of review authors independently selected trials, rated their methodological quality and extracted relevant data. Although meta-analyses of primary and secondary endpoints were planned there was a high level of study heterogeneity and only one common outcome measure (SF-36) could be statistically synthesised. In addition, we conducted a narrative analysis of interventions, particularly in terms of inspecting and identifying intervention components, grouping or categorising interventions and examining potential common links and outcomes.
Main results: Twelve RCTs (comprising 1669 participants) met the eligibility criteria. We judged five studies to have a moderate risk of bias and assessed the remaining seven as having a high risk of bias. It was possible to include SF-36 physical health component scores from five studies in a meta-analysis. Participating in an MDRP was associated with an increase in SF-36 physical health component scores (mean difference (MD) 2.22, 95% confidence interval (CI) 0.12 to 4.31, P = 0.04). The findings from the narrative analysis suggested that MDRPs with a single domain or outcome focus appeared to be more successful than programmes with multiple aims. In addition, programmes that included participants with different types of cancer were more likely to show positive improvements in physical outcomes than cancer site-specific programmes. The most effective mode of service delivery appeared to be face-to-face contact supplemented with at least one follow-up telephone call. There was no evidence to indicate that MDRPs which lasted longer than six months improved outcomes beyond the level attained at six months. In addition, there was no evidence to suggest that services were more effective if they were delivered by a particular type of health professional.
Authors' conclusions: There is some evidence to support the effectiveness of brief, focused MDRPs for cancer survivors. Rigorous and methodologically sound clinical trials that include an economic analysis are required.
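For readers unfamiliar with how the pooled SF-36 estimate is formed, the sketch below shows a generic inverse-variance fixed-effect meta-analysis of mean differences; the five study-level values are placeholders, not the review's data.

```python
# Minimal sketch of inverse-variance (fixed-effect) pooling of mean differences,
# as used for the SF-36 physical health component score.  The study-level
# estimates below are placeholders, not data from the five included trials.
import numpy as np
from scipy import stats

md = np.array([1.5, 3.0, 2.0, 4.0, 0.8])      # per-study mean differences
se = np.array([1.8, 2.0, 1.6, 2.5, 1.9])      # per-study standard errors

w = 1.0 / se**2                                # inverse-variance weights
pooled = np.sum(w * md) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
ci = pooled + np.array([-1.96, 1.96]) * pooled_se
p = 2 * (1 - stats.norm.cdf(abs(pooled / pooled_se)))
print(f"MD {pooled:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}, P = {p:.3f}")
```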
Abstract:
Introduction: Individuals carrying pathogenic mutations in the BRCA1 and BRCA2 genes have a high lifetime risk of breast cancer. BRCA1 and BRCA2 are involved in DNA double-strand break repair, DNA alterations that can be caused by exposure to reactive oxygen species, a main source of which are mitochondria. Mitochondrial genome variations affect electron transport chain efficiency and reactive oxygen species production. Individuals with different mitochondrial haplogroups differ in their metabolism and sensitivity to oxidative stress. Variability in mitochondrial genetic background can alter reactive oxygen species production, leading to cancer risk. In the present study, we tested the hypothesis that mitochondrial haplogroups modify breast cancer risk in BRCA1/2 mutation carriers.
Methods: We genotyped 22,214 (11,421 affected, 10,793 unaffected) mutation carriers belonging to the Consortium of Investigators of Modifiers of BRCA1/2 for 129 mitochondrial polymorphisms using the iCOGS array. Haplogroup inference and association detection were performed using a phylogenetic approach. ALTree was applied to explore the reference mitochondrial evolutionary tree and detect subclades enriched in affected or unaffected individuals.
Results: We discovered that subclade T1a1 was depleted in affected BRCA2 mutation carriers compared with the rest of clade T (hazard ratio (HR) = 0.55; 95% confidence interval (CI), 0.34 to 0.88; P = 0.01). Compared with the most frequent haplogroups in the general population (that is, the H and T clades), the T1a1 haplogroup had an HR of 0.62 (95% CI, 0.40 to 0.95; P = 0.03). We also identified three potential susceptibility loci, including G13708A/rs28359178, which has demonstrated an inverse association with familial breast cancer risk.
Conclusions: This study illustrates how original approaches such as the phylogeny-based method we used can empower classical molecular epidemiological studies aimed at identifying association or risk modification effects.
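As a loose illustration of how a subclade hazard ratio of this kind can be estimated, the sketch below fits a simple Cox model with lifelines; the data frame and column names are hypothetical, and the consortium's actual analysis used a weighted, phylogeny-informed approach rather than this plain model.

```python
# Minimal sketch: hazard ratio for carriers of a haplogroup subclade versus the
# rest of the clade, via a Cox proportional hazards model.  Column names and
# data are illustrative; the consortium analysis was considerably more elaborate.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "age_at_censor_or_diagnosis": [45, 52, 38, 60, 47, 55, 41, 63],
    "breast_cancer":              [1,  0,  1,  0,  1,  0,  0,  1],
    "is_T1a1":                    [0,  1,  0,  1,  0,  1,  1,  0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="age_at_censor_or_diagnosis", event_col="breast_cancer")
cph.print_summary()   # exp(coef) for is_T1a1 is the hazard ratio with its 95% CI
```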
Abstract:
Introduction: It has been suggested that doctors in their first year of post-graduate training make a disproportionate number of prescribing errors.
Objective: This study aimed to compare the prevalence of prescribing errors made by first-year post-graduate doctors with that of errors by senior doctors and non-medical prescribers and to investigate the predictors of potentially serious prescribing errors.
Methods: Pharmacists in 20 hospitals over 7 prospectively selected days collected data on the number of medication orders checked, the grade of prescriber and details of any prescribing errors. Logistic regression models (adjusted for clustering by hospital) identified factors predicting the likelihood of prescribing erroneously and the severity of prescribing errors.
Results: Pharmacists reviewed 26,019 patients and 124,260 medication orders; 11,235 prescribing errors were detected in 10,986 orders. The mean error rate was 8.8 errors per 100 medication orders (95% confidence interval [CI] 8.6-9.1). Rates of errors for all doctors in training were significantly higher than rates for medical consultants. Doctors in their first year of training (odds ratio [OR] 2.13; 95% CI 1.80-2.52) or second year of training (OR 2.23; 95% CI 1.89-2.65) were more than twice as likely to prescribe erroneously. Prescribing errors were 70% (OR 1.70; 95% CI 1.61-1.80) more likely to occur at the time of hospital admission than when medication orders were issued during the hospital stay. No significant differences in severity of error were observed between grades of prescriber. Potentially serious errors were more likely to be associated with prescriptions for parenteral administration, especially for cardiovascular or endocrine disorders.
Conclusions: The problem of prescribing errors in hospitals is substantial and not solely a problem of the most junior medical prescribers, particularly for those errors most likely to cause significant patient harm. Interventions are needed to target these high-risk errors by all grades of staff and hence improve patient safety.
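The models described above adjust for clustering by hospital; a minimal sketch of a comparable logistic regression with cluster-robust standard errors in statsmodels is shown below. All variable names and the simulated data are illustrative assumptions, not the study's dataset.

```python
# Minimal sketch: logistic regression for the odds of a prescribing error with
# standard errors adjusted for clustering by hospital.  Variables and data are
# illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "error":     rng.integers(0, 2, n),                    # 1 = prescribing error
    "grade":     rng.choice(["consultant", "fy1", "fy2"], n),
    "admission": rng.integers(0, 2, n),                    # order written at admission
    "hospital":  rng.integers(0, 20, n),                   # 20 hospitals
})

model = smf.logit("error ~ C(grade, Treatment(reference='consultant')) + admission",
                  data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["hospital"]})
print(np.exp(result.params))      # odds ratios
print(np.exp(result.conf_int()))  # 95% confidence intervals
```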
Abstract:
Background: The use of technology in healthcare settings is on the increase and may represent a cost-effective means of delivering rehabilitation. Reductions in treatment time, and delivery in the home, are also thought to be benefits of this approach. Children and adolescents with brain injury often experience deficits in memory and executive functioning that can negatively affect their school work, social lives, and future occupations. Effective interventions that can be delivered at home, without the need for high-cost clinical involvement, could provide a means to address a current lack of provision. We have systematically reviewed studies examining the effects of technology-based interventions for the rehabilitation of deficits in memory and executive functioning in children and adolescents with acquired brain injury.
Objectives: To assess the effects of technology-based interventions compared to placebo intervention, no treatment, or other types of intervention, on the executive functioning and memory of children and adolescents with acquired brain injury.
Search methods: We ran the search on 30 September 2015. We searched the Cochrane Injuries Group Specialised Register, the Cochrane Central Register of Controlled Trials (CENTRAL), Ovid MEDLINE(R), Ovid MEDLINE(R) In-Process & Other Non-Indexed Citations, Ovid MEDLINE(R) Daily and Ovid OLDMEDLINE(R), EMBASE Classic + EMBASE (OvidSP), ISI Web of Science (SCI-EXPANDED, SSCI, CPCI-S, and CPSI-SSH), CINAHL Plus (EBSCO), two other databases, and clinical trials registers. We also searched the internet, screened reference lists, and contacted authors of included studies.
Selection criteria: Randomised controlled trials comparing the use of a technological aid for the rehabilitation of children and adolescents with memory or executive-functioning deficits with placebo, no treatment, or another intervention.
Data collection and analysis: Two review authors independently reviewed titles and abstracts identified by the search strategy. Following retrieval of full-text manuscripts, two review authors independently performed data extraction and assessed the risk of bias.
Main results: Four studies (involving 206 participants) met the inclusion criteria for this review. Three studies, involving 194 participants, assessed the effects of online interventions to target executive functioning (that is, monitoring and changing behaviour, problem solving, planning, etc.). These studies, which were all conducted by the same research team, compared online interventions against a 'placebo' (participants were given internet resources on brain injury). The interventions were delivered in the family home with additional support or training, or both, from a psychologist or doctoral student. The fourth study investigated the use of a computer program to target memory in addition to components of executive functioning (that is, attention, organisation, and problem solving). No information on the study setting was provided; however, a speech-language pathologist, teacher, or occupational therapist accompanied participants. Two studies assessed adolescents and young adults with mild to severe traumatic brain injury (TBI), while the remaining two studies assessed children and adolescents with moderate to severe TBI.
Risk of bias: We assessed the risk of selection bias as low for three studies and unclear for one study. Allocation bias was high in two studies, unclear in one study, and low in one study. Only one study (n = 120) was able to conceal allocation from participants, therefore overall selection bias was assessed as high. One study took steps to blind assessors to allocation (low risk of detection bias), while the other three did not do so (high risk of detection bias).
Primary outcome 1 - executive functioning (technology-based intervention versus placebo): Results from meta-analysis of three studies (n = 194) comparing online interventions with a placebo for children and adolescents with TBI favoured the intervention immediately post-treatment (standardised mean difference (SMD) -0.37, 95% confidence interval (CI) -0.66 to -0.09; P = 0.62; I2 = 0%). (As there is no 'gold standard' measure in the field, we have not translated the SMD back to any particular scale.) This result is thought to represent only a small to medium effect size (using Cohen's rule of thumb, where 0.2 is a small effect, 0.5 a medium one, and 0.8 or above is a large effect); this is unlikely to have a clinically important effect on the participant. The fourth study (n = 12) reported differences between the intervention and control groups on problem solving (an important component of executive functioning). No means or standard deviations were presented for this outcome, therefore an effect size could not be calculated. The quality of evidence for this outcome according to GRADE was very low. This means future research is highly likely to change the estimate of effect.
Primary outcome 2 - memory: One small study (n = 12) reported a statistically significant difference in improvement in sentence recall between the intervention and control group following an eight-week remediation programme. No means or standard deviations were presented for this outcome, therefore an effect size could not be calculated.
Secondary outcomes: Two studies (n = 158) reported on anxiety/depression as measured by the Child Behavior Checklist (CBCL) and were included in a meta-analysis. We found no evidence of an effect with the intervention (mean difference -5.59, 95% CI -11.46 to 0.28; I2 = 53%). The GRADE quality of evidence for this outcome was very low, meaning future research is likely to change the estimate of effect. A single study sought to record adverse events and reported none. Two studies reported on use of the intervention (range 0 to 13 and 1 to 24 sessions). One study reported on social functioning/social competence and found no effect. The included studies reported no data for other secondary outcomes (that is, quality of life and academic achievement).
Authors' conclusions: This review provides low-quality evidence for the use of technology-based interventions in the rehabilitation of executive functions and memory for children and adolescents with TBI. As all of the included studies contained relatively small numbers of participants (12 to 120), our findings should be interpreted with caution. The involvement of a clinician or therapist, rather than use of the technology itself, may have led to the success of these interventions. Future research should seek to replicate these findings with larger samples, in other regions, using ecologically valid outcome measures, and with reduced clinician involvement.
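The review's meta-analysis works on standardised mean differences and reports an I-squared heterogeneity statistic; the sketch below shows how an SMD (Hedges' g) and I-squared are typically computed, using invented group summaries rather than the included trials' data.

```python
# Minimal sketch: standardised mean difference (Hedges' g) per study and the
# I-squared heterogeneity statistic.  The group summaries are placeholders,
# not data from the included trials.
import numpy as np

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """SMD with the small-sample (Hedges) correction; negative favours group 1."""
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)           # small-sample correction factor
    var = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    return j * d, (j**2) * var

# Intervention vs placebo summaries for three hypothetical studies
studies = [hedges_g(22, 8, 40, 25, 9, 38),
           hedges_g(30, 10, 32, 34, 11, 30),
           hedges_g(18, 7, 28, 21, 8, 26)]
g = np.array([s[0] for s in studies])
v = np.array([s[1] for s in studies])

w = 1 / v
pooled = np.sum(w * g) / np.sum(w)
Q = np.sum(w * (g - pooled)**2)               # Cochran's Q
I2 = max(0.0, (Q - (len(g) - 1)) / Q) * 100   # I-squared (%)
print(f"pooled SMD {pooled:.2f}, I2 = {I2:.0f}%")
```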
Abstract:
In recent years much attention has been given to systemic risk and maintaining financial stability. Much of the focus, rightly, has been on market failures and the role of regulation in addressing them. This article looks at the role of domestic policies and government actions as sources of global instability. The global financial system is built upon global markets controlled by national financial and macroeconomic policies. In this context, regulatory asymmetries, diverging policy preferences, and government failures add a further dimension to global systemic risk not present at the national level.
Systemic risk is a result of the interplay between two independent variables: an underlying trigger event, in this analysis a domestic policy measure, and a transmission channel. The solution to systemic risk requires tackling one of these variables. In a domestic setting, the centralization of regulatory power into one single authority makes it easier to balance the delicate equilibrium between enhancing efficiency and reducing instability. However, in a global financial system in which national financial policies serve to maximize economic welfare, regulators will be confronted with difficult policy and legal tradeoffs.
We investigate the role that financial regulation plays in addressing domestic policy failures and in controlling the danger of global financial interdependence. To do so we analyse global financial interconnectedness, and explain its role in transmitting instability; we investigate the political economy dynamics at the origin of regulatory asymmetries and government failures; and we discuss the limits of regulation.
Abstract:
AIM: To evaluate the association between various lifestyle factors and achalasia risk.
METHODS: A population-based case-control study was conducted in Northern Ireland, including n = 151 achalasia cases and n = 117 age- and sex-matched controls. Lifestyle factors were assessed via a face-to-face structured interview. The association between achalasia and lifestyle factors was assessed by unconditional logistic regression, to produce odds ratios (ORs) and 95% confidence intervals (CIs).
RESULTS: Individuals in low-class occupations were at the highest risk of achalasia (OR = 1.88, 95%CI: 1.02-3.45), implying that holders of high-class occupations have a reduced risk of achalasia. A history of foreign travel, a lifestyle factor linked to upper socio-economic class, was also associated with a reduced risk of achalasia (OR = 0.59, 95%CI: 0.35-0.99). Smoking and alcohol consumption carried significantly reduced risks of achalasia, even after adjustment for socio-economic status. The presence of pets in the house was associated with a two-fold increased risk of achalasia (OR = 2.00, 95%CI: 1.17-3.42). No childhood household factors were associated with achalasia risk.
CONCLUSION: Achalasia is a disease of inequality, and individuals from low socio-economic backgrounds are at highest risk. This does not appear to be due to corresponding alcohol and smoking behaviours. An observed positive association between pet ownership and achalasia risk suggests an interaction between endotoxin and viral infection exposure in achalasia aetiology.
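The unadjusted estimates above are, at heart, odds ratios from 2x2 exposure tables; the sketch below shows the standard calculation with a Wald 95% confidence interval, using hypothetical counts rather than the study's data.

```python
# Minimal sketch: unadjusted odds ratio and Wald 95% CI from a 2x2 exposure
# table, the building block of case-control estimates like those above.
# The counts below are illustrative, not the study's data.
import math

def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    or_ = (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)
    se_log = math.sqrt(1/exposed_cases + 1/unexposed_cases +
                       1/exposed_controls + 1/unexposed_controls)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, (lo, hi)

# e.g. pet ownership among 151 cases and 117 controls (hypothetical split)
print(odds_ratio(exposed_cases=80, unexposed_cases=71,
                 exposed_controls=40, unexposed_controls=77))
```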
Abstract:
OBJECTIVES: To improve understanding about the potential underlying biological mechanisms in the link between depression and all-cause mortality and to investigate the role that inflammatory and other cardiovascular risk factors may play in the relationship between depressive symptoms and mortality.
METHODS: Depression and blood-based biological markers were assessed in the Belfast PRIME prospective cohort study (N = 2389 men, aged 50-59 years) in which participants were followed up for 18 years. Depression was measured using the 10-item Welsh Pure Depression Inventory. Inflammation markers (C-reactive protein [CRP], neopterin, interleukin [IL]-1 receptor antagonist [IL-1Ra], and IL-18) and cardiovascular-specific risk factors (N-terminal pro-b-type natriuretic peptide, midregion pro-atrial natriuretic peptide, midregion pro-adrenomedullin, C-terminal pro-endothelin-1 [CT-proET]) were obtained at baseline. We used Cox proportional hazards modeling to examine the association between depression and biological measures in relation to all-cause mortality and explore the mediating effects.
RESULTS: During follow-up, 418 participants died. Higher levels of depressive symptoms were associated with higher levels of CRP, IL-1Ra, and CT-proET. After adjustment for socioeconomic and lifestyle risk factors, depressive symptoms were significantly associated with all-cause mortality (hazard ratio = 1.10 per scale unit, 95% confidence interval = 1.04-1.16). This association was partly explained by CRP (7.3%), suggesting a minimal mediation effect. IL-1Ra, N-terminal pro-b-type natriuretic peptide, midregion pro-atrial natriuretic peptide, midregion pro-adrenomedullin, and CT-proET contributed marginally to the association between depression and subsequent mortality.
CONCLUSIONS: Inflammatory and cardiovascular risk markers are associated with depression and with increased mortality. However, depression and the biological measures show additive effects rather than a pattern of mediation by biological factors in the association between depression and mortality.
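A "percentage explained" estimate of the kind quoted for CRP is commonly obtained by comparing the log hazard ratio for depressive symptoms before and after adding the candidate mediator; the sketch below shows that arithmetic with hypothetical hazard ratios (the fitted values with CRP added are not reported in the abstract).

```python
# Minimal sketch of the "percentage explained" (difference-in-coefficients)
# approach to mediation for a Cox model.  The second hazard ratio below is
# hypothetical, not the cohort's fitted value.
import math

hr_base = 1.10       # depression HR adjusted for socioeconomic/lifestyle factors
hr_with_crp = 1.093  # same model with CRP added (assumed for illustration)

pct_explained = (math.log(hr_base) - math.log(hr_with_crp)) / math.log(hr_base) * 100
print(f"{pct_explained:.1f}% of the depression-mortality association explained")
```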
Abstract:
PURPOSE: Disordered sleep and myopia are increasingly prevalent among Chinese children. Similar pathways may be involved in the regulation of both sleep cycles and eye growth. We therefore sought to examine the association between disordered sleep and myopia in this group.
METHODS: Urban primary school children participating in a clinical trial on myopia and outdoor activity underwent automated cycloplegic refraction with subjective refinement. Parents answered questions about children's sleep duration, sleep disorders (Children's Sleep Habits Questionnaire [CSHQ]), near work and time spent outdoors.
RESULTS: Among 1970 children, 1902 (96.5%; mean [standard deviation, SD] age 9.80 [0.44] years; 53.1% boys) completed refraction and questionnaires. Myopia ≤ -0.50 diopters (D) was present in both eyes of 588 (30.9%) children (1329/3804 = 34.9% of eyes) and 1129 children (59.4%) had abnormal CSHQ scores (> 41). In logistic regression models by eye, the odds of myopia ≤ -0.50 D increased with worse CSHQ score (odds ratio [OR] 1.01 per point, 95% confidence interval [CI] [1.001, 1.02], P = 0.014) and more night-time sleep (OR 1.02, 95% CI [1.01, 1.04], P = 0.002), while male sex (OR 0.82, 95% CI [0.70, 0.95], P = 0.008) and time outdoors (OR 0.97, 95% CI [0.95, 0.99], P = 0.011) were associated with less myopia. The association between sleep duration and myopia was not significant (P = 0.199) for total (night + midday) sleep.
CONCLUSIONS: Myopia and disordered sleep were both common in this cohort, but we did not find consistent evidence for an association between the two.
TRIAL REGISTRATION: clinicaltrials.gov NCT00848900.
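A per-point odds ratio as small as 1.01 compounds across the CSHQ score range; under the fitted model's linearity assumption, the implied OR for a k-point difference is 1.01 raised to the power k, as the short sketch below illustrates.

```python
# Minimal sketch: how a per-point odds ratio (OR 1.01 per CSHQ point) scales
# multiplicatively across larger score differences, assuming model linearity.
per_point_or = 1.01
for k in (5, 10, 20):
    print(f"{k}-point higher CSHQ score -> implied OR {per_point_or ** k:.2f}")
```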
Abstract:
Background/Purpose: Juvenile idiopathic arthritis (JIA) comprises a poorly understood group of chronic, childhood-onset, autoimmune diseases with variable clinical outcomes. We investigated whether profiling of the synovial fluid (SF) proteome by a fluorescent dye based, two-dimensional gel (DIGE) approach could distinguish the subset of patients in whom inflammation extends to affect a large number of joints, early in the disease process. The post-translational modifications to candidate protein markers were verified by a novel deglycosylation strategy.
Methods: SF samples from 57 patients were obtained around the time of initial diagnosis of JIA. At 1 year from inclusion, patients were categorized according to ILAR criteria as oligoarticular arthritis (n=26), extended oligoarticular (n=8) and polyarticular disease (n=18). SF samples were labeled with Cy dyes and separated by two-dimensional electrophoresis. Multivariate analyses were used to isolate a panel of proteins which distinguish patient subgroups. Proteins were identified using MALDI-TOF mass spectrometry, with vitamin D binding protein (VDBP) expression and sialylation further verified by immunohistochemistry, ELISA and immunoprecipitation. Candidate biomarkers were compared to the conventional inflammation measure C-reactive protein (CRP). Sialic acid residues were enzymatically cleaved from immunopurified SF VDBP, enriched by hydrophilic interaction liquid chromatography (HILIC) and analysed by mass spectrometry.
Results: Hierarchical clustering based on the expression levels of a set of 23 proteins segregated the extended-to-be oligoarticular from the oligoarticular patients. A cleaved isoform of VDBP, spot 873, is present at significantly reduced levels in the SF of oligoarticular patients at risk of disease extension, relative to other subgroups (p<0.05). Conversely, total levels of vitamin D binding protein are elevated in plasma, and ROC curves indicate an improved diagnostic sensitivity to detect patients at risk of disease extension over both spot 873 and CRP levels. Sialylated forms of intact immunopurified VDBP were more prevalent in the synovial fluid of patients with persistent oligoarticular disease.
Conclusion: The data indicate that a subset of the synovial fluid proteome may be used to stratify patients to determine the risk of disease extension. Reduced conversion of VDBP to a macrophage activation factor may represent a novel pathway contributing to increased risk of disease extension in JIA patients.
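Two of the analyses above, hierarchical clustering of spot intensities and ROC evaluation of a candidate marker, are sketched generically below with simulated data; nothing in the snippet reflects the study's actual DIGE measurements.

```python
# Minimal sketch: hierarchical clustering of protein-spot intensities and an
# ROC AUC for a single candidate marker.  The intensity matrix and labels are
# simulated, not the study's DIGE data.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
intensities = rng.normal(size=(34, 23))     # 34 patients x 23 protein spots
extension = rng.integers(0, 2, 34)          # 1 = disease extended to many joints

# Ward-linkage clustering of patients on the 23-spot expression profile
clusters = fcluster(linkage(intensities, method="ward"), t=2, criterion="maxclust")

# Diagnostic performance of one candidate marker (e.g. a VDBP isoform level)
marker = intensities[:, 0]
print("AUC:", roc_auc_score(extension, marker))
```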
Abstract:
Purpose: To investigate how potentially functional genetic variants are coinherited on each of four common complement factor H (CFH) and CFH-related gene haplotypes and to measure expression of these genes in eye and liver tissues.
Methods: We sequenced the CFH region in four individuals (one homozygote for each of four common CFH region haplotypes) to identify all genetic variants. We studied associations between the haplotypes and AMD phenotypes in 2157 cases and 1150 controls. We examined RNA-seq profiles in macular and peripheral retina and retinal pigment epithelium/choroid/sclera (RCS) from eight eye donors and three liver samples.
Results: The haplotypic coinheritance of potentially functional variants (including missense variants, novel splice sites, and the CFHR3–CFHR1 deletion) was described for the four common haplotypes. Expression of the short and long CFH transcripts differed markedly between the retina and liver. We found no expression of any of the five CFH-related genes in the retina or RCS, in contrast to the liver, which is the main source of the circulating proteins.
Conclusions: We identified all genetic variants on common CFH region haplotypes and described their coinheritance. Understanding their functional effects will be key to developing and stratifying AMD therapies. The small scale of our expression study prevented us from investigating the relationships between CFH region haplotypes and their expression, and it will take time and collaboration to develop epidemiologic-scale studies. However, the striking difference between systemic and ocular expression of complement regulators shown in this study suggests important implications for the development of intraocular and systemic treatments.
Abstract:
Food preparation and storage behaviors in the home that deviate from 'best practice' food safety recommendations may result in foodborne illness. Currently, there are limited tools available to fully evaluate consumer knowledge, perceptions and behavior in the area of refrigerator safety. The current study aimed to develop a valid and reliable tool in the form of a questionnaire (CFSQCRSQ) for assessing all of these aspects systematically. Items relating to refrigerator safety knowledge (n=17), perceptions (n=46) and reported behavior (n=30) were developed and pilot tested by an expert reference group and various consumer groups to assess face and content validity (n=20), item difficulty and item consistency (n=55) and construct validity (n=23). The findings showed that the CFSQCRSQ has acceptable face and content validity with acceptable levels of item difficulty. Item consistency was observed for 12 out of 15 refrigerator safety knowledge items. Further, all five of the subscales of consumer perceptions of refrigerator safety practices relating to the risk of developing foodborne disease (food poisoning) showed acceptable internal consistency (Cronbach's α > 0.8). Construct validity of the CFSQCRSQ was shown to be very good (p=0.022). The CFSQCRSQ exhibited acceptable test-retest reliability at 14 days, with the majority of knowledge items (93.3%) and reported behavior items (96.4%) having correlation coefficients greater than 0.70. Overall, the CFSQCRSQ was deemed valid and reliable for assessing refrigerator safety knowledge and behavior, and therefore has potential for future use in identifying groups of individuals at increased risk of deviating from recommended refrigerator safety practices, as well as for assessing refrigerator safety knowledge and behavior before and after an intervention.
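Cronbach's alpha, the internal-consistency statistic reported for the perception subscales, is straightforward to compute; the sketch below applies the standard formula to a simulated item-response matrix (the data and subscale size are assumptions).

```python
# Minimal sketch: Cronbach's alpha for one questionnaire subscale.
# The item-response matrix is simulated, not the study's survey data.
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = items of one subscale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
latent = rng.normal(size=(55, 1))                          # shared trait
responses = latent + rng.normal(scale=0.5, size=(55, 6))   # 6 correlated items
print(cronbach_alpha(responses))
```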
Abstract:
CONTEXT: In observational studies low serum 25-hydroxyvitamin D (25-OHD) concentration is associated with an increased risk of type 2 diabetes mellitus (DM). Increasing serum 25-OHD may have beneficial effects on insulin resistance or beta-cell function. Cross-sectional studies utilising sub-optimal methods for assessment of insulin sensitivity and serum 25-OHD concentration provide conflicting results.
OBJECTIVE: This study examined the relationship between serum 25-OHD concentration and insulin resistance in healthy overweight individuals at increased risk of cardiovascular disease, using optimal assessment techniques.
METHODS: 92 subjects (mean age 56.0, SD 6.0 years), who were healthy but overweight (mean BMI 30.9, SD 2.3 kg/m²), underwent assessments of insulin sensitivity (two-step euglycaemic hyperinsulinaemic clamp, HOMA2-IR), beta-cell function (HOMA2%B), serum 25-OHD concentration and body composition (DEXA).
RESULTS: Mean total 25-OHD concentration was 32.2 nmol/L (range 21.8-46.6). No association was demonstrated between serum 25-OHD concentration and insulin resistance.
CONCLUSIONS: In this study using optimal assessment techniques to measure 25-OHD concentration, insulin sensitivity and body composition, there was no association between serum 25-OHD concentration and insulin resistance in healthy, overweight individuals at high risk of developing cardiovascular disease. This study suggests the documented inverse association between serum 25-OHD concentration and risk of type 2 DM is not mediated by a relationship between serum 25-OHD concentration and insulin resistance.
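One common way to test an association of this kind is to regress an insulin-resistance index on serum 25-OHD with adjustment for adiposity; the sketch below shows such a model in statsmodels, with simulated data and illustrative variable names only (it is not the study's analysis).

```python
# Minimal sketch: linear model for an association between serum 25-OHD and an
# insulin-resistance index, adjusted for adiposity and age.  Data and variable
# names are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 92
df = pd.DataFrame({
    "vitd_25ohd": rng.normal(32, 6, n),        # nmol/L
    "homa2_ir":   rng.lognormal(0.3, 0.4, n),
    "fat_mass":   rng.normal(30, 6, n),        # kg, e.g. from DEXA
    "age":        rng.normal(56, 6, n),
})

fit = smf.ols("np.log(homa2_ir) ~ vitd_25ohd + fat_mass + age", data=df).fit()
print(fit.summary().tables[1])   # coefficient (and p-value) for vitd_25ohd
```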
Abstract:
The role of antiplatelet therapy as primary prophylaxis of thrombosis in low-risk essential thrombocythemia has not been studied in randomized clinical trials. We assessed the benefit/risk of low-dose aspirin in 433 low-risk essential thrombocythemia patients (CALR-mutated n=271, JAK2V617F-mutated n=162) who were on antiplatelet therapy or observation only. After 2215 person-years of follow-up free from cytoreduction, 25 thrombotic and 17 bleeding episodes were recorded. In CALR-mutated patients, antiplatelet therapy did not affect the risk of thrombosis but was associated with a higher incidence of bleeding (12.9 vs. 1.8 per 1000 patient-years, p=0.03). In JAK2V617F-mutated patients, low-dose aspirin was associated with a reduced incidence of venous thrombosis with no effect on the risk of bleeding. Coexistence of the JAK2V617F mutation and cardiovascular risk factors increased the risk of thrombosis, even after adjusting for treatment with low-dose aspirin (incidence rate ratio: 9.8; 95% confidence interval: 2.3-42.3; p=0.02). Time free from cytoreduction was significantly shorter in CALR-mutated than in JAK2V617F-mutated essential thrombocythemia (median 5 years vs. 9.8 years; p=0.0002), with cytoreduction usually introduced to control extreme thrombocytosis. In conclusion, in patients with low-risk, CALR-mutated essential thrombocythemia, low-dose aspirin does not reduce the risk of thrombosis and may increase the risk of bleeding.
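The bleeding comparison above is expressed as incidence rates per 1000 patient-years; the sketch below shows how a rate ratio and Wald 95% confidence interval are computed from event counts and follow-up time (the counts are illustrative, chosen only to land near the reported rates, not taken from the cohort).

```python
# Minimal sketch: incidence rates per 1000 patient-years and an incidence rate
# ratio with a Wald 95% CI.  Event counts and follow-up are illustrative.
import math

def rate_ratio(events_a, pyears_a, events_b, pyears_b):
    rate_a = events_a / pyears_a * 1000
    rate_b = events_b / pyears_b * 1000
    irr = rate_a / rate_b
    se_log = math.sqrt(1 / events_a + 1 / events_b)
    ci = (math.exp(math.log(irr) - 1.96 * se_log),
          math.exp(math.log(irr) + 1.96 * se_log))
    return rate_a, rate_b, irr, ci

# e.g. bleeding on antiplatelet therapy vs observation (hypothetical counts)
print(rate_ratio(events_a=9, pyears_a=700, events_b=2, pyears_b=1100))
```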