788 results for Difference-in-differences
Abstract:
A long-running issue in appetite research concerns the influence of energy expenditure on energy intake. More than 50 years ago, Otto G. Edholm proposed that "the differences between the intakes of food [of individuals] must originate in differences in the expenditure of energy". However, a relationship between energy expenditure and energy intake within any one day could not be found, although there was a correlation over 2 weeks. This issue was never resolved before interest in integrative biology was replaced by molecular biochemistry. Using a psychobiological approach, we have studied appetite control in an energy balance framework using a multi-level experimental system on a single cohort of overweight and obese human subjects. This has disclosed relationships between variables in the domains of body composition [fat-free mass (FFM), fat mass (FM)], metabolism, gastrointestinal hormones, hunger and energy intake. In this Commentary, we review our own and other data, and discuss a new formulation whereby appetite control and energy intake are regulated by energy expenditure. Specifically, we propose that FFM (the largest contributor to resting metabolic rate), but not body mass index or FM, is closely associated with self-determined meal size and daily energy intake. This formulation has implications for understanding weight regulation and the management of obesity.
Abstract:
Video-based training combined with flotation tank recovery may provide an additional stimulus for improving shooting in basketball. A pre-post controlled trial was conducted to assess the effectiveness of a 3 wk intervention combining video-based training and flotation tank recovery on three-point shooting performance in elite female basketball players. Players were assigned to an experimental (n=10) and control group (n=9). The 3 wk intervention consisted of 2 x 30 min float sessions a week, each including 10 min of video-based training footage, followed by a 3 wk retention phase. A total of 100 three-point shots were taken from 5 designated positions on the court each week to assess three-point shooting performance. There was no clear difference in the mean change in the number of successful three-point shots between the groups (-3%; ±18%, mean; ±90% confidence limits). Video-based training combined with flotation recovery had little effect on three-point shooting performance.
Abstract:
AIMS: To investigate the prevalence, histopathological and histomorphometric presentation of chronic laminitis in a population of Kaimanawa feral horses. METHODS: Following the capture and euthanasia of feral horses from the Kaimanawa Ranges of New Zealand, the left forefeet of 28 stallions and 28 mares aged between 6 and 12 years were removed and processed for histology. Sections of lamellar samples from each horse were examined using light microscopy. The presence of laminitis was assessed and the histopathological lesions were described. Horses were grouped by histological diagnosis into laminitic and non-laminitic groups, and histomorphometric analysis was conducted and compared between groups. The parameters examined were total length of primary epidermal lamellae (PEL), keratinised length of PEL, and the length of secondary epidermal lamellae (SEL) at the abaxial and axial ends of each PEL. RESULTS: Of the horses examined, 25 (45%) were diagnosed with chronic laminitis. The most prevalent histopathological features were the presence of excessive cap horn, and multi-branched and attenuated SEL. Histomorphometric assessment of the lamellar architecture revealed no difference in morphometric measurements between the normal and laminitic groups for any parameter measured (p>0.05). CONCLUSIONS: The current study found a high prevalence of laminitis in feral Kaimanawa horses. The reason for this in the Kaimanawa population is not known. Histomorphometric analysis may not be a good indicator of chronic laminitis in feral horses. CLINICAL RELEVANCE: It is an important finding that the feral horse lifestyle in the environment of the Kaimanawa Ranges in New Zealand offers no protection against foot disease. The finding suggests that horses are vulnerable to laminitis whether in domestic care or in a feral habitat.
Abstract:
The health impacts of exposure to ambient temperature have been drawing increasing attention from the environmental health research community, government, society, industries, and the public. Case-crossover and time series models are most commonly used to examine the effects of ambient temperature on mortality. However, some key methodological issues remain to be addressed. For example, few studies have used spatiotemporal models to assess the effects of spatial temperatures on mortality. Few studies have used a case-crossover design to examine the delayed (distributed lag) and non-linear relationship between temperature and mortality. Also, little evidence is available on the effects of temperature changes on mortality, and on differences in heat-related mortality over time. This thesis aimed to address the following research questions: 1. How can the case-crossover design and distributed lag non-linear models be combined? 2. Is there any significant difference in effect estimates between time series and spatiotemporal models? 3. How can the effects of temperature changes between neighbouring days on mortality be assessed? 4. Is there any change in temperature effects on mortality over time?

To combine the case-crossover design and the distributed lag non-linear model, datasets of deaths, weather conditions (minimum, mean and maximum temperature, and relative humidity) and air pollution were acquired from Tianjin, China, for the years 2005 to 2007. I demonstrated how to combine the case-crossover design with a distributed lag non-linear model. This allows the case-crossover design to estimate the non-linear and delayed effects of temperature whilst controlling for seasonality. There was a consistent U-shaped relationship between temperature and mortality. Cold effects were delayed by 3 days and persisted for 10 days. Hot effects were acute, lasted for three days, and were followed by mortality displacement for non-accidental, cardiopulmonary, and cardiovascular deaths. Mean temperature was a better predictor of mortality (based on model fit) than maximum or minimum temperature.

It is still unclear whether spatiotemporal models using spatial temperature exposure produce better estimates of mortality risk compared with time series models that use a single site's temperature or averaged temperature from a network of sites. Daily mortality data were obtained from 163 locations across Brisbane city, Australia from 2000 to 2004. Ordinary kriging was used to interpolate spatial temperatures across the city based on 19 monitoring sites. A spatiotemporal model was used to examine the impact of spatial temperature on mortality. A time series model was used to assess the effects on mortality of a single site's temperature and of averaged temperature from 3 monitoring sites. Squared Pearson scaled residuals were used to check the model fit. The results of this study show that even though spatiotemporal models gave a better model fit than time series models, the two approaches gave similar effect estimates. Time series analyses using temperature recorded from a single monitoring site, or average temperature from multiple sites, were as good at estimating the association between temperature and mortality as a spatiotemporal model.

A time series Poisson regression model was used to estimate the association between temperature change and mortality in summer in Brisbane, Australia during 1996–2004 and Los Angeles, United States during 1987–2000.
Temperature change was calculated as the current day's mean temperature minus the previous day's mean. In Brisbane, a drop of more than 3 °C in temperature between days was associated with relative risks (RRs) of 1.16 (95% confidence interval (CI): 1.02, 1.31) for non-external mortality (NEM), 1.19 (95% CI: 1.00, 1.41) for NEM in females, and 1.44 (95% CI: 1.10, 1.89) for NEM in people aged 65–74 years. An increase of more than 3 °C was associated with RRs of 1.35 (95% CI: 1.03, 1.77) for cardiovascular mortality and 1.67 (95% CI: 1.15, 2.43) for people aged < 65 years. In Los Angeles, only a drop of more than 3 °C was significantly associated with RRs of 1.13 (95% CI: 1.05, 1.22) for total NEM, 1.25 (95% CI: 1.13, 1.39) for cardiovascular mortality, and 1.25 (95% CI: 1.14, 1.39) for people aged ≥ 75 years. In both cities, there were joint effects of temperature change and mean temperature on NEM. A change in temperature of more than 3 °C, whether positive or negative, has an adverse impact on mortality even after controlling for mean temperature.

I examined the variation in the effects of high temperatures on elderly mortality (age ≥ 75 years) by year, city and region for 83 large US cities between 1987 and 2000. High temperature days were defined as two or more consecutive days with temperatures above the 90th percentile for each city during each warm season (May 1 to September 30). The mortality risk for high temperatures was decomposed into a "main effect" due to high temperatures, using a distributed lag non-linear function, and an "added effect" due to consecutive high temperature days. I pooled yearly effects across regions and overall effects at both regional and national levels. The effects of high temperature (both main and added effects) on elderly mortality varied greatly by year, city and region. Years with higher heat-related mortality were often followed by years with relatively lower mortality. Understanding this variability in the effects of high temperatures is important for the development of heat-warning systems.

In conclusion, this thesis makes contributions in several respects. The case-crossover design was combined with a distributed lag non-linear model to assess the effects of temperature on mortality in Tianjin; this allows the case-crossover design to flexibly estimate the non-linear and delayed effects of temperature. Both extreme cold and high temperatures increased the risk of mortality in Tianjin. Time series models using a single site's temperature, or temperature averaged across several sites, can be used to examine the effects of temperature on mortality. Temperature change, whether a large drop or a large rise, increases the risk of mortality. The effect of high temperatures on mortality is highly variable from year to year.
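The between-day temperature change measure and the time-series Poisson regression described above can be illustrated with a short sketch. This is not the thesis's actual model: the pandas DataFrame `df` and its columns `deaths` and `mean_temp` are hypothetical, the 3 °C indicators are simplifications, and the seasonal, humidity and air-pollution adjustments used in the thesis are omitted.

```python
# Illustrative sketch only: a simplified time-series Poisson regression of daily
# deaths on between-day temperature change. Assumes a hypothetical pandas
# DataFrame `df` with daily columns 'deaths' and 'mean_temp'.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_temp_change_model(df: pd.DataFrame):
    df = df.copy()
    # Temperature change = current day's mean temperature minus the previous day's mean.
    df["temp_change"] = df["mean_temp"] - df["mean_temp"].shift(1)
    # Indicators for a drop or rise of more than 3 degrees C between days.
    df["big_drop"] = (df["temp_change"] < -3).astype(int)
    df["big_rise"] = (df["temp_change"] > 3).astype(int)
    df = df.dropna()
    # Poisson regression of daily deaths on the change indicators, controlling
    # for the current day's mean temperature (other covariates omitted here).
    model = smf.glm(
        "deaths ~ big_drop + big_rise + mean_temp",
        data=df,
        family=sm.families.Poisson(),
    ).fit()
    # Exponentiated coefficients play the role of the relative risks quoted above.
    return np.exp(model.params), model
```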
Abstract:
Language has been of interest to numerous economists since the late 20th century, with the majority of studies focusing on its effects on immigrants' labour market outcomes, earnings in particular. However, language is an endogenous variable, and this, along with its susceptibility to measurement error, biases ordinary-least-squares estimates. The instrumental variables method overcomes the shortcomings of ordinary least squares in modelling endogenous explanatory variables. In this dissertation, age at arrival combined with country of origin forms an instrument creating a difference-in-difference scenario, to address the issues of endogeneity and attenuation error in language proficiency. The first half of the study investigates the extent to which the English speaking ability of immigrants improves their labour market outcomes and social assimilation in Australia, using the 2006 Census. The findings provide evidence supporting the earlier studies. As expected, immigrants in Australia with better language proficiency earn higher incomes, attain a higher level of education, have a higher probability of completing tertiary studies, and work more hours per week. Language proficiency also improves social integration, leading to a higher probability of marriage to a native and a higher probability of obtaining citizenship. The second half of the study further investigates whether language proficiency has similar effects on a migrant's physical and mental wellbeing, health care access and lifestyle choices, using three National Health Surveys. However, only limited evidence has been found for the hypothesised causal relationship between language and health for Australian immigrants.
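The age-at-arrival instrument described above can be sketched as a small two-stage least squares exercise. This is an illustration under stated assumptions, not the dissertation's specification: the DataFrame `df` and the columns `log_income`, `english_proficiency`, `arrived_young` and `non_english_origin` are hypothetical, and the difference-in-difference logic enters through the interaction of arriving young with coming from a non-English-speaking country.

```python
# Illustrative sketch only: manual 2SLS using the interaction of young arrival
# with non-English-speaking origin as the instrument for English proficiency.
# All column names are hypothetical placeholders.
import statsmodels.formula.api as smf

def did_iv_sketch(df):
    df = df.copy()
    # Instrument: young arrivals from non-English-speaking countries gain
    # proficiency relative to young arrivals from English-speaking countries,
    # while late arrivals in both groups net out other age-at-arrival effects.
    df["instrument"] = df["arrived_young"] * df["non_english_origin"]

    # First stage: predict proficiency from the instrument and the main effects.
    first = smf.ols(
        "english_proficiency ~ instrument + arrived_young + non_english_origin",
        data=df,
    ).fit()
    df["proficiency_hat"] = first.fittedvalues

    # Second stage: regress earnings on predicted proficiency. (Standard errors
    # from this manual second stage are not 2SLS-corrected; a dedicated IV
    # routine would be used in practice.)
    second = smf.ols(
        "log_income ~ proficiency_hat + arrived_young + non_english_origin",
        data=df,
    ).fit()
    return first, second
```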
Abstract:
Purpose – The question of whether female-owned firms underperform male-owned firms has triggered much research and discussion. Klapper and Parker's review concluded that the majority of prior research suggests that female-owned firms underperform relative to male-owned firms. However, using performance measures that control for size and risk (and after controlling for demographic differences such as industry, experience and hours worked) Robb and Watson found no gender performance difference in their sample of newly established US firms. The aim of this study, therefore, is to replicate Robb and Watson's study to determine whether their findings can be generalized to another geographical location, Australia. Design/methodology/approach – The authors test the female underperformance hypothesis using data from the CAUSEE project, a panel study which follows young firms over four years. They use three outcome variables: survival rates, return on assets and the Sharpe ratio. Findings – Consistent with Robb and Watson the results indicate that female-owned firms do not underperform male-owned firms. Originality/value – While replication studies are rare in entrepreneurship, they are an important tool for accumulating generalizable knowledge. The results suggest that while female-owned firms differ from male-owned firms in terms of many control variables (such as industry, owners' previous experience and hours worked) they are no less successful. This outcome should help dispel the female underperformance myth; which if left unchallenged could result in inappropriate policy decisions and, more importantly, could discourage women from establishing new ventures.
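One of the three outcome variables, the Sharpe ratio, is a risk-adjusted performance measure: mean excess return per unit of return volatility. A minimal sketch of that calculation, with made-up firm returns and a hypothetical risk-free rate, is:

```python
# Illustrative sketch only: a Sharpe-type risk-adjusted performance measure,
# mean excess return divided by the standard deviation of returns.
import numpy as np

def sharpe_ratio(returns, risk_free_rate=0.0):
    returns = np.asarray(returns, dtype=float)
    excess = returns - risk_free_rate
    return excess.mean() / excess.std(ddof=1)

# Example with hypothetical annual returns on assets for one firm:
print(sharpe_ratio([0.12, 0.05, -0.02, 0.09], risk_free_rate=0.03))
```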
Abstract:
Human immunodeficiency virus (HIV), which leads to acquired immune deficiency syndrome (AIDS), reduces immune function, resulting in opportunistic infections and, later, death. Use of antiretroviral therapy (ART) increases the chances of survival; however, there are concerns regarding fat re-distribution (lipodystrophy), which may encompass subcutaneous fat loss (lipoatrophy) and/or fat accumulation (lipohypertrophy) in the same individual. This problem has been linked to antiretroviral drugs (ARVs), mainly those in the class of protease inhibitors (PIs), in addition to older age and being female. An additional concern is that the problem coexists with the metabolic syndrome, yet nutritional status/body composition and the extent of lipodystrophy and the metabolic syndrome remain unclear in Uganda, where the use of ARVs is increasing. In line with the literature, the overall aim of the study was to assess the physical characteristics of HIV-infected patients using a comprehensive anthropometric protocol and to predict body composition based on these measurements and other standardised techniques. The other aim was to establish the existence of lipodystrophy, the metabolic syndrome, and associated risk factors. Thus, three studies were conducted on 211 (88 ART-naïve) HIV-infected, 15-49 year-old women, using a cross-sectional approach, together with a qualitative study of secondary information on patient HIV and medication status. In addition, face-to-face interviews were used to extract information concerning morphological experiences and lifestyle. Participants were on average 34.1±7.65 years old, had lived 4.63±4.78 years with HIV infection and had spent 2.8±1.9 years receiving ARVs. Only 8.1% of participants were receiving PIs, and 26% of those receiving ART had ever changed drug regimen, 15.5% of whom changed drugs due to lipodystrophy. Study 1 hypothesised that the mean nutritional status and predicted percent body fat values of study participants were within acceptable ranges; that they differed between participants receiving ARVs and the HIV-infected ART-naïve participants; and that percent body fat estimated by anthropometric measures (BMI and skinfold thickness) and the BIA technique was not different from that predicted by the deuterium oxide dilution technique. Using the Body Mass Index (BMI), 7.1% of patients were underweight (<18.5 kg/m2) and 46.4% were overweight/obese (≥25.0 kg/m2). Based on waist circumference (WC), approximately 40% of the cohort was characterised as centrally obese. Moreover, the deuterium dilution technique showed no between-group difference in total body water (TBW), fat mass (FM) or fat-free mass (FFM). However, the technique was the only approach to detect a between-group difference in percent body fat (p = .045), but with a very small effect size (0.021). Older age (β = 0.430, se = 0.089, p = .000), time spent receiving ARVs (β = 0.972, se = 0.089, p = .006), time with the infection (β = 0.551, se = 0.089, p = .000) and receiving ARVs (β = 2.940, se = 1.441, p = .043) were independently associated with percent body fat. Older age was the greatest single predictor of body fat. Furthermore, BMI gave better information than weight alone could, in that mean percentage body fat per unit BMI (N = 192) was significantly higher in patients receiving treatment (1.11±0.31) than in the exposed group (0.99±0.38, p = .025).
For the assessment of obesity, percent fat measures did not greatly alter the accuracy of BMI as a measure for classifying individuals into the broad categories of underweight, normal and overweight. Briefly, Study 1 revealed that there were more overweight/obese participants than in the general Ugandan population, that the problem was associated with ART status, and that BMI's broader classification categories were maintained when compared with the gold standard technique. Study 2 hypothesised that the presence of lipodystrophy in participants receiving ARVs was not different from that of HIV-infected ART-naïve participants. Results showed that 112 (53.1%) patients had experienced at least one morphological alteration, including lipohypertrophy (7.6%), lipoatrophy (10.9%), and mixed alterations (34.6%). The majority of these subjects (90%) were receiving ARVs; in fact, all patients receiving PIs reported lipodystrophy. Period spent receiving ARVs (t209 = 6.739, p = .000), being on ART (χ2 = 94.482, p = .000), receiving PIs (Fisher's exact χ2 = 113.591, p = .000), recent T4 count (CD4 counts) (t207 = 3.694, p = .000), time with HIV (t125 = 1.915, p = .045), as well as older age (t209 = 2.013, p = .045) were independently associated with lipodystrophy. Receiving ARVs was the greatest predictor of lipodystrophy (p = .000). In other analyses, aside from the subscapular skinfold (p = .004), there were no differences in the remaining skinfold sites or circumferences between participants with lipodystrophy and those without the problem. Similarly, there was no difference in waist:hip ratio (WHR) (p = .186) or waist:height ratio (WHtR) (p = .257) between participants with lipodystrophy and those without the problem. Further examination showed that none of the 4.1% of patients receiving stavudine (d4T) experienced lipoatrophy. However, 17.9% of patients receiving EFV, a non-nucleoside reverse transcriptase inhibitor (NNRTI), had lipoatrophy. Study 2 findings showed that the presence of lipodystrophy in participants receiving ARVs was in fact far higher than that of HIV-infected ART-naïve participants. A final hypothesis was that the prevalence of the metabolic syndrome in participants receiving ARVs was not different from that of HIV-infected ART-naïve participants. Data showed that many patients (69.2%) lived with at least one feature of the metabolic syndrome based on the International Diabetes Federation (IDF, 2006) definition. However, there was no single anthropometric predictor of the components of the syndrome; the best anthropometric predictor varied as the component varied. The metabolic syndrome was diagnosed in 15.2% of the subjects, lower than commonly reported in this population, and was similar between the medicated and the exposed groups (χ²(1) = 0.018, p = .893). Moreover, the syndrome was associated with older age (p = .031) and percent body fat (p = .012). In addition, participants with the syndrome were heavier according to BMI (p = .000), larger at the waist (p = .000) and abdomen (p = .000), and were at central obesity risk even when hip circumference (p = .000) and height (p = .000) were accounted for. In spite of those associations, results showed that the period with disease (p = .13), CD4 counts (p = .836), and receiving ART (p = .442) or PIs (p = .678) were not associated with the metabolic syndrome. While the prevalence of the syndrome was highest amongst the older, larger and fatter participants, WC was the best predictor of the metabolic syndrome (p = .001).
Another novel finding was that participants with the metabolic syndrome had greater arm muscle circumference (AMC) (p = .000) and arm muscle area (AMA) (p = .000), the former being the more influential. Accordingly, WC was the easiest and cheapest indicator for assessing risk in this study sample, should routine laboratory services not be feasible. In addition, the final study illustrated that the prevalence of the metabolic syndrome in participants receiving ARVs was not different from that of HIV-infected ART-naïve participants.
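The BMI cut-offs used in Study 1 (underweight < 18.5 kg/m², overweight/obese ≥ 25.0 kg/m²) follow the standard formula BMI = weight (kg) / height (m)². A minimal sketch of that classification is shown below; the measurements are hypothetical and the intermediate cut-off is the conventional one, not a value taken from the thesis.

```python
# Illustrative sketch only: BMI (kg/m^2) and the broad categories used in Study 1.
# Inputs are hypothetical; the 18.5 and 25.0 cut-offs are quoted in the abstract.
def bmi_category(weight_kg: float, height_m: float):
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        category = "underweight"
    elif bmi < 25.0:
        category = "normal"
    else:
        category = "overweight/obese"
    return bmi, category

print(bmi_category(weight_kg=70.0, height_m=1.62))  # roughly 26.7, 'overweight/obese'
```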
Abstract:
Madeira vine (Anredera cordifolia (Ten.) Steenis) is a climber in the angiosperm family Basellaceae. It is native to South America and has naturalised in Australia. It is regarded as a serious environmental weed because of the structural damage it causes to native vegetation. The present study, for the first time, documents anatomical and morphological traits of the leaves of A. cordifolia and considers their implications for its ecology and physiology. Plants were grown under three different light levels, and anatomical and morphological leaf characters were compared among light levels, among cohorts, and with documented traits of the related species, Basella alba L. Stomata were present on both the adaxial and abaxial sides of the leaf, with significantly more stomata on the abaxial side and under high light. This may account for the ability of this species to fix large amounts of carbon and rapidly respond to light gaps. The leaves had very narrow veins and no sclerenchyma, suggesting a low construction cost that is associated with invasive plants. There was no significant difference in any of the traits among different cohorts, which agrees with the claim that A. cordifolia primarily propagates vegetatively. The anatomy and morphology of A. cordifolia was similar to that of B. alba.
Abstract:
This study was designed to examine the metabonomic characteristics of hepatotoxicity induced by alcohol and the intervention effects of Yin Chen Hao Tang (YCHT), a classic traditional Chinese medicine formula used in China for the treatment of jaundice and liver disorders. Urinary samples from control, alcohol- and YCHT-treated rats were analyzed by ultra-performance liquid chromatography/electrospray ionization quadrupole time-of-flight mass spectrometry (UPLC/ESI-QTOF-MS) in positive ionization mode. The total ion chromatograms obtained from the control, alcohol- and YCHT-treated rats were easily distinguishable using a multivariate statistical analysis method such as principal components analysis (PCA). The greatest difference in metabolic profiling was observed in alcohol-treated rats compared with the control and YCHT-treated rats. The positive ion at m/z 664.3126 (9.00 min) was elevated in the urine of alcohol-treated rats, whereas the ions at m/z 155.3547 (10.96 min) and 708.2932 (9.01 min) were at lower concentrations than in the urine of control rats; however, these ions did not differ significantly between control and YCHT-treated rats. The ion at m/z 664.3126 was found to correspond to ceramide (d18:1/25:0), providing further support for an involvement of the sphingomyelin signaling pathway in alcohol hepatotoxicity and the intervention effects of YCHT.
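Principal components analysis of the UPLC/ESI-QTOF-MS profiles is the step that separates the three treatment groups above. The sketch below is a generic illustration of that kind of analysis, not the authors' processing pipeline: a random matrix stands in for the real sample-by-feature intensity table, and the autoscaling step is a common convention rather than something stated in the abstract.

```python
# Illustrative sketch only: PCA of a hypothetical feature-intensity matrix
# (rows = urine samples, columns = m/z features) to visualise group separation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def pca_scores(intensity_matrix, n_components=2):
    # Autoscale each feature (mean 0, unit variance) before PCA.
    scaled = StandardScaler().fit_transform(intensity_matrix)
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(scaled)
    return scores, pca.explained_variance_ratio_

# Random data standing in for real intensities (30 samples x 500 features):
rng = np.random.default_rng(0)
scores, variance_ratio = pca_scores(rng.normal(size=(30, 500)))
print(variance_ratio)
```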
Abstract:
BACKGROUND: Genetic susceptibility to multiple sclerosis (MS) has been recognised for many years. Considerable data exist from the northern hemisphere regarding the familial recurrence risks for MS, but there are few data for the southern hemisphere and regions at lower latitude such as Australia. To investigate the interaction between environmental and genetic causative factors in MS, the authors undertook a familial recurrence risk study in three latitudinally distinct regions of Australia. METHODS: Immediate and extended family pedigrees have been collected for three cohorts of people with MS in Queensland, Victoria and Tasmania spanning 15° of latitude. Age of onset data from Queensland were utilised to estimate age-adjusted recurrence rates. RESULTS: Recurrence risks in Australia were significantly lower than in studies from northern hemisphere populations. The age-adjusted risk for siblings across Australia was 2.13% compared with 3.5% for the northern hemisphere. A similar pattern was seen for other relatives. The risks to relatives were proportional to the population risks for each site, and hence the sibling recurrence-risk ratio (λ(s)) was similar across all sites. DISCUSSION: The familial recurrence risk of MS in Australia is lower than in previously reported studies. This is directly related to the lower population prevalence of MS. The overall genetic susceptibility in Australia as measured by the λ(s) is similar to the northern hemisphere, suggesting that the difference in population risk is explained largely by environmental factors rather than by genetic admixture.
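The sibling recurrence-risk ratio λ(s) used above is conventionally defined as the sibling recurrence risk divided by the population prevalence, which is why proportionally lower sibling and population risks in Australia leave the ratio similar to northern-hemisphere values. The notation below is the conventional one, not symbols taken from the paper:

```latex
% Conventional definition of the sibling recurrence-risk ratio:
%   K_s = (age-adjusted) recurrence risk in siblings of MS cases
%   K   = population prevalence of MS
\lambda_s = \frac{K_s}{K}
```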
Abstract:
Aim: To describe the recruitment, ophthalmic examination methods and distribution of ocular biometry of participants in the Norfolk Island Eye Study, who were individuals descended from the English Bounty mutineers and their Polynesian wives. Methods: All 1,275 permanent residents of Norfolk Island aged over 15 years were invited to participate, including 602 individuals involved in a 2001 cardiovascular disease study. Participants completed a detailed questionnaire and underwent a comprehensive eye assessment including stereo disc and retinal photography, optical coherence tomography and conjunctival autofluorescence assessment. Additionally, blood or saliva was taken for DNA testing. Results: 781 participants aged over 15 years were seen (54% female), comprising 61% of the permanent Island population. 343 people (43.9%) could trace their family history to the Pitcairn Islanders (Norfolk Island Pitcairn Pedigree). Mean anterior chamber depth was 3.32 mm, mean axial length (AL) was 23.5 mm, and mean central corneal thickness was 546 microns. There were no statistically significant differences in these characteristics between persons with and without Pitcairn Island ancestry. Mean intra-ocular pressure was lower in people with Pitcairn Island ancestry (15.89 mmHg) than in those without Pitcairn Island ancestry (16.49 mmHg; P = .007). The mean keratometry value was also lower in people with Pitcairn Island ancestry (43.22 vs. 43.52, P = .007). The corneas were flatter in people of Pitcairn ancestry but there was no corresponding difference in AL or refraction. Conclusion: Our study population is highly representative of the permanent population of Norfolk Island. Ocular biometry was similar to that of other white populations. Heritability estimates, linkage analysis and genome-wide studies will further elucidate the genetic determinants of chronic ocular diseases in this genetic isolate.
Abstract:
Background Migraine is a polygenic multifactorial disease, possessing environmental and genetic causative factors with multiple involved genes. Mutations in various ion channel genes are responsible for a number of neurological disorders. KCNN3 is a neuronal small conductance calcium-activated potassium channel gene that contains two polyglutamine tracts, encoded by polymorphic CAG repeats in the gene. This gene plays a critical role in determining the firing pattern of neurons and acts to regulate intracellular calcium channels. Methods The present association study tested whether length variations in the second (more 3') polymorphic CAG repeat in exon 1 of the KCNN3 gene are involved in susceptibility to migraine with and without aura (MA and MO). In total, 423 DNA samples from unrelated individuals, of which 202 were from migraine patients and 221 from non-migraine controls, were genotyped and analysed using a fluorescence-labelled primer set on an ABI310 Genetic Analyzer. Allele frequencies were calculated from observed genotype counts for the KCNN3 polymorphism. Analysis was performed using standard contingency table analysis, incorporating the chi-squared test of independence, and CLUMP analysis. Results Overall, there was no convincing evidence that KCNN3 CAG lengths differ between Caucasian migraineurs and controls, with no significant difference in the allelic length distribution of CAG repeats between the population groups (P = 0.090). The MA and MO subtypes also did not differ significantly from the control allelic distributions (P > 0.05). The prevalence of the long CAG repeat (>19 repeats) did not reach statistical significance in migraineurs (P = 0.15), nor was a significant difference observed between the MA and MO subgroups compared with controls (P = 0.46 and P = 0.09, respectively), or between MA and MO (P = 0.40). Conclusion This association study provides no evidence that length variations of the second polyglutamine array in the N-terminus of the KCNN3 channel exert an effect in the pathogenesis of migraine.
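The standard contingency table analysis mentioned above amounts to a chi-squared test of independence on allele-length counts by group. A minimal sketch with scipy is given below; the table of counts is hypothetical and does not reproduce the study's allele distributions.

```python
# Illustrative sketch only: chi-squared test of independence on a hypothetical
# contingency table of CAG-repeat length category (rows) by group (columns:
# migraineurs, controls).
from scipy.stats import chi2_contingency

table = [
    [40, 55],    # short repeat alleles (hypothetical counts)
    [120, 130],  # intermediate repeat alleles
    [42, 36],    # long (>19) repeat alleles
]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}, dof = {dof}")
```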
Abstract:
Migraine, with and without aura (MA and MO), is a prevalent and complex neurovascular disorder that is likely to be influenced by multiple genes, some of which may be capable of causing vascular changes leading to disease onset. This study was conducted to determine whether the ACE I/D gene variant is involved in migraine risk and whether this variant might act in combination with the previously implicated MTHFR C677T genetic variant in 270 migraine cases and 270 matched controls. Statistical analysis of the ACE I/D variant indicated no significant difference in allele or genotype frequencies (P > 0.05). However, grouping of genotypes showed a modest, yet significant, over-representation of the DD/ID genotype in the migraine group (88%) compared to controls (81%) (OR of 1.64, 95% CI: 1.00–2.69, P = 0.048). Multivariate analysis, including genotype data for the MTHFR C677T, provided evidence that the MTHFR (TT) and ACE (ID/DD) genotypes act in combination to increase migraine susceptibility (OR = 2.18, 95% CI: 1.15–4.16, P = 0.018). This effect was greatest for the MA subtype, where the genotype combination corresponded to an OR of 2.89 (95% CI: 1.47–5.72, P = 0.002). In Caucasians, the ACE D allele confers a weak independent risk of migraine and also appears to act in combination with the C677T variant in the MTHFR gene to confer a stronger influence on the disease.
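The odds ratios reported above come from standard 2 x 2 table arithmetic, shown here with a log-scale (Woolf) confidence interval. The counts in the sketch are deliberately arbitrary placeholders, not the study's genotype data.

```python
# Illustrative sketch only: odds ratio and 95% CI (Woolf / log method) from a
# 2x2 table of genotype group (risk vs reference) by case status.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = risk-genotype cases/controls; c, d = reference-genotype cases/controls."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lower, upper)

print(odds_ratio_ci(a=90, b=75, c=30, d=45))  # OR = 1.8 with its 95% CI
```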
Abstract:
Migraine is a common neurological condition with a complex mode of inheritance. Steroid hormones have long been implicated in migraine, although their role remains unclear. Our investigation considered that genes involved in hormonal pathways may play a role in migraine susceptibility. We therefore investigated the androgen receptor (AR) CAG repeat, and the progesterone receptor (PR) PROGINS insert by cross-sectional association analysis. The results showed no association with the AR CAG repeat in our study group of 275 migraineurs and 275 unrelated controls. Results of the PR PROGINS analysis showed a significant difference in the same cohort, and in an independent follow-up study population of 300 migraineurs and 300 unrelated controls. Analysis of the genotypic risk groups of both populations together indicated that individuals who carried the PROGINS insert were 1.8 times more likely to suffer migraine. Interaction analysis of the PROGINS variant with our previously reported associated ESR1 594A variant showed that individuals who possessed at least one copy of both risk alleles were 3.2 times more likely to suffer migraine. Hence, variants of these steroid hormone receptor genes appear to act synergistically to increase the risk of migraine by a factor of three.
Abstract:
BACKGROUND: US Centers for Disease Control guidelines recommend replacement of peripheral intravenous (IV) catheters no more frequently than every 72 to 96 hours. Routine replacement is thought to reduce the risk of phlebitis and bloodstream infection. Catheter insertion is an unpleasant experience for patients and replacement may be unnecessary if the catheter remains functional and there are no signs of inflammation. Costs associated with routine replacement may be considerable. This is an update of a review first published in 2010. OBJECTIVES: To assess the effects of removing peripheral IV catheters when clinically indicated compared with removing and re-siting the catheter routinely. SEARCH METHODS: For this update the Cochrane Peripheral Vascular Diseases (PVD) Group Trials Search Co-ordinator searched the PVD Specialised Register (December 2012) and CENTRAL (2012, Issue 11). We also searched MEDLINE (last searched October 2012) and clinical trials registries. SELECTION CRITERIA: Randomised controlled trials that compared routine removal of peripheral IV catheters with removal only when clinically indicated in hospitalised or community dwelling patients receiving continuous or intermittent infusions. DATA COLLECTION AND ANALYSIS: Two review authors independently assessed trial quality and extracted data. MAIN RESULTS: Seven trials with a total of 4895 patients were included in the review. Catheter-related bloodstream infection (CRBSI) was assessed in five trials (4806 patients). There was no significant between group difference in the CRBSI rate (clinically-indicated 1/2365; routine change 2/2441). The risk ratio (RR) was 0.61 but the confidence interval (CI) was wide, creating uncertainty around the estimate (95% CI 0.08 to 4.68; P = 0.64). No difference in phlebitis rates was found whether catheters were changed according to clinical indications or routinely (clinically-indicated 186/2365; 3-day change 166/2441; RR 1.14, 95% CI 0.93 to 1.39). This result was unaffected by whether infusion through the catheter was continuous or intermittent. We also analysed the data by number of device days and again no differences between groups were observed (RR 1.03, 95% CI 0.84 to 1.27; P = 0.75). One trial assessed all-cause bloodstream infection. There was no difference in this outcome between the two groups (clinically-indicated 4/1593 (0.02%); routine change 9/1690 (0.05%); P = 0.21). Cannulation costs were lower by approximately AUD 7.00 in the clinically-indicated group (mean difference (MD) -6.96, 95% CI -9.05 to -4.86; P ≤ 0.00001). AUTHORS' CONCLUSIONS: The review found no evidence to support changing catheters every 72 to 96 hours. Consequently, healthcare organisations may consider changing to a policy whereby catheters are changed only if clinically indicated. This would provide significant cost savings and would spare patients the unnecessary pain of routine re-sites in the absence of clinical indications. To minimise peripheral catheter-related complications, the insertion site should be inspected at each shift change and the catheter removed if signs of inflammation, infiltration, or blockage are present.