927 results for 860[729.1].07[Sarduy]


Relevance: 100.00%

Abstract:

Background: At present there are no large-scale nationally representative studies from Sri Lanka on the prevalence and associations of Diabetic Retinopathy (DR). The present study aims to evaluate the prevalence of, and risk factors for, DR in a community-based nationally representative sample of adults with self-reported diabetes mellitus from Sri Lanka. Methods: A cross-sectional community-based national study among 5,000 adults (≥18 years) was conducted in Sri Lanka, using a multi-stage stratified cluster sampling technique. An interviewer-administered questionnaire was used to collect data. Ophthalmological evaluation of patients with ‘known’ diabetes (previously diagnosed at a government hospital or by a registered medical practitioner) was done using indirect ophthalmoscopy. A binary logistic regression analysis was performed with ‘presence of DR’ as the dichotomous dependent variable and other independent covariates. Results: Crude prevalence of diabetes was 12.0% (n=536), of which 344 were patients with ‘known’ diabetes. Mean age was 56.4 ± 10.9 years and 37.3% were males. Prevalence of any degree of DR was 27.4% (males 30.5%, females 25.6%; p = 0.41). Among patients with DR, the majority had non-proliferative DR (NPDR) (93.4%), while 5.3% had maculopathy. Patients with DR had a significantly longer duration of diabetes than those without. In the binary logistic regression analysis in all adults, duration of diabetes (OR: 1.07), current smoking (OR: 1.67) and peripheral neuropathy (OR: 1.72) were all significantly associated with DR. Conclusions: Nearly one third of Sri Lankan adults with self-reported diabetes have retinopathy. DR was associated with diabetes duration, cigarette smoking and peripheral neuropathy. However, further prospective follow-up studies are required to establish causality for the identified risk factors.
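The per-unit odds ratios reported in the abstract compound multiplicatively over a covariate's range. A minimal Python sketch of how the per-year OR of 1.07 for diabetes duration scales over a decade (the 10-year horizon is an illustrative choice, not from the study):

```python
# Per-year odds ratio for diabetes duration, as reported in the abstract.
OR_PER_YEAR = 1.07

# Logistic-regression odds ratios compound multiplicatively, so a
# 10-year longer duration scales the odds of DR by OR^10.
or_10_years = OR_PER_YEAR ** 10
print(round(or_10_years, 2))  # roughly a doubling of the odds
```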

Relevance: 100.00%

Abstract:

Background and purpose: The purpose of this study was to examine the feasibility of developing predictive plasma biomarkers of cerebral ischemic stroke before imaging evidence is acquired. Methods: Blood samples were obtained from 198 patients who attended our neurology department as emergencies - with symptoms of vertigo, numbness, limb weakness, etc. - within 4.5 h of symptom onset, before imaging evidence was obtained and before medical treatment. After the final diagnosis was made by MRI/DWI/MRA or CTA in the following 24-72 h, the cases were divided into a stroke group and a non-stroke group according to the imaging results. The levels of baseline plasma antithrombin III (AT-III), thrombin-antithrombin III (TAT), fibrinogen, D-dimer and high-sensitivity C-reactive protein (hsCRP) in the two groups were assayed. Results: The baseline AT-III level in the stroke group was 118.07 ± 26.22%, lower than that of the non-stroke group (283.83 ± 38.39%). The levels of TAT, fibrinogen and hsCRP were 7.24 ± 2.28 μg/L, 5.49 ± 0.98 g/L, and 2.17 ± 1.07 mg/L, respectively, all higher than those of the non-stroke group (2.53 ± 1.23 μg/L, 3.35 ± 0.50 g/L, 1.82 ± 0.67 mg/L). All these P-values were less than 0.001. The D-dimer level was 322.57 ± 60.34 μg/L, slightly higher than that of the non-stroke group (305.76 ± 49.52 μg/L), but the P-value was 0.667. The sensitivities of AT-III, TAT, fibrinogen, D-dimer and hsCRP for predicting ischemic stroke tendency were 97.37%, 96.05%, 3.29%, 7.89%, while the specificities were 93.62%, 82.61%, 100% and 100%, respectively, and all the P-values were less than 0.001. High levels of D-dimer and hsCRP were mainly seen in the few cases with severe large-vessel infarction. Conclusions: Clinical manifestations of acute focal neurological deficits were associated with plasma AT-III and fibrinogen.
These tests might help the risk assessment of acute cerebral ischemic stroke and/or TIA with infarction tendency in the superacute stage before positive imaging evidence is obtained.
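Sensitivity and specificity figures like those above come from the diagnostic 2×2 table. A minimal sketch with hypothetical counts (the study's raw cell counts are not given in the abstract):

```python
def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: diseased patients correctly flagged."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: non-diseased patients correctly cleared."""
    return tn / (tn + fp)

# Hypothetical counts, not the study's data.
print(sensitivity(tp=90, fn=10))  # 0.9
print(specificity(tn=95, fp=5))   # 0.95
```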

Relevance: 100.00%

Abstract:

- Introduction There is limited understanding of how young adults' driving behaviour varies according to long-term substance involvement. It is possible that regular users of amphetamine-type stimulants (i.e. ecstasy (MDMA) and methamphetamine) may have a greater predisposition to engage in drink/drug driving compared to non-users. We compare offence rates and self-reported drink/drug driving rates for stimulant users and non-users in Queensland, and examine contributing factors. - Methods The Natural History Study of Drug Use is a prospective longitudinal study using population screening to recruit a probabilistic sample of amphetamine-type stimulant users and non-users aged 19-23 years. At the 4½-year follow-up, consent was obtained to extract data from participants' Queensland driver records (ATS users: n=217, non-users: n=135). Prediction models of offence rates in stimulant users were developed, controlling for factors such as aggression and delinquency. - Results Stimulant users were more likely than non-users to have had a drink-driving offence (8.7% vs. 0.8%, p < 0.001). Further, about 26% of ATS users and 14% of non-users self-reported driving under the influence of alcohol during the last 12 months. Among stimulant users, drink-driving was independently associated with last-month high-volume alcohol consumption (Incidence Rate Ratio (IRR): 5.70, 95% CI: 2.24-14.52), depression (IRR: 1.28, 95% CI: 1.07-1.52), low income (IRR: 3.57, 95% CI: 1.12-11.38), and male gender (IRR: 5.40, 95% CI: 2.05-14.21). - Conclusions Amphetamine-type stimulant use is associated with an increased long-term risk of drink-driving, due to a number of behavioural and social factors. Inter-sectoral approaches which target long-term behaviours may reduce offending rates.
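The adjusted incidence-rate ratios above come from the prediction models, but the crude group contrast in drink-driving offences can also be expressed as an unadjusted odds ratio from the reported percentages. A sketch (this crude OR is an illustration, not a figure from the paper):

```python
def crude_odds_ratio(p1: float, p2: float) -> float:
    """Unadjusted odds ratio comparing two proportions."""
    return (p1 / (1 - p1)) / (p2 / (1 - p2))

# Offence percentages reported for stimulant users vs non-users.
print(round(crude_odds_ratio(0.087, 0.008), 1))
```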

Relevance: 100.00%

Abstract:

Objective: To examine the association between preoperative quality of life (QoL) and postoperative adverse events in women treated for endometrial cancer. Methods: 760 women with apparent Stage I endometrial cancer were randomised into a clinical trial evaluating laparoscopic versus open surgery. This analysis includes women with preoperative QoL measurements, from the Functional Assessment of Cancer Therapy-General (FACT-G) questionnaire, who were followed up for at least 6 weeks after surgery (n=684). The outcomes for this study were defined as (1) the occurrence of moderate to severe adverse events (AEs) within 6 months (Common Toxicity Criteria (CTC) grade ≥3); and (2) any Serious Adverse Event (SAE). The association between preoperative QoL and the occurrence of AEs was examined, after controlling for baseline comorbidity and other factors. Results: After adjusting for other factors, the odds of an AE of CTC grade ≥3 were significantly increased with each unit decrease in baseline FACT-G score (OR=1.02, 95% CI 1.00-1.03, p=0.030), driven by the physical well-being (PWB) (OR=1.09, 95% CI 1.04-1.13, p=0.0002) and functional well-being (FWB) subscales (OR=1.04, 95% CI 1.00-1.07, p=0.035). Similarly, the odds of SAE occurrence were significantly increased with each unit decrease in baseline FACT-G score (OR=1.02, 95% CI 1.01-1.04, p=0.011), baseline PWB (OR=1.11, 95% CI 1.06-1.16, p<0.0001) or baseline FWB subscale (OR=1.05, 95% CI 1.01-1.10, p=0.0077). Conclusion: Women with early endometrial cancer presenting with lower QoL prior to surgery are at higher risk of a serious adverse event following surgery.
Funding: Cancer Council Queensland, Cancer Council New South Wales, Cancer Council Victoria, Cancer Council, Western Australia; NHMRC project grant 456110; Cancer Australia project grant 631523; The Women and Infants Research Foundation, Western Australia; Royal Brisbane and Women’s Hospital Foundation; Wesley Research Institute; Gallipoli Research Foundation; Gynetech; TYCO Healthcare, Australia; Johnson and Johnson Medical, Australia; Hunter New England Centre for Gynaecological Cancer; Genesis Oncology Trust; and Smart Health Research Grant QLD Health.

Relevance: 100.00%

Abstract:

Objective To quantify and compare the treatment effect and risk of bias of trials reporting biomarkers or intermediate outcomes (surrogate outcomes) versus trials using final patient relevant primary outcomes. Design Meta-epidemiological study. Data sources All randomised clinical trials published in 2005 and 2006 in six high impact medical journals: Annals of Internal Medicine, BMJ, Journal of the American Medical Association, Lancet, New England Journal of Medicine, and PLoS Medicine. Study selection Two independent reviewers selected trials. Data extraction Trial characteristics, risk of bias, and outcomes were recorded according to a predefined form. Two reviewers independently checked data extraction. The ratio of odds ratios was used to quantify the degree of difference in treatment effects between the trials using surrogate outcomes and those using patient relevant outcomes, also adjusted for trial characteristics. A ratio of odds ratios >1.0 implies that trials with surrogate outcomes report larger intervention effects than trials with patient relevant outcomes. Results 84 trials using surrogate outcomes and 101 using patient relevant outcomes were considered for analyses. Study characteristics of trials using surrogate outcomes and those using patient relevant outcomes were well balanced, except for median sample size (371 v 741) and single centre status (23% v 9%). Their risk of bias did not differ. Primary analysis showed trials reporting surrogate endpoints to have larger treatment effects (odds ratio 0.51, 95% confidence interval 0.42 to 0.60) than trials reporting patient relevant outcomes (0.76, 0.70 to 0.82), with an unadjusted ratio of odds ratios of 1.47 (1.07 to 2.01) and adjusted ratio of odds ratios of 1.46 (1.05 to 2.04). This result was consistent across sensitivity and secondary analyses. 
Conclusions Trials reporting surrogate primary outcomes are more likely to report larger treatment effects than trials reporting final patient relevant primary outcomes. This finding was not explained by differences in the risk of bias or characteristics of the two groups of trials.
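The headline comparison can be reproduced to a first approximation from the pooled estimates: the ratio of odds ratios divides the pooled effect in patient-relevant-outcome trials by that in surrogate-outcome trials. The paper's meta-analytic computation works trial by trial, so this back-of-envelope value only approximates the reported unadjusted 1.47:

```python
# Pooled odds ratios reported in the abstract (OR < 1 favours the intervention).
or_surrogate = 0.51         # trials with surrogate outcomes
or_patient_relevant = 0.76  # trials with patient relevant outcomes

# ROR > 1 means surrogate-outcome trials report larger intervention effects.
ror = or_patient_relevant / or_surrogate
print(round(ror, 2))
```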

Relevance: 100.00%

Abstract:

Background Studies of mid-aged adults provide evidence of a relationship between sitting-time and all-cause mortality, but evidence in older adults is limited. The aim is to examine the relationship between total sitting-time and all-cause mortality in older women. Methods The prospective cohort design involved 6656 participants in the Australian Longitudinal Study on Women's Health who were followed for up to 9 years (2002, age 76–81, to 2011, age 85–90). Self-reported total sitting-time was linked to all-cause mortality data from the National Death Index from 2002 to 2011. Cox proportional hazard models were used to examine the relationship between sitting-time and all-cause mortality, with adjustment for potential sociodemographic, behavioural and health confounders. Results There were 2003 (30.1%) deaths during a median follow-up of 6 years. Compared with participants who sat <4 h/day, those who sat 8–11 h/day had a 1.45 times higher risk of death and those who sat ≥11 h/day had a 1.65 times higher risk of death. These risks remained after adding sociodemographic and behavioural covariates, but were attenuated after adjustment for health covariates. A significant interaction (p=0.02) was found between sitting-time and physical activity (PA), with increased mortality risk for prolonged sitting only among participants not meeting PA guidelines (HR for sitting ≥8 h/day: 1.31, 95% CI 1.07 to 1.61; HR for sitting ≥11 h/day: 1.47, 95% CI 1.15 to 1.93). Conclusions Prolonged sitting-time was positively associated with all-cause mortality. Women who reported sitting for more than 8 h/day and did not meet PA guidelines had an increased risk of dying within the next 9 years.
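Hazard ratios like those above are estimated on the log scale, so the standard error and z statistic can be recovered from a reported 95% CI. A standard back-calculation, applied here to the HR of 1.31 (95% CI 1.07 to 1.61):

```python
import math

def se_from_ci(lower: float, upper: float) -> float:
    """Standard error of the log hazard ratio from a 95% CI."""
    return (math.log(upper) - math.log(lower)) / (2 * 1.96)

hr, lo, hi = 1.31, 1.07, 1.61
se = se_from_ci(lo, hi)
z = math.log(hr) / se
print(round(z, 1))  # comfortably above the 1.96 significance threshold
```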

Relevance: 100.00%

Abstract:

Background People admitted to intensive care units and those with chronic health care problems often require long-term vascular access. Central venous access devices (CVADs) are used for administering intravenous medications and blood sampling. CVADs are covered with a dressing and secured with an adhesive or adhesive tape to protect them from infection and reduce movement. Dressings are changed when they become soiled with blood or start to come away from the skin. Repeated removal and application of dressings can cause damage to the skin. The skin is an important barrier that protects the body against infection. Less frequent dressing changes may reduce skin damage, but it is unclear whether this practice affects the frequency of catheter-related infections. Objectives To assess the effect of the frequency of CVAD dressing changes on the incidence of catheter-related infections and other outcomes including pain and skin damage. Search methods In June 2015 we searched: The Cochrane Wounds Specialised Register; The Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library); Ovid MEDLINE; Ovid MEDLINE (In-Process & Other Non-Indexed Citations); Ovid EMBASE and EBSCO CINAHL. We also searched clinical trials registries for registered trials. There were no restrictions with respect to language, date of publication or study setting. Selection criteria All randomised controlled trials (RCTs) evaluating the effect of the frequency of CVAD dressing changes on the incidence of catheter-related infections in all patients in any healthcare setting. Data collection and analysis We used standard Cochrane review methodology. Two review authors independently assessed studies for inclusion, performed risk of bias assessment and data extraction. We undertook meta-analysis where appropriate, or otherwise synthesised data descriptively where studies were heterogeneous.
Main results We included five RCTs (2277 participants) that compared different frequencies of CVAD dressing changes. The studies were all conducted in Europe and published between 1995 and 2009. Participants were recruited from the intensive care and cancer care departments of one children's and four adult hospitals. The studies used a variety of transparent dressings and compared a longer interval between dressing changes (5 to 15 days; intervention) with a shorter interval between changes (2 to 5 days; control). In each study, participants were followed up until the CVAD was removed or until discharge from ICU or hospital. - Confirmed catheter-related bloodstream infection (CRBSI) One trial randomised 995 people receiving central venous catheters to a longer or shorter interval between dressing changes and measured CRBSI. It is unclear whether there is a difference in the risk of CRBSI between people having long or short intervals between dressing changes (RR 1.42, 95% confidence interval (CI) 0.40 to 4.98) (low quality evidence). - Suspected catheter-related bloodstream infection Two trials randomised a total of 151 participants to longer or shorter dressing intervals and measured suspected CRBSI. It is unclear whether there is a difference in the risk of suspected CRBSI between people having long or short intervals between dressing changes (RR 0.70, 95% CI 0.23 to 2.10) (low quality evidence). - All cause mortality Three trials randomised a total of 896 participants to longer or shorter dressing intervals and measured all cause mortality. It is unclear whether there is a difference in the risk of death from any cause between people having long or short intervals between dressing changes (RR 1.06, 95% CI 0.90 to 1.25) (low quality evidence). - Catheter-site infection Two trials randomised a total of 371 participants to longer or shorter dressing intervals and measured catheter-site infection.
It is unclear whether there is a difference in risk of catheter-site infection between people having long or short intervals between dressing changes (RR 1.07, 95% CI 0.71 to 1.63) (low quality evidence). - Skin damage One small trial (112 children) and three trials (1475 adults) measured skin damage. There was very low quality evidence for the effect of long intervals between dressing changes on skin damage compared with short intervals (children: RR for a skin damage score ≥2 of 0.33, 95% CI 0.16 to 0.68; data for adults not pooled). - Pain Two studies involving 193 participants measured pain. It is unclear whether there is a difference in pain during dressing removal between long and short dressing change intervals (RR 0.80, 95% CI 0.46 to 1.38) (low quality evidence). Authors' conclusions The best available evidence is currently inconclusive regarding whether longer intervals between CVAD dressing changes are associated with more or less catheter-related infection, mortality or pain than shorter intervals.
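The pooled risk ratios quoted above are built from per-trial 2×2 event counts. A minimal sketch of a single-study risk ratio with a Wald 95% CI on the log scale (the counts are hypothetical, not data from the included trials):

```python
import math

def risk_ratio_ci(e1, n1, e2, n2):
    """Risk ratio of group 1 vs group 2 with a log-scale Wald 95% CI."""
    rr = (e1 / n1) / (e2 / n2)
    se = math.sqrt(1/e1 - 1/n1 + 1/e2 - 1/n2)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical: 12/500 infections with long intervals vs 10/495 with short.
rr, lo, hi = risk_ratio_ci(12, 500, 10, 495)
print(round(rr, 2), round(lo, 2), round(hi, 2))
```

A CI that spans 1.0, as here, is the pattern behind the review's "it is unclear whether there is a difference" wording.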

Relevance: 100.00%

Abstract:

The electrical activation energy and optical band gap of GeSe and GeSbSe thin films prepared by flash evaporation onto glass substrates have been determined. The conductivities of the films were found to follow σ = σ0 exp(−Ea/kT), the activation energy Ea being 0.53 eV and 0.40 eV for GeSe and GeSbSe respectively. The optical absorption constant α near the absorption edge could be described by αhν ∝ (hν − E0)², from which the optical band gaps E0 were found to be 1.01 eV for GeSe and 0.67 eV for GeSbSe at 300 K. At 110 K the corresponding values of E0 were 1.07 eV and 0.735 eV respectively. The significance of these values is discussed in relation to those of other amorphous semiconductors.
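The activation energy in an Arrhenius-type conductivity law σ = σ0 exp(−Ea/kT) can be recovered from conductivities at two temperatures. A sketch that generates synthetic data with the GeSe value Ea = 0.53 eV and recovers it (σ0 and the two temperatures are arbitrary choices for illustration):

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def sigma(t, ea, sigma0=1.0):
    """Arrhenius conductivity at temperature t (K)."""
    return sigma0 * math.exp(-ea / (K_B * t))

def activation_energy(s1, t1, s2, t2):
    """Recover Ea (eV) from conductivities at two temperatures."""
    return K_B * math.log(s2 / s1) / (1 / t1 - 1 / t2)

s300 = sigma(300, 0.53)
s400 = sigma(400, 0.53)
print(round(activation_energy(s300, 300, s400, 400), 2))  # 0.53
```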

Relevance: 100.00%

Abstract:

Nitrogen fertiliser is a major source of atmospheric N2O, and over recent years there is growing evidence for a non-linear, exponential relationship between N fertiliser application rate and N2O emissions. However, there is still high uncertainty around the relationship between N fertiliser rate and N2O emissions for many cropping systems. We conducted year-round measurements of N2O emission and lint yield in four N rate treatments (0, 90, 180 and 270 kg N ha-1) in a cotton-fallow rotation on a black vertosol in Australia. We observed a non-linear exponential response of N2O emissions to increasing N fertiliser rates, with cumulative annual N2O emissions of 0.55 kg N ha-1, 0.67 kg N ha-1, 1.07 kg N ha-1 and 1.89 kg N ha-1 for the four respective N fertiliser rates, while no yield response to N occurred above 180N. The N-fertiliser-induced annual N2O emission factors (EFs) increased from 0.13% to 0.29% and 0.50% for the 90N, 180N and 270N treatments respectively, significantly lower than the IPCC Tier 1 default value (1.0%). This non-linear response suggests that an exponential N2O emissions model may be more appropriate for estimating emission of N2O from soils cultivated to cotton in Australia. It also demonstrates that improved agricultural N management practices can be adopted in cotton to substantially reduce N2O emissions without affecting yield potential.
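The fertiliser-induced emission factors quoted above follow from the standard difference method: subtract the zero-N control's emissions and divide by the N rate. A sketch that reproduces the reported values from the cumulative emissions in the abstract:

```python
def emission_factor(e_fertilised, e_control, n_rate):
    """N2O emission factor (%) by the difference method."""
    return (e_fertilised - e_control) / n_rate * 100

E0 = 0.55  # kg N2O-N/ha/yr from the unfertilised control
for n_rate, e in [(90, 0.67), (180, 1.07), (270, 1.89)]:
    # Prints 0.13%, 0.29% and 0.50%, matching the abstract's EFs.
    print(f"{n_rate}N: EF = {emission_factor(e, E0, n_rate):.2f}%")
```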

Relevance: 100.00%

Abstract:

Placental abruption, one of the most significant causes of perinatal mortality and maternal morbidity, occurs in 0.5-1% of pregnancies. Its etiology is unknown, but defective trophoblastic invasion of the spiral arteries and consequent poor vascularization may play a role. The aim of this study was to define the prepregnancy risk factors of placental abruption, to define the risk factors during the index pregnancy, and to describe the clinical presentation of placental abruption. We also wanted to find a biochemical marker for predicting placental abruption early in pregnancy. Among women delivering at the University Hospital of Helsinki in 1997-2001 (n=46,742), 198 women with placental abruption and 396 control women were identified. The overall incidence of placental abruption was 0.42%. The prepregnancy risk factors were smoking (OR 1.7; 95% CI 1.1, 2.7), uterine malformation (OR 8.1; 1.7, 40), previous cesarean section (OR 1.7; 1.1, 2.8), and history of placental abruption (OR 4.5; 1.1, 18). The risk factors during the index pregnancy were maternal (adjusted OR 1.8; 95% CI 1.1, 2.9) and paternal smoking (2.2; 1.3, 3.6), use of alcohol (2.2; 1.1, 4.4), placenta previa (5.7; 1.4, 23.1), preeclampsia (2.7; 1.3, 5.6) and chorioamnionitis (3.3; 1.0, 10.0). Vaginal bleeding (70%), abdominal pain (51%), bloody amniotic fluid (50%) and fetal heart rate abnormalities (69%) were the most common clinical manifestations of placental abruption. Retroplacental blood clot was seen by ultrasound in 15% of the cases. Neither bleeding nor pain was present in 19% of the cases. Overall, 59% went into preterm labor (OR 12.9; 95% CI 8.3, 19.8), and 91% were delivered by cesarean section (34.7; 20.0, 60.1). Of the newborns, 25% were growth restricted. The perinatal mortality rate was 9.2% (OR 10.1; 95% CI 3.4, 30.1). We then tested selected biochemical markers for prediction of placental abruption. 
The median of the maternal serum alpha-fetoprotein (MSAFP) multiples of median (MoM) (1.21) was significantly higher in the abruption group (n=57) than in the control group (n=108) (1.07) (p=0.004) at 15-16 gestational weeks. In multivariate analysis, elevated MSAFP remained an independent risk factor for placental abruption, after adjusting for parity ≥3, smoking, previous placental abruption, preeclampsia, bleeding in the second or third trimester, and placenta previa. MSAFP ≥1.5 MoM had a sensitivity of 29% and a false positive rate of 10%. The levels of maternal serum free beta-human chorionic gonadotrophin MoM did not differ between the cases and the controls. None of the angiogenic factors (soluble endoglin, soluble fms-like tyrosine kinase 1, or placental growth factor) showed any difference between the cases (n=42) and the controls (n=50) in the second trimester. The levels of C-reactive protein (CRP) showed no difference between the cases (n=181) and the controls (n=261) (median 2.35 mg/l [interquartile range {IQR} 1.09-5.93] versus 2.28 mg/l [IQR 0.92-5.01], not significant) when tested in the first trimester (mean 10.4 gestational weeks). Chlamydia pneumoniae-specific immunoglobulin G (IgG) and immunoglobulin A (IgA), as well as C. trachomatis-specific IgG, IgA and chlamydial heat-shock protein 60 antibody rates, were similar between the groups. In conclusion, although univariate analysis identified many prepregnancy risk factors for placental abruption, only smoking, uterine malformation, previous cesarean section and history of placental abruption remained significant in multivariate analysis. During the index pregnancy, maternal alcohol consumption and smoking, and smoking by the partner, turned out to be the major independent risk factors for placental abruption. Smoking by both partners multiplied the risk. The liberal use of ultrasound examination contributed little to the management of women with placental abruption.
Although second-trimester MSAFP levels were higher in women with subsequent placental abruption, clinical usefulness of this test is limited due to low sensitivity and high false positive rate. Similarly, angiogenic factors in early second trimester, or CRP levels, or chlamydial antibodies in the first trimester failed to predict placental abruption.
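The MSAFP values above are expressed as multiples of the median (MoM): the measured level divided by the median for that gestational age, which normalises the analyte across gestational weeks. A minimal sketch (the median value and units are hypothetical, not the study laboratory's):

```python
def mom(measured: float, ga_median: float) -> float:
    """Multiple of the median for a gestational-age-specific analyte."""
    return measured / ga_median

# Hypothetical: MSAFP of 52.5 IU/ml against a gestational-week median of 35 IU/ml.
value = mom(52.5, 35.0)
print(value, value >= 1.5)  # 1.5 MoM was the screening cut-off studied
```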

Relevance: 100.00%

Abstract:

Since national differences exist in genes, environment, diet and life habits, and also in the use of postmenopausal hormone therapy (HT), the associations between different hormone therapies and the risk for breast cancer were studied among Finnish postmenopausal women. All Finnish women over 50 years of age who used HT were identified from the national medical reimbursement register, established in 1994, and followed up for breast cancer incidence (n = 8,382 cases) until 2005 with the aid of the Finnish Cancer Registry. The risk for breast cancer in HT users was compared to that in the general female population of the same age. Among women using oral or transdermal estradiol alone (ET) (n = 110,984) during the study period 1994-2002, the standardized incidence ratio (SIR) for breast cancer in users for < 5 years was 0.93 (95% confidence interval (CI) 0.80–1.04), and in users for ≥ 5 years 1.44 (1.29–1.59). This therapy was associated with similar rises in ductal and lobular types of breast cancer. Both localized-stage cancers (1.45; 1.26–1.66) and cancers spread to regional nodes (1.35; 1.09–1.65) were associated with the use of systemic ET. Oral estriol or vaginal estrogens were not associated with a risk for breast cancer. The use of estrogen-progestagen therapy (EPT) in the study period 1994-2005 (n = 221,551) was associated with an increased incidence of breast cancer (1.31; 1.20-1.42) among women using oral or transdermal EPT for 3-5 years, and the incidence increased along with the increasing duration of exposure (≥ 10 years: 2.07; 1.84-2.30). Continuous EPT entailed a significantly higher (2.44; 2.17-2.72) breast cancer incidence compared to sequential EPT (1.78; 1.64-1.90) after 5 years of use. The use of norethisterone acetate (NETA) as a supplement to estradiol was associated with a higher incidence of breast cancer after 5 years of use (2.03; 1.88-2.18) than that of medroxyprogesterone acetate (MPA) (1.64; 1.49-1.79).
The SIR for the lobular type of breast cancer was increased within 3 years of EPT exposure (1.35; 1.18-1.53), and the incidence of the lobular type of breast cancer (2.93; 2.33-3.64) was significantly higher than that of the ductal type (1.92; 1.67-2.18) after 10 years of exposure. To control for some confounding factors, two case-control studies were performed. All Finnish women aged 50-62 in 1995-2007 and diagnosed with a first invasive breast cancer (n = 9,956) were identified from the Finnish Cancer Registry, and 3 controls of similar age (n = 29,868) without breast cancer were retrieved from the Finnish national population registry. Subjects were linked to the medical reimbursement register to define HT use. The use of ET was not associated with an increased risk for breast cancer (1.00; 0.92-1.08); nor was progestagen-only therapy used for less than 3 years. However, the use of tibolone was associated with an elevated risk for breast cancer (1.39; 1.07-1.81). The case-control study confirmed the results for EPT regarding sequential vs. continuous use of progestagen, including progestagen released continuously by an intrauterine device; the increased risk was seen already within 3 years of use (1.65; 1.32-2.07). The dose of NETA was not a determinant of breast cancer risk. Both systemic ET and EPT are associated with an elevation in the risk for breast cancer. These risks resemble to a large extent those seen in several other countries. The use of an intrauterine system alone, or as a complement to systemic estradiol, is also associated with a breast cancer risk. These data emphasize the need for detailed information for women who are considering starting the use of HT.
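A standardized incidence ratio (SIR) like those reported above is the observed case count divided by the count expected from general-population rates; a common approximation puts the 95% CI on the log scale, treating the observed count as Poisson. A sketch with hypothetical counts, not the registry data:

```python
import math

def sir_with_ci(observed: int, expected: float):
    """SIR with an approximate log-scale 95% CI (Poisson observed count)."""
    sir = observed / expected
    half_width = 1.96 / math.sqrt(observed)
    return sir, sir * math.exp(-half_width), sir * math.exp(half_width)

# Hypothetical: 360 observed breast cancers where 250 were expected.
sir, lo, hi = sir_with_ci(360, 250.0)
print(round(sir, 2), round(lo, 2), round(hi, 2))
```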

Relevance: 100.00%

Abstract:

Autoimmune diseases are more common in dogs than in humans and are already threatening the future of some highly predisposed dog breeds. Susceptibility to autoimmune diseases is controlled by environmental and genetic factors, especially the major histocompatibility complex (MHC) gene region. Dogs show a similar physiology, disease presentation and clinical response to humans, making them an excellent disease model for autoimmune diseases common to both species. The genetic background of canine autoimmune disorders is largely unknown, but the recent annotation of the dog genome and subsequent development of new genomic tools offer a unique opportunity to map novel autoimmune genes in various breeds. Many autoimmune disorders show breed-specific enrichment, supporting a strong genetic background. Furthermore, the presence of hundreds of breeds as genetic isolates facilitates gene mapping in complex autoimmune disorders. Identification of novel predisposing genes establishes breeds as models and may reveal novel candidate genes for the corresponding human disorders. Genetic studies will eventually shed light on common biological functions and interactions between genes and the environment. This study aimed to identify genetic risk factors in various autoimmune disorders, including systemic lupus erythematosus (SLE)-related diseases, comprising immune-mediated rheumatic disease (IMRD) and steroid-responsive meningitis arteritis (SMRA), as well as Addison's disease (AD) in Nova Scotia Duck Tolling Retrievers (NSDTRs) and chronic superficial keratitis (CSK) in German Shepherd dogs (GSDs). We used two different approaches to identify genetic risk factors. Firstly, a candidate gene approach was applied to test the potential association of MHC class II, known in canine species as the dog leukocyte antigen (DLA). Secondly, a genome-wide association study (GWAS) was performed to identify novel risk loci for SLE-related disease and AD in NSDTRs.
We identified DLA risk haplotypes for an IMRD subphenotype of SLE-related disease, AD and CSK, but not for SMRA, and show that the MHC class II gene region is a major genetic risk factor in canine autoimmune diseases. An elevated risk was found for IMRD in dogs that carried the DLA-DRB1*00601/DQA1*005011/DQB1*02001 haplotype (OR = 2.0, 99% CI = 1.03-3.95, p = 0.01) and for ANA-positive IMRD dogs (OR = 2.3, 99% CI = 1.07-5.04, p = 0.007). We also found that the DLA-DRB1*01502/DQA*00601/DQB1*02301 haplotype was significantly associated with AD in NSDTRs (OR = 2.1, CI = 1.0-4.4, p = 0.044) and the DLA-DRB1*01501/DQA1*00601/DQB1*00301 haplotype with CSK in GSDs (OR = 2.67, CI = 1.17-6.44, p = 0.02). In addition, we found that homozygosity for the risk haplotype increases the risk for each disease phenotype and that overall homozygosity for the DLA region predisposes to CSK and AD. Our results have enabled the development of genetic tests to improve breeding practices by avoiding the production of puppies homozygous for risk haplotypes. We also performed the first successful GWAS for a complex disease in dogs. With fewer than 100 cases and 100 controls, we identified five risk loci for SLE-related disease and AD and found strong candidate genes involved in a novel T-cell activation pathway. We show that an inbred dog population has fewer risk factors, but each of them carries a stronger genetic risk. Ongoing studies aim to identify the causative mutations and bring new knowledge to help the diagnostics, treatment and understanding of the aetiology of SLE-related diseases.
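Haplotype associations like those above reduce to a case-control 2×2 table: an odds ratio with a Woolf (log-scale) confidence interval, here at the 99% level the study uses for IMRD. A sketch with hypothetical carrier counts, not the actual cohort data:

```python
import math

def odds_ratio_woolf(a, b, c, d, z=2.576):
    """OR for carrier/non-carrier cases (a, b) vs controls (c, d).
    z = 2.576 gives a 99% CI; use 1.96 for 95%."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    return or_, or_ * math.exp(-z * se), or_ * math.exp(z * se)

# Hypothetical: 30 of 100 cases carry the risk haplotype vs 15 of 100 controls.
or_, lo, hi = odds_ratio_woolf(30, 70, 15, 85)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```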

Relevance: 100.00%

Abstract:

This study contributes to the executive stock option literature by looking at factors driving the introduction of such a compensation form at the firm level. Using a discrete decision model, I test the explanatory power of several agency-theory-based variables and find strong support for predictability of the form of executive compensation. Ownership concentration and liquidity are found to have a significant negative effect on the probability of stock option adoption. Furthermore, I find evidence of CEO ownership, institutional ownership, investment intensity, and historical market return having a significant and positive relationship to the likelihood of adopting an executive stock option program.
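A discrete decision model of this kind is typically a logit: the probability of adopting a stock option program is the logistic function of a linear index of firm characteristics. A minimal sketch with made-up coefficients whose signs follow the abstract (the paper's actual estimates are not given):

```python
import math

def adoption_probability(beta0, betas, x):
    """Logit probability: P(adopt) = 1 / (1 + exp(-(b0 + b.x)))."""
    z = beta0 + sum(b * xi for b, xi in zip(betas, x))
    return 1 / (1 + math.exp(-z))

# Made-up coefficients: negative on ownership concentration and liquidity,
# positive on CEO ownership, matching the signs the abstract reports.
p = adoption_probability(-0.5, [-1.2, -0.8, 0.9], [0.3, 0.4, 0.6])
print(round(p, 3))
```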

Relevance: 100.00%

Abstract:

The microstructural dependence of the electrical properties of (Ba,Sr)TiO3 (BST) thin films was studied from the viewpoint of dc and ac electrical properties. The films were grown using a pulsed laser deposition technique in a temperature range of 300 to 600 degrees C, inducing changes in grain size, structure, and morphology. Consequently, two different types of films were realized: type I was polycrystalline and multigrained, while type II was [100] oriented, possessing a densely packed fibrous microstructure. Leakage current measurements were done at elevated temperatures to provide evidence of the conduction mechanism present in these films. The results revealed a contribution from both electronic and ionic conduction. In the case of type I films, two trapping levels were identified with energies around 0.5 and 2.73 eV, which possibly originate from oxygen vacancies (V-O) and Ti3+ centers, respectively. These levels act as shallow and deep traps and are reflected in the current-voltage characteristics of the BST thin films. The activation energy associated with oxygen vacancy motion in this case was obtained as 1.28 eV. By contrast, type II films showed no evidence of deep trap energy levels, while the activation energy associated with shallow traps was obtained as 0.38 eV. The activation energy obtained for oxygen vacancy motion in type II films was around 1.02 eV. The dc measurement results were further elucidated through ac impedance analysis, which revealed a grain-boundary-dominated response in type I films, in comparison to type II films where the grain response is highlighted. A comparison of the mean relaxation times of the two films revealed a three orders of magnitude higher relaxation time in the case of type I films. Due to the smaller grain size in type I films, the grains were considered to be completely depleted, giving rise to only a grain boundary response for the bulk of the film.
The activation energies obtained from conductivity plots agree very well with the dc measurements, giving values of 1.3 and 1.07 eV for type I and type II films, respectively. Since oxygen vacancy transport has been identified as the origin of resistance degradation in BST thin films, the higher activation energy for oxygen ion mobility in type I films explains their improved breakdown characteristics under constant high dc field stress. The role of microstructure in controlling the rate of degradation is thus found useful for enhancing film properties under high electric field stresses. (C) 2000 American Institute of Physics. [S0021-8979(00)00418-7].
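Activation energies of this kind are conventionally extracted from the slope of an Arrhenius plot, ln(sigma) versus 1/T, since sigma = sigma0 * exp(-Ea / (k_B * T)). A small sketch of that fit; the data here are synthetic, generated for Ea = 1.07 eV (the type II value quoted above), not the measured BST data:

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

def activation_energy(temps_K, sigma):
    """Fit ln(sigma) against 1/T; the slope equals -Ea / k_B."""
    slope, _intercept = np.polyfit(1.0 / np.asarray(temps_K),
                                   np.log(np.asarray(sigma)), 1)
    return -slope * K_B

# Synthetic conductivities following sigma = sigma0 * exp(-Ea / (k_B * T))
Ea_true = 1.07                            # eV, the type II value
T = np.linspace(500.0, 800.0, 12)         # K
sigma = 1e3 * np.exp(-Ea_true / (K_B * T))
print(round(activation_energy(T, sigma), 2))  # → 1.07
```

With noise-free synthetic data the fit recovers Ea exactly; real conductivity data would scatter about the line, and the uncertainty of the slope propagates directly into the quoted activation energy.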

Relevância:

100.00%

Publicador:

Resumo:

Physical properties provide valuable information about the nature and behavior of rocks and minerals. Changes in rock physical properties generate petrophysical contrasts between various lithologies, for example, between shocked and unshocked rocks in meteorite impact structures or between various lithologies in the crust. These contrasts may cause distinct geophysical anomalies, which are often diagnostic of their primary cause (impact, tectonism, etc.). This information is vital for understanding fundamental Earth processes, such as impact cratering and associated crustal deformation. However, most present-day knowledge of changes in rock physical properties is limited by a lack of petrophysical data from subsurface samples, especially for meteorite impact structures, since these are often buried under post-impact lithologies or eroded. In order to explore the uppermost crust, deep drilling is required. This dissertation is based on deep drill core data from three impact structures: (i) the Bosumtwi impact structure (diameter 10.5 km, age 1.07 Ma; Ghana), (ii) the Chesapeake Bay impact structure (85 km, 35 Ma; Virginia, U.S.A.), and (iii) the Chicxulub impact structure (180 km, 65 Ma; Mexico). These drill cores have yielded all the basic lithologies associated with impact craters, such as post-impact lithologies, impact rocks including suevites and breccias, as well as fractured and unfractured target rocks. The fourth study case of this dissertation deals with data from the Paleoproterozoic Outokumpu area (Finland), as a non-impact crustal case, where deep drilling through an economically important ophiolite complex was carried out. The focus in all four cases was to combine results of basic petrophysical studies of the relevant rocks of these crustal structures in order to identify and characterize the various lithologies by their physical properties and, in this way, to provide new input data for geophysical modelling.
Furthermore, the rock magnetic and paleomagnetic properties of the three impact structures, combined with basic petrophysics, were used to gain insight into the impact-generated changes in rocks and their magnetic minerals, in order to better understand the influence of impact. The obtained petrophysical data outline the various lithologies and divide the rocks into four domains. The physical properties of the unshocked target rocks are controlled by mineral composition or fabric, particularly porosity in sedimentary rocks, while those of the sediments result from diverse sedimentation and diagenesis processes. The impact rocks, such as breccias and suevites, strongly reflect the impact formation mechanism and are distinguishable from the other lithologies by their density, porosity, and magnetic properties. The numerous shock features resulting from melting, brecciation, and fracturing of the target rocks can be seen in the changes of physical properties. These features include an increase in porosity and a consequent decrease in density in impact-derived units, either an increase or a decrease in magnetic properties (depending on the specific case), as well as large heterogeneity in physical properties. In a few cases a slight gradual downward decrease in porosity, attributed to shock-induced fracturing, was observed. Coupled with rock magnetic studies, the impact-generated changes in the magnetic fraction (shock-induced magnetic grain-size reduction, hydrothermal- or melting-related magnetic mineral alteration, shock demagnetization, and shock- or temperature-related remagnetization) can be identified. The Outokumpu drill core shows varying velocities throughout its depth, depending on microcracking and sample conditions. This is similar to the observations of Kern et al. (2009), who also reported the dependence of velocity on anisotropy.
The physical properties are also used to explain the distinct crustal reflectors observed in seismic reflection studies of the Outokumpu area. According to the seismic velocity data, the interfaces between the diopside-tremolite skarn layer and either serpentinite, mica schist, or black schist cause the strong seismic reflections.
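Whether such a lithological interface produces a strong reflection can be gauged from the normal-incidence reflection coefficient, computed from the acoustic impedance (density times velocity) on each side of the contact. A sketch with hypothetical velocity and density values (illustrative only, not the measured Outokumpu data):

```python
def reflection_coefficient(v1, rho1, v2, rho2):
    """Normal-incidence reflection coefficient from acoustic impedances."""
    z1, z2 = rho1 * v1, rho2 * v2
    return (z2 - z1) / (z2 + z1)

# Hypothetical mica schist / skarn contact (velocities in m/s,
# densities in kg/m^3); the actual Outokumpu values are not given here.
r = reflection_coefficient(5800.0, 2750.0, 6800.0, 3100.0)
print(round(r, 3))  # → 0.139
```

A reflection coefficient of this magnitude, well above the commonly cited ~0.06 threshold for a strong reflector, is consistent with the skarn interfaces standing out in the seismic sections.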