Abstract:
Experiments were conducted in the nonequilibrium region of a free mixing layer with unequal freestream velocities. Four velocity ratios U(1)/U(2) of 0.32, 0.46, 0.74, and 0.96 were used in this investigation. The growth of the shear layer as well as the velocity adjustment in the near wake were examined. There was reasonable agreement between the measured mean velocity profiles and those computed using the K-epsilon turbulence model. Some periodic turbulence velocity fluctuations were observed in the mixing layer, but their frequency remained the same along the flow.
Abstract:
We investigate the extent to which individuals’ global motivation (self-determined and non-self-determined types) influences adjustment (anxiety, positive reappraisal) and engagement (intrinsic motivation, task performance) in reaction to changes to the level of work control available during a work simulation. Participants (N = 156) completed 2 trials of an inbox activity under conditions of low or high work control—with the ordering of these levels varied to create an increase, decrease, or no change in work control. In support of the hypotheses, results revealed that for more self-determined individuals, high work control led to the increased use of positive reappraisal. Follow-up moderated mediation analyses revealed that the increases in positive reappraisal observed for self-determined individuals in the conditions in which work control was high by Trial 2 consequently increased their intrinsic motivation toward the task. For more non-self-determined individuals, high work control (as well as changes in work control) led to elevated anxiety. Follow-up moderated mediation analyses revealed that the increases in anxiety observed for non-self-determined individuals in the high-to-high work control condition consequently reduced their task performance. It is concluded that adjustment to a demanding work task depends on a fit between individuals’ global motivation and the work control available, which has consequences for engagement with demanding work.
Abstract:
Contains the papers of the Society founded in 1938 by recent German speaking Jewish immigrants to Boston to assist their initial adjustment to the economic, cultural, spiritual, and social life of the American community and subsequently, to provide mutual assistance to its membership and aid to other immigrants.
Abstract:
Probiotic supplements are single or mixed strain cultures of live microorganisms that benefit the host by improving the properties of the indigenous microflora (Seo et al 2010). In a pilot study at the University of Queensland, Norton et al (2008) found that Bacillus amyloliquefaciens strain H57 (H57), primarily investigated as an inoculum to make high-quality hay, improved feed intake and nitrogen utilisation over several weeks in pregnant ewes. The purpose of the present study was to further challenge the potential of H57: to show that it survives the steam-pelleting process, and that it improves the performance of ewes fed pellets based on palm kernel meal (PKM), an agro-industrial by-product with a reputation for poor palatability (McNeill 2013). Thirty-two first-parity White Dorper ewes (day 37 of pregnancy, mean liveweight = 47.3 kg, mean age = 15 months) were inducted into individual pens in the animal house at the University of Queensland, Gatton. They were adjusted onto PKM-based pellets (g/kg dry matter (DM): PKM, 408; sorghum, 430; chick pea hulls, 103; minerals and vitamins; crude protein, 128; ME, 11.1 MJ/kg DM) until day 89 of pregnancy and thereafter fed a predominantly pelleted diet with or without H57 spores (10^9 colony forming units (cfu)/kg pellet, as fed), plus 100 g/ewe/day oaten chaff, until day 7 of lactation. From day 7 to 20 of lactation the pelleted component of the diet was steadily reduced and replaced by a 50:50 mix of lucerne:oaten chaff, fed ad libitum, plus 100 g/ewe/day of ground sorghum grain with or without H57 (10^9 cfu/ewe/day). The period of adjustment in pregnancy (day 37-89) extended beyond expectations due to some evidence of mild ruminal acidosis after some initially high intakes that were followed by low intakes.
During that time the diet was modified, in an attempt to improve palatability, by the addition of oaten chaff and the removal of an acidifying agent (NH4Cl) that was added initially to reduce the risk of urinary calculi. Eight ewes were removed due to inappetence, leaving 24 ewes to start the trial at day 90 of pregnancy. From day 90 of pregnancy until day 63 of lactation, liveweights of the ewes and their lambs were determined weekly and at parturition. Feed intakes of the ewes were determined weekly. Once lambing began, 1 ewe was removed as it gave birth to twin lambs (whereas the rest gave birth to a single lamb), 4 due to the loss of their lambs (2 to dystocia), and 1 due to copper toxicity. The PKM pellets were suspected to be the cause of the copper toxicity and so were removed in early lactation. Hence, the final statistical analysis using STATISTICA 8 (repeated-measures ANOVA for feed intake, one-way ANOVA for liveweight change and birth weight) was completed on 23 ewes for the pregnancy period (n = 11 fed H57; n = 12 control), and 18 ewes or lambs for the lactation period (n = 8 fed H57; n = 10 control). From day 90 of pregnancy until parturition the H57-supplemented ewes ate 17% more DM (g/day: 1041 vs 889, sed = 42.4, P = 0.04) and gained more liveweight (g/day: 193 vs 24.0, sed = 25.4, P = 0.0002), but produced lambs with a similar birthweight (kg: 4.18 vs 3.99, sed = 0.19, P = 0.54). Over the 63 days of lactation the H57 ewes ate similar amounts of DM but grew more slowly than the control ewes (g/day: 1.5 vs 97.0, sed = 21.7, P = 0.012). The lambs of the H57 ewes grew faster than those of the control ewes for the first 21 days of lactation (g/day: 356 vs 265, sed = 16.5, P = 0.006). These data support the findings of Norton et al (2008) and Kritas et al (2006) that certain Bacillus spp. supplements can improve the performance of pregnant and lactating ewes.
In the current study we particularly highlighted the capacity of H57 to stimulate immature ewes to continue to grow maternal tissue through pregnancy, possibly through an enhanced appetite, which appeared then to stimulate a greater capacity to partition nutrients to their lambs through milk, at least for the first few weeks of lactation, a critical time for optimising lamb survival. To conclude, H57 can survive the steam pelleting process to improve feed intake and maternal liveweight gain in late pregnancy, and performance in early lactation, of first-parity ewes fed a diet based on PKM.
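As a rough illustrative check only (the study's actual analysis was the ANOVA in STATISTICA described above), the reported group comparisons can be re-expressed as approximate t statistics from the published means and standard errors of the difference (sed); all numbers below are taken from the abstract:

```python
# Hedged sketch: an approximate t statistic computed as the difference in
# group means divided by the reported sed. This is not the authors' analysis,
# which was ANOVA-based; the values are indicative only.

def t_stat(mean_a: float, mean_b: float, sed: float) -> float:
    """Approximate t statistic for a two-group comparison given the sed."""
    return (mean_a - mean_b) / sed

# Pregnancy DM intake (g/day), H57 vs control, sed = 42.4
t_intake = t_stat(1041, 889, 42.4)

# Pregnancy liveweight gain (g/day), H57 vs control, sed = 25.4
t_gain = t_stat(193, 24.0, 25.4)

# Lamb birthweight (kg), H57 vs control, sed = 0.19 (non-significant, P = 0.54)
t_birth = t_stat(4.18, 3.99, 0.19)
```

The intake and liveweight differences yield large t values, consistent with the reported significance, while the birthweight difference is only about one sed wide.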
Abstract:
In this research, the cooperation between Finnish municipalities and Evangelical Lutheran parishes is studied from the standpoint of institutional interaction. The most essential theoretical background for the study is the differentiation thesis of the secularization theory. Cooperation from the viewpoints of both organizations is examined using the functional approach. Furthermore, the market theory and other theories are applied in order to place the studied phenomenon in the wider context of the theories of the sociology of religion. Sacralization in modern society and its relationship with the differentiation thesis of the secularization theory are in the theoretical foci. In addition, along with a descriptive examination of cooperation, the normative sides of the phenomenon are discussed. The survey was conducted among all municipalities and parishes in continental Finland. The questionnaires were sent to all municipal managers of youth work and afternoon activities and to all managers of child, youth and social work in the parishes. The response rate was 73.9% for the municipalities and 69.5% for the parishes. In addition, two qualitative data sets were utilized. The aim of the study is to scrutinize what kind of limitations of differentiation can be caused by the interaction between the secular and the religious. In order to solve this problem, an empirical study of sacralization in the modern context is required. For this purpose, the survey was carried out to determine the effects of the religious on the secular and the impact of the secular on the religious. In the articles of the study the following relationships are discussed: the positions of municipalities and parishes in relation to the state and civil society; cooperation in relation to differentiation; sacralization in relation to the differentiation thesis; and cooperation in relation to pluralism.
The results of the study highlighted the significance of cooperation connected to religious sacralization, which runs contrary to the secularization theory. Acceptance of the appearance of religion in cooperation, and of parishes' support for municipal functions, was high in municipalities. Religious cooperation was more active than secular cooperation within all fields. This was also true between fields: religiously orientated child work was more active than the societally orientated social work of the church. Religious cooperation in modern fields of activity underlined sacralization. However, acceptance of sacralization was weaker in cities than in rural areas. Positive relationships between the welfare function of municipalities and the religious function of parishes emphasized the incompleteness of differentiation and the importance of sacralization. The relationship of the function of municipalities with parishes was neither negative nor neutral. Thus, in the most active fields, that is, child work and the traditional social work of the church, the orientation of parishes in cooperation supported the functions of both organizations. In the more passive fields, that is, youth work and the societal social work of the church, parishes were orientated towards supporting the municipal function. The orientation of municipalities to religion underlined the perception that religious function is necessary for cooperation. However, the official character of cooperation supported accommodation to the requirements of societal pluralism. According to the results, sacralization can be effective also at the institutional level. The religious effect of voluntary cooperation means that religious sacralization can also readjust to modern society. At the same time, the results of the study stressed the importance of institutional autonomy. Thus, the public sector has a central role in successful cooperation.
The conditions of cooperation are weakened if there is no official support of cooperation or adjustment to the individual rights of modern society. The results called into question the one-directional assumptions in the secularization paradigm and the modernization theory in the background. In these assumptions, religion that represents the traditional is seen to give way to the modern, especially at the institutional level. Lack of an interactional view was identified as a central weakness of the secularization paradigm. In the theoretical approach created in the study, an interactional view between religious and secular institutions was made possible by limiting the core of the differentiation thesis to autonomy. The counter forces of differentiation are despecialization and sacralization. These changes in the secularization theory bring about new interactivity on the institutional level. In addition to the interactional approach, that is, the secularization and sacralization theory created as a synthesis of the study, interaction between the religious and the secular is discussed from the standpoint of multiple modernities. The spiritual welfare role of religion is seen as a potential supporter of secular institutions. Religion is set theoretically amongst other ideologies and agents, which can create communal bonds in modern society. Key words: cooperation, municipalities, parishes, sacralization, secularization, modernization, multiple modernities, differentiation, interaction, democracy, secularism, pluralism, civil society
Abstract:
The doctoral thesis defined connections between circadian rhythm disruptions and health problems. Sleep debt, jet lag, shift work, and transitions into and out of daylight saving time may lead to circadian rhythm disruptions. A disturbed circadian rhythm causes sleep deprivation and decreased mood, and these effects may lead to higher accident rates and trigger mental illnesses. Circadian clock genes are involved in the regulation of the cell cycle and metabolism, and thus unstable circadian rhythmicity may also lead to cancer development. Publications I-III explored how transitions into and out of daylight saving time affect the sleep efficiency and rest-activity cycles of healthy individuals, whether the effect of the transition differs in fall as compared to spring, and whether there are subgroup-specific differences in the adjustment to these transitions. The healthy participants of studies I-III used actigraphs before and after the transitions and filled in the morningness-eveningness and seasonal pattern assessment questionnaires. Publication IV explored the incidence of hospital-treated accidents and manic episodes two weeks before and two weeks after the transitions into and out of daylight saving time in the years 1987-2003. Publication V studied the relationship between circadian rhythm disruption and the prevalence of Non-Hodgkin lymphoma. Study V comprised all working-aged Finns who participated in the national population census in 1970. For our study, all the cancers diagnosed during the years 1971-1995 were extracted from the Finnish Cancer Register and linked with the 1970 census files. In studies I-III it was noticed that transitions into and out of daylight saving time disturb the sleep-wake cycle and the sleep efficiency of the healthy participants.
We also noticed that short sleepers were more sensitive than long sleepers to sudden changes in the circadian rhythm. Our results also indicated that adaptation to changes in the circadian rhythm is potentially sex-, age- and chronotype-specific. In study IV no significant increase in the occurrence of hospital-treated accidents or manic episodes was noticed. However, interesting observations about the seasonal fluctuation of the occurrence rates of accidents and manic episodes were made. Study V revealed that there might be a close relationship between circadian rhythm disruption and cancer. The prevalence of Non-Hodgkin lymphoma was the highest among night workers. The five publications included in this thesis together point out that disturbed circadian rhythms may have adverse effects on health. Disturbed circadian rhythms decrease the quality of sleep and weaken the sleep-wake cycle. Continuous circadian rhythm disruption may also predispose individuals to cancer development. Since circadian rhythm disruptions are common in modern society, they may have a considerable impact on public health. It is therefore important to continue circadian rhythm research so that better prevention and treatment methods can be developed. Keywords: Circadian rhythm, daylight saving time, manic episodes, accidents, Non-Hodgkin lymphoma
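Sleep efficiency, the actigraphy-derived measure examined in studies I-III, is conventionally computed as the proportion of time in bed actually spent asleep; a minimal sketch (the abstract does not specify the actigraph scoring algorithm, so this is only the standard definition):

```python
def sleep_efficiency(total_sleep_min: float, time_in_bed_min: float) -> float:
    """Sleep efficiency (%) = total sleep time / time in bed * 100,
    the usual definition for actigraphy and polysomnography data."""
    if time_in_bed_min <= 0:
        raise ValueError("time in bed must be positive")
    return 100.0 * total_sleep_min / time_in_bed_min

# e.g. 420 min asleep over 480 min in bed gives 87.5% efficiency
```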
Abstract:
Context: High bone mass (HBM), detected in 0.2% of dual-energy x-ray absorptiometry (DXA) scans, is characterized by raised body mass index, the basis for which is unclear. Objective: To investigate why body mass index is elevated in individuals with HBM, we characterized body composition and examined whether differences could be explained by bone phenotypes, e.g., bone mass and/or bone turnover. Design, Setting, and Participants: We conducted a case-control study of 153 cases with unexplained HBM recruited from 4 UK centers by screening 219 088 DXA scans. A total of 138 first-degree relatives (of whom 51 had HBM) and 39 spouses were also recruited. Unaffected individuals served as controls. Main Outcome Measures: We measured fat mass, by DXA, and bone turnover markers. Results: Among women, fat mass was inversely related to age in controls (P < .01), but not in HBM cases (P = .96), in whom mean fat mass was 8.9 [95% CI 4.7, 13.0] kg higher compared with controls (fully adjusted mean difference, P < .001). Increased fat mass in male HBM cases was less marked (gender interaction P = .03). Compared with controls, lean mass was also increased in female HBM cases (by 3.3 [1.2, 5.4] kg; P < .002); however, lean mass increases were less marked than fat mass increases, resulting in 4.5% lower percentage lean mass in HBM cases (P < .001). Osteocalcin was also lower in female HBM cases compared with controls (by 2.8 [0.1, 5.5] μg/L; P = .04). Differences in fat mass were fully attenuated after hip bone mineral density (BMD) adjustment (P = .52) but unchanged after adjustment for bone turnover (P < .001), whereas the greater hip BMD in female HBM cases was minimally attenuated by fat mass adjustment (P < .001). Conclusions: HBM is characterized by a marked increase in fat mass in females, statistically explained by their greater BMD, but not by markers of bone turnover. Copyright © 2013 by The Endocrine Society.
Abstract:
CONTEXT: The role and importance of circulating sclerostin is poorly understood. High bone mass (HBM) caused by activating LRP5 mutations has been reported to be associated with increased plasma sclerostin concentrations; whether the same applies to HBM due to other causes is unknown. OBJECTIVE: Our objective was to determine circulating sclerostin concentrations in HBM. DESIGN AND PARTICIPANTS: In this case-control study, 406 HBM index cases were identified by screening dual-energy x-ray absorptiometry (DXA) databases from 4 United Kingdom centers (n = 219 088), excluding significant osteoarthritis/artifact. Controls comprised unaffected relatives and spouses. MAIN MEASURES: Plasma sclerostin; lumbar spine L1, total hip, and total body DXA; and radial and tibial peripheral quantitative computed tomography (subgroup only) were evaluated. RESULTS: Sclerostin concentrations were significantly higher in both LRP5 HBM and non-LRP5 HBM cases compared with controls: mean (SD) 130.1 (61.7) and 88.0 (39.3) vs 66.4 (32.3) pmol/L (both P < .001, which persisted after adjustment for a priori confounders). In combined adjusted analyses of cases and controls, sclerostin concentrations were positively related to all bone parameters found to be increased in HBM cases (ie, L1, total hip, and total body DXA bone mineral density and radial/tibial cortical area, cortical bone mineral density, and trabecular density). Although these relationships were broadly equivalent in HBM cases and controls, there was some evidence that associations between sclerostin and trabecular phenotypes were stronger in HBM cases, particularly for radial trabecular density (interaction P < .01). CONCLUSIONS: Circulating plasma sclerostin concentrations are increased in both LRP5 and non-LRP5 HBM compared with controls. 
In addition to the general positive relationship between sclerostin and DXA/peripheral quantitative computed tomography parameters, genetic factors predisposing to HBM may contribute to increased sclerostin levels.
Abstract:
This study is one part of a collaborative depression research project, the Vantaa Depression Study (VDS), involving the Department of Mental and Alcohol Research of the National Public Health Institute, Helsinki, and the Department of Psychiatry of the Peijas Medical Care District (PMCD), Vantaa, Finland. The VDS includes two parts, a record-based study consisting of 803 patients, and a prospective, naturalistic cohort study of 269 patients. Both studies include secondary-level care psychiatric out- and inpatients with a new episode of major depressive disorder (MDD). Data for the record-based part of the study came from a computerised patient database incorporating all outpatient visits as well as treatment periods at the inpatient unit. We included all patients aged 20 to 59 years old who had been assigned a clinical diagnosis of depressive episode or recurrent depressive disorder according to the International Classification of Diseases, 10th edition (ICD-10) criteria and who had at least one outpatient visit or day as an inpatient in the PMCD during the study period January 1, 1996, to December 31, 1996. All those with an earlier diagnosis of schizophrenia, other non-affective psychosis, or bipolar disorder were excluded. Patients treated in the somatic departments of Peijas Hospital and those who had consulted but not received treatment from the psychiatric consultation services were excluded. The study sample comprised 290 male and 513 female patients. All their psychiatric records were reviewed and each patient completed a structured form with 57 items. The treatment provided was reviewed up to the end of the depression episode or to the end of 1997. Most (84%) of the patients received antidepressants, including a minority (11%) on treatment with clearly subtherapeutic low doses. 
During the treatment period the depressed patients investigated averaged only a few visits to psychiatrists (median two visits), but more to other health professionals (median seven). One-fifth of both genders were inpatients, with a mean of nearly two inpatient treatment periods during the overall treatment period investigated. The median length of a hospital stay was 2 weeks. Use of antidepressants was quite conservative: The first antidepressant had been switched to another compound in only about one-fifth (22%) of patients, and only two patients had received up to five antidepressant trials. Only 7% of those prescribed any antidepressant received two antidepressants simultaneously. None of the patients was prescribed any other augmentation medication. Refusing antidepressant treatment was the most common explanation for receiving no antidepressants. During the treatment period, 19% of those not already receiving a disability pension were granted one due to psychiatric illness. These patients were nearly nine years older than those not pensioned. They were also more severely ill, made significantly more visits to professionals and received significantly more concomitant medications (hypnotics, anxiolytics, and neuroleptics) than did those receiving no pension. In the prospective part of the VDS, 806 adult patients (aged 20-59 years) were screened in the PMCD for a possible new episode of DSM-IV MDD. Of these, 542 patients were interviewed face-to-face with the WHO Schedules for Clinical Assessment in Neuropsychiatry (SCAN), Version 2.0. Exclusion criteria were the same as in the record-based part of the VDS. Of these 542, 269 patients fulfilled the criteria of DSM-IV MDE. This study investigated factors associated with patients' functional disability, social adjustment, and work disability (being on sick-leave or being granted a disability pension).
At the beginning of the treatment the most important single factor associated with overall social and functional disability was found to be severity of depression, but older age and personality disorders also contributed significantly. Total duration and severity of depression, phobic disorders, alcoholism, and personality disorders all independently contributed to poor social adjustment. Of those who were employed, almost half (43%) were on sick-leave. Besides severity and number of episodes of depression, female gender and age over 50 years strongly and independently predicted being on sick-leave. Factors influencing social and occupational disability and social adjustment among patients with MDD were studied prospectively during an 18-month follow-up period. Patients' functional disability and social adjustment improved during the follow-up concurrently with recovery from depression. The current level of functioning and social adjustment of a patient with depression was predicted by severity of depression, recurrence before baseline and during follow-up, lack of full remission, and time spent depressed. Comorbid psychiatric disorders, personality traits (neuroticism), and perceived social support also had a significant influence. During the 18-month follow-up period, of the 269 patients, 13 (5%) switched to bipolar disorder and 58 (20%) dropped out. Of the remaining 198, 186 (94%) patients were not pensioned at baseline, and they were investigated. Of them, 21 were granted a disability pension during the follow-up. Those who received a pension were significantly older, less often had vocational education, and were more often on sick-leave than those not pensioned, but did not differ with regard to any other sociodemographic or clinical factors. Patients with MDD received mostly adequate antidepressant treatment, but problems existed in treatment intensity and monitoring.
It is challenging to find those at greatest risk for disability and to provide them adequate and efficacious treatment. This includes great challenges to the whole society to provide sufficient resources.
Abstract:
Pioglitazone is a thiazolidinedione compound used in the treatment of type 2 diabetes. It has been reported to be metabolised by multiple cytochrome P450 (CYP) enzymes, including CYP2C8, CYP2C9 and CYP3A4 in vitro. The aims of this work were to identify the CYP enzymes mainly responsible for the elimination of pioglitazone in order to evaluate its potential for in vivo drug interactions, and to investigate the effects of CYP2C8- and CYP3A4-inhibiting drugs (gemfibrozil, montelukast, zafirlukast and itraconazole) on the pharmacokinetics of pioglitazone in healthy volunteers. In addition, the effect of induction of CYP enzymes on the pharmacokinetics of pioglitazone in healthy volunteers was investigated, with rifampicin as a model inducer. Finally, the effect of pioglitazone on CYP2C8 and CYP3A enzyme activity was examined in healthy volunteers using repaglinide as a model substrate. Study I was conducted in vitro using pooled human liver microsomes (HLM) and human recombinant CYP isoforms. Studies II to V were randomised, placebo-controlled cross-over studies with 2-4 phases each. A total of 10-12 healthy volunteers participated in each study. Pretreatment with clinically relevant doses of the inhibitor or inducer was followed by a single dose of pioglitazone or repaglinide, after which blood and urine samples were collected for the determination of drug concentrations. In vitro, the elimination of pioglitazone (1 µM) by HLM was markedly inhibited, in particular by CYP2C8 inhibitors, but also by CYP3A4 inhibitors. Of the recombinant CYP isoforms, CYP2C8 metabolised pioglitazone markedly, and CYP3A4 also had a significant effect. All of the tested CYP2C8 inhibitors (montelukast, zafirlukast, trimethoprim and gemfibrozil) concentration-dependently inhibited pioglitazone metabolism in HLM.
In humans, gemfibrozil raised the area under the plasma concentration-time curve (AUC) of pioglitazone 3.2-fold (P < 0.001) and prolonged its elimination half-life (t½) from 8.3 to 22.7 hours (P < 0.001), but had no significant effect on its peak concentration (Cmax) compared with placebo. Gemfibrozil also increased the excretion of pioglitazone into urine and reduced the ratios of the active metabolites M-IV and M-III to pioglitazone in plasma and urine. Itraconazole had no significant effect on the pharmacokinetics of pioglitazone and did not alter the effect of gemfibrozil on pioglitazone pharmacokinetics. Rifampicin decreased the AUC of pioglitazone by 54% (P < 0.001) and shortened its dominant t½ from 4.9 to 2.3 hours (P < 0.001). No significant effect on Cmax was observed. Rifampicin also decreased the AUC of the metabolites M-IV and M-III, shortened their t½ and increased the ratios of the metabolite to pioglitazone in plasma and urine. Montelukast and zafirlukast did not affect the pharmacokinetics of pioglitazone. The pharmacokinetics of repaglinide remained unaffected by pioglitazone. These studies demonstrate the principal role of CYP2C8 in the metabolism of pioglitazone in humans. Gemfibrozil, an inhibitor of CYP2C8, increases and rifampicin, an inducer of CYP2C8 and other CYP enzymes, decreases the plasma concentrations of pioglitazone, which can necessitate blood glucose monitoring and adjustment of pioglitazone dosage. Montelukast and zafirlukast had no effects on the pharmacokinetics of pioglitazone, indicating that their inhibitory effect on CYP2C8 is negligible in vivo. Pioglitazone did not increase the plasma concentrations of repaglinide, indicating that its inhibitory effect on CYP2C8 and CYP3A4 is very weak in vivo.
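The exposure metrics reported here (AUC fold-changes, elimination half-life) follow from standard non-compartmental formulas; a minimal sketch under the usual assumptions (linear trapezoidal AUC, log-linear terminal phase), with illustrative numbers only:

```python
import math

def auc_trapezoid(times, concs):
    """AUC(0 to t_last) by the linear trapezoidal rule over sampled
    time points (h) and plasma concentrations (e.g. mg/L)."""
    return sum((t2 - t1) * (c1 + c2) / 2.0
               for t1, t2, c1, c2 in zip(times, times[1:], concs, concs[1:]))

def half_life(ke):
    """Elimination half-life t1/2 = ln(2) / ke, where ke is the
    terminal elimination rate constant (1/h)."""
    return math.log(2) / ke

def auc_fold_change(auc_with_perpetrator, auc_control):
    """AUC ratio used to express interaction magnitude,
    e.g. the ~3.2-fold increase reported with gemfibrozil."""
    return auc_with_perpetrator / auc_control
```

A prolonged half-life with an unchanged Cmax, as seen with gemfibrozil here, is the classic signature of inhibited elimination rather than increased absorption.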
Abstract:
The purpose of this study was to estimate the prevalence and distribution of reduced visual acuity, major chronic eye diseases, and the subsequent need for eye care services in the Finnish adult population comprising persons aged 30 years and older. In addition, we analyzed the effect of decreased vision on functioning and need for assistance using the World Health Organization's (WHO) International Classification of Functioning, Disability, and Health (ICF) as a framework. The study was based on the Health 2000 health examination survey, a nationally representative population-based comprehensive survey of health and functional capacity carried out in 2000 to 2001 in Finland. The study sample representing the Finnish population aged 30 years and older was drawn by two-stage stratified cluster sampling. The Health 2000 survey included a home interview and a comprehensive health examination conducted at a nearby screening center. If the invited participants did not attend, an abridged examination was conducted at home or in an institution. Based on our findings, the great majority (96%) of Finnish adults had at least moderate visual acuity (VA ≥ 0.5) with current refraction correction, if any. However, in the age group 75-84 years the prevalence decreased to 81%, and after 85 years to 46%. In the population aged 30 years and older, the prevalence of habitual visual impairment (VA ≤ 0.25) was 1.6%, and 0.5% were blind (VA < 0.1). The prevalence of visual impairment increased significantly with age (p < 0.001), and after the age of 65 years the increase was sharp. Visual impairment was equally common for both sexes (OR 1.20, 95% CI 0.82 - 1.74). Based on self-reported and/or register-based data, the estimated total prevalences of cataract, glaucoma, age-related maculopathy (ARM), and diabetic retinopathy (DR) in the study population were 10%, 5%, 4%, and 1%, respectively. The prevalence of all of these chronic eye diseases increased with age (p < 0.001).
Cataract and glaucoma were more common in women than in men (OR 1.55, 95% CI 1.26 - 1.91 and OR 1.57, 95% CI 1.24 - 1.98, respectively). The most prevalent eye diseases in people with visual impairment (VA ≤ 0.25) were ARM (37%), unoperated cataract (27%), glaucoma (22%), and DR (7%). More than one-half (58%) of visually impaired people had had a vision examination during the past five years, and 79% had received some vision rehabilitation services, mainly in the form of spectacles (70%). Only one-third (31%) had received formal low vision rehabilitation (i.e., fitting of low vision aids, receiving patient education, training for orientation and mobility, training for activities of daily living (ADL), or consultation with a social worker). People with low vision (VA 0.1 - 0.25) were less likely to have received formal low vision rehabilitation, magnifying glasses, or other low vision aids than blind people (VA < 0.1). Furthermore, low cognitive capacity and living in an institution were associated with limited use of vision rehabilitation services. Of the visually impaired living in the community, 71% reported a need for assistance and 24% had an unmet need for assistance in everyday activities. The prevalence of ADL, instrumental activities of daily living (IADL), and mobility limitations increased with decreasing VA (p < 0.001). Visually impaired persons (VA ≤ 0.25) were four times more likely to have ADL disabilities than those with good VA (VA ≥ 0.8) after adjustment for sociodemographic and behavioral factors and chronic conditions (OR 4.36, 95% CI 2.44 - 7.78). Limitations in IADL and measured mobility were five times as likely (OR 4.82, 95% CI 2.38 - 9.76 and OR 5.37, 95% CI 2.44 - 7.78, respectively) and self-reported mobility limitations were three times as likely (OR 3.07, 95% CI 1.67 - 9.63) as in persons with good VA.
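The odds ratios quoted above come from multivariable models with covariate adjustment, which cannot be reproduced from the abstract alone. As a hedged illustration of the underlying quantity, a crude (unadjusted) odds ratio with a Woolf-type 95% CI can be computed from a 2x2 table; the counts in the example are hypothetical:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Woolf 95% CI from a 2x2 table:
    a/b = cases/non-cases among the exposed,
    c/d = cases/non-cases among the unexposed.
    Note: this does not adjust for covariates, unlike the study's models."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: 40/60 with an ADL disability among the visually
# impaired vs 10/90 among those with good VA.
or_, lo, hi = odds_ratio_ci(40, 60, 10, 90)
```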
The high prevalence of age-related eye diseases and subsequent visual impairment in the fastest growing segment of the population will result in a substantial increase in the demand for eye care services in the future. Many of the visually impaired, especially older persons with decreased cognitive capacity or living in an institution, have not had a recent vision examination and lack adequate low vision rehabilitation. This highlights the need for regular evaluation of visual function in the elderly and active dissemination of information about rehabilitation services. Decreased VA is strongly associated with functional limitations; even a slight decrease in VA was associated with limited functioning. Thus, continuous efforts are needed to identify and treat eye diseases to maintain patients’ quality of life and to alleviate the social and economic burden of serious eye diseases.
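The odds ratios and 95% confidence intervals reported above follow the standard epidemiological form: the OR is the cross-product of a 2×2 table, and the Wald confidence interval is built on the log scale. As a minimal illustration (not the survey's actual analysis code; the function name and the cell counts below are made up):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table.

    a, b = cases / non-cases in the exposed group;
    c, d = cases / non-cases in the unexposed group.
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: 10/20 with the outcome among the exposed,
# 5/20 among the unexposed.
print(odds_ratio_ci(10, 20, 5, 20))
```

A CI that spans 1.0 (as for the sex comparison, OR 1.20, 95% CI 0.82–1.74) indicates no statistically significant difference.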
Abstract:
Introduction Repaglinide is a short-acting drug used to reduce postprandial hyperglycaemia in type 2 diabetic patients. Repaglinide is extensively metabolised, and its oral bioavailability is about 60%; its metabolites are mainly excreted into bile. In previous studies, the cytochrome P450 (CYP) 3A4 inhibitors itraconazole and clarithromycin have moderately increased the area under the concentration-time curve (AUC) of repaglinide. Gemfibrozil, a CYP2C8 inhibitor, has greatly increased repaglinide AUC, enhancing and prolonging its blood glucose-lowering effect. Rifampicin has decreased the AUC and effects of repaglinide. Aims The aims of this work were to investigate the contribution of CYP2C8 and CYP3A4 to the metabolism of repaglinide, to study other potential drug interactions affecting the pharmacokinetics of repaglinide, and to examine the mechanisms of the observed interactions. Methods The metabolism of repaglinide was studied in vitro using recombinant human CYP enzymes and pooled human liver microsomes (HLM). The effects of trimethoprim, cyclosporine, bezafibrate, fenofibrate, gemfibrozil, and rifampicin on the metabolism of repaglinide, and the effects of fibrates and rifampicin on the activity of CYP2C8 and CYP3A4, were investigated in vitro. Randomised, placebo-controlled cross-over studies were carried out in healthy human volunteers to investigate the effects of bezafibrate, fenofibrate, trimethoprim, cyclosporine, telithromycin, montelukast and pioglitazone on the pharmacokinetics and pharmacodynamics of repaglinide. Pretreatment with clinically relevant doses of the study drug or placebo was followed by a single dose of repaglinide, after which blood and urine samples were collected to determine pharmacokinetic and pharmacodynamic parameters. Results In vitro, the contribution of CYP2C8 to the metabolism of repaglinide was similar to that of CYP3A4 at therapeutic repaglinide concentrations (< 2 μM).
Bezafibrate, fenofibrate, gemfibrozil, and rifampicin moderately inhibited CYP2C8 and repaglinide metabolism, but only rifampicin inhibited CYP3A4 in vitro. Bezafibrate, fenofibrate, montelukast, and pioglitazone had no effect on the pharmacokinetics and pharmacodynamics of repaglinide in vivo. The CYP2C8 inhibitor trimethoprim inhibited repaglinide metabolism by HLM in vitro and increased repaglinide AUC by 61% in vivo (P < .001). The CYP3A4 inhibitor telithromycin increased repaglinide AUC 1.8-fold (P < .001) and enhanced its blood glucose-lowering effect in vivo. Cyclosporine inhibited the CYP3A4-mediated (but not CYP2C8-mediated) metabolism of repaglinide in vitro and increased repaglinide AUC 2.4-fold in vivo (P < .001). The effect of cyclosporine on repaglinide AUC in vivo correlated with the SLCO1B1 (encoding organic anion transporting polypeptide 1, OATP1B1) genotype. Conclusions The relative contributions of CYP2C8 and CYP3A4 to the metabolism of repaglinide are similar in vitro, when therapeutic repaglinide concentrations are used. In vivo, repaglinide AUC was considerably increased by inhibition of both CYP2C8 (by trimethoprim) and CYP3A4 (by telithromycin). Cyclosporine raised repaglinide AUC even higher, probably by inhibiting the CYP3A4-mediated biotransformation and OATP1B1-mediated hepatic uptake of repaglinide. Bezafibrate, fenofibrate, montelukast, and pioglitazone had no effect on the pharmacokinetics of repaglinide, suggesting that they do not significantly inhibit CYP2C8 or CYP3A4 in vivo. Coadministration of drugs that inhibit CYP2C8, CYP3A4 or OATP1B1 may increase the plasma concentrations and blood glucose-lowering effect of repaglinide, requiring closer monitoring of blood glucose concentrations to avoid hypoglycaemia, and adjustment of repaglinide dosage as necessary.
Abstract:
Cervical cancer develops through precursor lesions, i.e. cervical intraepithelial neoplasia (CIN). These can be detected and treated before progression to invasive cancer. The major risk factor for developing cervical cancer or CIN is persistent or recurrent infection with high-risk human papilloma virus (hrHPV). Other associated risk factors include low socioeconomic status, smoking, sexually transmitted infections, and a high number of sexual partners; these risk factors can also predispose to some other cancers, excess mortality, and reproductive health complications. The aim was to study long-term cancer incidence, mortality, and reproductive health outcomes among women treated for CIN. Based on the results, we could evaluate the efficacy and safety of CIN treatment practices and estimate the role of the risk factors of CIN patients in cancer incidence, mortality, and reproductive health. We collected a cohort of 7 599 women treated for CIN at Helsinki University Central Hospital from 1974 to 2001. Information about their cancer incidence, causes of death, births of children and other reproductive endpoints, and socio-economic status was gathered through register linkages to the Finnish Cancer Registry, the Finnish Population Registry, and Statistics Finland. Depending on the endpoints in question, the women treated were compared to the general population, to themselves, or to an age- and municipality-matched reference cohort. Cervical cancer incidence remained elevated for at least 20 years after treatment of CIN, regardless of the grade of histology at treatment. Compared with the colposcopically guided methods, cold knife conization (CKC) was the least effective treatment method in terms of later CIN 3 or cervical cancer incidence. In addition to cervical cancer, the incidence of other HPV-related anogenital cancers was increased among those treated, as was the incidence of lung cancer and other smoking-related cancers.
Mortality from cervical cancer among the women treated was not statistically significantly elevated, and after adjustment for socio-economic status, the hazard ratio (HR) was 1.0. In fact, the excess mortality among those treated was mainly due to increased mortality from other cancers, especially lung cancer. In terms of post-treatment fertility, the CIN treatments seem to be safe: the women had more deliveries, and their incidence of pregnancy was similar before and after treatment. The incidence of extra-uterine pregnancies and induced abortions was elevated among the treated both before and after treatment. Thus, this elevation was not a consequence of the treatment itself but was largely due to the other known risk factors these women had in excess, i.e. sexually transmitted infections. The purpose of any cancer-preventive activity is to reduce cancer incidence and mortality. In Finland, cervical cancer is a rare disease and death from it even rarer, mostly owing to the effective screening program. Despite this, the women treated are at increased risk for cancer, not just cervical cancer. They must be followed up carefully and for a long period of time; general health education, especially smoking cessation, is crucial in the management process, as are interventions promoting the proper use of birth control methods such as condoms.
Abstract:
Pediatric renal transplantation (TX) has evolved greatly during the past few decades, and today TX is considered the standard care for children with end-stage renal disease. In Finland, 191 children had received renal transplants by October 2007, and 42% of them have already reached adulthood. Improvements in the treatment of end-stage renal disease, surgical techniques, intensive care medicine, and immunosuppressive therapy have paved the way to the current highly successful outcomes of pediatric transplantation. In children, the transplanted graft should last for decades, and normal growth and development should be guaranteed. These objectives set considerable requirements for optimizing and fine-tuning the post-operative therapy. Careful optimization of immunosuppressive therapy is crucial in protecting the graft against rejection, but also in protecting the patient against adverse effects of the medication. In the present study, the results of a retrospective investigation into individualized dosing of immunosuppressive medication, based on pharmacokinetic profiles, therapeutic drug monitoring, graft function and histology studies, and glucocorticoid biological activity determinations, are reported. Subgroups of a total of 178 patients, who received renal transplants in 1988–2006, were included in the study. The mean age at TX was 6.5 years, and approximately 26% of the patients were <2 years of age. The most common diagnosis leading to renal TX was congenital nephrosis of the Finnish type (NPHS1). Pediatric patients in Finland receive standard triple immunosuppression consisting of cyclosporine A (CsA), methylprednisolone (MP) and azathioprine (AZA) after renal TX. Optimal dosing of these agents is important to prevent rejections and preserve graft function on the one hand, and to avoid potentially serious adverse effects on the other. CsA has a narrow therapeutic window and individually variable pharmacokinetics.
Therapeutic monitoring of CsA is, therefore, mandatory. Traditionally, CsA monitoring has been based on pre-dose trough levels (C0), but recent pharmacokinetic and clinical studies have revealed that the immunosuppressive effect may be related to diurnal CsA exposure and blood CsA concentration 0–4 hours after dosing. The two-hour post-dose concentration (C2) has proved a reliable surrogate marker of CsA exposure. Individual starting doses of CsA were analyzed in 65 patients. A recommended dose based on a pre-TX pharmacokinetic study was calculated for each patient according to the pre-TX protocol. The predicted dose was clearly higher in the youngest children than in the older ones (22.9±10.4 and 10.5±5.1 mg/kg/d in patients <2 and >8 years of age, respectively). The actually administered oral doses of CsA were recorded for three weeks after TX and compared to the pharmacokinetically predicted dose. After the TX, dosing of CsA was adjusted according to clinical parameters and the blood CsA trough concentration. The pharmacokinetically predicted dose and patient age were the two significant parameters explaining post-TX doses of CsA. Accordingly, young children received significantly higher oral doses of CsA than the older ones. The correlation to the actually administered doses after TX was best in those patients who had a predicted dose clearly higher or lower (> ±25%) than the average in their age group. Due to the great individual variation in pharmacokinetics, standardized dosing of CsA (based on body mass or surface area) may not be adequate. Pre-TX profiles are helpful in determining suitable initial CsA doses. CsA monitoring based on trough and C2 concentrations was analyzed in 47 patients, who received renal transplants in 2001–2006. C0 and C2 concentrations and acute rejection episodes were recorded during the post-TX hospitalization, and also three months after TX when the first protocol core biopsy was obtained.
The patients who remained rejection-free had slightly higher C2 concentrations, especially very early after TX. However, after the first two weeks the trough level was also higher in the rejection-free patients than in those with acute rejections. Three months after TX, the trough level was higher in patients with normal histology than in those with rejection changes in the routine biopsy. Monitoring of both the trough level and C2 may thus be warranted to guarantee a sufficient peak concentration and baseline immunosuppression on the one hand, and to avoid over-exposure on the other. Control of rejection in the early months after transplantation is crucial, as it may contribute to the development of chronic allograft nephropathy. Recently, it has become evident that immunoactivation fulfilling the histological criteria of acute rejection is possible in a well-functioning graft with no clinical signs or laboratory perturbations. The influence of treatment of subclinical rejection, diagnosed in the 3-month protocol biopsy, on graft function and histology 18 months after TX was analyzed in 22 patients and compared to 35 historical control patients. The incidence of subclinical rejection at three months was 43%, and the patients received a standard rejection treatment (a course of increased MP) and/or increased baseline immunosuppression, depending on the severity of rejection and graft function. Glomerular filtration rate (GFR) at 18 months was significantly better in the patients who were screened and treated for subclinical rejection than in the historical patients (86.7±22.5 vs. 67.9±31.9 ml/min/1.73m2, respectively). The improvement was most remarkable in the youngest (<2 years) age group (94.1±11.0 vs. 67.9±26.8 ml/min/1.73m2). Histological findings of chronic allograft nephropathy were also more common in the historical patients in the 18-month protocol biopsy. All pediatric renal TX patients receive MP as a part of the baseline immunosuppression.
Although the maintenance dose of MP is very low in the majority of the patients, the well-known steroid-related adverse effects are not uncommon. It has been shown in a previous study of Finnish pediatric TX patients that steroid exposure, measured as the area under the concentration-time curve (AUC), rather than the dose, correlates with the adverse effects. In the present study, MP AUC was measured in sixteen stable maintenance patients, and a correlation with excess weight gain during the 12 months after TX, as well as with height deficit, was found. A novel bioassay measuring the activation of the glucocorticoid receptor-dependent transcription cascade was also employed to assess the biological effect of MP. Glucocorticoid bioactivity was found to be related to the adverse effects, although the relationship was not as apparent as that with serum MP concentration. The findings in this study support individualized monitoring and adjustment of immunosuppression based on pharmacokinetics, graft function and histology. Pharmacokinetic profiles are helpful in estimating drug exposure and thus identifying the patients who might be at risk for excessive or insufficient immunosuppression. Individualized doses and monitoring of blood concentrations should definitely be employed with CsA, but possibly also with steroids. As an alternative to complete steroid withdrawal, individualized dosing based on drug exposure monitoring might help in avoiding the adverse effects. Early screening and treatment of subclinical immunoactivation is beneficial, as it improves the prospects of good long-term graft function.
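Drug exposure expressed as AUC, which recurs throughout these abstracts, is in practice computed from a series of sampled blood concentrations, most simply with the linear trapezoidal rule. A minimal sketch (illustrative only; the function name, units, and sampling schedule below are assumptions, not the study's actual assay pipeline):

```python
def auc_trapezoid(times, concs):
    """Linear trapezoidal AUC from sampled drug concentrations.

    times: sampling times in hours, strictly increasing;
    concs: measured concentrations at those times.
    Returns AUC in concentration x hour units.
    """
    return sum((t1 - t0) * (c0 + c1) / 2.0
               for (t0, c0), (t1, c1)
               in zip(zip(times, concs), zip(times[1:], concs[1:])))

# Hypothetical 0-4 h concentration profile:
print(auc_trapezoid([0, 1, 2, 4], [0.0, 10.0, 6.0, 0.0]))  # → 19.0
```

Richer sampling between the peak and trough narrows the gap between this piecewise-linear estimate and the true exposure, which is why dose-interval curves rather than single trough levels are used when exposure itself is the quantity of interest.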
Abstract:
Cyclosporine is an immunosuppressant drug with a narrow therapeutic index and large variability in pharmacokinetics. To improve cyclosporine dose individualization in children, we used population pharmacokinetic modeling to study the effects of developmental, clinical, and genetic factors on cyclosporine pharmacokinetics in altogether 176 subjects (age range: 0.36–20.2 years) before and up to 16 years after renal transplantation. Pre-transplantation test doses of cyclosporine were given intravenously (3 mg/kg) and orally (10 mg/kg), on separate occasions, followed by blood sampling for 24 hours (n=175). After transplantation, in a total of 137 patients, cyclosporine concentration was quantified at trough, two hours post-dose, or with dose-interval curves. One hundred four of the studied patients were genotyped for 17 putatively functionally significant sequence variations in the ABCB1, SLCO1B1, ABCC2, CYP3A4, CYP3A5, and NR1I2 genes. Pharmacokinetic modeling was performed with the nonlinear mixed-effects modeling computer program NONMEM. A 3-compartment population pharmacokinetic model with first-order absorption without lag time was used to describe the data. The most important covariate affecting systemic clearance and distribution volume was allometrically scaled body weight, i.e., body weight^3/4 for clearance and absolute body weight for volume of distribution. The clearance adjusted by absolute body weight declined with age, and pre-pubertal children (< 8 years) had an approximately 25% higher clearance/body weight (L/h/kg) than older children. Adjustment of clearance for allometric body weight removed its relationship to age after the first year of life. This finding is consistent with a gradual reduction in relative liver size towards adult values, and a relatively constant CYP3A content in the liver from about 6–12 months of age to adulthood.
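The allometric covariate model described above can be sketched as follows; the standard-adult parameter values and the 70 kg reference weight are illustrative assumptions, as the abstract specifies only the scaling exponents:

```python
def allometric_clearance(cl_std, weight_kg, ref_kg=70.0):
    # Clearance scales with (body weight)^(3/4), per the covariate model
    return cl_std * (weight_kg / ref_kg) ** 0.75

def allometric_volume(v_std, weight_kg, ref_kg=70.0):
    # Volume of distribution scales linearly with absolute body weight
    return v_std * (weight_kg / ref_kg)

# With this scaling, a 17.5 kg child has a higher per-kilogram clearance
# than a 70 kg adult, matching the ~25% higher CL/kg seen in young children:
adult_cl_per_kg = allometric_clearance(20.0, 70.0) / 70.0
child_cl_per_kg = allometric_clearance(20.0, 17.5) / 17.5
print(child_cl_per_kg > adult_cl_per_kg)  # → True
```

Because clearance grows more slowly than weight (exponent 3/4) while volume grows linearly (exponent 1), clearance per kilogram necessarily falls as body weight increases, which is why allometric adjustment removes the apparent age effect.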
The other significant covariates affecting cyclosporine clearance and volume of distribution were hematocrit, plasma cholesterol, and serum creatinine, explaining up to 20%–30% of inter-individual differences before transplantation. After transplantation, their predictive role was smaller, as the variations in hematocrit, plasma cholesterol, and serum creatinine were also smaller. Before transplantation, no clinical or demographic covariates were found to affect oral bioavailability, and no systematic age-related changes in oral bioavailability were observed. After transplantation, older children receiving cyclosporine twice daily as the gelatine capsule microemulsion formulation had approximately 1.25–1.3 times higher bioavailability than the younger children receiving the liquid microemulsion formulation thrice daily. Moreover, cyclosporine oral bioavailability increased over 1.5-fold in the first month after transplantation, thereafter returning gradually to its initial value over 1–1.5 years. The largest cyclosporine doses were administered in the first 3–6 months after transplantation, and thereafter the single doses of cyclosporine were often smaller than 3 mg/kg. Thus, the results suggest that cyclosporine displays dose-dependent, saturable pre-systemic metabolism even at low single doses, whereas complete saturation of CYP3A4 and MDR1 (P-glycoprotein) renders cyclosporine pharmacokinetics dose-linear at higher doses. No significant associations were found between genetic polymorphisms and cyclosporine pharmacokinetics before transplantation in the whole population for which genetic data were available (n=104). However, in children older than eight years (n=22), heterozygous and homozygous carriers of the ABCB1 c.2677T or c.1236T alleles had approximately 1.3 or 1.6 times higher oral bioavailability, respectively, than non-carriers.
After transplantation, none of the ABCB1 SNPs or any other SNPs were found to be associated with cyclosporine clearance or oral bioavailability in the whole population, in the patients older than eight years, or in the patients younger than eight years. However, in patients carrying the NR1I2 g.-25385C–g.-24381A–g.-205_-200GAGAAG–g.7635G–g.8055C haplotype, the bioavailability of cyclosporine was about one tenth lower, per allele, than in non-carriers. This effect was also significant in a subgroup of patients older than eight years. Furthermore, in patients carrying the NR1I2 g.-25385C–g.-24381A–g.-205_-200GAGAAG–g.7635G–g.8055T haplotype, the bioavailability was almost one fifth higher, per allele, than in non-carriers. It may be possible to improve the individualization of cyclosporine dosing in children by accounting for the effects of developmental factors (body weight, liver size), time after transplantation, and cyclosporine dosing frequency/formulation. Further studies are required on the predictive value of genotyping for the individualization of cyclosporine dosing in children.