873 results for Dyadic Adjustment
Abstract:
This study is one part of a collaborative depression research project, the Vantaa Depression Study (VDS), involving the Department of Mental and Alcohol Research of the National Public Health Institute, Helsinki, and the Department of Psychiatry of the Peijas Medical Care District (PMCD), Vantaa, Finland. The VDS includes two parts: a record-based study of 803 patients and a prospective, naturalistic cohort study of 269 patients. Both studies include secondary-level care psychiatric out- and inpatients with a new episode of major depressive disorder (MDD). Data for the record-based part of the study came from a computerised patient database incorporating all outpatient visits as well as treatment periods at the inpatient unit. We included all patients aged 20 to 59 years who had been assigned a clinical diagnosis of depressive episode or recurrent depressive disorder according to the International Classification of Diseases, 10th edition (ICD-10) criteria and who had at least one outpatient visit or day as an inpatient in the PMCD during the study period January 1, 1996, to December 31, 1996. All those with an earlier diagnosis of schizophrenia, other non-affective psychosis, or bipolar disorder were excluded, as were patients treated in the somatic departments of Peijas Hospital and those who had consulted but not received treatment from the psychiatric consultation services. The study sample comprised 290 male and 513 female patients. All their psychiatric records were reviewed, and a structured 57-item form was completed for each patient. The treatment provided was reviewed up to the end of the depressive episode or to the end of 1997. Most (84%) of the patients received antidepressants, including a minority (11%) treated with clearly subtherapeutic doses. During the treatment period the depressed patients investigated averaged only a few visits to psychiatrists (median two visits), but more to other health professionals (median seven). One-fifth of both genders were inpatients, with a mean of nearly two inpatient treatment periods during the overall treatment period investigated. The median length of a hospital stay was 2 weeks. Use of antidepressants was quite conservative: the first antidepressant had been switched to another compound in only about one-fifth (22%) of patients, and only two patients had received up to five antidepressant trials. Only 7% of those prescribed any antidepressant received two antidepressants simultaneously, and none of the patients was prescribed any other augmentation medication. Refusing antidepressant treatment was the most common explanation for receiving no antidepressants. During the treatment period, 19% of those not already receiving a disability pension were granted one due to psychiatric illness. These patients were nearly nine years older than those not pensioned. They were also more severely ill, made significantly more visits to professionals, and received significantly more concomitant medications (hypnotics, anxiolytics, and neuroleptics) than did those receiving no pension. In the prospective part of the VDS, 806 adult patients (aged 20-59 years) in the PMCD were screened for a possible new episode of DSM-IV MDD. Of these, 542 patients were interviewed face-to-face with the WHO Schedules for Clinical Assessment in Neuropsychiatry (SCAN), Version 2.0. Exclusion criteria were the same as in the record-based part of the VDS. Of the 542 interviewed, 269 patients fulfilled the criteria of a DSM-IV major depressive episode (MDE).
This study investigated factors associated with patients' functional disability, social adjustment, and work disability (being on sick-leave or being granted a disability pension). At the beginning of treatment, the most important single factor associated with overall social and functional disability was severity of depression, but older age and personality disorders also contributed significantly. Total duration and severity of depression, phobic disorders, alcoholism, and personality disorders all independently contributed to poor social adjustment. Of those who were employed, almost half (43%) were on sick-leave. Besides severity and number of episodes of depression, female gender and age over 50 years strongly and independently predicted being on sick-leave. Factors influencing social and occupational disability and social adjustment among patients with MDD were studied prospectively during an 18-month follow-up period. Patients' functional disability and social adjustment improved during the follow-up concurrently with recovery from depression. The current level of functioning and social adjustment of a patient with depression was predicted by severity of depression, recurrence before baseline and during follow-up, lack of full remission, and time spent depressed. Comorbid psychiatric disorders, personality traits (neuroticism), and perceived social support also had a significant influence. During the 18-month follow-up period, 13 (5%) of the 269 patients switched to bipolar disorder, and 58 (20%) dropped out. Of the remaining 198 patients, 186 (94%) were not pensioned at baseline, and these were investigated. Of them, 21 were granted a disability pension during the follow-up. Those who received a pension were significantly older, less often had vocational education, and were more often on sick-leave than those not pensioned, but did not differ with regard to any other sociodemographic or clinical factor. Patients with MDD received mostly adequate antidepressant treatment, but problems existed in treatment intensity and monitoring. It is challenging to find those at greatest risk for disability and to provide them with adequate and efficacious treatment. This also poses a great challenge to society as a whole to provide sufficient resources.
Abstract:
Pioglitazone is a thiazolidinedione compound used in the treatment of type 2 diabetes. It has been reported to be metabolised by multiple cytochrome P450 (CYP) enzymes, including CYP2C8, CYP2C9 and CYP3A4 in vitro. The aims of this work were to identify the CYP enzymes mainly responsible for the elimination of pioglitazone in order to evaluate its potential for in vivo drug interactions, and to investigate the effects of CYP2C8- and CYP3A4-inhibiting drugs (gemfibrozil, montelukast, zafirlukast and itraconazole) on the pharmacokinetics of pioglitazone in healthy volunteers. In addition, the effect of induction of CYP enzymes on the pharmacokinetics of pioglitazone in healthy volunteers was investigated, with rifampicin as a model inducer. Finally, the effect of pioglitazone on CYP2C8 and CYP3A enzyme activity was examined in healthy volunteers using repaglinide as a model substrate. Study I was conducted in vitro using pooled human liver microsomes (HLM) and human recombinant CYP isoforms. Studies II to V were randomised, placebo-controlled cross-over studies with 2-4 phases each. A total of 10-12 healthy volunteers participated in each study. Pretreatment with clinically relevant doses of the inhibitor or inducer was followed by a single dose of pioglitazone or repaglinide, after which blood and urine samples were collected for the determination of drug concentrations. In vitro, the elimination of pioglitazone (1 µM) by HLM was markedly inhibited, in particular by CYP2C8 inhibitors, but also by CYP3A4 inhibitors. Of the recombinant CYP isoforms, CYP2C8 metabolised pioglitazone markedly, and CYP3A4 also had a significant effect. All of the tested CYP2C8 inhibitors (montelukast, zafirlukast, trimethoprim and gemfibrozil) concentration-dependently inhibited pioglitazone metabolism in HLM. In humans, gemfibrozil raised the area under the plasma concentration-time curve (AUC) of pioglitazone 3.2-fold (P < 0.001) and prolonged its elimination half-life (t½) from 8.3 to 22.7 hours (P < 0.001), but had no significant effect on its peak concentration (Cmax) compared with placebo. Gemfibrozil also increased the excretion of pioglitazone into urine and reduced the ratios of the active metabolites M-IV and M-III to pioglitazone in plasma and urine. Itraconazole had no significant effect on the pharmacokinetics of pioglitazone and did not alter the effect of gemfibrozil on pioglitazone pharmacokinetics. Rifampicin decreased the AUC of pioglitazone by 54% (P < 0.001) and shortened its dominant t½ from 4.9 to 2.3 hours (P < 0.001), with no significant effect on Cmax. Rifampicin also decreased the AUC of the metabolites M-IV and M-III, shortened their t½, and increased the metabolite-to-pioglitazone ratios in plasma and urine. Montelukast and zafirlukast did not affect the pharmacokinetics of pioglitazone, and the pharmacokinetics of repaglinide remained unaffected by pioglitazone. These studies demonstrate the principal role of CYP2C8 in the metabolism of pioglitazone in humans. Gemfibrozil, an inhibitor of CYP2C8, increases, and rifampicin, an inducer of CYP2C8 and other CYP enzymes, decreases the plasma concentrations of pioglitazone, which can necessitate blood glucose monitoring and adjustment of pioglitazone dosage. Montelukast and zafirlukast had no effect on the pharmacokinetics of pioglitazone, indicating that their inhibitory effect on CYP2C8 is negligible in vivo.
Pioglitazone did not increase the plasma concentrations of repaglinide, indicating that its inhibitory effect on CYP2C8 and CYP3A4 is very weak in vivo.
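The AUC and half-life figures above are standard noncompartmental pharmacokinetic estimates. As a generic illustration only (the abstract does not specify the actual software or estimation methods used in the thesis), a minimal Python sketch of trapezoidal AUC and log-linear terminal half-life estimation:

    import numpy as np

    def auc_trapezoidal(t, c):
        """AUC from time zero to the last sampling time by the linear
        trapezoidal rule. t: sampling times (h); c: plasma concentrations."""
        return np.trapz(c, t)

    def terminal_half_life(t, c, n_terminal=4):
        """t1/2 from a log-linear fit to the last n_terminal points."""
        t_term = np.asarray(t[-n_terminal:], dtype=float)
        c_term = np.asarray(c[-n_terminal:], dtype=float)
        slope, _ = np.polyfit(t_term, np.log(c_term), 1)
        return np.log(2.0) / -slope

With oral dosing, apparent oral clearance is CL/F = dose/AUC, so a 3.2-fold increase in AUC (as seen with gemfibrozil above) corresponds to an approximately 3.2-fold decrease in CL/F.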
Abstract:
Topoisomerase II (topo II) is a dyadic enzyme found in all eukaryotic cells. Topo II is involved in a number of cellular processes related to DNA metabolism, including DNA replication, recombination, and the maintenance of genomic stability. We discovered a correlation between the development of the postnatal testis and increased binding of topo IIalpha to the chromatin fraction. We used this observation to characterize the DNA-binding specificity and catalytic properties of purified testis topo IIalpha. The results indicate that topo IIalpha binds a substrate containing the preferred site with greater affinity and, consequently, catalyzes the conversion of form I to form IV DNA more efficiently than with substrates lacking such a site. Interestingly, topo IIalpha displayed high affinity and cooperativity in binding to the scaffold-associated region. In contrast to the preferred site, however, high-affinity binding of topo IIalpha to the scaffold-associated region failed to result in enhanced catalytic activity. Intriguingly, competition assays involving the scaffold-associated region revealed an additional DNA-binding site within the dyadic topo IIalpha. These results implicate a dual role for topo IIalpha in vivo, consistent with the notion that its sequestration to the chromatin might play a role in chromosome condensation and decondensation during spermatogenesis.
Abstract:
The purpose of this study was to estimate the prevalence and distribution of reduced visual acuity, major chronic eye diseases, and the subsequent need for eye care services in the Finnish adult population comprising persons aged 30 years and older. In addition, we analyzed the effect of decreased vision on functioning and need for assistance using the World Health Organization's (WHO) International Classification of Functioning, Disability, and Health (ICF) as a framework. The study was based on the Health 2000 health examination survey, a nationally representative population-based comprehensive survey of health and functional capacity carried out in 2000 to 2001 in Finland. The study sample representing the Finnish population aged 30 years and older was drawn by two-stage stratified cluster sampling. The Health 2000 survey included a home interview and a comprehensive health examination conducted at a nearby screening center. If the invited participants did not attend, an abridged examination was conducted at home or in an institution. Based on our findings, the great majority (96%) of Finnish adults had at least moderate visual acuity (VA ≥ 0.5) with current refraction correction, if any. However, in the age group 75–84 years the prevalence decreased to 81%, and after 85 years to 46%. In the population aged 30 years and older, the prevalence of habitual visual impairment (VA ≤ 0.25) was 1.6%, and 0.5% were blind (VA < 0.1). The prevalence of visual impairment increased significantly with age (p < 0.001), and after the age of 65 years the increase was sharp. Visual impairment was equally common in both sexes (OR 1.20, 95% CI 0.82 – 1.74). Based on self-reported and/or register-based data, the estimated total prevalences of cataract, glaucoma, age-related maculopathy (ARM), and diabetic retinopathy (DR) in the study population were 10%, 5%, 4%, and 1%, respectively. The prevalence of all of these chronic eye diseases increased with age (p < 0.001). Cataract and glaucoma were more common in women than in men (OR 1.55, 95% CI 1.26 – 1.91 and OR 1.57, 95% CI 1.24 – 1.98, respectively). The most prevalent eye diseases in people with visual impairment (VA ≤ 0.25) were ARM (37%), unoperated cataract (27%), glaucoma (22%), and DR (7%). More than half (58%) of visually impaired people had had a vision examination during the past five years, and 79% had received some vision rehabilitation services, mainly in the form of spectacles (70%). Only one-third (31%) had received formal low vision rehabilitation (i.e., fitting of low vision aids, patient education, training for orientation and mobility, training for activities of daily living (ADL), or consultation with a social worker). People with low vision (VA 0.1 – 0.25) were less likely to have received formal low vision rehabilitation, magnifying glasses, or other low vision aids than blind people (VA < 0.1). Furthermore, low cognitive capacity and living in an institution were associated with limited use of vision rehabilitation services. Of the visually impaired living in the community, 71% reported a need for assistance and 24% had an unmet need for assistance in everyday activities. The prevalence of limitations in ADL, instrumental activities of daily living (IADL), and mobility increased with decreasing VA (p < 0.001).
Visually impaired persons (VA ≤ 0.25) were four times more likely to have ADL disabilities than those with good VA (VA ≥ 0.8) after adjustment for sociodemographic and behavioral factors and chronic conditions (OR 4.36, 95% CI 2.44 – 7.78). Limitations in IADL and measured mobility were five times as likely (OR 4.82, 95% CI 2.38 – 9.76 and OR 5.37, 95% CI 2.44 – 7.78, respectively) and self-reported mobility limitations were three times as likely (OR 3.07, 95% CI 1.67 – 9.63) as in persons with good VA. The high prevalence of age-related eye diseases and subsequent visual impairment in the fastest growing segment of the population will result in a substantial increase in the demand for eye care services in the future. Many of the visually impaired, especially older persons with decreased cognitive capacity or living in an institution, have not had a recent vision examination and lack adequate low vision rehabilitation. This highlights the need for regular evaluation of visual function in the elderly and an active dissemination of information about rehabilitation services. Decreased VA is strongly associated with functional limitations, and even a slight decrease in VA was found to be associated with limited functioning. Thus, continuous efforts are needed to identify and treat eye diseases to maintain patients’ quality of life and to alleviate the social and economic burden of serious eye diseases.
Abstract:
We present a signal processing approach using discrete wavelet transform (DWT) for the generation of complex synthetic aperture radar (SAR) images at an arbitrary number of dyadic scales of resolution. The method is computationally efficient and is free from significant system-imposed limitations present in traditional subaperture-based multiresolution image formation. Problems due to aliasing associated with biorthogonal decomposition of the complex signals are addressed. The lifting scheme of DWT is adapted to handle complex signal approximations and employed to further enhance the computational efficiency. Multiresolution SAR images formed by the proposed method are presented.
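As an illustration of dyadic multiresolution image formation (a generic sketch only; it does not reproduce the paper's alias-handling or its complex lifting scheme), successive approximation bands of a 2-D DWT yield images at halved resolution per level. The wavelet choice and normalization below are assumptions:

    import numpy as np
    import pywt  # PyWavelets

    def dyadic_sar_pyramid(slc, wavelet="bior4.4", levels=3):
        """Reduced-resolution versions of a complex SAR image at dyadic
        scales, keeping only the approximation (LL) band at each level.
        Real and imaginary parts are transformed separately for clarity;
        since the DWT filters are real-valued and the transform is linear,
        this is equivalent to transforming the complex array."""
        re, im = np.real(slc), np.imag(slc)
        pyramid = []
        for _ in range(levels):
            re = pywt.dwt2(re, wavelet)[0]  # LL band only
            im = pywt.dwt2(im, wavelet)[0]
            pyramid.append((re + 1j * im) / 2.0)  # rough amplitude renormalization
        return pyramid

Each level halves the resolution in range and azimuth; a lifting formulation of the same transform, as adapted in the paper, further reduces the arithmetic cost.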
Abstract:
Introduction: Repaglinide is a short-acting drug used to reduce postprandial hyperglycaemia in patients with type 2 diabetes. Repaglinide is extensively metabolised, and its oral bioavailability is about 60%; its metabolites are mainly excreted into bile. In previous studies, the cytochrome P450 (CYP) 3A4 inhibitors itraconazole and clarithromycin have moderately increased the area under the concentration-time curve (AUC) of repaglinide. Gemfibrozil, a CYP2C8 inhibitor, has greatly increased repaglinide AUC, enhancing and prolonging its blood glucose-lowering effect. Rifampicin has decreased the AUC and effects of repaglinide. Aims: The aims of this work were to investigate the contribution of CYP2C8 and CYP3A4 to the metabolism of repaglinide, to study other potential drug interactions affecting the pharmacokinetics of repaglinide, and to clarify the mechanisms of the observed interactions. Methods: The metabolism of repaglinide was studied in vitro using recombinant human CYP enzymes and pooled human liver microsomes (HLM). The effects of trimethoprim, cyclosporine, bezafibrate, fenofibrate, gemfibrozil, and rifampicin on the metabolism of repaglinide, and the effects of fibrates and rifampicin on the activity of CYP2C8 and CYP3A4, were investigated in vitro. Randomised, placebo-controlled cross-over studies were carried out in healthy human volunteers to investigate the effects of bezafibrate, fenofibrate, trimethoprim, cyclosporine, telithromycin, montelukast and pioglitazone on the pharmacokinetics and pharmacodynamics of repaglinide. Pretreatment with clinically relevant doses of the study drug or placebo was followed by a single dose of repaglinide, after which blood and urine samples were collected to determine pharmacokinetic and pharmacodynamic parameters. Results: In vitro, the contribution of CYP2C8 to the metabolism of repaglinide was similar to that of CYP3A4 at therapeutic concentrations (< 2 μM). Bezafibrate, fenofibrate, gemfibrozil, and rifampicin moderately inhibited CYP2C8 and repaglinide metabolism, but only rifampicin inhibited CYP3A4 in vitro. Bezafibrate, fenofibrate, montelukast, and pioglitazone had no effect on the pharmacokinetics and pharmacodynamics of repaglinide in vivo. The CYP2C8 inhibitor trimethoprim inhibited repaglinide metabolism by HLM in vitro and increased repaglinide AUC by 61% in vivo (P < 0.001). The CYP3A4 inhibitor telithromycin increased repaglinide AUC 1.8-fold (P < 0.001) and enhanced its blood glucose-lowering effect in vivo. Cyclosporine inhibited the CYP3A4-mediated (but not CYP2C8-mediated) metabolism of repaglinide in vitro and increased repaglinide AUC 2.4-fold in vivo (P < 0.001). The effect of cyclosporine on repaglinide AUC in vivo correlated with the SLCO1B1 (encoding organic anion transporting polypeptide 1B1, OATP1B1) genotype. Conclusions: The relative contributions of CYP2C8 and CYP3A4 to the metabolism of repaglinide are similar in vitro at therapeutic repaglinide concentrations. In vivo, repaglinide AUC was considerably increased by inhibition of CYP2C8 (by trimethoprim) as well as of CYP3A4 (by telithromycin). Cyclosporine raised repaglinide AUC even higher, probably by inhibiting both the CYP3A4-mediated biotransformation and the OATP1B1-mediated hepatic uptake of repaglinide. Bezafibrate, fenofibrate, montelukast, and pioglitazone had no effect on the pharmacokinetics of repaglinide, suggesting that they do not significantly inhibit CYP2C8 or CYP3A4 in vivo.
Coadministration of drugs that inhibit CYP2C8, CYP3A4 or OATP1B1 may increase the plasma concentrations and blood glucose-lowering effect of repaglinide, requiring closer monitoring of blood glucose concentrations to avoid hypoglycaemia, and adjustment of repaglinide dosage as necessary.
Abstract:
Cervical cancer develops through precursor lesions, i.e. cervical intraepithelial neoplasia (CIN). These can be detected and treated before progression to invasive cancer. The major risk factor for developing cervical cancer or CIN is persistent or recurrent infection with high-risk human papilloma virus (hrHPV). Other associated risk factors include low socioeconomic status, smoking, sexually transmitted infections, and a high number of sexual partners; these risk factors can also predispose to some other cancers, excess mortality, and reproductive health complications. The aim was to study long-term cancer incidence, mortality, and reproductive health outcomes among women treated for CIN. Based on the results, we could evaluate the efficacy and safety of CIN treatment practices and estimate the role of the risk factors of CIN patients in cancer incidence, mortality, and reproductive health. We collected a cohort of 7 599 women treated for CIN at Helsinki University Central Hospital from 1974 to 2001. Information about their cancer incidence, causes of death, births of children and other reproductive endpoints, and socio-economic status was gathered through register linkages to the Finnish Cancer Registry, the Finnish Population Registry, and Statistics Finland. Depending on the endpoints in question, the women treated were compared to the general population, to themselves, or to an age- and municipality-matched reference cohort. Cervical cancer incidence was increased after treatment of CIN for at least 20 years, regardless of the grade of histology at treatment. Compared to the colposcopically guided methods, cold knife conization (CKC) was the least effective method of treatment in terms of later CIN 3 or cervical cancer incidence. In addition to cervical cancer, the incidence of other HPV-related anogenital cancers was increased among those treated, as was the incidence of lung cancer and other smoking-related cancers. Mortality from cervical cancer among the women treated was not statistically significantly elevated, and after adjustment for socio-economic status the hazard ratio (HR) was 1.0. In fact, the excess mortality among those treated was mainly due to increased mortality from other cancers, especially from lung cancer. In terms of post-treatment fertility, the CIN treatments seem to be safe: the women had more deliveries, and their incidence of pregnancy was similar before and after treatment. The incidence of extra-uterine pregnancies and induced abortions was elevated among the treated both before and after treatment. Thus, this elevation was not a consequence of the treatment itself but was to a great extent due to the other known risk factors these women had in excess, e.g. sexually transmitted infections. The purpose of any cancer preventive activity is to reduce cancer incidence and mortality. In Finland, cervical cancer is a rare disease and death from it even rarer, mostly owing to the effective screening program. Despite this, the women treated are at increased risk for cancer, and not just for cervical cancer. They must be followed up carefully and for a long period of time; general health education, especially cessation of smoking, is crucial in the management process, as are interventions promoting proper use of birth control such as condoms.
Abstract:
Pediatric renal transplantation (TX) has evolved greatly during the past few decades, and today TX is considered the standard care for children with end-stage renal disease. In Finland, 191 children had received renal transplants by October 2007, and 42% of them have already reached adulthood. Improvements in the treatment of end-stage renal disease, surgical techniques, intensive care medicine, and immunosuppressive therapy have paved the way to the current highly successful outcomes of pediatric transplantation. In children, the transplanted graft should last for decades, and normal growth and development should be guaranteed. These objectives set considerable requirements on optimizing and fine-tuning the post-operative therapy. Careful optimization of immunosuppressive therapy is crucial in protecting the graft against rejection, but also in protecting the patient against adverse effects of the medication. In the present study, the results of a retrospective investigation into individualized dosing of immunosuppressive medication, based on pharmacokinetic profiles, therapeutic drug monitoring, graft function and histology studies, and glucocorticoid biological activity determinations, are reported. Subgroups of a total of 178 patients, who received renal transplants in 1988–2006, were included in the study. The mean age at TX was 6.5 years, and approximately 26% of the patients were <2 years of age. The most common diagnosis leading to renal TX was congenital nephrosis of the Finnish type (NPHS1). Pediatric patients in Finland receive standard triple immunosuppression consisting of cyclosporine A (CsA), methylprednisolone (MP) and azathioprine (AZA) after renal TX. Optimal dosing of these agents is important to prevent rejections and preserve graft function on the one hand, and to avoid the potentially serious adverse effects on the other. CsA has a narrow therapeutic window and individually variable pharmacokinetics, and therapeutic monitoring of CsA is therefore mandatory. Traditionally, CsA monitoring has been based on pre-dose trough levels (C0), but recent pharmacokinetic and clinical studies have revealed that the immunosuppressive effect may be related to diurnal CsA exposure and to the blood CsA concentration 0-4 hours after dosing. The two-hour post-dose concentration (C2) has proved a reliable surrogate marker of CsA exposure. Individual starting doses of CsA were analyzed in 65 patients. A recommended dose based on a pre-TX pharmacokinetic study was calculated for each patient by the pre-TX protocol. The predicted dose was clearly higher in the youngest children than in the older ones (22.9±10.4 and 10.5±5.1 mg/kg/d in patients <2 and >8 years of age, respectively). The actually administered oral doses of CsA were collected for three weeks after TX and compared to the pharmacokinetically predicted dose. After the TX, dosing of CsA was adjusted according to clinical parameters and the blood CsA trough concentration. The pharmacokinetically predicted dose and patient age were the two significant parameters explaining post-TX doses of CsA. Accordingly, young children received significantly higher oral doses of CsA than the older ones. The correlation with the actually administered doses after TX was best in those patients who had a predicted dose clearly higher or lower (> ±25%) than the average in their age-group. Due to the great individual variation in pharmacokinetics, standardized dosing of CsA (based on body mass or surface area) may not be adequate.
Pre-TX profiles are helpful in determining suitable initial CsA doses. CsA monitoring based on trough and C2 concentrations was analyzed in 47 patients who received renal transplants in 2001–2006. C0 and C2 concentrations and acute rejection episodes were recorded during the post-TX hospitalization, and also three months after TX, when the first protocol core biopsy was obtained. The patients who remained rejection-free had slightly higher C2 concentrations, especially very early after TX. However, after the first two weeks the trough level was also higher in the rejection-free patients than in those with acute rejections. Three months after TX the trough level was higher in patients with normal histology than in those with rejection changes in the routine biopsy. Monitoring of both the trough level and C2 may thus be warranted to guarantee a sufficient peak concentration and baseline immunosuppression on the one hand, and to avoid over-exposure on the other. Controlling rejection in the early months after transplantation is crucial, as it may contribute to the development of long-term allograft nephropathy. Recently, it has become evident that immunoactivation fulfilling the histological criteria of acute rejection is possible in a well-functioning graft with no clinical signs or laboratory perturbations. The influence of treating subclinical rejection, diagnosed in the 3-month protocol biopsy, on graft function and histology 18 months after TX was analyzed in 22 patients and compared to 35 historical control patients. The incidence of subclinical rejection at three months was 43%, and the patients received a standard rejection treatment (a course of increased MP) and/or increased baseline immunosuppression, depending on the severity of rejection and graft function. Glomerular filtration rate (GFR) at 18 months was significantly better in the patients who were screened and treated for subclinical rejection than in the historical patients (86.7±22.5 vs. 67.9±31.9 ml/min/1.73m2, respectively). The improvement was most remarkable in the youngest (<2 years) age group (94.1±11.0 vs. 67.9±26.8 ml/min/1.73m2). Histological findings of chronic allograft nephropathy were also more common in the historical patients in the 18-month protocol biopsy. All pediatric renal TX patients receive MP as a part of the baseline immunosuppression. Although the maintenance dose of MP is very low in the majority of the patients, the well-known steroid-related adverse effects are not uncommon. A previous study in Finnish pediatric TX patients showed that steroid exposure, measured as the area under the concentration-time curve (AUC), rather than the dose, correlates with the adverse effects. In the present study, MP AUC was measured in sixteen stable maintenance patients, and a correlation with excess weight gain during the 12 months after TX, as well as with height deficit, was found. A novel bioassay measuring the activation of the glucocorticoid receptor-dependent transcription cascade was also employed to assess the biological effect of MP. Glucocorticoid bioactivity was found to be related to the adverse effects, although the relationship was not as apparent as that with serum MP concentration. The findings in this study support individualized monitoring and adjustment of immunosuppression based on pharmacokinetics, graft function, and histology. Pharmacokinetic profiles are helpful in estimating drug exposure and thus identifying the patients who might be at risk of excessive or insufficient immunosuppression.
Individualized dosing and monitoring of blood concentrations should definitely be employed with CsA, and possibly also with steroids. As an alternative to complete steroid withdrawal, individualized dosing based on drug-exposure monitoring might help in avoiding the adverse effects. Early screening and treatment of subclinical immunoactivation is beneficial, as it improves the prospects of good long-term graft function.
Abstract:
Infertility treatments are relatively easily available in most Western countries today, but the psychological consequences of these high-tech treatments have scarcely been addressed. The purpose of this controlled longitudinal study was to explore the early environment of the infant born after assisted reproductive treatment (ART). We focused on the parents' mental well-being, marital relations, and experience of parenting. In addition, we assessed parent–child interaction and parents' mental representations of their child after long-standing infertility and several unsuccessful ART attempts. The subjects were infertile couples who achieved a singleton pregnancy by in vitro fertilization (IVF) or intracytoplasmic sperm injection (ICSI). The control group comprised spontaneously conceiving couples with singleton pregnancies. ART women showed fewer depressive symptoms than controls during pregnancy and after delivery, but the difference vanished by the end of the child's first year. ART men consistently had lower levels of anxiety symptoms, sleeping difficulties, and social dysfunction than control men. Control women experienced a decrease in dyadic consensus during the child's first year, which did not happen among ART women. After the child was born, ART men reported a higher level of sexual affection than control men. Psychic symptoms and stressful life events were differently related to marital relations in the ART and control groups. The parenting experiences of ART mothers were in general at a higher level than those of controls, and they changed in a positive direction during the child's first year. Fathering experiences were at the same level in both groups, and they changed positively in both groups by the end of the child's first year. The parenting experiences of ART mothers and fathers were more resilient to certain child-related stressors than those of the control group. Both mothers and fathers with long-term infertility showed more sensitive behaviour with their child at toddler age than in infancy. Correspondingly, the children's cooperation increased. Mothers often mentioned a fear of miscarriage and difficulty in creating representations of the child during pregnancy. Descriptions of the infants were mainly rich, vivid, and loaded with positive features. In conclusion, ART parents in general seem to adapt well to the transition to parenthood. Former infertility and ART do not seem to constitute a risk to parents' mental health, marital relations, or experience of parenting. Even long-standing infertility with several unsuccessful treatment attempts did not create a risk as regards parenting behaviour or parents' mental representations of their child. In this group, however, women were found to have a fear of losing the child and difficulty in creating representations of the child during pregnancy, which in some cases may indicate a need for psychosocial support. Even though our results are encouraging, infertility and infertility treatments are generally considered a stressful experience. It is a challenge for health authorities to recognize those couples who need professional help to overcome the distressing experiences of infertility and ART.
Abstract:
Cyclosporine is an immunosuppressant drug with a narrow therapeutic index and large variability in pharmacokinetics. To improve cyclosporine dose individualization in children, we used population pharmacokinetic modeling to study the effects of developmental, clinical, and genetic factors on cyclosporine pharmacokinetics in altogether 176 subjects (age range: 0.36–20.2 years) before and up to 16 years after renal transplantation. Pre-transplantation test doses of cyclosporine were given intravenously (3 mg/kg) and orally (10 mg/kg), on separate occasions, followed by blood sampling for 24 hours (n=175). After transplantation, in a total of 137 patients, cyclosporine concentration was quantified at trough, two hours post-dose, or with dose-interval curves. One hundred and four of the studied patients were genotyped for 17 putatively functionally significant sequence variations in the ABCB1, SLCO1B1, ABCC2, CYP3A4, CYP3A5, and NR1I2 genes. Pharmacokinetic modeling was performed with the nonlinear mixed effects modeling computer program, NONMEM. A 3-compartment population pharmacokinetic model with first-order absorption without lag-time was used to describe the data. The most important covariate affecting systemic clearance and distribution volume was allometrically scaled body weight, i.e. (body weight)^(3/4) for clearance and absolute body weight for volume of distribution. The clearance adjusted by absolute body weight declined with age, and pre-pubertal children (< 8 years) had an approximately 25% higher clearance/body weight (L/h/kg) than did older children. Adjustment of clearance for allometric body weight removed its relationship to age after the first year of life. This finding is consistent with a gradual reduction in relative liver size towards adult values, and a relatively constant CYP3A content in the liver from about 6–12 months of age to adulthood. The other significant covariates affecting cyclosporine clearance and volume of distribution were hematocrit, plasma cholesterol, and serum creatinine, explaining up to 20%–30% of inter-individual differences before transplantation. After transplantation, their predictive role was smaller, as the variations in hematocrit, plasma cholesterol, and serum creatinine were also smaller. Before transplantation, no clinical or demographic covariates were found to affect oral bioavailability, and no systematic age-related changes in oral bioavailability were observed. After transplantation, older children receiving cyclosporine twice daily as the gelatine capsule microemulsion formulation had an approximately 1.25–1.3 times higher bioavailability than did the younger children receiving the liquid microemulsion formulation thrice daily. Moreover, cyclosporine oral bioavailability increased over 1.5-fold in the first month after transplantation, returning thereafter gradually to its initial value in 1–1.5 years. The largest cyclosporine doses were administered in the first 3–6 months after transplantation, and thereafter the single doses of cyclosporine were often smaller than 3 mg/kg. Thus, the results suggest that cyclosporine displays dose-dependent, saturable pre-systemic metabolism even at low single doses, whereas complete saturation of CYP3A4 and MDR1 (P-glycoprotein) renders cyclosporine pharmacokinetics dose-linear at higher doses. No significant associations were found between genetic polymorphisms and cyclosporine pharmacokinetics before transplantation in the whole population for which genetic data were available (n=104).
However, in children older than eight years (n=22), heterozygous and homozygous carriers of the ABCB1 c.2677T or c.1236T alleles had approximately 1.3 times or 1.6 times higher oral bioavailability, respectively, than did non-carriers. After transplantation, none of the ABCB1 SNPs or any other SNPs were found to be associated with cyclosporine clearance or oral bioavailability in the whole population, in the patients older than eight years, or in the patients younger than eight years. In the whole population, however, in those patients carrying the NR1I2 g.-25385C–g.-24381A–g.-205_-200GAGAAG–g.7635G–g.8055C haplotype, the bioavailability of cyclosporine was about one tenth lower, per allele, than in non-carriers. This effect was significant also in the subgroup of patients older than eight years. Furthermore, in patients carrying the NR1I2 g.-25385C–g.-24381A–g.-205_-200GAGAAG–g.7635G–g.8055T haplotype, the bioavailability was almost one fifth higher, per allele, than in non-carriers. It may be possible to improve the individualization of cyclosporine dosing in children by accounting for the effects of developmental factors (body weight, liver size), time after transplantation, and cyclosporine dosing frequency/formulation. Further studies are required on the predictive value of genotyping for individualization of cyclosporine dosing in children.
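The allometric covariate model described above can be made concrete with a small sketch. This illustrates the general scaling principle only; the reference values are placeholders, not estimates from this study:

    # Allometric scaling: clearance scales with (weight)^(3/4), volume with weight.
    REF_WEIGHT = 70.0  # kg, conventional adult reference

    def scaled_clearance(cl_ref, weight_kg):
        """Clearance for a given body weight, from a 70-kg reference value."""
        return cl_ref * (weight_kg / REF_WEIGHT) ** 0.75

    def scaled_volume(v_ref, weight_kg):
        """Volume of distribution scales linearly with body weight."""
        return v_ref * (weight_kg / REF_WEIGHT)

    # For a 14-kg child, (14/70)**0.75 is about 0.30, so clearance per kilogram
    # is about 0.30 * (70/14), i.e. roughly 1.5 times the adult per-kg value --
    # consistent with the higher weight-based clearance seen in young children.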
Abstract:
Airway inflammation is a key feature of bronchial asthma, and according to international guidelines the gold standard of asthma management is anti-inflammatory treatment. Until now, only conventional procedures (i.e., symptoms, use of rescue medication, PEF variability, and lung function tests) have been used both to diagnose asthma and to evaluate the results of treatment with anti-inflammatory drugs. New methods for evaluating the degree of airway inflammation are required. Nitric oxide (NO) is a gas produced in the airways of healthy subjects and in increased amounts in asthmatic airways, and it can be measured from exhaled air. Fractional exhaled NO (FENO) is increased in asthma, and the highest concentrations are measured in asthmatic patients not treated with inhaled corticosteroids (ICS). Steroid-treated patients with asthma had levels of FENO similar to those of healthy controls. Atopic asthmatics had higher levels of FENO than did nonatopic asthmatics, indicating that the level of atopy affects the FENO level. Associations between FENO and bronchial hyperresponsiveness (BHR) occur in asthma. The present study demonstrated that measurement of FENO has good reproducibility and that FENO variability is reasonable both short- and long-term, in healthy subjects as well as in patients with respiratory symptoms or asthma. We established an upper normal limit of 12 ppb for healthy subjects, calculated from two different healthy study populations. We showed that patients with respiratory symptoms who did not fulfil the diagnostic criteria of asthma had FENO values significantly higher than those of healthy subjects, but significantly lower than those of asthma patients. These findings suggest that BHR to histamine is a sensitive indicator of the effect of ICS and a valuable tool for adjustment of corticosteroid treatment in mild asthma. The findings further suggest that intermittent treatment periods of a few weeks' duration are insufficient to provide long-term control of BHR in patients with mild persistent asthma. Moreover, during treatment with ICS, changes in BHR and changes in FENO were associated. The FENO level was associated with BHR measured by a direct (histamine challenge) or an indirect method (exercise challenge) in steroid-naïve symptomatic, non-smoking asthmatics. Although these associations could be found only in atopic patients, the FENO level was also increased in nonatopic asthma. It can thus be concluded that assessment of airway inflammation by measuring FENO can be useful for clinical purposes. The methodology of FENO measurement is now validated. Especially in those patients with respiratory symptoms who do not fulfil the diagnostic criteria of asthma, FENO measurement can aid in treatment decisions. Serial measurement of FENO during treatment with ICS can be a complementary or an alternative method for evaluating patients with asthma.
Abstract:
Accurate and stable time series of geodetic parameters can be used to help understand the dynamic Earth and its response to global change. The Global Positioning System, GPS, has proven invaluable in modern geodynamic studies. In Fennoscandia the first GPS networks were set up in 1993. These networks form the basis of the national reference frames in the area, but they also provide long and important time series for crustal deformation studies. These time series can be used, for example, to better constrain the ice history of the last ice age and the Earth's structure via existing glacial isostatic adjustment models. To improve the accuracy and stability of the GPS time series, possible nuisance parameters and error sources need to be minimized. We have analysed GPS time series to study two phenomena: first, the refraction of the GPS signal in the neutral atmosphere, and second, the loading of the crust by environmental factors, namely the non-tidal Baltic Sea, the atmospheric load, and varying continental water reservoirs. We studied the atmospheric effects on the GPS time series by comparing the standard method to slant delays derived from a regional numerical weather model, and we present a method for correcting the atmospheric delays at the observational level. The results show that both standard atmosphere modelling and atmospheric delays derived from a numerical weather model by ray-tracing provide a stable solution. The advantage of the latter is that the number of unknowns used in the computation decreases; thus, the computation may become faster and more robust. The computation can also be done with any processing software that allows the atmospheric correction to be turned off. The crustal deformation due to loading was computed by convolving Green's functions with surface load data, that is to say, global hydrology models, global numerical weather models, and a local model for the Baltic Sea. The result was that the loading effects can be seen in the GPS coordinate time series. Subtracting the computed deformation from the vertical GPS coordinate time series reduces the scatter of the time series; however, the long-term trends are not influenced. We show that global hydrology models and the local sea surface can explain up to 30% of the variation in the GPS time series. On the other hand, the atmospheric loading admittance in the GPS time series is low, and the different hydrological surface load models could not be validated in the present study. In order to be used for GPS corrections in the future, both atmospheric loading and hydrological models need further analysis and improvement.
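The loading computation described above is, in essence, a weighted sum of load mass over the Earth's surface. A minimal sketch under simplifying assumptions (a gridded load, a user-supplied Green's function; the tabulated Green's function values and the load grids used in the study are not reproduced here):

    import numpy as np

    def angular_distance(lat1, lon1, lat2, lon2):
        """Great-circle angular distance in radians (inputs in degrees)."""
        p1, p2 = np.radians(lat1), np.radians(lat2)
        dlon = np.radians(np.subtract(lon2, lon1))
        cosd = np.sin(p1) * np.sin(p2) + np.cos(p1) * np.cos(p2) * np.cos(dlon)
        return np.arccos(np.clip(cosd, -1.0, 1.0))

    def vertical_loading(site_lat, site_lon, cell_lat, cell_lon, load, area, green_u):
        """Vertical deformation at a site as a sum over all load cells.
        load: surface density per cell (kg/m^2); area: cell areas (m^2);
        green_u: callable giving vertical displacement per unit point mass
        (m/kg) at angular distance psi (radians)."""
        psi = angular_distance(site_lat, site_lon, cell_lat, cell_lon)
        return np.sum(green_u(psi) * load * area)

The same sum, evaluated per epoch of the load model, yields the deformation time series that is subtracted from the vertical GPS coordinates.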
Abstract:
To a large extent, lakes can be described with a one-dimensional approach, as their main features can be characterized by the vertical temperature profile of the water. The development of the profile during the year follows the seasonal climate variations. Depending on conditions, lakes become stratified during the warm summer. In autumn the water cools, overturn occurs, and an ice cover forms. Typically, water is inversely stratified under the ice, and another overturn occurs in spring after the ice has melted. Features of this circulation have been used in studies to distinguish between lakes in different areas, as the basis for observation systems, and even as climate indicators. Numerical models can be used to calculate the temperature in a lake on the basis of the meteorological input at the surface. The simplest form is to solve the surface temperature only. The depth of the lake affects heat transfer, together with other morphological features, the shape and size of the lake. The surrounding landscape also affects the formation of the meteorological fields over the lake and the energy input. For small lakes, shading by the shores affects conditions both over the lake and within the water body, imposing limitations on the one-dimensional approach. A two-layer model gives an approximation of the basic stratification in the lake, while a turbulence model can simulate the vertical temperature profile in more detail. If the shape of the temperature profile is very abrupt, vertical transfer is hindered, with many important consequences for lake biology. The one-dimensional modelling approach was successfully studied by comparing a one-layer model, a two-layer model, and a turbulence model. The turbulence model was applied to lakes of different sizes, shapes, and locations. Lake models need data from the lakes for model adjustment. The use of meteorological input data on different scales was analysed, ranging from momentary turbulent changes over the lake to synoptic data at three-hour intervals. Data covering about the past 100 years were used on the mesoscale, at a range of about 100 km, and climate change scenarios were used for future changes. Increasing air temperature typically increases the water temperature in the epilimnion and decreases ice cover. Lake ice data were used for modelling different kinds of lakes and were also analyzed statistically in a global context. The results were also compared with the results of a hydrological watershed model and with data from very small lakes on seasonal development.
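To make the two-layer idea concrete, here is a minimal sketch of a two-layer heat balance with a fixed thermocline depth; the layer thicknesses and exchange coefficient are illustrative assumptions, not values from the study:

    RHO_CP = 1000.0 * 4186.0  # volumetric heat capacity of water, J/m^3/K

    def step_two_layer(t_epi, t_hypo, q_surface, dt,
                       h_epi=5.0, h_hypo=15.0, k_exchange=2.0):
        """Advance epilimnion/hypolimnion temperatures (deg C) by one time step.
        q_surface: net surface heat flux into the epilimnion (W/m^2);
        k_exchange: bulk heat exchange across the thermocline (W/m^2/K);
        h_epi, h_hypo: layer thicknesses (m); dt: time step (s)."""
        q_mix = k_exchange * (t_epi - t_hypo)   # flux across the thermocline
        t_epi += dt * (q_surface - q_mix) / (RHO_CP * h_epi)
        t_hypo += dt * q_mix / (RHO_CP * h_hypo)
        return t_epi, t_hypo

A sharp profile, i.e. a large temperature difference combined with a small exchange coefficient, keeps the hypolimnion isolated, which is the biologically important situation noted above.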
Abstract:
The motivation behind the fusion of Intrusion Detection Systems was the realization that, with increasing traffic and the increasing complexity of attacks, none of the present-day stand-alone Intrusion Detection Systems can meet the demand for a very high detection rate together with an extremely low false positive rate. Multi-sensor fusion can be used to meet these requirements by refining the combined response of different Intrusion Detection Systems. In this paper, we show a design technique for sensor fusion that best utilizes the useful responses from multiple sensors by an appropriate adjustment of the fusion threshold. The threshold is generally chosen according to past experience or by an expert system. In this paper, we show that choosing the threshold bounds according to the Chebyshev inequality performs better. This approach also helps to solve the problem of scalability and has the advantage of failsafe capability. The paper theoretically models the fusion of Intrusion Detection Systems for the purpose of proving the improvement in performance, supplemented with empirical evaluation. The combination of complementary sensors is shown to detect more attacks than the individual components. Since the individual sensors chosen detect sufficiently different attacks, their results can be merged for improved performance. The combination is done in different ways: (i) taking all the alarms from each system and avoiding duplications, (ii) taking alarms from each system by fixing threshold bounds, and (iii) rule-based fusion with a priori knowledge of the individual sensor performance. A number of evaluation metrics are used, and the results indicate an overall enhancement in the performance of the combined detector using sensor fusion incorporating the threshold bounds, and significantly better performance using simple rule-based fusion.
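To illustrate the threshold-bound idea (a sketch of the general principle only; the mean, variance, and false-alarm bound below are assumptions, not the paper's values): for any score distribution with mean mu and standard deviation sigma, Chebyshev's inequality gives P(|X - mu| >= k*sigma) <= 1/k^2, so setting the alarm threshold at mu + sigma/sqrt(alpha) bounds the false-alarm rate on attack-free traffic by alpha, whatever the distribution.

    import math

    def chebyshev_threshold(mu, sigma, alpha):
        """Alarm threshold guaranteeing a false-alarm rate <= alpha on
        attack-free traffic with score mean mu and std sigma, by Chebyshev:
        P(X - mu >= k*sigma) <= P(|X - mu| >= k*sigma) <= 1/k**2 = alpha."""
        k = 1.0 / math.sqrt(alpha)
        return mu + k * sigma

    def fused_score(scores, weights=None):
        """Weighted-sum fusion of per-sensor alert scores (equal weights by default)."""
        if weights is None:
            weights = [1.0 / len(scores)] * len(scores)
        return sum(w * s for w, s in zip(weights, scores))

    # Raise an alarm when the fused score exceeds the distribution-free bound,
    # e.g. fused_score(s) >= chebyshev_threshold(mu, sigma, alpha=0.01).

Because the bound holds for any distribution, the threshold does not depend on distributional assumptions about attack-free traffic, only on its estimated mean and variance.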
Abstract:
The aim of this study was to estimate the development of fertility in North-Central Namibia, former Ovamboland, from 1960 to 2001. Special attention was given to the onset of fertility decline and to the impact of the HIV epidemic on fertility. An additional aim was to introduce parish registers as a source of data for fertility research in Africa. The data consisted of parish registers from Evangelical Lutheran congregations, the 1991 and 2001 Population and Housing Censuses, the 1992 and 2000 Namibia Demographic and Health Surveys, and the HIV sentinel surveillances of 1992-2004. Both period and cohort fertility were analysed. The P/F ratio method was used when analysing census data. The impact of HIV infection on fertility was estimated indirectly by comparing the fertility histories of women who died at an age of less than 50 years with the fertility of other women. The impact of the HIV epidemic on fertility was assessed both among infected women and in the general population. Fertility in the study population began to decline in 1980. The decline was rapid during the 1980s, levelled off in the early 1990s at the end of the war of independence, and then continued until the end of the study period. According to the parish registers, total fertility was 6.4 in the 1960s and 6.5 in the 1970s, and declined to 5.1 in the 1980s and 4.2 in the 1990s. Adjusting these total fertility rates to correspond to the levels of fertility based on data from the 1991 and 2001 censuses resulted in total fertility declining from 7.6 in 1960-79 to 6.0 in 1980-89, and to 4.9 in 1990-99. The decline was associated with increased age at first marriage, declining marital fertility, and increasing premarital fertility. Fertility among adolescents increased, whereas the fertility of women in all other age groups declined. During the 1980s, the war of independence contributed to declining fertility through spousal separation and delayed marriages. Contraception has been employed in the study region since the 1980s, but in the early 1990s use of contraceptives was still so limited that fertility was higher in North-Central Namibia than in other regions of the country. In the 1990s, fertility decline was largely a result of the increased prevalence of contraception. HIV prevalence among pregnant women increased from 4% in 1992 to 25% in 2001. In 2001, total fertility among HIV-infected women (3.7) was lower than that among other women (4.8), resulting in a total fertility of 4.4 in the general population in 2001. The HIV epidemic explained more than a quarter of the decline in total fertility at the population level during most of the 1990s. The HIV epidemic also reduced the number of children born by reducing the number of potential mothers. In the future, HIV will have an extensive influence on both the size and the age structure of the Namibian population. Although HIV influences demographic development through both fertility and mortality, the effect through changes in fertility will be smaller than the effect through mortality. In the study region, as in some other regions of southern Africa, a new type of demographic transition is under way, one in which population growth stagnates or even reverses because of the combined effects of declining fertility and increasing mortality, both of which are consequences of the HIV pandemic.
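The P/F ratio method mentioned above compares lifetime (cohort) fertility P, i.e. mean children ever born, with cumulated period fertility F by age group. A minimal sketch of the comparison, assuming 5-year age groups and cumulating rates to each group's upper bound (a simplification of Brass's interpolation coefficients, which the study may have applied in full):

    def pf_ratios(asfr, mean_parity):
        """asfr: age-specific fertility rates (births per woman-year) for
        5-year groups 15-19 ... 45-49; mean_parity: mean children ever born
        for the same groups. Returns P(i)/F(i); ratios above 1 in the younger
        groups suggest under-recorded period fertility or a recent decline."""
        cumulated, F = 0.0, []
        for rate in asfr:
            cumulated += 5.0 * rate   # each group spans 5 years of exposure
            F.append(cumulated)
        return [p / f if f > 0 else float("nan")
                for p, f in zip(mean_parity, F)]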