20 results for 10131027 TM-43


Abstract:

Type 1 diabetes (T1D) is a common, multifactorial disease with strong familial clustering. In Finland, the incidence of T1D among children aged 14 years or under is the highest in the world, and it has increased by approximately 2.4% per year. Although most new T1D cases are sporadic, first-degree relatives are at an increased risk of developing the same disease. This study was designed to examine the familial aggregation of T1D and of one of its serious complications, diabetic nephropathy (DN). More specifically, the study aimed (1) to determine the concordance rates of T1D in monozygotic (MZ) and dizygotic (DZ) twins, to estimate the relative contributions of genetic and environmental factors to the variability in liability to T1D, and to study the age at onset of diabetes in twins; (2) to obtain long-term empirical estimates of the risk of T1D among siblings of T1D patients and of the factors related to this risk, especially the effect of the proband's age at onset of diabetes and of birth cohort; (3) to establish whether DN aggregates in a Finnish population-based cohort of families with multiple cases of T1D, to assess the magnitude of any aggregation, and in particular to find out whether the risk of DN in siblings varies with the severity of DN in the proband and/or the age at onset of T1D; and (4) to assess the recurrence risk of T1D in the offspring of a Finnish population-based cohort of patients with childhood-onset T1D, to investigate potential sex-related effects in the transmission of T1D from diabetic parents to their offspring, and to study whether there is a temporal trend in the incidence.

The study population comprised the Finnish Young Twin Cohort (22,650 twin pairs), a population-based cohort of patients with T1D diagnosed at the age of 17 years or earlier between 1965 and 1979 (n=5,144), and all their siblings (n=10,168) and offspring (n=5,291). A polygenic, multifactorial liability model was fitted to the twin data. Kaplan-Meier analyses were used to estimate the cumulative incidence of T1D and DN, Cox's proportional hazards models were fitted to the data, and Poisson regression was used to evaluate temporal trends in incidence. Standardized incidence ratios (SIRs) comparing the first-degree relatives of T1D patients with the background population were determined.

The twin study showed that the vast majority of affected MZ twin pairs remained discordant. Pairwise concordance for T1D was 27.3% in MZ and 3.8% in DZ twins; the probandwise concordance estimates were 42.9% and 7.4%, respectively. The model with additive genetic and individual environmental effects was the best-fitting liability model for T1D, with 88% of the phenotypic variance due to genetic factors. The second paper showed that the 50-year cumulative incidence of T1D in the siblings of diabetic probands was 6.9%. A young age at diagnosis in the proband considerably increased the risk: if the proband was diagnosed at the age of 0-4, 5-9, 10-14, or 15 or more years, the corresponding 40-year cumulative risks in siblings were 13.2%, 7.8%, 4.7%, and 3.4%, respectively. The cumulative incidence increased with increasing birth year; however, the SIR among children aged 14 years or under remained approximately 12 throughout the follow-up. The third paper showed that diabetic siblings of probands with nephropathy had a 2.3 times higher risk of DN than siblings of probands free of nephropathy, and the presence of end-stage renal disease (ESRD) in the proband increased the risk three-fold. Being diagnosed with diabetes during puberty (10-14 years) or a few years before (5-9 years) increased the susceptibility of siblings to DN. The fourth paper revealed that 7.8% of the offspring of male probands were affected by the age of 20, compared with 5.3% of the offspring of female probands; offspring of fathers with T1D thus had a 1.7 times greater risk of developing T1D than offspring of mothers with T1D. A young age at onset of diabetes in fathers greatly increased the risk of T1D in the offspring, but no such pattern was seen in the offspring of diabetic mothers. The SIR among offspring aged 14 years or under remained fairly constant throughout the follow-up, at approximately 10.

The present study has provided new knowledge on the recurrence risk of T1D in first-degree relatives and on the factors modifying that risk. The twin data demonstrated a high heritability of liability to T1D, yet the vast majority of affected MZ twin pairs remained discordant. The study confirmed the strong impact of a young age at onset in the proband on the risk of T1D in first-degree relatives; the only exception was the absence of this pattern in the offspring of T1D mothers. Both the sibling and the offspring recurrence-risk studies revealed dynamic changes in the cumulative incidence of T1D in first-degree relatives, whereas the SIRs seemed to remain fairly constant. The findings suggest that the penetrance of the susceptibility genes for T1D may be low and strongly influenced by environmental factors. Familial aggregation of DN was confirmed for the first time in a population-based study: although the majority of sibling pairs with T1D were discordant for DN, its presence in one sibling doubled, and the presence of ESRD tripled, the risk of DN in the other diabetic sibling. An encouraging observation was that although the proportion of children diagnosed with T1D at the age of 4 years or under is increasing, they seem to have a decreased risk of DN, or at least a delayed onset.
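The relation between the two concordance measures quoted above can be made concrete. Below is a minimal sketch in Python, assuming complete ascertainment (every affected twin counted as a proband); the function name is illustrative, not from the study.

# Pairwise concordance: C / (C + D), with C concordant and D discordant
# affected pairs. Probandwise concordance counts each affected co-twin:
# 2C / (2C + D), which equals 2p / (1 + p) for pairwise concordance p.
def probandwise(pairwise):
    return 2 * pairwise / (1 + pairwise)

print(round(probandwise(0.273), 3))  # MZ twins: 0.429, the reported 42.9%
print(round(probandwise(0.038), 3))  # DZ twins: 0.073, the reported ~7.4%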

Abstract:

Prescribing for older patients is challenging. The prevalence of diseases increases with advancing age and leads to extensive drug use. Impairments in cognitive, sensory, social and physical functioning, multimorbidity and comorbidities, as well as age-related changes in pharmacokinetics and pharmacodynamics, all add to the complexity of prescribing. This study is a cross-sectional assessment of all long-term residents aged ≥ 65 years in all nursing homes in Helsinki, Finland. The residents' health status was assessed, and data on their demographic factors, health and medications were collected from their medical records in February 2003. The study addresses several essential issues in prescribing for older people: psychotropic drugs (Paper I), laxatives (Paper II), vitamin D and calcium supplements (Paper III), potentially inappropriate drugs for older adults (PIDs) and drug-drug interactions (DDIs) (Paper IV), as well as prescribing in public versus private nursing homes. A resident was classified as a user of a medication if his or her medication record indicated a regular dosing schedule for it; others were classified as non-users. The Mini Nutritional Assessment (MNA) was used to assess residents' nutritional status, the Beers 2003 criteria to assess the use of PIDs, and the Swedish, Finnish, INteraction X-referencing database (SFINX) to evaluate exposure to DDIs.

Of all nursing home residents in Helsinki, 82% (n=1987) participated in Studies I, II and IV, and 87% (n=2114) in Study III. The residents' mean age was 84 years, 81% were female, and 70% had been diagnosed with dementia. The mean number of drugs was 7.9 per resident; 40% of the residents used ≥ 9 drugs per day and were thus exposed to polypharmacy. Eighty percent of the residents received psychotropics: 43% received antipsychotics and 45% antidepressants, while anxiolytics were prescribed to 26% and hypnotics to 28%. Of the residents diagnosed with dementia, 11% received antidementia drugs. Fifty-five percent of the residents used laxatives regularly. In multivariate analysis, the factors associated with regular laxative use were advanced age, immobility, poor nutritional status, chewing problems, Parkinson's disease, and a high number of drugs, whereas eating snacks between meals was associated with a lower risk of laxative use. Of all participants, 33% received vitamin D supplementation, 28% calcium supplementation, and 20% both. The dosage of vitamin D was rather low: 21% received 400 IU (10 µg) or more, and only 4% received 800 IU (20 µg) or more. In multivariate analysis, residents who received vitamin D supplementation had better nutritional status, ate snacks between meals, suffered no constipation, and received regular weight monitoring. Residents receiving PIDs (34% of all residents) more often used psychotropic medication and were more often exposed to polypharmacy than residents receiving no PIDs; they were also less often diagnosed with dementia. The three most prevalent PIDs were short-acting benzodiazepines in greater than recommended doses, hydroxyzine, and nitrofurantoin; these three drugs accounted for nearly 77% of all PID use. Of all residents, fewer than 5% were exposed to a clinically significant DDI. The most common DDIs involved potassium-sparing diuretics, carbamazepine, and codeine. Residents exposed to potential DDIs were younger, had more often suffered a previous stroke, more often used psychotropics, and were more often exposed to PIDs and polypharmacy than residents not exposed to DDIs. Residents in private nursing homes were less often exposed to polypharmacy than residents in public nursing homes.

Long-term residents of nursing homes in Helsinki use, on average, nearly eight drugs daily. The use of psychotropic drugs in this study was notably more common than in international studies, while the prevalence of laxative use was similar to that reported internationally. Despite the known benefit of, and recommendations for, vitamin D supplementation for elderly people residing mostly indoors, the proportion of nursing home residents receiving vitamin D and calcium was surprisingly low. The use of PIDs was common among nursing home residents, and PIDs increased the likelihood of DDIs; however, DDIs did not appear to be a major concern in this population. Monitoring PIDs and potential drug interactions could improve the quality of prescribing.
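As an illustration of the screening logic described above, the following sketch flags a resident for polypharmacy using the ≥ 9 drugs/day threshold applied in the study, and checks the drug list against the three most prevalent PIDs named in the abstract. The record format and function name are hypothetical, not from the study.

# Hypothetical resident record: a list of regularly dosed drugs.
POLYPHARMACY_THRESHOLD = 9  # >= 9 regular drugs/day, as in the study

# The three drugs that accounted for ~77% of all PID use in this cohort.
TOP_PIDS = {"hydroxyzine", "nitrofurantoin", "short-acting benzodiazepine (high dose)"}

def screen(drugs):
    return {
        "polypharmacy": len(drugs) >= POLYPHARMACY_THRESHOLD,
        "top_pids": sorted(TOP_PIDS.intersection(d.lower() for d in drugs)),
    }

print(screen(["Hydroxyzine", "warfarin", "furosemide"]))
# {'polypharmacy': False, 'top_pids': ['hydroxyzine']}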

Abstract:

Hereditary leiomyomatosis and renal cell cancer (HLRCC) is a recently characterized cancer syndrome that predisposes to cutaneous and uterine leiomyomas as well as to renal cell carcinoma (RCC). Uterine leiomyosarcoma (ULMS) has also been observed in certain Finnish HLRCC families. The predisposing gene for this syndrome, fumarate hydratase (FH), was identified in 2002. The well-known function of FH is in the tricarboxylic acid cycle (TCAC) in cellular energy metabolism. As FH is a novel cancer gene, the role of FH mutations in tumours is in general unknown, and the mechanisms through which defective FH is associated with tumourigenesis are likewise unclear. Loss of the wild-type allele has been observed in virtually all tumours of HLRCC patients, and FH enzyme activity is either totally lost or markedly reduced in the tissues of mutation carriers; FH is therefore assumed to function as a tumour suppressor. Mutations in genes encoding subunits of another TCAC enzyme, succinate dehydrogenase (SDH), have also been reported recently in tumours: mutations in the SDHB, SDHC and SDHD genes predispose to paraganglioma and pheochromocytoma.

In the present study, mutations in the SDHB gene were observed to predispose to RCC; this was the first time that SDHB mutations had been detected in extra-paraganglial tumours. Two different SDHB mutations were observed in two unrelated families. In the first family, the index patient was diagnosed with RCC at the age of 24 years; his mother, who had a paraganglioma (PGL) of the heart, and his maternal uncle, who had lung cancer, were both carriers of the mutation. The RCC of the index patient and the PGL of his mother showed loss of heterozygosity (LOH). In the other family, an SDHB mutation was detected in two siblings diagnosed with RCC at the ages of 24 and 26 years; both siblings also had PGLs, and all these tumours showed LOH. We therefore concluded that in certain families mutations in SDHB predispose to RCC as well.

Several tumour types were analysed for FH mutations to define the role of FH mutations in them, and patients with a putative cancer phenotype were analysed to identify new HLRCC families. Three FH variants were detected, of which two were novel. One of the variants was observed in a patient diagnosed with ULMS at the age of 41 years; however, LOH was not detected in the tumour tissue. The enzyme activity of the mutated protein was clearly reduced, being 43% of that of the normal protein. Together with the results from an earlier study, we calculated that the prevalence of FH mutations in Finnish non-syndromic ULMS is around 2.4%; FH mutations therefore seem to play a minor role in the pathogenesis of non-syndromic ULMS. Two other germline variants were detected in a novel tumour type, ovarian mucinous cystadenoma. However, tumour tissues of these patients were not available for LOH studies, and the LOH status thus remained unclear. It is therefore possible that FH mutations also predispose to ovarian tumours, but further studies are needed to verify this result.

A novel variant form of the FH gene (FHv) was identified and characterized in more detail. FHv contains an alternative first exon (1b), which appeared to function as a 5' UTR sequence; in vitro, translation of FHv is initiated from exons two and three. FHv localizes to both the cytosol and the nucleus, in contrast to the mitochondrial localization of FH, and is expressed at low levels in all human tissues. Interestingly, its expression was induced after heat shock treatment and in chronic hypoxia. FHv might therefore have a role, for example, in adaptation to unfavourable growth conditions, but this remains to be elucidated.

Abstract:

Dietary habits have changed during the past decades towards increasing consumption of processed foods, which has notably increased not only total dietary phosphorus (P) intake but also the intake of P from phosphate additives. While calcium (Ca) intake in many Western countries remains below recommended levels (800 mg/d), the usual daily P intake in a typical Western diet exceeds the dietary guideline (600 mg/d) two- to three-fold. The effects of high P intake in healthy humans have seldom been investigated. In this thesis, healthy 20- to 43-year-old women were studied. The first controlled study (n = 14) examined the effects of graded P doses, and a cross-sectional study (n = 147) the associations of habitual P intake with Ca and bone metabolism; the same cross-sectional study also investigated whether dietary P originating from natural sources differs from P originating from phosphate additives. The second controlled study (n = 12) investigated whether increasing Ca intake could reduce the effects of a high P intake. The associations of habitual dietary calcium-to-phosphorus ratios (Ca:P ratio) with Ca and bone metabolism were determined in the same cross-sectional design (n = 147).

In the first controlled study, oral P doses (495, 745, 1245 and 1995 mg/d) combined with a low Ca intake (250 mg/d) increased serum parathyroid hormone (S-PTH) concentration in a dose-dependent manner; in addition, the highest P dose decreased serum ionized calcium (S-iCa) concentration, decreased bone formation and increased bone resorption. In the second controlled study, with a dietary P intake of 1850 mg/d, increasing Ca intake from 480 mg/d to 1080 mg/d and further to 1680 mg/d decreased S-PTH concentration, increased S-iCa concentration and decreased bone resorption dose-dependently. However, not even the highest Ca intake could counteract the effect of high dietary P on bone formation, as indicated by unchanged bone formation activity. In the cross-sectional studies, a higher habitual dietary P intake (>1650 mg/d) was associated with lower S-iCa and higher S-PTH concentrations, and the consumption of foods containing phosphate additives was associated with a higher S-PTH concentration. Moreover, habitually low dietary Ca:P ratios (≤0.50, molar ratio) were associated with higher S-PTH concentrations and higher 24-h urinary Ca excretion, suggesting that low dietary Ca:P ratios may interfere with Ca homeostasis and increase bone resorption.

In summary, excessive dietary P intake in healthy Finnish women appears detrimental to Ca and bone metabolism, especially when dietary Ca intake is low. The results indicate that increasing dietary Ca intake to the recommended level can diminish, but not totally prevent, the negative effects of high P intake. The findings also imply that phosphate additives may be more harmful than natural P. Thus, reducing an excessively high dietary P intake would benefit healthy individuals as well.
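The dietary Ca:P ratio above is a molar ratio, so the milligram intakes must be divided by the molar masses of the elements. A minimal worked example, using the guideline values quoted above (the 2.5-fold multiplier is an illustrative midpoint of the two- to three-fold range):

CA_MOLAR_MASS = 40.08  # g/mol
P_MOLAR_MASS = 30.97   # g/mol

def ca_p_molar_ratio(ca_mg_per_day, p_mg_per_day):
    return (ca_mg_per_day / CA_MOLAR_MASS) / (p_mg_per_day / P_MOLAR_MASS)

# Recommended Ca intake (800 mg/d) against a P intake of 2.5x the
# 600 mg/d guideline, i.e. 1500 mg/d:
print(round(ca_p_molar_ratio(800, 1500), 2))  # 0.41, below the 0.50 cut-off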

Abstract:

I examine the portrayal of Jesus as a friend of toll collectors and sinners in the Third Gospel. I aim at a comprehensive view of the Lukan sinner texts, combining questions of the origin and development of these texts with questions of Luke's theological message, of how the texts function as literature, and of the social-historical setting(s) behind them. Within New Testament scholarship, researchers on the historical Jesus mostly still hold that a special mission to toll collectors and sinners was central to Jesus' public activity. Within Lukan studies, M. Goulder, J. Kiilunen and D. Neale have claimed that this picture is due to Luke's theological vision and the liberties he took as an author; their view is disputed by other Lukan scholars. I discuss the methods scholars have used to isolate the typical language of Luke's alleged written sources, or to argue for source-free creation by Luke himself, and I claim that the analysis of Luke's language does not help us determine the origin of the Lukan pericopes. I examine the possibility of free creativity on Luke's part in the light of the invention technique used in ancient historiography: invention was an essential part of all ancient historical writing, and therefore Luke quite probably used it too. Possibly Luke had access to special traditions, but the nature of oral tradition does not allow reconstruction. I analyze Luke 5:1-11; 5:27-32; 7:36-50; 15:1-32; 18:9-14; 19:1-10; and 23:39-43. In most of these, some underlying special tradition is possible though far from certain. It becomes evident that Luke's reshaping was so thorough that the pericopes as they now stand are decidedly Lukan creations, as indicated by the characteristically Lukan storytelling style and the strongly unified Lukan theology of the pericopes. Luke's sinners and Pharisees do not fit the social-historical context of Jesus' day; the story-world is one of polarized right and wrong, and that Jesus is the Christ, the representative of God, is an intrinsic part of it. Luke wrote a theological drama inspired by tradition, persuading his audience to identify as (repenting) sinners. Luke's motive was that he saw the sinners in Jesus' company as forerunners of Gentile Christianity.

Abstract:

Despite much research on forest biodiversity in Fennoscandia, the exact mechanisms of species decline in dead-wood-dependent fungi are still poorly understood. In particular, there is only limited information on why certain fungal species have responded negatively to habitat loss and fragmentation while others have not. Understanding the mechanisms behind species declines is essential for designing ecologically effective, scientifically informed conservation measures and management practices that promote biodiversity in production forests. In this thesis I study the ecology of polypores and their responses to forest management, with a particular focus on why some species have declined more than others. The data comprise altogether 98,318 dead-wood objects with 43,085 observations of 174 fungal species; of these, 1,964 observations represent 58 red-listed species. The data were collected from 496 sites, including woodland key habitats, clear-cuts with retention trees, mature managed forests, and natural or natural-like forests in southern Finland and Russian Karelia. I show that the most relevant way of measuring resource availability can differ greatly between species seemingly sharing the same resources; it is thus critical to measure resource availability in a way that takes the ecological requirements of each species into account. The results show that connectivity at the local, landscape and regional scales is important especially for highly specialized species, many of which are also red-listed. Habitat loss and fragmentation affect not only species diversity but also the relative abundances of species and, consequently, species interactions and fungal successional pathways. Changes in species distributions and abundances are likely to affect the food chains in which wood-inhabiting fungi are involved, and thus the functioning of the whole forest ecosystem. The findings of my thesis highlight the importance of protecting well-connected, large and high-quality forest areas to maintain forest biodiversity. Small habitat patches scattered across the landscape are likely to contribute only marginally to the protection of red-listed species, especially if their habitat quality is not substantially higher than that of ordinary managed forest, as is the case with woodland key habitats. Key habitats might supplement the forest protection network if they were delineated as larger areas and if harvesting of individual trees within them were prohibited. Taking the landscape perspective into account in the design and development of conservation measures is critical in striving to halt the decline of forest biodiversity in an ecologically effective manner.

Abstract:

Staphylococcus aureus is one of the most important bacteria causing disease in humans, and methicillin-resistant S. aureus (MRSA) has become the most commonly identified antibiotic-resistant pathogen in many parts of the world. MRSA rates were stable for many years in the Nordic countries and the Netherlands, the low-prevalence countries of Europe, but in recent decades rates have increased there as well. MRSA is established as a major hospital pathogen, but it has also been found increasingly in long-term facilities (LTFs) and in communities among persons with no connection to the health-care setting. In Finland, the annual number of MRSA isolates reported to the National Infectious Disease Register (NIDR) has constantly increased, especially outside the Helsinki metropolitan area. Molecular typing has revealed numerous outbreak strains of MRSA, some of which have previously been associated with community acquisition.

In this work, data on MRSA cases notified to the NIDR and on MRSA strain types identified by pulsed-field gel electrophoresis (PFGE), multilocus sequence typing (MLST), and staphylococcal cassette chromosome mec (SCCmec) typing at the National Reference Laboratory (NRL) in Finland from 1997 to 2004 were analyzed. An increasing trend in MRSA incidence in Finland from 1997 to 2004 was shown. In addition, non-multidrug-resistant (NMDR) MRSA isolates, especially those resistant only to methicillin/oxacillin, showed an emerging trend. The predominant MRSA strains changed over time and place, but two internationally spread epidemic strains, FIN-16 and FIN-21, were related to the most recent increase; these strains were also one cause of the strikingly increasing number of invasive MRSA findings. A rise of MRSA strains with SCCmec type IV or V, i.e. possible community-acquired MRSA, was also detected.

With questionnaires, the diagnostic methods used for MRSA identification in Finnish microbiology laboratories and the numbers of MRSA screening specimens studied were reviewed. Surveys focusing on the MRSA situation in long-term facilities in 2001 and on the background information of MRSA-positive persons in 2001-2003 were also carried out. The rates of MRSA and the screening practices varied widely across geographic regions. Some of the NMDR MRSA strains may remain undetected in some laboratories because of the insufficient diagnostic techniques used. The increasing proportion of elderly people carrying MRSA suggests that MRSA is an emerging problem in Finnish long-term facilities. Among the patients, 50% of the specimens were taken on a clinical basis, 43% for screening after exposure to MRSA, 3% for screening because of hospital contact abroad, and 4% for other reasons.

In response to an outbreak of MRSA of a new genotype in a health-care ward and an associated nursing home of a small municipality in Northern Finland in autumn 2003, a point-prevalence survey was performed six months later. In the same study, the molecular epidemiology of the MRSA and methicillin-sensitive S. aureus (MSSA) strains was assessed, the results were compared with the national strain collection, and the difficulties of MRSA screening with low-level oxacillin-resistant isolates were examined. The original MRSA outbreak in the LTF, which consisted of isolates with a nationally new PFGE profile (FIN-22) and an internationally rare MLST type (ST-27), was contained. Another previously unrecognized MRSA strain was found on additional screening, possibly indicating that current routine MRSA screening methods may be insufficiently sensitive for strains with low-level oxacillin resistance. Most of the MSSA strains found were genotypically related to the epidemic MRSA strains, but only a few of them had acquired the SCCmec element, and all of those possessed the new SCCmec type V. In the second largest nursing home in Finland, the colonization of S. aureus and MRSA and the effect of screening sites and broth enrichment culture on the sensitivity of S. aureus detection were studied. Combining enrichment broth and perineal swabbing with swabbing of the nostrils and skin lesions may be an alternative to throat swabs in the nursing home setting, especially when residents are uncooperative.

Finally, to evaluate the phenotypic and genotypic methods needed for reliable laboratory diagnostics of MRSA, oxacillin disk diffusion and MIC tests were compared with the cefoxitin disk diffusion method at both +35°C and +30°C, with and without the addition of sodium chloride (NaCl) to the Mueller-Hinton test medium, and an in-house PCR was compared with two commercial molecular methods (the GenoType® MRSA test and the EVIGENE™ MRSA Detection test) using S. aureus and other bacterial species. The cefoxitin disk diffusion method was superior to oxacillin disk diffusion and to the MIC tests in predicting mecA-mediated resistance in S. aureus when incubated at +35°C, with or without NaCl in the test medium. Both the GenoType® MRSA and the EVIGENE™ MRSA Detection test are usable, accurate, cost-effective, and sufficiently fast methods for rapid MRSA confirmation from a pure culture.
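As a sketch of how a disk diffusion result is turned into a categorical call, the snippet below classifies a cefoxitin zone diameter for S. aureus. The breakpoint value is an illustrative assumption in the spirit of CLSI-type criteria, not a figure from this study.

# Hypothetical cefoxitin disk diffusion breakpoint for S. aureus:
# zone diameters at or below the cut-off predict mecA-mediated
# (methicillin) resistance. The 21 mm value is an assumed example.
CEFOXITIN_RESISTANT_MAX_MM = 21

def classify_cefoxitin_zone(zone_mm):
    if zone_mm <= CEFOXITIN_RESISTANT_MAX_MM:
        return "mecA-positive (MRSA) predicted"
    return "mecA-negative (MSSA) predicted"

print(classify_cefoxitin_zone(19))  # mecA-positive (MRSA) predicted
print(classify_cefoxitin_zone(27))  # mecA-negative (MSSA) predicted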

Abstract:

Ruptured abdominal aortic aneurysm (RAAA) is a life-threatening event; without operative treatment the patient will die. Overall mortality can be as high as 80-90%, and repair of RAAA should thus be attempted whenever feasible. Quality of life (QoL) has become an increasingly important outcome measure in vascular surgery. The aim of this study was to evaluate the outcomes of RAAA and to identify predictors of mortality. In the Helsinki and Uusimaa district, 626 patients were identified as having had a RAAA in 1996-2004; 352 of them were admitted to Helsinki University Central Hospital (HUCH). Based on the Finnvasc Registry, 836 patients underwent repair of RAAA in 1991-1999. The 30-day operative mortality and the hospital and population-based mortality were assessed, as was the effect of regional centralisation and improved in-hospital quality of care on the outcome of RAAA. QoL among survivors of RAAA was evaluated with the RAND-36 questionnaire. Quality-adjusted life years (QALYs), which combine length of life and QoL, were calculated using the EQ-5D index and an estimate of life expectancy. Predictors of outcome after RAAA were assessed at admission and 48 hours after repair.

The 30-day operative mortality rate was 38% in HUCH and 44% nationwide, whereas the hospital mortality was 45% in HUCH. Population-based mortality was 69% in 1996-2004 and 56% in 2003-2004. After organisational changes were undertaken, mortality decreased significantly at all levels. Among the survivors, QoL was almost equal to the norms of age- and sex-matched controls; only physical functioning was slightly impaired. Successful repair of RAAA yielded a mean of 4.1 (range 0-30.9) QALYs per RAAA patient, non-survivors included. The preoperative Glasgow Aneurysm Score was an independent predictor of 30-day operative mortality after RAAA, and it also predicted the 48-hour outcome in initial survivors of repair. A high Glasgow Aneurysm Score and high age were associated with low numbers of achievable QALYs. Organ dysfunction measured by the Sequential Organ Failure Assessment (SOFA) score at 48 hours after repair was the strongest predictor of death.

In conclusion, surgery for RAAA is a life-saving and cost-effective procedure. The centralisation of vascular emergencies improved the outcome of RAAA patients, and the survivors had a good QoL. Owing to their moderate discriminatory value, predictive models can be used at the individual level only to provide supplementary information for clinical decision-making. These results support an active operation policy, as there is no reliable measure for predicting the outcome after RAAA.
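The QALY calculation combines survival time with a utility weight, here the EQ-5D index, as stated above. A minimal sketch follows; the utility and life-expectancy figures are made-up illustrations, not patient data, and discounting is ignored.

# QALYs combine survival time with a quality weight: one year in full
# health (EQ-5D index 1.0) contributes 1 QALY, one year at index 0.8
# contributes 0.8 QALYs. Discounting is ignored in this sketch.
def qalys(life_years, eq5d_index):
    return life_years * eq5d_index

print(round(qalys(10, 0.8), 1))  # 8.0 QALYs for an illustrative survivor
print(qalys(0, 0.0))             # non-survivors contribute 0 QALYs, which is
                                 # why the reported mean of 4.1 includes zeros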

Abstract:

Within the last 15 years, several new leukoencephalopathies have been recognized; however, more than half of children with cerebral white matter abnormalities still have no specific diagnosis. Our aim was to classify unknown leukoencephalopathies and to identify new diseases among them. During the study, three subgroups of patients were delineated and examined further.

First, we evaluated 38 patients with unknown leukoencephalopathy. Brain MRI findings were grouped into seven categories according to the predominant location of the abnormalities; the largest subgroups were myelination abnormalities (n=20) and periventricular white matter abnormalities (n=12). Six patients had uniform MRI findings, with signal abnormalities in the hemispheric white matter and in selective brain stem and spinal cord tracts. Magnetic resonance spectroscopy (MRS) showed elevated lactate and decreased N-acetylaspartate in the abnormal white matter. The patients presented with ataxia, tremor, distal spasticity, and signs of dorsal column dysfunction. This phenotype - leukoencephalopathy with brain stem and spinal cord involvement and elevated white matter lactate (LBSL) - was first published elsewhere in 2003. A new finding was the development of a mild axonal neuropathy. The etiopathogenesis of this disease is unknown, but the elevated white matter lactate in MRS suggests a mitochondrial disorder.

Secondly, we studied 22 patients with 18q deletions. Clinical and MRI findings were correlated with the molecularly defined size of the deletion. All patients with deletions between markers D18S469 and D18S1141 (n=18) had abnormal myelination on brain MRI, while four patients with interstitial deletions sparing that region had a normal myelination pattern. Haploinsufficiency of myelin basic protein is suggested to be responsible for this dysmyelination. Congenital aural atresia/stenosis was found in 50% of the cases and was associated with deletions between markers D18S812 (at 18q22.3) and D18S1141 (at 18q23).

The last part of the study comprised 13 patients with leukoencephalopathy and extensive cerebral calcifications. They showed a spectrum of findings, including progressive cerebral cysts, retinal telangiectasias and angiomas, intrauterine growth retardation, skeletal and hematologic abnormalities, and severe intestinal bleeding, which overlap with the features previously reported in patients with "Coats plus" syndrome and "leukoencephalopathy with calcifications and cysts", suggesting that these disorders are related. All autopsied patients had similar neuropathologic findings of calcifying obliterative microangiopathy. As there were affected siblings and patients of both sexes, our patients may represent an autosomal recessive disorder; we have started genealogic and molecular genetic studies of it.

Abstract:

Visual acuities at the time of referral and on the day before surgery were compared in 124 patients operated on for cataract in Vaasa Central Hospital, Finland. Preoperative visual acuity and the occurrence of ocular and general disease were compared in samples of consecutive cataract extractions performed in 1982, 1985, 1990, 1995 and 2000 in two hospitals in the Vaasa region. The repeatability and the standard deviation of random measurement error of visual acuity and refractive error determination in a clinical environment were estimated in cataractous, pseudophakic and healthy eyes by re-examining patients referred to cataract surgery or consultation by ophthalmic professionals; altogether 99 eyes of 99 persons (41 cataractous, 36 pseudophakic and 22 healthy eyes) with a visual acuity range of Snellen 0.3 to 1.3 (0.52 to -0.11 logMAR) were examined.

During an average waiting time of 13 months, visual acuity in the study eye decreased from 0.68 logMAR to 0.96 logMAR (from 0.2 to 0.1 in Snellen decimal values), an average decrease of 0.27 logMAR per year. In the fastest quartile the change was 0.75 logMAR per year and in the second fastest 0.29 logMAR per year; the third and fourth quartiles were virtually unaffected. From 1982 to 2000, the incidence of cataract surgery in the Vaasa region increased from 1.0 to 7.2 operations per 1000 inhabitants per year. Over the same period, the average preoperative visual acuity improved by 0.85 logMAR in the operated eye (in decimal values, from 0.03 to 0.2) and by 0.27 logMAR in the better eye (from 0.23 to 0.43). The proportion of patients profoundly visually handicapped before the operation (VA in the better eye <0.1) fell from 15% to 4%, and that of less profoundly visually handicapped patients (VA in the better eye 0.1 to <0.3) from 47% to 15%.

The repeatability of visual acuity measurement, estimated as a coefficient of repeatability, was ±0.18 logMAR for all 99 eyes, and the standard deviation of measurement error was 0.06 logMAR. Eyes with the lowest visual acuity (0.3-0.45) had the largest variability (coefficient of repeatability ±0.24 logMAR), and eyes with a visual acuity of 0.7 or better the smallest (±0.12 logMAR). The repeatability of refractive error measurement was studied in the same patients. Differences between measurements 1 and 2 were calculated as three-dimensional vector values and as spherical equivalents, and expressed as coefficients of repeatability. For all eyes, the coefficients of repeatability for the vertical, torsional and horizontal vectors were ±0.74 D, ±0.34 D and ±0.93 D, respectively, and for the spherical equivalent ±0.74 D. Eyes with lower visual acuity (0.3-0.45) had larger variability in vector and spherical equivalent values (±1.14 D), but the difference between visual acuity groups was not statistically significant. The difference in the mean defocus equivalent between measurements 1 and 2 was, however, significantly greater in the lower visual acuity group. If a change of ±0.5 D (measured in defocus equivalents) is accepted as a basis for changing spectacles for eyes with good vision, the corresponding basis for eyes in the visual acuity range 0.3-0.65 would be ±1 D. Differences in repeated visual acuity measurements are thus partly explained by errors in refractive error measurement.
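Two quantities above reward a worked check: the Snellen-decimal-to-logMAR conversion, and the coefficient of repeatability, which under the usual Bland-Altman convention is 1.96 times the standard deviation of the difference between two measurements (about 1.96·√2 times the within-subject error SD). That ±1.96 convention is a standard assumption on my part, not spelled out in the abstract.

import math

def to_logmar(decimal_acuity):
    # logMAR = -log10(decimal visual acuity)
    return -math.log10(decimal_acuity)

print(round(to_logmar(0.2), 2))  # 0.7, close to the 0.68 logMAR reported above
print(round(to_logmar(0.1), 2))  # 1.0, close to the 0.96 logMAR reported above

def coefficient_of_repeatability(within_subject_sd):
    # CR = 1.96 * SD of the between-measurement difference
    #    = 1.96 * sqrt(2) * within-subject (measurement-error) SD
    return 1.96 * math.sqrt(2) * within_subject_sd

print(round(coefficient_of_repeatability(0.06), 2))  # ~0.17, near the reported ±0.18 logMAR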

Abstract:

The need for special education (SE) is increasing, and the majority of those whose problems are due to neurodevelopmental disorders have no specific aetiology. The aim of this study was to evaluate the contribution of prenatal and perinatal factors, and of factors associated with growth and development, to a later need for full-time SE, and to assess structural and volumetric brain alterations among subjects with an unexplained, familial need for SE. A random sample of 900 subjects in full-time SE, allocated into three levels of neurodevelopmental problems, and 301 controls in mainstream education (ME) provided data on socioeconomic factors, pregnancy, delivery, growth, and development. Of those, 119 subjects belonging to sibling pairs in full-time SE of unexplained aetiology and 43 controls in ME underwent brain magnetic resonance imaging (MRI). Structural brain alterations were analysed and midsagittal area and diameter measurements were made; voxel-based morphometry (VBM) provided detailed information on regional grey matter, white matter, and cerebrospinal fluid (CSF) volume differences.

A father's age of ≥ 40 years, low birth weight, male sex, and lower socioeconomic status all increased the probability of SE placement. A one standard deviation score decrease in height at age 1 year raised the probability of SE placement by 40%, and in head circumference by 28%. In infancy, the gross motor milestones differentiated the children; from age 18 months, the fine motor milestones and those related to speech and social skills became more important. Brain MRI revealed no specific aetiology for the subjects in SE; however, they more often had ≥ 3 abnormal MRI findings (thin corpus callosum and enlarged cerebral and cerebellar CSF spaces). In VBM, subjects in full-time SE had smaller global white matter, CSF, and total brain volumes than controls. Compared with controls, subjects with intellectual disabilities showed regional volume alterations: greater grey matter volumes bilaterally in the anterior cingulate cortex, smaller grey matter volumes in the left thalamus and left cerebellar hemisphere, greater white matter volume in the left fronto-parietal region, and smaller white matter volumes bilaterally in the posterior limbs of the internal capsules.

In conclusion, the epidemiological studies identified several factors that increased the probability of SE placement, useful as a framework for interventional studies. The global and regional brain MRI findings provide an interesting basis for future investigations of learning-related brain structures in young subjects with cognitive impairments or intellectual disabilities of unexplained, familial aetiology.

Abstract:

The soy-derived phytoestrogen genistein and 17β-estradiol (E2), the principal endogenous estrogen in women, are also potent antioxidants that protect LDL and HDL lipoproteins against oxidation. This protection is enhanced by esterification with fatty acids, which yields lipophilic molecules that accumulate in lipoproteins and fatty tissues. The aims were to investigate whether genistein becomes esterified with fatty acids in human plasma and accumulates in lipoproteins, and to develop a method for quantitating the esters; to study the antioxidant activity of different natural and synthetic estrogens in LDL and HDL; and to determine the E2 esters in visceral and subcutaneous fat in late pregnancy and in pre- and postmenopause. Human plasma was incubated with [3H]genistein, and its esters were analyzed from lipoprotein fractions. Time-resolved fluoroimmunoassay (TR-FIA) was used to quantitate genistein esters in monkey plasma after subcutaneous and oral administration, and the E2 esters in women's serum and adipose tissue. The antioxidant activity of estrogen derivatives (n=43) in LDL and HDL was assessed by monitoring the copper-induced formation of conjugated dienes.

Human plasma was shown to produce lipoprotein-bound genistein fatty acid esters, providing a possible explanation for the previously reported increased oxidation resistance of LDL particles during intake of soybean phytoestrogens. Genistein esters could be introduced into blood by subcutaneous administration. The antioxidant effect of estrogens on lipoproteins is highly structure-dependent: LDL and HDL were protected against oxidation by many unesterified yet lipophilic derivatives, and the strongest antioxidants had an unsubstituted A-ring phenolic hydroxyl group with one or two adjacent methoxy groups. E2 ester levels were high during late pregnancy. The median concentration of E2 esters in pregnancy serum was 0.42 nmol/l (n=13), and in pre- (n=8) and postmenopausal serum (n=6) 0.07 and 0.06 nmol/l, respectively. In visceral fat, the concentration of E2 esters was 4.24 nmol/l during pregnancy and 0.82 and 0.74 nmol/l in pre- and postmenopause; the results for subcutaneous fat were similar. In serum and fat during pregnancy, E2 esters constituted about 0.5% and 10% of the free E2, respectively. In non-pregnant women, most of the E2 in fat was esterified (ester/free ratio 150-490%). In postmenopause, E2 levels in fat greatly exceeded those in serum, the majority being esterified.

The pathways for fatty acid esterification of steroid hormones are found in organisms ranging from invertebrates to vertebrates. The evolutionary preservation and relative abundance of E2 esters, especially in fat tissue, suggest a biological function, most likely as a readily available source of E2. The body's own estrogen reservoir could be exploited as a source of E2 by pharmacologically regulating E2 esterification or hydrolysis.
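The ester/free ratios quoted above translate directly into the fraction of total E2 that is esterified: fraction = r / (1 + r). A small worked check on the figures given for fat in non-pregnant women:

def esterified_fraction(ester_to_free_ratio):
    # ratio expressed as ester / free; esterified fraction of total = r / (1 + r)
    return ester_to_free_ratio / (1 + ester_to_free_ratio)

# Ester/free ratios of 150% and 490%, the range reported for fat above:
print(round(esterified_fraction(1.5), 2))  # 0.60 -> 60% of total E2 esterified
print(round(esterified_fraction(4.9), 2))  # 0.83 -> 83% of total E2 esterified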

Abstract:

Juvenile idiopathic arthritis (JIA) is a heterogeneous group of childhood chronic arthritides, associated with chronic uveitis in 20% of cases. For JIA patients responding inadequately to conventional disease-modifying anti-rheumatic drugs (DMARDs), biologic therapies, i.e. anti-tumor necrosis factor (anti-TNF) agents, are available. This retrospective multicenter study included 258 JIA patients refractory to DMARDs who received biologic agents during 1999-2007.

Prior to the initiation of anti-TNFs, growth velocity was delayed in 75% and normal in 25% of the 71 patients evaluated for growth. Those with delayed growth demonstrated a significant increase in growth velocity after the initiation of anti-TNFs; the increase in growth rate was unrelated to the pubertal growth spurt, and no change in skeletal maturation was observed before versus after anti-TNFs. The strongest predictor of the change in growth velocity was the growth rate prior to anti-TNFs, and the change in inflammatory activity remained a significant predictor even after the decrease in glucocorticoid doses was taken into account.

In JIA-associated uveitis, the impact of two first-line biologic agents, etanercept and infliximab, and of a second- or third-line anti-TNF agent, adalimumab, was evaluated. Of 108 refractory JIA patients receiving etanercept or infliximab, 45 (42%) had uveitis. Uveitis improved in 14 (31%), showed no change in 14 (31%), and worsened in 17 (38%). Uveitis improved more frequently (p=0.047), and the frequency of annual uveitis flares was lower (p=0.015), in patients on infliximab than in those on etanercept. Of 20 patients taking adalimumab, 19 (95%) had previously failed etanercept and/or infliximab; uveitis improved in 7 (35%), worsened in one (5%), and showed no change in 12 (60%). Those with improved uveitis were younger and had a shorter disease duration. No serious adverse events (AEs) or side-effects were observed, and adalimumab was effective against arthritis as well.

Long-term drug survival (i.e. the continuation rate on the drug) with etanercept (n=105) vs. infliximab (n=104) was 68% vs. 68% at 24 months and 61% vs. 48% at 48 months (p=0.194 in log-rank analysis). The first-line anti-TNF agent was discontinued due to inefficacy (etanercept 28% vs. infliximab 20%, p=0.445), AEs (7% vs. 22%, p=0.002), or inactive disease (10% vs. 16%, p=0.068). Females, patients with systemic JIA (sJIA), and those taking infliximab as the first therapy were at higher risk of treatment discontinuation. One-third switched to a second anti-TNF agent, which was discontinued less often than the first.

In conclusion, in refractory JIA the anti-TNFs enhanced growth velocity. Four-year treatment survival was comparable between etanercept and infliximab, and switching from a first-line to a second-line agent was a reasonable therapeutic option. During anti-TNF treatment, uveitis improved in one-third of the patients with JIA-associated anterior uveitis.
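Drug survival above is a Kaplan-Meier-type continuation rate. The following sketch shows the product-limit estimate that underlies figures such as 68% at 24 months; the event data are invented for illustration, not taken from the study.

# Kaplan-Meier product-limit estimate: at each discontinuation time t,
# multiply by (1 - d_t / n_t), with d_t discontinuations and n_t patients
# still on the drug (censored patients leave n_t without an event).
def km_estimate(event_times, n_at_risk, n_events):
    s = 1.0
    for t, n, d in zip(event_times, n_at_risk, n_events):
        s *= 1 - d / n
    return s

# Invented example: 100 patients, discontinuations at 6, 12 and 24 months.
print(round(km_estimate([6, 12, 24], [100, 85, 70], [10, 8, 7]), 2))  # 0.73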

Abstract:

Pediatric renal transplantation (TX) has evolved greatly during the past few decades, and today TX is considered the standard of care for children with end-stage renal disease. In Finland, 191 children had received renal transplants by October 2007, and 42% of them have already reached adulthood. Improvements in the treatment of end-stage renal disease, surgical techniques, intensive care medicine, and immunosuppressive therapy have paved the way to the current highly successful outcomes of pediatric transplantation. In children, the transplanted graft should last for decades, and normal growth and development should be guaranteed; these objectives set considerable requirements for optimizing and fine-tuning the post-operative therapy. Careful optimization of immunosuppressive therapy is crucial in protecting the graft against rejection, but also in protecting the patient against the adverse effects of the medication. The present study reports the results of a retrospective investigation into the individualized dosing of immunosuppressive medication, based on pharmacokinetic profiles, therapeutic drug monitoring, graft function and histology studies, and glucocorticoid bioactivity determinations. Subgroups of a total of 178 patients who received renal transplants in 1988-2006 were included. The mean age at TX was 6.5 years, and approximately 26% of the patients were <2 years of age. The most common diagnosis leading to renal TX was congenital nephrosis of the Finnish type (NPHS1). Pediatric patients in Finland receive standard triple immunosuppression after renal TX, consisting of cyclosporine A (CsA), methylprednisolone (MP) and azathioprine (AZA). Optimal dosing of these agents is important to prevent rejection and preserve graft function on the one hand, and to avoid potentially serious adverse effects on the other.

CsA has a narrow therapeutic window and individually variable pharmacokinetics, making therapeutic monitoring mandatory. Traditionally, CsA monitoring has been based on pre-dose trough levels (C0), but recent pharmacokinetic and clinical studies have indicated that the immunosuppressive effect is better related to diurnal CsA exposure and to the blood CsA concentration 0-4 hours after dosing; the two-hour post-dose concentration (C2) has proved a reliable surrogate marker of CsA exposure. Individual starting doses of CsA were analyzed in 65 patients. A recommended dose, based on a pre-TX pharmacokinetic study, was calculated for each patient according to the pre-TX protocol. The predicted dose was clearly higher in the youngest children than in the older ones (22.9±10.4 vs. 10.5±5.1 mg/kg/d in patients <2 and >8 years of age, respectively). The orally administered doses of CsA were recorded for three weeks after TX and compared with the pharmacokinetically predicted dose; after TX, the dosing of CsA was adjusted according to clinical parameters and the blood CsA trough concentration. The pharmacokinetically predicted dose and patient age were the two significant parameters explaining the post-TX doses of CsA; accordingly, young children received significantly higher oral doses than the older ones. The correlation with the actually administered doses after TX was best in those patients whose predicted dose was clearly higher or lower (> ±25%) than the average of their age group. Given the great individual variation in pharmacokinetics, standardized dosing of CsA (based on body mass or surface area) may not be adequate, and pre-TX profiles are helpful in determining suitable initial doses.

CsA monitoring based on trough and C2 concentrations was analyzed in 47 patients who received renal transplants in 2001-2006. C0 and C2 concentrations and acute rejection episodes were recorded during the post-TX hospitalization and again three months after TX, when the first protocol core biopsy was obtained. The patients who remained rejection-free had slightly higher C2 concentrations, especially very early after TX; after the first two weeks, the trough level was also higher in the rejection-free patients than in those with acute rejections. Three months after TX, the trough level was higher in patients with normal histology than in those with rejection changes in the routine biopsy. Monitoring of both the trough level and C2 may thus be warranted, to guarantee a sufficient peak concentration and baseline immunosuppression on the one hand and to avoid over-exposure on the other.

Controlling rejection in the early months after transplantation is crucial, as early rejection may contribute to the development of chronic allograft nephropathy. Recently, it has become evident that immunoactivation fulfilling the histological criteria of acute rejection can occur in a well-functioning graft without clinical signs or laboratory perturbations. The influence of treating subclinical rejection, diagnosed in the 3-month protocol biopsy, on graft function and histology 18 months after TX was analyzed in 22 patients and compared with 35 historical control patients. The incidence of subclinical rejection at three months was 43%, and these patients received a standard rejection treatment (a course of increased MP) and/or increased baseline immunosuppression, depending on the severity of the rejection and graft function. The glomerular filtration rate (GFR) at 18 months was significantly better in the patients screened and treated for subclinical rejection than in the historical patients (86.7±22.5 vs. 67.9±31.9 ml/min/1.73 m2). The improvement was most remarkable in the youngest (<2 years) age group (94.1±11.0 vs. 67.9±26.8 ml/min/1.73 m2). Histological findings of chronic allograft nephropathy were also more common in the historical patients in the 18-month protocol biopsy.

All pediatric renal TX patients receive MP as part of their baseline immunosuppression. Although the maintenance dose of MP is very low in the majority of patients, the well-known steroid-related adverse effects are not uncommon. A previous study in Finnish pediatric TX patients showed that steroid exposure, measured as the area under the concentration-time curve (AUC), rather than the dose, correlates with the adverse effects. In the present study, the MP AUC was measured in sixteen stable maintenance patients, and it correlated with excess weight gain during the 12 months after TX as well as with height deficit. A novel bioassay measuring the activation of the glucocorticoid receptor-dependent transcription cascade was also employed to assess the biological effect of MP. Glucocorticoid bioactivity was related to the adverse effects, although the relationship was not as apparent as that with the serum MP concentration.

The findings of this study support individualized monitoring and adjustment of immunosuppression based on pharmacokinetics, graft function and histology. Pharmacokinetic profiles are helpful in estimating drug exposure and thus in identifying patients at risk of excessive or insufficient immunosuppression. Individualized doses and monitoring of blood concentrations should definitely be employed with CsA, and possibly also with steroids: as an alternative to complete steroid withdrawal, individualized dosing based on exposure monitoring might help in avoiding the adverse effects. Early screening and treatment of subclinical immunoactivation is beneficial, as it improves the prospects of good long-term graft function.
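Drug exposure above is quantified as the area under the concentration-time curve. A minimal sketch of the standard trapezoidal rule for AUC estimation follows; the sampling times and concentrations are invented for illustration, not measurements from the study.

# Trapezoidal AUC: sum of (t[i+1] - t[i]) * (c[i] + c[i+1]) / 2
def auc_trapezoidal(times_h, concentrations):
    return sum(
        (t1 - t0) * (c0 + c1) / 2
        for t0, t1, c0, c1 in zip(times_h, times_h[1:], concentrations, concentrations[1:])
    )

# Invented CsA profile (ug/l) sampled at 0, 2, 4, 8 and 12 h post-dose;
# note the peak near 2 h, which is why C2 tracks exposure well.
print(auc_trapezoidal([0, 2, 4, 8, 12], [150, 900, 600, 300, 160]))  # 5270.0 ug*h/l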

Abstract:

Snow cover is very sensitive to climate change and, owing to its high albedo, has a large feedback effect on the climate system. Snow covers almost all surfaces in Antarctica, and small changes in snow properties can mean large changes in absorbed radiation. In the ongoing discussion of climatic change, the mass balance of Antarctica has received increasing attention during recent decades, since its reaction to global warming strongly influences sea-level change. The aim of the present work was to examine the spatial and temporal variations in the physical and chemical characteristics of surface snow and in annual accumulation rates in western Dronning Maud Land, Antarctica. The data were collected along a 350-km-long transect from the coast to the plateau during the years 1999-2004 as a part of the Finnish Antarctic Research Programme (FINNARP). The research focused on the most recent annual accumulation in the coastal area. The results show that the distance from the sea, the moisture source, was the predominant factor controlling the variations in both the physical (conductivity, grain size, oxygen isotope ratio and accumulation) and the chemical snow properties; sea-salt and sulphur-containing components predominated in the coastal region. Local influences of nunataks and topographic highs were also visible in the snow. All measured properties varied widely within single sites, mostly owing to redistribution by wind and sastrugi topography, which underlines the importance of spatially representative measurements. The mean accumulation was 312 ± 28 mm w.e. on the ice shelf, 215 ± 43 mm w.e. in the coastal region, and 92 ± 25 mm w.e. on the plateau. Depth hoar layers were usually found under a thin ice crust and were associated with a low dielectric constant and high concentrations of nitrate. Given the vast size of the Antarctic ice sheet and its geographic characteristics, it is important to extend the investigation of surface snow properties and accumulation to provide well-documented data.
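Accumulation above is given in millimetres of water equivalent (mm w.e.), i.e. snow depth scaled by the ratio of snow density to water density. A small worked example; the depth and density values are illustrative assumptions, not measurements from the transect.

WATER_DENSITY = 1000.0  # kg/m^3

def mm_water_equivalent(snow_depth_mm, snow_density):
    # snow_density in kg/m^3; w.e. = depth * (rho_snow / rho_water)
    return snow_depth_mm * snow_density / WATER_DENSITY

# Illustrative annual layer: 780 mm of snow at an assumed density of 400 kg/m^3
print(mm_water_equivalent(780, 400))  # 312.0 mm w.e., matching the ice-shelf mean above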