8 results for Total Maximum Daily Load Program (Ill.)

in Helda - Digital Repository of the University of Helsinki


Relevance:

100.00%

Abstract:

Olkiluoto Island is situated in the northern Baltic Sea, near the southwestern coast of Finland, and is the proposed location of a spent nuclear fuel repository. This study examined Holocene palaeoseismicity in the Olkiluoto area and in the surrounding sea areas by computer simulations together with acoustic-seismic, sedimentological and dating methods. The most abundant rock type on the island is migmatitic mica gneiss, intruded by tonalites, granodiorites and granites. The surrounding Baltic Sea seabed consists of Palaeoproterozoic crystalline bedrock, which is to a great extent covered by younger Mesoproterozoic sedimentary rocks. The area contains several ancient deep-seated fracture zones that divide it into bedrock blocks. The response of the bedrock at the Olkiluoto site was modelled for four future ice-age scenarios. Each scenario produced shear displacements of fractures with different times of occurrence and varying recovery rates. Generally, the larger the maximum ice load, the larger the permanent shear displacements. For a basic case, the maximum shear displacements were a few centimetres at the proposed nuclear waste repository level, at approximately 500 m b.s.l. High-resolution, low-frequency echo-sounding was used to examine the Holocene submarine sedimentary structures and possible direct and indirect indicators of palaeoseismic activity in the northern Baltic Sea. Echo-sounding profiles of Holocene submarine sediments revealed slides and slumps, normal faults, debris flows and turbidite-type structures. The profiles also showed pockmarks and other structures related to gas or groundwater seepages, which might be related to fracture zone activation. Evidence of postglacial reactivation in the study area was derived from the spatial occurrence of some of the structures, especially the faults and the seepages, in the vicinity of some old bedrock fracture zones. The palaeoseismic event or events in the Olkiluoto area were dated and the palaeoenvironment was characterized using palaeomagnetic, biostratigraphical and lithostratigraphical methods, enhancing the reliability of the chronology. Combined lithostratigraphy, biostratigraphy and palaeomagnetic stratigraphy yielded an age estimate of 10 650 to 10 200 cal. years BP for the palaeoseismic event(s). All Holocene sediment faults in the northern Baltic Sea occur at the same stratigraphical level, the age of which is estimated at 10 700 cal. years BP (9500 radiocarbon years BP). Their movement is suggested to have been triggered by palaeoseismic event(s) when the Late Weichselian ice sheet was retreating from the site and bedrock stresses were released along the bedrock fracture zones. The absence of younger or repeated traces of seismic events corroborates the suggestion that the major seismic activity occurred within a short time during and after the last deglaciation. The origin of the gas/groundwater seepages remains unclear. Their reflections in the echo-sounding profiles imply that part of the gas is derived from the organic-bearing Litorina and modern gyttja clays. However, at least some of the gas is derived from the bedrock. Additional information could be gained by pore water analysis from the pockmarks. Information on postglacial fault activation and possible gas and/or fluid discharges under high hydraulic heads is relevant to the safety assessment of the planned spent nuclear fuel repository in the region.

Relevance:

40.00%

Abstract:

Acute renal failure (ARF) is a clinical syndrome characterized by a rapidly decreasing glomerular filtration rate, which results in disturbances in electrolyte and acid-base homeostasis, derangement of extracellular fluid volume, and retention of nitrogenous waste products, and is often associated with decreased urine output. ARF affects about 5-25% of patients admitted to intensive care units (ICUs) and is linked to high mortality and morbidity rates. In this thesis, the outcome of critically ill patients with ARF and the factors related to outcome were evaluated. A total of 1662 patients from two ICUs and one acute dialysis unit in Helsinki University Hospital were included. In Study I, the prevalence of ARF was calculated and classified according to two ARF-specific scoring methods, the RIFLE classification and the classification created by Bellomo et al. (2001). Study II evaluated monocyte human leukocyte antigen-DR (HLA-DR) expression and plasma levels of one proinflammatory (interleukin (IL)-6) and two anti-inflammatory (IL-8 and IL-10) cytokines in predicting survival of critically ill ARF patients. Study III investigated serum cystatin C as a marker of renal function in ARF and its power in predicting survival of critically ill ARF patients. Study IV evaluated the effect of intermittent hemodiafiltration (HDF) on myoglobin elimination from plasma in severe rhabdomyolysis. Study V assessed long-term survival and health-related quality of life (HRQoL) in ARF patients. Neither of the ARF-specific scoring methods showed good discriminative power regarding hospital mortality. The maximum RIFLE score during the first three days in the ICU was an independent predictor of hospital mortality. As a marker of renal dysfunction, serum cystatin C showed no benefit over plasma creatinine in detecting ARF or predicting patient survival. Neither cystatin C, plasma concentrations of IL-6, IL-8, and IL-10, nor monocyte HLA-DR expression was clinically useful in predicting mortality in ARF patients. HDF may be used to clear myoglobin from plasma in rhabdomyolysis, especially if alkalinization of the urine does not succeed. The long-term survival of patients with ARF was found to be poor, and the HRQoL of the survivors is lower than that of the age- and gender-matched general population.
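
To make the grading concrete, below is a minimal Python sketch of the creatinine arm of the RIFLE classification referred to above, with Risk, Injury and Failure corresponding to roughly 1.5-, 2- and 3-fold increases in serum creatinine. The function, its interface and the example values are illustrative rather than taken from the thesis, and the urine-output criteria of RIFLE are omitted for brevity.

    def rifle_category(baseline_creatinine, current_creatinine):
        """Classify ARF severity by the creatinine arm of RIFLE.

        Returns 'Failure', 'Injury', 'Risk' or None (no RIFLE class).
        The urine-output criteria of RIFLE are omitted here.
        """
        ratio = current_creatinine / baseline_creatinine
        if ratio >= 3.0:
            return "Failure"   # >= 3-fold increase in serum creatinine
        if ratio >= 2.0:
            return "Injury"    # >= 2-fold increase
        if ratio >= 1.5:
            return "Risk"      # >= 1.5-fold increase
        return None

    # Hypothetical example: creatinine rising from 80 to 180 umol/L -> 'Injury'
    print(rifle_category(80, 180))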

Relevance:

30.00%

Abstract:

Precipitation-induced runoff and leaching from milled peat mining mires by peat types: a comparative method for estimating the loading of water bodies during peat production. This research project in environmental geology arose out of an observed need to be able to predict more accurately the loading of watercourses with detrimental organic substances and nutrients from existing and planned peat production areas, since the authorities' capacity for insisting on such predictions, covering the whole duration of peat production, in connection with environmental impact assessments is at present highly limited. National and international decisions regarding the monitoring of the condition of watercourses and their improvement and restoration require more sophisticated evaluation methods in order to forecast watercourse loading and its environmental impacts at the stage of land-use planning and preparations for peat production. The present project thus set out from the premise that it would be possible, on the basis of existing data on mire and peat properties, to construct estimates of the typical loading from production mires over the whole duration of their exploitation. Finland has some 10 million hectares of peatland, accounting for almost a third of its total area. Macroclimatic conditions have varied in the course of the Holocene growth and development of this peatland, and with them the habitats of the peat-forming plants. Temperatures and moisture conditions have played a significant role in determining the dominant species of mire plants growing there at any particular time, the resulting mire types and the accumulation and deposition of plant remains to form the peat. The above climatic, environmental and mire development factors, together with ditching, have contributed, and continue to contribute, to the existence of peat horizons that differ in their physical and chemical properties, leading to differences in material transport between peatlands in a natural state and mires that have been ditched or prepared for forestry and peat production. Watercourse loading from the ditching of mires or their use for peat production can have detrimental effects on river and lake environments and their recreational use, especially where oxygen-consuming organic solids and soluble organic substances and nutrients are concerned. It has not previously been possible, however, to estimate in advance the watercourse loading likely to arise from ditching and peat production on the basis of the characteristics of the peat in a mire, although earlier observations have indicated that watercourse loading from peat production can vary greatly, and it has been suggested that differences in peat properties may be of significance in this. Sprinkling is used here, in combination with simulations of conditions in a milled peat production area, to determine the influence of the physical and chemical properties of milled peats in production mires on surface runoff into the drainage ditches and on the concentrations of material in the runoff water. Sprinkling and extraction experiments were carried out on 25 samples of milled Carex (C) and Sphagnum (S) peat of humification grades H 2.5-8.5, with moisture contents in the range 23.4-89% at the commencement of the first sprinkling, which was followed by a second sprinkling 24 hours later.
The water retention capacity of the peat was best, and surface runoff lowest, for Sphagnum and Carex peat samples of humification grades H 2.5-6 in the moisture content class 56-75%. On account of the hydrophobicity of dry peat, runoff increased in a fairly regular manner as the samples dried from 55% down to 24-30% moisture content. Runoff from the samples with an original moisture content over 55% increased by 63% in the second round of sprinkling relative to the first, as they had practically reached saturation point on the first occasion, while those with an original moisture content below 55% retained their high runoff in the second round, owing to continued hydrophobicity. The well-humified samples (H 6.5-8.5) with a moisture content over 80% showed a low water retention capacity and high runoff in both rounds of sprinkling. Loading of the runoff water with suspended solids, total phosphorus and total nitrogen, as well as the chemical oxygen demand (CODMn O2), varied greatly in the sprinkling experiment, depending on the peat type and degree of humification, but the concentrations of the same substances in the two sprinklings were closely or moderately closely correlated, and these correlations were significant. The concentrations of suspended solids observed in the simulations of a peat production area and the direct surface runoff from it into the drainage ditch system in response to rain (sprinkling intensity 1.27 mm/min) varied c. 60-fold between the degrees of humification for the Carex peats and c. 150-fold for the Sphagnum peats, while chemical oxygen demand varied c. 30-fold and c. 50-fold, respectively, total phosphorus c. 60-fold and c. 66-fold, total nitrogen c. 65-fold and c. 195-fold, and ammonium nitrogen c. 90-fold and c. 30-fold. The increases in concentrations in the runoff water were very closely correlated with increasing humification of the peat. The correlations of the concentrations measured in the extraction experiments (48 h) with peat type and degree of humification corresponded to those observed in the sprinkling experiments. The figures obtained for the surface runoff from a peat production area into the drainage ditches, simulated by means of sprinkling, and for the material concentrations in the runoff water were combined with statistics on the mean extent of daily rainfall (0-67 mm) during the frost-free period of the year (May-October) over an observation period of 30 years to yield typical annual loading figures (kg/ha) for suspended solids (SS), chemical oxygen demand of organic matter (CODMn O2), total phosphorus (tot. P) and total nitrogen (tot. N) entering the ditches for milled Carex (C) and Sphagnum (S) peats of humification grades H 2.5-8.5. In order to calculate the loading of the drainage ditches from a milled peat production mire with the aid of these annual comparative values (in kg/ha), information is required on the properties of the intended production mire and its peat. Once data are available on the area of the mire, its peat depth, peat types and their degrees of humification, dry matter content, calorific value and corresponding energy content, it is possible to produce mutually comparable estimates for individual mires with respect to the annual loading of the drainage ditch system and the surrounding watercourse over the whole service life of the production area, the duration of this service life, determinations of energy content and the amount of loading per unit of energy generated (kg/MWh).
For the 8 mires in the Köyhäjoki basin, Central Ostrobothnia, taken as an example, the loading of suspended solids (SS) in the drainage ditch networks, calculated on the basis of the typical values obtained here and the existing mire and peat data and expressed per unit of energy generated, varied between the mires and horizons in the range 0.9-16.5 kg/MWh. One of the aims of this work was to develop means of making better use of existing mire and peat data and of the results of corings and other field investigations. In this respect, combining the typical loading values (kg/ha) obtained here for S, SC, CS and C peats and the various degrees of humification (H 2.5-8.5) with the above mire and peat data, by means of a computer program for the acquisition and handling of such data, would enable all the information currently available, and that deposited in the system in the future, to be used for defining watercourse loading estimates for mires and comparing them with the corresponding estimates of energy content. The intention behind this work has been to respond to the challenge facing the energy generation industry of finding larger peat production areas that exert less loading on the environment, and to that facing the environmental authorities of improving the means available for estimating watercourse loading from peat production and its environmental impacts in advance. The results conform well to the initial hypothesis and to the goals laid down for the research, and they should enable watercourse loading from existing and planned peat production to be evaluated better in the future and the resulting impacts to be taken into account when planning land use and energy generation. The advance loading information available in this way would be of value in the selection of individual peat production areas, the planning of their exploitation, the introduction of water protection measures and the planning of loading inspections, in order to achieve controlled peat production that pays due attention to environmental considerations.
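
As a rough illustration of the comparative calculation described above, the following Python sketch converts an annual per-hectare loading value into loading per unit of energy generated (kg/MWh). Every input figure is an assumption chosen only to show the arithmetic; none of the numbers are taken from the thesis.

    # Illustrative conversion of a typical annual loading value (kg/ha) into
    # loading per unit of energy generated (kg/MWh) for a single mire.
    # All numbers below are assumed, not values from the thesis.

    area_ha = 120.0                   # production area of the mire
    annual_ss_load_kg_per_ha = 200.0  # assumed annual suspended-solids load
    service_life_years = 25           # assumed duration of peat production
    energy_content_mwh_per_ha = 5000  # assumed recoverable energy of the deposit

    total_load_kg = area_ha * annual_ss_load_kg_per_ha * service_life_years
    total_energy_mwh = area_ha * energy_content_mwh_per_ha

    # The area cancels out, so the comparison depends only on the per-hectare
    # loading, the service life and the per-hectare energy content.
    print(f"Loading per unit of energy: {total_load_kg / total_energy_mwh:.1f} kg/MWh")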

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Assessment of the outcome of critical illness is complex. Severity scoring systems and organ dysfunction scores are traditional tools for mortality and morbidity prediction in intensive care. Their ability to explain the risk of death is impressive for large cohorts of patients, but insufficient for an individual patient. Although events before intensive care unit (ICU) admission are prognostically important, the prediction models utilize data collected at and just after ICU admission. In addition, several biomarkers have been evaluated for predicting mortality, but none has proven entirely useful in clinical practice. Therefore, new prognostic markers of critical illness are vital when evaluating intensive care outcome. The aim of this dissertation was to investigate new measures and biological markers of critical illness and to evaluate their predictive value and association with mortality and disease severity. The impact of delay in the emergency department (ED) on intensive care outcome, measured as hospital mortality and health-related quality of life (HRQoL) at 6 months, was assessed in 1537 consecutive patients admitted to the medical ICU. Two new biological markers were investigated in two separate patient populations: 231 ICU patients and 255 patients with severe sepsis or septic shock. Cell-free plasma DNA is a surrogate marker of apoptosis. Its association with disease severity and mortality rate was evaluated in ICU patients. Next, the predictive value of plasma DNA regarding mortality, and its association with the degree of organ dysfunction and disease severity, was evaluated in severe sepsis or septic shock. Heme oxygenase-1 (HO-1) is a potential regulator of apoptosis. Finally, HO-1 plasma concentrations and HO-1 gene polymorphisms and their association with outcome were evaluated in ICU patients. The length of ED stay was not associated with the outcome of intensive care. The hospital mortality rate was significantly lower in patients admitted to the medical ICU from the ED than in non-ED admissions, and the HRQoL of the critically ill at 6 months was significantly lower than that of the age- and sex-matched general population. In the ICU patient population, the maximum plasma DNA concentration measured during the first 96 hours in intensive care correlated significantly with disease severity and the degree of organ failure and was independently associated with hospital mortality. In patients with severe sepsis or septic shock, cell-free plasma DNA concentrations were significantly higher in ICU and hospital nonsurvivors than in survivors and showed moderate discriminative power regarding ICU mortality. Plasma DNA was an independent predictor of ICU mortality, but not of hospital mortality. The degree of organ dysfunction correlated independently with plasma DNA concentration in severe sepsis and with plasma HO-1 concentration in ICU patients. The HO-1 -413T/GT(L)/+99C haplotype was associated with HO-1 plasma levels and with the frequency of multiple organ dysfunction. Plasma DNA and HO-1 concentrations may support the assessment of outcome or of organ failure development in critically ill patients, although their value is limited and requires further evaluation.
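
As an aside on how the "discriminative power" of a marker such as cell-free plasma DNA is commonly quantified, the following Python sketch computes the area under the ROC curve (AUC) for predicting ICU mortality. The patient data are invented purely for illustration and have nothing to do with the cohorts studied here.

    # The AUC summarizes how well higher marker values separate nonsurvivors
    # from survivors (0.5 = no discrimination, 1.0 = perfect discrimination).
    from sklearn.metrics import roc_auc_score

    died_in_icu = [0, 0, 1, 0, 1, 1, 0, 1]                   # invented outcomes
    plasma_dna  = [0.8, 1.6, 2.9, 1.4, 3.5, 1.2, 0.9, 4.1]   # invented levels

    print(f"AUC = {roc_auc_score(died_in_icu, plasma_dna):.2f}")  # ~0.88 here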

Relevance:

30.00%

Abstract:

Background. Cardiovascular disease (CVD) remains the most serious threat to life and health in industrialized countries. Atherosclerosis is the main underlying pathology associated with CVD, in particular coronary artery disease (CAD), ischaemic stroke, and peripheral arterial disease. Risk factors play an important role in initiating and accelerating the complex process of atherosclerosis. Most studies of risk factors have focused on the presence or absence of clinically defined CVD; less is known about the determinants of the severity and extent of atherosclerosis in symptomatic patients. Aims. To clarify the association between coronary and carotid artery atherosclerosis, and to study the determinants associated with these abnormalities, with special regard to novel cardiovascular risk factors. Subjects and methods. Quantitative coronary angiography (QCA) and B-mode ultrasound were used to assess coronary and carotid artery atherosclerosis in 108 patients with clinically suspected CAD referred for elective coronary angiography. To evaluate the anatomic severity and extent of CAD, several QCA parameters were incorporated into indexes. These measurements reflected CAD severity, extent, and overall atheroma burden and were calculated for the entire coronary tree and separately for different coronary segments (i.e., left main, proximal, mid, and distal segments). Maximum and mean intima-media thickness (IMT) values of the carotid arteries were measured and expressed as mean aggregate values. Furthermore, the study design included extensive fasting blood samples, an oral glucose tolerance test, and an oral fat-load test performed in each participant. Results. Maximum and mean IMT values were significantly correlated with CAD severity, extent, and atheroma burden. There was heterogeneity in the associations between IMT and CAD indexes according to the anatomical location of CAD: maximum and mean IMT values were correlated with QCA indexes for the mid and distal segments but not for the proximal segments of the coronary vessels. Paraoxonase-1 (PON1) activity and concentration were both lower in subjects with significant CAD, and there was a significant relationship between PON1 activity and concentration and coronary atherosclerosis assessed by QCA. PON1 activity was a significant determinant of the severity of CAD independently of HDL cholesterol. Neither PON1 activity nor concentration was associated with carotid IMT. The concentrations of triglycerides (TGs), triglyceride-rich lipoproteins (TRLs), oxidized LDL (oxLDL), and the cholesterol content of remnant lipoprotein particles (RLP-C) were significantly increased 6 hours after intake of an oral fatty meal compared with fasting values. The mean peak size of LDL remained unchanged 6 hours after the test meal. The correlations between total TGs, TRLs, and RLP-C in the fasting and postprandial states were highly significant. RLP-C correlated with oxLDL both in the fasting and in the fed state, and inversely with LDL size. In multivariate analysis, oxLDL was a determinant of the severity and extent of CAD. Neither total TGs, TRLs, oxLDL, nor LDL size was linked to carotid atherosclerosis. Insulin resistance (IR) was associated with increased severity and extent of coronary atherosclerosis and seemed to be a stronger predictor of coronary atherosclerosis in the distal parts of the coronary tree than in the proximal and mid parts. In the multivariate analysis, IR was a significant predictor of the severity of CAD.
IR did not correlate with carotid IMT. Maximum and mean carotid IMT were higher in patients with the apoE4 phenotype than in subjects with the apoE3 phenotype. Likewise, patients with the apoE4 phenotype had more severe and extensive CAD than individuals with the apoE3 phenotype. Conclusions. 1) There is an association between carotid IMT and the severity and extent of CAD. Carotid IMT seems to be a weaker predictor of coronary atherosclerosis in the proximal parts of the coronary tree than in the mid and distal parts. 2) PON1 activity has an important role in the pathogenesis of coronary atherosclerosis. More importantly, the study illustrates how the protective role of HDL could be modulated by its components, such that equivalent serum concentrations of HDL cholesterol may not equate with an equivalent potential protective capacity. 3) RLP-C in the fasting state is a good marker of postprandial TRLs. Circulating oxLDL increases in CAD patients postprandially. The highly significant positive correlation between postprandial TRLs and postprandial oxLDL suggests that the postprandial state creates oxidative stress. Our findings emphasize the fundamental role of LDL oxidation in the development of atherosclerosis even after inclusion of conventional CAD risk factors. 4) Disturbances in glucose metabolism are crucial in the pathogenesis of coronary atherosclerosis. In fact, subjects with IR are comparable to diabetic subjects in terms of the severity and extent of CAD. 5) ApoE polymorphism is involved in susceptibility to both carotid and coronary atherosclerosis.

Relevance:

30.00%

Abstract:

Osteoporosis is not only a disease of the elderly, but is increasingly diagnosed in chronically ill children. Children with severe motor disabilities, such as cerebral palsy (CP), have many risk factors for osteoporosis. Adults with intellectual disability (ID) are also prone to low bone mineral density (BMD) and increased fractures. This study was carried out to identify risk factors for low BMD and osteoporosis in children with severe motor disability and in adults with ID. In this study, 59 children with severe motor disability, ranging in age from 5 to 16 years, were evaluated. Lumbar spine BMD was measured with dual-energy X-ray absorptiometry. BMD values were corrected for bone size by calculating bone mineral apparent density (BMAD), and for bone age. The values were transformed into Z-scores by comparison with normative data. Spinal radiographs were assessed for vertebral morphology. Blood samples were obtained for biochemical parameters. Parents were requested to keep a food diary for three days, and the median daily energy and nutrient intakes were calculated. Fractures were common: 17% of the children had sustained peripheral fractures and 25% had compression fractures. BMD was low in these children; the median spinal BMAD Z-score was -1.0 (range -5.0 to +2.0), and the BMAD Z-score was below -2.0 in 20% of the children. A low BMAD Z-score and hypercalciuria were significant risk factors for fractures. In the children with motor disability, calcium intakes were sufficient, while total energy and vitamin D intakes were not. In the vitamin D intervention studies, 44 children and adolescents with severe motor disability and 138 adults with ID were studied. After baseline blood sampling, the children were divided into two groups: those in the treatment group received 1000 IU of peroral vitamin D3 five days a week for 10 weeks, while subjects in the control group continued with their normal diet. Adults with ID were allocated to receive either 800 IU of peroral vitamin D3 daily for six months or a single intramuscular injection of 150 000 IU of vitamin D3. Blood samples were obtained at baseline and after treatment. Serum concentrations of 25-OH-vitamin D (S-25-OHD) were low in all subgroups before the vitamin D intervention: in almost 60% of the children and 77% of the adults, the S-25-OHD concentration was below 50 nmol/L, indicating vitamin D insufficiency. After the vitamin D intervention, 19% of the children and 42% of the adults who received vitamin D perorally, and 12% of the adults who received vitamin D intramuscularly, had an optimal S-25-OHD concentration (>80 nmol/L). This study demonstrated that low BMD and peripheral and spinal fractures are common in children with severe motor disabilities. Vitamin D status was suboptimal in the majority of the children with motor disability and of the adults with ID. Vitamin D insufficiency can be corrected with vitamin D supplements; the peroral dose should be at least 800 IU per day.
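
For clarity, a Z-score simply expresses how many standard deviations a measurement lies from age- and sex-specific normative data. The short Python sketch below illustrates the calculation; the normative mean and SD are purely hypothetical values, not figures from the reference data used in the study.

    def z_score(measured_value, reference_mean, reference_sd):
        """Deviation of a measurement from normative data, in SD units."""
        return (measured_value - reference_mean) / reference_sd

    # Hypothetical example: a spinal BMAD of 0.21 g/cm^3 against an assumed
    # normative mean of 0.26 g/cm^3 (SD 0.025) gives a Z-score of -2.0,
    # i.e. exactly at the threshold for low bone density used above.
    print(f"{z_score(0.21, 0.26, 0.025):.1f}")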

Relevance:

30.00%

Abstract:

Cyclosporine is an immunosuppressant drug with a narrow therapeutic index and large variability in its pharmacokinetics. To improve cyclosporine dose individualization in children, we used population pharmacokinetic modeling to study the effects of developmental, clinical, and genetic factors on cyclosporine pharmacokinetics in altogether 176 subjects (age range 0.36–20.2 years) before and up to 16 years after renal transplantation. Pre-transplantation test doses of cyclosporine were given intravenously (3 mg/kg) and orally (10 mg/kg), on separate occasions, followed by blood sampling for 24 hours (n=175). After transplantation, in a total of 137 patients, cyclosporine concentration was quantified at trough, at two hours post-dose, or with dose-interval curves. One hundred and four of the studied patients were genotyped for 17 putatively functionally significant sequence variations in the ABCB1, SLCO1B1, ABCC2, CYP3A4, CYP3A5, and NR1I2 genes. Pharmacokinetic modeling was performed with the nonlinear mixed-effects modeling program NONMEM. A three-compartment population pharmacokinetic model with first-order absorption without lag time was used to describe the data. The most important covariate affecting systemic clearance and distribution volume was allometrically scaled body weight, i.e. body weight raised to the power 3/4 for clearance and absolute body weight for the volume of distribution. Clearance adjusted for absolute body weight declined with age, and pre-pubertal children (<8 years) had an approximately 25% higher clearance per body weight (L/h/kg) than did older children. Adjustment of clearance for allometric body weight removed its relationship to age after the first year of life. This finding is consistent with a gradual reduction in relative liver size towards adult values and a relatively constant CYP3A content in the liver from about 6–12 months of age to adulthood. The other significant covariates affecting cyclosporine clearance and volume of distribution were hematocrit, plasma cholesterol, and serum creatinine, explaining up to 20–30% of the inter-individual differences before transplantation. After transplantation, their predictive role was smaller, as the variations in hematocrit, plasma cholesterol, and serum creatinine were also smaller. Before transplantation, no clinical or demographic covariates were found to affect oral bioavailability, and no systematic age-related changes in oral bioavailability were observed. After transplantation, older children receiving cyclosporine twice daily as the gelatine capsule microemulsion formulation had an approximately 1.25–1.3 times higher bioavailability than did the younger children receiving the liquid microemulsion formulation thrice daily. Moreover, cyclosporine oral bioavailability increased over 1.5-fold in the first month after transplantation, returning gradually to its initial value within 1–1.5 years. The largest cyclosporine doses were administered in the first 3–6 months after transplantation, and thereafter the single doses of cyclosporine were often smaller than 3 mg/kg. Thus, the results suggest that cyclosporine displays dose-dependent, saturable pre-systemic metabolism even at low single doses, whereas complete saturation of CYP3A4 and MDR1 (P-glycoprotein) renders cyclosporine pharmacokinetics dose-linear at higher doses. No significant associations were found between genetic polymorphisms and cyclosporine pharmacokinetics before transplantation in the whole population for which genetic data were available (n=104).
However, in children older than eight years (n=22), heterozygous and homozygous carriers of the ABCB1 c.2677T or c.1236T alleles had an approximately 1.3-fold or 1.6-fold higher oral bioavailability, respectively, than did non-carriers. After transplantation, none of the ABCB1 SNPs or any other SNPs were found to be associated with cyclosporine clearance or oral bioavailability in the whole population, in the patients older than eight years, or in the patients younger than eight years. In the whole population, however, in patients carrying the NR1I2 g.-25385C–g.-24381A–g.-205_-200GAGAAG–g.7635G–g.8055C haplotype, the bioavailability of cyclosporine was about one tenth lower, per allele, than in non-carriers. This effect was also significant in the subgroup of patients older than eight years. Furthermore, in patients carrying the NR1I2 g.-25385C–g.-24381A–g.-205_-200GAGAAG–g.7635G–g.8055T haplotype, the bioavailability was almost one fifth higher, per allele, than in non-carriers. It may be possible to improve the individualization of cyclosporine dosing in children by accounting for the effects of developmental factors (body weight, liver size), time after transplantation, and cyclosporine dosing frequency and formulation. Further studies are required on the predictive value of genotyping for the individualization of cyclosporine dosing in children.
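
To illustrate the allometric weight scaling described above, here is a minimal Python sketch. The 3/4 exponent for clearance follows the abstract, while the reference clearance of a 70 kg adult and the example body weights are assumed values used only to show how weight-normalized clearance (L/h/kg) falls as body size increases.

    def allometric_clearance(body_weight_kg, cl_70kg=42.0):
        """Clearance scaled to body weight**(3/4), normalized to a 70 kg adult.

        cl_70kg (L/h) is an assumed illustrative value, not a figure from
        the thesis.
        """
        return cl_70kg * (body_weight_kg / 70.0) ** 0.75

    # Weight-normalized clearance is higher in small children, consistent
    # with the ~25% higher L/h/kg reported for pre-pubertal patients above.
    for weight in (10, 20, 40, 70):
        cl = allometric_clearance(weight)
        print(f"{weight:3d} kg: CL = {cl:5.1f} L/h = {cl / weight:.2f} L/h/kg")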

Relevance:

30.00%

Abstract:

Solar ultraviolet (UV) radiation has a broad range of effects on life on Earth. Soon after the mid-1980s, it was recognized that the stratospheric ozone content was declining over large areas of the globe. Because the stratospheric ozone layer protects life on Earth from harmful UV radiation, this led to concern about possible changes in UV radiation due to anthropogenic activity. Prompted by this concern, many stations for monitoring surface UV radiation were established in the late 1980s and early 1990s. As a consequence, there is an apparent lack of information on UV radiation further in the past: measurements cannot tell us how UV radiation levels have changed on time scales of, for instance, several decades. The aim of this thesis was to improve our understanding of past variations in surface UV radiation by developing techniques for UV reconstruction. Such techniques utilize commonly available meteorological data, together with measurements of the total ozone column, to reconstruct, or estimate, the amount of UV radiation that reached the Earth's surface in the past. Two different techniques for UV reconstruction were developed. Both are based on first calculating the clear-sky UV radiation using a radiative transfer model. The clear-sky value is then corrected for the effect of clouds based on either (i) sunshine duration or (ii) pyranometer measurements. Both techniques also account for variations in surface albedo caused by snow, whereas aerosols are included as a typical climatological aerosol load. Using these methods, long time series of reconstructed UV radiation were produced for five European locations, namely Sodankylä and Jokioinen in Finland, Bergen in Norway, Norrköping in Sweden, and Davos in Switzerland. Both UV reconstruction techniques developed in this thesis account for the greater part of the factors affecting the amount of UV radiation reaching the Earth's surface. They are thus considered reliable and trustworthy, as also suggested by the good performance of the methods. The pyranometer-based method performs better than the sunshine-based method, especially for daily values. For monthly values, the difference between the methods is smaller, indicating that the sunshine-based method is roughly as good as the pyranometer-based method for assessing long-term changes in surface UV radiation. The time series of reconstructed UV radiation produced in this thesis provide new insight into the past UV radiation climate and how UV radiation has varied over the years. In particular, the sunshine-based UV time series, extending back to 1926 at Davos and to 1950 at Sodankylä, put the recent changes driven by the ozone decline observed over the last few decades into perspective. At Davos, the reconstructed UV over the period 1926-2003 shows considerable variation throughout the entire period, with high values in the mid-1940s, the early 1960s, and the 1990s. Moreover, the variations prior to 1980 were found to be caused primarily by variations in cloudiness, while the increase of 4.5 %/decade over the period 1979-1999 was supported by both the decline in the total ozone column and changes in cloudiness. Of the other stations included in this work, both Sodankylä and Norrköping show a clear increase in UV radiation since the early 1980s (3-4 %/decade), driven primarily by changes in cloudiness and to a lesser extent by the decline in total ozone.
At Jokioinen, a weak increase was found, while at Bergen there was no appreciable overall change in the UV radiation level.
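
To show the general shape of the sunshine-based reconstruction described above, here is a minimal Python sketch in which a modelled clear-sky UV dose is scaled by a cloud-modification factor derived from relative sunshine duration. The linear form of the factor and its coefficients are assumptions chosen only for illustration; they are not the parameterization fitted in the thesis.

    def reconstruct_daily_uv(clear_sky_uv, sunshine_hours, day_length_hours,
                             a=0.25, b=0.75):
        """Estimate surface UV from a clear-sky value and sunshine duration.

        clear_sky_uv      -- daily erythemal UV dose for cloudless conditions,
                             e.g. from a radiative transfer model (kJ/m^2)
        sunshine_hours    -- measured bright-sunshine duration for the day
        day_length_hours  -- astronomically possible sunshine duration
        a, b              -- coefficients of an assumed linear cloud-modification
                             factor; illustrative only, not the thesis's fit

        With relative sunshine duration s = S/S0, UV = clear_sky_uv * (a + b*s):
        full sunshine returns the clear-sky dose, and a fully overcast day
        retains the fraction a of it.
        """
        s = min(max(sunshine_hours / day_length_hours, 0.0), 1.0)
        return clear_sky_uv * (a + b * s)

    # Example: a clear-sky dose of 3.0 kJ/m^2 with 6 h of sunshine out of a
    # possible 15 h yields roughly 1.65 kJ/m^2.
    print(reconstruct_daily_uv(3.0, 6.0, 15.0))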