18 results for Managing Risk: Identifying and Controlling Losses and Assuming Risks from Perils
in Helda - Digital Repository of the University of Helsinki
Abstract:
Fatigue and sleepiness are major causes of road traffic accidents. However, precise data are often lacking because a validated and reliable device for detecting the level of sleepiness (cf. the breathalyzer for alcohol levels) does not exist, nor do criteria for the unambiguous detection of fatigue/sleepiness as a contributing factor in accident causation. Therefore, identification of risk factors and groups might not always be easy. Furthermore, it is extremely difficult to incorporate fatigue in operationalized terms into either traffic or criminal law. The main aims of this thesis were to estimate the prevalence of fatigue problems while driving among the Finnish driving population, to explore how VALT multidisciplinary investigation teams, the Finnish police, and the courts recognize (and prosecute) fatigue in traffic, to identify risk factors and groups, and finally to explore the application of the Finnish Road Traffic Act (RTA), which explicitly forbids driving while tired in Article 63. Several different sources of data were used: a computerized database and the original folders of multidisciplinary teams investigating fatal accidents (VALT), the driver records database (AKE), prosecutor and court decisions, a survey of young male military conscripts, and a survey of a representative sample of the Finnish active driving population. The results show that 8-15% of fatal accidents during 1991-2001 were fatigue related, that every fifth Finnish driver has fallen asleep while driving at some point during his/her driving career, and that the Finnish police and courts punish on average one driver per day for fatigued driving (based on data from 2004-2005). The main finding regarding risk factors and risk groups is that the risk of falling asleep while driving is increased during the summer months, especially in the afternoon.
Furthermore, the results indicate that those with a higher risk of falling asleep while driving are men in general, but especially young male drivers (including military conscripts) and the elderly, during the afternoon hours and the summer in particular; professional drivers breaking the rules on duty and rest hours; and drivers with a tendency to fall asleep easily. A time-of-day pattern of sleep-related incidents was repeatedly found. VALT teams were found to be relatively reliable when assessing the role of fatigue and sleepiness in accident causation; thus, similar experts might be valuable as expert witnesses in court proceedings when fatigue or sleepiness is suspected to have a role in an accident's origins. However, the application of Article 63 of the RTA, which forbids, among other things, fatigued driving, will continue to be an issue that deserves further attention. This should be accompanied by a change in attitudes towards driving in a state of extreme tiredness (e.g., after being awake for more than 24 hours), which produces performance deterioration comparable to illegal intoxication (BAC around 0.1%). Given the well-known interactive effect of increased sleepiness and even small alcohol levels, the relatively high proportion (up to 14.5%) of Finnish drivers owning and using a breathalyzer raises some concern: these drivers appear more focused on not crossing the "magic" line of 0.05% BAC than on their actual driving impairment, which might be much worse than they realize because of the interaction of increased sleepiness with even low alcohol consumption. In conclusion, there is no doubt that fatigue and sleepiness problems while driving are common among the Finnish driving population.
While we wait for the invention of reliable devices for fatigue/sleepiness detection, we should invest more effort in raising public awareness of the dangers of fatigued driving and in educating drivers about how to recognize and deal with fatigue and sleepiness when they occur.
Abstract:
The overall objective of this study was to gain epidemiological knowledge about pain among employee populations. More specifically, the aims were to assess the prevalence of pain, to identify socio-economic risk groups and work-related psychosocial risk factors, and to assess the consequences in terms of health-related functioning and sickness absence. The study was carried out among the municipal employees of the City of Helsinki. The data comprised a questionnaire survey conducted in 2000-2002 and register data on sickness absence. Altogether 8960 employees aged 40-60 participated in the survey (response rate 67%). Pain is common among ageing employees. Approximately 29 per cent of employees reported chronic pain and 15 per cent acute pain, and about seven per cent reported moderately or severely disabling chronic pain. Pain was more common among those with a lower level of education or in a lower occupational class. The psychosocial work environment was associated with pain reports. Job strain, bullying at the workplace, and problems in combining work and home duties were associated with pain among women. Among men, combining work and home duties was not associated with pain, whereas organizational injustice was. Pain affects functional capacity and predicts sickness absence. Those with pain reported lower levels of both mental and physical functioning than those without pain, with physical functioning more strongly affected than mental functioning. The bodily location of pain, and whether pain was acute or chronic, had only a minor impact on the variation in functioning, whereas a simple count of painful locations was associated with the widest variation. Pain accounted for eight per cent of short-term (1-3 day) sickness absence spells among men and 13 per cent among women.
Of absence spells lasting between four and 14 days, pain accounted for 23 per cent among women and 25 per cent among men; the corresponding figures for absence spells of over 14 days were 37 and 30 per cent. The association between pain and sickness absence was relatively independent of physical and psychosocial work factors, especially among women. The results of this study provide a picture of the epidemiology of pain among employees. Pain is a significant problem that seriously affects work ability. Information on risk groups can be utilized to make prevention measures more effective among those at high risk, and to decrease pain rates and thereby narrow the differences between socio-economic groups. Furthermore, the work-related psychosocial risk factors identified in this study are potentially modifiable, and it should be possible to target interventions at decreasing pain rates among employees.
Abstract:
Sprouting of fast-growing broad-leaved trees causes problems in young coniferous stands, under power transmission lines and along roads and railways. Public opinion and the Finnish Forest Certification System oppose the use of chemical herbicides to control sprouting, which means that most problem areas rely on mechanical cutting. However, cutting is a poor control method for many broad-leaved species because the removal of leaders can stimulate the sprouting of side branches, and cut stumps quickly re-sprout. To be effective, cutting must be carried out frequently, but each cut increases the costs, making this control method increasingly difficult and expensive once begun. As such, alternative methods of sprout control that are both effective and environmentally sound represent a continuing challenge to managers and research biologists. Using biological control agents to prevent sprouting has recently been given serious consideration. Dutch and Canadian researchers have demonstrated the potential of the white-rot fungus Chondrostereum purpureum (Pers. ex Fr.) Pouzar as a control agent of stump sprouting in many hardwoods. These findings have focused the attention of the Finnish forestry community on the utilization of C. purpureum for biocontrol purposes. Primarily, this study sought to determine the efficacy of native C. purpureum as an inhibitor of birch stump sprouting in Finland and to clarify its mode of action. Additionally, genotypic variation in Finnish C. purpureum was examined and the environmental risks posed by a biocontrol program using this fungus were assessed. The experimental results demonstrated that C. purpureum clearly affects the sprouting of birch: both the frequency of living stumps and the number of living sprouts per stump were effectively reduced by the treatment. However, the treatment had no effect on the maximum height of new sprouts.
There were clear differences among fungal isolates in preventing sprouting, and those with high oxidative activities, as measured in the laboratory, inhibited sprouting most efficiently in the field. The most effective treatment time during the growing season was early and mid summer (May-July). Genetic diversity in Nordic and Baltic populations of C. purpureum was found to be high at the regional scale but locally homogeneous. This natural distribution of diversity means that using local genotypes in biocontrol programs would effectively prevent the introduction of novel genes or genotypes. While a biocontrol program using local strains of C. purpureum would be environmentally neutral, pruned birches close to the treatment site would be highly susceptible to infection by the fungus during the early spring.
Abstract:
Placental abruption, one of the most significant causes of perinatal mortality and maternal morbidity, occurs in 0.5-1% of pregnancies. Its etiology is unknown, but defective trophoblastic invasion of the spiral arteries and consequent poor vascularization may play a role. The aim of this study was to define the prepregnancy risk factors of placental abruption, to define the risk factors during the index pregnancy, and to describe the clinical presentation of placental abruption. We also wanted to find a biochemical marker for predicting placental abruption early in pregnancy. Among women delivering at the University Hospital of Helsinki in 1997-2001 (n=46,742), 198 women with placental abruption and 396 control women were identified. The overall incidence of placental abruption was 0.42%. The prepregnancy risk factors were smoking (OR 1.7; 95% CI 1.1, 2.7), uterine malformation (OR 8.1; 1.7, 40), previous cesarean section (OR 1.7; 1.1, 2.8), and history of placental abruption (OR 4.5; 1.1, 18). The risk factors during the index pregnancy were maternal (adjusted OR 1.8; 95% CI 1.1, 2.9) and paternal smoking (2.2; 1.3, 3.6), use of alcohol (2.2; 1.1, 4.4), placenta previa (5.7; 1.4, 23.1), preeclampsia (2.7; 1.3, 5.6) and chorioamnionitis (3.3; 1.0, 10.0). Vaginal bleeding (70%), abdominal pain (51%), bloody amniotic fluid (50%) and fetal heart rate abnormalities (69%) were the most common clinical manifestations of placental abruption. Retroplacental blood clot was seen by ultrasound in 15% of the cases. Neither bleeding nor pain was present in 19% of the cases. Overall, 59% went into preterm labor (OR 12.9; 95% CI 8.3, 19.8), and 91% were delivered by cesarean section (34.7; 20.0, 60.1). Of the newborns, 25% were growth restricted. The perinatal mortality rate was 9.2% (OR 10.1; 95% CI 3.4, 30.1). We then tested selected biochemical markers for prediction of placental abruption. 
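The odds ratios and 95% confidence intervals above come from standard 2x2 case-control comparisons. As a minimal illustration of the arithmetic (the counts below are hypothetical, not the study's actual data), an OR and its Wald confidence interval can be computed as follows:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a 95% Wald confidence interval on the log scale.
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR): sqrt of the sum of reciprocal cell counts
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical table: smokers vs. non-smokers among cases and controls
or_, lo, hi = odds_ratio_ci(60, 90, 138, 306)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}, {hi:.2f})")
```

An interval that excludes 1.0 corresponds to a statistically significant association at the 5% level, which is how the ORs quoted in the abstract should be read.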
The median of the maternal serum alpha-fetoprotein (MSAFP) multiples of the median (MoM) was significantly higher in the abruption group (n=57) than in the control group (n=108) (1.21 versus 1.07, p=0.004) at 15-16 gestational weeks. In multivariate analysis, elevated MSAFP remained an independent risk factor for placental abruption after adjusting for parity ≥ 3, smoking, previous placental abruption, preeclampsia, bleeding in the second or third trimester, and placenta previa. MSAFP ≥ 1.5 MoM had a sensitivity of 29% and a false positive rate of 10%. The maternal serum free beta human chorionic gonadotrophin MoM levels did not differ between the cases and the controls. None of the angiogenic factors (soluble endoglin, soluble fms-like tyrosine kinase 1, or placental growth factor) showed any difference between the cases (n=42) and the controls (n=50) in the second trimester. The levels of C-reactive protein (CRP) showed no difference between the cases (n=181) and the controls (n=261) (median 2.35 mg/l [interquartile range {IQR} 1.09-5.93] versus 2.28 mg/l [IQR 0.92-5.01], not significant) when tested in the first trimester (mean 10.4 gestational weeks). Chlamydia pneumoniae specific immunoglobulin G (IgG) and immunoglobulin A (IgA) as well as C. trachomatis specific IgG, IgA and chlamydial heat-shock protein 60 antibody rates were similar between the groups. In conclusion, although univariate analysis identified many prepregnancy risk factors for placental abruption, only smoking, uterine malformation, previous cesarean section and a history of placental abruption remained significant in multivariate analysis. During the index pregnancy, maternal alcohol consumption, maternal smoking, and smoking by the partner turned out to be the major independent risk factors for placental abruption. Smoking by both partners multiplied the risk. The liberal use of ultrasound examination contributed little to the management of women with placental abruption.
Although second-trimester MSAFP levels were higher in women with subsequent placental abruption, the clinical usefulness of this test is limited by its low sensitivity and high false positive rate. Similarly, angiogenic factors in the early second trimester, CRP levels, and chlamydial antibodies in the first trimester failed to predict placental abruption.
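The sensitivity and false positive rate quoted for the MSAFP ≥ 1.5 MoM cutoff follow directly from a 2x2 screening table. A small sketch of that arithmetic, using hypothetical counts chosen only to approximate the reported figures (the abstract does not publish the full table):

```python
# Hypothetical screening counts, consistent with the group sizes in the
# abstract (abruption n = 57, controls n = 108) but otherwise illustrative.
true_positives, false_negatives = 17, 40    # abruption group, test +/-
false_positives, true_negatives = 11, 97    # control group, test +/-

# Sensitivity: proportion of true cases flagged by the test
sensitivity = true_positives / (true_positives + false_negatives)
# False positive rate: proportion of controls incorrectly flagged
false_positive_rate = false_positives / (false_positives + true_negatives)

print(f"sensitivity         = {sensitivity:.0%}")
print(f"false positive rate = {false_positive_rate:.0%}")
```

With roughly 30% sensitivity, about seven of every ten abruptions would be missed at this cutoff, which is why the abstract judges the test of limited clinical use.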
Abstract:
Children with intellectual disability are at increased risk for emotional and behavioural problems, but many of these disturbances fail to be diagnosed. Structured checklists have been used to supplement the psychiatric assessment of children without intellectual disability, but for children with intellectual disability, only a few checklists are available. The aim of the study was to investigate psychiatric disturbances among children with intellectual disability: the prevalence, types and risk factors of psychiatric disturbances, as well as the applicability of the Finnish translations of the Developmental Behaviour Checklist (DBC-P) and the Child Behavior Checklist (CBCL) in the assessment of psychopathology. The subjects comprised 155 children with intellectual disability, and data were obtained from case records and five questionnaires completed by the parents or other carers of the child. According to case records, a psychiatric disorder had previously been diagnosed in 11% of the children. Upon careful re-examination of the case records, the total proportion of children with a psychiatric disorder increased to 33%. According to the checklists, the frequency of probable psychiatric disorder was 34% by the DBC-P and 43% by the CBCL. The most common diagnoses were pervasive developmental disorders and hyperkinetic disorders. The results support previous findings that, compared with children without intellectual disability, the risk of psychiatric disturbances is 2-3-fold higher in children with intellectual disability. The risk of psychopathology was most significantly increased by moderate intellectual disability and low socio-economic status, and decreased by adaptive behaviour, language development, and socialisation, as well as by living with both biological parents. The results of the study suggest that both the DBC-P and the CBCL can be used to discriminate between children with intellectual disability with and without emotional or psychiatric disturbance.
The DBC-P is suitable for children with any degree of intellectual disability, and the CBCL is suitable at least for children with mild intellectual disability. Because the problems of children with intellectual disability differ somewhat from those of children without intellectual disability, checklists designed specifically for children with intellectual disability are needed.
Abstract:
The metabolic syndrome and type 1 diabetes are associated with brain alterations such as cognitive decline, brain infarctions, atrophy, and white matter lesions. Despite the importance of these alterations, their pathomechanism is still poorly understood. This study was conducted to investigate brain glucose and metabolites in healthy individuals with an increased cardiovascular risk and in patients with type 1 diabetes, in order to learn more about the nature of the known brain alterations. We studied 43 men aged 20 to 45 years. Study I compared two groups of non-diabetic men, one with an accumulation of cardiovascular risk factors and another without. Studies II to IV compared men with type 1 diabetes (duration of diabetes 6.7 ± 5.2 years, no microvascular complications) with non-diabetic men. Brain glucose, N-acetylaspartate (NAA), total creatine (tCr), choline, and myo-inositol (mI) were quantified with proton magnetic resonance spectroscopy in three cerebral regions (frontal cortex, frontal white matter, and thalamus) and in cerebellar white matter. Data were collected from all participants during fasting glycemia and, in a subgroup (Studies III and IV), also during a hyperglycemic clamp that increased the plasma glucose concentration by 12 mmol/l. In non-diabetic men, the brain glucose concentration correlated linearly with the plasma glucose concentration. The cardiovascular risk group (Study I) had a 13% higher plasma glucose concentration than the control group, but no difference in thalamic glucose content; the risk group thus had lower thalamic glucose content than expected. They also had 17% higher tCr (a marker of oxidative metabolism). In the control group, tCr correlated with thalamic glucose content, but in the risk group, tCr correlated instead with the fasting plasma glucose and the 2-h plasma glucose concentration in the oral glucose tolerance test.
Risk factors of the metabolic syndrome, most importantly insulin resistance, may thus influence brain metabolism. During fasting glycemia (Study II), regional variation in cerebral glucose levels appeared in the non-diabetic subjects but not in those with diabetes. In diabetic patients, excess glucose had accumulated predominantly in the white matter, where the metabolite alterations were also the most pronounced. Compared to control values, white matter NAA (a marker of neuronal metabolism) was 6% lower and mI (a glial cell marker) 20% higher. Hyperglycemia is therefore a potent risk factor for diabetic brain disease, and the metabolic brain alterations may appear even before any peripheral microvascular complications are detectable. During acute hyperglycemia (Study III), the increase in cerebral glucose content in the patients with type 1 diabetes was, depending on brain region, between 1.1 and 2.0 mmol/l. An everyday hyperglycemic episode in a diabetic patient may therefore as much as double the brain glucose concentration. While chronic hyperglycemia had led to accumulation of glucose in the white matter, acute hyperglycemia burdened predominantly the gray matter. Acute hyperglycemia also revealed that chronic fluctuation in blood glucose may be associated with alterations in glucose uptake or metabolism in the thalamus. The cerebellar white matter appeared very different from the cerebral white matter (Study IV). In the non-diabetic men it contained twice as much glucose as the cerebrum. Diabetes had altered neither its glucose content nor its metabolites. The cerebellum therefore seems more resistant to the effects of hyperglycemia than the cerebrum.
Abstract:
Background: Otitis media (OM) is one of the most common childhood diseases. Approximately every third child suffers from recurrent acute otitis media (RAOM), and 5% of all children have persistent middle ear effusion for months during their childhood. Despite numerous studies on the prevention and treatment of OM during the past decades, its management remains challenging and controversial. In this study, the effect of adenoidectomy on the risk for OM, the potential risk factors influencing the development of OM, and the frequency of asthma among otitis-prone children were investigated. Subjects and methods: One prospective randomized trial and two retrospective studies were conducted. In the prospective trial, 217 children with RAOM or chronic otitis media with effusion (COME) were randomized to have tympanostomy with or without adenoidectomy. The age of the children at recruitment was between 1 and 4 years. RAOM was defined as having at least 3 episodes of AOM during the last 6 months or at least 5 episodes of AOM during the last 12 months. COME was defined as having persistent middle ear effusion for 2-3 months. The children were followed up for one year. In the first retrospective study, the frequency of childhood infections and allergy was evaluated by a questionnaire among 819 individuals. In the second retrospective study, data on asthma diagnoses were analysed from the hospital discharge records of 1616 children who underwent adenoidectomy or probing of the nasolacrimal duct. Results: In the prospective randomized study, adenoidectomy had no beneficial effect on the prevention of subsequent episodes of AOM. Parental smoking was found to be a significant risk factor for OM even after the insertion of tympanostomy tubes. The frequencies of exposure to tobacco smoke and day-care attendance at the time of randomization were similar among children with RAOM and COME.
However, the frequencies of allergy to animal dust and pollen, and of parental asthma, were lower among children with COME than among those with RAOM. The questionnaire survey and the hospital discharge data revealed that children who had frequent episodes of OM had an increased risk for asthma. Conclusions: The first surgical intervention to treat an otitis-prone child younger than 4 years should not include adenoidectomy. Interventions to stop parental smoking could significantly reduce the risk for childhood RAOM. Whether an otitis-prone child develops COME or RAOM seems to be influenced more strongly by genetic predisposition than by environmental risk factors. Children who suffer from repeated upper respiratory tract infections, such as OM, may be at increased risk for developing asthma.
Abstract:
The main purpose of revascularization procedures for critical limb ischaemia (CLI) is to preserve the leg and sustain the patient's ambulatory status. Other goals are ischaemic pain relief and the healing of ischaemic ulcers. Patients with CLI are usually old and have several comorbidities affecting the outcome. Revascularization for CLI is meaningless unless both life and limb are preserved. Therefore, knowledge of both patient- and bypass-related risk factors is of paramount importance in clinical decision-making, patient selection and resource allocation. The aim of this study was to identify patient- and graft-related predictors of impaired outcome after infrainguinal bypass for CLI. The purpose was to assess the outcome of high-risk patients undergoing infrainguinal bypass and to evaluate the usefulness of specific risk scoring methods. The results of bypasses in the absence of optimal vein graft material were also evaluated, and the feasibility of a new method of scaffolding suboptimal vein grafts was assessed. The results of this study showed that renal insufficiency - not only renal failure but also moderate impairment in renal function - seems to be a significant risk factor for both limb loss and death after infrainguinal bypass in patients with CLI. A low estimated GFR (<30 ml/min/1.73 m2) is a strong independent marker of poor prognosis. Furthermore, estimated GFR is a more accurate predictor of survival and leg salvage after infrainguinal bypass in CLI patients than the serum creatinine level alone. We also found that the life expectancy of octogenarians with CLI is short. In this patient group, endovascular revascularization is associated with a better outcome than bypass in terms of survival, leg salvage and amputation-free survival, especially in the presence of coronary artery disease.
This study was the first to demonstrate that the Finnvasc and modified Prevent III risk scoring methods both predict the long-term outcome of patients undergoing either surgical or endovascular infrainguinal revascularization for CLI. Both risk scoring methods are easy to use and might be helpful in clinical practice as an aid in preoperative patient selection and decision-making. As in previous studies, we found that a single-segment great saphenous vein graft is superior to any other autologous vein graft in terms of mid-term patency and leg salvage. However, if an optimal vein graft is lacking, arm vein conduits are superior to prosthetic grafts, especially in infrapopliteal bypasses for CLI. We also studied the new method of scaffolding suboptimal-quality vein grafts and found that it may enable the use of vein grafts of compromised quality otherwise unsuitable for bypass grafting. A remarkable finding was that patients with the combination of high operative risk due to severe comorbidities and a risk graft have extremely poor survival, suggesting that only relatively fit patients should undergo complex bypasses with risk grafts. The results of this study can be used in clinical practice as an aid in preoperative patient selection and decision-making. In the future, the need for vascular surgery will increase significantly as the elderly and diabetic populations grow, which emphasises the importance of focusing on those patients who will benefit from infrainguinal bypass. Therefore, the individual risk of the patient, ambulatory status, outcome expectations, the risk of the bypass procedure, and technical factors such as the suitability of outflow anatomy and the available vein material should all be assessed and taken into consideration when deciding on the best revascularization strategy.
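The eGFR threshold discussed above (<30 ml/min/1.73 m2) depends on an estimating equation, and this excerpt does not state which one the study used. Purely as an illustration of how such an estimate is derived from serum creatinine, age and sex, the widely used four-variable MDRD formula (an assumption here, not necessarily the study's method) can be sketched as:

```python
def mdrd_egfr(scr_mg_dl, age, female=False, black=False):
    """Four-variable MDRD estimate of GFR in ml/min/1.73 m^2.
    Shown only as one common eGFR equation; the study's actual
    equation is not specified in the abstract."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742  # sex correction factor
    if black:
        egfr *= 1.212  # ethnicity correction factor
    return egfr

# Hypothetical patient: 75-year-old woman, serum creatinine 2.4 mg/dl
egfr = mdrd_egfr(2.4, 75, female=True)
label = "high-risk (<30)" if egfr < 30 else "above threshold"
print(f"eGFR = {egfr:.0f} ml/min/1.73 m2 -> {label}")
```

The point of the abstract's finding is visible here: two patients with the same creatinine can land on opposite sides of the 30 ml/min/1.73 m2 line depending on age and sex, which is why eGFR outperforms creatinine alone as a prognostic marker.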
Abstract:
Due to the improved prognosis of many forms of cancer, an increasing number of cancer survivors are willing to return to work after their treatment. It is generally believed, however, that people with cancer are either unemployed, stay at home, or retire more often than people without cancer. This study investigated the problems that cancer survivors experience on the labour market, as well as the disease-related, sociodemographic and psychosocial factors at work that are associated with the employment and work ability of cancer survivors. The impact of cancer on employment was studied by combining data from the Finnish Cancer Registry with census data from 1985, 1990, 1995 or 1997 from Statistics Finland. There were two data sets, containing 46 312 and 12 542 people with cancer. The results showed that cancer survivors were slightly less often employed than their referents. Two to three years after diagnosis, the employment rate of the cancer survivors was 9 percentage points lower than that of their referents (64% vs. 73%), whereas the employment rate before diagnosis was the same (78%). The employment rate varied greatly according to cancer type and education. The probability of being employed was greater in the lower than in the higher educational groups. People with cancer were less often employed than people without cancer mainly because of their higher retirement rate (34% vs. 27%). Like employment, retirement varied by cancer type. The risk of retirement was twofold for people with cancer of the nervous system or leukaemia compared to their referents, whereas people with skin cancer, for example, did not have an increased risk of retirement. The aim of the questionnaire study was to investigate whether the work ability of cancer survivors differs from that of people without cancer and whether cancer had impaired their work ability. There were 591 cancer survivors and 757 referents in the data.
Even though the current work ability of cancer survivors did not differ from that of their referents, 26% of survivors reported that their physical work ability, and 19% that their mental work ability, had deteriorated due to cancer. The survivors who had other diseases or had undergone chemotherapy most often reported impaired work ability, whereas survivors with a strong commitment to their work organization, or a good social climate at work, reported impairment less frequently. The aim of the other questionnaire study, comprising 640 people with a history of cancer, was to examine the extent of social support that cancer survivors needed and had received from their work community. The cancer survivors had received the most support from their co-workers, and they hoped for more support especially from occupational health care personnel (39% of women and 29% of men). More support was especially needed by men who had lymphoma, had received chemotherapy, or had a low education level. The results of this study show that the majority of survivors are able to return to work. There is, however, a group of cancer survivors who leave work life early, have impaired work ability due to their illness, and suffer from a lack of support from their workplace and the occupational health services. Treatment-related as well as sociodemographic factors play an important role in survivors' work-related problems, and presumably in their possibilities to continue working.
Abstract:
The terrestrial export of dissolved organic matter (DOM) is associated with climate, vegetation and land use, and is thus under the influence of climatic variability and human interference with terrestrial ecosystems, their soils and hydrological cycles. The present study provides an assessment of the spatial variation of DOM concentrations and export, and of the interactions between DOM, catchment characteristics, land use and climatic factors in boreal catchments. The influence of catchment characteristics, land use and climatic drivers on the concentrations and export of total organic carbon (TOC), total organic nitrogen (TON) and dissolved organic phosphorus (DOP) was estimated using stream water quality, forest inventory and climatic data from 42 Finnish pristine forested headwater catchments, and water quality monitoring, GIS land use, forest inventory and climatic data from the 36 main Finnish rivers (and their sub-catchments) flowing to the Baltic Sea. Moreover, the export of DOM in relation to land use along a European climatic gradient was studied using river water quality and land use data from four European areas. Additionally, the role of organic and minerogenic acidity in controlling pH levels in Finnish rivers and pristine streams was studied by measuring organic anion, sulphate (SO4) and base cation (Ca, Mg, K and Na) concentrations. In all study catchments, TOC was the major fraction of DOM, with much lower proportions of TON and DOP, and most of the TOC and TON was in dissolved form. The correlation between TOC and TON concentrations was strong, and TOC concentrations explained 78% of the variation in TON concentrations in pristine headwater streams.
In a subgroup of 20 headwater catchments with similar climatic conditions and low N deposition in eastern Finland, the proportion of peatlands in the catchment and the proportion of Norway spruce (Picea abies Karsten) in the tree stand had the strongest correlation with TOC and TON concentrations and export. In Finnish river basins, TOC export increased with the increasing proportion of peatland in the catchment, whereas TON export increased with the increasing extent of agricultural land. The highest DOP concentrations and export were recorded in river basins with a large extent of agricultural land and urban areas, reflecting the influence of human impact on DOP loads. However, the most important predictor of TOC, TON and DOP export in Finnish rivers was the proportion of upstream lakes in the catchment: the higher the upstream lake percentage, the lower the export, indicating organic matter retention in lakes. The molar TOC:TON ratio decreased from headwater catchments covered by forests and peatlands to the large river basins with mixed land use, emphasising the effect of the land use gradient on the stoichiometry of rivers. This study also demonstrated that the land use of the catchments is related to both organic and minerogenic acidity in rivers and pristine headwater streams. Organic anions dominated in rivers and streams situated in northern Finland, reflecting the higher extent of peatlands in these areas, whereas SO4 dominated in southern Finland and in western coastal areas, where the extent of fertile areas, agricultural land, urban areas, acid sulphate soils, and sulphate deposition is highest. High TOC concentrations decreased pH values in stream and river water, whereas no correlation between SO4 concentrations and pH was observed. This underlines the importance of organic acids in controlling pH levels in Finnish pristine headwater streams and main rivers.
High SO4 concentrations were associated with high base cation concentrations and fertile areas, which buffered the effects of SO4 on pH.
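The molar TOC:TON ratio referred to above converts mass concentrations (mg/L) to molar units using the atomic masses of carbon (12.011 g/mol) and nitrogen (14.007 g/mol). A minimal sketch, with hypothetical concentrations chosen only to illustrate the land-use contrast:

```python
# Molar C:N ratio from mass concentrations; the concentrations below are invented.
ATOMIC_MASS_C = 12.011  # g/mol
ATOMIC_MASS_N = 14.007  # g/mol

def molar_toc_ton_ratio(toc_mg_per_l, ton_mg_per_l):
    """Convert mass-based TOC and TON concentrations to a molar C:N ratio."""
    return (toc_mg_per_l / ATOMIC_MASS_C) / (ton_mg_per_l / ATOMIC_MASS_N)

# Hypothetical forest/peatland-type headwater vs. mixed-land-use-type river:
print(round(molar_toc_ton_ratio(20.0, 0.5), 1))  # higher C:N ratio
print(round(molar_toc_ton_ratio(10.0, 0.8), 1))  # lower C:N ratio
```

Because N is the heavier atom, the molar ratio is always somewhat larger than the mass ratio (by the factor 14.007/12.011).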
Resumo:
The aim of the studies was to improve the diagnostic capability of electrocardiography (ECG) in detecting myocardial ischemic injury, with the future goal of an automatic screening and monitoring method for ischemic heart disease. The method of choice was body surface potential mapping (BSPM), containing numerous leads, with the intention of finding the optimal recording sites and optimal ECG variables for ischemia and myocardial infarction (MI) diagnostics. The studies included 144 patients with prior MI, 79 patients with evolving ischemia, 42 patients with left ventricular hypertrophy (LVH), and 84 healthy controls. Study I examined the depolarization wave in prior MI with respect to MI location. Studies II-V examined the depolarization and repolarization waves in prior MI detection with respect to the Minnesota code and Q-wave status, and study V also with respect to MI location. In study VI the depolarization and repolarization variables were examined in 79 patients in the face of evolving myocardial ischemia and ischemic injury. When analyzed from a single lead at any recording site, the results revealed the superiority of the repolarization variables over the depolarization variables and over the conventional 12-lead ECG methods, both in the detection of prior MI and of evolving ischemic injury. The QT integral, covering both depolarization and repolarization, appeared indifferent to Q-wave status, the time elapsed from MI, and the MI or ischemia location. In the face of evolving ischemic injury, the performance of the QT integral was not hampered even by underlying LVH. The examined depolarization and repolarization variables were effective when recorded at a single site, in contrast to the conventional 12-lead ECG criteria. The inverse spatial correlation of the depolarization and repolarization waves in myocardial ischemia and injury could be reduced to the QT integral variable recorded at a single site on the left flank.
In conclusion, the QT integral variable, detectable in a single lead, with optimal recording site on the left flank, was able to detect prior MI and evolving ischemic injury more effectively than the conventional ECG markers. The QT integral, in a single-lead or a small number of leads, offers potential for automated screening of ischemic heart disease, acute ischemia monitoring and therapeutic decision-guiding as well as risk stratification.
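The abstract does not specify how the QT integral is computed; one plausible reading is the area under a lead's voltage curve from QRS onset to T-wave end. The sketch below makes that assumption explicit and further assumes the fiducial-point indices are already known (their detection is a separate problem); the synthetic beat is invented for illustration only:

```python
import numpy as np

# Sketch only: the thesis does not specify an algorithm here. This assumes
# QRS-onset and T-wave-end sample indices are already known and approximates
# the QT integral as the trapezoidal area under the lead over that interval.
def qt_integral(signal_mv, fs_hz, qrs_onset, t_end):
    """Trapezoidal area (mV*s) under one lead between QRS onset and T-wave end."""
    segment = signal_mv[qrs_onset:t_end + 1]
    # Explicit trapezoidal rule: mean of adjacent samples times the step.
    return float(np.sum((segment[:-1] + segment[1:]) / 2.0) * (1.0 / fs_hz))

# Hypothetical example: a crude synthetic beat (two Gaussian bumps standing in
# for the QRS complex and T wave) sampled at 500 Hz.
fs = 500
t = np.arange(0, 0.4, 1 / fs)
beat = 1.0 * np.exp(-((t - 0.05) ** 2) / 2e-4) + 0.3 * np.exp(-((t - 0.3) ** 2) / 2e-3)
area = qt_integral(beat, fs, qrs_onset=0, t_end=len(beat) - 1)
print(round(area, 3))
```

Because the measure integrates both the depolarization (QRS) and repolarization (T) deflections, it captures the combined waveform change that the abstract reports as insensitive to Q-wave status and MI location.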
Resumo:
The aim was to analyse the growth and compositional development of the receptive and expressive lexicons between the ages of 0;9 and 2;0 in full-term (FT) and very-low-birth-weight (VLBW) children acquiring Finnish. The associations between the expressive lexicon and grammar at 1;6 and 2;0 in the FT children were also studied. In addition, the language skills of the VLBW children at 2;0 were analysed, as well as the predictive value of the early lexicon for later language performance. Four groups took part in the studies: longitudinal (N = 35) and cross-sectional (N = 146) samples of FT children, and longitudinal (N = 32) and cross-sectional (N = 66) samples of VLBW children. The data were gathered by applying a structured parental rating method (the Finnish version of the Communicative Development Inventory), through analysis of the children's spontaneous speech, and by administering a formal test (Reynell Developmental Language Scales). The FT children acquired their receptive lexicons earlier, at a faster rate and with larger individual variation than their expressive lexicons. The acquisition rate of the expressive lexicon increased from slow to faster in most children (91%). Highly parallel developmental paths for lexical semantic categories were detected in the receptive and expressive lexicons of the Finnish children when analysed in relation to the growth of lexicon size, as described in the literature for children acquiring other languages. The emergence of grammar was closely associated with expressive lexical growth. The VLBW children acquired their receptive lexicons at a slower rate and had weaker language skills at 2;0 than the full-term children. The compositional development of both lexicons happened at a slower rate in the VLBW children when compared to the FT controls.
However, when the compositional development was analysed in relation to the growth of lexicon size, it occurred qualitatively in a nearly parallel manner in the VLBW children and the FT children. Early receptive and expressive lexicon sizes were significantly associated with later language skills in both groups. The effect of background variables (gender, length of the mother's basic education, birth weight) on language development differed between the FT and the VLBW children. The results provide new information on early language acquisition in Finnish FT and VLBW children. They support the view that the early acquisition of semantic lexical categories is related to lexicon growth. The current findings also suggest that early grammatical acquisition is closely related to the growth of expressive vocabulary size. The language development of VLBW children should be followed in clinical work.
Resumo:
This thesis summarises the results of four original papers concerning the U-Pb geochronology and geochemical evolution of Archaean rocks from the Kuhmo terrain and the Nurmes belt, eastern Finland. The study area belongs to a typical Archaean granite-greenstone terrain, composed of metavolcanic and metasedimentary rocks in generally N-S trending greenstone belts, as well as a granitoid-gneiss complex with intervening gneissic and migmatised supracrustal and plutonic rocks. U-Pb data on migmatite mesosomes indicate that the crust surrounding the Tipasjärvi-Kuhmo-Suomussalmi greenstone belt is of varying age. The oldest protolith detected for a migmatite mesosome from the granitoid-gneiss complex is 2.94 Ga, whereas the other dated migmatite protoliths have ages of 2.84–2.79 Ga. The latter protoliths are syngenetic with the majority of volcanic rocks in the adjacent Tipasjärvi-Kuhmo-Suomussalmi greenstone belt. This suggests that the genesis of some of the volcanic rocks within the greenstone belt and the surrounding migmatite protoliths could be linked. Metamorphic zircon overgrowths with ages of 2.84–2.81 Ga were also obtained. The non-migmatised plutonic rocks in the Kuhmo terrain and in the Nurmes belt record secular geochemical evolution typical of Archaean cratons. The studied tonalitic rocks have ages of 2.83–2.75 Ga and geochemical characteristics similar to low-Al and high-Al TTD (tonalite-trondhjemite-dacite). The granodiorites, diorites, and gabbros with high Mg/Fe and LILE-enriched characteristics were mostly emplaced between 2.74 and 2.70 Ga and exhibit geochemical characteristics typical of Archaean sanukitoid suites. The latest identified plutonic episode took place at 2.70–2.68 Ga, when compositionally heterogeneous leucocratic granitoid rocks, with a variable crustal component, were emplaced. U-Pb data on migmatite leucosomes suggest that leucosome generation may have been coeval with this latest plutonic event.
On the basis of the available U-Pb and Sm-Nd isotopic data, it appears that the plutonic rocks of the Kuhmo terrain and the Nurmes belt do not contain any significant input from Palaeoarchaean sources. A characteristic feature of the Nurmes belt is the presence of migmatised paragneisses, locally preserving primary sedimentary structures, with sporadic amphibolite intercalations. U-Pb studies on zircons indicate that the precursors of the Nurmes paragneisses were graywackes deposited between 2.71 Ga and 2.69 Ga, with a prominent 2.75–2.70 Ga source. Nd isotopic and whole-rock geochemical data for the intercalated amphibolites imply MORB sources. U-Pb data on zircons from the plutonic rocks and paragneisses reveal that metamorphic zircon growth took place at 2.72–2.63 Ga. This was the last tectonothermal event related to the cratonisation of the Archaean crust of eastern Finland.
Resumo:
Regional autonomy in Indonesia was initially introduced as a means of pacifying regional disappointment with the central government. The Regional Autonomy Law of 1999 not only gave the Balinese a chance to express grievances regarding the centralist policies of the Jakarta government but also provided an opportunity to return to regional, exclusive, traditional village governance (desa adat). As a result, the problems faced by the island, particularly ethnic conflicts, are increasingly handled by the mechanisms of this traditional type of governance. Traditional village governance with regard to ethnic conflicts between the Balinese and migrants has never been systematically analyzed. Existing analyses emphasize only the social context; they explain neither the causes of the conflicts and the problems they entail nor the virtues of traditional village governance mechanisms for mediating in the conflict. While some accounts provide snapshots, they lack both a theoretical and a conflict-study perspective. The primary aim of this dissertation is to explore the expression and the causes of conflict between the Balinese and migrants and to assess the potential of traditional village governance as a means of conflict resolution, with particular reference to the municipality of Denpasar. One conclusion of the study is that the conflict between the Balinese and migrants has been expressed on the levels of situation/contradiction, attitudes, and behavior. The driving forces behind the conflict itself consist of the following factors: absence of cooperation; incompatible positions and perceptions; inability to communicate effectively; and problems of inequality and injustice, which come to the surface as social, cultural, and economic problems. This complex of factors fuels collective fear for the future in both groups.
The study concludes that traditional village governance mechanisms, as a means of conflict resolution, have not yet been able to provide an enduring resolution of the conflict. The analysis shows that the practice of traditional village governance is unable to provide satisfactory mechanisms for the conflict as prescribed by conflict resolution theory. Traditional village governance, which is derived from the exclusive Hindu-Balinese culture, is accepted as more legitimate among the Balinese than the official governance policies. However, it is not generally accepted by most of the Muslim migrants. In addition, traditional village governance lacks access to economic instruments, which weakens its capacity to tackle the economic roots of the conflict. Thus the traditional mechanisms of migrant ordinance, as practiced by traditional village governance, have not yet succeeded in penetrating all aspects of the conflict. Finally, one of the main challenges for the legal development of traditional village governance is the creation of a regional legal system capable of accommodating rapid changes in line with national and international legal practices. The framing of new laws should be responsive to the aspirations of a changing society. It should protect not only the interests of the various Balinese communities but also those of other ethnic groups, especially minorities. In other words, the main challenge for traditional village governance is its ability to develop flexibility and inclusiveness.