24 results for referendum and information campaign
in Helda - Digital Repository of the University of Helsinki
Abstract:
In this Thesis, we develop theory and methods for computational data analysis. The problems in data analysis are approached from three perspectives: statistical learning theory, the Bayesian framework, and the information-theoretic minimum description length (MDL) principle. Contributions in statistical learning theory address the possibility of generalization to unseen cases, and regression analysis with partially observed data, with an application to mobile device positioning. In the second part of the Thesis, we discuss so-called Bayesian network classifiers and show that they are closely related to logistic regression models. In the final part, we apply the MDL principle to tracing the history of old manuscripts and to noise reduction in digital signals.
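The stated link between Bayesian network classifiers and logistic regression can be illustrated in its simplest case: a Gaussian naive Bayes classifier with a shared per-feature variance has a posterior log-odds that is exactly linear in the features, i.e. a logistic regression model. A minimal numerical sketch (illustrative parameters only, not taken from the thesis):

```python
import math

def naive_bayes_log_odds(x, mu0, mu1, var, prior1):
    """Posterior log-odds log P(y=1|x)/P(y=0|x) for Gaussian naive
    Bayes with per-feature class means mu0/mu1 and a shared variance."""
    log_odds = math.log(prior1 / (1 - prior1))
    for xi, m0, m1 in zip(x, mu0, mu1):
        # Gaussian log-likelihood ratio; the quadratic terms cancel
        # because the variance is shared, leaving a linear function of xi.
        log_odds += ((m1 - m0) * xi + (m0**2 - m1**2) / 2) / var
    return log_odds

def logistic_log_odds(x, weights, bias):
    """The same log-odds written as a linear (logistic-regression) model."""
    return bias + sum(w * xi for w, xi in zip(weights, x))

# Illustrative parameters: two features, two classes.
mu0, mu1, var, prior1 = [0.0, 1.0], [2.0, -1.0], 1.5, 0.3
weights = [(m1 - m0) / var for m0, m1 in zip(mu0, mu1)]
bias = math.log(prior1 / (1 - prior1)) + sum(
    (m0**2 - m1**2) / (2 * var) for m0, m1 in zip(mu0, mu1))

x = [0.7, -0.2]
same = abs(naive_bayes_log_odds(x, mu0, mu1, var, prior1)
           - logistic_log_odds(x, weights, bias)) < 1e-12
```

The equivalence holds exactly for this restricted model; the thesis's general result concerns a broader family of Bayesian network classifiers.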
Abstract:
The range of consumer health and medicines information sources has diversified along with the increased use of the Internet. This has led to a drive to develop medicines information services and to better incorporate the Internet and e-mail into routine practice in health care and in community pharmacies. To support the development of such services, more information is needed about consumers' use of online information, particularly among those who may be the most likely to use and benefit from the new sources and modes of medicines communication. This study explored the role and utilization of Internet-based medicines information and information services in the context of the wider network of information sources accessible to the public in Finland. The overall aim was to gather information for developing better and more accessible information sources and services that meet consumers' needs. Special focus was on the needs and information behavior of people with depression who use antidepressant medicines. The study applied both qualitative and quantitative methods. Consumer medicines information needs and sources were identified by analyzing the utilization of the national drug information call center operated by the University Pharmacy (Study I) and by surveying Finnish adults' (n=2348) use of different medicines information sources (Study II). The utilization of the Internet as a source of antidepressant information among people with depression was explored through focus group discussions among people with depression and current or past use of antidepressants (n=29, Studies III & IV). The pharmacy response to consumers' needs in terms of providing e-mail counseling was assessed by conducting a virtual pseudo-customer study among Finnish community pharmacies (n=161, Study V). Physicians and pharmacists were the primary sources of medicines information.
People with mental disorders were more frequent users of telephone- and Internet-based medicines information sources and patient information leaflets than people without mental disorders. These sources were used to complement rather than replace information provided face-to-face by health professionals. People with depression used the Internet to seek facts about antidepressants, to share experiences with peers, and out of curiosity. They described access to online drug information as empowering. Some people reported lacking the skills necessary to assess the quality of online information. E-mail medication counseling services provided by community pharmacies were rare and varied in quality. The results suggest that rather than discouraging the use of the Internet, health professionals should direct patients to accurate and reliable sources of online medicines information. Health care providers, including community pharmacies, should also seek to develop new ways of communicating information about medicines with consumers. The study determined that people with depression who use antidepressants need services enabling interactive communication not only with health care professionals but also with peers. Further research should focus on developing medicines information services that facilitate communication among different patient and consumer groups.
Abstract:
The research question of this thesis was how knowledge can be managed with information systems. Information systems can support but not replace knowledge management. Systems can mainly store epistemic organisational knowledge included in content, and process data and information. Certain value can be achieved by adding communication technology to systems. Not all communication, however, can be managed. A new layer between communication and manageable information was named knowformation. The knowledge management literature was surveyed, together with species of information from philosophy, physics, communication theory, and information systems science. Positivism, post-positivism, and critical theory were studied, but knowformation in extended organisational memory appeared to be socially constructed. A memory management model of an extended enterprise (M3.exe) and the knowformation concept were the findings of iterative case studies covering data, information, and knowledge management systems. The cases ranged from groups to the extended organisation. Systems were investigated, and administrators, users (knowledge workers), and managers were interviewed. The model building required alternative sets of data, information, and knowledge instead of the traditional pyramid. The explicit-tacit dichotomy was also reconsidered. As human knowledge is the final aim of all data and information in the systems, the distinction between management of information and management of people was harmonised. Information systems were classified as the core of organisational memory. The content of the systems lies, in practice, between communication and presentation. Firstly, the epistemic criterion of knowledge is required neither in the knowledge management literature nor of the content of the systems. Secondly, systems deal mostly with containers, and the knowledge management literature with applied knowledge.
The construction of reality based on system content and communication also supports the knowformation concept. Knowformation belongs to the memory management model of an extended enterprise (M3.exe), which is divided into horizontal and vertical key dimensions. Vertically, processes deal with content that can be managed, whereas communication can only be supported, mainly by infrastructure. Horizontally, the right-hand side of the model contains systems and the left-hand side content, which should be independent of each other. A strategy based on the model was defined.
Abstract:
The present research focused on motivational and personality traits measuring individual differences in the experience of negative affect, in reactivity to negative events, and in the tendency to avoid threats. In this thesis, such traits (i.e., neuroticism and dispositional avoidance motivation) are jointly referred to as trait avoidance motivation. The seven studies presented here examined the moderators of such traits in predicting risk judgments, negatively biased processing, and adjustment. Given that trait avoidance motivation encompasses reactivity to negative events and a tendency to avoid threats, it is surprising that this trait does not seem to be related to risk judgments and that it seems to be inconsistently related to negatively biased information processing. Previous work thus suggests that some variable(s) moderate these relations. Furthermore, recent research has suggested that despite the close connection between trait avoidance motivation and (mal)adjustment, measures of cognitive performance may moderate this connection. However, it is unclear whether this moderation is due to different response processes between individuals with different cognitive tendencies or abilities, or to a genuine buffering effect of high cognitive ability against the negative consequences of high trait avoidance motivation. Studies 1-3 showed that there is a modest direct relation between trait avoidance motivation and risk judgments, but Studies 2-3 demonstrated that state motivation moderates this relation. In particular, individuals in an avoidance state made high risk judgments regardless of their level of trait avoidance motivation. This result explained the disparity between the theoretical conceptualization of avoidance motivation and the results of previous studies suggesting that the relation between trait avoidance motivation and risk judgments is weak or nonexistent.
Studies 5-6 examined threat identification tendency as a moderator for the relationship between trait avoidance motivation and negatively biased processing. However, no evidence for such moderation was found. Furthermore, in line with previous work, the results of studies 5-6 suggested that trait avoidance motivation is inconsistently related to negatively biased processing, implying that theories concerning traits and information processing may need refining. Study 7 examined cognitive ability as a moderator for the relation between trait avoidance motivation and adjustment, and demonstrated that cognitive ability moderates the relation between trait avoidance motivation and indicators of both self-reported and objectively measured adjustment. Thus, the results of Study 7 supported the buffer explanation for the moderating influence of cognitive performance. To summarize, the results showed that it is possible to find factors that consistently moderate the relations between traits and important outcomes (e.g. adjustment). Identifying such factors and studying their interplay with traits is one of the most important goals of current personality research. The present thesis contributed to this line of work in relation to trait avoidance motivation.
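Moderation effects of the kind tested in these studies are conventionally modelled with an interaction term in a regression equation; a significant interaction coefficient indicates that the moderator changes the trait-outcome relation. A minimal sketch with synthetic data and hypothetical variable names (not the thesis's actual measures or models):

```python
import numpy as np

def moderated_regression(y, trait, moderator):
    """Fit y = b0 + b1*trait + b2*moderator + b3*(trait*moderator)
    by ordinary least squares. A nonzero b3 indicates moderation."""
    X = np.column_stack([
        np.ones_like(trait),   # intercept
        trait,                 # main effect of the trait
        moderator,             # main effect of the moderator
        trait * moderator,     # interaction (moderation) term
    ])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef  # [b0, b1, b2, b3]

# Synthetic illustration mimicking the pattern in Studies 2-3:
# the trait predicts risk judgments only outside an avoidance state.
rng = np.random.default_rng(0)
trait = rng.normal(size=500)                 # trait avoidance motivation
state = rng.integers(0, 2, size=500)         # 0 = neutral, 1 = avoidance state
risk = (2.0 + 1.0 * trait * (1 - state) + 3.0 * state
        + rng.normal(scale=0.1, size=500))
b0, b1, b2, b3 = moderated_regression(risk, trait, state)
# b3 recovers the negative interaction: the trait effect vanishes
# when the avoidance state is active.
```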
Abstract:
Multiple sclerosis (MS) is a chronic, inflammatory disease of the central nervous system, characterized especially by myelin and axon damage. Cognitive impairment in MS is common but difficult to detect without a neuropsychological examination. Valid and reliable methods are needed in clinical practice and research to detect deficits, follow their natural evolution, and verify treatment effects. The Paced Auditory Serial Addition Test (PASAT) is a measure of sustained and divided attention, working memory, and information processing speed, and it is widely used in the neuropsychological evaluation of MS patients. Additionally, the PASAT is the sole cognitive measure in an assessment tool primarily designed for MS clinical trials, the Multiple Sclerosis Functional Composite (MSFC). The aims of the present study were to determine a) the frequency, characteristics, and evolution of cognitive impairment among relapsing-remitting MS patients, and b) the validity and reliability of the PASAT in measuring cognitive performance in MS patients. The subjects were 45 relapsing-remitting MS patients from the Department of Neurology of Seinäjoki Central Hospital and 48 healthy controls. Both groups underwent comprehensive neuropsychological assessments, including the PASAT, twice in a one-year follow-up; additionally, a sample of 10 patients and controls was evaluated with the PASAT in serial assessments five times in one month. The frequency of cognitive dysfunction among relapsing-remitting MS patients in the present study was 42%. Impairments were characterized especially by slowed information processing speed and memory deficits. During the one-year follow-up, cognitive performance was relatively stable among MS patients at the group level. However, practice effects in cognitive tests were less pronounced among MS patients than among healthy controls.
At the individual level, the spectrum of MS patients' cognitive deficits was wide with regard to their characteristics, severity, and evolution. The PASAT was moderately accurate in detecting MS-associated cognitive impairment: 69% of patients were correctly classified as cognitively impaired or unimpaired when the comprehensive neuropsychological assessment was used as the "gold standard". Self-reported nervousness and poor arithmetical skills seemed to explain misclassifications. MS-related fatigue was objectively demonstrated as fading performance towards the end of the test. Despite the observed practice effect, the reliability of the PASAT was excellent, and it was sensitive to the cognitive decline taking place during the follow-up in a subgroup of patients. The PASAT can be recommended for use in the neuropsychological assessment of MS patients. The test is fairly sensitive but less specific; consequently, the reasons for low scores have to be carefully identified before interpreting them as clinically significant.
Abstract:
A population-based early detection program for breast cancer has been in progress in Finland since 1987. According to the regulations in force during the study period 1987-2001, free-of-charge mammography screening was offered every second year to women aged 50-59 years. Recently, it was decided to extend the screening service to the age group 50-69. However, the scope of the program is still frequently discussed in public, and information about the potential impacts of changes in mass-screening practice on the future breast cancer burden is required. The aim of this doctoral thesis is to present methodologies for taking mass-screening invitation information into account in breast cancer burden predictions, and to present alternative breast cancer incidence and mortality predictions up to 2012 based on scenarios for the future screening policy. The focus of this work is not on assessing the absolute efficacy but the effectiveness of mass-screening and, by utilizing the data on invitations, on showing the estimated impacts of changes in an existing screening program on short-term predictions. The breast cancer mortality predictions are calculated using a model that combines incidence, cause-specific survival, and other-cause survival at the individual level. The screening invitation data are incorporated into the modeling of breast cancer incidence and survival by dividing the program into separate components (first and subsequent rounds and years within them, breaks, and the post-screening period) and defining a variable that gives the component of the screening program. Incidence is modeled using a Poisson regression approach and breast cancer survival by applying a parametric mixture cure model, in which the patient population is allowed to be a combination of cured and uncured patients. The patients' risk of dying from causes other than breast cancer is allowed to differ from that of a corresponding general population group and to depend on age and follow-up time.
As a result, the effects of the separate components of the screening program on incidence, the proportion of cured, and the survival of the uncured are quantified. According to the predictions, the impacts of policy changes, such as extending the program from age group 50-59 to 50-69, are clearly visible in incidence, while the effects on mortality in the age group 40-74 are minor. Extending the screening service would increase the incidence of localized breast cancers but decrease the rates of non-localized breast cancer. There were no major differences between the mortality predictions yielded by alternative future scenarios for the screening policy: any policy change would produce at most a 3.0% reduction in overall breast cancer mortality compared to continuing the current practice in the near future.
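The parametric mixture cure model described above expresses overall survival as S(t) = π + (1 − π)·S_u(t), where π is the cured proportion and S_u the survival function of the uncured. A minimal sketch with a Weibull survival function for the uncured (illustrative parameter values, not the thesis's fitted estimates):

```python
import math

def mixture_cure_survival(t, cure_fraction, shape, scale):
    """S(t) = pi + (1 - pi) * S_u(t): overall survival as a mixture of
    a cured fraction pi and uncured patients with Weibull survival."""
    uncured_survival = math.exp(-((t / scale) ** shape))
    return cure_fraction + (1.0 - cure_fraction) * uncured_survival

# Illustrative parameters: 70% cured; uncured survival follows a
# Weibull distribution with shape 1.2 and scale 5 (years).
s0 = mixture_cure_survival(0.0, 0.70, 1.2, 5.0)       # everyone alive at t=0
s_late = mixture_cure_survival(100.0, 0.70, 1.2, 5.0)  # plateaus at the cure fraction
```

The plateau at the cure fraction is what distinguishes cure models from standard survival models, whose survival curves tend to zero.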
Abstract:
Along with the increased life span of individuals, the burden of old age-associated diseases has inevitably increased. Alzheimer's disease (AD), probably the best-known geriatric disease, belongs to the old age-associated amyloid diseases. The purpose of this study was to investigate the frequency, genetic and health-associated risk factors, mutual association, and amyloid proteins in two old age-associated amyloid disorders, senile systemic amyloidosis (SSA) and cerebral amyloid angiopathy (CAA), as part of the prospective population-based Vantaa 85+ autopsy study of a Finnish population aged 85 years or more (Studies I-III), complemented with a case report on a patient with advanced AGel amyloidosis (Study IV). The numbers of patients investigated in Studies I-III were 256, 74, and 63, respectively. The diagnosis and grading of amyloid were based on histological examination of tissue samples obtained post mortem and stained with Congo red. The amyloid fibril and associated proteins were characterized by immunohistochemical staining methods. The genotype frequencies of 20 polymorphisms in 9 genes and information on health-associated risk factors were compared between subjects with and without SSA and CAA. In the Finnish population ≥ 95 years of age, SSA and CAA occurred in 36% and 49% of the subjects, respectively. In total, two-thirds of these very elderly individuals had SSA, CAA, or both; however, in only 14% of the population did the two conditions co-occur. In subjects 85 years or older, the prevalence of SSA was 25%. In this population, SSA was associated with age at the time of death (p=0.002), myocardial infarctions (MIs; p=0.004), the G/G (Val/Val) genotype of the exon 24 polymorphism in the alpha2-macroglobulin (α2M) gene (p=0.042), and the H2 haplotype of the tau gene (p=0.016).
In contrast, the presence of CAA was strongly associated with APOE ε4 (p=0.0003), with histopathological AD (p=0.0005), and with clinical dementia (p=0.01) in both ε4+ (p=0.02) and ε4- (p=0.06) individuals. Apart from the amyloid fibril proteins themselves, complement proteins 3d (C3d) and 9 (C9) were detected in the amyloid deposits of CAA and AGel amyloidosis, and α2M protein was found in fibrous scar tissue close to SSA. In conclusion, this first population-based study of SSA shows that both SSA and CAA are common in very elderly individuals. Old age, MIs, the exon 24 polymorphism of the α2M gene, and the H1/H2 polymorphism of the tau gene were associated with SSA, while clinical dementia and the APOE ε4 genotype were associated with CAA. The high prevalence of CAA, combined with its association with clinical dementia independent of APOE genotype, neuropathological AD, or SSA, also highlights its clinical significance in the very aged, among whom the serious end-stage complications of CAA, namely multiple infarctions and hemorrhages, are rare. The report on a patient with advanced AGel amyloidosis added knowledge of the disease and showed that this generally benign condition may occasionally lead to death. Further studies are warranted to confirm the findings in other populations. The role of α2M and tau in the pathogenesis of SSA and the involvement of complement in the elimination of amyloid beta (Aβ) protein from the brain also remain to be clarified. Finally, the high prevalence of SSA in the elderly raises the need for prospective clinical studies to define its clinical significance.
Abstract:
The purpose of this study was to extend understanding of how large firms pursuing sustained and profitable growth manage organisational renewal. A multiple-case study was conducted in 27 North American and European wood-industry companies, of which 11 were chosen for closer study. The study combined the organisational-capabilities approach to strategic management with corporate-entrepreneurship thinking. It charted the further development of an identification and classification system for capabilities comprising three dimensions: (i) the dynamism between firm-specific and industry-significant capabilities, (ii) hierarchies of capabilities and capability portfolios, and (iii) their internal structure. Capability building was analysed in the context of the organisational design, the technological systems, and the type of resource-bundling process (creating new vs. entrenching existing capabilities). The thesis describes the current capability portfolios and the organisational changes in the case companies. It also clarifies the mechanisms through which companies can influence the balance between knowledge search and the efficiency of knowledge transfer and integration in their daily business activities, and consequently the diversity of their capability portfolio and the breadth and novelty of their product/service range. The largest wood-industry companies of today must develop a seemingly dual strategic focus: they have to combine leading-edge, innovative solutions with cost-efficient, large-scale production. The use of modern technology in production was no longer a primary source of competitiveness in the case companies but rather belonged to the portfolio of basic capabilities. Knowledge and information management had become an industry imperative, on a par with cost effectiveness. Yet, during the period of this research, the case companies were better at supporting growth in the volume of existing activities than growth through new economic activities.
Customer-driven, incremental innovation was preferred over firm-driven innovation through experimentation. The three main constraints on organisational renewal were the lack of slack resources, the aim for lean, centralised designs, and the inward-bound communication climate.
Abstract:
Comprehensive two-dimensional gas chromatography (GC×GC) offers enhanced separation efficiency, reliability in qualitative and quantitative analysis, the capability to detect low quantities, and information on the whole sample and its components. These features are essential in the analysis of complex samples, in which the number of compounds may be large or the analytes of interest are present at trace level. This study involved the development of instrumentation, data analysis programs, and methodologies for GC×GC and their application in studies on qualitative and quantitative aspects of GC×GC analysis. Environmental samples were used as model samples. The instrumental development comprised the construction of three versions of a semi-rotating cryogenic modulator in which modulation was based on two-step cryogenic trapping with continuously flowing carbon dioxide as the coolant. Two-step trapping was achieved by rotating the nozzle spraying the carbon dioxide with a motor. The fastest rotation and highest modulation frequency were achieved with a permanent-magnet motor, and modulation was most accurate when the motor was controlled with a microcontroller containing a quartz crystal. Heated wire resistors were unnecessary for the desorption step when liquid carbon dioxide was used as the coolant. With the modulators developed in this study, the narrowest peaks were 75 ms wide at the base. Three data analysis programs were developed, providing basic, comparison, and identification operations. The basic operations enabled the visualisation of two-dimensional plots and the determination of retention times, peak heights, and volumes. The overlaying feature of the comparison program allowed easy comparison of 2D plots. An automated identification procedure based on mass spectra and retention parameters allowed the qualitative analysis of data obtained by GC×GC and time-of-flight mass spectrometry.
In the methodological development, sample preparation (extraction and clean-up) and GC×GC methods were developed for the analysis of atmospheric aerosol and sediment samples. Dynamic sonication-assisted extraction was well suited for atmospheric aerosols collected on a filter. A clean-up procedure utilising normal-phase liquid chromatography with ultraviolet detection worked well in removing aliphatic hydrocarbons from a sediment extract. GC×GC with flame ionisation detection or quadrupole mass spectrometry provided good reliability in the qualitative analysis of target analytes. However, GC×GC with time-of-flight mass spectrometry was needed in the analysis of unknowns. The automated identification procedure that was developed was efficient in the analysis of large data files, but manual searching and analyst knowledge remain invaluable as well. Quantitative analysis was examined in terms of calibration procedures and the effect of matrix compounds on GC×GC separation. In addition to calibration in GC×GC with summed peak areas or peak volumes, simplified area calibration based on the normal GC signal can be used to quantify compounds in samples analysed by GC×GC, so long as certain qualitative and quantitative prerequisites are met. In the study of the effect of matrix compounds on GC×GC separation, it was shown that the quality of the separation of PAHs is not significantly disturbed by the amount of matrix, and that quantitativeness suffers only slightly in the presence of matrix when the amount of target compounds is low. The benefits of GC×GC in the analysis of complex samples easily outweigh the technique's minor drawbacks. The instrumentation and methodologies developed performed well for environmental samples, but they could also be applied to other complex samples.
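In GC×GC, a single first-dimension peak is sliced by the modulator into several second-dimension peaks, so calibration with summed peak areas, as mentioned above, totals the modulated peaks of one analyte before applying a linear calibration. A minimal sketch (hypothetical helper and numbers, not from the study):

```python
def quantify(modulated_peak_areas, slope, intercept=0.0):
    """Quantify one analyte in GC×GC: sum the areas of the modulated
    (second-dimension) peaks originating from a single first-dimension
    peak, then invert a linear calibration area = slope*conc + intercept.

    Hypothetical helper; slope and intercept would come from a
    calibration series measured under the same conditions.
    """
    total_area = sum(modulated_peak_areas)
    return (total_area - intercept) / slope

# One analyte sliced into four modulated peaks by the modulator:
conc = quantify([120.0, 310.0, 280.0, 90.0], slope=40.0)  # -> 20.0
```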
Abstract:
Remote sensing provides methods to infer land cover information over large geographical areas at a variety of spatial and temporal resolutions. Land cover is input data for a range of environmental models, and information on land cover dynamics is required for monitoring the implications of global change. Such data are also essential in support of environmental management and policymaking. Boreal forests are a key component of the global climate and a major sink of carbon. The northern latitudes are expected to experience a disproportionate and rapid warming, which can have a major impact on vegetation at forest limits. This thesis examines the use of optical remote sensing for estimating aboveground biomass, leaf area index (LAI), tree cover, and tree height in the boreal forests and the tundra-taiga transition zone in Finland. Continuous fields of forest attributes are required, for example, to improve the mapping of forest extent. The thesis focuses on studying the feasibility of satellite data at multiple spatial resolutions, assesses the potential of multispectral, multiangular, and multitemporal information, and provides a regional evaluation of global land cover data. Preprocessed ASTER, MISR, and MODIS products are the principal satellite data. The reference data consist of field measurements, forest inventory data, and fine-resolution land cover maps. The fine-resolution studies demonstrate that statistical relationships between biomass and satellite data are relatively strong in single-species, low-biomass mountain birch forests in comparison to higher-biomass coniferous stands. The combination of forest stand data and fine-resolution ASTER images provides a method for biomass estimation using medium-resolution MODIS data. The multiangular data improve the accuracy of land cover mapping in the sparsely forested tundra-taiga transition zone, particularly in mires.
Similarly, multitemporal data improve the accuracy of coarse-resolution tree cover estimates in comparison to single-date data. Furthermore, the peak of the growing season is not necessarily the optimal time for land cover mapping in the northern boreal regions. The evaluated coarse-resolution land cover data sets have considerable shortcomings in northernmost Finland and should be used with caution in similar regions. Quantitative reference data and upscaling methods for integrating multiresolution data are required for the calibration of statistical models and the evaluation of land cover data sets. The preprocessed image products have potential for wider use, as they can considerably reduce the time and effort needed for data processing.
Abstract:
The main objective of this study is to evaluate selected geophysical, structural, and topographic methods on regional, local, and tunnel and borehole scales as indicators of the properties of fracture zones or fractures relevant to groundwater flow. Such information serves, for example, groundwater exploration and the prediction of the risk of groundwater inflow in underground construction. This study aims to address how the features detected by these methods are linked to groundwater flow in qualitative and semi-quantitative terms, and how well the methods reveal the properties of fracturing affecting groundwater flow in the studied sites. The investigated areas are: (1) the Päijänne Tunnel for water conveyance, whose study serves as a verification of structures identified on regional and local scales; (2) the Oitti fuel spill site, used to telescope across scales and compare geometries of structural assessment; and (3) Leppävirta, where fracturing and the hydrogeological environment have been studied on the scale of a drilled well. The methods applied in this study include: the interpretation of lineaments from topographic data and their comparison with aeromagnetic data; the analysis of geological structures mapped in the Päijänne Tunnel; borehole video surveying; groundwater inflow measurements; groundwater level observations; and information on the tunnel's deterioration as demonstrated by block falls. The study combined geological and geotechnical information on the relevant factors governing groundwater inflow into a tunnel and indicators of fracturing, as well as environmental datasets, as overlays for spatial analysis using GIS. Geophysical borehole logging and fluid logging were used in Leppävirta to compare the responses of different methods to fracturing and other geological features on the scale of a drilled well. Results from some of the geophysical borehole measurements were affected by the large diameter (gamma radiation) or the uneven surface (caliper) of these structures.
However, different anomalies indicating a more fractured upper part of the bedrock traversed by well HN4 in Leppävirta suggest that several methods can be used for detecting fracturing. Fracture trends appear to align similarly on different scales in the zone of the Päijänne Tunnel. For example, similar patterns were found between the regional magnetic trends and the orientations of topographic lineaments interpreted as expressions of fracture zones. The same structural orientations as those of the larger structures on local or regional scales were observed in the tunnel, even though a match could not be made in every case. The size and orientation of the observation space (a patch of terrain at the surface, a tunnel section, or a borehole), the characterization method with its typical sensitivity, and the characteristics of the location all influence the identification of the fracture pattern. Through due consideration of the influence of the sampling geometry, and by utilizing complementary fracture characterization methods in tandem, some of the complexities of the relationship between fracturing and groundwater flow can be addressed. The flow connections demonstrated by the response of the groundwater level in monitoring wells to the pressure decrease in the tunnel, and by the transport of MTBE through fractures in the bedrock in Oitti, highlight the importance of protecting the tunnel water from the risk of contamination. In general, the largest drawdown values occurred in monitoring wells closest to the tunnel and/or close to the topographically interpreted fracture zones. To some degree, the rate of inflow shows a positive correlation with the level of reinforcement, as both are connected with the fracturing of the bedrock.
The following geological features increased the vulnerability of tunnel sections to pollution, especially when several factors affected the same locations: (1) fractured bedrock, particularly with associated groundwater inflow; (2) thin or permeable overburden above fractured rock; (3) a hydraulically conductive layer underneath the surface soil; and (4) a relatively thin bedrock roof above the tunnel. The observed anisotropy of the geological media should ideally be taken into account in the assessment of vulnerability of tunnel sections and eventually for directing protective measures.
Resumo:
Information visualization is the process of constructing a visual presentation of abstract quantitative data. The characteristics of visual perception enable humans to recognize patterns, trends, and anomalies inherent in the data with little effort when they are shown in a visual display. Such properties of the data are likely to be missed in a purely text-based presentation. Visualizations are therefore widely used in contemporary business decision support systems. Visual user interfaces called dashboards are tools for reporting the status of a company and its business environment to facilitate business intelligence (BI) and performance management activities. In this study, we examine research on the principles of human visual perception and information visualization, as well as the application of visualization in a business decision support system. A review of current BI software products reveals that the visualizations they include are often quite ineffective in communicating important information. Based on the principles of visual perception and information visualization, we summarize a set of design guidelines for creating effective visual reporting interfaces.
Resumo:
Wind power has grown fast internationally. It can reduce the environmental impact of energy production and increase energy security. Finland has a wind turbine industry, but the growth of wind electricity production has been slow, and nationally set capacity targets have not been met. I explored the social factors that have affected the slow development of wind power in Finland by studying the perceptions of Finnish national-level wind power actors. By that I refer to people who affect the development of the wind power sector, such as officials, politicians, and representatives of wind industries and various organisations. The material consisted of interviews, a questionnaire, and written sources. Perceptions of wind power, its future, and the methods to promote it were divided. They were studied through discourse analysis, content analysis, and scenario construction. Definition struggles affect views of the significance and potential of wind power in Finland, and thereby also investments in wind power and wind power policy choices. Views of the future were illustrated through scenarios. These included scenarios of fast growth, but in the most pessimistic views, wind power was not expected to be competitive without support measures even in 2025, and the projected wind power capacity was correspondingly low. In such a scenario, policy tool choices were expected to remain similar to those in use at the time of the interviews. So far, the development in Finland has closely followed this pessimistic scenario. Despite the scepticism about wind electricity production, the wind turbine industry was seen as a credible industry. For many wind power actors, as well as for Finnish wind power policy, the turbine industry is a significant motive for promoting wind power. Domestic electricity production and the export-oriented turbine industry are linked in discourse through so-called home market argumentation. Finnish policy tools have included subsidies, research and development funding, and information policies.
The criteria used to evaluate policy measures were both process-oriented and value-based. Feed-in tariffs and green certificates, which are common elsewhere, have not been taken into use in Finland. Some interviewees considered such tools unsuitable for free electricity markets and for the Finnish policy style, regarding them as dictatorial and contrary to Western values. Other interviewees supported their use because of their effectiveness. The current Finnish policy tools are not sufficiently effective to increase wind power production significantly. The marginalisation of wind power in discourses, pessimistic views of the future, and the view that the small consumer demand for wind electricity reflects citizens' political attitudes towards promoting wind power all make it more difficult to adopt stronger policy measures. Wind power has not yet significantly contributed to the ecological modernisation of the energy sector in Finland, but the situation may change as the need to reduce emissions from energy production continues.