44 results for WEIGHTED EARLINESS
Abstract:
In Finland, electricity distribution is a regulated monopoly. The Energy Market Authority (Energiamarkkinavirasto) provides the guidelines and the model for the companies' earning potential. Roughly speaking, the revenue model is the product of the invested capital and the weighted cost of capital. The weighted cost of capital consists of several parameters, such as beta and the debt risk premium. The levels of these parameters and the timing of their determination are based on subjective views, whereas an objective determination method should be used. The current beta and debt risk premium are based on statements by the Energy Market Authority and expert opinions. The topic has been studied very little, mainly, it seems, because there are no listed pure-play distribution network companies. The current beta is 0.529 and the debt risk premium is 1.0 %. This master's thesis determines the current levels of beta and the debt risk premium on a market basis. The determination model presented here is based purely on market data, and no subjective opinions are used in applying it. Using market-based data, beta should be at the level of 0.525 and the debt risk premium at 1.34 %. If adopted, these figures would directly and positively affect the allowed return of distribution network companies in Finland.
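The revenue model described above can be sketched as a weighted average cost of capital (WACC) applied to invested capital. The sketch below is illustrative only: the CAPM form for the cost of equity and the risk-free rate, market risk premium, gearing, and tax rate are assumed values, not the Energy Market Authority's actual regulatory formula.

```python
def wacc(beta, debt_premium, risk_free=0.03, market_premium=0.05,
         debt_share=0.4, tax_rate=0.20):
    """Weighted average cost of capital: CAPM cost of equity plus
    after-tax cost of debt, weighted by an assumed capital structure."""
    cost_of_equity = risk_free + beta * market_premium
    cost_of_debt = (risk_free + debt_premium) * (1 - tax_rate)
    return (1 - debt_share) * cost_of_equity + debt_share * cost_of_debt

def allowed_return(invested_capital, beta, debt_premium):
    """Allowed return = invested capital x weighted cost of capital."""
    return invested_capital * wacc(beta, debt_premium)

# current regulatory parameters vs. the market-based levels derived in the thesis
current = allowed_return(100e6, beta=0.529, debt_premium=0.010)
proposed = allowed_return(100e6, beta=0.525, debt_premium=0.0134)
print(f"current: {current:,.0f} EUR, market-based: {proposed:,.0f} EUR")
```

Under these assumed surrounding parameters, the higher debt risk premium outweighs the marginally lower beta, which is consistent with the abstract's claim of a positive effect on the allowed return.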
Abstract:
Ride comfort of elevators is one of the quality criteria valued by customers. The objective of this master's thesis was to develop a process for measuring the ride comfort of automatic elevator doors. The door's operational noise was chosen as the focus area, and other kinds of noise, such as noise caused by pressure differences in the elevator shaft, were excluded. The thesis includes a theory part and an empirical part. In the first part, theories of quality management, quality measurement and acoustics are presented. In the empirical part, the developed ride comfort measuring process is presented, different operational noise sources are analyzed, and an example is given of how the measuring process can be used to guide product development. To measure ride comfort, a process was developed in which a two-room silent room served as the measuring environment and an EVA-625 device was used for the actual measurement of door noise. Sound pressure levels were scaled in A-weighted decibels, and the door movement was monitored with an accelerometer. This made it possible to connect the noise to its sources, which in turn helped to identify potential ride comfort improvements. The noise isolation class was also measured with the Ivie measuring system. Measuring door ride comfort provides feedback for product development and for managing the current product portfolio, and it enables the continuous improvement of elevator door ride comfort. The measuring results can also be used to support marketing arguments for the doors.
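The decibel scaling mentioned above can be illustrated with the standard sound pressure level formula, Lp = 20 log10(p/p0) with p0 = 20 µPa. A-weighting itself is a frequency-dependent filter and is omitted here; the pressure values are made-up examples, not measurements from the study.

```python
import math

P_REF = 20e-6  # reference sound pressure in air, 20 micropascals

def spl_db(pressure_pa):
    """Sound pressure level in dB re 20 uPa."""
    return 20 * math.log10(pressure_pa / P_REF)

def combine_levels(levels_db):
    """Combine incoherent noise sources on an energy basis (dB addition)."""
    return 10 * math.log10(sum(10 ** (level / 10) for level in levels_db))

# a pressure of 0.02 Pa corresponds to 60 dB
print(round(spl_db(0.02), 1))
# two equally loud door-noise sources add about 3 dB
print(round(combine_levels([60.0, 60.0]), 1))  # 63.0
```

Energy-based addition is why eliminating one of two equally loud noise sources only reduces the total level by about 3 dB, which matters when prioritizing door noise sources for improvement.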
Abstract:
Background: Approximately two percent of Finns have sequelae after traumatic brain injury (TBI), and many TBI patients are young or middle-aged. The high rate of unemployment after TBI has major economic consequences for society, and traumatic brain injury often has considerable personal consequences as well. Structural imaging is often needed to support the clinical TBI diagnosis. Accurate early diagnosis is essential for successful rehabilitation and, thus, may also influence the patient's outcome. Traumatic axonal injury and cortical contusions constitute the majority of traumatic brain lesions. Several studies have shown magnetic resonance imaging (MRI) to be superior to computed tomography (CT) in the detection of these lesions. However, traumatic brain injury often leads to persistent symptoms even in cases with few or no findings on conventional MRI. Aims and methods: The aim of this prospective study was to clarify the role of conventional MRI in the imaging of traumatic brain injury, and to investigate how the radiologic diagnostics of TBI can be improved by using the more modern diffusion-weighted imaging (DWI) and diffusion tensor imaging (DTI) techniques. In a longitudinal study, we estimated the visibility of contusions and other intraparenchymal lesions on conventional MRI at one week and one year after TBI. We used DWI-based measurements to look for changes in the diffusivity of the normal-appearing brain in a case-control study. DTI-based tractography was used in a case-control study to evaluate changes in the volume, diffusivity, and anisotropy of the long association tracts in symptomatic TBI patients with no visible signs of intracranial or intraparenchymal abnormalities on routine MRI. We further studied the reproducibility of different tools for identifying and measuring white-matter tracts by using a DTI sequence suitable for clinical protocols.
Results: Both the number and the extent of visible traumatic lesions on conventional MRI diminished significantly with time. Slightly increased diffusion in the normal-appearing brain was a common finding at one week after TBI, but it was not significantly associated with injury severity. Fractional anisotropy values, which represent the integrity of the white-matter tracts, were significantly diminished in several tracts in TBI patients compared to the control subjects. Compared to the cross-sectional ROI method, the tract-based analyses had better reproducibility in identifying and measuring the white-matter tracts of interest by means of DTI tractography. Conclusions: As conventional MRI is still applied in clinical practice, it should be carried out soon after the injury, at least in symptomatic patients with a negative CT scan. DWI-based brain diffusivity measurements may be used to improve the documentation of TBI. DTI tractography can be used to improve radiologic diagnostics in the symptomatic TBI sub-population with no findings on conventional MRI. The reproducibility of different tools for quantifying fibre tracts varies considerably, which should be taken into account in clinical DTI applications.
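The fractional anisotropy (FA) measure referred to above is a standard scalar derived from the three eigenvalues of the diffusion tensor; a minimal computation is sketched below. The example eigenvalues are illustrative, not values from the study.

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """FA from the three diffusion tensor eigenvalues (standard formula):
    sqrt(3/2) * ||lambda - mean|| / ||lambda||, ranging from 0 to 1."""
    mean = (l1 + l2 + l3) / 3
    num = math.sqrt((l1 - mean) ** 2 + (l2 - mean) ** 2 + (l3 - mean) ** 2)
    den = math.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    return math.sqrt(1.5) * num / den

# perfectly isotropic diffusion gives FA = 0; strongly directional
# diffusion (as in an intact white-matter tract) gives FA close to 1
print(fractional_anisotropy(1.0, 1.0, 1.0))  # 0.0
print(round(fractional_anisotropy(1.7, 0.3, 0.3), 3))
```

Reduced FA in a tract, as reported for the TBI patients, reflects diffusion becoming less directional, which is interpreted as loss of microstructural integrity.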
Abstract:
Alcohol consumption during pregnancy can potentially affect the developing fetus in devastating ways, leading to a range of physical, neurological, and behavioral alterations most accurately termed Fetal Alcohol Spectrum Disorders (FASD). Despite being preventable, prenatal alcohol exposure today constitutes a leading cause of intellectual disability in the Western world. In the Western countries where prevalence studies have been performed, the rates of FASD exceed those of, for example, autism spectrum disorders, Down's syndrome and cerebral palsy. In addition to the direct effects of alcohol, children and adolescents with FASD often face a double burden in life, as their neurological sequelae are accompanied by adverse living surroundings exposing them to further environmental risk. Nevertheless, children with FASD today remain remarkably underdiagnosed by the health care system. This thesis forms part of a larger multinational research project, the Collaborative Initiative on Fetal Alcohol Spectrum Disorders (CIFASD), initiated by the National Institute on Alcohol Abuse and Alcoholism (NIAAA) in the U.S.A. The general aim of the present thesis was to examine a cohort of children and adolescents growing up with fetal alcohol-related damage in Finland. The thesis consists of five studies with a broad focus on diagnosis, cognition, behavior, adaptation and brain metabolic alterations in children and adolescents with FASD. The participants formed four different groups: one group with histories of prenatal exposure to alcohol, the FASD group; one IQ-matched contrast group mostly consisting of children with specific learning disorder (SLD); and two typically-developing control groups (CON1 and CON2). Participants were identified through medical records, random sampling from the Finnish national population registry, and email alerts to students.
Importantly, the participants in the present studies comprise a group of very carefully clinically characterized children with FASD, as the studies were performed in close collaboration with leading experts in the field (Prof. Edward Riley and Prof. Sarah Mattson, Center for Behavioral Teratology, San Diego State University, U.S.A.; Prof. Eugene Hoyme, Sanford School of Medicine, University of South Dakota, U.S.A.). In the present thesis, the revised Institute of Medicine diagnostic criteria for FASD were tested on a Finnish population and found to be a reliable tool for differentiating among the subgroups of FASD. A weighted dysmorphology scoring system proved to be a valuable adjunct for quantifying growth deficits and dysmorphic features in children with FASD (Study 1). The purpose of Study 2 was to clarify the relationship between alcohol-related dysmorphic features and general cognitive capacity. The results showed a significant correlation between dysmorphic features and cognitive capacity, suggesting that children with more severe growth deficiency and dysmorphic features have more cognitive limitations. This association was, however, only moderate, indicating that physical markers and cognitive capacity do not always go hand in hand in individuals with FASD. Behavioral problems in the FASD group proved substantial compared to the typically developing control group. In Study 3, risk and protective factors associated with behavioral problems in the FASD group were explored further, focusing on diagnostic and environmental factors. Two factors associated with an elevated risk of behavioral problems emerged: a long time spent in residential care and a low dysmorphology score proved to be the most pervasive risk factors for behavioral problems. The results underscore the clinical importance of appropriate services and care for less visibly alcohol-affected children and highlight the need to attend to children with FASD being raised in institutions.
With their background of early biological and psychological impairment compounded by fewer opportunities for a close and continuous caregiver relationship, such children seem to run an especially great risk of adverse life outcomes. Study 4 focused on adaptive abilities such as communication, daily living skills and social skills, in other words skills that are important for gradually enabling an independent life, maintaining social relationships and becoming integrated into society. The results showed that the adaptive abilities of children and adolescents growing up with FASD were significantly compromised compared to both typically-developing peers and IQ-matched children with SLD. Clearly different adaptive profiles were revealed, with the FASD group performing worse than the SLD group, which in turn performed worse than the CON1 group. Importantly, the SLD group outperformed the FASD group on adaptive behavior in spite of comparable cognitive levels. This is the first study to compare the adaptive abilities of a group of children and adolescents with FASD relative to both a contrast group of IQ-matched children with SLD and a group of typically-developing peers. Finally, in Study 5, magnetic resonance spectroscopic imaging (MRS) provided evidence of longstanding neurochemical alterations in adolescents and young adults with FASD related to alcohol exposure in utero 14-20 years earlier. Neurochemical alterations were seen in several brain areas: the frontal and parietal cortices, corpus callosum, thalamus and frontal white matter, as well as the cerebellar dentate nucleus. The findings are compatible with neuropsychological findings in FASD. Glial cells seemed to be more affected than neurons. In conclusion, more societal efforts and resources should be focused on recognizing and diagnosing FASD and on supporting the subgroups with an elevated risk of poor outcome.
Without adequate intervention, children and adolescents with FASD run a great risk of marginalization and social maladjustment, which is costly not only to society but also to the lives of the many young people with FASD.
Abstract:
Multiple sclerosis (MS) is a chronic immune-mediated inflammatory disorder of the central nervous system. MS is the most common disabling central nervous system (CNS) disease of young adults in the Western world. In Finland, the prevalence of MS ranges between 1/1000 and 2/1000 in different areas. Fabry disease (FD) is a rare hereditary metabolic disease caused by a mutation in the single gene coding for the α-galactosidase A (alpha-gal A) enzyme. It leads to multi-organ pathology, including cerebrovascular disease. Currently there are 44 patients with diagnosed FD in Finland. Magnetic resonance imaging (MRI) is commonly used in the diagnostics and follow-up of these diseases. Disease activity can be demonstrated by the occurrence of new or gadolinium (Gd)-enhancing lesions in routine studies. Diffusion-weighted imaging (DWI) and diffusion tensor imaging (DTI) are advanced MR sequences which, in several CNS diseases, can reveal pathologies in brain regions that appear normal on conventional MR images. The main focus of this study was to determine whether whole-brain apparent diffusion coefficient (ADC) analysis can be used to demonstrate MS disease activity. MS patients were investigated before and after delivery, and before and after the initiation of disease-modifying treatment (DMT). In FD, DTI was used to reveal possible microstructural alterations at early time points, when extensive signs of cerebrovascular disease are not yet visible on conventional MR sequences. Our clinical and MRI findings at 1.5 T indicated that post-partum activation of the disease is an early and common phenomenon among mothers with MS. MRI seems to be a more sensitive method for assessing MS disease activity than the recording of relapses.
However, whole-brain ADC histogram analysis is of limited value in the follow-up of inflammatory conditions in a pregnancy-related setting, because the pregnancy-related physiological effects on ADC overwhelm the ADC alterations associated with MS pathology in brain tissue areas that appear normal on conventional MRI sequences. DTI reveals signs of microstructural damage in the brain white matter of FD patients before an extensive white matter lesion load can be observed on conventional MR scans. DTI could offer a valuable tool for monitoring the possible effects of enzyme replacement therapy in FD.
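The apparent diffusion coefficient underlying the histogram analysis above is computed voxel by voxel from the diffusion-weighted signal attenuation; a minimal single-voxel sketch follows. The signal intensities and b-value are illustrative assumptions, not data from the study.

```python
import math

def adc(s0, sb, b=1000.0):
    """Apparent diffusion coefficient (mm^2/s) from the signal without
    diffusion weighting (s0) and with weighting (sb) at b-value b (s/mm^2),
    via the mono-exponential model sb = s0 * exp(-b * ADC)."""
    return math.log(s0 / sb) / b

# a typical normal-appearing white-matter voxel has an ADC around
# 0.7-0.8 x 10^-3 mm^2/s; a whole-brain histogram pools such values
# over all brain voxels
value = adc(1000.0, 480.0)
print(f"{value:.2e}")
```

A whole-brain ADC histogram is then just the distribution of these per-voxel values, and summary statistics of that distribution (mean, peak height, peak location) are what get compared between time points.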
Abstract:
The performance measurement working group of the Finnish administrative courts has developed an early-warning system, which improves case management by flagging delayed case processing, and has classified cases by workload; weight coefficients describing the workload were defined for the classes. The objective of this master's thesis is to examine the opportunities that the reforms made to the case management system open up for developing performance measurement and target setting from a case-flow perspective. To this end, the effects of workload weighting on the administrative courts' performance indicators are investigated, and the courts' performance is analyzed with workload weighting for the years 2009-2012. The report contains a theoretical review of the research topic and an empirical part examining the target organization. The empirical part relies heavily on quantitative research. The study found that workload weighting narrows the performance differences between administrative courts, but considerable differences remain even under workload weighting. Performance differs both between courts and within courts across years. Based on an analysis of the current performance measurement and target-setting practices, four development priorities can be proposed: 1) setting targets over a longer horizon, 2) shifting the focus of measurement and monitoring from realized figures to forecasting, 3) aligning the performance indicators with workload weighting, and 4) harmonizing target levels. Based on these development priorities and the performance analyses, an alternative way to measure performance and set targets was created and illustrated with different scenarios. The work offers useful information about the performance of the administrative courts and about the opportunities that the case management reforms bring to developing and monitoring their performance.
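The workload weighting described above amounts to replacing raw case counts with a weighted sum over case classes. The class names, weight coefficients, and counts below are entirely hypothetical, since the abstract does not give the working group's actual values; the sketch only shows the mechanics.

```python
# Hypothetical workload weights per case class (illustrative only)
CASE_WEIGHTS = {"simple": 0.5, "standard": 1.0, "demanding": 3.0}

def weighted_caseload(resolved_cases):
    """Workload-weighted number of resolved cases from per-class counts."""
    return sum(CASE_WEIGHTS[cls] * n for cls, n in resolved_cases.items())

court_a = {"simple": 400, "standard": 300, "demanding": 50}
court_b = {"simple": 100, "standard": 350, "demanding": 120}

# raw counts favour court A (750 vs 570), but the weighted comparison
# can look quite different when case mixes differ
print(weighted_caseload(court_a), weighted_caseload(court_b))
```

This is why weighting can narrow, or even reverse, apparent performance differences between courts with different case mixes, as the thesis observes.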
Abstract:
Longitudinal surveys are increasingly used to collect event history data on person-specific processes such as transitions between labour market states. Survey-based event history data pose a number of challenges for statistical analysis. These challenges include survey errors due to sampling, non-response, attrition and measurement. This study deals with non-response, attrition and measurement errors in event history data and the bias they cause in event history analysis. The study also discusses some choices faced by a researcher using longitudinal survey data for event history analysis and demonstrates their effects. These choices include whether a design-based or a model-based approach is taken, which subset of the data to use and, if a design-based approach is taken, which weights to use. The study takes advantage of the possibility of using combined longitudinal survey and register data. The Finnish subset of the European Community Household Panel (FI ECHP) survey for waves 1–5 was linked at the person level with longitudinal register data. Unemployment spells were used as the study variables of interest. Lastly, a simulation study was conducted in order to assess the statistical properties of the Inverse Probability of Censoring Weighting (IPCW) method in a survey data context. The study shows how combined longitudinal survey and register data can be used to analyse and compare the non-response and attrition processes, test the type of missingness mechanism and estimate the size of the bias due to non-response and attrition. In our empirical analysis, initial non-response turned out to be a more important source of bias than attrition. Reported unemployment spells were subject to seam effects, omissions and, to a lesser extent, overreporting. The use of proxy interviews tended to cause spell omissions. An often-ignored phenomenon, classification error in reported spell outcomes, was also found in the data.
Neither the Missing At Random (MAR) assumption about the non-response and attrition mechanisms nor the classical assumptions about measurement errors turned out to be valid. Measurement errors in both spell durations and spell outcomes were found to cause bias in estimates from event history models. Low measurement accuracy affected the estimates of the baseline hazard most. The design-based estimates based on data from respondents to all waves of interest and weighted by the last-wave weights displayed the largest bias. Using all the available data, including the spells of attriters until the time of attrition, helped to reduce attrition bias. Lastly, the simulation study showed that the IPCW correction to the design weights reduces bias due to dependent censoring in design-based Kaplan-Meier and Cox proportional hazards model estimators. The study discusses the implications of the results for survey organisations collecting event history data, researchers using surveys for event history analysis, and researchers developing methods to correct for non-sampling biases in event history data.
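A weighted Kaplan-Meier estimator of the kind the IPCW correction plugs into can be sketched in a few lines. This is a generic illustration, not the study's implementation: in actual IPCW the per-subject weights would come from a fitted censoring model (e.g. a logistic or Cox model for attrition), which is not shown here, and tied event times are handled one subject at a time.

```python
def weighted_kaplan_meier(times, events, weights):
    """Kaplan-Meier survival estimates with per-subject weights
    (e.g. inverse-probability-of-censoring weights), returned as
    (event_time, S(t)) pairs. events: 1 = event, 0 = censored."""
    data = sorted(zip(times, events, weights))
    at_risk = sum(w for _, _, w in data)  # weighted size of the risk set
    surv, out = 1.0, []
    for t, e, w in data:
        if e:  # an event: multiply in the weighted survival step
            surv *= 1 - w / at_risk
            out.append((t, surv))
        at_risk -= w  # subject leaves the risk set (event or censoring)
    return out

# with equal weights this reduces to the ordinary Kaplan-Meier estimator
km = weighted_kaplan_meier([2, 3, 5, 7], [1, 0, 1, 1], [1, 1, 1, 1])
print(km)  # [(2, 0.75), (5, 0.375), (7, 0.0)]
```

Upweighting subjects who resemble those lost to attrition is what lets the estimator compensate for dependent censoring, which is the bias mechanism the simulation study targets.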
Abstract:
Wind power is a low-carbon form of energy production that reduces society's dependence on fossil fuels. Finland has adopted wind energy production into its climate change mitigation policy, which has led to changes in legislation and guidelines, the allocation of regional wind power areas, and the establishment of a feed-in tariff. Wind power production has indeed grown rapidly in Finland after two decades of relatively slow growth; for instance, from 2010 to 2011 wind energy production increased by 64 %, but there is still a long way to the national goal of 6 TWh by 2020. This thesis introduces a GIS-based decision-support methodology for the preliminary identification of suitable areas for wind energy production, including an estimation of their level of risk. The goal of this study was to define the least risky places for wind energy development within Kemiönsaari municipality in Southwest Finland. Spatial multicriteria decision analysis (SMCDA) has been used for searching for suitable wind power areas, as well as for many other location-allocation problems. SMCDA scrutinizes complex, ill-structured decision problems in a GIS environment using constraints and evaluation criteria, which are aggregated using weighted linear combination (WLC). Weights for the evaluation criteria were acquired using the analytic hierarchy process (AHP) with nine expert interviews. Subsequently, the feasible alternatives were ranked in order to provide a recommendation, and finally a sensitivity analysis was conducted to determine the robustness of the recommendation. The first aim of the study was to scrutinize the suitability and necessity of the existing data for this SMCDA study. Most of the available data sets were of sufficient resolution and quality. Input data necessity was evaluated qualitatively for each data set based on, e.g., constraint coverage and attribute weights.
Attribute quality was estimated mainly qualitatively in terms of attribute comprehensiveness, operationality, measurability, completeness, decomposability, minimality and redundancy. The most significant quality issue was redundancy, as WLC does not tolerate interdependencies and AHP includes no means of detecting them. The third aim was to define the least risky areas for wind power development within the study area. The two highest-ranking areas were Nordanå-Lövböle and Påvalsby, followed by Helgeboda, Degerdal, Pungböle, Björkboda, and Östanå-Labböle. The fourth aim was to assess the reliability of the recommendation: the two top-ranking areas proved robust, whereas the others were more sensitive.
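The WLC aggregation at the core of the methodology is a per-cell weighted sum of standardized criterion scores, zeroed out wherever a constraint applies. The criteria and weights below are hypothetical stand-ins for the AHP-derived expert weights used in the study.

```python
# Hypothetical criteria and AHP-style weights (must sum to 1)
WEIGHTS = {"wind_speed": 0.45, "grid_distance": 0.25,
           "road_distance": 0.15, "landscape": 0.15}

def wlc_score(cell, constraints_ok=True):
    """Suitability of one raster cell via weighted linear combination:
    0 if a constraint excludes the cell, otherwise the weighted sum of
    criterion scores standardized to [0, 1]."""
    if not constraints_ok:
        return 0.0
    return sum(WEIGHTS[c] * cell[c] for c in WEIGHTS)

cell = {"wind_speed": 0.9, "grid_distance": 0.6,
        "road_distance": 0.8, "landscape": 0.4}
print(round(wlc_score(cell), 3))  # 0.735
```

Because the score is a plain weighted sum, correlated criteria are effectively double-counted, which is exactly the redundancy problem the abstract flags as the most significant data quality issue.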
Abstract:
Since the late 1990s, a group of moral doctrines called prioritarianism has received considerable interest from moral philosophers, many of whom are attracted to it to such an extent that they can be called prioritarians. In this book, however, I reject prioritarianism, including not only "pure" prioritarianism but also hybrid prioritarian views which mix one or more non-prioritarian elements with prioritarianism. This book largely revolves around certain problems and complications of prioritarianism and its particular forms. Those problems and complications are connected to risk, impartiality, the arbitrariness of prioritarian weightings and possible future individuals. On the one hand, I challenge prioritarianism through targeted objections to various specific forms of prioritarianism. All those targeted objections are connected to risk or possible future individuals. It seems to me that together they give good grounds for believing that prioritarianism is not the way to go. On the other hand, I challenge prioritarianism by pointing out and discussing certain general problems of prioritarianism. Those general problems are connected to impartiality and the arbitrariness of prioritarian weightings. They may give additional grounds for believing that all prioritarian views should be rejected. Prioritarianism can be seen as a type of weighted utilitarianism and thus as an extension of utilitarianism. Utilitarianism is morally ultimately concerned, and morally ultimately concerned only, with some kind of maximization of utility or expected utility. Prioritarianism, on the other hand, is morally ultimately concerned, and morally ultimately concerned only, with some kind of maximization of priority-weighted utility, expected priority-weighted utility or priority-weighted expected utility. Thus prioritarianism, unlike utilitarianism, is a distribution-sensitive moral view.
Besides rejecting prioritarianism, I also reject various other distribution-sensitive moral views in this book. However, I do not reject distribution-sensitivity in morality, as I end up endorsing a type of distribution-sensitive hybrid utilitarianism which mixes non-utilitarian elements with utilitarianism.
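The contrast between utilitarian and priority-weighted maximization described above can be made concrete with a toy calculation. The square-root weighting below is one arbitrary choice of strictly concave priority function among many, which is itself part of the arbitrariness objection the book discusses; the utility numbers are invented.

```python
import math

def utilitarian_value(utilities):
    """Utilitarianism: the moral value of an outcome is total utility."""
    return sum(utilities)

def prioritarian_value(utilities, weight=math.sqrt):
    """Prioritarianism: total priority-weighted utility. A strictly
    concave weighting gives extra moral weight to benefits accruing
    to the worse off."""
    return sum(weight(u) for u in utilities)

equal, unequal = [50, 50], [90, 10]
# utilitarianism is indifferent between the two distributions;
# prioritarianism prefers the equal one
print(utilitarian_value(equal) == utilitarian_value(unequal))   # True
print(prioritarian_value(equal) > prioritarian_value(unequal))  # True
```

This is the distribution-sensitivity the abstract refers to: holding total utility fixed, the priority-weighted sum still changes with how that utility is spread across individuals.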
Abstract:
This thesis presents an analysis of the recently enacted Russian renewable energy policy based on a capacity mechanism. Given its novelty and limited coverage in the academic literature, the aim of the thesis is to analyze the capacity mechanism's influence on investors' decision-making process. The research introduces a number of approaches to investment analysis. Firstly, a classical financial model was built with Microsoft Excel® and crisp efficiency indicators such as net present value were determined. Secondly, a sensitivity analysis was performed to understand the influence of different factors on project profitability. Thirdly, the Datar-Mathews method was applied: by means of a Monte Carlo simulation implemented with Matlab Simulink®, it disclosed all possible outcomes of the investment project and enabled real-option thinking. Fourthly, the previous analysis was replicated with the fuzzy pay-off method in Microsoft Excel®. Finally, the decision-making process under the capacity mechanism was illustrated with a decision tree. Capacity remuneration, paid over 15 years, is calculated individually for each RE project as a variable annuity that guarantees a particular return on investment, adjusted for changes in national interest rates. The analysis results indicate that the capacity mechanism creates a real option to invest in a renewable energy project by ensuring project profitability regardless of market conditions, provided that project-internal factors are managed properly. The latter means keeping capital expenditures within the set limits, maintaining production performance above 75 % of the target indicators, and fulfilling the localization requirement, i.e. producing equipment and services within the country. The occurrence of this real option shapes the decision-making process in the following way. Initially, the investor should identify an appropriate location for the planned power plant where high production performance can be achieved, and lock in this location in case of competition.
Then the investor should wait until the capital cost limit and the localization requirement can be met, after which the decision to invest can be made without risk to project profitability. With respect to technology type, investment in a solar PV power plant is more attractive than in wind or small hydro power, since it has a higher weighted net present value and a lower standard deviation. However, this does not change the decision-making strategy, which remains the same for each technology type. The fuzzy pay-off method proved its ability to disclose the same patterns of information as the Monte Carlo simulation. Being effective in investment analysis under uncertainty and easy to use, it can be recommended as a sufficient analytical tool for investors and researchers. Apart from the results described, this thesis contributes to the academic literature with a detailed description of the capacity price calculation for renewable energy, which was not previously available in English. With respect to methodological novelty, advanced approaches such as the Datar-Mathews method and the fuzzy pay-off method are applied on top of an investment profitability model that also incorporates the capacity remuneration calculation. A comparison of the effects of two different RE support schemes, the Russian capacity mechanism and a feed-in premium, contributes to comparative policy studies and yields useful inferences for researchers and policymakers. The limitations of this research are the simplification of assumptions to a country-average level, which restricts the ability to analyze renewable energy investment by region, and the restriction of the studied policy to the wholesale power market, which leaves retail markets and remote areas, and thus medium and small investments in renewable energy, outside the research focus. Addressing these limitations would allow a full picture of the Russian renewable energy investment profile to be created.
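The core of the Datar-Mathews valuation used above is simple: simulate the NPV distribution, truncate losses at zero (the option not to invest), and average. The sketch below uses a plain normal NPV distribution with made-up parameters rather than the thesis's full profitability model, and Python's standard library in place of Matlab Simulink®.

```python
import random

def datar_mathews_rov(npv_samples):
    """Datar-Mathews real option value: the mean of the positive part of
    the simulated NPV distribution. Truncating losses at zero reflects
    the investor's option to walk away from unprofitable outcomes."""
    return sum(max(v, 0.0) for v in npv_samples) / len(npv_samples)

random.seed(42)
# hypothetical NPV distribution of a renewable project (arbitrary units)
samples = [random.gauss(mu=20.0, sigma=60.0) for _ in range(100_000)]

rov = datar_mathews_rov(samples)
mean_npv = sum(samples) / len(samples)
# the option value exceeds the plain expected NPV because downside
# outcomes are truncated rather than averaged in
print(round(mean_npv, 1), round(rov, 1))
```

The fuzzy pay-off method reaches a comparable figure from a possibilistic pay-off distribution instead of Monte Carlo samples, which is why the thesis can use the two methods as cross-checks.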
Abstract:
The objective of this thesis is to develop and further generalize a differential evolution based data classification method. For many years, evolutionary algorithms have been successfully applied to many classification tasks. Evolutionary algorithms are population-based, stochastic search algorithms that mimic natural selection and genetics. Differential evolution is an evolutionary algorithm that has gained popularity because of its simplicity and good observed performance. In this thesis a differential evolution classifier with a pool of distances is proposed, demonstrated and initially evaluated. The differential evolution classifier is a nearest prototype vector based classifier that applies a global optimization algorithm, differential evolution, to determine the optimal values for all free parameters of the classifier model during its training phase. The differential evolution classifier, which applies an individually optimized distance measure to each new data set to be classified, is here generalized to cover a pool of distances. Instead of optimizing a single distance measure for the given data set, the selection of the optimal distance measure from a predefined pool of alternative measures is attempted systematically and automatically. Furthermore, instead of only selecting the optimal distance measure from a set of alternatives, an attempt is made to optimize the values of the possible control parameters associated with the selected distance measure. Specifically, a pool of alternative distance measures is first created and then the differential evolution algorithm is applied to select the optimal distance measure that yields the highest classification accuracy with the current data. After determining the optimal distance measures for the given data set together with their optimal parameters, all determined distance measures are aggregated to form a single total distance measure.
The total distance measure is applied to the final classification decisions. The actual classification process is still based on the nearest prototype vector principle; a sample belongs to the class represented by the nearest prototype vector when measured with the optimized total distance measure. During the training process the differential evolution algorithm determines the optimal class vectors, selects the optimal distance metrics, and determines the optimal values for the free parameters of each selected distance measure. The results obtained with the above method confirm that the choice of distance measure is one of the most crucial factors for obtaining high classification accuracy. The results also demonstrate that it is possible to build a classifier that selects the optimal distance measure for the given data set automatically and systematically. After the optimal distance measures and their parameters have been found, the distances are aggregated to form a total distance, which is used to measure the deviation between the class vectors and the samples and thus to classify the samples. This thesis also discusses two types of aggregation operators, namely ordered weighted averaging (OWA) based multi-distances and generalized ordered weighted averaging (GOWA). These aggregation operators were applied in this work to the aggregation of the normalized distance values. The results demonstrate that a proper combination of aggregation operator and weight generation scheme plays an important role in obtaining good classification accuracy. The main outcomes of the work are six new generalized versions of the differential evolution classifier, all of which demonstrated good results in the classification tasks.
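The OWA operator mentioned above can be sketched in a few lines: unlike a plain weighted average, the weights attach to the rank positions of the sorted values, not to fixed inputs. GOWA adds an exponent parameter and is omitted here; the distance values are made-up examples.

```python
def owa(values, weights):
    """Ordered weighted averaging: weights are applied to the values
    after sorting them in descending order, so the first weight always
    hits the largest value, the last weight the smallest."""
    assert abs(sum(weights) - 1.0) < 1e-9, "OWA weights must sum to 1"
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

distances = [0.2, 0.9, 0.5]  # e.g. normalized distances from several measures
print(owa(distances, [1.0, 0.0, 0.0]))            # 0.9 (acts as max)
print(owa(distances, [0.0, 0.0, 1.0]))            # 0.2 (acts as min)
print(round(owa(distances, [1/3, 1/3, 1/3]), 4))  # arithmetic mean
```

By tuning the weight vector, OWA spans the whole range from min through mean to max, which is why the weight generation scheme matters so much for the resulting classification accuracy.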
Abstract:
Companies require information in order to gain an improved understanding of their customers. Data concerning customers, their interests and behavior are collected through different loyalty programs. The amount of data stored in company databases has increased exponentially over the years and has become difficult to handle. This research area is the subject of much current interest, not only in academia but also in practice, as shown by the many magazines and blogs covering topics such as getting to know your customers, Big Data, information visualization, and data warehousing. In this Ph.D. thesis, the Self-Organizing Map and two extensions of it, the Weighted Self-Organizing Map (WSOM) and the Self-Organizing Time Map (SOTM), are used as data mining methods for extracting information from large amounts of customer data. The thesis focuses on how data mining methods can be used to model and analyze customer data in order to gain an overview of the customer base, as well as for analyzing niche markets. The thesis uses real-world customer data to create models for customer profiling. The models built were evaluated by CRM experts from the retailing industry, who considered the information gained with the help of the models to be valuable and useful for decision making and for strategic planning.
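The basic Self-Organizing Map that the WSOM and SOTM extend can be sketched compactly: a grid of prototype vectors is repeatedly pulled toward the data, with a neighbourhood that shrinks over time so the map first organizes globally and then fine-tunes locally. This is a generic textbook sketch, not the thesis's implementation, and the two "customer segments" are artificial toy data.

```python
import math
import random

def train_som(data, grid_w=4, grid_h=4, epochs=30, lr=0.5, seed=0):
    """Minimal 2-D Self-Organizing Map: each grid node holds a prototype
    vector; for every sample, the best-matching unit (BMU) and its grid
    neighbours are pulled toward the sample, with decaying radius and rate."""
    rng = random.Random(seed)
    dim = len(data[0])
    nodes = {(i, j): [rng.random() for _ in range(dim)]
             for i in range(grid_w) for j in range(grid_h)}
    for epoch in range(epochs):
        radius = max(grid_w, grid_h) / 2 * (1 - epoch / epochs) + 0.5
        alpha = lr * (1 - epoch / epochs)
        for x in data:
            # BMU = node whose prototype is nearest to the sample
            bmu = min(nodes, key=lambda n: sum((a - b) ** 2
                                               for a, b in zip(nodes[n], x)))
            for n, w in nodes.items():
                d = math.dist(n, bmu)  # distance on the grid, not in data space
                if d <= radius:
                    h = math.exp(-d * d / (2 * radius * radius))
                    for k in range(dim):
                        w[k] += alpha * h * (x[k] - w[k])
    return nodes

# two artificial "customer segments"; after training, their BMUs land
# in different regions of the map
data = [[0.1, 0.1], [0.15, 0.05], [0.9, 0.9], [0.85, 0.95]]
som = train_som(data)
```

On real customer data, inspecting which customers map to which grid region is what yields the profiling overview the thesis describes; the WSOM adds per-variable weighting and the SOTM adds a time axis on top of this same mechanism.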
Abstract:
The computer game industry has grown steadily for years, and in revenues it is comparable to the music and film industries. The industry has been moving to digital distribution. Computer gaming and the concept of the business model are discussed both among industry practitioners and in the scientific community. The significance of the business model concept has increased in the scientific literature recently, although the concept itself is still much debated. This thesis studies the role of the business model in the computer game industry. Computer game developers, designers, project managers and organization leaders in 11 computer game companies were interviewed. The data was analyzed to identify the important elements of the computer game business model, how the business model concept is perceived, and how the growth of the organization affects the business model. Human capital was identified as crucial to the business. As games are partly a product of creative thinking, innovation and the creative process are also highly valued; the same applies to technical skills in performing various activities. Marketing and customer relationships are likewise considered key elements of the computer game business model. Financing and partners are especially important for startups, when the organization is dependent on external funding and third-party assets. The results of this study provide organizations with an improved understanding of how the organization is built and which business model elements are weighted.
Abstract:
Wind power is a rapidly developing, low-emission form of energy production. In Finland, the official objective is to increase wind power capacity from the current 1 005 MW to 3 500–4 000 MW by 2025. By the end of April 2015, the total capacity of all wind power projects being planned in Finland had surpassed 11 000 MW. As the number of projects in Finland is at a record high, an increasing amount of infrastructure is also being planned and constructed. Traditionally, these planning operations are conducted using manual and labor-intensive work methods that are prone to subjectivity. This study introduces a GIS-based methodology for determining optimal paths to support the planning of onshore wind park infrastructure alignment in the Nordanå-Lövböle wind park located on the island of Kemiönsaari in Southwest Finland. The presented methodology utilizes a least-cost path (LCP) algorithm to search for optimal paths within a high-resolution real-world terrain dataset derived from airborne lidar scanning. In addition, planning data is used to provide a realistic planning framework for the analysis. In order to produce realistic results, the physiographic and planning datasets are standardized and weighted according to qualitative suitability assessments, utilizing methods and practices offered by multi-criteria evaluation (MCE). The results are presented as scenarios corresponding to various planning objectives. Finally, the methodology is documented using tools of Business Process Management (BPM). The results show that the presented methodology can be used effectively to search for and identify extensive, 20 to 35 kilometer long networks of paths that correspond to certain optimization objectives in the study area. The utilization of high-resolution terrain data produces a more objective and more detailed path alignment plan.
This study demonstrates that the presented methodology can be practically applied to support a wind power infrastructure alignment planning process. The six-phase structure of the methodology allows straightforward incorporation of different optimization objectives. The methodology responds well to combining quantitative and qualitative data. Additionally, the careful documentation presents an example of how the methodology can be evaluated and developed as a business process. This thesis also shows that more emphasis on the research of algorithm-based, more objective methods for the planning of infrastructure alignment is desirable, as technological development has only recently started to realize the potential of these computational methods.
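The least-cost path search at the heart of this methodology can be sketched as Dijkstra's algorithm over a raster cost surface. This is a simplified illustration, not the thesis workflow: the 3×3 cost grid below stands in for an MCE-weighted combination of terrain and planning criteria, and real GIS implementations typically use 8-connected neighborhoods and distance-weighted costs.

```python
import heapq

def least_cost_path(cost, start, goal):
    """Dijkstra over a raster grid: entering a cell adds that cell's cost."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}   # accumulated cost includes the start cell
    prev, pq = {}, [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue                            # stale queue entry
        for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:  # 4-connected neighbors
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [goal], goal                   # backtrack from goal to start
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# toy cost surface: low-cost corridor down the middle column (values illustrative)
cost = [[1, 1, 5],
        [5, 1, 5],
        [5, 1, 1]]
path, total = least_cost_path(cost, (0, 0), (2, 2))
print(total)  # → 5, via the middle-column corridor
```

Scaling the same idea to a lidar-derived raster of millions of cells, with the cost of each cell set by the standardized and weighted MCE criteria, yields path networks of the kind described in the abstract.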