20 results for Computational Intelligence in data-driven and hybrid Models and Data Analysis
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
The amount of installed wind power has been growing exponentially during the past ten years. As wind turbines have become a significant source of electrical energy, the interactions between the turbines and the electric power network need to be studied more thoroughly than before. In particular, the behavior of the turbines in fault situations is of prime importance; simply disconnecting all wind turbines from the network during a voltage drop is no longer acceptable, since this would contribute to a total network collapse. These requirements have contributed to the increased role of simulations in the study and design of the electric drive train of a wind turbine. When planning a wind power investment, the selection of the site and the turbine is crucial for the economic feasibility of the installation. Economic feasibility, on the other hand, is the factor that determines whether or not investment in wind power will continue, contributing to green electricity production and the reduction of emissions. In the selection of the installation site and the turbine (siting and site matching), the properties of the electric drive train of the planned turbine have so far generally not been taken into account. Additionally, although the loss minimization of some of the individual components of the drive train has been studied, the drive train as a whole has received less attention. Furthermore, as a wind turbine will typically operate at a power level below nominal most of the time, efficiency analysis at the nominal operating point is not sufficient. This doctoral dissertation attempts to combine the two aforementioned areas of interest by studying the applicability of time domain simulations in the analysis of the economic feasibility of a wind turbine.
The utilization of a general-purpose time domain simulator, otherwise applied to the study of network interactions and control systems, in the economic analysis of the wind energy conversion system is studied. The main benefits of the simulation-based method over traditional methods based on analytic calculation of losses include the ability to reuse and recombine existing models; the ability to analyze interactions between the components and subsystems in the electric drive train (something which is impossible when considering different subsystems as independent blocks, as is commonly done in the analytical calculation of efficiencies); the ability to analyze in a rather straightforward manner the effect of selections other than physical components, for example control algorithms; and the ability to verify assumptions about the effects of a particular design change on the efficiency of the whole system. Based on the work, it can be concluded that differences between two configurations can be seen in the economic performance with only minor modifications to the simulation models used in the network interaction and control method study. This eliminates the need for developing analytic expressions for losses and enables the study of the system as a whole instead of modeling it as a series connection of independent blocks with no loss interdependencies. Three example cases (site matching, component selection, control principle selection) are provided to illustrate the usage of the approach and analyze its performance.
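The site matching idea above — weighting a turbine's power output and drive train efficiency by the wind speed distribution of a candidate site — can be sketched numerically. This is the traditional analytic baseline the dissertation's simulation-based method is contrasted with, not the method itself; the Weibull parameters, the 2 MW power curve, and both efficiency profiles below are hypothetical.

```python
import math

def weibull_pdf(v, k=2.0, c=8.0):
    """Weibull wind speed density (k: shape, c: scale in m/s) - assumed site data."""
    return (k / c) * (v / c) ** (k - 1) * math.exp(-((v / c) ** k))

def annual_energy_mwh(power_curve, efficiency, k=2.0, c=8.0, dv=0.1):
    """Integrate electrical power against the wind speed distribution over
    8760 hours per year; result in MWh."""
    energy_kw, v = 0.0, 0.0
    while v <= 30.0:
        energy_kw += power_curve(v) * efficiency(v) * weibull_pdf(v, k, c) * dv
        v += dv
    return energy_kw * 8760 / 1000

def power_curve(v):
    """Hypothetical 2 MW turbine: cubic ramp from cut-in (3 m/s) to rated (12 m/s)."""
    if v < 3 or v > 25:
        return 0.0
    if v >= 12:
        return 2000.0
    return 2000.0 * ((v - 3) / (12 - 3)) ** 3

def flat_eff(v):
    return 0.94              # drive train efficiency assumed constant

def part_load_eff(v):
    return 0.94 if v >= 10 else 0.85   # assumed lower efficiency at part load

print(annual_energy_mwh(power_curve, flat_eff))
print(annual_energy_mwh(power_curve, part_load_eff))
```

Because the turbine runs below rated power most of the time, the two efficiency profiles yield visibly different annual yields — the point made above about nominal-point analysis being insufficient.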
Abstract:
Identification of low-dimensional structures and main sources of variation from multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model, where ridges of the density estimated from the data are considered as relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically by using Gaussian kernels. This allows application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge finding methods are adapted to two different applications. The first one is extraction of curvilinear structures from noisy data mixed with background clutter. The second one is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications, where most of the earlier approaches are inadequate. Examples include identification of faults from seismic data and identification of filaments from cosmological data. Applicability of the nonlinear PCA to climate analysis and reconstruction of periodic patterns from noisy time series data are also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into Euclidean space.
The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but also has potential applications in graph theory and various areas of physics, chemistry and engineering. Asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated when the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
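As a rough illustration of the kernel-density viewpoint above (not the thesis's trust region Newton method): a 0-dimensional ridge of a Gaussian kernel density is a mode, and the classical mean-shift iteration converges to one. The data set, starting point, and bandwidth below are invented for the sketch.

```python
import numpy as np

def mean_shift_mode(x, data, h, tol=1e-8, max_iter=500):
    """Iterate the mean-shift fixed point: x <- Gaussian-weighted mean of the
    data. Converges to a local maximum (mode) of the Gaussian KDE."""
    for _ in range(max_iter):
        w = np.exp(-np.sum((data - x) ** 2, axis=1) / (2 * h ** 2))
        x_new = w @ data / w.sum()
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

rng = np.random.default_rng(0)
data = rng.normal([0.0, 0.0], 0.3, size=(200, 2))   # one cluster near the origin
mode = mean_shift_mode(np.array([1.0, 1.0]), data, h=0.5)
print(mode)   # lands near the cluster centre
```

Ridge tracing in more than zero dimensions needs the gradient and Hessian of the KDE, which is where the trust region machinery of the thesis comes in; this sketch only covers the simplest case.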
Abstract:
The main strengths of professional knowledge-intensive business services (P-KIBS) are knowledge and creativity, which need to be fostered, maintained and supported. The process of managing P-KIBS companies deals with financial, operational and strategic risks, which is why it is reasonable to apply risk management techniques and frameworks in this context. A significant challenge lies in choosing reasonable ways of implementing risk management that will not limit creative ability in the organization and will, furthermore, contribute to the process. This choice is related to a risk-intelligent approach, which becomes a justified way of finding the required balance. On a theoretical level, the field of managing both creativity and risk intelligence as a balanced process remains understudied, in particular within the KIBS industry. For instance, there appears to be a wide range of separate models for innovation and risk management, but very little discussion of trying to find the right balance between them. This study aims to shed light on the importance of a well-managed combination of these concepts. The research purpose of the present study is to find out how the balance between creativity and risk intelligence can be managed in P-KIBS. The methodological approach utilized in the study is strictly conceptual, without empirical aspects. The research purpose can be achieved by answering the following supporting research questions: 1. What are the characteristics and role of creativity as a component of the innovation process in a P-KIBS company? 2. What are the characteristics and role of risk intelligence as an approach towards risk management process implementation in a P-KIBS company? 3. How can risk intelligence and creativity be balanced in P-KIBS? The main theoretical contribution of the study lies in the proposed creativity and risk intelligence stage process framework. It is designed as an algorithm that can be applied on an organizational canvas.
It consists of several distinct stages specified by the actors involved, their roles and implications. An additional stage-wise description provides detailed tasks for each of the enterprise levels, while combining the strategies into one. The insights derived from the framework can be utilized by a wide range of specialists, from strategists to risk managers, and from innovation managers to entrepreneurs. Any business that designs and delivers knowledge services can potentially gain valuable insights and expand its conceptual understanding from the present report. Risk intelligence in the current study is a unique way of emphasizing the role of creativity in the professional knowledge-intensive industry and a worthwhile technique for making sound decisions about risks.
Abstract:
Centrifugal compressors are widely used, for example, in the process industry, in the oil and gas industry, and in small gas turbines and turbochargers. In order to achieve lower energy consumption and operation costs, the efficiency of the compressor needs to be improved. In the present work, different pinches and low solidity vaned diffusers were utilized in order to improve the efficiency of a medium-size centrifugal compressor. In this study, pinch means the decrement of the diffuser flow passage height. First, different geometries were analyzed using computational fluid dynamics. The flow solver Finflo, a Navier-Stokes solver capable of solving compressible, incompressible, steady and unsteady flow fields, was used to solve the flow field. Chien's k-ε turbulence model was used. One of the numerically investigated pinched diffusers and one low solidity vaned diffuser were studied experimentally. The overall performance of the compressor and the static pressure distribution before and after the diffuser were measured. The flow entering and leaving the diffuser was measured using a three-hole Cobra probe and Kiel probes. The pinch and the low solidity vaned diffuser increased the efficiency of the compressor. The highest isentropic efficiency increment obtained was 3% of the design isentropic efficiency of the original geometry. It was noticed in the numerical results that the pinch made to both the hub and the shroud wall was most beneficial to the operation of the compressor. Also, the pinch made to the hub was better than the pinch made to the shroud. The pinch did not affect the operation range of the compressor, but the low solidity vaned diffuser slightly decreased the operation range. The unsteady phenomena in the vaneless diffuser were studied experimentally and numerically. The unsteady static pressure was measured at the diffuser inlet and outlet, and a time-accurate numerical simulation was conducted.
The unsteady static pressure showed that most of the pressure variations lay at the passing frequency of every second blade. The pressure variations did not vanish in the diffuser and were visible at the diffuser outlet. However, the amplitude of the pressure variations decreased in the diffuser. The time-accurate calculations showed quite good agreement with the measured data. The agreement was very good at the design operation point, even though the computational grid was not dense enough in the volute and in the exit cone. The time-accurate calculation over-predicted the amplitude of the pressure variations at high flow.
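For reference, the isentropic efficiency quoted above compares the ideal (isentropic) temperature rise across the stage to the measured one. A minimal sketch, using a hypothetical operating point rather than the thesis's measured data:

```python
def isentropic_efficiency(pr, t_in, t_out, gamma=1.4):
    """Total-to-total isentropic efficiency of a compressor stage:
    ideal temperature rise for pressure ratio `pr` over the actual rise.
    Temperatures in kelvin; gamma is the ratio of specific heats for air."""
    t_out_ideal = t_in * pr ** ((gamma - 1) / gamma)
    return (t_out_ideal - t_in) / (t_out - t_in)

# Hypothetical operating point: pressure ratio 1.8, 293 K inlet, 358 K outlet
eta = isentropic_efficiency(1.8, 293.0, 358.0)
print(round(eta, 3))
```

A 3% increment of the design value, as reported above, would correspond to a shift of a few hundredths in this ratio.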
Abstract:
This thesis work deals with a mathematical description of flow in a polymeric pipe and in a specific peristaltic pump. The study involves fluid-structure interaction analysis in the presence of complex turbulent flows, treated in an arbitrary Lagrangian-Eulerian (ALE) framework. The flow simulations are performed in COMSOL 4.4, as a 2D axisymmetric model, and in ABAQUS 6.14.1, as a 3D model with symmetric boundary conditions. In COMSOL, the fluid and structure problems are coupled by a monolithic algorithm, while the ABAQUS code links the ABAQUS CFD and ABAQUS Standard solvers with a single block-iterative partitioned algorithm. For the turbulent features of the flow, the fluid model in both codes is the RNG k-ϵ model. The structural model is described, on the basis of the pipe material, by elastic models or hyperelastic Neo-Hookean models with Rayleigh damping properties. When describing the pulsatile fluid flow after the pumping process, the available data are often insufficient for the fluid problem: engineering measurements are normally able to provide only the average pressure or velocity at a cross-section. This problem has been analyzed since the 1950s, in the work of McDonald and Womersley, by Fourier analysis of the average pressure at a fixed cross-section, while nowadays sophisticated techniques including finite elements and finite volumes exist to study the flow. Finally, we set up peristaltic pipe simulations in the ABAQUS code, using the same models previously tested for the fluid and the structure.
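The Fourier-analysis idea attributed to McDonald and Womersley — decomposing a periodic cross-section pressure into a mean plus harmonics of the pumping frequency — can be sketched numerically. The waveform below is synthetic; the pump frequency and harmonic amplitudes are assumptions, not data from the thesis.

```python
import numpy as np

# Synthetic pulsatile pressure: mean + two harmonics of the pumping frequency
f0 = 1.2                                            # Hz, hypothetical pump frequency
t = np.linspace(0, 1 / f0, 256, endpoint=False)     # exactly one period
p = (100
     + 12 * np.cos(2 * np.pi * f0 * t)
     + 4 * np.cos(2 * np.pi * 2 * f0 * t - 0.7))

# One-sided DFT over one period recovers the harmonic amplitudes exactly
coeffs = np.fft.rfft(p) / len(t)
amps = np.abs(coeffs)
amps[1:] *= 2            # double the AC terms of the one-sided spectrum
print(amps[:3])          # mean, first and second harmonic amplitudes
```

Given such harmonic amplitudes and phases, the Womersley solution reconstructs the velocity profile per harmonic; the sketch stops at the decomposition step.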
Abstract:
One of the most central tasks in the statistical analysis of mathematical models is the estimation of the models' unknown parameters. This master's thesis is concerned with the distributions of unknown parameters and with numerical methods suitable for constructing them, especially in cases where the model is nonlinear in its parameters. Among the various numerical methods, the main emphasis is on Markov chain Monte Carlo (MCMC) methods. These computationally intensive methods have recently grown in popularity, mainly because of increased computing power. The theory of both Markov chains and Monte Carlo simulation is presented to the extent needed to justify the validity of the methods. Among recently developed methods, adaptive MCMC methods in particular are examined. The approach of the thesis is practical, and various issues related to the implementation of MCMC methods are emphasized. In the empirical part of the work, the distributions of the unknown parameters of five example models are examined using the methods presented in the theoretical part. The models describe chemical reactions and are expressed as systems of ordinary differential equations. The models were collected from chemists at Lappeenranta University of Technology and Åbo Akademi University in Turku.
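A minimal random-walk Metropolis sampler, the basic building block of the MCMC methods discussed in the thesis, might look as follows. The target here is a toy standard normal log-density standing in for a parameter posterior; the step size and chain length are arbitrary choices, and the adaptive variants the thesis examines would tune the step size on the fly.

```python
import numpy as np

def metropolis(log_post, x0, n_steps, step=2.0, seed=0):
    """Random-walk Metropolis: propose x' = x + step * N(0, 1) and accept
    with probability min(1, post(x') / post(x)), working in log space."""
    rng = np.random.default_rng(seed)
    chain = [x0]
    x, lp = x0, log_post(x0)
    for _ in range(n_steps):
        x_prop = x + step * rng.normal()
        lp_prop = log_post(x_prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept or keep current state
            x, lp = x_prop, lp_prop
        chain.append(x)
    return np.array(chain)

def log_post(x):
    """Toy target: standard normal log-density (posterior stand-in)."""
    return -0.5 * x ** 2

chain = metropolis(log_post, x0=3.0, n_steps=20000)
print(chain[2000:].mean(), chain[2000:].std())   # near 0 and 1 after burn-in
```

For a real chemical-kinetics model the log-posterior would involve solving the ODE system at each proposed parameter value, which is what makes these methods computationally intensive.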
Abstract:
The purpose of this thesis is to study the factors that explain bilateral fiber trade flows. This is done by analyzing bilateral trade flows during 1990-2006. It is also studied whether there are differences between fiber types. The thesis uses a gravity model approach to study the trade flows. The gravity model is mostly used to study aggregate data between trading countries; in this thesis, the gravity model is applied to single fibers, and the model is then applied to a panel data set. Results from the regression clearly show that there are benefits in studying different fibers separately: the effects differ considerably from each other. Furthermore, the thesis supports the existence of the Linder effect in certain fiber types.
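The gravity model is typically estimated in log-linear form, regressing log trade on the log GDPs of the trading partners and log distance. A sketch on synthetic data (all coefficients, distributions, and units invented for illustration — the thesis's panel estimation with fiber-specific effects is more involved):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
gdp_i = rng.lognormal(3, 1, n)        # exporter GDP (hypothetical units)
gdp_j = rng.lognormal(3, 1, n)        # importer GDP
dist = rng.lognormal(6, 0.5, n)       # bilateral distance

# Gravity model: trade = A * gdp_i^b1 * gdp_j^b2 / dist^b3 * noise
trade = 2.0 * gdp_i ** 1.0 * gdp_j ** 0.8 * dist ** -1.1 * rng.lognormal(0, 0.2, n)

# Log-linearize and fit with ordinary least squares
X = np.column_stack([np.ones(n), np.log(gdp_i), np.log(gdp_j), np.log(dist)])
beta, *_ = np.linalg.lstsq(X, np.log(trade), rcond=None)
print(beta)    # estimates of [log A, b1, b2, -b3]
```

Running this regression separately per fiber type, rather than on aggregate trade, is exactly what lets the coefficients differ between fibers as the abstract reports.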
Abstract:
The purpose of this thesis was to study the present state of Business Intelligence in the company unit, that is, how efficiently the unit uses the possibilities of modern information management systems. The aim was to determine how the operative information management of the unit's tender process could be improved by modern information technology applications, making tender processes faster and more efficient. At the beginning, it was essential to become acquainted with the written literature on Business Intelligence. Based on Business Intelligence theory, it was relatively straightforward, but challenging, to examine how the tender business could be improved by Business Intelligence methods. The empirical phase of this study was executed with a qualitative research method, including thematic and informal interviews at the company. Problems and challenges of the tender process were clarified as part of the empirical phase, and a group of challenges was identified when studying the information management of the company unit. Based on theory and interviews, a set of improvements was listed that the company could make in the future when developing its operative processes.
Abstract:
The size and complexity of software development projects are growing very fast. At the same time, the proportion of successful projects is still quite low according to previous research. Although almost every project team knows the main areas of responsibility that would help to finish a project on time and on budget, this knowledge is rarely used in practice. It is therefore important to evaluate the success of existing software development projects and to suggest a method for evaluating the chances of success that can be used in software development projects. The main aim of this study is to evaluate the success of projects in the selected geographical region (Russia-Ukraine-Belarus). The second aim is to compare existing models of success prediction and to determine their strengths and weaknesses. The research was done as an empirical study. A survey with structured forms and theme-based interviews were used as the data collection methods. The information gathering was done in two stages. In the first stage, the project manager or someone with similar responsibilities answered the questions over the Internet. In the second stage, the participant was interviewed, and his or her answers were discussed and refined. This made it possible to get accurate information about each project and to avoid errors. It was found that there are many problems in software development projects. These problems are widely known and have been discussed in the literature many times. The research showed that most of the projects have problems with schedule, requirements, architecture, quality, and budget. A comparison of two models of success prediction showed that The Standish Group model overestimates problems in projects, while McConnell's model can help to identify problems in time and avoid trouble in the future. A framework for evaluating the chances of success in distributed projects was suggested. The framework is similar to The Standish Group model but was customized for distributed projects.
Abstract:
Visual data mining (VDM) tools employ information visualization techniques in order to represent large amounts of high-dimensional data graphically and to involve the user in exploring data at different levels of detail. The users are looking for outliers, patterns and models – in the form of clusters, classes, trends, and relationships – in different categories of data, i.e., financial, business information, etc. The focus of this thesis is the evaluation of multidimensional visualization techniques, especially from the business user’s perspective. We address three research problems. The first problem is the evaluation of projection-based visualizations with respect to their effectiveness in preserving the original distances between data points and the clustering structure of the data. In this respect, we propose the use of existing clustering validity measures. We illustrate their usefulness in evaluating five visualization techniques: Principal Components Analysis (PCA), Sammon’s Mapping, Self-Organizing Map (SOM), Radial Coordinate Visualization and Star Coordinates. The second problem is concerned with evaluating different visualization techniques as to their effectiveness in visual data mining of business data. For this purpose, we propose an inquiry evaluation technique and conduct the evaluation of nine visualization techniques. The visualizations under evaluation are Multiple Line Graphs, Permutation Matrix, Survey Plot, Scatter Plot Matrix, Parallel Coordinates, Treemap, PCA, Sammon’s Mapping and the SOM. The third problem is the evaluation of quality of use of VDM tools. We provide a conceptual framework for evaluating the quality of use of VDM tools and apply it to the evaluation of the SOM. In the evaluation, we use an inquiry technique for which we developed a questionnaire based on the proposed framework. The contributions of the thesis consist of three new evaluation techniques and the results obtained by applying these evaluation techniques. 
The thesis provides a systematic approach to evaluation of various visualization techniques. In this respect, first, we performed and described the evaluations in a systematic way, highlighting the evaluation activities, and their inputs and outputs. Secondly, we integrated the evaluation studies in the broad framework of usability evaluation. The results of the evaluations are intended to help developers and researchers of visualization systems to select appropriate visualization techniques in specific situations. The results of the evaluations also contribute to the understanding of the strengths and limitations of the visualization techniques evaluated and further to the improvement of these techniques.
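One of the evaluation ideas above — measuring how well a projection preserves the original inter-point distances — can be sketched for PCA as the correlation between pairwise distances before and after projection. The data set and this particular measure are illustrative assumptions, not the clustering validity measures proposed in the thesis.

```python
import numpy as np

def pca_project(X, k=2):
    """Project data onto its first k principal components via SVD."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T

def distance_preservation(X, Y):
    """Pearson correlation between pairwise distances in the original space X
    and in the visualization space Y: one simple quality score."""
    def pdist(Z):
        d = Z[:, None, :] - Z[None, :, :]
        m = np.sqrt((d ** 2).sum(-1))
        iu = np.triu_indices(len(Z), 1)
        return m[iu]
    return np.corrcoef(pdist(X), pdist(Y))[0, 1]

rng = np.random.default_rng(0)
# Synthetic data with essentially 2-D structure plus small noise directions
X = rng.normal(size=(100, 5)) @ np.diag([3, 2, 0.3, 0.2, 0.1])
score = distance_preservation(X, pca_project(X, 2))
print(round(score, 3))   # close to 1 when two components dominate
```

The same score could be computed for Sammon's Mapping or SOM output in place of the PCA projection, which is how such measures allow different visualization techniques to be compared.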
Abstract:
The Standard Model of particle physics is currently the best description of fundamental particles and their interactions. All particles save the Higgs boson have been observed in particle accelerator experiments over the years. Despite the predictive power of the Standard Model, there are many phenomena that the scenario does not predict or explain. Among the most prominent dilemmas is the matter-antimatter asymmetry, and much effort has been made in formulating scenarios that accurately predict the correct amount of matter-antimatter asymmetry in the universe. One of the most appealing explanations is baryogenesis via leptogenesis, which not only serves as a mechanism for producing an excess of matter over antimatter but can also explain why neutrinos have very small non-zero masses. Interesting leptogenesis scenarios arise when other candidate theories beyond the Standard Model are brought into the picture. In this thesis, we have studied leptogenesis in an extra-dimensional framework and in a modified version of the supersymmetric Standard Model. The first chapters of this thesis introduce the standard cosmological model, observations of the photon-to-baryon ratio, and the necessary preconditions for successful baryogenesis. Baryogenesis via leptogenesis is then introduced, and its connection to neutrino physics is illuminated. The final chapters concentrate on extra-dimensional theories and supersymmetric models and their ability to accommodate leptogenesis; there, the results of our research are also presented.
Abstract:
Increased emissions of greenhouse gases into the atmosphere are causing anthropogenic climate change. The resulting global warming challenges the ability of organisms to adapt to the new temperature conditions. However, warming is not the only major threat. In marine environments, dissolution of carbon dioxide from the atmosphere causes a decrease in surface water pH, so-called ocean acidification. The temperature and acidification effects can interact and create even larger problems for marine flora and fauna than either effect would cause alone. I have used Baltic calanoid copepods (crustacean zooplankton) as my research object and studied their growth and stress responses using climate predictions projected for the next century. I have studied both direct temperature and pH effects on copepods and indirect effects via their food: the changing phytoplankton spring bloom composition and a toxic cyanobacterium. The main aims of my thesis were: 1) to find out how warming and acidification combined with a toxic cyanobacterium affect copepod reproductive success (egg production, egg viability, egg hatching success, offspring development) and oxidative balance (antioxidant capacity, oxidative damage), and 2) to reveal the possible food quality effects of a spring phytoplankton bloom composition dominated by diatoms or dinoflagellates on reproducing copepods (egg production, egg hatching, RNA:DNA ratio). The two copepod genera used, Acartia sp. and Eurytemora affinis, are the dominating mesozooplankton taxa (0.2 – 2 mm) in my study area, the Gulf of Finland. A temperature of 20°C seems to be within the tolerance limits of Acartia spp., because the copepods can adapt to the temperature phenotypically by adjusting their body size. Copepods are also able to tolerate a pH decrease of 0.4 from present values, but the combination of warm water and decreased pH causes problems for them.
In my studies, the copepod oxidative balance was negatively influenced by the interaction of these two environmental factors, and egg and nauplii production were lower at 20°C and lowered pH than at 20°C and ambient pH. However, the presence of the toxic cyanobacterium Nodularia spumigena improved the copepod oxidative balance and helped the copepods resist the environmental stress in question. In addition, adaptive maternal effects seem to be an important adaptation mechanism in a changing environment, but how much a female copepod can invest in her offspring depends on her condition and diet. I did not find a systematic food quality difference between diatoms and dinoflagellates; there are both good and bad diatom and dinoflagellate species. Instead, the dominating species in the phytoplankton bloom composition has a central role in determining the food quality, although copepods aim at obtaining as balanced a diet as possible by foraging on several species. If the dominating species is of poor quality, it can cause stress when ingested, or lead to non-optimal foraging if rejected. My thesis demonstrates that climate change induced water temperature and pH changes can cause problems for Baltic Sea copepod communities. However, their resilience depends substantially on their diet, and therefore on the response of phytoplankton to the environmental changes. As copepods are an important link in pelagic food webs, their future success can have far-reaching consequences, for example on fish stocks.