888 results for data-driven simulation
Abstract:
This paper presents multiple kernel learning (MKL) regression as an exploratory spatial data analysis and modelling tool. MKL is introduced as an extension of support vector regression in which dedicated kernels divide a given task into sub-problems and treat them separately in an effective way. It gives non-linear robust kernel regression better interpretability at the cost of a more complex numerical optimization. In particular, we investigate the use of MKL as a tool that avoids the need for ad-hoc topographic indices as covariates in statistical models in complex terrain: MKL learns these relationships from the data in a non-parametric fashion. A study on data simulated from real terrain features confirms the ability of MKL to enhance the interpretability of data-driven models and to aid feature selection without degrading predictive performance. We also examine the stability of the MKL algorithm with respect to the number of training samples and the presence of noise. Finally, results from a real case study are presented in which MKL exploits a large set of terrain features, computed at multiple spatial scales, to predict mean wind speed in an Alpine region.
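The core idea, a convex combination of per-feature kernels whose learned weights expose which covariates matter, can be sketched in a few lines. This is a hedged toy, not the paper's method: it substitutes kernel ridge regression for support vector regression, uses a common heuristic weight update, and all data and parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(150, 2))
y = np.sin(2 * X[:, 0]) + 0.05 * rng.normal(size=150)   # only feature 0 informative

def rbf(a, b, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row sets a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# One kernel per input feature; MKL-style weights expose feature relevance.
kernels = [rbf(X[:, [j]], X[:, [j]]) for j in range(2)]
w = np.array([0.5, 0.5])
lam = 1e-2

for _ in range(30):
    K = w[0] * kernels[0] + w[1] * kernels[1]
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)    # kernel ridge fit
    imp = np.array([alpha @ Km @ alpha for Km in kernels])  # ~ ||f_m||^2 per kernel
    w = np.maximum(imp, 1e-12)
    w /= w.sum()                                            # convex combination

print(w)   # weight on the informative feature should dominate
```

The same weighting idea extends directly to kernels built from terrain features at multiple spatial scales, which is where the interpretability claim comes from: near-zero weights flag irrelevant covariates.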
Abstract:
It is estimated that around 230 people die each year in Switzerland due to radon (222Rn) exposure. 222Rn occurs mainly in closed environments such as buildings and originates primarily from the subjacent ground. It therefore depends strongly on geology and shows substantial regional variation. Correct identification of these regional variations would allow a substantial reduction of the population's 222Rn exposure through appropriate construction of new buildings and mitigation of existing ones. Predicting indoor 222Rn concentrations (IRC) and identifying 222Rn-prone areas is difficult, however, since IRC depend on a variety of variables such as building characteristics, meteorology, geology and anthropogenic factors. The present work aims at developing predictive models and understanding IRC in Switzerland, taking into account as much information as possible in order to minimize prediction uncertainty. The predictive maps will be used as a decision-support tool for 222Rn risk management. The models are built with different data-driven statistical methods in combination with geographical information systems (GIS). In a first phase we performed univariate analyses of IRC for several variables, namely detector type, building category, foundation, year of construction, average outdoor temperature during measurement, altitude and lithology. All variables showed significant associations with IRC. Buildings constructed after 1900 showed significantly lower IRC than earlier constructions, and we observed a further drop in IRC after 1970. We also found an association between IRC and altitude. With regard to lithology, the lowest IRC occurred in sedimentary rocks (excluding carbonates) and sediments, and the highest in the Jura carbonates and igneous rock. The IRC data were systematically analyzed for potential bias due to spatially unbalanced sampling of measurements.
To facilitate the modeling and interpretation of the influence of geology on IRC, we developed an algorithm based on k-medoids clustering which makes it possible to define geological classes that are coherent in terms of IRC. We also carried out a soil-gas 222Rn concentration (SRC) measurement campaign to determine the predictive power of SRC with respect to IRC, and found that the use of SRC for IRC prediction is limited. The second part of the project was dedicated to predictive mapping of IRC using models that take into account the multidimensionality of the process of 222Rn entry into buildings. We used kernel regression and ensemble regression trees for this purpose and could explain up to 33% of the variance of the log-transformed IRC over all of Switzerland, a good performance compared with former attempts at IRC modeling in Switzerland. As predictor variables we considered geographical coordinates, altitude, outdoor temperature, building type, foundation, year of construction and detector type. Ensemble regression trees such as random forests make it possible to determine the role of each IRC predictor in a multidimensional setting. We found that spatial information such as geology, altitude and coordinates has a stronger influence on IRC than building-related variables such as foundation type, building type and year of construction. Based on kernel estimation, we developed an approach to determine the local probability of IRC exceeding 300 Bq/m3, together with a confidence index that provides an estimate of the uncertainty of the map. All methods allow easy creation of tailor-made maps for different building characteristics. Our work is an essential step towards a 222Rn risk assessment that simultaneously accounts for different architectural situations as well as geological and geographical conditions. For communicating the 222Rn hazard to the population, we recommend using the probability map based on kernel estimation.
The communication of the 222Rn hazard could, for example, be implemented via a web interface where users specify the characteristics and coordinates of their home in order to obtain the probability of exceeding a given IRC, with a corresponding index of confidence. Given the health effects of 222Rn, our results have the potential to substantially improve the estimation of the effective dose from 222Rn delivered to the Swiss population.
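The local exceedance probability described above can be illustrated with a Nadaraya-Watson-style kernel estimate. This is a generic sketch, not the authors' exact estimator; the coordinates, bandwidth, and synthetic IRC field are all invented for illustration.

```python
import numpy as np

def exceedance_probability(coords, irc, query, bandwidth=5.0, threshold=300.0):
    """Kernel-weighted local probability that IRC exceeds a threshold.

    coords: (n, 2) measurement locations (km); irc: (n,) indoor radon
    concentrations in Bq/m3; query: (2,) prediction location.
    """
    d2 = np.sum((coords - query) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / bandwidth**2)          # Gaussian spatial kernel
    return float(np.sum(w * (irc > threshold)) / np.sum(w))

rng = np.random.default_rng(1)
coords = rng.uniform(0, 100, size=(500, 2))
# Synthetic IRC field: concentrations rise towards the "east" of the toy region.
irc = rng.lognormal(mean=4.0 + 0.02 * coords[:, 0], sigma=0.8)

p_east = exceedance_probability(coords, irc, np.array([90.0, 50.0]))
p_west = exceedance_probability(coords, irc, np.array([10.0, 50.0]))
print(p_east, p_west)   # higher exceedance probability in the east
```

Conditioning the weights (or the data subset) on building characteristics would give the tailor-made maps mentioned above.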
Abstract:
Granular flow phenomena are frequently encountered in the design of process and industrial plants in the traditional fields of the chemical, nuclear and oil industries, as well as in other activities such as food and materials handling. Multi-phase flow is one important branch of granular flow. Granular materials behave unusually compared with normal materials, either solids or fluids. Although some of their characteristics are still not well understood, one thing is confirmed: particle-particle interaction plays a key role in the dynamics of granular materials, especially dense ones. The first part of this thesis presents in detail the development of two models describing this interaction, based on the results of finite-element simulation, dimensional analysis and numerical simulation. The first model describes the normal collision of viscoelastic particles. Building on existing models, additional parameters are introduced that make the model predict experimental results more accurately. The second model addresses oblique collision and includes the effects of tangential velocity, angular velocity and surface friction based on Coulomb's law. Its theoretical predictions agree with those of finite-element simulation. In the later chapters of the thesis, the models are used to predict industrial granular flows, and the agreement between simulations and experiments further validates the new models. The first case presents the simulation of granular flow passing over a circular obstacle. The simulations successfully predict the existence of a parabolic steady layer and show how particle characteristics, such as the coefficients of restitution and surface friction, affect the separation results. The second case is a spinning container filled with granular material.
Employing the previous models, the simulation could also reproduce experimentally observed phenomena, such as a depression forming in the centre at high rotation frequencies. The third application concerns gas-solid mixed flow in a vertically vibrated device, where the gas phase is coupled to the particle motion. The governing equations of the gas phase are solved using large-eddy simulation (LES), and the particle motion is predicted with a Lagrangian method. The simulation reproduced pattern formations reported in experiments.
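A minimal baseline for the normal-collision modelling described above is the linear spring-dashpot (Kelvin-Voigt) contact, from which the coefficient of restitution falls out of the integration. The thesis's models add further parameters to fit experiments better; every value below is illustrative, not fitted to any material.

```python
def restitution(v0, m=1e-3, k=1e4, c=0.05, dt=1e-7):
    """Coefficient of restitution from a linear spring-dashpot
    normal-contact model, integrated with semi-implicit Euler.

    v0: impact speed, m: particle mass, k: contact stiffness,
    c: viscous damping coefficient, dt: time step.
    """
    delta, ddelta = 0.0, v0                 # overlap and overlap rate at impact
    while True:
        a = (-k * delta - c * ddelta) / m   # elastic + viscous normal force
        ddelta += a * dt
        delta += ddelta * dt
        if delta <= 0.0:                    # surfaces separate
            return -ddelta / v0

print(restitution(1.0), restitution(1.0, c=0.5))  # more damping -> lower e
```

In a discrete-element simulation this force law is evaluated for every contact pair at every step; oblique collisions add a tangential force capped by Coulomb friction.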
Abstract:
The hydrological and biogeochemical processes that operate in catchments influence the ecological quality of freshwater systems through the delivery of fine sediment, nutrients and organic matter. Most models that seek to characterise the delivery of diffuse pollutants from land to water are reductionist. The multitude of processes parameterised in such models to ensure generic applicability makes them complex and difficult to test on available data. Here, we outline an alternative, data-driven, inverse approach. We apply SCIMAP, a parsimonious risk-based model with an explicit treatment of hydrological connectivity, and take a Bayesian approach to the inverse problem of determining the risk that must be assigned to different land uses in a catchment in order to explain the spatial patterns of measured in-stream nutrient concentrations. We apply the model to identify the key sources of nitrogen (N) and phosphorus (P) diffuse pollution risk in eleven UK catchments covering a range of landscapes. The model results show that: 1) some land uses generate a consistently high or low risk of diffuse nutrient pollution; 2) the risks associated with different land uses vary both between catchments and between nutrients; and 3) the dominant sources of P and N risk in a catchment are often a function of the spatial configuration of land uses. Taken on a case-by-case basis, this type of inverse approach may be used to help prioritise interventions to reduce diffuse pollution risk for freshwater ecosystems. (C) 2012 Elsevier B.V. All rights reserved.
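The inverse step can be illustrated on a toy version of the problem: infer per-land-use export risks from in-stream concentrations that mix the upstream land-use fractions. This is a deliberately simplified grid posterior with a Gaussian likelihood, not SCIMAP's connectivity-weighted formulation, and all numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
# Fractions of two land uses (e.g. arable, pasture) upstream of 15 sample points.
frac = rng.dirichlet([2.0, 2.0], size=15)
true_risk = np.array([0.8, 0.2])               # hypothetical export risks
conc = frac @ true_risk + 0.03 * rng.normal(size=15)

# Grid posterior over the two risk weights (uniform prior, Gaussian noise).
grid = np.linspace(0, 1, 101)
r1, r2 = np.meshgrid(grid, grid, indexing="ij")
pred = frac[:, 0, None, None] * r1 + frac[:, 1, None, None] * r2
loglik = -0.5 * np.sum((conc[:, None, None] - pred) ** 2, axis=0) / 0.03**2
post = np.exp(loglik - loglik.max())
post /= post.sum()

i, j = np.unravel_index(post.argmax(), post.shape)
print(grid[i], grid[j])   # posterior mode should sit near the true risks
```

The real problem replaces the simple mixing model with hydrologically connected flow paths and uses the full posterior, not just its mode, to rank land uses by risk.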
Abstract:
This Master's thesis was carried out for an international company that supplies machinery, production systems and complete mills for the mechanical wood-processing industry. Its purpose was to identify the causes of the vibration problems encountered in positioning the knife carriage of a veneer lathe, to investigate solutions for overcoming them, and to determine the peeling forces. The theoretical part examines the structure and operation of the veneer lathe, in particular its hydraulic servo systems, as well as the veneer-peeling process. The knife-carriage feed servo system was studied theoretically by deriving the closed-loop transfer functions "position/command" and "error/force" and plotting their frequency responses, from which the effects of the parameters on system behaviour were examined. The results were confirmed by simulation. The vibration problems of the present system were found to be caused mainly by the compressibility of the hydraulic oil in the cylinder. A larger cylinder diameter and an increased viscous friction coefficient were proposed as improvements. In the experimental part, the forces occurring in the actuators of the veneer lathe's servo systems were measured, and the actual peeling forces were calculated from them. The positioning accuracy of the knife-carriage feed and the other servo systems during peeling was also studied. A portable measurement system was designed and procured for the measurements.
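The effect of oil compressibility on the feed servo can be sketched with a second-order toy model in which the trapped oil column acts as a spring, k_oil = beta * A^2 / V. The thesis derives the full closed-loop transfer functions; the structure and every parameter below are simplified assumptions chosen only to show the qualitative trend.

```python
import numpy as np
from scipy import signal

# Toy position loop: mass m riding on the cylinder's "oil spring" with
# viscous friction b. Bulk modulus beta, trapped oil volume V (all invented).
beta, V, m, b = 1.4e9, 2.0e-3, 500.0, 2.0e4

def resonance(diameter):
    """Resonant frequency (rad/s) of the position response x/x_cmd."""
    A = np.pi * (diameter / 2.0) ** 2
    k_oil = beta * A**2 / V                          # oil-column stiffness
    sys = signal.TransferFunction([k_oil], [m, b, k_oil])
    w, mag, _ = signal.bode(sys, np.logspace(0, 4, 2000))
    return w[np.argmax(mag)]

print(resonance(0.05), resonance(0.10))   # larger bore -> stiffer oil spring
```

This is consistent with the proposed remedies: a larger cylinder diameter stiffens the oil spring and pushes the resonance to higher frequency, while a larger viscous friction coefficient b damps the resonant peak.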
Abstract:
Interactions between the acoustic features of stimuli and experience-based internal models of the environment enable listeners to compensate for disruptions in auditory streams that are regularly encountered in noisy environments. However, whether auditory gaps are filled in predictively or restored a posteriori remains unclear. The current lack of positive statistical evidence that internal models can actually shape brain activity as real sounds do precludes accepting predictive accounts of the filling-in phenomenon. We investigated the neurophysiological effects of internal models by testing whether single-trial electrophysiological responses to omitted sounds in a rule-based sequence of tones with varying pitch could be decoded from the responses to real sounds, and by analyzing the ERPs to the omissions with data-driven electrical neuroimaging methods. Decoding of the brain responses to different expected, but omitted, tones in both passive and active listening conditions was above chance when based on the responses to real sounds in active listening conditions. Topographic ERP analyses and electrical source estimations revealed that, in the absence of any stimulation, experience-based internal models elicit electrophysiological activity that differs from noise, and that the temporal dynamics of this activity depend on attention. We further found that the expected change in pitch direction of omitted tones modulated the activity of left posterior temporal areas 140-200 msec after omission onset. Collectively, our results indicate that, even in the absence of any stimulation, internal models modulate brain activity as real sounds do, indicating that auditory filling-in can be accounted for by predictive activity.
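Single-trial decoding of the kind used here can be sketched with a nearest-centroid classifier on synthetic multichannel responses. The actual study trains on responses to real sounds and tests on omissions; the data, channel count, and classifier below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_chan = 200, 64
topo = rng.normal(size=(2, n_chan))      # one scalp topography per expected tone
y = rng.integers(0, 2, n_trials)         # which tone was expected on each trial
X = topo[y] + 2.0 * rng.normal(size=(n_trials, n_chan))   # noisy single trials

# Train a nearest-centroid decoder on the first 150 trials, test on the rest.
tr, te = np.arange(150), np.arange(150, n_trials)
mu = np.stack([X[tr][y[tr] == c].mean(axis=0) for c in (0, 1)])
d = ((X[te][:, None, :] - mu[None]) ** 2).sum(axis=-1)
acc = (d.argmin(axis=1) == y[te]).mean()
print(acc)   # above the 0.5 chance level when the topographies differ
```

Above-chance accuracy on held-out trials is exactly the kind of positive statistical evidence the abstract refers to: it shows that the class-specific signal is present in single trials, not only in averages.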
Abstract:
BACKGROUND: Little is known about the long-term changes in the functioning of schizophrenia patients receiving maintenance therapy with olanzapine long-acting injection (LAI), and whether observed changes differ from those seen with oral olanzapine. METHODS: This study describes changes in the levels of functioning among outpatients with schizophrenia treated with olanzapine-LAI compared with oral olanzapine over 2 years. This was a secondary analysis of data from a multicenter, randomized, open-label, 2-year study comparing the long-term treatment effectiveness of monthly olanzapine-LAI (405 mg/4 weeks; n=264) with daily oral olanzapine (10 mg/day; n=260). Levels of functioning were assessed with the Heinrichs-Carpenter Quality of Life Scale. Functional status was also classified as 'good', 'moderate', or 'poor', using a previous data-driven approach. Changes in functional levels were assessed with McNemar's test, and comparisons between olanzapine-LAI and oral olanzapine employed Student's t-test. RESULTS: Over the 2-year study, the patients treated with olanzapine-LAI improved their level of functioning (per Quality of Life total score) from 64.0 to 70.8 (P<0.001). Patients on oral olanzapine also increased their level of functioning, from 62.1 to 70.1 (P<0.001). At baseline, 19.2% of the olanzapine-LAI-treated patients had a 'good' level of functioning, which increased to 27.5% (P<0.05). The figures for oral olanzapine were 14.2% and 24.5%, respectively (P<0.001). Results did not significantly differ between olanzapine-LAI and oral olanzapine. CONCLUSION: In this 2-year, open-label, randomized study of olanzapine-LAI, outpatients with schizophrenia maintained or improved their favorable baseline level of functioning over time. Results did not significantly differ between olanzapine-LAI and oral olanzapine.
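The within-group change in the share of patients with 'good' functioning is a paired proportion, which is why McNemar's test is the right tool; on the discordant pairs it reduces to a binomial test. The data below are simulated to loosely mimic the reported proportions, not taken from the trial.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 260
# Simulated paired 'good functioning' status at baseline and at 2 years.
baseline = rng.random(n) < 0.19                  # ~19% 'good' at baseline
improved = ~baseline & (rng.random(n) < 0.12)    # some patients gain 'good'
worsened = baseline & (rng.random(n) < 0.03)     # a few lose it
followup = (baseline | improved) & ~worsened

# McNemar's test compares paired proportions via the discordant pairs only.
b = int(np.sum(~baseline & followup))   # newly 'good' at follow-up
c = int(np.sum(baseline & ~followup))   # lost 'good' status
p_mcnemar = stats.binomtest(b, b + c, 0.5).pvalue
print(b, c, p_mcnemar)
```

The concordant pairs (patients whose status did not change) carry no information about the change, which is what distinguishes McNemar's test from an unpaired comparison of the two percentages.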
Abstract:
The aim of this work was to study the separation of complex-forming metals from chloride solution by ion exchange. The literature part reviews the formation of metal complexes, in particular the complexes that silver, calcium, magnesium, lead and zinc form with chloride and nitrate. It also covers the separation of metals in fixed-bed columns by continuous ion-exchange methods. The continuous ion-exchange process alternatives were divided into rotating and stationary columns, and the alternatives were examined with respect to the column interconnections. The experimental part investigated the separation of divalent metals from monovalent metals and produced experimental data for simulating a corresponding separation process. An anion-exchange resin and a chelating, selective ion-exchange resin were used in the experiments. The adsorption of divalent calcium, magnesium, lead and zinc on the resins was studied by equilibrium, kinetic and column experiments. The equilibrium and column experiments with the anion-exchange resin showed that it adsorbs zinc efficiently from chloride solutions because zinc forms stable anionic chloro complexes. The adsorption of the other divalent metals studied was considerably lower. Based on the results, the anion-exchange resin is a good option for separating zinc from the other divalent metals studied in a chloride environment. The equilibrium and column experiments with the chelating resin showed that it adsorbs divalent calcium, magnesium, lead and zinc well from chloride solutions but does not adsorb monovalent silver. The results indicate that the separation of divalent metals from monovalent metals can be accomplished with the chelating ion-exchange resin used in the experiments.
Abstract:
Robotic grasping has been studied increasingly for a few decades. While progress has been made, robotic hands are still nowhere near the capability of human hands. In the past few years, however, the increase in computational power and the availability of commercial tactile sensors have made it easier to develop techniques that exploit feedback from the hand itself: the sense of touch. The focus of this thesis lies in the use of this sense. The work focuses on robotic grasping from two viewpoints: robotic systems and data-driven grasping. The robotic-systems viewpoint describes a complete architecture for the act of grasping and, to a lesser extent, more general manipulation. Two central design goals of the architecture are hardware independence and the use of sensors during grasping; together they enable multiple different robotic platforms to be used within the architecture. Secondly, new data-driven methods are proposed that can be incorporated into the grasping process. The first is a novel way of learning grasp stability from the tactile and haptic feedback of the hand instead of analytically solving stability from a set of known contacts between the hand and the object. By learning from the data directly, there is no need to know the properties of the hand, such as its kinematics, which enables the method to be used with complex hands. The second novel method, probabilistic grasping, combines the fields of tactile exploration and grasp planning. By employing well-known statistical methods and pre-existing knowledge of an object, object properties such as pose can be inferred together with the related uncertainty. This uncertainty is then used by a grasp-planning process that plans for stable grasps under the inferred uncertainty.
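Learning grasp stability directly from tactile data, rather than from an analytic contact model, can be sketched with a simple classifier on synthetic tactile features. The feature names, the "true" stability rule, and the logistic-regression learner below are all invented for illustration; real systems would extract features from tactile sensor arrays and joint torques.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 400
# Hypothetical tactile features per grasp attempt:
# mean contact pressure, contact area, pressure asymmetry.
Xf = rng.normal(size=(n, 3))
# Toy ground truth: stable grasps have high pressure and area, low asymmetry.
logits = 2.0 * Xf[:, 0] + 1.5 * Xf[:, 1] - 2.0 * Xf[:, 2]
y = (logits + 0.5 * rng.normal(size=n) > 0).astype(float)

# Logistic regression by gradient descent: learn stability from data alone,
# with no kinematic or contact model of the hand.
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(Xf @ w + b)))
    g = p - y
    w -= 0.1 * Xf.T @ g / n
    b -= 0.1 * g.mean()

acc = (((Xf @ w + b) > 0) == (y > 0.5)).mean()
print(acc)
```

Because the classifier sees only sensor-derived features, the same training procedure transfers to hands whose kinematics are unknown or hard to model, which is the point made above.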
Abstract:
Few people see both the opportunities and the threats that IT legacy presents in the current world. On the one hand, effective legacy management can bring substantial hard savings and a smooth transition to the desired future state. On the other hand, its mismanagement contributes to serious operational business risks, as old systems are not as reliable as business users require. This thesis offers one perspective on dealing with IT legacy: effective contract management as a component of achieving Procurement Excellence in IT, thus bridging IT delivery departments, IT procurement, business units and suppliers. It develops a model for assessing the impact of improvements on the contract management process, together with a set of tools and advice for analysis and improvement actions. A case study is used to present and justify the implementation of Lean Six Sigma in an IT legacy contract management environment. Lean Six Sigma proved successful, and the thesis presents and discusses the steps necessary, and the pitfalls to avoid, to achieve breakthrough improvement in IT contract management process performance. Two improvements to the IT legacy contract management process deserve special attention and can easily be copied to any organization. The first is resolving diluted contract ownership, which stops all improvement because people do not know who is responsible for the corrective actions. The second is the contract management performance evaluation tool, which can be used for monitoring and for identifying outlying contracts and opportunities for improvement in the process. The study yielded valuable insight into the benefits of applying Lean Six Sigma to improve IT legacy contract management, as well as into how Lean Six Sigma can be applied in an IT environment. Managerial implications are discussed. It is concluded that the use of the data-driven Lean Six Sigma methodology for improving existing IT contract management processes is a significant addition to the existing best practices in contract management.
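The performance evaluation tool's job, spotting outlying contracts, can be sketched with a basic control-chart rule. The KPI, values, and threshold below are invented; Lean Six Sigma practice would apply proper control charts to real process data.

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical contract KPI: days to process a contract change request.
days = rng.normal(30, 5, size=80)
days[[7, 41]] = [75, 68]                 # two planted problem contracts

# Basic control-chart rule: flag contracts beyond mean + 3 sigma.
mu, sigma = days.mean(), days.std()
outliers = np.where(days > mu + 3 * sigma)[0]
print(outliers)                          # indices of the flagged contracts
```

Flagged contracts would then feed the improvement backlog, closing the monitor-analyze-improve loop the thesis describes.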
Abstract:
Despite the fact that the literature on Business Intelligence and managerial decision-making is extensive, relatively little effort has been made to research the relationship between them. This field of study has become important since the amount of data in the world is growing every second. Companies require capabilities and resources in order to utilize structured and unstructured data from internal and external data sources. The present Business Intelligence technologies, however, enable managers to utilize data effectively in decision-making. Based on the prior literature, the empirical part of the thesis identifies the enablers and constraints in the computer-aided managerial decision-making process. The theoretical part provides a preliminary understanding of the research area through a literature review. The key concepts, such as Business Intelligence and managerial decision-making, are explored by reviewing the relevant literature, and different data sources as well as data forms are analyzed in further detail. All key concepts are taken into account when the empirical part is carried out. The empirical part obtains an understanding of the real-world situation with respect to the themes covered in the theoretical part. Three selected case companies are analyzed through statements that are considered critical prerequisites for successful computer-aided managerial decision-making. The case study analysis enables the researcher to examine the relationship between Business Intelligence and managerial decision-making.
Based on the findings of the case study analysis, the researcher identifies the enablers and constraints through the case study interviews. The findings indicate that the constraints have a highly negative influence on the decision-making process. In addition, managers are aware of the positive implications that Business Intelligence has for decision-making, but not all possibilities are yet utilized. As the main result of this study, a data-driven framework for managerial decision-making is introduced; it can be used to evaluate and analyze managerial decision-making processes.
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
The purpose of this Master's thesis is to establish what kind of process can be used to define a competence mapping carried out from a resourcing perspective. The study is a qualitative case study in a target organization. The research material was collected from documents and from the meetings and workshops held during the study, and was analyzed using data-driven content analysis. According to the results, the competence mapping process and its success are significantly affected by company strategy, management commitment to the mapping work, analysis of the current state, and shared concepts, metrics and goals. The competences required from a resourcing perspective are not necessarily the same as those required from a development perspective. Key factors for the success of the definition process are the participation of the right people and their willingness to share knowledge.
Abstract:
Lappeenranta University of Technology conducts research on the development of smart grids. The university has acquired a wind turbine and solar panels that feed electrical energy into its grid, and these generation units can also be used for research. In this work, a simulation model of the university's electrical network is created with Matlab® Simulink®. The model covers the university's internal medium-voltage network and part of the low-voltage network, and is implemented with the program's standard components, for which the required parameters are calculated. The power outputs of the wind turbine and the solar panels are determined from weather data. Values for the network components are calculated from their type data and set as model parameters. The model is created to examine the power flow of the university's internal network. The possibilities of using the model for fault analysis are also investigated.
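Determining production powers from weather data, as described, can be sketched for the PV panels with a simple irradiance-to-power model. The panel area, efficiency, and inverter rating below are invented; the thesis builds the actual model in Simulink.

```python
import numpy as np

# Hypothetical panel data and an hourly irradiance series in W/m2.
area_m2, efficiency = 120.0, 0.16
irradiance = np.array([0, 150, 420, 780, 950, 610, 200, 0], dtype=float)

# Simple production model: P = eta * A * G, capped at the inverter rating.
p_inverter_max = 15e3
power_w = np.minimum(efficiency * area_m2 * irradiance, p_inverter_max)
print(power_w)
```

Each value in `power_w` would be fed into the simulation as the injection of the PV bus for that hour; the wind turbine would use an analogous wind-speed-to-power curve.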
Abstract:
As technology has developed, the amount of data produced and collected from the business environment has increased. Over 80% of that data includes some sort of reference to a geographical location. Individuals have used such information through Google Maps or GPS devices, yet in business it has remained largely unexploited. This thesis studies the use and utilization of geographically referenced data in capital-intensive business. It first provides theoretical insight into how data and data-driven management enable and enhance business, and how geographically referenced data in particular adds value to a company; it then examines empirical case evidence of how geographical information can truly be exploited in capital-intensive business and what the value-adding elements of geographical information are. The study contains semi-structured interviews that are used to scan the attitudes and beliefs of an organization towards geographic information and to discover fields of application for a geographic information system within the case company. Additionally, geographical data is tested in order to illustrate how the data could be used in practice. Finally, the outcome of the thesis provides an understanding of the elements that make up the added value of geographical information in business, and of how such data can be utilized in the case company and in capital-intensive business in general.
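One concrete way to put geographically referenced data to work is computing great-circle distances between business locations, e.g. from a depot to its assets. The coordinates below are arbitrary illustration points, not case-company data.

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between coordinates given in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

# Hypothetical example: distances from one depot to three asset locations.
d = haversine_km(61.06, 28.09,
                 np.array([60.17, 61.50, 62.89]),
                 np.array([24.94, 23.77, 27.68]))
print(np.round(d, 1))
```

Even this elementary operation already supports value-adding decisions such as routing, service-area definition, and maintenance scheduling for geographically dispersed capital assets.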