Abstract:
Open innovation is becoming increasingly popular in the academic literature and in business life, but even people who have heard of it may not understand what it really is: they may overestimate it, treating it as a savior, or underestimate it, concentrating on its limitations and risks. This work sheds light on the most important concepts of open innovation theory. The goal of the research is to propose business process improvements for both the inbound and outbound modes of open innovation in the case company. The topic is relevant because open innovation has been shown to affect firm performance both in general and in the case company: Nokia has planned to develop its open innovation implementation since 2008, yet its competitors still succeed in it more. An analysis of the current state of open innovation at Nokia and recommendations for improving it are therefore topical. The case study method was used to answer the question "How can open innovation processes be improved?". Eleven in-depth interviews with Nokia senior managers and independent consultants, as well as secondary sources, were used to reach the goal of the thesis. The results of the work are as-is and to-be models (process models of today and best-practice models) of several open innovation modes, together with recommendations for the case company, which will be presented to company representatives and checked for practical applicability.
Abstract:
The thesis consists of four studies (articles I–IV) and a comprehensive summary. The aim is to deepen understanding and knowledge of newly qualified teachers’ experiences of their induction practices. The research interest thus reflects the ambition to strengthen the research-based platform for support measures. The aim can be specified in the following four sub-areas: to scrutinise NQTs’ experiences of the profession in the transition from education to work (study I), to describe and analyse NQTs’ experiences of their first encounters with school and classroom (study II), to explore NQTs’ experiences of their relationships within the school community (study III), and to view NQTs’ experiences of support through peer-group mentoring as part of the wider aim of collaboration and assessment (study IV). The overall theoretical perspective is teachers’ professional development. Induction forms an essential part of this continuum and can primarily be seen as a socialisation process into the profession and the social working environment of schools, as a unique phase of teachers’ development contributing to certain experiences, and as a formal programme designed to support new teachers. These lines of research are initiated in the separate studies (I–IV) and deepened in the theoretical part of the comprehensive summary. In order to appropriately understand induction as a specific practice, the lines of research are in the end united and discussed with the help of practice theory. More precisely, the theory of practice architectures, comprising semantic space, physical space-time and social space, is used. The methodological approach to integrating the four studies is above all represented by abduction and meta-synthesis. Data were collected through a questionnaire survey, with mainly open-ended questions, and altogether ten focus group meetings with newly qualified primary school teachers in 2007–2008.
The teachers (n=88 in the questionnaire, n=17 in the focus groups) had between one and three years of teaching experience. Qualitative content analysis and narrative analysis were used when analysing the data. What, then, is the collective picture of induction, or the first years in the profession, if we scrutinise the results presented in the articles? Four dimensions especially seem to permeate the studies and emerge when they are put together. The first dimension, the relational–emotional, captures the social nature of induction and of a teacher’s work, and the emotional character intimately intertwined with it. The second dimension, the tensional–mutable, illustrates the intense pace of induction, together with the diffuse and unclear character of a teacher’s job. The third dimension, the instructive–developmental, depicts induction as a unique and intensive phase of learning, maturation and professional development. Finally, the fourth dimension, the reciprocal–professional, stresses the importance of reciprocity and collaboration in induction, both formally and informally. The four dimensions outlined, or the integration of results, describing induction from the experiences of new teachers, constitute part of a new synthesis, induction practice. This synthesis was generated by viewing the integrated results through the theoretical lens of practice architecture and the three spaces: semantic space, physical space-time and social space. In this way, a more comprehensive, refined and partially new architecture of teachers’ induction practices is presented and discussed.
Abstract:
Modern machine structures are often fabricated by welding. From a fatigue point of view, structural details, and especially welded details, are the most prone to fatigue damage and failure. Design against fatigue requires information on the fatigue resistance of a structure’s critical details and on the stress loads that act on each detail. Even though dynamic simulation of flexible bodies is already a standard method for analyzing structures, obtaining the stress history of a structural detail during dynamic simulation is a challenging task, especially when the detail has a complex geometry. In particular, analyzing the stress history of every structural detail within a single finite element model can be overwhelming, since the number of nodal degrees of freedom needed in the model may require an impractical amount of computational effort. The purpose of computer simulation is to reduce the number of prototypes and speed up the product development process. In addition, to take operator influence into account, real-time models, i.e. simplified and computationally efficient models, are required. This in turn requires stress computation to be efficient if it is to be performed during dynamic simulation. The research reviews the theoretical background of multibody dynamic simulation and the finite element method to find suitable parts from which to form a new approach for efficient stress calculation. This study proposes that the problem of stress calculation during dynamic simulation can be greatly simplified by combining the floating frame of reference formulation with modal superposition and a sub-modeling approach. In practice, the proposed approach can be used to efficiently generate the fatigue-assessment stress history of a structural detail during or after dynamic simulation. In this work, numerical examples are presented to demonstrate the proposed approach in practice. The results show that the approach is applicable and can be used as proposed.
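The key idea behind the efficiency of the approach can be sketched in a few lines: with the floating frame of reference formulation and modal superposition, the stress at a detail reduces to a weighted sum of precomputed modal stress shapes, so the online cost per time step is a single small matrix product. The numbers below are toy values for illustration, not data from this work.

```python
import numpy as np

# In the floating frame of reference formulation with modal superposition,
# the time-varying stress at a detail is a linear combination of
# precomputed modal stress shapes (from an FE sub-model) weighted by the
# modal coordinates produced by the multibody simulation.

def stress_history(modal_stress, modal_coords):
    """modal_stress: (n_modes, n_components) modal stress shapes.
    modal_coords: (n_steps, n_modes) modal coordinate time history.
    Returns the (n_steps, n_components) stress history of the detail."""
    return modal_coords @ modal_stress

# Toy numbers, purely illustrative: 3 modes, 2 stress components, 4 steps.
Phi = np.array([[100.0, 10.0],
                [20.0, 50.0],
                [5.0, 1.0]])                 # stress per unit modal coordinate
q = np.array([[0.0, 0.0, 0.0],
              [0.1, 0.0, 0.0],
              [0.1, 0.2, 0.0],
              [0.0, 0.2, 0.1]])              # modal coordinates over time
sigma = stress_history(Phi, q)               # one matrix product per run
```

The resulting stress history can then be fed directly into a rainflow counting and fatigue assessment procedure.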
Abstract:
This thesis presents an approach for formulating and validating a space-averaged drag model for coarse mesh simulations of gas-solid flows in fluidized beds using the two-fluid model. Proper modeling of fluid dynamics is central to understanding any industrial multiphase flow. The gas-solid flows in fluidized beds are heterogeneous and usually simulated with an Eulerian description of the phases. Such a description requires the use of fine meshes and small time steps for proper prediction of the hydrodynamics. This constraint on the mesh and time step size results in a large number of control volumes and long computational times, which are unaffordable for simulations of large-scale fluidized beds. If proper closure models are not included, coarse mesh simulations of fluidized beds do not give reasonable results: the coarse mesh simulation fails to resolve the mesoscale structures and yields uniform solids concentration profiles. For a circulating fluidized bed riser, such predicted profiles result in a higher drag force between the gas and solid phases and an overestimated solids mass flux at the outlet. Thus, there is a need to formulate closure correlations that can accurately predict the hydrodynamics using coarse meshes. This thesis uses the space averaging modeling approach in the formulation of closure models for coarse mesh simulations of gas-solid flow in fluidized beds with Geldart group B particles. In formulating the closure correlation for the space-averaged drag model, the main modeling parameters were found to be the averaging size, the solid volume fraction, and the distance from the wall. The closure model for the gas-solid drag force was formulated and validated for coarse mesh simulations of the riser, which verified the modeling approach. Coarse mesh simulations using the corrected drag model resulted in lower values of solids mass flux.
Such an approach is a promising tool in the formulation of appropriate closure models which can be used in coarse mesh simulations of large scale fluidized beds.
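As an illustration of the general idea (the functional form and all coefficients below are invented, not the correlation formulated in the thesis), a space-averaged drag model typically multiplies a microscopic drag law by a correction factor that depends on the averaging (filter) size, the solids volume fraction and the distance from the wall, reflecting the unresolved mesoscale clustering that reduces the effective gas-solid drag:

```python
import math

# Illustrative sketch of a filtered drag closure: the microscopic drag
# coefficient beta_micro is scaled by a correction factor H <= 1 that
# depends on the averaging size, the solids volume fraction and the
# distance from the wall. The functional form and the coefficients
# c1..c3 are hypothetical placeholders.

def correction_factor(filter_size, alpha_s, wall_distance,
                      c1=0.5, c2=4.0, c3=0.1):
    """Hypothetical correction H; returns 1.0 for a vanishing filter size."""
    size_term = filter_size / (filter_size + c3)      # grows with filter size
    conc_term = 4.0 * alpha_s * (1.0 - alpha_s)       # peaks at alpha_s = 0.5
    wall_term = 1.0 - math.exp(-c2 * wall_distance)   # weaker damping at wall
    return max(0.0, 1.0 - c1 * size_term * conc_term * wall_term)

def filtered_drag(beta_micro, filter_size, alpha_s, wall_distance):
    """Effective drag coefficient for a coarse-mesh cell."""
    return beta_micro * correction_factor(filter_size, alpha_s, wall_distance)
```

In a fine-mesh limit the correction vanishes and the microscopic drag is recovered, while on coarse meshes the reduced drag lowers the predicted solids mass flux, consistent with the behaviour described above.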
Abstract:
The aim of the study was to survey the possibilities of the regional waste company Kymenlaakson Jäte Oy to granulate and thermally dry mechanically dewatered digestate, as well as the possibilities of supplying the thermally dried material for energy recovery. The study also examined experiences of using floor heating for drying digestate. The study reviewed various granulation and drying methods and the factors affecting the selection of a thermal dryer. The descriptions are based on information obtained from the literature and the Internet. On the basis of the technology descriptions, quotations were requested from companies selling thermal drying equipment. The quotations were requested for a change in dry matter content from 30% to 90%, assuming that heat from five landfill-gas-fired microturbines would be available for drying. Quotations were received from six companies during the study. The quotations are summarised in this report; in full, they were included in a more extensive report of Kymenlaakson Jäte Oy, which is not public. The companies provided very different information on what their quotations include, so the quotations were not directly comparable. The quotations also revealed that if the maximum amount (19,500 t/a) of digestate were received from the Mäkikylä biogas plant, the heat available from the microturbines would not suffice to dry all the digestate to a 90% dry matter content. It was also noted during the study that obtaining a binding quotation requires submitting the digestate for testing, which confirms the suitability of the drying method for the material in question. The study also examined what experiences exist, both in Finland and abroad, of using floor heating for drying, and whether the method can be used for drying digestate.
The method has been used to enhance solar drying, so the study focused in particular on scientific articles on solar drying. Both weaknesses and strengths were found in the use of floor heating. In Finland, however, the combination of solar drying and floor heating has not come into wide use; reasons include the cold and dark seasons and the large surface area required. The report also surveyed the interest of incineration plant representatives in receiving thermally dried digestate, and their restrictions on doing so. During the study, representatives of companies holding a waste incineration permit and located within 100 km of Kymenlaakson Jäte Oy were contacted. The responses are summarised in this report and were included in full in the more extensive, non-public report of Kymenlaakson Jäte Oy. The telephone interviews showed that the companies are interested in the material, but their responses also depend on the analysis results for the digestate. The analyses of the combustion properties will be carried out during 2012. The plants also had varying restrictions regarding the material, but depending on the analysis results, thousands or even tens of thousands of tonnes of the material per year could be used for energy within 100 km of Kymenlaakson Jäte Oy.
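The heat shortfall noted above follows from a simple mass and energy balance. In the sketch below, the dry-solids mass balance uses the stated intake and dry matter contents, while the specific heat demand of the dryer and the microturbine thermal output are illustrative assumptions rather than figures from the report:

```python
# Mass and energy balance behind the heat-shortfall observation. The
# specific heat demand and the microturbine thermal output are assumed
# values for illustration, not figures from the report.

def water_to_evaporate(feed_t_per_a, ds_in=0.30, ds_out=0.90):
    """Tonnes of water removed per year when raising the dry-solids
    content of the feed from ds_in to ds_out."""
    dry_solids = feed_t_per_a * ds_in
    product = dry_solids / ds_out
    return feed_t_per_a - product

feed = 19_500                          # t/a digestate, the stated maximum
water = water_to_evaporate(feed)       # 13,000 t/a of water to evaporate

SPECIFIC_HEAT_DEMAND = 1.0             # MWh per tonne of water (assumed;
                                       # practical dryers need roughly
                                       # 0.8-1.2 MWh/t)
heat_needed = water * SPECIFIC_HEAT_DEMAND          # MWh/a

MICROTURBINE_HEAT = 5 * 150 / 1000 * 8000           # MWh/a, assuming five
                                                    # units x 150 kWth and
                                                    # 8000 h/a of operation
shortfall = heat_needed > MICROTURBINE_HEAT         # heat supply falls short
```

Under these assumptions the available heat covers less than half of the demand, which is consistent with the observation that the maximum intake cannot be dried to 90% dry matter with microturbine heat alone.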
Abstract:
Transportation of fluids is one of the most common and energy-intensive processes in the industrial and HVAC sectors. Pumping systems are frequently subject to engineering malpractice when dimensioned, which can lead to poor operational efficiency. Moreover, pump monitoring requires dedicated measuring equipment, which implies costly investments. Inefficient pump operation and improper maintenance can increase energy costs substantially and even lead to pump failure. A centrifugal pump is commonly driven by an induction motor. Driving the induction motor with a frequency converter can diminish energy consumption in pump drives and provide better control of a process. In addition, induction machine signals can be estimated by modern frequency converters, dispensing with the use of sensors. If the estimates are accurate enough, a pump can be modelled and integrated into the frequency converter control scheme. This opens the possibility of joint motor and pump monitoring and diagnostics, thereby allowing the detection of reliability-reducing operating states that can lead to additional maintenance costs. The goal of this work is to study the accuracy of the rotational speed, torque and shaft power estimates calculated by a frequency converter. Laboratory tests were performed in order to observe estimate behaviour in both steady-state and transient operation. The test setup consisted of an induction machine driven by a vector-controlled frequency converter, coupled with another induction machine acting as the load. The estimated quantities were obtained through the frequency converter’s Trend Recorder software. A high-precision HBM T12 torque-speed transducer was used to measure the actual values of the aforementioned variables. The effect of the flux optimization energy-saving feature on estimate quality was also studied. A processing function was developed in MATLAB for comparison of the obtained data.
The obtained results confirm the suitability of this particular converter for providing sufficiently accurate estimates for pumping applications.
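The comparison of estimated and measured quantities comes down to computing the shaft power from torque and rotational speed and then the relative estimation error. A minimal sketch of that step follows (the thesis did this in MATLAB; the sample values below are invented, not measured data):

```python
import math

# Sketch of the comparison step: the shaft power is computed from torque
# and rotational speed, and the converter estimate is compared with the
# transducer reference via the relative error. Sample values are invented.

def shaft_power(torque_nm, speed_rpm):
    """Mechanical shaft power in watts: P = T * omega."""
    return torque_nm * 2.0 * math.pi * speed_rpm / 60.0

def relative_error(estimate, measured):
    return (estimate - measured) / measured

# Converter estimate versus the HBM T12 reference at one operating point:
p_est = shaft_power(torque_nm=31.5, speed_rpm=1475.0)
p_ref = shaft_power(torque_nm=32.0, speed_rpm=1480.0)
err = relative_error(p_est, p_ref)     # about a 2 % under-estimate
```

The same error metric can be evaluated sample by sample over the Trend Recorder time series to characterise both steady-state and transient accuracy.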
Abstract:
The theoretical part of the study concentrated on finding theoretical frameworks for optimizing the number of stock keeping units (SKUs) needed in the manufacturing industry. The goal was to find ways for a company to acquire an optimal collection of stock keeping units needed for manufacturing the required amount of end products. The research follows a constructive research approach leaning towards practical problem solving. In the empirical part of the study, a recipe search tool was developed for an existing database used in the target company. The purpose of the tool was to find all the recipes meeting the EUPS performance standard and to put the recipes in ranking order using the data available in the database. The ranking of the recipes was formed from a combination of the performance measures and the price of the recipes. In addition, the tool determined what kind of paper SKUs were needed to manufacture the best-performing recipes. The tool developed during this process meets the requirements: it makes searching for all the recipes meeting the EUPS standard much easier and faster. Furthermore, many future development possibilities for the tool were discovered while writing the thesis.
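The ranking logic described above can be sketched roughly as follows; the field names, normalisation and weighting are assumptions for illustration, not the company's actual database schema or scoring:

```python
# Hypothetical sketch of the recipe ranking: recipes meeting the EUPS
# performance standard are scored by combining normalised performance and
# price, and the paper SKUs used by the top recipes are collected.

def rank_recipes(recipes, w_perf=0.7, w_price=0.3):
    """recipes: list of dicts with 'name', 'meets_eups', 'performance',
    'price' and 'skus'. Returns EUPS-compliant recipes, best first."""
    eups = [r for r in recipes if r["meets_eups"]]
    max_perf = max(r["performance"] for r in eups)
    min_price = min(r["price"] for r in eups)
    def score(r):
        return (w_perf * r["performance"] / max_perf
                + w_price * min_price / r["price"])
    return sorted(eups, key=score, reverse=True)

def skus_needed(ranked, top_n=2):
    """Union of paper SKUs required by the best-performing recipes."""
    return sorted({s for r in ranked[:top_n] for s in r["skus"]})

recipes = [
    {"name": "A", "meets_eups": True,  "performance": 95, "price": 120,
     "skus": ["P1", "P2"]},
    {"name": "B", "meets_eups": True,  "performance": 90, "price": 100,
     "skus": ["P2", "P3"]},
    {"name": "C", "meets_eups": False, "performance": 99, "price": 90,
     "skus": ["P4"]},
]
ranked = rank_recipes(recipes)
```

A real implementation would run the filtering and scoring as database queries rather than in memory, but the ranking principle is the same.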
Abstract:
Rapid changes in biodiversity are occurring globally as a consequence of anthropogenic disturbance. This has raised concerns, since biodiversity is known to contribute significantly to ecosystem functions and services. Marine benthic communities participate in numerous functions provided by soft-sediment ecosystems. Eutrophication-induced oxygen deficiency is a growing threat to infaunal communities, both in open sea areas and in coastal zones. There is thus a need to understand how such disturbance affects benthic communities, and what is lost in terms of ecosystem functioning if benthic communities are harmed. In this thesis, the status of benthic biodiversity was assessed for the open Baltic Sea, a system severely affected by broad-scale hypoxia. Long-term monitoring data made it possible to establish quantitative biodiversity baselines against which change could be compared. The findings show that benthic biodiversity is currently severely impaired in large areas of the open Baltic Sea, from the Bornholm Basin to the Gulf of Finland. The observed reduction in biodiversity indicates that benthic communities are structurally and functionally impoverished in several of the sub-basins due to the hypoxic stress. A more detailed examination of disturbance impacts (through field studies and experiments) on benthic communities in coastal areas showed that changes in benthic community structure and function took place well before species were lost from the system. The degradation of benthic community structure and function was directed by the type of disturbance and its specific temporal and spatial characteristics. The observed shifts in benthic trait composition were primarily the result of reductions in species’ abundances, or of changes in demographic characteristics, such as the loss of large, adult bivalves. The reduction in community functions was expressed as declines in the benthic bioturbation potential and in secondary biomass production.
The benthic communities and their degradation accounted for a substantial proportion of the changes observed in ecosystem multifunctionality. Individual ecosystem functions (i.e. measures of sediment ecosystem metabolism, elemental cycling, biomass production, organic matter transformation and physical structuring) were observed to differ in their response to increasing hypoxic disturbance. Interestingly, the results suggested that an impairment of ecosystem functioning could be detected at an earlier stage if multiple functions were considered. Importantly, the findings indicate that even small-scale hypoxic disturbance can reduce the buffering capacity of the sedimentary ecosystem and increase the susceptibility of the system to further stress. Although the results of the individual papers are context-dependent, their combined outcome implies that healthy benthic communities are important for sustaining overall ecosystem functioning as well as ecosystem resilience in the Baltic Sea.
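As one concrete example of the functional measures mentioned, a community bioturbation potential index of the kind used in the benthic literature can be computed from species biomass, abundance, and categorical mobility and reworking scores. The sketch below uses invented species values and a commonly cited form of the index; it is illustrative, not the thesis's exact formulation:

```python
import math

# Sketch of a community bioturbation potential index (BPc-style):
# each taxon contributes sqrt(B_i / A_i) * A_i * M_i * R_i, where B is
# biomass, A abundance, and M and R categorical mobility and sediment
# reworking scores. Species values below are invented for illustration.

def bioturbation_potential(taxa):
    """taxa: list of (biomass, abundance, mobility, reworking) tuples."""
    return sum(math.sqrt(b / a) * a * m * r for b, a, m, r in taxa)

# A healthy community versus the same community after losing its large
# adult bivalves (lower per-individual biomass, same small fauna):
healthy = [(8.0, 2, 2, 4),     # large bivalve: high biomass per individual
           (0.5, 50, 3, 3)]    # small mobile polychaete
impaired = [(0.5, 2, 2, 4),    # only juvenile bivalves remain
            (0.5, 50, 3, 3)]
bpc_healthy = bioturbation_potential(healthy)
bpc_impaired = bioturbation_potential(impaired)
```

The example reproduces the pattern described above: the loss of large adult bivalves lowers the community's bioturbation potential even though no species has disappeared.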
Abstract:
In this Master’s thesis, agent-based modeling has been used to analyze maintenance strategy related phenomena. The main research question was: what does the agent-based model made for this study tell us about how different maintenance strategy decisions affect the profitability of equipment owners and maintenance service providers? Thus, the main outcome of this study is an analysis of how profitability can be increased in the industrial maintenance context. To answer the question, a literature review of maintenance strategy, agent-based modeling, and maintenance modeling and optimization was first conducted. This review provided the basis for building the agent-based model; building it followed a standard simulation modeling procedure. The research question was then answered with the simulation results from the agent-based model. Specifically, the results of the modeling and this study are: (1) optimizing the point at which a machine is maintained increases profitability for the owner of the machine, and under certain conditions also for the maintainer; (2) time-based pricing of maintenance services leads to a zero-sum game between the parties; (3) value-based pricing of maintenance services leads to a win-win game between the parties, if the owners of the machines share a substantial amount of the value with the maintainers; and (4) error in machine condition measurement is a critical parameter in optimizing maintenance strategy, and there is real systemic value in more accurate machine condition measurement systems.
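Findings (2) and (3) can be illustrated with a toy pricing model (all numbers invented, not calibrated to the thesis model): under time-based pricing the parties split a fixed pie, whereas under value-based pricing the maintainer's fee grows with the value the maintenance action creates:

```python
# Toy model of the pricing findings. Under time-based pricing the fee only
# moves money between the parties, so their total is fixed (zero-sum).
# Under value-based pricing both profits grow with the value created,
# provided the shared fraction covers the maintainer's cost (win-win).

def time_based(hours, rate, value_created):
    """Returns (owner profit, maintainer revenue) under hourly pricing."""
    fee = hours * rate
    return value_created - fee, fee

def value_based(share, value_created, maintainer_cost):
    """Returns (owner profit, maintainer profit) when the owner shares a
    fraction of the created value with the maintainer."""
    fee = share * value_created
    return value_created - fee, fee - maintainer_cost

VALUE = 1000.0                           # value one maintenance action creates
owner_t, maint_t = time_based(hours=10, rate=60.0, value_created=VALUE)
owner_t2, maint_t2 = time_based(hours=10, rate=80.0, value_created=VALUE)
# Raising the hourly rate redistributes the same total: zero-sum.

owner_v, maint_v = value_based(share=0.4, value_created=VALUE,
                               maintainer_cost=300.0)
# With value-based pricing, a larger VALUE raises both owner_v and maint_v.
```

In the agent-based model these payoff rules would sit inside owner and maintainer agents interacting over many maintenance events; the toy version only captures the incentive structure of a single transaction.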
Abstract:
This paper investigates defect detection methodologies for rolling element bearings through vibration analysis. Specifically, the utility of a new signal processing scheme combining the High Frequency Resonance Technique (HFRT) and the Adaptive Line Enhancer (ALE) is investigated. An accelerometer is used to acquire data for this analysis, and experimental results have been obtained for outer race defects. The HFRT exploits the fact that much of the energy resulting from a defect impact manifests itself in the higher resonant frequencies of a system. Demodulation of these frequency bands through the envelope technique is then employed to gain further insight into the nature of the defect while further increasing the signal-to-noise ratio. If the defect is periodic, the defect frequency is then present in the spectrum of the enveloped signal. The ALE is used to enhance the envelope spectrum by reducing the broadband noise: it is implemented by using a delayed version of the signal together with the signal itself to decorrelate the wideband noise, which is then rejected by an adaptive filter based on the periodic information in the signal. The result is an enhanced envelope spectrum with clear peaks at the harmonics of a characteristic defect frequency. The results show the effectiveness of the methodology in determining both the severity and location of a defect; in two instances, a linear relationship between signal characteristics and defect size is indicated.
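The envelope (HFRT demodulation) step can be sketched on a simulated outer-race signal: periodic impacts excite a high-frequency resonance, and the magnitude of the analytic signal exposes the defect repetition frequency in the envelope spectrum. All parameter values below are illustrative, not taken from the paper's experiments:

```python
import numpy as np

# Simulated outer-race defect signal: an impact train at the defect
# frequency excites a decaying high-frequency resonance. The envelope is
# the magnitude of the analytic signal (Hilbert transform via FFT), and
# its spectrum peaks at the defect frequency and its harmonics.

fs = 20_000                                   # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
f_defect = 90.0                               # outer-race defect frequency, Hz
f_res = 4_000.0                               # excited structural resonance, Hz

impacts = (np.sin(2 * np.pi * f_defect * t) > 0.999).astype(float)
burst = np.exp(-2_000.0 * t[:200]) * np.sin(2 * np.pi * f_res * t[:200])
signal = np.convolve(impacts, burst, mode="same")

# Analytic signal via the FFT construction of the Hilbert transform.
n = len(signal)
h = np.zeros(n)
h[0] = h[n // 2] = 1.0
h[1:n // 2] = 2.0
envelope = np.abs(np.fft.ifft(np.fft.fft(signal) * h))

# Envelope spectrum: the defect frequency stands out below the resonance.
env_spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(n, 1 / fs)
peak_freq = freqs[np.argmax(env_spec[:300])]  # search below 300 Hz
```

In the paper's scheme, the ALE would then be applied to this envelope signal to suppress the remaining broadband noise before the spectrum is inspected.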
Abstract:
Chaotic behaviour is one of the hardest problems that can occur in nonlinear dynamical systems with severe nonlinearities: it makes the system's responses unpredictable, so that they behave similarly to noise. In some applications it should be avoided. One approach to detecting chaotic behaviour is finding the Lyapunov exponent by examining the dynamical equation of the system, which requires a model of the system. The goal of this study is the diagnosis of chaotic behaviour by exploring only the data (signal), without using any dynamical model of the system. In this work two methods are tested on time series data collected from the sensors of an AMB (Active Magnetic Bearing) system. The first method finds the largest Lyapunov exponent using the Rosenstein method. The second method is the 0-1 test for identifying chaotic behaviour. These two methods are used to detect whether the data is chaotic. Using the Rosenstein method requires finding the minimum embedding dimension, for which the Cao method is used. The Cao method does not give just the minimum embedding dimension; it also gives the order of the nonlinear dynamical equation of the system, and it shows how the system's signals are corrupted with noise. At the end of this research a test called the runs test is introduced to show that the data is not excessively noisy.
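A minimal single-parameter version of the 0-1 test can be sketched as follows (practical implementations average the result over many values of the constant c): the signal drives a walk in a plane, and the growth of the walk's mean-square displacement separates regular from chaotic data:

```python
import numpy as np

# Minimal sketch of the 0-1 test for chaos (Gottwald-Melbourne). The
# signal drives a walk in the (p, q) plane: for regular dynamics the walk
# stays bounded and K is near 0; for chaotic dynamics it diffuses and K
# is near 1. This single-c version is illustrative only.

def zero_one_test(x, c=1.7, n_frac=10):
    x = np.asarray(x, dtype=float)
    j = np.arange(1, len(x) + 1)
    p = np.cumsum(x * np.cos(j * c))          # translation variables
    q = np.cumsum(x * np.sin(j * c))
    lags = np.arange(1, len(x) // n_frac + 1)
    # Mean-square displacement of the walk as a function of the lag.
    M = np.array([np.mean((p[k:] - p[:-k]) ** 2 + (q[k:] - q[:-k]) ** 2)
                  for k in lags])
    # K: linear correlation between M and the lag; ~1 for chaos.
    return np.corrcoef(lags, M)[0, 1]

# A chaotic logistic-map series versus a periodic signal:
x = np.empty(2000)
x[0] = 0.4
for i in range(1999):
    x[i + 1] = 4.0 * x[i] * (1.0 - x[i])
k_chaos = zero_one_test(x)
k_regular = zero_one_test(np.sin(0.3 * np.arange(2000)))
```

Applied to measured AMB sensor data, the same statistic would be computed on the recorded time series directly, with no dynamical model of the bearing required.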
Abstract:
Longitudinal surveys are increasingly used to collect event history data on person-specific processes such as transitions between labour market states. Survey-based event history data pose a number of challenges for statistical analysis. These challenges include survey errors due to sampling, non-response, attrition and measurement. This study deals with non-response, attrition and measurement errors in event history data and the bias they cause in event history analysis. The study also discusses some choices faced by a researcher using longitudinal survey data for event history analysis and demonstrates their effects. These choices include whether a design-based or a model-based approach is taken, which subset of data to use and, if a design-based approach is taken, which weights to use. The study takes advantage of the possibility of using combined longitudinal survey register data. The Finnish subset of the European Community Household Panel (FI ECHP) survey for waves 1–5 was linked at the person level with longitudinal register data. Unemployment spells were used as the study variables of interest. Lastly, a simulation study was conducted in order to assess the statistical properties of the Inverse Probability of Censoring Weighting (IPCW) method in a survey data context. The study shows how combined longitudinal survey register data can be used to analyse and compare the non-response and attrition processes, test the missingness mechanism type and estimate the size of the bias due to non-response and attrition. In our empirical analysis, initial non-response turned out to be a more important source of bias than attrition. Reported unemployment spells were subject to seam effects, omissions and, to a lesser extent, overreporting. The use of proxy interviews tended to cause spell omissions. An often-ignored phenomenon, classification error in reported spell outcomes, was also found in the data.
Neither the Missing At Random (MAR) assumption about non-response and attrition mechanisms, nor the classical assumptions about measurement errors, turned out to be valid. Both measurement errors in spell durations and spell outcomes were found to cause bias in estimates from event history models. Low measurement accuracy affected the estimates of baseline hazard most. The design-based estimates based on data from respondents to all waves of interest and weighted by the last wave weights displayed the largest bias. Using all the available data, including the spells by attriters until the time of attrition, helped to reduce attrition bias. Lastly, the simulation study showed that the IPCW correction to design weights reduces bias due to dependent censoring in design-based Kaplan-Meier and Cox proportional hazard model estimators. The study discusses implications of the results for survey organisations collecting event history data, researchers using surveys for event history analysis, and researchers who develop methods to correct for non-sampling biases in event history data.
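The IPCW idea can be sketched in a few lines: each subject still at risk is up-weighted by the inverse of its estimated probability of remaining uncensored, which removes the bias that dependent censoring (such as attrition) induces in a Kaplan-Meier estimate. The censoring probabilities are taken as given here; in practice they come from a model of the drop-out process:

```python
# Sketch of an IPCW-weighted Kaplan-Meier estimator. With unit weights it
# reduces to the ordinary Kaplan-Meier estimator; with weights from a
# drop-out model it corrects for dependent censoring. Illustrative only.

def ipcw_kaplan_meier(times, events, censor_surv):
    """times: event/censoring times; events: 1 = event, 0 = censored;
    censor_surv: P(still uncensored at t) per subject.
    Returns a list of (t, S(t)) pairs at the event times."""
    data = sorted(zip(times, events, censor_surv))
    surv, curve = 1.0, []
    for t in sorted({u for u, e, _ in data if e == 1}):
        at_risk = sum(1.0 / g for (u, e, g) in data if u >= t)
        died = sum(1.0 / g for (u, e, g) in data if u == t and e == 1)
        surv *= 1.0 - died / at_risk
        curve.append((t, surv))
    return curve

times = [2, 3, 3, 5, 6, 7]
events = [1, 0, 1, 1, 0, 1]
# Unit weights reproduce the ordinary Kaplan-Meier curve:
km = ipcw_kaplan_meier(times, events, [1.0] * 6)
```

The same weighting carries over to the Cox proportional hazards setting, which is how the simulation study above assesses the correction to design weights.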
Abstract:
This thesis discusses the opportunities and challenges of cloud computing technology in healthcare information systems by reviewing the existing literature on cloud computing and healthcare information systems and the impact of cloud computing technology on the healthcare industry. The review shows that if the problems related to data security are solved, cloud computing will positively transform healthcare institutions by benefiting both the healthcare IT infrastructure and healthcare services. Therefore, this thesis explores the opportunities and challenges associated with cloud computing in the context of Finland, in order to help healthcare organizations and stakeholders determine their direction when deciding to adopt cloud technology in their information systems.
Abstract:
The objective of this master’s thesis is to create an investment calculation model that makes it possible to determine whether the ski resort business can be profitable. The ultimate goal is to describe, with the help of theoretical knowledge, interviews and the investment calculation model, how a ski resort can be operated profitably and what the critical success factors are for achieving this goal. The thesis is carried out as qualitative research, supported by the necessary constructive calculations. The client company has provided valuable insights and material for this thesis. The theoretical part examines the steps of developing a business plan, investment components and methods, as well as sensitivity analysis; it is based on articles, textbooks, interviews and research. The empirical part of the thesis is assembled by benchmarking other Finnish ski resorts of the same size, conducting interviews and using the investment calculation model. The empirical part provides comprehensive information about the ski resort industry, the future of the project, the business plan and the profitability calculations. As the result of this thesis, an investment calculation model was formed that makes it possible to simulate different scenarios for a ski resort project. The model was used to form a picture of the kind of scenario in which the ski resort business would be profitable and what the critical success factors are in achieving this aim.
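The core of such an investment calculation model is a net present value computation evaluated under alternative scenarios. The sketch below uses invented placeholder figures (investment, ticket price, costs, discount rate), not the client company's data:

```python
# Minimal sketch of a scenario-based investment calculation: the net
# present value of a ski resort project under alternative visitor
# scenarios. All figures are invented placeholders for illustration.

def npv(rate, cash_flows):
    """cash_flows[0] is the (negative) initial investment at t = 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def ski_resort_npv(visitors_per_year, ticket_price=35.0,
                   fixed_costs=600_000.0, variable_cost=8.0,
                   investment=3_000_000.0, years=15, rate=0.08):
    """NPV of the project given an annual visitor count."""
    annual = visitors_per_year * (ticket_price - variable_cost) - fixed_costs
    return npv(rate, [-investment] + [annual] * years)

scenarios = {"pessimistic": 25_000, "base": 40_000, "optimistic": 55_000}
results = {name: ski_resort_npv(v) for name, v in scenarios.items()}
viable = results["base"] > 0 > results["pessimistic"]
```

Sensitivity analysis then amounts to re-evaluating the NPV while varying one input at a time (visitor count, ticket price, discount rate), which directly identifies the critical success factors.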
Abstract:
Maritime safety is an issue that has gained a lot of attention in the Baltic Sea area due to the dense maritime traffic and the transportation of oil in the area. Much effort has been devoted to enhancing maritime safety there. The risk exists that excessive legislation and other requirements mean more costs for limited benefit. In order to utilize both public and private resources efficiently, awareness is required of what kind of costs maritime safety policy instruments cause and whether the costs are in proportion to the benefits. The aim of this report is to present an overview of the cost-effectiveness of maritime safety policy instruments, focusing on the cost aspect: what kind of costs maritime safety policy causes, to whom, what affects cost-effectiveness and how cost-effectiveness is studied. The study is based on a literature review and on interviews with Finnish maritime experts. The results of this study imply that cost-effectiveness is a complicated issue to evaluate. There are no uniform practices for which costs and benefits should be included in the evaluation and how they should be valued. One of the challenges is how to measure costs and benefits over the course of a longer time period. Often a lack of data erodes the reliability of the evaluation. In the prevention of maritime accidents, costs typically include investments in ship structures or equipment, as well as maintenance and labor costs. Even large investments may be justifiable if they provide correspondingly significant improvements to maritime safety. Measures are cost-effective only if they are implemented properly. Cost-effectiveness is decreased if a measure causes overlapping or repetitious work. Cost-effectiveness is also decreased if the technology is not user-friendly, or if it is soon replaced with a new technology or another new appliance.
In future studies on the cost-effectiveness of maritime safety policy, it is important to acknowledge the dependency between different policy instruments and the uncertainty of the factors affecting cost-effectiveness. The costs of a single measure are rarely significant in relative terms, and the effect of each measure on safety tends to be positive. The challenge is to rank the measures and to find the most effective combination of different policy instruments. The greatest potential for analysing the cost-effectiveness of individual measures lies in their implementation in clearly defined risk situations, in which different measures are true alternatives to each other. Overall, maritime safety measures do not seem to be considered burdensome for the shipping industry in Finland at the moment. In general, actors in the Finnish shipping industry seem to find maintaining a high safety level important and act accordingly.