57 results for A Model for Costing Absenteeism in Hotels
Abstract:
The aim of this thesis is to design the implementation principles of a new production cost accounting model for the company. In the current production cost model, the reliability of the data and the truthful allocation of costs to products were seen as challenging, so developing a new model first requires identifying the shortcomings of the current one. Designing a new costing model was considered very necessary in the company. The thesis applies a constructive research method, and the material consists of cost accounting literature, articles and theses as well as the company's own material and interviews with personnel. As a result, it can be stated that the company needs a new production cost accounting model to replace the current one, whose accounting principles are not at a good level. The new implementation principles bring clarity and comprehensibility to the calculations and make it possible to monitor the production costs of products at a more detailed level. This supports the development of operations and improves the usability of the information in decision-making. The functionality of the new model's principles is tested preliminarily on a few products. The model discussed in this thesis is only one part of the company's product costing, so to obtain more accurate product-specific costs the company should refine the entire product costing model.
Abstract:
This study concerns performance measurement and management in a collaborative network. Collaboration between companies has increased in recent years due to the turbulent operating environment. The literature shows that there is a need for more comprehensive research on performance measurement in networks and the use of measurement information in their management. This study examines the development process and uses of a performance measurement system supporting performance management in a collaborative network. There are two main research questions: how to design a performance measurement system for a collaborative network, and how to manage performance in a collaborative network. The work can be characterised as a qualitative single case study. The empirical data were collected in a Finnish collaborative network consisting of a leading company and a reseller network. The work is based on five research articles applying various research methods. The research questions are examined at the network level and at the level of a single network partner. The study contributes to the earlier literature by producing a new and deeper understanding of network-level performance measurement and management. A three-step process model is presented to support the design of the performance measurement system; the process model has been tested in another collaborative network. The study also examines the factors affecting the design process of the measurement system. The results show that a participatory development style, network culture, and outside facilitators have a positive effect on the design process. The study increases understanding of how to manage performance in a collaborative network and what kinds of uses of performance information can be identified in one. The results show that a performance measurement system is an applicable tool for managing the performance of a network.
The results reveal that trust and openness increased during the utilisation of the performance measurement system, and operations became more transparent. The study also presents a management model that evaluates the maturity of performance management in a collaborative network. The model is a practical tool that helps to analyse the current stage of the performance management of a collaborative network and to develop it further.
Abstract:
State-of-the-art predictions of atmospheric states rely on large-scale numerical models of chaotic systems. This dissertation studies numerical methods for state and parameter estimation in such systems. The motivation comes from weather and climate models, and a methodological perspective is adopted. The dissertation comprises three parts: state estimation, parameter estimation, and chemical data assimilation with real atmospheric satellite data. In the state estimation part, a new filtering technique, based on a combination of ensemble and variational Kalman filtering approaches, is presented, tested and discussed. This new filter is developed for large-scale Kalman filtering applications. In the parameter estimation part, three different techniques for parameter estimation in chaotic systems are considered. The methods are studied using the parameterized Lorenz 95 system, a benchmark model for data assimilation. In addition, a dilemma related to the uniqueness of weather and climate model closure parameters is discussed. In the data-oriented part, data from the Global Ozone Monitoring by Occultation of Stars (GOMOS) satellite instrument are considered, and an alternative algorithm for retrieving atmospheric parameters from the measurements is presented. The validation study presents the first global comparisons between two unique satellite-borne datasets of vertical profiles of nitrogen trioxide (NO3), retrieved using the GOMOS and Stratospheric Aerosol and Gas Experiment III (SAGE III) satellite instruments. The GOMOS NO3 observations are also used in a chemical state estimation study to retrieve stratospheric temperature profiles. The main result of this dissertation is the formulation of likelihood calculations via Kalman filtering outputs. The concept has previously been used together with stochastic differential equations and in time series analysis.
In this work, the concept is applied to chaotic dynamical systems and used together with Markov chain Monte Carlo (MCMC) methods for statistical analysis. In particular, this methodology is advocated for use in numerical weather prediction (NWP) and climate model applications. In addition, the concept is shown to be useful in estimating the filter-specific parameters related, e.g., to model error covariance matrix parameters.
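The likelihood-via-Kalman-filter idea combined with MCMC can be sketched on a toy problem. The following is a minimal illustration, not the dissertation's actual setup: it uses a scalar linear-Gaussian state-space model (rather than a chaotic NWP model), accumulates the log-likelihood from the filter innovations, and feeds it to a random-walk Metropolis sampler. All noise variances and the parameter `theta` are invented for illustration.

```python
import numpy as np

def kf_loglik(theta, y, x0=0.0, P0=1.0, q=0.1, r=0.5):
    """Log-likelihood of observations y under the scalar model
    x_k = theta*x_{k-1} + w_k, y_k = x_k + v_k, accumulated
    from the Kalman filter innovations."""
    x, P, ll = x0, P0, 0.0
    for yk in y:
        x, P = theta * x, theta**2 * P + q      # predict
        v, S = yk - x, P + r                    # innovation and its variance
        ll += -0.5 * (np.log(2 * np.pi * S) + v**2 / S)
        K = P / S                               # Kalman gain
        x, P = x + K * v, (1 - K) * P           # update
    return ll

# synthetic data generated with theta_true = 0.8
rng = np.random.default_rng(0)
theta_true, x, y = 0.8, 0.0, []
for _ in range(200):
    x = theta_true * x + rng.normal(0, np.sqrt(0.1))
    y.append(x + rng.normal(0, np.sqrt(0.5)))

# random-walk Metropolis over theta, using the filter likelihood
theta, ll, chain = 0.5, kf_loglik(0.5, y), []
for _ in range(2000):
    prop = theta + rng.normal(0, 0.05)
    ll_prop = kf_loglik(prop, y)
    if np.log(rng.uniform()) < ll_prop - ll:    # accept/reject
        theta, ll = prop, ll_prop
    chain.append(theta)

print(np.mean(chain[500:]))  # posterior mean estimate of theta
</antml_code_fence>

The same pattern carries over to the chaotic case in principle: only the forecast step and the likelihood bookkeeping change, while the Metropolis loop stays untouched.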
Abstract:
Hydrogen stratification and atmosphere mixing are very important phenomena in nuclear reactor containments when severe accidents are studied and simulated. Hydrogen generation, distribution and accumulation in certain parts of the containment may pose a great risk of a pressure increase induced by hydrogen combustion, and thus challenge the integrity of the NPP containment. Accurate prediction of hydrogen distribution is important for the safety design of an NPP. Modelling methods typically used for containment analyses include both lumped parameter and field codes. The lumped parameter method is universally used in containment codes because of its versatility, flexibility and simplicity. It allows fast, full-scale simulations in which different containment geometries with the relevant engineered safety features can be modelled. Lumped parameter gas stratification and mixing modelling methods are presented and discussed in this master's thesis. Experimental research is widely used in containment analyses. The HM-2 experiment on hydrogen stratification and mixing, conducted at the THAI facility in Germany, is calculated with the APROS lumped parameter containment package and the APROS 6-equation thermal hydraulic model. The main purpose was to study whether the convection term included in the momentum conservation equation of the 6-equation model gives any remarkable advantage over the simplified lumped parameter approach. Finally, a simple containment test case (a high steam release into a narrow steam generator room inside a large dry containment) was calculated with both APROS models. In this case, the aim was to determine the extreme containment conditions in which the effect of the convection term was expected to be largest. The calculation results showed that both the APROS containment model and the 6-equation model could reproduce the hydrogen stratification in the THAI test well, provided that the vertical nodalisation was dense enough.
However, in more complicated cases, numerical diffusion may distort the results. The calculation of light gas stratification could probably be improved by applying a second-order discretisation scheme to the modelling of gas flows. If the gas flows are relatively strong, the convection term of the momentum equation is necessary to model the pressure differences between adjacent nodes reasonably.
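The role of the discretisation order can be illustrated with a toy 1-D advection problem (a generic sketch, not APROS; the grid, Courant number and front location are invented). First-order upwind differencing smears a sharp concentration front through numerical diffusion, while a second-order scheme such as Lax-Wendroff keeps the front noticeably steeper:

```python
import numpy as np

# 1-D advection u_t + a*u_x = 0 of a sharp "light gas" front
n, a, dx, dt, steps = 200, 1.0, 1.0, 0.5, 100
c = a * dt / dx                              # Courant number (stable: c <= 1)

u0 = np.where(np.arange(n) < 50, 1.0, 0.0)   # initial stratified step profile
up, lw = u0.copy(), u0.copy()

for _ in range(steps):
    # first-order upwind: introduces numerical diffusion ~ a*dx*(1-c)/2
    up[1:] = up[1:] - c * (up[1:] - up[:-1])
    # second-order Lax-Wendroff: much less diffusive (at the cost of wiggles)
    lw[1:-1] = (lw[1:-1] - 0.5 * c * (lw[2:] - lw[:-2])
                + 0.5 * c**2 * (lw[2:] - 2 * lw[1:-1] + lw[:-2]))

# steepness of the advected front: larger means a sharper, less smeared front
grad_up = np.max(np.abs(np.diff(up)))
grad_lw = np.max(np.abs(np.diff(lw)))
print(grad_up, grad_lw)  # the second-order front stays steeper
</antml_code_fence>

This is the mechanism behind the remark above: with first-order schemes a stratified layer is gradually mixed away by the scheme itself, not by the physics.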
Abstract:
Continuous loading and unloading can cause the breakdown of cranes. In seeking a solution to this problem, the use of intelligent control systems for improving the fatigue life of cranes has been under study in mechatronics since 1994. This research focuses on using neural networks to develop an algorithm that maps the stresses on a crane. The intelligent algorithm was designed to be part of the crane's control system. The design process started with SolidWorks and ANSYS models and co-simulation in MSC Adams incorporated into MATLAB/Simulink, and finally a MATLAB neural network (NN) was used for the optimization process. The flexibility of the boom accounted for the accuracy of the maximum stress results in the ADAMS model: the flexible body created in ANSYS produced more accurate results than the flexibility model built in ADAMS/View using discrete links. Compatibility between the ADAMS and ANSYS software was paramount to the efficiency and accuracy of the results. Von Mises stress analysis was the most suitable for this thesis because the hydraulic boom was made from construction steel FE-510 of steel grade S355 with a yield strength of 355 MPa; the von Mises criterion was appropriate for further analysis due to the ductility of the material and the repeated tensile and shear loading. The neural network predictions of the maximum stresses were then compared with the co-simulation results for accuracy, and the comparison showed that the results obtained from the neural network model were sufficiently accurate in predicting the maximum stresses on the boom compared with the co-simulation.
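The surrogate idea, training a network on co-simulation samples and then predicting maximum stress directly, can be sketched as follows. The data here are synthetic stand-ins (the payload/angle inputs and the stress formula are invented for illustration), and scikit-learn's `MLPRegressor` is used in place of the MATLAB NN toolbox:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# synthetic stand-in for co-simulation samples:
# inputs = (payload mass, boom angle), output = max von Mises stress
mass = rng.uniform(100, 1000, 500)                       # kg
angle = rng.uniform(0.1, 1.4, 500)                       # rad
stress = 0.4 * mass * np.cos(angle) + rng.normal(0, 5, 500)  # MPa, illustrative

X = np.column_stack([mass, angle])

# small feed-forward network as a stress surrogate; inputs are standardized
nn = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0),
)
nn.fit(X[:400], stress[:400])                            # train on 400 samples

pred = nn.predict(X[400:])                               # predict held-out 100
rmse = np.sqrt(np.mean((pred - stress[400:]) ** 2))
print(rmse)  # MPa
</antml_code_fence>

Once trained, the surrogate returns a stress estimate in microseconds, which is what makes it usable inside a control loop where a full co-simulation would be far too slow.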
Abstract:
The purpose of this study is to examine the current state of product cost accounting in one business unit of a global group. In addition, the study examines how product-specific cost monitoring can be developed using a model motor concept. The study was carried out as a qualitative case study based on the data of a single organization. The sources of the theoretical part consist mainly of basic works and scientific articles on cost accounting and cost management. The empirical data are based on interviews, information systems, and familiarization with the organization. The study found that the business unit does not currently monitor costs at the level of individual products; instead, cost monitoring is done at the level of the average costs of larger aggregates. Product costing is implemented with a method classified as traditional, in which indirect costs are allocated using overhead percentage rates. Based on the study, there are indications of cost distortion in the overhead allocation bases. A cost model based on the model motor concept was developed for monitoring product-level costs; it tracks in detail the cost development and cost structure of selected products. The model can increase product-level cost awareness in the business unit and reveal trends in product-specific costs. The model uses data from the existing cost accounting system, and for this reason indications of cost distortion are also visible in the model's cost data.
Abstract:
The aim of this thesis was to build a practical life-cycle cost model for estimating the life-cycle costs of new machine investments. The life-cycle cost model was built with Microsoft Excel. The goal of the model was to obtain more accurate information on the life-cycle costs caused by the machines. No such model has been in use before; instead, the relative merits of investments have largely been compared on the basis of payback period and purchase price. The thesis applies theories of investment and life-cycle costing to the construction of the model. The empirical part describes how the life-cycle cost model was built and what features it contains; it also tests the functionality of the model and calculates the life-cycle costs of one machine investment. The theoretical part also discusses life-cycle revenues with a view to further development of the model. The result is a usable life-cycle cost model whose strengths are its ease of use and simplicity. The model can be applied to several machine investment alternatives simultaneously and can easily be extended in the future.
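The core of such a life-cycle cost model is a discounted-cash-flow sum: purchase price plus the present value of annual operating and maintenance costs, minus the discounted salvage value. A minimal sketch (in Python rather than Excel; all prices, rates and lifetimes are invented for illustration):

```python
def life_cycle_cost(purchase, annual_costs, rate=0.08, salvage=0.0):
    """Present value of a machine's life-cycle costs: purchase price
    plus discounted annual operating/maintenance costs, minus the
    discounted salvage value at the end of the life."""
    n = len(annual_costs)
    pv_costs = sum(c / (1 + rate) ** (t + 1) for t, c in enumerate(annual_costs))
    pv_salvage = salvage / (1 + rate) ** n
    return purchase + pv_costs - pv_salvage

# two illustrative machine alternatives over a 10-year life:
# cheaper to buy but costlier to run, versus the opposite
cheap = life_cycle_cost(100_000, [18_000] * 10, salvage=5_000)
pricey = life_cycle_cost(140_000, [10_000] * 10, salvage=15_000)
print(round(cheap), round(pricey))
</antml_code_fence>

Comparing on purchase price alone would pick the first machine; the life-cycle view can reverse that ranking, which is exactly the blind spot the model is meant to remove.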
Abstract:
The objective of this Master's thesis was to develop a productized service for implementing an activity-based costing system. In this work, implementation meant deploying the calculations, i.e. building the costing model in the activity-based costing system. The work was commissioned by Solenovo Oy, and action research was used as the research method. The thesis consists of a theoretical framework and an empirical part. The theoretical framework comprises three chapters, each covering a different topic. The first deals with activity-based costing, in particular its strengths, weaknesses and implementation, as well as its differences from and similarities to traditional cost accounting. The second deals with services and their development; of the service development methods, the one chosen for this work, the service blueprint, is discussed, including its structure and development. The third part of the theoretical framework is the productization of services, which in this work focused especially on productizing expert services. As its main result, the work achieved its most important objective: a productized service for implementing an activity-based costing system was developed. The service was developed using the service blueprint method and productized where applicable. Because this was a completely new service, productization focused on planning and internal productization. During the work, the requirements and content of the service were defined using modularization, instructions were prepared for delivering the service, the risks and challenges of the different phases of the service were assessed, and further development needs of the service were defined.
Abstract:
This thesis studies the suitability of a recent data assimilation method, the Variational Ensemble Kalman Filter (VEnKF), for real-life fluid dynamics problems in hydrology. VEnKF combines a variational formulation of the data assimilation problem, based on minimizing an energy functional, with an ensemble Kalman filter approximation to the Hessian matrix, which also serves as an approximation to the inverse of the error covariance matrix. One of the significant features of VEnKF is the very frequent re-sampling of the ensemble: resampling is done at every observation step. This unusual feature is taken even further by observation interpolation, which is found beneficial for numerical stability; in that case the ensemble is resampled at every time step of the numerical model. VEnKF is applied in several configurations to data from a real laboratory-scale dam break problem modelled with the shallow water equations. It is also tested on a two-layer quasi-geostrophic atmospheric flow problem. In both cases VEnKF proves to be an efficient and accurate data assimilation method that renders the analysis more realistic than the numerical model alone. It also proves robust against filter instability thanks to its adaptive nature.
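The frequent-resampling idea can be sketched with a generic stochastic ensemble Kalman filter on a scalar random-walk model. This is not VEnKF itself (no variational minimization or Hessian approximation), only an illustration of redrawing the ensemble from the analysis distribution at every observation step; all noise variances are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# scalar random-walk truth observed with noise; q = model noise, r = obs noise
N, steps, q, r = 50, 100, 0.1, 0.5
truth, ens, err = 0.0, rng.normal(0, 1, N), []

for _ in range(steps):
    truth += rng.normal(0, np.sqrt(q))
    y = truth + rng.normal(0, np.sqrt(r))

    ens = ens + rng.normal(0, np.sqrt(q), N)        # ensemble forecast
    pert_y = y + rng.normal(0, np.sqrt(r), N)       # perturbed observations
    P = np.var(ens, ddof=1)                         # forecast error variance
    K = P / (P + r)                                 # Kalman gain
    ens = ens + K * (pert_y - ens)                  # analysis update

    # re-sample the ensemble from the analysis mean/variance (the
    # frequent-resampling feature highlighted above)
    ens = np.mean(ens) + rng.normal(0, np.sqrt(np.var(ens, ddof=1)), N)

    err.append(np.mean(ens) - truth)

rmse = np.sqrt(np.mean(np.square(err)))
print(rmse)  # analysis error, well below the raw observation noise
</antml_code_fence>

Redrawing the ensemble each step discards any degenerate spread the update may have produced, which is one intuition for why the frequent resampling helps against filter instability.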
Abstract:
Business plans are made when establishing a new company or when organizations launch new products or services. This Master's thesis examined which elements are included and emphasized in a business plan. Because a business plan is a wide-ranging document that can also contain company-specific information, the literature review was restricted to three areas investigated in the related literature and articles: market segmentation and targeting, the competitive environment, and market positioning and strategy. Different business plan models were investigated by interviewing companies that operate in different industry sectors. The models were compared with each other and with the findings from the literature. Based on the interview results and literature findings, a business plan for fibre-based packaging was created, covering the three selected areas. It was found that the selected business plan elements appear in the interviewed companies' business plans. Market segmentation was done by comparing the market share to the known total market size. For analyzing the competitive environment, no single established model was in use; instead, applicable parts of both the SWOT analysis and Porter's five forces model were used. Based on the interview results, it can be stated that a company or organization should find and build its own model for business plans, and in order to benefit in future planning, it should use the same model for a long time.
Abstract:
This research studied project performance measurement from the perspective of strategic management. The objective was to find a generic model for project performance measurement that emphasizes strategy and decision making. The research followed the guidelines of the constructive research methodology. As a result, the study suggests a model that measures projects with multiple metrics both during and after the project. Measurement after the project is suggested to be linked to the strategic performance measures of the company, and it should be conducted through centralized project portfolio management, e.g. by the project management office of the organization. After the project, the metrics measure the project's actual benefit realization; during the project, the metrics are universal and measure the accomplished objectives in relation to costs, schedule and internal resource usage. The outcomes of these measures should be forecasted using qualitative or stochastic methods. A solid theoretical background for the model was found in the literature covering performance measurement, projects and uncertainty. The study states that the model can be implemented in companies; this statement is supported by empirical evidence from a single case study. Gathering empirical evidence on the actual usefulness of the model in companies is left to future evaluative research.
Abstract:
Over time the demand for quantitative portfolio management has increased among financial institutions, but there is still a lack of practical tools. In 2008 the EDHEC Risk and Asset Management Research Centre conducted a survey of European investment practices. It revealed that the majority of asset and fund management companies, pension funds and institutional investors do not use more sophisticated models to compensate for the flaws of Markowitz mean-variance portfolio optimization. Furthermore, tactical asset allocation managers employ a variety of methods to estimate the return and risk of assets, but they also need sophisticated portfolio management models to outperform their benchmarks. Recent development in portfolio management suggests that new innovations are slowly gaining ground, but they still need to be studied carefully. This thesis aims to provide a practical tactical asset allocation (TAA) application of the Black–Litterman (B–L) approach and an unbiased evaluation of the B–L model's qualities. The mean-variance framework, issues related to asset allocation decisions, and return forecasting are examined carefully to uncover the issues affecting active portfolio management. European fixed income data is employed in an empirical study that examines whether a B–L model based TAA portfolio is able to outperform its strategic benchmark. The tactical asset allocation uses a vector autoregressive (VAR) model to create return forecasts from lagged values of the asset classes as well as economic variables. The sample data (31.12.1999–31.12.2012) is divided into two parts: the in-sample data is used for calibrating the strategic portfolio, and the out-of-sample period for testing the tactical portfolio against the strategic benchmark. The results show that the B–L model based tactical asset allocation outperforms the benchmark portfolio in terms of risk-adjusted return and mean excess return.
The VAR model is able to pick up changes in investor sentiment, and the B–L model adjusts the portfolio weights in a controlled manner. The TAA portfolio shows promise especially in moderately shifting allocation towards riskier assets when the market is turning bullish, but without overweighting investments with high beta. Based on the findings of this thesis, the Black–Litterman model offers a good platform for active asset managers to quantify their views on investments and implement their strategies. The B–L model shows potential and offers interesting research avenues. However, the success of tactical asset allocation remains highly dependent on the quality of the input estimates.
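The B–L posterior used in such a TAA setup follows the standard master formula, blending the equilibrium returns implied by market weights with the manager's views. A minimal sketch with invented numbers for three asset classes and one relative view:

```python
import numpy as np

# illustrative covariance matrix and market-cap weights for 3 asset classes
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
w_mkt = np.array([0.5, 0.3, 0.2])
delta, tau = 2.5, 0.05                     # risk aversion and scaling, illustrative

# equilibrium (implied) returns via reverse optimization: pi = delta * Sigma * w
pi = delta * Sigma @ w_mkt

# one view: asset 2 will outperform asset 1 by 2 percentage points
P = np.array([[-1.0, 1.0, 0.0]])           # view pick matrix
Q = np.array([0.02])                       # view value
Omega = tau * P @ Sigma @ P.T              # view uncertainty (common convention)

# Black-Litterman master formula for posterior expected returns:
# mu = [(tau*Sigma)^-1 + P' Omega^-1 P]^-1 [(tau*Sigma)^-1 pi + P' Omega^-1 Q]
A = np.linalg.inv(tau * Sigma)
mu_bl = np.linalg.solve(A + P.T @ np.linalg.inv(Omega) @ P,
                        A @ pi + P.T @ np.linalg.inv(Omega) @ Q)
print(mu_bl)
</antml_code_fence>

The posterior spread between assets 1 and 2 lands between the equilibrium spread and the stated view, which is the "controlled manner" referred to above: views tilt the weights without overriding the market prior.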