28 results for Models performance
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
This study aims to determine how supply chain performance can be measured in the case company. The Supply Chain Council (SCC) developed the Supply Chain Operations Reference (SCOR) model in 1996, which also enables performance measurement. The purpose of this study is to apply the performance measurement framework of the SCOR model in the case company. The work is a qualitative case study. The theoretical part mainly reviews the literature on supply chains and performance measurement. The construction of the measurement system begins with an introduction of the case company. The SCOR metrics have been built in the case company according to the SCC's recommendations, so that the results of the metrics would also be usable for benchmarking. The model includes 10 SCOR metrics as well as a few of Halton's own metrics. As a result, it can be seen that the SCOR model gives a good overview of supply chain performance, but the case company still needs to develop more informative metrics that would provide more detailed information for the company's management.
Abstract:
This thesis examines the suitability of VaR in foreign exchange rate risk management from the perspective of a European investor. Four different VaR models are evaluated to gain insight into whether VaR is a valuable tool for managing foreign exchange rate risk. The models evaluated are the historical method, the historical bootstrap method, the variance-covariance method and Monte Carlo simulation. The data are divided into emerging and developed market currencies to allow a more detailed analysis. The foreign exchange rate data in this thesis cover the period from 31 January 2000 to 30 April 2014. The results show that none of the above VaR models should be relied on as the sole tool in foreign exchange rate risk management. The variance-covariance method and Monte Carlo simulation perform poorest in both currency portfolios. Both historical methods performed better, but should also be regarded as complementary tools alongside other, more sophisticated analysis methods. A comparative study of VaR estimates and forward prices is also included in the thesis. It reveals that, despite the expensive hedging cost of emerging market currencies, the risk captured by VaR is more expensive still, and FX forward hedging is therefore recommended.
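The historical method evaluated above can be sketched in a few lines: the one-day VaR is read off as a quantile of the empirical loss distribution of past returns. The returns below are synthetic and purely illustrative, not the thesis sample.

```python
import random

def historical_var(returns, confidence=0.99):
    """One-day VaR by the historical method: the empirical quantile
    of the loss distribution (losses are negated returns)."""
    losses = sorted(-r for r in returns)
    idx = int(confidence * (len(losses) - 1))   # index of the quantile
    return losses[idx]

# Synthetic daily FX returns, for illustration only.
random.seed(1)
returns = [random.gauss(0.0, 0.01) for _ in range(3000)]
var_99 = historical_var(returns)
```

The bootstrap variant named in the abstract would resample these returns with replacement before taking the quantile; the variance-covariance and Monte Carlo methods instead impose a distributional assumption.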
Abstract:
The aim of the study is to determine whether equity funds investing in Finland exhibit performance persistence. The data consist of all Finnish equity funds that operated during the period 15 January 1998 - 13 January 2005. The data are free of survivorship bias. CAPM alpha as well as three- and four-factor alphas are used as performance measures. In the empirical part, the performance persistence of the equity funds is tested with Spearman's rank correlation test. Evidence of performance persistence remained weak, although it occurred sporadically with all performance measures for some combinations of ranking and investment periods. Measured by CAPM alpha, statistically significant performance persistence occurred clearly more often than with the other performance measures. The results support recent international studies according to which performance persistence often depends on the measurement method. Significance tests of the regression models used as performance measures show that the multifactor models explain equity fund returns better than the CAPM. The added variables significantly improve the explanatory power of the CAPM.
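The Spearman rank correlation test used in the empirical part can be illustrated as follows: funds are ranked by alpha in the ranking period and again in the subsequent investment period, and persistence shows up as a significantly positive rank correlation. The alphas below are made-up illustrative numbers, not the study's estimates.

```python
def ranks(xs):
    """0-based ranks of xs (assumes no ties)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def spearman_rho(x, y):
    """Spearman rank correlation via the classical 1 - 6*sum(d^2)/(n(n^2-1))."""
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

# Hypothetical fund alphas in a ranking period and in the subsequent
# investment period (illustrative values only).
ranking_alpha    = [0.8, 0.5, 0.3, 0.1, -0.2, -0.4]
investment_alpha = [0.6, 0.4, 0.1, 0.2, -0.1, -0.3]
rho = spearman_rho(ranking_alpha, investment_alpha)  # close to 1 => persistence
```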
Abstract:
The aim of this study is to present an Activity-Based Costing spreadsheet tool for analyzing logistics costs. The tool can be used both by customer companies and by logistics service providers. The study discusses the influence of different activity models on costs. Additionally, this paper discusses logistical performance across the total supply chain. This study is carried out using an analytical research approach, supplemented by literature material. The cost structure analysis is based on the theory of activity-based management. The study was limited to spare part logistics in the machine-shop industry. The outlines of logistics services and logistical performance discussed in this report are based on the new logistics business concept (LMS concept), which has been presented earlier in the Valssi project. One of the aims of this study is to increase awareness of the effect of different activity models on logistics costs. The report paints an overall picture of the business environment and the requirements for the new logistics concept.
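The core of an activity-based logistics costing tool is a simple mapping: each activity carries a cost driver rate, and a cost object (here one customer order) consumes a volume of each driver. A minimal sketch with hypothetical rates and volumes, not the Valssi figures:

```python
# Hypothetical activity driver rates (EUR per driver unit) and the
# driver volumes consumed by one customer order.
activity_rates = {
    "receiving":  4.0,   # per incoming order line
    "picking":    1.5,   # per pick
    "packing":    2.0,   # per parcel
    "transport": 25.0,   # per shipment
}
order_profile = {"receiving": 10, "picking": 40, "packing": 8, "transport": 2}

# Activity-based cost of the order: rate x volume, summed over activities.
logistics_cost = sum(activity_rates[a] * order_profile[a] for a in order_profile)
```

Changing the activity model means changing which activities appear and which drivers they are traced by, which is exactly the sensitivity the study examines.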
Abstract:
Due to intense international competition, demanding and sophisticated customers, and diverse, transformative technological change, organizations need to renew their products and services by allocating resources to research and development (R&D). Managing R&D is complex but vital for many organizations to survive in a dynamic, turbulent environment. Thus, the increased interest among decision-makers in finding the right performance measures for R&D is understandable. The measures or evaluation methods of R&D performance can be utilized for multiple purposes: for strategic control, for justifying the existence of R&D, for providing information and improving activities, as well as for motivating and benchmarking. Earlier research in the field of R&D performance analysis has generally focused either on the activities and the relevant factors and dimensions - e.g. strategic perspectives, purposes of measurement, levels of analysis, types of R&D or phases of the R&D process - prior to the selection of R&D performance measures, or on proposed principles or the actual implementation of the selection or design processes of R&D performance measures or measurement systems. This study aims at integrating the consideration of essential factors and dimensions of R&D performance analysis into developed selection processes of R&D measures, which have been applied in real-world organizations. The earlier models for corporate performance measurement found in the literature are to some extent adaptable also to the development of measurement systems and the selection of measures in R&D activities.
However, it is necessary to emphasize the special aspects of measuring R&D performance that make the development of new approaches, especially for R&D performance measure selection, necessary. First, the special characteristics of R&D - such as the long time lag between inputs and outcomes, as well as the overall complexity and difficult coordination of activities - give rise to R&D performance analysis problems, such as the need for more systematic, objective, balanced and multi-dimensional approaches to R&D measure selection, as well as the incompatibility of R&D measurement systems with other corporate measurement systems and vice versa. Secondly, the above-mentioned characteristics and challenges bring forth the significance of the influencing factors and dimensions that need to be recognized in order to derive the selection criteria for measures and choose the right R&D metrics, which is the most crucial step in the measurement system development process. The main purpose of this study is to support the management and control of the research and development activities of organizations by increasing the understanding of R&D performance analysis, clarifying the main factors related to the selection of R&D measures, and providing novel types of approaches and methods for systematizing the whole strategy- and business-based selection and development process of R&D indicators. The final aim of the research is to support management in their R&D decision-making with suitable, systematically chosen measures or evaluation methods of R&D performance. Thus, the emphasis in most sub-areas of the present research has been on promoting the selection and development process of R&D indicators with the help of different tools and decision support systems, i.e. the research has normative features, providing guidelines through novel types of approaches.
The gathering of data and the case studies conducted in metal and electronics industry companies, in the information and communications technology (ICT) sector, and in non-profit organizations helped to formulate a comprehensive picture of the main challenges of R&D performance analysis in different organizations. This is essential, as recognition of the most important problem areas is a crucial element in the constructive research approach utilized in this study. Multiple practical benefits regarding the defined problem areas could be found in the various constructed approaches presented in this dissertation: 1) the selection of R&D measures became more systematic when compared to the empirical analysis, as it was common that no systematic approaches had been utilized in the studied organizations earlier; 2) the evaluation methods or measures of R&D chosen with the help of the developed approaches can be utilized more directly in decision-making, because of the thorough consideration of the purpose of measurement, as well as other dimensions of measurement; 3) more balance in the set of R&D measures was desired and gained through the holistic approaches to the selection processes; and 4) more objectivity was gained through organizing the selection processes, as the earlier systems were considered subjective in many organizations. Scientifically, this dissertation aims to contribute to the present body of knowledge of R&D performance analysis by facilitating the handling of the versatility and challenges of R&D performance analysis, as well as the factors and dimensions influencing the selection of R&D performance measures, and by integrating these aspects into the developed novel types of approaches, methods and tools in the selection processes of R&D measures, applied in real-world organizations.
In the whole research, facilitation of dealing with the versatility and challenges in R&D performance analysis, as well as the factors and dimensions influencing the R&D performance measure selection are strongly integrated with the constructed approaches. Thus, the research meets the above-mentioned purposes and objectives of the dissertation from the scientific as well as from the practical point of view.
Abstract:
Gas-liquid mass transfer is an important issue in the design and operation of many chemical unit operations. Despite its importance, the evaluation of gas-liquid mass transfer is not straightforward due to the complex nature of the phenomena involved. In this thesis, gas-liquid mass transfer was evaluated in three different gas-liquid reactors in the traditional way, by measuring the volumetric mass transfer coefficient (kLa). The studied reactors were a bubble column with a T-junction two-phase nozzle for gas dispersion, an industrial-scale bubble column reactor for the oxidation of tetrahydroanthrahydroquinone, and a concurrent downflow structured bed. The main drawback of this approach is that the obtained correlations give only the average volumetric mass transfer coefficient, which depends on average conditions. Moreover, the obtained correlations are valid only for the studied geometry and for the chemical system used in the measurements. In principle, a more fundamental approach is to estimate the interfacial area available for mass transfer from bubble size distributions obtained by solving population balance equations. This approach has been used in this thesis by developing a population balance model for a bubble column together with phenomenological models for bubble breakage and coalescence. The parameters of the bubble breakage rate and coalescence rate models were estimated by comparing the measured and calculated bubble sizes. The coalescence models always have at least one experimental parameter, because bubble coalescence depends on liquid composition in a way which is difficult to evaluate using known physical properties. The coalescence properties of some model solutions were evaluated by measuring the time that a bubble rests at the free gas-liquid interface before coalescing (the so-called persistence time or rest time). The measured persistence times range from 10 ms up to 15 s depending on the solution.
The coalescence was never found to be instantaneous. The bubble oscillates up and down at the interface at least a couple of times before coalescence takes place. The measured persistence times were compared to coalescence times obtained by parameter fitting using measured bubble size distributions in a bubble column and a bubble column population balance model. For short persistence times, the persistence and coalescence times are in good agreement. For longer persistence times, however, the persistence times are at least an order of magnitude longer than the corresponding coalescence times from parameter fitting. This discrepancy may be attributed to uncertainties in the estimation of energy dissipation rates, collision rates and mechanisms, and contact times of the bubbles.
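As an illustration of the traditional kLa evaluation referred to above, the common dynamic gassing-in method fits C(t) = C*(1 - exp(-kLa*t)) to measured dissolved-gas concentrations; a log-linear least-squares fit then recovers kLa. The data below are synthetic, generated from a known kLa, and are not the thesis measurements.

```python
import math

def fit_kla(times, conc, c_sat):
    """kLa from dynamic gassing-in data via the log-linear form
    ln(1 - C/C*) = -kLa * t  (least-squares slope through the origin)."""
    xs = list(times)
    ys = [math.log(1.0 - c / c_sat) for c in conc]
    slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return -slope

# Synthetic data generated with kLa = 0.02 1/s (illustration only).
c_sat = 8.0                              # saturation concentration, mg/l
times = [10.0, 20.0, 40.0, 60.0, 90.0]   # s
conc = [c_sat * (1.0 - math.exp(-0.02 * t)) for t in times]
kla = fit_kla(times, conc, c_sat)        # recovers ~0.02 1/s
```

Such a fit yields only the reactor-averaged kLa, which is exactly the limitation the population balance approach in the thesis is meant to overcome.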
Abstract:
The thesis examines the free zone concept as part of companies' international supply chains. The aim is to find ways in which the attractiveness of a free zone can be increased from the companies' perspective, and to identify what kind of business companies can conduct in a free zone. The thesis looks for factors that influence the success of a free zone and that could be applicable to the South-East Finland - Russia border area, taking the prevailing conditions and legislation into account as limiting factors. Success factors and business models are sought by briefly studying and analysing a few existing, operational free zones. The harmonisation of EU customs law and the liberalisation of international trade reduce the traditional significance of a free zone as a duty-free area. Instead, free zones increasingly operate as logistics centres in international trade and offer services with which companies can improve their logistical competitiveness. Networking, satellite solutions and cooperation are means by which the various logistics service providers in the South-East Finland region can improve their performance and flexibility in the international supply chain.
Abstract:
After the restructuring process of the power supply industry, which for instance in Finland took place in the mid-1990s, free competition was introduced for the production and sale of electricity. Nevertheless, natural monopolies are found to be the most efficient form of production in the transmission and distribution of electricity, and therefore such companies remained franchised monopolies. To prevent the misuse of the monopoly position and to guarantee the rights of the customers, regulation of these monopoly companies is required. One of the main objectives of the restructuring process has been to increase the cost efficiency of the industry. Simultaneously, demands for the service quality are increasing. Therefore, many regulatory frameworks are being, or have been, reshaped so that companies are provided with stronger incentives for efficiency and quality improvements. Performance benchmarking has in many cases a central role in the practical implementation of such incentive schemes. Economic regulation with performance benchmarking attached to it provides companies with directing signals that tend to affect their investment and maintenance strategies. Since the asset lifetimes in the electricity distribution are typically many decades, investment decisions have far-reaching technical and economic effects. This doctoral thesis addresses the directing signals of incentive regulation and performance benchmarking in the field of electricity distribution. The theory of efficiency measurement and the most common regulation models are presented. 
The chief contributions of this work are (1) a new kind of analysis of the regulatory framework, in which the actual directing signals of regulation and benchmarking for electricity distribution companies are evaluated, (2) the development of a methodology and a software tool for analysing the directing signals of regulation and benchmarking in the electricity distribution sector, and (3) the analysis of real-life regulatory frameworks with the developed methodology and the further development of the regulation model from the viewpoint of the directing signals. The results of this study have played a key role in the development of the Finnish regulatory model.
Abstract:
The environmental impact of landfills is a growing concern in waste management practices. Thus, assessing the effectiveness of the solutions implemented to mitigate the issue is important. The objectives of the study were to provide insight into the advantages of landfills and to establish the importance of landfill gas among other alternative fuels. Finally, a case study examining the performance of energy production from a landfill at Ylivieska was carried out to ascertain the viability of a waste-to-energy project. Both qualitative and quantitative methods were applied. The study was conducted in two parts; the first was a review of the literature on landfill gas developments. Specific considerations were the mechanisms governing the variability of gas production and the investigation of mathematical models often used in landfill gas modeling. Furthermore, an analysis of the two main distributed generation technologies used to generate energy from landfills was carried out. The literature review revealed the strong influence of waste segregation and a high moisture content on the waste stabilization process. It was found that the accuracy of forecasting the gas generation rate can be enhanced with both mathematical modeling and field test measurements. The results of the case study mainly indicated the close dependence of the power output on the landfill gas quality and the fuel inlet pressure.
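The mathematical models commonly investigated in landfill gas studies are first-order decay models, in which methane generation from a batch of waste peaks right after deposit and decays exponentially. A minimal sketch; the rate constant k and generation potential L0 below are generic illustrative values, not the Ylivieska parameters.

```python
import math

def gas_generation(mass_t, k=0.05, L0=100.0, years=30):
    """First-order decay model: yearly CH4 generation (m3/yr) from a
    batch of waste deposited in year 0.
    k: decay rate (1/yr); L0: CH4 generation potential (m3 per tonne)."""
    return [k * L0 * mass_t * math.exp(-k * t) for t in range(years)]

q = gas_generation(10000.0)   # 10 000 t of waste
peak = q[0]                    # generation peaks immediately after deposit
```

In practice k depends strongly on moisture content and waste composition, which is why the review stresses combining such models with field test measurements.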
Abstract:
The aim of the study was to examine performance measurement, metrics and their design in the wholesale and distribution business. Metrics for critical success factors guide a company towards a common goal. Such metrics are often linked to strategic planning and implementation, and they have similarities with many strategic tools, such as the Balanced Scorecard. The research problem can be stated as a question: What are the key performance indicators (KPIs) of critical success factors that support Oriola KD's long-term objectives in measuring suppliers and the product assortment? The study is divided into a literature part and an empirical part. The literature review covers earlier research on strategy, supply chain management, supplier evaluation and various performance measurement systems. The empirical part proceeds from a current-state analysis to the proposed metrics for the critical success factors, which were developed with the help of a model found in the literature. The outcome of the study is a set of metrics for critical success factors, developed for the case company's needs, for evaluating suppliers and the product assortment.
Abstract:
Cutting of thick-section stainless steel and mild steel, and medium-section aluminium, using a high-power ytterbium fibre laser was experimentally investigated in this study. Theoretical models of the laser power requirement for cutting a metal workpiece and of the melt removal rate were also developed. The calculated laser power requirement was correlated with the laser power used for cutting a 10 mm stainless steel workpiece and a 15 mm mild steel workpiece with the ytterbium fibre laser and the CO2 laser. Nitrogen assist gas was used for cutting stainless steel and oxygen for cutting mild steel. It was found that the incident laser power required for cutting at a given cutting speed was lower for fibre laser cutting than for CO2 laser cutting, indicating a higher absorptivity of the fibre laser beam by the workpiece and a higher melting efficiency for the fibre laser beam than for the CO2 laser beam. The difficulty in achieving efficient melt removal during high-speed cutting of the 15 mm mild steel workpiece with oxygen assist gas using the ytterbium fibre laser can be attributed to the high melting efficiency of the ytterbium fibre laser. The calculated melt flow velocity and melt film thickness correlated well with the location of the boundary layer separation point on the 10 mm stainless steel cut edges. An increase in the melt film thickness, caused by deceleration of the melt particles in the boundary layer by viscous shear forces, results in flow separation. The melt flow velocity increases with an increase in assist gas pressure and cut kerf width, resulting in a reduction in the melt film thickness, and the boundary layer separation point moves closer to the bottom cut edge. The cut edge quality was examined by visual inspection of the cut samples and by measurement of the cut kerf width, boundary layer separation point, cut edge squareness (perpendicularity) deviation and cut edge surface roughness as output quality factors.
Different regions of cut edge quality in 10 mm stainless steel and 4 mm aluminium workpieces were defined for different combinations of cutting speed and laser power. Optimization of processing parameters for a high cut edge quality in 10 mm stainless steel was demonstrated.
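A laser power requirement model of the kind described above can be approximated by an energy balance on the melted kerf volume: the melt mass flow rate times the enthalpy needed to reach the melting point, divided by the effective absorptivity. This is a simplified sketch with conduction losses neglected; the material values are rough handbook figures for stainless steel, not the thesis data.

```python
def cutting_power(thickness, kerf, speed, density, cp, dT, Lm, absorptivity):
    """Incident power (W) needed to melt the kerf volume:
    P = m_dot * (cp * dT + Lm) / A, conduction losses neglected.
    thickness, kerf in m; speed in m/s; density kg/m3; cp J/(kg K);
    dT K to reach melting; Lm J/kg latent heat; A effective absorptivity."""
    m_dot = thickness * kerf * speed * density   # melt mass flow, kg/s
    return m_dot * (cp * dT + Lm) / absorptivity

# Rough illustrative values: 10 mm stainless plate, 0.4 mm kerf,
# 20 mm/s cutting speed, assumed absorptivity 0.8.
p = cutting_power(thickness=0.010, kerf=0.0004, speed=0.020,
                  density=7900.0, cp=500.0, dT=1500.0, Lm=2.7e5,
                  absorptivity=0.8)
```

The fibre laser's lower power requirement in the experiments corresponds, in this picture, to a higher effective absorptivity A for the same melt mass flow.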
Abstract:
The purpose of this study is to view credit risk from the financier's point of view in a theoretical framework. Results and aspects of previous studies on measuring credit risk with accounting-based scoring models are also examined. The theoretical framework and previous studies are then used to support the empirical analysis, which aims to develop a credit risk measure for a bank's internal use, or a risk management tool for a company to indicate its credit risk to the financier. The study covers a sample of Finnish companies from 12 different industries and four company categories, and employs their accounting information from 2004 to 2008. The empirical analysis consists of a six-stage methodology that uses measures of profitability, liquidity, capital structure and cash flow to determine the financier's credit risk, define five significant risk classes, and produce a risk classification model. The study is confidential until 15.10.2012.
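Accounting-based scoring models of the general kind discussed above combine normalized ratios from the named groups into a weighted score and map the score to risk classes. Since the thesis model itself is confidential, the weights, ratios and cutoffs below are entirely hypothetical.

```python
# Illustrative weights over the four ratio groups named in the abstract;
# the actual (confidential) model and its weights are not public.
weights = {"profitability": 0.30, "liquidity": 0.25,
           "capital_structure": 0.25, "cash_flow": 0.20}

def risk_score(ratios):
    """Weighted sum of ratios normalized to [0, 1]; higher = safer."""
    return sum(weights[k] * ratios[k] for k in weights)

def risk_class(score, cutoffs=(0.2, 0.4, 0.6, 0.8)):
    """Map a score in [0, 1] to five risk classes, 1 (lowest risk) to 5."""
    return 5 - sum(score >= c for c in cutoffs)

score = risk_score({"profitability": 0.7, "liquidity": 0.5,
                    "capital_structure": 0.6, "cash_flow": 0.4})
cls = risk_class(score)
```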
Abstract:
The goal of this study is to examine the intelligent home business network in order to determine, using financial statement analysis, which part of the network has the best financial ability to produce new business models and products/services. A group of 377 studied limited companies is divided into four segments based on their offering in producing intelligent homes: customer service providers, system integrators, subsystem suppliers and component suppliers. Eight different key figures are calculated for each of the companies to get a comprehensive view of their financial performance, after which each segment is studied statistically to determine the performance of the segment as a whole. The actual performance differences between the segments are calculated using a multi-criteria decision analysis method in which the performance of each key figure is graded and each key figure is weighted according to its importance for the goal of the study. The results of this analysis showed that subsystem suppliers have the best financial performance. Second best are system integrators, third are customer service providers and fourth component suppliers. None of the segments was strikingly poor; even component suppliers performed reasonably. It can thus be said that no part of the intelligent home business network has remarkably inadequate financial abilities to develop new business models and products/services.
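The multi-criteria decision analysis step described above reduces to grading each key figure, weighting the grades by importance, and summing per segment. The key figures, weights and grades below are hypothetical stand-ins, not the study's eight figures or its values.

```python
# Weighted grading of key figures per segment (multi-criteria decision
# analysis).  Weights and grades are hypothetical, not the study's values.
weights = {"ROI": 0.3, "quick_ratio": 0.2, "equity_ratio": 0.2,
           "revenue_growth": 0.3}

segment_grades = {
    "subsystem suppliers": {"ROI": 5, "quick_ratio": 4,
                            "equity_ratio": 4, "revenue_growth": 4},
    "system integrators":  {"ROI": 4, "quick_ratio": 4,
                            "equity_ratio": 3, "revenue_growth": 4},
    "component suppliers": {"ROI": 3, "quick_ratio": 3,
                            "equity_ratio": 3, "revenue_growth": 2},
}

def total_score(grades):
    """Weighted sum of graded key figures for one segment."""
    return sum(weights[k] * grades[k] for k in weights)

best = max(segment_grades, key=lambda s: total_score(segment_grades[s]))
```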
Abstract:
Machine learning provides tools for the automated construction of predictive models in data-intensive areas of engineering and science. The family of regularized kernel methods has in recent years become one of the mainstream approaches to machine learning, due to a number of advantages the methods share. The approach provides theoretically well-founded solutions to the problems of under- and overfitting, allows learning from structured data, and has been empirically demonstrated to yield high predictive performance on a wide range of application domains. Historically, the problems of classification and regression have gained the majority of attention in the field. In this thesis we focus on another type of learning problem: learning to rank. In learning to rank, the aim is to learn, from a set of past observations, a ranking function that can order new objects according to how well they match some underlying criterion of goodness. As an important special case of the setting, we can recover the bipartite ranking problem, corresponding to maximizing the area under the ROC curve (AUC) in binary classification. Ranking applications appear in a large variety of settings; examples encountered in this thesis include document retrieval in web search, recommender systems, information extraction and automated parsing of natural language. We consider the pairwise approach to learning to rank, where ranking models are learned by minimizing the expected probability of ranking any two randomly drawn test examples incorrectly. The development of computationally efficient kernel methods based on this approach has in the past proven to be challenging. Moreover, it is not clear which techniques for estimating the predictive performance of learned models are the most reliable in the ranking setting, and how these techniques can be implemented efficiently. The contributions of this thesis are as follows.
First, we develop RankRLS, a computationally efficient kernel method for learning to rank that is based on minimizing a regularized pairwise least-squares loss. In addition to training methods, we introduce a variety of algorithms for tasks such as model selection, multi-output learning and cross-validation, based on computational shortcuts from matrix algebra. Second, we improve the fastest known training method for the linear version of the RankSVM algorithm, which is one of the most well established methods for learning to rank. Third, we study the combination of the empirical kernel map and reduced set approximation, which allows the large-scale training of kernel machines using linear solvers, and propose computationally efficient solutions to cross-validation when using this approach. Next, we explore the problem of reliable cross-validation when using AUC as a performance criterion, through an extensive simulation study. We demonstrate that the proposed leave-pair-out cross-validation approach leads to more reliable performance estimation than commonly used alternative approaches. Finally, we present a case study on applying machine learning to information extraction from biomedical literature, which combines several of the approaches considered in the thesis. The thesis is divided into two parts: Part I provides the background for the research work and summarizes the most central results, while Part II consists of the five original research articles that are the main contribution of this thesis.
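The bipartite ranking view of AUC has a direct pairwise interpretation: AUC is the probability that a randomly drawn positive-negative pair is ordered correctly by the model's scores, which is the quantity a leave-pair-out estimator targets on held-out pairs. A minimal sketch of that pairwise criterion (not the RankRLS implementation; the scores are toy values):

```python
def pairwise_auc(scores, labels):
    """AUC as the fraction of (positive, negative) pairs ordered
    correctly by the scores; ties count as half a correct pair."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy scores from some hypothetical ranking model.
auc = pairwise_auc([0.9, 0.8, 0.4, 0.3, 0.2], [1, 1, 0, 1, 0])
```

Leave-pair-out cross-validation evaluates this same pairwise success probability, but with each positive-negative pair held out of training when it is scored, which is what makes the estimate nearly unbiased.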