Abstract:
Acquisitions are a way for a company to grow, enter new geographical areas, buy out competition or diversify. Acquisitions have recently grown in both size and value. Despite this, only approximately 25 percent of acquisitions reach their targets and goals. Companies making serial acquisitions seem to be exceptionally successful and succeed in the majority of their acquisitions. The main research question this study aims to answer is: “What issues impact the selection of acquired companies from the point of view of a serial acquirer?” The main research question is answered through three sub-questions: “What is the buying process of a serial acquirer like?”, “What are the motives for a serial acquirer to buy companies?” and “What is the connection between company strategy and serial acquisitions?”. The case company, KONE, is a globally operating company that mainly produces and maintains elevators and escalators. It is headquartered in Helsinki, Finland. The company has a long history of making acquisitions and makes 20 to 30 acquisitions a year. Through a key-person interview, the acquisition process of the case company is compared with the literature on successful serial acquirers. The acquisition motives in this case are assessed against three of Trautwein's acquisition-motive theories: efficiency theory, monopoly theory and valuation theory. The linkage between serial acquisitions and company strategy is likewise studied through the key-person interview. The main research findings are that the acquisition process of KONE is compatible with the successful acquisition process recognized in the literature (RAID). This study confirms the efficiency theory as an acquisition motive, and more specifically the operational synergies. The monopoly theory receives only vague support in this study, but cannot be completely rejected because of the structure of the industry. The valuation theory receives no support in this study and can therefore be rejected.
The linkage between company strategy and serial acquisitions is evident: making acquisitions can be seen as a growth strategy and as a part of other company strategies.
Abstract:
This thesis examines the application of data envelopment analysis (DEA) as an equity portfolio selection criterion in the Finnish stock market during the period 2001–2011. The sample consists of publicly traded firms in the Helsinki Stock Exchange and covers the majority of them. Data envelopment analysis is used to determine the efficiency of firms from a set of input and output financial parameters, consisting of asset utilization, liquidity, capital structure, growth, valuation and profitability measures. Because the input and output parameters are industry-specific, the firms are divided into artificial industry categories. Comparable portfolios are formed within each industry category according to the efficiency scores given by the DEA, and the performance of the portfolios is evaluated with several measures. The empirical evidence of this thesis suggests that, with certain limitations, data envelopment analysis can successfully be used as a portfolio selection criterion in the Finnish stock market when the portfolios are rebalanced annually according to the DEA efficiency scores. However, when the portfolios are rebalanced every two or three years, the results are mixed and inconclusive.
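A minimal sketch of the idea, assuming a single input and a single output per firm (the thesis uses several of each, which requires solving a linear program per firm): score each firm by its output/input ratio relative to the best ratio in the sample, then split the sample into portfolios by score. Firm names and figures are hypothetical.

```python
def dea_efficiency(firms):
    """Single-input, single-output CCR-style efficiency: each firm's
    output/input ratio, normalised by the best ratio in the sample,
    so efficient firms score 1.0."""
    ratios = {name: out / inp for name, (inp, out) in firms.items()}
    best = max(ratios.values())
    return {name: r / best for name, r in ratios.items()}

def form_portfolios(scores, cutoff=0.8):
    """Split firms into an 'efficient' and an 'inefficient' portfolio."""
    efficient = sorted(n for n, s in scores.items() if s >= cutoff)
    inefficient = sorted(n for n, s in scores.items() if s < cutoff)
    return efficient, inefficient

# Hypothetical (input, output) pairs, e.g. (total assets, net income):
firms = {"A": (100.0, 20.0), "B": (80.0, 12.0),
         "C": (50.0, 10.0), "D": (120.0, 12.0)}
scores = dea_efficiency(firms)
efficient, inefficient = form_portfolios(scores)  # A and C are efficient
```

With multiple inputs and outputs, each firm's weights are chosen by a linear program to show the firm in its best light, which is what distinguishes DEA from a fixed-weight ratio ranking.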
Abstract:
In today's logistics environment, there is a tremendous need for accurate cost information and cost allocation. Companies searching for a proper solution often come across activity-based costing (ABC) or one of its variations, which utilize cost drivers to allocate the costs of activities to cost objects. The selection of appropriate cost drivers is essential for allocating costs accurately and reliably and for realizing the benefits of the costing system. The purpose of this study is to validate the transportation cost drivers of a Finnish wholesaler company and ultimately to select the best possible driver alternatives for the company. The use of cost driver combinations as an alternative is also studied. The study is conducted as a part of the case company's applied ABC project, using statistical research as the main research method, supported by a theoretical, literature-based method. The main research tools featured in the study are simple and multiple regression analyses, which, together with a practicality analysis based on the literature and observations, form the basis for the advanced methods. The results suggest that the most appropriate cost driver alternatives are delivery drops and internal delivery weight. The use of cost driver combinations is not recommended, as they do not provide substantially better results while increasing measurement costs, complexity and effort of use. The use of internal freight cost drivers is also questionable, as the results indicate a weakening trend in their cost allocation capabilities towards the end of the period. Therefore, more research on internal freight cost drivers should be conducted before taking them into use.
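The driver-validation idea can be sketched with a simple OLS fit: regress period transport costs on each candidate driver and compare explanatory power (R²). The monthly figures below are made up for illustration; the thesis works with the case company's actual cost data.

```python
def simple_ols_r2(x, y):
    """Fit y = a + b*x by ordinary least squares and return R-squared,
    the share of cost variation explained by the candidate driver."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Hypothetical monthly data: delivery drops track costs almost linearly,
# shipment count only loosely.
costs = [105.0, 198.0, 310.0, 395.0, 505.0]
drops = [10.0, 20.0, 30.0, 40.0, 50.0]
shipments = [12.0, 35.0, 22.0, 55.0, 41.0]
r2_drops = simple_ols_r2(drops, costs)
r2_shipments = simple_ols_r2(shipments, costs)
```

The driver with the higher R² (here, delivery drops) is the stronger allocation candidate; the practicality analysis then weighs that statistical fit against measurement cost and ease of use.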
Abstract:
Service provider selection has been said to be a critical factor in the formation of supply chains. Through successful selection, companies can attain competitive advantage, cost savings and more flexible operations. Service provider management is the next crucial step in the outsourcing process after the selection has been made. Without proper management, companies cannot be sure about the level of service they have bought, and they may suffer from the service provider's opportunistic behavior. In the worst-case scenario, the buyer company may end up in a locked-in situation in which it is totally dependent on the service provider. This thesis studies how the case company conducts its carrier selection process, along with the criteria related to it. A model for the final selection is also provided. In addition, the case company's carrier management procedures are reflected against recommendations from previous research. The research was conducted as a qualitative case study of the principal company, Neste Oil Retail. A literature review was made on outsourcing, service provider selection and service provider management. On the basis of the literature review, this thesis recommends the analytic hierarchy process (AHP) as the preferred model for carrier selection. Furthermore, agency theory was seen as a functional framework for carrier management in this study. The empirical part of this thesis was conducted in the case company by interviewing the key persons in the selection process, making observations and going through documentation related to the subject. According to the results of the study, both the carrier selection process and carrier management were closely in line with the suggestions from the literature review. The AHP results revealed that the case company considers service quality the most important criterion, with financial situation and price of service following with almost identical weights.
Equipment and personnel was seen as the least important selection criterion. Regarding carrier management, the study concluded that the company should consider engaging more in carrier development and working towards beneficial and effective relationships. Otherwise, no major changes were recommended for the case company's processes.
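As a rough illustration of how AHP turns pairwise judgements into criteria weights, the sketch below uses the row geometric mean approximation of the principal eigenvector. The Saaty-scale comparison values are hypothetical, chosen only to mirror the reported ordering (quality first, finance and price nearly tied, equipment and personnel last); they are not the case company's actual judgements.

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights from a pairwise comparison
    matrix using the row geometric mean method."""
    gms = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
    total = sum(gms)
    return [g / total for g in gms]

# Criteria order: service quality, financial situation, price of service,
# equipment & personnel. Entry [i][j] says how much more important
# criterion i is than criterion j on the 1-9 Saaty scale (hypothetical).
matrix = [
    [1.0, 3.0, 3.0, 5.0],
    [1/3, 1.0, 1.0, 3.0],
    [1/3, 1.0, 1.0, 3.0],
    [1/5, 1/3, 1/3, 1.0],
]
weights = ahp_weights(matrix)  # quality gets the largest weight
```

Each carrier is then scored as the weight-sum of its performance on the criteria, which is the final-selection model the thesis recommends.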
Abstract:
Demand for energy systems that offer high efficiency and the ability to harness renewable energy sources is a key issue in tackling the threat of global warming and conserving natural resources. Organic Rankine cycle (ORC) technology has been identified as one of the most promising technologies for recovering low-grade heat sources and for harnessing renewable energy sources that cannot be efficiently utilized by more conventional power systems. The ORC is based on the working principle of the Rankine process, but an organic working fluid is adopted in the cycle instead of steam. This thesis presents numerical and experimental results of a study on the design of small-scale ORCs. Two main applications were selected for the thesis: waste heat recovery from small-scale diesel engines, concentrating on the utilization of the exhaust gas heat, and waste heat recovery in large industrial-scale engine power plants, considering the utilization of both the high- and low-temperature heat sources. The main objective of this work was to identify suitable working fluid candidates and to study the process and turbine design methods that can be applied when power plants based on non-conventional working fluids are considered. The computational work included the use of thermodynamic analysis methods and turbine design methods based on highly accurate fluid properties. In addition, the design and loss mechanisms of supersonic ORC turbines were studied by means of computational fluid dynamics. The results indicated that the design of an ORC is highly influenced by the selection of the working fluid and the cycle operational conditions. The results for the turbine designs indicated that the working fluid selection should not be based only on the thermodynamic analysis, but also requires consideration of the turbine design.
The turbines tend to be fast rotating, entailing small blade heights at the turbine rotor inlet and highly supersonic flow in the turbine flow passages, especially when power systems with low power outputs are designed. The results indicated that the ORC is a potential solution for utilizing waste heat streams both at high and low temperatures and in both micro- and larger-scale applications.
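Why small ORC turbines end up fast-rotating can be illustrated with the turbine specific speed relation N_s = ω·√V̇ / Δh_s^{3/4}: for a fixed design specific speed, a small volumetric flow and a moderate enthalpy drop force a very high shaft speed. All figures below are assumed round numbers for a kW-scale machine, not values from the thesis.

```python
import math

# Assumed design-point values for a small ORC turbine (illustrative only):
ns = 0.5        # design specific speed (dimensionless SI form)
dh_s = 50e3     # isentropic enthalpy drop over the turbine [J/kg]
m_dot = 0.1     # working fluid mass flow [kg/s]
rho_out = 5.0   # vapour density at the turbine outlet [kg/m^3]

v_dot = m_dot / rho_out                       # volumetric flow [m^3/s]
omega = ns * dh_s ** 0.75 / math.sqrt(v_dot)  # shaft speed [rad/s]
rpm = omega * 60.0 / (2.0 * math.pi)
```

Even with these modest assumptions the sketch lands above 100,000 rpm, which is why the thesis argues that working fluid selection must take the turbine design into account and not rest on the thermodynamic analysis alone.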
Abstract:
The aim of this research is to examine pricing anomalies in the U.S. market from 1986 to 2011. The sample of stocks is divided into decile portfolios based on seven individual valuation ratios (E/P, B/P, S/P, EBIT/EV, EBITDA/EV, D/P, and CE/P) and price momentum, to investigate the efficiency of individual valuation ratios and their combinations as portfolio formation criteria. This is the first time in the financial literature that CE/P has been employed as a constituent of a composite value measure. The combinations are based on median-scaled composite value measures and the TOPSIS method. During the sample period, value portfolios significantly outperform both the market portfolio and comparable glamour portfolios. The results show the highest return for the value portfolio based on the combination of the S/P and CE/P ratios. The outcome of this research will increase the understanding of the suitability of different methodologies for portfolio selection. It will help managers take advantage of the results of different methodologies in order to gain returns above the market.
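The median-scaled composite measure can be sketched as follows: each valuation ratio is divided by its cross-sectional median, and the scaled ratios are averaged into one score per stock. Tickers and ratio values below are invented for illustration.

```python
from statistics import median

def composite_value(ratios_by_stock):
    """Scale each valuation ratio by its cross-sectional median,
    then average the scaled ratios into one composite score per stock."""
    names = list(ratios_by_stock)
    keys = list(next(iter(ratios_by_stock.values())))
    medians = {k: median(ratios_by_stock[n][k] for n in names) for k in keys}
    return {n: sum(ratios_by_stock[n][k] / medians[k] for k in keys) / len(keys)
            for n in names}

# Hypothetical S/P and CE/P ratios for three stocks (higher = cheaper):
stocks = {
    "X": {"S/P": 2.0, "CE/P": 0.3},
    "Y": {"S/P": 1.0, "CE/P": 0.1},
    "Z": {"S/P": 0.5, "CE/P": 0.2},
}
scores = composite_value(stocks)
cheapest = max(scores, key=scores.get)  # "X" joins the value decile
```

Median scaling makes ratios with very different magnitudes comparable before averaging; the TOPSIS variant instead ranks stocks by distance to ideal and anti-ideal ratio vectors.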
Abstract:
Appropriate supplier selection and its profound effects on increasing the competitive advantage of companies have been widely discussed in the supply chain management (SCM) literature. As environmental awareness rises among companies and industries, they attach more importance to sustainable and green activities in the selection of raw material providers. The current thesis uses the data envelopment analysis (DEA) technique to evaluate the relative efficiency of suppliers in the presence of carbon dioxide (CO2) emissions for green supplier selection. We incorporate the pollution of suppliers as an undesirable output into DEA. However, in doing so, two problems of conventional DEA models arise: the lack of discrimination power among decision making units (DMUs) and the flexibility of the input and output weights. To overcome these limitations, we use multiple criteria DEA (MCDEA) as one alternative. By applying MCDEA, the number of suppliers identified as efficient is decreased, which leads to a better ranking and selection of the suppliers. Besides, in order to compare the performance of the suppliers with an ideal supplier, a “virtual” best-practice supplier is introduced. The presence of this ideal virtual supplier also increases the discrimination power of the model for a better ranking of the suppliers. Therefore, a new MCDEA model is proposed to simultaneously handle undesirable outputs and the virtual DMU. The developed model is applied to the green supplier selection problem, and a numerical example illustrates its applicability.
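A much-simplified sketch of the scoring idea: the CO2 emission is treated as an extra input (one standard way of handling an undesirable output), a virtual ideal supplier is composed from the best observed value of every factor, and each supplier's output/input ratio is normalised against the ideal. Fixed unit weights stand in for the LP-optimised weights of the actual (MC)DEA models; all supplier data are hypothetical.

```python
def green_scores(suppliers):
    """Output/(cost + CO2) ratio per supplier, normalised against a
    virtual ideal supplier built from the best observed values.
    Equal weights are a stand-in for DEA's optimised weights."""
    ideal = {
        "cost": min(s["cost"] for s in suppliers.values()),
        "co2": min(s["co2"] for s in suppliers.values()),
        "output": max(s["output"] for s in suppliers.values()),
    }
    pool = dict(suppliers, ideal=ideal)
    raw = {n: s["output"] / (s["cost"] + s["co2"]) for n, s in pool.items()}
    best = raw["ideal"]
    return {n: r / best for n, r in raw.items()}

# Hypothetical suppliers: cost and CO2 as inputs, one desirable output.
suppliers = {
    "S1": {"cost": 10.0, "co2": 5.0, "output": 30.0},
    "S2": {"cost": 8.0, "co2": 8.0, "output": 24.0},
}
scores = green_scores(suppliers)  # the virtual ideal scores 1.0
```

Because the ideal DMU dominates every real supplier, no real supplier can reach a score of 1.0, which is how the virtual benchmark sharpens the ranking.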
Abstract:
The significance and impact of services in the modern global economy have grown, and for decades the academic community of international business has called for further research into better understanding the internationalisation of services. Theories based on the internationalisation of manufacturing firms have long been questioned for their applicability to services. This study aims to contribute to the understanding of the internationalisation of services by examining how market selection decisions are made for new service products within the existing markets of a multinational financial service provider. The study focused on the factors influencing market selection and was conducted as a case study of a multinational financial service firm and two of its new service products. Two directors responsible for the development and internationalisation of the case service products were interviewed in guided semi-structured interviews based on themes adopted from the literature review and the resulting theoretical framework. The main empirical findings suggest that the most significant factors influencing market selection for new service products within a multinational financial service firm's existing markets are: commitment to the new service products by both the management and the rest of the product-related organisation; the capability and competence of the local country organisations to adopt new services; market potential, which combines market size, market structure and the competitive environment; product fit to the market requirements; and enabling partnerships. Based on the empirical findings, this study proposes a framework of the factors influencing market selection for new service products, and suggests further research issues and methods to test and extend the findings.
Abstract:
Personalized medicine will revolutionize our capabilities to combat disease. Working toward this goal, a fundamental task is the deciphering of genetic variants that are predictive of complex diseases. Modern studies, in the form of genome-wide association studies (GWAS), have afforded researchers the opportunity to reveal new genotype-phenotype relationships through the extensive scanning of genetic variants. These studies typically contain over half a million genetic features for thousands of individuals. Examining such data with methods other than univariate statistics is a challenging task requiring advanced algorithms that scale to the genome-wide level. In the future, next-generation sequencing (NGS) studies will contain an even larger number of common and rare variants. Machine learning-based feature selection algorithms have been shown to effectively create predictive models for various genotype-phenotype relationships. This work explores the problem of selecting genetic variant subsets that are the most predictive of complex disease phenotypes through various feature selection methodologies, including filter, wrapper and embedded algorithms. The examined machine learning algorithms were demonstrated not only to be effective at predicting the disease phenotypes, but also to do so efficiently through the use of computational shortcuts. While much of the work could be run on high-end desktops, some of it was further extended so that it could be implemented on parallel computers, helping to ensure that the methods will also scale to NGS data sets. Further, these studies analyzed the relationships between various feature selection methods and demonstrated the need for careful testing when selecting an algorithm.
It was shown that there is no universally optimal algorithm for variant selection in GWAS; rather, methodologies need to be selected based on the desired outcome, such as the number of features to be included in the prediction model. It was also demonstrated that without proper model validation, for example using nested cross-validation, models can yield overly optimistic prediction accuracies and decreased generalization ability. It is through the implementation and application of machine learning methods that one can extract predictive genotype–phenotype relationships and biological insights from genetic data sets.
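The nested cross-validation point can be illustrated with a toy sketch: an outer loop estimates generalization, while an inner loop on the outer training folds alone picks the hyperparameter (here, a decision threshold for a trivial one-feature classifier), so the held-out test fold never influences model selection. Everything below is invented for illustration.

```python
def kfold(n, k):
    """Split indices 0..n-1 into k contiguous folds."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for s in sizes:
        folds.append(list(range(start, start + s)))
        start += s
    return folds

def accuracy(thresh, data):
    """Fraction of (x, y) pairs where predicting y = (x >= thresh) is right."""
    return sum((x >= thresh) == y for x, y in data) / len(data)

def nested_cv(data, thresholds, outer_k=4, inner_k=3):
    """Mean outer-fold accuracy; the threshold is chosen only from
    inner-fold performance on the outer training data."""
    outer_scores = []
    for test_idx in kfold(len(data), outer_k):
        held_out = set(test_idx)
        train = [d for i, d in enumerate(data) if i not in held_out]
        test = [data[i] for i in test_idx]
        # inner CV: mean validation accuracy per candidate threshold
        def inner_score(t):
            vals = [accuracy(t, [train[i] for i in fold])
                    for fold in kfold(len(train), inner_k)]
            return sum(vals) / len(vals)
        best_t = max(thresholds, key=inner_score)
        outer_scores.append(accuracy(best_t, test))
    return sum(outer_scores) / len(outer_scores)

# Toy data: feature x in 0..11, label is 1 iff x >= 6.
data = [(float(x), x >= 6) for x in range(12)]
estimate = nested_cv(data, thresholds=[6.0, 3.0, 9.0])
```

Choosing the threshold on the full data set and then reporting accuracy on the same data would give exactly the overly optimistic estimate the text warns about; the nested structure keeps selection and evaluation on disjoint samples.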