57 results for Beam angle selection
Abstract:
It has been known since the 1970s that the laser beam is suitable for processing paper materials. In this thesis, the term paper materials covers all wood-fibre based materials, such as dried pulp, copy paper, newspaper, cardboard, corrugated board and tissue paper. Accordingly, laser processing in this thesis means all laser treatments resulting in material removal, such as cutting, partial cutting, marking, creasing and perforation, that can be used to process paper materials. Laser technology offers many advantages for processing paper materials: a non-contact method, freedom of processing geometry and reliable technology for non-stop production. The packaging industry in particular is a very promising area for laser processing applications. Nevertheless, there were only a few industrial laser processing applications worldwide even at the beginning of the 2010s. One reason for the small-scale use of lasers in paper material manufacturing is the shortage of published research and scientific articles. Another problem restraining the use of lasers for processing paper materials is the colouration of the material, i.e. the yellowish and/or greyish colour of the cut edge appearing during or after cutting. These are the main reasons why the topic of this thesis is the characterization of the interaction of a laser beam with paper materials. The study was carried out in the Laboratory of Laser Processing at Lappeenranta University of Technology (Finland). The laser equipment used was a TRUMPF TLF 2700 carbon dioxide laser producing a beam with a wavelength of 10.6 μm at a power range of 190-2500 W (laser power on the workpiece). The interaction of the laser beam with paper material was studied by treating dried kraft pulp (grammage 67 g/m²) with different laser power levels, focal plane position settings and interaction times. The interaction between the laser beam and the dried kraft pulp was monitored with several devices, i.e. a spectrometer, a pyrometer and an active illumination imaging system. This made it possible to create an input and output parameter diagram and to study the effects of the input and output parameters. When the interaction phenomena are understood, process development can be carried out and even new innovations developed. Filling the gap in knowledge of the interaction phenomena can pave the way for wider use of laser technology in the papermaking and converting industry. It was concluded in this thesis that the interaction of a laser beam with paper material has two mechanisms, which depend on the focal plane position range. In the experimental set-up used, the assumed interaction mechanism B appears in the average focal plane position range of 3.4 mm to 2.4 mm, and the assumed interaction mechanism A in the range of 0.4 mm to -0.6 mm. The focal plane position of 1.4 mm represents the midzone between these two mechanisms. During the interaction, holes are formed gradually: first a small hole forms in the interaction area at the centre of the laser beam cross-section, and then, as a function of interaction time, the hole expands until the interaction between the laser beam and the dried kraft pulp ends. Image analysis shows that at the beginning of the interaction small holes of very good quality are formed, and that black colouration and a heat-affected zone appear as a function of interaction time. This reveals that there are distinct interaction phases within interaction mechanisms A and B.
These interaction phases appear as a function of time and also as a function of the peak intensity of the laser beam. The limit peak intensity is the value that divides interaction mechanisms A and B from one-phase into dual-phase interaction. All peak intensity values below the limit peak intensity belong to MAOM (interaction mechanism A one-phase mode) or MBOM (interaction mechanism B one-phase mode), and values above it belong to MADM (interaction mechanism A dual-phase mode) or MBDM (interaction mechanism B dual-phase mode). The decomposition of cellulose proceeds by evolution of hydrocarbons when the temperature is between 380 and 500 °C: the long cellulose molecule splits into smaller volatile hydrocarbons in this temperature range. As the temperature increases, the decomposition process changes. In the range of 700-900 °C, the cellulose molecule decomposes mainly into H2 gas, which is why this range is called evolution of hydrogen. Interaction in this range starts (as in MAOM and MBOM) when a small good-quality hole is formed. This is due to “direct evaporation” of the pulp via the evolution-of-hydrogen decomposition process, and it is seen in the spectrometer as a high-intensity peak of yellow light (in the range of 588-589 nm), which corresponds to a temperature of ~1750 °C. The pyrometer does not detect this high-intensity peak, since it cannot detect the physical phase change from solid kraft pulp to gaseous compounds. As the interaction between the laser beam and the dried kraft pulp continues, the hypothesis is that three auto-ignition processes occur. The auto-ignition temperature of a substance is the lowest temperature at which it spontaneously ignites in a normal atmosphere without an external source of ignition, such as a flame or spark. Three auto-ignition processes appear in the range of MADM and MBDM, namely: 1. the auto-ignition temperature of hydrogen (H2) is 500 °C, 2. the auto-ignition temperature of carbon monoxide (CO) is 609 °C and 3. the auto-ignition temperature of carbon (C) is 700 °C. These three auto-ignition processes lead to the formation of a plasma plume with strong emission of radiation in the visible range. The formation of this plasma plume is seen as an increase of intensity in the wavelength range of ~475-652 nm. The pyrometer shows the maximum temperature just after this ignition. The plasma plume is assumed to scatter the laser beam so that it interacts with a larger area of the dried kraft pulp than the actual beam cross-section, which also reduces the peak intensity. The results thus indicate that the scattered light, at low peak intensity, interacts with a large area of the hole edges, and because of the low peak intensity this interaction occurs at a low temperature. The interaction between the laser beam and the dried kraft pulp therefore turns from evolution of hydrogen to evolution of hydrocarbons, which leads to the black colour of the hole edges.
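As a rough illustration of this classification, the sketch below maps a focal plane position and a Gaussian-beam peak intensity onto the four modes named above. The spot radius and the numeric value of the limit peak intensity are hypothetical placeholders; the abstract does not give them.

```python
import math

def peak_intensity(power_w: float, beam_radius_m: float) -> float:
    """Peak intensity of an ideal Gaussian beam, I0 = 2P / (pi * w^2)."""
    return 2.0 * power_w / (math.pi * beam_radius_m ** 2)

def classify_mode(focal_plane_pos_mm: float, i_peak: float, i_limit: float) -> str:
    """Map focal plane position and peak intensity to the modes named in
    the abstract (MAOM/MADM/MBOM/MBDM).  The position ranges come from the
    abstract; i_limit is the 'limit peak intensity', whose numeric value
    is not given there and must be supplied."""
    if -0.6 <= focal_plane_pos_mm <= 0.4:
        mech = "A"
    elif 2.4 <= focal_plane_pos_mm <= 3.4:
        mech = "B"
    else:
        return "midzone / outside studied ranges"
    phase = "OM" if i_peak <= i_limit else "DM"   # one-phase vs dual-phase
    return f"M{mech}{phase}"

# Example: 2500 W focused to a 150 um spot radius (illustrative numbers only)
i0 = peak_intensity(2500.0, 150e-6)
print(classify_mode(0.0, i0, i_limit=5e9))        # 'MADM' since i0 > 5e9 W/m^2
```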
Abstract:
Acquisitions are a way for a company to grow, enter new geographical areas, buy out competition or diversify. Acquisitions have recently grown in both size and value. Despite this, only approximately 25 percent of acquisitions reach their targets and goals. Companies making serial acquisitions seem to be exceptionally successful and succeed in the majority of their acquisitions. The main research question this study aims to answer is: “What issues impact the selection of acquired companies from the point of view of a serial acquirer?” The main research question is answered through three sub-questions: “What is a buying process for a serial acquirer like?”, “What are the motives for a serial acquirer to buy companies?” and “What is the connection between company strategy and serial acquisitions?”. The case company KONE is a globally operating company which mainly produces and maintains elevators and escalators. Its headquarters are located in Helsinki, Finland. The company has a long history of making acquisitions and makes 20-30 acquisitions a year. Through a key-person interview, the acquisition process of the case company is compared with the literature on successful serial acquirers. The acquisition motives in this case are reflected against three of Trautwein's acquisition motive theories: efficiency theory, monopoly theory and valuation theory. The linkage between serial acquisitions and company strategy is likewise studied through the key-person interview. The main research finding is that the acquisition process of KONE is compatible with the successful acquisition process recognized in the literature (RAID). This study confirms the efficiency theory as an acquisition motive, and more specifically the operational synergies. The monopoly theory is only vaguely supported by this study, but cannot be totally rejected because of the structure of the industry. The valuation theory does not get any support in this study and can therefore be rejected. The linkage between company strategy and serial acquisitions is obvious, and making acquisitions can be seen as a growth strategy and a part of other company strategies.
Abstract:
The capacity of beams is a very important factor in the study of the durability of structures and structural members. The capacity of a high-strength steel I-beam made of S960 QC was investigated in this study. The investigation included assessment of the serviceability and ultimate limit states of the steel beam. The thesis was done according to the European standard for steel structures, Eurocode 3. An analytical method was used to determine the throat thickness, deformation, elastic and plastic moment capacities as well as the fatigue life of the beam. The results of the analytical method were compared with those obtained by finite element analysis (FEA). The elastic moment capacity obtained by the analytical method was 172 kNm. FEA and the analytical method predicted the maximum lateral-torsional buckling (LTB) capacity in the range of 90-93 kNm, and the probability of failure as a result of LTB was estimated to be 50%. The lateral buckling capacity means that the I-beam can carry a safe load of 300 kN instead of the initial load of 600 kN. The beam is liable to fail shortly after exceeding the elastic moment capacity. Based on the results of the different approaches, it was noted that FEA predicted higher deformation values on the load-deformation curve than the analytical results. However, both FEA and the analytical method predicted identical results for the nominal stress range and the moment capacities. The fatigue life was estimated to be in the range of 53000-64000 cycles for the bending stress range, using a crack propagation equation and the strength-life approach. As Eurocode 3 is limited to steel grades up to S690, results for S960 must be verified with experimental data and appropriate design rules.
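As an illustration of the analytical elastic check, the sketch below evaluates M_el = f_y · W_el for a doubly symmetric I-section, following Eurocode 3 conventions. The cross-section dimensions are assumptions chosen only so that the result lands near the 172 kNm reported above; the thesis's actual section is not stated in the abstract.

```python
# Minimal sketch of the analytical elastic moment check: M_el = f_y * W_el.
# The I-section dimensions below are hypothetical and serve only to show
# the calculation; they are not taken from the thesis.

def i_section_elastic_modulus(h, b, tw, tf):
    """Elastic section modulus W_el = I_y / (h/2) for a doubly
    symmetric I-section (dimensions in mm, result in mm^3)."""
    hw = h - 2.0 * tf                          # clear web height
    I = (b * h**3 - (b - tw) * hw**3) / 12.0   # second moment of area
    return I / (h / 2.0)

f_y = 960.0                                    # MPa, nominal yield of S960 QC
W_el = i_section_elastic_modulus(h=200.0, b=100.0, tw=6.0, tf=8.0)
M_el = f_y * W_el / 1e6                        # MPa * mm^3 -> Nmm -> kNm
print(f"W_el = {W_el:.0f} mm^3, M_el = {M_el:.1f} kNm")   # ~171.5 kNm
```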
Abstract:
This thesis examines the application of data envelopment analysis (DEA) as an equity portfolio selection criterion in the Finnish stock market during the period 2001-2011. A sample of publicly traded firms in the Helsinki Stock Exchange, covering the majority of the firms listed there, is examined. Data envelopment analysis is used to determine the efficiency of firms using a set of input and output financial parameters. The set of financial parameters consists of asset utilization, liquidity, capital structure, growth, valuation and profitability measures. The firms are divided into artificial industry categories because of the industry-specific nature of the input and output parameters. Comparable portfolios are formed inside each industry category according to the efficiency scores given by DEA, and the performance of the portfolios is evaluated with several measures. The empirical evidence of this thesis suggests that, with certain limitations, data envelopment analysis can successfully be used as a portfolio selection criterion in the Finnish stock market when the portfolios are rebalanced annually according to the DEA efficiency scores. However, when the portfolios were rebalanced every two or three years, the results are mixed and inconclusive.
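A minimal sketch of the underlying DEA computation, assuming the standard input-oriented CCR envelopment model solved as a linear program; the firm data below is invented, and the thesis's exact model specification may differ.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0.
    X: (m_inputs, n_units), Y: (s_outputs, n_units).
    min theta  s.t.  X @ lam <= theta * X[:, j0],  Y @ lam >= Y[:, j0],  lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                  # minimize theta
    A_in = np.hstack([-X[:, [j0]], X])           # X @ lam - theta * x0 <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])    # -Y @ lam <= -y0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun                               # theta in (0, 1]

# Toy data: 2 inputs (e.g. P/E, debt ratio), 1 output (e.g. ROE), 4 firms
X = np.array([[12.0, 8.0, 15.0, 10.0], [0.4, 0.6, 0.5, 0.3]])
Y = np.array([[0.10, 0.12, 0.08, 0.15]])
print([round(ccr_efficiency(X, Y, j), 3) for j in range(4)])
```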
Abstract:
Gas shielding plays an important role in laser welding phenomena, because it not only provides shielding against oxidation but also affects beam absorption and thus weld penetration. The goal of this thesis is to study and compare the effects of different shielding gas feeding methods in laser welding of steel. The research method is a literature survey. It is found that the inclination angle and the arrangement of the gas feeding nozzles affect the phenomena significantly. It is suggested that better welding results can be obtained by designing the shielding gas feeding case-specifically.
Abstract:
In today's logistics environment, there is a tremendous need for accurate cost information and cost allocation. Companies searching for a proper solution often come across activity-based costing (ABC) or one of its variations, which utilizes cost drivers to allocate the costs of activities to cost objects. To allocate the costs accurately and reliably, the selection of appropriate cost drivers is essential for realizing the benefits of the costing system. The purpose of this study is to validate the transportation cost drivers of a Finnish wholesaler company and ultimately to select the best possible driver alternatives for the company. The use of cost driver combinations as an alternative is also studied. The study is conducted as a part of the case company's applied ABC project, using statistical research as the main research method supported by a theoretical, literature-based method. The main research tools featured in the study are simple and multiple regression analyses, which, together with a practicality analysis based on the literature and observations, form the basis for the advanced methods. The results suggest that the most appropriate cost driver alternatives are delivery drops and internal delivery weight. The use of cost driver combinations is not recommended, as they do not provide substantially better results while increasing measurement costs, complexity and effort of use at the same time. The use of internal freight cost drivers is also questionable, as the results indicate a weakening trend in their cost allocation capabilities towards the end of the period. Therefore, more research on internal freight cost drivers should be conducted before taking them into use.
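The kind of cost-driver validation described above can be sketched as a set of competing regressions; the data frame and column names below are hypothetical stand-ins for the case company's confidential figures.

```python
# Regress transportation cost on candidate drivers, singly and combined,
# and compare the goodness of fit (a simplified sketch of the study's
# simple and multiple regression analyses).
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "cost":           [410, 520, 380, 640, 455, 590],    # EUR per period
    "delivery_drops": [ 52,  66,  47,  81,  58,  74],
    "weight_kg":      [900, 1150, 820, 1400, 990, 1280],
})

for drivers in (["delivery_drops"], ["weight_kg"],
                ["delivery_drops", "weight_kg"]):         # single vs. combination
    X = sm.add_constant(df[drivers])
    model = sm.OLS(df["cost"], X).fit()
    print(drivers, "R^2 =", round(model.rsquared, 3))
```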
Abstract:
An interferometer for a low-resolution portable Fourier transform mid-infrared spectrometer was developed and studied experimentally. The final aim was a concept for a commercial prototype. Because of the portability requirement, the interferometer should be compact and insensitive to external temperature variations and mechanical vibrations. To minimise the size and manufacturing costs, a Michelson interferometer based on plane mirrors and a porch swing bearing was selected, and no dynamic alignment system was applied. The driving motor was a linear voice coil actuator, chosen to avoid mechanical contact of the moving parts. The driving capability at the low mirror driving velocities required by photoacoustic detectors was studied. In total, four versions of such an interferometer were built and experimentally studied. The thermal stability during external temperature variations and the alignment stability over the mirror travel were measured using the modulation depth of a wide-diameter laser beam. A method for estimating the mirror tilt angle from the modulation depth was developed to take into account the effect of the non-uniform intensity distribution of the laser beam. The spectrometer stability was finally studied also using infrared radiation. The latest interferometer was assembled into a mid-infrared spectrometer with a spectral range from 750 cm−1 to 4500 cm−1. The interferometer size was (197 × 95 × 79) mm3 with a beam diameter of 25 mm. The alignment stability, expressed as the change of the tilt angle over the mirror travel of 3 mm, was 5 μrad, which decreases the modulation depth by only about 0.7 percent in the infrared at 3000 cm−1. During a temperature rise, the modulation depth at 3000 cm−1 changed by about 1-2 percentage units per degree Celsius over the short term, and by less than 0.2 percentage units per degree Celsius over the total temperature rise of 30 °C. The unapodised spectral resolution was 4 cm−1, limited by the aperture size. The best achieved signal-to-noise ratio was about 38 000:1 with a commercially available DLaTGS detector. Although the vibration sensitivity still requires improvement, the interferometer as a whole performed very well and could be further developed to meet all the requirements of a portable and stable spectrometer.
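The quoted 0.7 percent figure can be sanity-checked with the textbook tilt-modulation relation for a Michelson with a uniform circular beam, M = |2·J1(u)/u| with u = 4πRα/λ; this sketch deliberately ignores the non-uniform intensity correction that the thesis's own method adds.

```python
# Back-of-the-envelope check of the tilt-induced modulation loss, assuming
# a uniform circular beam of radius R and mirror tilt alpha.
import numpy as np
from scipy.special import j1

def tilt_modulation(alpha_rad, beam_radius_m, wavenumber_cm):
    lam = 1e-2 / wavenumber_cm                  # wavelength in metres
    u = 4.0 * np.pi * beam_radius_m * alpha_rad / lam
    return 2.0 * j1(u) / u if u > 0 else 1.0

m = tilt_modulation(5e-6, 12.5e-3, 3000.0)      # 5 urad tilt, 25 mm beam
print(f"modulation factor {m:.4f} -> {100 * (1 - m):.1f} % loss")   # ~0.7 %
```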
Abstract:
This doctoral thesis concerns the active galactic nucleus (AGN) most often referred to by the catalogue number OJ287. The publications in the thesis present new discoveries about the system in the context of a supermassive binary black hole model. In addition, the introduction discusses the general characteristics of the OJ287 system and the physical fundamentals behind these characteristics. The place of OJ287 in the hierarchy of known types of AGN is also discussed. The introduction presents a large selection of the fundamental physics required for a basic understanding of active galactic nuclei, binary black holes, relativistic jets and accretion disks. In particular, the general relativistic nature of the orbits of close binaries of supermassive black holes is explored in some detail. Analytic estimates of some of the general relativistic effects in such a binary are presented, as well as numerical methods to calculate the effects more precisely. It is also shown how these results can be applied to the OJ287 system. The binary orbit model forms the basis for models of the recurring optical outbursts in the OJ287 system. In the introduction, two physical outburst models are presented in some detail and compared. The radiation hydrodynamics of the outbursts are discussed and optical light curve predictions are derived. The precursor outbursts studied in Paper III are also presented and tied into the model of OJ287. To complete the discussion of the observable features of OJ287, the nature of the relativistic jets in the system, and in active galactic nuclei in general, is discussed. The basic physics of relativistic jets is presented, with additional detail added in the form of helical jet models. The results of Papers II, IV and V concerning the jet of OJ287 are presented, and their relation to other facets of the binary black hole model is discussed. As a whole, the introduction serves as a guide, though a terse one, to the physics and numerical methods required to successfully understand and simulate a close binary of supermassive black holes. For this purpose, the introduction necessarily combines a large number of both fundamental and specific results from broad disciplines such as general relativity and radiation hydrodynamics. With the material included in the introduction, the publications of the thesis, which present new results with a much narrower focus, can be readily understood. Of the publications, Paper I presents newly discovered optical data points for OJ287, detected on archival astronomical plates from the Harvard College Observatory. These data points show the 1900 outburst of OJ287 for the first time. In addition, new data points covering the 1913 outburst allowed the start of the outburst to be determined with more precision than was possible before. These outbursts were then successfully numerically modelled with an N-body simulation of the OJ287 binary and accretion disc. In Paper II, mechanisms for the spin-up of the secondary black hole in OJ287 via interaction with the primary accretion disc and the magnetic fields in the system are discussed. Timescales for spin-up and alignment via both processes are estimated. It is found that the secondary black hole likely has a high spin. Paper III reports a new outburst of OJ287 in March 2013. The outburst was found to be rather similar to the ones reported in 1993 and 2004. All these outbursts happened just before the main outburst season, and are called precursor outbursts.
In this paper, a mechanism was proposed for the precursor outbursts, in which the secondary black hole collides with a gas cloud in the primary accretion disc corona. From this, estimates of the brightness and timescales of the precursor were derived, as well as a prediction of the timing of the next precursor outburst. In Paper IV, observations from the 2004–2006 OJ287 observing program are used to investigate the existence of short periodicities in OJ287. The existence of a ~50 day quasiperiodic component is confirmed. In addition, statistically significant 250 day and 3.5 day periods are found. Primary black hole accretion of a spiral density wave in the accretion disc is proposed as the source of the 50 day period, with numerical simulations supporting these results. Lorentz-contracted jet re-emission is then proposed as the reason for the 3.5 day timescale. Paper V fits optical observations and millimetre and centimetre radio observations of OJ287 with a helical jet model. The jet is found to have a spine-sheath structure, with the sheath having a much lower Lorentz gamma factor than the spine. The sheath opening angle and Lorentz factor, as well as the helical wavelength of the jet, are reported for the first time.
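As a flavour of the analytic general relativistic estimates mentioned in the introduction, the sketch below evaluates the leading-order periastron advance per orbit, Δφ = 6πGM/(c²a(1−e²)), using rough literature values for OJ287. It is an order-of-magnitude illustration only, not the thesis's calculation, which includes higher-order effects.

```python
# Leading-order (1PN) periastron advance for a black-hole binary, with the
# semi-major axis derived from Kepler's third law.  The OJ287 parameters
# below are rough literature values used purely for illustration.
import math

G, c, M_sun, yr = 6.674e-11, 2.998e8, 1.989e30, 3.156e7

M = 1.8e10 * M_sun                       # primary mass, ~1.8e10 solar masses
P = 12.0 * yr                            # orbital period, ~12 years
e = 0.7                                  # orbital eccentricity (assumed)

a = (G * M * P**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)    # Kepler's third law
dphi = 6.0 * math.pi * G * M / (c**2 * a * (1.0 - e**2))  # periastron advance
print(f"a = {a / 3.086e16:.3f} pc, precession = {math.degrees(dphi):.0f} deg/orbit")
```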
Abstract:
Service provider selection has been said to be a critical factor in the formation of supply chains. Through successful selection, companies can attain competitive advantage, cost savings and more flexible operations. Service provider management is the next crucial step in the outsourcing process after the selection has been made. Without proper management, companies cannot be sure about the level of service they have bought, and they may suffer from the service provider's opportunistic behavior. In the worst-case scenario, the buyer company may end up in a locked-in situation in which it is totally dependent on the service provider. This thesis studies how the case company conducts its carrier selection process and the criteria related to it; a model for the final selection is also provided. In addition, the case company's carrier management procedures are reflected against recommendations from previous research. The research was conducted as a qualitative case study of the principal company, Neste Oil Retail. A literature review was made on outsourcing, service provider selection and service provider management. On the basis of the literature review, this thesis recommends the analytic hierarchy process (AHP) as the preferred model for carrier selection. Furthermore, agency theory was seen as a functional framework for carrier management in this study. The empirical part of this thesis was conducted in the case company by interviewing the key persons in the selection process, making observations and going through documentation related to the subject. According to the results of the study, both the carrier selection process and carrier management were closely in line with the suggestions from the literature review. The AHP results revealed that the case company considers service quality the most important criterion, with financial situation and price of service following behind with almost identical weights. Equipment and personnel was seen as the least important selection criterion. Regarding carrier management, the study concluded that the company should consider engaging more in carrier development and working towards beneficial and effective relationships. Otherwise, no major changes were recommended to the case company's processes.
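A minimal sketch of the AHP priority derivation, assuming an invented pairwise comparison matrix over the four criteria named above (the case company's actual judgments are not public); the weights fall out of the principal eigenvector, mirroring the reported ordering.

```python
import numpy as np

criteria = ["service quality", "financial situation",
            "price of service", "equipment & personnel"]
# A[i, j] = how much more important criterion i is than j (Saaty's 1-9 scale)
A = np.array([
    [1.0,   3.0,   3.0,   5.0],
    [1/3,   1.0,   1.0,   3.0],
    [1/3,   1.0,   1.0,   3.0],
    [1/5,   1/3,   1/3,   1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                       # principal eigenpair
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                      # normalised priority weights

CI = (eigvals[k].real - len(A)) / (len(A) - 1)    # consistency index
print(dict(zip(criteria, w.round(3))), "CI =", round(CI, 3))
```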
Abstract:
Demand for energy systems entailing high efficiency as well as the ability to harness renewable energy sources is a key issue in tackling the threat of global warming and in saving natural resources. Organic Rankine cycle (ORC) technology has been identified as one of the most promising technologies for recovering low-grade heat sources and for harnessing renewable energy sources that cannot be efficiently utilized by more conventional power systems. The ORC is based on the working principle of the Rankine process, but an organic working fluid is adopted in the cycle instead of steam. This thesis presents numerical and experimental results of a study on the design of small-scale ORCs. Two main applications were selected for the thesis: waste heat recovery from small-scale diesel engines, concentrating on the utilization of the exhaust gas heat, and waste heat recovery in large industrial-scale engine power plants, considering the utilization of both the high and low temperature heat sources. The main objective of this work was to identify suitable working fluid candidates and to study the process and turbine design methods that can be applied when power plants based on the use of non-conventional working fluids are considered. The computational work included the use of thermodynamic analysis methods and turbine design methods based on highly accurate fluid properties. In addition, the design and loss mechanisms of supersonic ORC turbines were studied by means of computational fluid dynamics. The results indicated that the design of an ORC is highly influenced by the selection of the working fluid and the cycle operational conditions. The results for the turbine designs indicated that the working fluid selection should not be based on thermodynamic analysis alone, but also requires consideration of the turbine design. The turbines tend to be fast rotating, entailing small blade heights at the turbine rotor inlet and highly supersonic flow in the turbine flow passages, especially when power systems with low power outputs are designed. The results indicated that the ORC is a potential solution for utilizing waste heat streams both at high and low temperatures, and in both micro and larger scale applications.
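As a small illustration of the thermodynamic cycle analysis involved, the sketch below evaluates a simple ORC on toluene (one common high-temperature candidate fluid) with the CoolProp property library. The temperatures, pressures and turbine efficiency are illustrative assumptions, not the thesis's design cases.

```python
# Simple ORC on toluene: pump -> evaporator -> turbine -> condenser.
# All state-point numbers below are illustrative assumptions.
from CoolProp.CoolProp import PropsSI

fluid, eta_turb = "Toluene", 0.75
T_cond, T_evap = 313.15, 523.15                    # K (40 C / 250 C)

p_lo = PropsSI("P", "T", T_cond, "Q", 0, fluid)    # condensing pressure
p_hi = PropsSI("P", "T", T_evap, "Q", 1, fluid)    # evaporating pressure

h1 = PropsSI("H", "T", T_cond, "Q", 0, fluid)      # saturated liquid
v1 = 1.0 / PropsSI("D", "T", T_cond, "Q", 0, fluid)
h2 = h1 + v1 * (p_hi - p_lo)                       # ideal pump work
h3 = PropsSI("H", "T", T_evap, "Q", 1, fluid)      # saturated vapour
s3 = PropsSI("S", "T", T_evap, "Q", 1, fluid)
h4s = PropsSI("H", "P", p_lo, "S", s3, fluid)      # isentropic expansion
h4 = h3 - eta_turb * (h3 - h4s)                    # real turbine exit

eta_cycle = ((h3 - h4) - (h2 - h1)) / (h3 - h2)    # net work / heat input
print(f"thermal efficiency ~ {eta_cycle:.1%}")
```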
Abstract:
The aim of this research is to examine the pricing anomalies existing in the U.S. market from 1986 to 2011. The sample of stocks is divided into decile portfolios based on seven individual valuation ratios (E/P, B/P, S/P, EBIT/EV, EBITDA/EV, D/P and CE/P) and price momentum, in order to investigate the efficiency of individual valuation ratios and their combinations as portfolio formation criteria. This is the first time in the financial literature that CE/P has been employed as a constituent of a composite value measure. The combinations are based on median-scaled composite value measures and the TOPSIS method. During the sample period, value portfolios significantly outperform both the market portfolio and comparable glamour portfolios. The results show the highest return for the value portfolio based on the combination of the S/P and CE/P ratios. The outcome of this research will increase the understanding of the suitability of different methodologies for portfolio selection. It will help managers to take advantage of the results of different methodologies in order to gain returns above the market.
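A minimal sketch of the TOPSIS step mentioned above, ranking stocks by several valuation ratios at once; the three-stock data matrix and weights are invented, and the thesis's criteria set and weighting scheme differ.

```python
import numpy as np

# rows = stocks, columns = criteria (e.g. E/P, B/P, S/P); higher is better
X = np.array([[0.08, 0.9, 1.2],
              [0.05, 1.4, 0.8],
              [0.11, 0.7, 1.5]])
w = np.array([0.4, 0.3, 0.3])                   # criterion weights

V = w * X / np.linalg.norm(X, axis=0)           # weighted, vector-normalised
ideal, anti = V.max(axis=0), V.min(axis=0)      # ideal / anti-ideal points
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)             # 1 = best, 0 = worst

print(np.argsort(closeness)[::-1])              # stocks ranked best-first
```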
Abstract:
Appropriate supplier selection and its profound effects on increasing the competitive advantage of companies have been widely discussed in the supply chain management (SCM) literature. With rising environmental awareness, companies and industries attach more importance to sustainable and green activities in the selection procedures for raw material providers. The current thesis uses the data envelopment analysis (DEA) technique to evaluate the relative efficiency of suppliers in the presence of carbon dioxide (CO2) emissions for green supplier selection. We incorporate the pollution of suppliers as an undesirable output into DEA. Doing so, however, raises two problems of conventional DEA models: the lack of discrimination power among decision making units (DMUs) and the flexibility of the input and output weights. To overcome these limitations, we use multiple criteria DEA (MCDEA) as one alternative. By applying MCDEA, the number of suppliers identified as efficient decreases, which leads to a better ranking and selection of the suppliers. In addition, in order to compare the performance of the suppliers with an ideal supplier, a “virtual” best practice supplier is introduced. The presence of the ideal virtual supplier also increases the discrimination power of the model for a better ranking of the suppliers. Therefore, a new MCDEA model is proposed to simultaneously handle undesirable outputs and the virtual DMU. The developed model is applied to the green supplier selection problem, and a numerical example illustrates its applicability.
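One simple way to reflect both devices is sketched below, under strong simplifying assumptions: the CO2 emission is folded in as if it were an input (so less emission improves efficiency), and a virtual best-practice supplier, taking the best observed value on every criterion, is appended before scoring. The supplier data is invented, and the actual MCDEA formulation is richer than this plain CCR model.

```python
import numpy as np
from scipy.optimize import linprog

def ccr(X, Y, j0):
    """Input-oriented CCR envelopment LP for unit j0 (columns are suppliers)."""
    m, n = X.shape
    A = np.vstack([np.hstack([-X[:, [j0]], X]),
                   np.hstack([np.zeros((Y.shape[0], 1)), -Y])])
    b = np.r_[np.zeros(m), -Y[:, j0]]
    return linprog(np.r_[1.0, np.zeros(n)], A_ub=A, b_ub=b,
                   bounds=[(None, None)] + [(0, None)] * n).fun

X = np.array([[100., 120., 90.],     # cost (conventional input)
              [ 55.,  40., 70.]])    # CO2 emission, treated as an input
Y = np.array([[200., 210., 190.]])   # delivered goods (desirable output)

X = np.c_[X, X.min(axis=1)]          # virtual supplier: least of every input...
Y = np.c_[Y, Y.max(axis=1)]          # ...and most of every output
print([round(ccr(X, Y, j), 3) for j in range(X.shape[1] - 1)])
```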
Abstract:
The significance and impact of services in the modern global economy have grown, and for decades there has been demand in the academic community of international business for further research into better understanding the internationalisation of services. Theories based on the internationalisation of manufacturing firms have long been questioned for their applicability to services. This study aims to contribute to the understanding of the internationalisation of services by examining how market selection decisions are made for new service products within the existing markets of a multinational financial service provider. The study focused on the factors influencing market selection and was conducted as a case study of a multinational financial service firm and two of its new service products. Two directors responsible for the development and internationalisation of the case service products were interviewed in guided semi-structured interviews based on themes adopted from the literature review and the resulting theoretical framework. The main empirical findings of the study suggest that the most significant factors influencing market selection for new service products within a multinational financial service firm's existing markets are: commitment to the new service products by both the management and the rest of the product-related organisation; the capability and competence of the local country organisations to adopt new services; market potential, which combines market size, market structure and the competitive environment; product fit to the market requirements; and enabling partnerships. Based on the empirical findings, this study suggests a framework of factors influencing market selection for new service products, and proposes further research issues and methods to test and extend the findings.
Abstract:
Personalized medicine will revolutionize our capabilities to combat disease. Working toward this goal, a fundamental task is the deciphering of genetic variants that are predictive of complex diseases. Modern studies, in the form of genome-wide association studies (GWAS), have afforded researchers the opportunity to reveal new genotype-phenotype relationships through the extensive scanning of genetic variants. These studies typically contain over half a million genetic features for thousands of individuals. Examining such data with methods other than univariate statistics is a challenging task requiring advanced algorithms that are scalable to the genome-wide level. In the future, next-generation sequencing (NGS) studies will contain an even larger number of common and rare variants. Machine learning-based feature selection algorithms have been shown to be able to effectively create predictive models for various genotype-phenotype relationships. This work explores the problem of selecting genetic variant subsets that are the most predictive of complex disease phenotypes through various feature selection methodologies, including filter, wrapper and embedded algorithms. The examined machine learning algorithms were demonstrated not only to be effective at predicting the disease phenotypes, but also to do so efficiently through the use of computational shortcuts. While much of the work could be run on high-end desktops, some of it was further extended so that it could be implemented on parallel computers, helping to ensure that the methods will also scale to NGS data sets. Further, these studies analyzed the relationships between various feature selection methods and demonstrated the need for careful testing when selecting an algorithm. It was shown that there is no universally optimal algorithm for variant selection in GWAS; rather, methodologies need to be selected based on the desired outcome, such as the number of features to be included in the prediction model. It was also demonstrated that without proper model validation, for example using nested cross-validation, the models can yield overly optimistic prediction accuracies and decreased generalization ability. It is through the implementation and application of machine learning methods that one can extract predictive genotype-phenotype relationships and biological insights from genetic data sets.
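The nested cross-validation safeguard mentioned above can be sketched with scikit-learn: feature selection and hyperparameter tuning run inside the inner loop on every training fold, so the outer score stays honest. The toy data is far below GWAS scale and the selector is a generic filter method, used here only to show the structure.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import Pipeline

# Toy stand-in for a genotype matrix: 200 samples x 500 features
X, y = make_classification(n_samples=200, n_features=500,
                           n_informative=10, random_state=0)

pipe = Pipeline([("select", SelectKBest(f_classif)),     # filter-style selector
                 ("clf", LogisticRegression(max_iter=1000))])
inner = GridSearchCV(pipe, {"select__k": [5, 20, 50]}, cv=3)   # tunes k
outer = cross_val_score(inner, X, y, cv=5)                     # honest estimate
print(f"nested-CV accuracy: {outer.mean():.2f} +/- {outer.std():.2f}")
```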