56 results for Models and Methods
Abstract:
Due to intense international competition, demanding and sophisticated customers, and rapid technological change, organizations need to renew their products and services by allocating resources to research and development (R&D). Managing R&D is complex, but vital for many organizations to survive in a dynamic, turbulent environment. Thus, the increased interest among decision-makers in finding the right performance measures for R&D is understandable. The measures or evaluation methods of R&D performance can be utilized for multiple purposes: for strategic control, for justifying the existence of R&D, for providing information and improving activities, as well as for motivating and benchmarking. Earlier research in the field of R&D performance analysis has generally focused either on the activities and relevant factors and dimensions - e.g. strategic perspectives, purposes of measurement, levels of analysis, types of R&D or phases of the R&D process - prior to the selection of R&D performance measures, or on proposed principles or the actual implementation of the selection or design processes of R&D performance measures or measurement systems. This study aims at integrating the consideration of essential factors and dimensions of R&D performance analysis into developed selection processes of R&D measures, which have been applied in real-world organizations. The earlier models for corporate performance measurement found in the literature are to some extent also adaptable to the development of measurement systems and the selection of measures in R&D activities.
However, it is necessary to emphasize the special aspects related to the measurement of R&D performance that make the development of new approaches, especially for R&D performance measure selection, necessary. First, the special characteristics of R&D - such as the long time lag between inputs and outcomes, as well as the overall complexity and difficult coordination of activities - influence the problems of R&D performance analysis, such as the need for more systematic, objective, balanced and multi-dimensional approaches to R&D measure selection, as well as the incompatibility of R&D measurement systems with other corporate measurement systems and vice versa. Secondly, the above-mentioned characteristics and challenges bring forth the significance of the influencing factors and dimensions that need to be recognized in order to derive the selection criteria for measures and choose the right R&D metrics, which is the most crucial step in the measurement system development process. The main purpose of this study is to support the management and control of the research and development activities of organizations by increasing the understanding of R&D performance analysis, clarifying the main factors related to the selection of R&D measures, and providing novel approaches and methods for systematizing the whole strategy- and business-based selection and development process of R&D indicators. The final aim of the research is to support management in their R&D decision-making with suitable, systematically chosen measures or evaluation methods of R&D performance. Thus, the emphasis in most sub-areas of the present research has been on promoting the selection and development process of R&D indicators with the help of different tools and decision support systems, i.e. the research has normative features, providing guidelines through novel types of approaches.
The gathering of data and the case studies conducted in metal and electronics industry companies, in the information and communications technology (ICT) sector, and in non-profit organizations helped us to formulate a comprehensive picture of the main challenges of R&D performance analysis in different organizations, which is essential, as recognition of the most important problem areas is a crucial element in the constructive research approach utilized in this study. Multiple practical benefits regarding the defined problem areas could be found in the various constructed approaches presented in this dissertation: 1) the selection of R&D measures became more systematic when compared to the empirical analysis, as it was common that no systematic approaches had been utilized in the studied organizations earlier; 2) the evaluation methods or measures of R&D chosen with the help of the developed approaches can be utilized more directly in decision-making, because of the thorough consideration of the purpose of measurement, as well as other dimensions of measurement; 3) more balance in the set of R&D measures was desired and gained through the holistic approaches to the selection processes; and 4) more objectivity was gained through organizing the selection processes, as the earlier systems were considered subjective in many organizations. Scientifically, this dissertation aims to contribute to the present body of knowledge of R&D performance analysis by facilitating the handling of the versatility and challenges of R&D performance analysis, as well as the factors and dimensions influencing the selection of R&D performance measures, and by integrating these aspects into the developed novel approaches, methods and tools in the selection processes of R&D measures, applied in real-world organizations.
In the whole research, facilitation of dealing with the versatility and challenges in R&D performance analysis, as well as the factors and dimensions influencing the R&D performance measure selection are strongly integrated with the constructed approaches. Thus, the research meets the above-mentioned purposes and objectives of the dissertation from the scientific as well as from the practical point of view.
Abstract:
The direct torque control (DTC) has become an accepted vector control method beside the current vector control. The DTC was first applied to asynchronous machines, and has later been applied also to synchronous machines. This thesis analyses the application of the DTC to permanent magnet synchronous machines (PMSM). In order to take full advantage of the DTC, the PMSM has to be properly dimensioned. Therefore, the effect of the motor parameters is analysed taking the control principle into account. Based on the analysis, a parameter selection procedure is presented. The analysis and the selection procedure utilize nonlinear optimization methods. The key element of a direct torque controlled drive is the estimation of the stator flux linkage. Different estimation methods - a combination of current and voltage models and improved integration methods - are analysed. The effect of an incorrectly measured rotor angle in the current model is analysed, and an error detection and compensation method is presented. The dynamic performance of a previously presented sensorless flux estimation method is improved by enhancing the dynamic performance of the low-pass filter used and by adapting the correction of the flux linkage to torque changes. A method for the estimation of the initial rotor angle is presented. The method is based on measuring the inductance of the machine in several directions and fitting the measurements to a model. The model is nonlinear with respect to the rotor angle, and therefore a nonlinear least squares optimization method is needed in the procedure. A commonly used current vector control scheme is the minimum current control. In the DTC, the stator flux linkage reference is usually kept constant. Achieving the minimum current requires control of the reference. An on-line method to minimize the current by controlling the stator flux linkage reference is presented.
Also, the control of the reference above the base speed is considered. A new flux linkage estimator is introduced for the estimation of the parameters of the machine model. In order to utilize the flux linkage estimates in off-line parameter estimation, the integration methods are improved. An adaptive correction is used in the same way as in the estimation of the controller's stator flux linkage. The presented parameter estimation methods are then used in a self-commissioning scheme. The proposed methods are tested with a laboratory drive, which consists of commercial inverter hardware with modified software and several prototype PMSMs.
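The initial-angle estimation step above can be sketched numerically. The snippet below is a simplified illustration, not the thesis's actual procedure: it assumes a standard saliency model L(theta) = L0 + L2*cos(2*(theta - theta_r)) for the measured inductance and, instead of a general nonlinear solver, exploits the fact that expanding the cosine makes this particular model linear in its coefficients.

```python
import numpy as np

def fit_rotor_angle(angles, inductances):
    """Fit L(theta) = L0 + L2*cos(2*(theta - theta_r)) to measured
    inductances. Expanding the cosine gives a model linear in
    (L0, a, b) with a = L2*cos(2*theta_r), b = L2*sin(2*theta_r)."""
    A = np.column_stack([np.ones_like(angles),
                         np.cos(2 * angles),
                         np.sin(2 * angles)])
    (L0, a, b), *_ = np.linalg.lstsq(A, inductances, rcond=None)
    theta_r = 0.5 * np.arctan2(b, a)   # rotor angle, ambiguous by pi
    L2 = np.hypot(a, b)
    return theta_r, L0, L2

# Synthetic noise-free measurements in several directions.
theta_true = 0.4
angles = np.linspace(0.0, np.pi, 12, endpoint=False)
meas = 5e-3 + 1e-3 * np.cos(2 * (angles - theta_true))
theta_hat, L0_hat, L2_hat = fit_rotor_angle(angles, meas)
```

A general nonlinear least squares solver, as used in the thesis, becomes necessary only when the model cannot be reparameterized into a linear form like this.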
Abstract:
This thesis surveys temporal and stochastic software reliability models and examines a few of the models in practice. The theoretical part covers the key definitions and metrics used in describing and assessing software reliability, as well as the descriptions of the models themselves. Two groups of software reliability models are presented. The first group consists of risk-based models. The second group comprises models based on fault "seeding" and fault significance. The empirical part contains the descriptions and results of the experiments. The experiments were carried out using three models from the first group: the Jelinski-Moranda model, the first geometric model, and a simple exponential model. The purpose of the experiments was to examine how the distribution of the input data affects the performance of the models, and how sensitive the models are to changes in the amount of input data. The Jelinski-Moranda model proved the most sensitive to the distribution due to convergence problems, while the first geometric model was the most sensitive to changes in the amount of data.
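As a hedged illustration of the first model group, the sketch below fits the Jelinski-Moranda model, in which the i-th inter-failure time is exponentially distributed with rate phi*(N - i + 1) for an initial fault count N and per-fault hazard phi. For a fixed N the maximum-likelihood estimate of phi has a closed form, so a simple grid search over N suffices; the function names and the grid bound are illustrative.

```python
import math

def jm_fit(t, max_extra=200):
    """Maximum-likelihood fit of the Jelinski-Moranda model to
    inter-failure times t. Failure i has rate phi*(N - i + 1); for a
    fixed N the MLE of phi is closed-form, so we grid-search over N."""
    n = len(t)
    best = None
    for N in range(n, n + max_extra + 1):
        s = sum((N - i + 1) * ti for i, ti in enumerate(t, start=1))
        phi = n / s                     # closed-form MLE given N
        ll = sum(math.log(phi * (N - i + 1)) - phi * (N - i + 1) * ti
                 for i, ti in enumerate(t, start=1))
        if best is None or ll > best[0]:
            best = (ll, N, phi)
    return best[1], best[2]   # estimated initial fault count, hazard

# Tiny hand-checkable case: for N = 3 and t = [1, 2, 4],
# sum_i (N - i + 1)*t_i = 3*1 + 2*2 + 1*4 = 11, so phi = 3/11.
t = [1.0, 2.0, 4.0]
phi_hat = len(t) / sum((3 - i + 1) * ti for i, ti in enumerate(t, start=1))
N_hat, ph = jm_fit(t)
```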
Abstract:
The objective of this study was to examine the value network and business models of wireless internet services. The study was qualitative in nature, and constructive case research was used as its strategy. The example service was the Treasure Hunters mobile phone game. The study consisted of a theoretical and an empirical part. The theoretical part conceptually linked innovation, business models and the value network to one another, and laid the foundation for developing business models. The empirical part first focused on creating business models based on the developed innovations. Finally, the aim was to define a value network for implementing the service. Innovation sessions, interviews and a questionnaire were used as research methods. Based on the results, several business concepts were formed, as well as a description of a basic value network model for wireless games. It was concluded that wireless services require a value network consisting of several actors in order to be realized.
Abstract:
Value chain collaboration has been a prevailing topic for research, and there is a constantly growing interest in developing collaborative models for improved efficiency in logistics. One area of collaboration is demand information management, which enables improved visibility and decreased inventories in the value chain. Outsourcing of non-core competencies has changed the nature of collaboration from an intra-enterprise to a cross-enterprise activity, and this, together with increasing competition in globalizing markets, has created a need for methods and tools for collaborative work. The retailer part in the value chain of consumer packaged goods (CPG) has been studied relatively widely, proven models have been defined, and several best-practice collaboration cases exist. Information and communications technology has developed rapidly, offering efficient solutions and applications to exchange information between value chain partners. However, the majority of the CPG industry still works with traditional business models and practices. This especially concerns companies operating in the upstream of the CPG value chain. Demand information for consumer packaged goods originates at retailers' counters, based on consumers' buying decisions. As this information does not get transferred along the value chain towards the upstream parties, each player needs to optimize their part, causing safety margins for inventories and speculation in purchasing decisions. The safety margins increase with each player, resulting in a phenomenon known as the bullwhip effect. The further the company is from the original demand information source, the more distorted the information is. This thesis concentrates on the upstream parts of the value chain of consumer packaged goods, and more precisely the packaging value chain. Packaging is becoming a part of the product with informative and interactive features, and therefore is not just a cost item needed to protect the product.
The upstream part of the CPG value chain is distinctive, as the product changes after each involved party, and therefore the original demand information from the retailers cannot be utilized as such - even if it were transferred seamlessly. The objective of this thesis is to examine the main drivers for collaboration, and the barriers causing the moderate adoption level of collaborative models. Another objective is to define a collaborative demand information management model and test it in a pilot business situation in order to see if the barriers can be eliminated. The empirical part of this thesis contains three parts, all related to the research objective, but involving different target groups, viewpoints and research approaches. The study shows evidence that the main barriers to collaboration are very similar to the barriers in the lower part of the same value chain: lack of trust, lack of a business case and lack of senior management commitment. Eliminating one of them - the lack of a business case - is not enough to eliminate the two other barriers, as the operational model in this thesis shows. The uncertainty of the future, fear of losing an independent position in purchasing decision-making and lack of commitment remain strong enough barriers to prevent the implementation of the proposed collaborative business model. The study proposes a new way of defining the value chain processes: it divides the contracting and planning process into two processes, one managing the commercial parts and the other managing the quantity- and specification-related issues. This model can reduce the resistance to collaboration, as the commercial part of the contracting process would remain the same as in the traditional model. The quantity/specification-related issues would be managed by the parties with the best capabilities and resources, as well as access to the original demand information.
The parties in between would be involved in the planning process as well, as their impact on the next party upstream is significant. The study also highlights the future challenges for companies operating in the CPG value chain. The markets are becoming global, with toughening competition. Also, technology development will most likely continue at a speed exceeding the adoption capabilities of the industry. Value chains are also becoming increasingly dynamic, which means shorter and more agile business relationships, and at the same time the predictability of consumer demand is becoming more difficult due to shorter product life cycles and trends. These changes will certainly have an effect on companies' operational models, but it is very difficult to estimate when and how the proven methods will gain wide enough adoption to become standards.
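The bullwhip mechanism described above can be illustrated with a small, purely hypothetical simulation: each tier forecasts demand with a moving average and follows an order-up-to rule, so the variance of orders grows at every step upstream. The policy and parameters below are illustrative, not taken from the thesis.

```python
import numpy as np

def tier_orders(demand, lead_time=2, window=4):
    """Order-up-to policy with a moving-average forecast: each period
    the tier orders the observed demand plus the change in its
    lead-time demand forecast (a standard bullwhip mechanism)."""
    d = np.asarray(demand, dtype=float)
    # moving-average forecast of one period's demand
    f = np.convolve(d, np.ones(window) / window, mode="valid")
    orders = d[window:] + (lead_time + 1) * (f[1:] - f[:-1])
    return np.maximum(orders, 0.0)   # no negative orders

rng = np.random.default_rng(0)
consumer = rng.normal(100.0, 10.0, 3000)   # retail demand
retailer = tier_orders(consumer)           # orders seen by wholesaler
wholesaler = tier_orders(retailer)         # orders seen by manufacturer
```

Each tier sees only the orders of the tier below, never the consumer demand itself, which is exactly the information loss the thesis's collaborative demand information model aims to remove.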
Abstract:
The topological solitons of two classical field theories, the Faddeev-Skyrme model and the Ginzburg-Landau model, are studied numerically and analytically in this work. The aim is to gain information on the existence and properties of these topological solitons, their structure, and their behaviour under relaxation. First, the conditions and mechanisms that make topological solitons possible are explored from the field-theoretical point of view. This leads one to consider continuous deformations of the solutions of the equations of motion. The results of algebraic topology necessary for the systematic treatment of such deformations are reviewed, and methods of determining the homotopy classes of topological solitons are presented. The Faddeev-Skyrme and Ginzburg-Landau models are presented, some earlier results are reviewed, and the numerical methods used in this work are described. The topological solitons of the Faddeev-Skyrme model, Hopfions, are found to follow the same mechanisms of relaxation in three different domains with three different topological classifications. For two of the domains, the necessary but unusual topological classification is presented. Finite-size topological solitons are not found in the Ginzburg-Landau model, and a scaling argument is used to suggest that there are indeed none unless a certain modification to the model, due to R. S. Ward, is made. In that case, the Hopfions of the Faddeev-Skyrme model are seen to be present for some parameter values. A boundary in the parameter space separating the region where the Hopfions exist from the region where they do not is found, and the behaviour of the Hopfion energy on this boundary is studied.
Abstract:
In general, models of ecological systems can be broadly categorized as ’top-down’ or ’bottom-up’ models, based on the hierarchical level on which the model processes are formulated. The structure of a top-down, also known as phenomenological, population model can be interpreted in terms of population characteristics, but it typically lacks an interpretation on a more basic level. In contrast, bottom-up, also known as mechanistic, population models are derived from assumptions and processes on a more basic level, which allows interpretation of the model parameters in terms of individual behavior. Both approaches, phenomenological and mechanistic modelling, have their advantages and disadvantages in different situations. However, mechanistically derived models may be better at capturing the properties of the system at hand, and thus give more accurate predictions. In particular, when models are used for evolutionary studies, mechanistic models are more appropriate, since natural selection takes place on the individual level, and in mechanistic models the direct connection between model parameters and individual properties has already been established. The purpose of this thesis is twofold. Firstly, a systematic way to derive mechanistic discrete-time population models is presented. The derivation is based on combining explicitly modelled, continuous processes on the individual level within a reproductive period with a discrete-time maturation process between reproductive periods. Secondly, as an example of how evolutionary studies can be carried out in mechanistic models, the evolution of the timing of reproduction is investigated. Thus, these two lines of research, the derivation of mechanistic population models and evolutionary studies, are complementary to each other.
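As a hedged example of the within-season derivation idea (not the specific model of the thesis): if juveniles suffer density-dependent mortality dN/dt = -(m + cN)N during a season of length T, integrating this continuous individual-level process yields a survivor count of Beverton-Holt form, giving the between-season discrete-time map a mechanistic interpretation. The snippet checks the numerical integration against the closed form.

```python
import math

def within_season(n0, m, c, T, steps=10000):
    """Integrate dN/dt = -(m + c*N)*N over one season of length T
    (density-dependent juvenile mortality) with fixed-step RK4."""
    h = T / steps
    n = n0
    f = lambda x: -(m + c * x) * x
    for _ in range(steps):
        k1 = f(n)
        k2 = f(n + 0.5 * h * k1)
        k3 = f(n + 0.5 * h * k2)
        k4 = f(n + h * k3)
        n += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return n

def beverton_holt(n0, m, c, T):
    """Closed-form survivor count of the same process; composing it
    with reproduction gives a Beverton-Holt type discrete-time map."""
    e = math.exp(-m * T)
    return n0 * e / (1 + (c / m) * n0 * (1 - e))

n_numeric = within_season(50.0, m=0.5, c=0.01, T=1.0)
n_exact = beverton_holt(50.0, m=0.5, c=0.01, T=1.0)
```

Because the map's parameters (m, c, T) are individual-level quantities, they remain interpretable for evolutionary arguments in the way the abstract describes.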
Abstract:
The aim of this work is to compare two families of mathematical models for their respective capability to capture the statistical properties of real electricity spot market time series. The first family is ARMA-GARCH models and the second is mean-reverting Ornstein-Uhlenbeck models. Both models have been applied to two price series of the Nordic Nord Pool spot market for electricity, namely the System prices and the DenmarkW prices. The parameters of both models were calibrated from the real time series. After carrying out simulations with optimal models from both families, we conclude that neither ARMA-GARCH models nor conventional mean-reverting Ornstein-Uhlenbeck models, even when calibrated optimally with real electricity spot market price or return series, capture the statistical characteristics of the real series. However, in the case of less spiky behavior (the System prices), the mean-reverting Ornstein-Uhlenbeck model can be seen to have partially succeeded in this task.
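A minimal sketch of the Ornstein-Uhlenbeck calibration step (illustrative, not the thesis's exact procedure): the exact discretization of dX = lambda*(mu - X)dt + sigma dW is an AR(1) recursion, so lambda, mu and sigma can be recovered from an ordinary least squares regression of X[t+1] on X[t].

```python
import numpy as np

def calibrate_ou(x, dt):
    """Calibrate dX = lambda*(mu - X)dt + sigma dW from a sampled
    path: the exact discretization is X[t+1] = a*X[t] + b + eps
    with a = exp(-lambda*dt) and b = mu*(1 - a)."""
    x = np.asarray(x, dtype=float)
    a, b = np.polyfit(x[:-1], x[1:], 1)          # OLS slope, intercept
    lam = -np.log(a) / dt
    mu = b / (1.0 - a)
    resid = x[1:] - (a * x[:-1] + b)
    sigma = resid.std(ddof=2) * np.sqrt(2 * lam / (1 - a ** 2))
    return lam, mu, sigma

# Simulate a path with known parameters, then recover them.
rng = np.random.default_rng(1)
lam, mu, sigma, dt, n = 0.5, 30.0, 4.0, 1.0, 4000
a = np.exp(-lam * dt)
sd = sigma * np.sqrt((1 - a ** 2) / (2 * lam))   # exact step noise
x = np.empty(n)
x[0] = mu
for t in range(1, n):
    x[t] = mu + (x[t - 1] - mu) * a + sd * rng.normal()
lam_hat, mu_hat, sigma_hat = calibrate_ou(x, dt)
```

Real spot price series would first be cleaned of spikes and seasonality; the thesis's conclusion is precisely that this Gaussian machinery misses the spiky behaviour.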
Abstract:
Transitional flow past a three-dimensional circular cylinder is a widely studied phenomenon, since this problem is of interest with respect to many technical applications. In the present work, the numerical simulation of flow past a circular cylinder is performed using a commercial CFD code (ANSYS Fluent 12.1) with large eddy simulation (LES) and RANS (κ-ε and Shear-Stress Transport (SST) κ-ω models) approaches. The turbulent flow for Re_D = 1000 and 3900 is simulated to investigate the force coefficients, Strouhal number, flow separation angle, pressure distribution on the cylinder, and the complex three-dimensional vortex shedding in the cylinder wake region. The numerical results extracted from these simulations are in good agreement with the experimental data (Zdravkovich, 1997). Moreover, the influence of grid refinement and time-step size has been examined. Numerical calculation of turbulent cross-flow in a staggered tube bundle continues to attract interest due to its importance in engineering applications, as well as the fact that this complex flow represents a challenging problem for CFD. In the present work, time-dependent simulations using the κ-ε, κ-ω and SST models are performed in two dimensions for a subcritical flow through a staggered tube bundle. The predicted turbulence statistics (mean and r.m.s. velocities) are in good agreement with the experimental data (S. Balabani, 1996). Turbulent quantities such as turbulent kinetic energy and dissipation rate are predicted using the RANS models and compared with each other. The sensitivity to grid and time-step size has been analyzed. A sensitivity study of the model constants has been carried out using the κ-ε model. It has been observed that the turbulence statistics and turbulent quantities are very sensitive to the model constants.
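One concrete post-processing step behind the reported quantities: the Strouhal number is typically obtained by locating the vortex-shedding peak in the spectrum of the lift-coefficient history and forming St = f*D/U. The signal below is synthetic and the dimensions are illustrative.

```python
import numpy as np

# Synthetic lift-coefficient history with a 52 Hz shedding component.
D, U = 0.01, 2.6            # cylinder diameter [m], inflow speed [m/s]
dt, n = 1e-3, 1000          # sampling step [s], number of samples
t = np.arange(n) * dt
cl = 0.3 * np.sin(2 * np.pi * 52.0 * t)

# Shedding frequency = location of the spectral peak (skip the mean).
spec = np.abs(np.fft.rfft(cl))
freqs = np.fft.rfftfreq(n, dt)
f_shed = freqs[np.argmax(spec[1:]) + 1]
strouhal = f_shed * D / U
```

In a real LES run, `cl` would be the recorded lift-coefficient signal after the initial transient has been discarded, and windowing would sharpen the peak.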
Abstract:
Speaker diarization is the process of sorting speech according to the speaker. Diarization helps to search and retrieve what a certain speaker uttered in a meeting. Applications of diarization systems extend to domains other than meetings, for example lectures, telephone, television, and radio. Besides, diarization enhances the performance of several speech technologies such as speaker recognition, automatic transcription, and speaker tracking. Methodologies previously used in developing diarization systems are discussed. Prior results and techniques are studied and compared. Methods such as Hidden Markov Models and Gaussian Mixture Models that are used in speaker recognition and other speech technologies are also used in speaker diarization. The objective of this thesis is to develop a speaker diarization system in the meeting domain. The experimental part of this work indicates that the zero-crossing rate can be used effectively to break the audio stream into segments, and that adaptive Gaussian Models adequately fit short audio segments. Results show that 35 Gaussian Models and an average segment length of one second are optimum values for building a diarization system for the tested data. Uniting the segments uttered by the same speaker is done in a bottom-up clustering with a new approach of categorizing the mixture weights.
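The zero-crossing-rate segmentation cue mentioned above can be sketched as follows (a simplified illustration; frame length and signals are synthetic): low-frequency voiced speech produces a low short-time ZCR, while noise-like audio produces a high one, so thresholding the frame-wise ZCR gives candidate segment boundaries.

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of consecutive sample pairs whose sign changes."""
    signs = np.sign(frame)
    return np.mean(signs[:-1] != signs[1:])

def frame_zcr(signal, frame_len):
    """Short-time ZCR over non-overlapping frames; a simple cue for
    chopping the stream into segments."""
    n = len(signal) // frame_len
    return np.array([zero_crossing_rate(signal[i * frame_len:(i + 1) * frame_len])
                     for i in range(n)])

# A low-frequency 'voiced' stretch followed by a noise-like stretch.
rng = np.random.default_rng(2)
t = np.arange(8000) / 8000.0
voiced = np.sin(2 * np.pi * 120 * t[:4000])   # 120 Hz tone
noisy = rng.normal(size=4000)
zcr = frame_zcr(np.concatenate([voiced, noisy]), frame_len=400)
```

A jump in the frame-wise ZCR (here between frames 9 and 10) marks a candidate segment boundary, which downstream clustering would then label by speaker.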
Abstract:
The Standard Model of particle physics is currently the best description of fundamental particles and their interactions. All particles save the Higgs boson have been observed in particle accelerator experiments over the years. Despite the predictive power of the Standard Model, there are many phenomena that the scenario does not predict or explain. Among the most prominent dilemmas is the matter-antimatter asymmetry, and much effort has been made in formulating scenarios that accurately predict the correct amount of matter-antimatter asymmetry in the universe. One of the most appealing explanations is baryogenesis via leptogenesis, which not only serves as a mechanism for producing an excess of matter over antimatter but can also explain why neutrinos have very small non-zero masses. Interesting leptogenesis scenarios arise when other candidate theories beyond the Standard Model are brought into the picture. In this thesis, we have studied leptogenesis in an extra-dimensional framework and in a modified version of the supersymmetric Standard Model. The first chapters of this thesis introduce the standard cosmological model, observations of the photon-to-baryon ratio, and the necessary preconditions for successful baryogenesis. Baryogenesis via leptogenesis is then introduced and its connection to neutrino physics is illuminated. The final chapters concentrate on extra-dimensional theories and supersymmetric models and their ability to accommodate leptogenesis. There, the results of our research are also presented.
Abstract:
This thesis presents a three-dimensional, semi-empirical, steady-state model for simulating the combustion, gasification, and formation of emissions in circulating fluidized bed (CFB) processes. In a large-scale CFB furnace, the local feeding of fuel, air, and other input materials, as well as the limited mixing rate of the different reactants, produce inhomogeneous process conditions. To simulate the real conditions, the furnace should be modelled three-dimensionally or the three-dimensional effects should be taken into account. The only available methods for simulating large CFB furnaces three-dimensionally are semi-empirical models, which apply a relatively coarse calculation mesh and a combination of fundamental conservation equations, theoretical models and empirical correlations. The number of such models is extremely small. The main objective of this work was to develop a model which can be applied to the calculation of industrial-scale CFB boilers and which can simulate all the essential sub-phenomena: fluid dynamics, reactions, the attrition of particles, and heat transfer. The core of the work was to develop the model frame and the required sub-models for determining the combustion and sorbent reactions. The objective was reached, and the developed model was successfully used for studying various industrial-scale CFB boilers combusting different types of fuel. The model for sorbent reactions, which includes the main reactions for calcitic limestones, was applied to studying the new phenomena possibly occurring in oxygen-fired combustion. The presented combustion and sorbent models and principles can be utilized in other model approaches as well, including other empirical and semi-empirical model approaches, and CFD-based simulations.
The main achievement is the overall model frame which can be utilized for the further development and testing of new sub-models and theories, and for concentrating the knowledge gathered from the experimental work carried out at bench scale, pilot scale and industrial scale apparatus, and from the computational work performed by other modelling methods.
Abstract:
This tactics research focuses on computational methods for computer-assisted simulation that can be used in tactical-level war games. The main contributions of the work are computational models for tactical-level combat simulators that enable probability-based analysis and can be used for comparative analysis in platoon-to-brigade-level scenarios. The computational models focus on the effects of fire. The models concern the probability of a damaging hit, on the basis of which the effect on a unit is modelled using state machines and Markov chains. The results of these are further carried into an event tree analysis of the probability of operational success. The smallest computational unit is modelled at platoon or squad level, so that the computation time in brigade-level war game studies remains sufficiently short while the results remain sufficiently accurate for Finnish terrain. Platoon personnel and weapon system strengths are expressed as distributions rather than as single numbers. Weapon-system-specific predictor-corrector parameters can be used in the numerical integration of the simulation, which enables the modelling of battlefield phenomena shorter than the time step. The weapon models are based on earlier studies and field experiments, some of which are part of this doctoral research. The programmability and usability of the computational models as part of a simulation tool have been demonstrated with the "Sandis" combat simulation software, programmed by a research group led by the author and developed and used at the Finnish Defence Forces Technical Research Centre. Sandis includes a map user interface and computational models simulating the course of battle.
The user or user group makes the tactical decisions and enters them into the simulation via the map interface. As output, the simulation produces the casualty distribution of each platoon-level game unit; for the average casualties, the losses inflicted by each weapon system on each target; ammunition consumption; radio connections and their status; and the evacuation situation of the wounded from platoon level up to the evacuation hospital. The key contributions of the research are: 1) a new computational model for brigade-level war game scenarios whose smallest unit is a platoon or squad; 2) determination of a unit's breaking point based on casualties and the soldiers tied up in evacuating the wounded; 3) the applicability of probability-based risk analysis in comparative studies; 4) experimentally tested models of the effects of fire; and 5) working integration solutions. The work is limited to a computational model producing platoon-level probability distributions for land combat, a field medical model, and an indirect fire model with their integration methods, as well as the applicability of their results. Air-to-ground and sea-to-ground weapon effects can be examined, but not air or naval combat. The models, usage and software engineering of the Sandis software applying these methods are being developed further. Significant topics for further research in the modelling include urban combat, tank duels, the effect of terrain on artillery fire, and the estimation of materiel consumption.
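The state-machine / Markov-chain idea can be sketched with a deliberately tiny, hypothetical example: a unit's strength is kept as a probability distribution over discrete states, and each engagement round multiplies it by a transition matrix derived from the probability of a damaging hit. All numbers below are illustrative, not taken from the Sandis models.

```python
import numpy as np

p_hit = 0.2   # illustrative per-round probability of losing one squad
n_states = 4  # states = number of effective squads: 0, 1, 2, 3

# Transition matrix: from each non-empty state, lose one squad with
# probability p_hit; the destroyed state (0 squads) is absorbing.
P = np.zeros((n_states, n_states))
for s in range(1, n_states):
    P[s, s] = 1 - p_hit
    P[s, s - 1] = p_hit
P[0, 0] = 1.0

dist = np.array([0.0, 0.0, 0.0, 1.0])   # start at full strength (3 squads)
for _ in range(5):                       # five engagement rounds
    dist = dist @ P

expected_squads = np.dot(dist, np.arange(n_states))
```

This is why the output is a distribution of losses rather than a single number: `dist` carries the full uncertainty, and feeding such distributions into an event tree gives the probability of operational success.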
Abstract:
The last decade has shown that the global paper industry needs new processes and products in order to reassert its position. As the paper markets in Western Europe and North America have stabilized, competition has tightened. Along with the development of more cost-effective processes and products, new process design methods are also required to break the old molds and create new ideas. This thesis discusses the development of a process design methodology based on simulation and optimization methods. A bi-level optimization problem and a solution procedure for it are formulated and illustrated. Computational models and simulation are used to describe the phenomena inside a real process, and mathematical optimization is exploited to find the best process structures and control principles for the process. Dynamic process models are used inside the bi-level optimization problem, which is assumed to be dynamic and multiobjective due to the nature of papermaking processes. The numerical experiments show that the bi-level optimization approach is useful for different kinds of problems related to process design and optimization. Here, the design methodology is applied to a constrained process area of a papermaking line. However, the same methodology is applicable to all types of industrial processes, e.g. the design of biorefineries, because the methodology is fully generalized and can be easily modified.
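The bi-level structure can be sketched schematically (hypothetical quadratic cost functions, not the papermaking models of the thesis): the outer problem searches over a design variable, and every candidate design is scored only after the inner problem has optimized the operating variable for it.

```python
# Schematic bi-level optimization with made-up cost functions.
def inner_optimum(d):
    """Inner problem: min_u (u - d)^2 + 0.1*u^2.
    Setting the derivative to zero gives u* = d / 1.1."""
    u = d / 1.1
    return u, (u - d) ** 2 + 0.1 * u ** 2

def outer_cost(d):
    """Outer problem: design-dependent capital cost plus the
    operating cost achieved by the optimized inner problem."""
    capital = 2.0 * (d - 3.0) ** 2
    _, operating = inner_optimum(d)
    return capital + operating

# Crude outer search over a design grid.
grid = [i / 100.0 for i in range(0, 601)]
d_best = min(grid, key=outer_cost)
```

In the thesis the inner problem is itself a dynamic, multiobjective optimization over process models, so each outer evaluation is expensive; the nesting shown here is the same, only the cost functions differ.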
Abstract:
Choosing the right supplier is crucial for long-term business prospects and profitability. Thus, organizational buyers are naturally very interested in how they can select the right supplier for their needs. Likewise, suppliers are interested in knowing how their customers make purchasing decisions in order to sell and market to them effectively. From the point of view of the textile and clothing (T&C) industry, regulatory changes and increasing low-cost and globalization pressures have led to the rise of low-cost production locations, with India and China becoming the world's largest T&C producers. This thesis examines T&C trade between Finland and India, specifically in the context of non-industrial T&C products. Its main research problem asks: what perceptions do Finnish T&C industry buyers hold of India and Indian suppliers? B2B buyers use various supplier selection models and criteria in making their purchase decisions. A significant amount of research has been done on supplier selection practices, and in the context of international trade, country of origin (COO) perceptions specifically have garnered much attention. This thesis uses a mixed methods approach (an online questionnaire and in-depth interviews) to evaluate Finnish T&C buyers' supplier selection criteria, COO perceptions of India and experiences of Indian suppliers. It was found that the most important supplier selection criteria used by Finnish T&C buyers are quality, reliability and cost. COO perceptions were not found to be influential in the purchasing process. Indian T&C suppliers' strengths were found to be low cost, flexibility and a history of traditional T&C expertise. Their weaknesses include product quality and unreliable delivery times. Overall, the main challenges that need to be overcome by Indian T&C companies are logistical difficulties and the cost vs. quality trade-off.
Despite positive perceptions of India regarding cost, the overall value offered by Indian T&C products was perceived to be low due to poor quality. Experiences of unreliable delivery times also affected buyers' reliability perceptions of Indian suppliers. The main limiting factor of this thesis is the small sample size used in the research, which limits the generalizability of the results and the ability to evaluate the reliability and validity of some of the research instruments.
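As background to the supplier selection models mentioned above, a common weighted-sum scoring scheme using the three criteria the study found most important can be sketched as follows; the weights, supplier names and ratings are purely illustrative, not taken from the thesis data.

```python
# Weighted-sum supplier scoring with the study's top three criteria.
weights = {"quality": 0.40, "reliability": 0.35, "cost": 0.25}

suppliers = {
    "Supplier A": {"quality": 8, "reliability": 6, "cost": 9},
    "Supplier B": {"quality": 9, "reliability": 8, "cost": 5},
}

def score(ratings):
    """Weighted sum of the criterion ratings (0-10 scale assumed)."""
    return sum(weights[c] * ratings[c] for c in weights)

ranked = sorted(suppliers, key=lambda s: score(suppliers[s]), reverse=True)
```

The cost vs. quality trade-off described in the abstract is visible here: the cheap supplier only wins if the cost weight is raised at the expense of quality and reliability.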