19 results for Suppliers selection problem
Abstract:
The supplier evaluation and selection problem has been studied extensively, and various decision making approaches have been proposed to tackle it. In contemporary supply chain management, the performance of potential suppliers is evaluated against multiple criteria rather than the single factor of cost. This paper reviews the literature on multi-criteria decision making approaches for supplier evaluation and selection. Related articles appearing in international journals from 2000 to 2008 are gathered and analyzed so that the following three questions can be answered: (i) Which approaches were prevalently applied? (ii) Which evaluating criteria received the most attention? (iii) Are there any inadequacies in the approaches? Based on any inadequacies found, improvements and possible future work are recommended. This research not only provides evidence that multi-criteria decision making approaches are better than the traditional cost-based approach, but also aids researchers and decision makers in applying these approaches effectively.
Abstract:
This thesis is developed from a real-life application of performance evaluation of small and medium-sized enterprises (SMEs) in Vietnam. It presents two main methodological developments on the evaluation of the impact of dichotomous environmental variables on technical efficiency. Taking selection bias into account, the thesis proposes a revised frontier separation approach for the seminal Data Envelopment Analysis (DEA) model developed by Charnes, Cooper, and Rhodes (1981). The revised frontier separation approach is based on nearest-neighbour propensity score matching, pairing treated SMEs with their counterfactuals on the propensity score. The thesis also develops an order-m frontier conditioned on the propensity score, building on the conditional order-m approach proposed by Cazals, Florens, and Simar (2002) and advocated by Daraio and Simar (2005). This development allows the conditional order-m approach to be applied with a dichotomous environmental variable while taking into account the self-selection problem of impact evaluation. Monte Carlo style simulations were built to examine the effectiveness of these developments. The methodological developments are applied in empirical studies to evaluate the impact of training programmes on the performance of food processing SMEs and the impact of exporting on the technical efficiency of textile and garment SMEs in Vietnam. The analysis shows that training programmes have no significant impact on the technical efficiency of food processing SMEs. Moreover, the analysis confirms the conclusion of the export literature that exporters self-select into the sector. The thesis finds no significant impact of exporting activities on the technical efficiency of textile and garment SMEs; however, a large bias is eliminated by the proposed approach. The results of the empirical studies contribute to the understanding of the impact of different environmental variables on the performance of SMEs and help policy makers design proper policies to support the development of Vietnamese SMEs.
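The matching step underlying the revised frontier separation approach is standard enough to sketch. Below is a minimal, hypothetical Python illustration (not the thesis's code) of nearest-neighbour propensity score matching on simulated SME data; the variable names, the toy data, and the use of a placeholder efficiency score are assumptions for illustration only.

```python
# A minimal sketch of nearest-neighbour propensity score matching:
# treated SMEs are paired with untreated counterfactuals that have the
# closest estimated propensity score. Toy data and names are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))                                 # observed SME characteristics
treated = (X[:, 0] + rng.normal(size=n) > 0).astype(int)    # e.g. attended a training programme
efficiency = 0.5 + 0.1 * X[:, 1] + rng.normal(scale=0.05, size=n)  # placeholder for a DEA/order-m score

# Step 1: estimate the propensity score P(treated = 1 | X)
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: for each treated unit, find the untreated unit with the nearest score
treated_idx = np.where(treated == 1)[0]
control_idx = np.where(treated == 0)[0]
matches = control_idx[np.argmin(np.abs(ps[treated_idx, None] - ps[None, control_idx]), axis=1)]

# Step 3: compare efficiency of treated units with their matched counterfactuals
att = np.mean(efficiency[treated_idx] - efficiency[matches])
print(f"Matched difference in efficiency scores (ATT estimate): {att:.4f}")
```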
Abstract:
This paper investigates whether government support can act to increase exporting activity. We use a uniquely rich data set on Irish manufacturing plants and employ an empirical strategy that combines a nonparametric matching procedure with a difference-in-differences estimator in order to deal with the potential selection problem inherent in the analysis. Our results suggest that if grants are large enough, they can encourage already exporting firms to compete more effectively on the international market. However, there is little evidence that grants encourage nonexporters to start exporting.
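The difference-in-differences step on matched plants can be illustrated with a few lines of code. The sketch below is a hypothetical toy example, not the paper's estimation; the simulated outcomes and the built-in effect sizes are assumptions.

```python
# A minimal sketch of difference-in-differences on matched plants: compare
# the change in an outcome (e.g. log exports) before and after grant receipt
# for recipients versus their matched non-recipients. Toy data only.
import numpy as np

rng = np.random.default_rng(1)
n_pairs = 100  # matched pairs of grant-receiving and comparable non-receiving plants

# Outcome before and after the grant period
treated_pre  = rng.normal(5.0, 1.0, n_pairs)
treated_post = treated_pre + 0.30 + rng.normal(0, 0.2, n_pairs)   # grant effect built into the toy data
control_pre  = rng.normal(5.0, 1.0, n_pairs)
control_post = control_pre + 0.05 + rng.normal(0, 0.2, n_pairs)   # common time trend only

# Difference-in-differences: change for treated minus change for matched controls
did = (treated_post - treated_pre).mean() - (control_post - control_pre).mean()
print(f"Estimated grant effect on log exports (DiD): {did:.3f}")
```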
Abstract:
Purpose – The purpose of this paper is to explore the importance of host country networks and organisation of production in the context of international technology transfer that accompanies foreign direct investment (FDI). Design/methodology/approach – The empirical analysis is based on unbalanced panel data covering Japanese firms active in two-digit manufacturing sectors over a seven-year period. Given the self-selection problem affecting past sectoral-level studies, using firm-level panel data is a prerequisite to provide robust empirical evidence. Findings – While Japan is thought of as being a technologically advanced country, the results show that vertical productivity spillovers from FDI occur in Japan, but they are sensitive to technological differences between domestic firms and the idiosyncratic Japanese institutional network. FDI in vertically organised keiretsu sectors generates inter-industry spillovers through backward and forward linkages, while FDI within sectors linked to vertical keiretsu activities adversely affects domestic productivity. Overall, our results suggest that the role of vertical keiretsu is more prevalent than that of horizontal keiretsu. Originality/value – Japan’s industrial landscape has been dominated by institutional clusters or networks of inter-firm organisations through reciprocated, direct and indirect ties. However, interactions between inward investors and such institutionalised networks in the host economy are seldom explored. The role and characteristics of local business groups, in the form of keiretsu networks, have been investigated to determine the scale and scope of spillovers from inward FDI to Japanese establishments. This conceptualisation depends on the institutional mechanism and the market structure through which host economies absorb and exploit FDI.
Abstract:
The survival of organisations, especially SMEs, depends to a great extent on those who supply them with the required material inputs. If a supplier fails to deliver the right materials at the right time and place, and at the right price, then the recipient organisation is bound to fail in its obligation to satisfy the needs of its customers and to stay in business. Hence, the task of choosing from a list of vendors a supplier that an organisation will trust with its very existence is not an easy one. This project investigated how purchasing personnel in organisations solve the problem of vendor selection, and went further to ascertain whether an expert system model could be developed and used as a plausible solution to the problem. An extensive literature review indicated that very little research has been conducted in the area of expert systems for vendor selection, whereas many research theories in expert systems and in purchasing and supply chain management, respectively, had been reported. A survey questionnaire was designed and circulated to people in industry who actually perform the vendor selection task. Analysis of the collected data confirmed the various factors considered during the selection process and established the order in which those factors are ranked. Five of the factors, namely Production Methods Used, Vendor's Financial Background, Manufacturing Capacity, Size of Vendor Organisation, and Supplier's Position in the Industry, showed similar patterns in the way organisations ranked them: the bigger the organisation, the more importance it attached to these factors. Further investigation revealed that respondents agreed that the most important factors were Product Quality, Product Price and Delivery Date. The most apparent pattern was observed for the Vendor's Financial Background. This observation led to the design and development of a prototype expert system, called ESfNS, for assessing the financial profile of a potential supplier; it determines whether a prospective supplier has a good financial background or not. ESfNS was tested by potential users, who confirmed that expert systems have great prospects and commercial viability in the vendor selection domain.
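ESfNS itself is not reproduced here, but the flavour of a rule-based financial assessment can be sketched. The thresholds, financial ratios, and rules below are invented for illustration and are not taken from the thesis.

```python
# A hypothetical, much-simplified illustration of the kind of rule-based
# reasoning an expert system for vendor financial assessment might apply.
# Ratios, thresholds and rules are invented for illustration only.
def assess_financial_background(current_ratio, debt_to_equity, profit_margin):
    """Return a verdict and the rules that fired."""
    fired = []
    score = 0
    if current_ratio >= 1.5:
        fired.append("liquidity adequate (current ratio >= 1.5)")
        score += 1
    if debt_to_equity <= 1.0:
        fired.append("leverage acceptable (debt/equity <= 1.0)")
        score += 1
    if profit_margin >= 0.05:
        fired.append("profitable (margin >= 5%)")
        score += 1
    verdict = "good financial background" if score >= 2 else "weak financial background"
    return verdict, fired

verdict, reasons = assess_financial_background(current_ratio=1.8,
                                               debt_to_equity=0.7,
                                               profit_margin=0.08)
print(verdict)
for r in reasons:
    print(" -", r)
```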
Abstract:
Purpose – The purpose of this research is to develop a holistic approach to maximize the customer service level while minimizing the logistics cost by using an integrated multiple criteria decision making (MCDM) method for the contemporary transshipment problem. Unlike the prevalent optimization techniques, this paper proposes an integrated approach which considers both quantitative and qualitative factors in order to maximize the benefits of service deliverers and customers under uncertain environments. Design/methodology/approach – This paper proposes a fuzzy-based integer linear programming model, based on the existing literature and validated with an example case. The model integrates the developed fuzzy modification of the analytic hierarchy process (FAHP) and solves the multi-criteria transshipment problem. Findings – This paper provides several novel insights about how to transform a company from a cost-based model to a service-dominated model by using an integrated MCDM method. It suggests that the contemporary customer-driven supply chain maintains and increases its competitiveness in two ways: optimizing the cost and providing the best service simultaneously. Research limitations/implications – This research used one illustrative industry case to exemplify the developed method. Considering the generalization of the research findings and the complexity of the transshipment service network, more cases across multiple industries are necessary to further enhance the validity of the research output. Practical implications – The paper includes implications for the evaluation and selection of transshipment service suppliers, the construction of an optimal transshipment network, and the management of that network. Originality/value – The major advantages of this generic approach are that both quantitative and qualitative factors under a fuzzy environment are considered simultaneously and that the viewpoints of both service deliverers and customers are taken into account. Therefore, it is believed to be useful and applicable for transshipment service network design.
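The general idea of blending AHP-style criteria weights with route-level cost and service data can be sketched briefly. The example below is hypothetical: the pairwise comparison matrix, candidate routes, and their figures are invented, and the paper's actual fuzzy AHP integrated with integer linear programming is not reproduced.

```python
# A minimal sketch: derive criteria weights from a pairwise comparison matrix
# (classical AHP eigenvector step) and combine them with cost/service data
# to rank candidate transshipment routes. All figures are invented.
import numpy as np

# Pairwise comparison of three criteria: cost, delivery time, service quality
A = np.array([[1.0, 3.0, 2.0],
              [1/3, 1.0, 1/2],
              [1/2, 2.0, 1.0]])

# Principal eigenvector of A gives the criteria weights
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()

# Candidate transshipment routes: [cost per unit, delivery days, service score 0-1]
routes = {
    "via hub X": np.array([12.0, 5.0, 0.80]),
    "via hub Y": np.array([10.0, 7.0, 0.60]),
    "direct":    np.array([15.0, 3.0, 0.90]),
}

def desirability(v):
    # Normalise so that lower cost and time, and higher service, score better
    costs = np.array([r[0] for r in routes.values()])
    days  = np.array([r[1] for r in routes.values()])
    return (w[0] * (costs.min() / v[0]) +
            w[1] * (days.min() / v[1]) +
            w[2] * v[2])

for name, v in sorted(routes.items(), key=lambda kv: -desirability(kv[1])):
    print(f"{name}: combined score {desirability(v):.3f}")
```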
Abstract:
A formalism for describing the dynamics of Genetic Algorithms (GAs) using methods from statistical mechanics is applied to the problem of generalization in a perceptron with binary weights. The dynamics are solved for the case where a new batch of training patterns is presented to each population member each generation, which considerably simplifies the calculation. The theory is shown to agree closely with simulations of a real GA averaged over many runs, accurately predicting the mean best solution found. For weak selection and large problem size the difference equations describing the dynamics can be expressed analytically, and we find that the effects of noise due to the finite size of each training batch can be removed by increasing the population size appropriately. If this population resizing is used, one can deduce the most computationally efficient size of training batch each generation. For independent patterns this choice also gives the minimum total number of training patterns used. Although using independent patterns is a very inefficient use of training patterns in general, this work may also prove useful for determining the optimum batch size in the case where patterns are recycled.
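The scenario the formalism analyses can be sketched as a small simulation. The code below is a hypothetical illustration of a GA learning the binary weights of a perceptron with a fresh batch of training patterns each generation; the population size, batch size, and GA operators are illustrative choices, not those of the paper.

```python
# A minimal sketch of a GA learning a binary-weight perceptron, with a new
# batch of training patterns drawn each generation. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N = 51               # perceptron input size (binary weights in {-1, +1}); odd to avoid ties
pop_size = 60
batch_size = 40
generations = 80
teacher = rng.choice([-1, 1], size=N)          # target rule to be learnt

pop = rng.choice([-1, 1], size=(pop_size, N))  # initial random population

def fitness(pop, patterns, labels):
    # Fraction of the current batch classified like the teacher
    preds = np.sign(pop @ patterns.T)
    return (preds == labels).mean(axis=1)

for g in range(generations):
    patterns = rng.choice([-1, 1], size=(batch_size, N))   # new batch each generation
    labels = np.sign(patterns @ teacher)
    fit = fitness(pop, patterns, labels)

    # Fitness-proportional (Boltzmann-like) selection of parent pairs
    probs = np.exp(4.0 * fit)
    probs /= probs.sum()
    parents = pop[rng.choice(pop_size, size=(pop_size, 2), p=probs)]

    # Uniform crossover followed by a small mutation rate
    mask = rng.random((pop_size, N)) < 0.5
    children = np.where(mask, parents[:, 0, :], parents[:, 1, :])
    mutate = rng.random((pop_size, N)) < 0.01
    pop = np.where(mutate, -children, children)

overlap = (pop @ teacher) / N                  # agreement with the teacher rule
print(f"best overlap with teacher after {generations} generations: {overlap.max():.2f}")
```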
Abstract:
A formalism recently introduced by Prugel-Bennett and Shapiro uses the methods of statistical mechanics to model the dynamics of genetic algorithms. To be of more general interest than the test cases they consider, the technique is applied here to the subset sum problem, a combinatorial optimization problem with a strongly non-linear energy (fitness) function and many local minima under single spin flip dynamics. It is a problem which exhibits an interesting dynamics, reminiscent of stabilizing selection in population biology. The dynamics are solved under certain simplifying assumptions and are reduced to a set of difference equations for a small number of relevant quantities. The quantities used are the population's cumulants, which describe its shape, and the mean correlation within the population, which measures the microscopic similarity of population members. Including the mean correlation allows a better description of the population than the cumulants alone would provide and represents a new and important extension of the technique. The formalism includes finite population effects and describes problems of realistic size. The theory is shown to agree closely with simulations of a real genetic algorithm, and the mean best energy is accurately predicted.
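A small simulation makes the quantities concrete. The sketch below is hypothetical: a GA on a toy subset sum instance, tracking the leading cumulants (mean, variance) of the population's energy distribution and the mean correlation between population members; parameter choices are illustrative, not those of the paper.

```python
# A minimal sketch of a GA on the subset sum problem, reporting population
# energy cumulants and the mean pairwise correlation. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n_items, pop_size, generations = 40, 50, 200
items = rng.integers(1, 1000, size=n_items)
target = items.sum() // 2

pop = rng.integers(0, 2, size=(pop_size, n_items))      # bit strings: include item or not

def energy(pop):
    # Energy (to be minimised) = distance of the subset sum from the target
    return np.abs(pop @ items - target)

for g in range(generations):
    E = energy(pop)
    # Boltzmann selection: lower energy -> higher reproduction probability
    probs = np.exp(-(E - E.min()) / (E.std() + 1e-9))
    probs /= probs.sum()
    parents = pop[rng.choice(pop_size, size=(pop_size, 2), p=probs)]
    mask = rng.random((pop_size, n_items)) < 0.5            # uniform crossover
    children = np.where(mask, parents[:, 0, :], parents[:, 1, :])
    flip = rng.random((pop_size, n_items)) < 1.0 / n_items  # mutation
    pop = np.where(flip, 1 - children, children)

E = energy(pop)
spins = 2 * pop - 1                                        # map bits to +/-1 "spins"
corr = (spins @ spins.T) / n_items                         # pairwise overlaps
mean_corr = corr[np.triu_indices(pop_size, k=1)].mean()
print(f"mean energy {E.mean():.1f}, variance {E.var():.1f}, best {E.min()}, "
      f"mean correlation {mean_corr:.2f}")
```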
Abstract:
When composing stock portfolios, managers frequently choose among hundreds of stocks. The stocks' risk properties are analyzed with statistical tools, and managers try to combine these to meet the investors' risk profiles. A recently developed tool for performing such optimization is called full-scale optimization (FSO). This methodology is very flexible with respect to investor preferences, but because of computational limitations it has until now been infeasible to use when many stocks are considered. We apply the artificial intelligence technique of differential evolution to solve FSO-type stock selection problems involving 97 assets. Differential evolution finds the optimal solutions by self-learning from randomly drawn candidate solutions. We show that this search technique makes the large-scale problem computationally feasible and that the solutions retrieved are stable. The study also gives further merit to the FSO technique, as it shows that the solutions suit investor risk profiles better than portfolios retrieved from traditional methods.
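Differential evolution itself is simple to sketch. The example below is a hypothetical toy version applied to portfolio weights: it uses simulated returns for 10 assets and a simple power-utility objective rather than the 97-asset full-scale optimization of the study, and all parameter choices are illustrative.

```python
# A minimal sketch of DE/rand/1/bin applied to a portfolio-weight selection
# problem with an FSO-style expected-utility objective. Toy data only.
import numpy as np

rng = np.random.default_rng(3)
n_assets, n_obs = 10, 500
returns = rng.normal(0.0005, 0.01, size=(n_obs, n_assets))   # simulated daily returns

def neg_expected_utility(w, gamma=5.0):
    # Average CRRA utility of portfolio wealth over the empirical return distribution
    w = np.abs(w)
    w = w / w.sum()                       # long-only weights on the simplex
    wealth = 1.0 + returns @ w
    return -np.mean((wealth ** (1 - gamma) - 1) / (1 - gamma))

pop_size, F, CR, generations = 40, 0.7, 0.9, 300
pop = rng.random((pop_size, n_assets))
fit = np.array([neg_expected_utility(w) for w in pop])

for g in range(generations):
    for i in range(pop_size):
        a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
        mutant = a + F * (b - c)                               # differential mutation
        cross = rng.random(n_assets) < CR
        cross[rng.integers(n_assets)] = True                   # ensure at least one gene crosses
        trial = np.where(cross, mutant, pop[i])
        f_trial = neg_expected_utility(trial)
        if f_trial <= fit[i]:                                   # greedy selection
            pop[i], fit[i] = trial, f_trial

best = np.abs(pop[fit.argmin()])
best /= best.sum()
print("best portfolio weights:", np.round(best, 3))
```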
Abstract:
We address the important bioinformatics problem of predicting protein function from a protein's primary sequence. We consider the functional classification of G-Protein-Coupled Receptors (GPCRs), whose functions are specified in a class hierarchy. We tackle this task using a novel top-down hierarchical classification system where, for each node in the class hierarchy, the predictor attributes to be used in that node and the classifier to be applied to the selected attributes are chosen in a data-driven manner. Compared with a previous hierarchical classification system selecting classifiers only, our new system significantly reduced processing time without significantly sacrificing predictive accuracy.
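The core idea of choosing both the attribute subset and the classifier per node in a data-driven way can be sketched on synthetic data. The example below is a hypothetical two-level hierarchy with generic scikit-learn classifiers, not the GPCR data or the system described in the abstract.

```python
# A minimal sketch of top-down hierarchical classification where each node
# selects its attribute subset and classifier by cross-validation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=30, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
# Two-level hierarchy: superclass 0 = {0, 1}, superclass 1 = {2, 3}
super_y = np.where(np.isin(y, [0, 1]), 0, 1)

def fit_best_node_model(X, y):
    """Pick the (number of attributes, classifier) pair with the best CV score at this node."""
    best = None
    for k in (5, 10, 20):
        for clf in (DecisionTreeClassifier(random_state=0), GaussianNB()):
            pipe = make_pipeline(SelectKBest(f_classif, k=k), clf)
            score = cross_val_score(pipe, X, y, cv=3).mean()
            if best is None or score > best[0]:
                best = (score, pipe)
    score, pipe = best
    return pipe.fit(X, y), score

# Root node: distinguish the two superclasses
root_model, root_score = fit_best_node_model(X, super_y)
# Child nodes: distinguish the classes within each superclass
child_models = {}
for s, classes in {0: [0, 1], 1: [2, 3]}.items():
    mask = np.isin(y, classes)
    child_models[s], _ = fit_best_node_model(X[mask], y[mask])

def predict_top_down(x):
    s = root_model.predict(x.reshape(1, -1))[0]
    return child_models[s].predict(x.reshape(1, -1))[0]

preds = np.array([predict_top_down(x) for x in X])
print(f"root CV accuracy {root_score:.2f}, top-down training accuracy {(preds == y).mean():.2f}")
```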
Abstract:
Artifact selection decisions typically involve the selection of one from a number of possible/candidate options (decision alternatives). In order to support such decisions, it is important to identify and recognize relevant key issues of problem solving and decision making (Albers, 1996; Harris, 1998a, 1998b; Jacobs & Holten, 1995; Loch & Conger, 1996; Rumble, 1991; Sauter, 1999; Simon, 1986). Sauter classifies four problem solving/decision making styles: (1) left-brain style, (2) right-brain style, (3) accommodating, and (4) integrated (Sauter, 1999). The left-brain style employs analytical and quantitative techniques and relies on rational and logical reasoning. In an effort to achieve predictability and minimize uncertainty, problems are explicitly defined, solution methods are determined, orderly information searches are conducted, and analysis is increasingly refined. Left-brain style decision making works best when it is possible to predict/control, measure, and quantify all relevant variables, and when information is complete. In direct contrast, right-brain style decision making is based on intuitive techniques—it places more emphasis on feelings than facts. Accommodating decision makers use their non-dominant style when they realize that it will work best in a given situation. Lastly, integrated style decision makers are able to combine the left- and right-brain styles—they use analytical processes to filter information and intuition to contend with uncertainty and complexity.
Abstract:
Simulation is an effective method for improving supply chain performance. However, there is limited advice available to assist practitioners in selecting the most appropriate method for a given problem. Much of the advice that does exist relies on custom and practice rather than a rigorous conceptual or empirical analysis. An analysis of the different modelling techniques applied in the supply chain domain was conducted, and the three main approaches to simulation used were identified; these are System Dynamics (SD), Discrete Event Simulation (DES) and Agent Based Modelling (ABM). This research has examined these approaches in two stages. Firstly, a first principles analysis was carried out in order to challenge the received wisdom about their strengths and weaknesses and a series of propositions were developed from this initial analysis. The second stage was to use the case study approach to test these propositions and to provide further empirical evidence to support their comparison. The contributions of this research are both in terms of knowledge and practice. In terms of knowledge, this research is the first holistic cross paradigm comparison of the three main approaches in the supply chain domain. Case studies have involved building ‘back to back’ models of the same supply chain problem using SD and a discrete approach (either DES or ABM). This has led to contributions concerning the limitations of applying SD to operational problem types. SD has also been found to have risks when applied to strategic and policy problems. Discrete methods have been found to have potential for exploring strategic problem types. It has been found that discrete simulation methods can model material and information feedback successfully. Further insights have been gained into the relationship between modelling purpose and modelling approach. In terms of practice, the findings have been summarised in the form of a framework linking modelling purpose, problem characteristics and simulation approach.
Abstract:
This paper suggests a data envelopment analysis (DEA) model for selecting the most efficient alternative in advanced manufacturing technology in the presence of both cardinal and ordinal data. The paper explains the problem of using an iterative method for finding the most efficient alternative and proposes a new DEA model that does not require solving a series of LPs. A numerical example illustrates the model, and an application in technology selection with multiple inputs and outputs shows the usefulness of the proposed approach. © 2012 Springer-Verlag London Limited.
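For context, a standard input-oriented CCR DEA score (one LP per alternative) can be sketched as follows; the paper's own contribution, a model that avoids solving a series of LPs and handles ordinal data, is not reproduced, and the data below are invented.

```python
# A minimal sketch of the classical input-oriented CCR DEA efficiency score,
# solved as one LP per alternative (DMU). Toy cardinal data only.
import numpy as np
from scipy.optimize import linprog

# Rows = alternatives (DMUs); two inputs and one output each
X_in  = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0], [2.0, 4.0]])
Y_out = np.array([[1.0], [1.0], [1.0], [1.0], [1.0]])
n, m, s = X_in.shape[0], X_in.shape[1], Y_out.shape[1]

def ccr_efficiency(o):
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.zeros(n + 1); c[0] = 1.0                      # minimise theta
    A_ub = np.zeros((m + s, n + 1)); b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X_in[o]                               # X^T lambda <= theta * x_o
    A_ub[:m, 1:] = X_in.T
    A_ub[m:, 1:] = -Y_out.T                              # Y^T lambda >= y_o
    b_ub[m:] = -Y_out[o]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0]

for o in range(n):
    print(f"alternative {o}: efficiency {ccr_efficiency(o):.3f}")
```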
Abstract:
This article presents a potential method to assist developers of future bioenergy schemes when selecting from available suppliers of biomass materials. The method aims to allow tacit requirements made on biomass suppliers to be considered at the design stage of new developments. The method used is a combination of the Analytical Hierarchy Process and the Quality Function Deployment methods (AHP-QFD). The output of the method is a ranking and relative weighting of the available suppliers which could be used to improve optimization algorithms such as linear and goal programming. The paper is at a conceptual stage and no results have been obtained. The aim is to use the AHP-QFD method to bridge the gap between treatment of explicit and tacit requirements of bioenergy schemes; allowing decision makers to identify the most successful supply strategy available.
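The AHP portion of such a ranking can be sketched briefly. The example below is hypothetical: criteria weights are derived from an invented pairwise comparison matrix (via the common row geometric mean approximation) and combined with invented per-criterion supplier scores; the QFD stage of the proposed method is not reproduced.

```python
# A minimal sketch of AHP-style supplier ranking: criteria weights from a
# pairwise comparison matrix combined with per-criterion supplier scores.
# All matrices, criteria and supplier figures are invented.
import numpy as np

criteria = ["price", "biomass quality", "delivery reliability"]
# Pairwise comparisons on Saaty's 1-9 scale (row criterion vs column criterion)
P = np.array([[1.0, 2.0, 4.0],
              [1/2, 1.0, 3.0],
              [1/4, 1/3, 1.0]])
w = np.prod(P, axis=1) ** (1.0 / P.shape[0])   # row geometric means
w /= w.sum()                                    # normalised criteria weights

# Supplier performance per criterion, already normalised to [0, 1]
suppliers = {
    "supplier A": np.array([0.6, 0.9, 0.7]),
    "supplier B": np.array([0.8, 0.6, 0.8]),
    "supplier C": np.array([0.7, 0.7, 0.5]),
}

scores = {name: float(w @ perf) for name, perf in suppliers.items()}
total = sum(scores.values())
for name, sc in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: score {sc:.3f}, relative weight {sc / total:.2f}")
```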