969 results for Supplier selection problem


Relevance: 30.00%

Abstract:

Simulation is an effective method for improving supply chain performance. However, there is limited advice available to assist practitioners in selecting the most appropriate method for a given problem. Much of the advice that does exist relies on custom and practice rather than on rigorous conceptual or empirical analysis. An analysis of the different modelling techniques applied in the supply chain domain was conducted, and the three main approaches to simulation were identified: System Dynamics (SD), Discrete-Event Simulation (DES) and Agent-Based Modelling (ABM). This research has examined these approaches in two stages. First, a first-principles analysis was carried out to challenge the received wisdom about their strengths and weaknesses, and a series of propositions was developed from this initial analysis. Second, a case-study approach was used to test these propositions and to provide further empirical evidence to support their comparison. The contributions of this research are in terms of both knowledge and practice. In terms of knowledge, this research is the first holistic cross-paradigm comparison of the three main approaches in the supply chain domain. Case studies have involved building ‘back-to-back’ models of the same supply chain problem using SD and a discrete approach (either DES or ABM). This has led to contributions concerning the limitations of applying SD to operational problem types. SD has also been found to carry risks when applied to strategic and policy problems. Discrete methods have been found to have potential for exploring strategic problem types, and it has been found that discrete simulation methods can model material and information feedback successfully. Further insights have been gained into the relationship between modelling purpose and modelling approach.
In terms of practice, the findings have been summarised in the form of a framework linking modelling purpose, problem characteristics and simulation approach.
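
As a minimal illustration of the distinction the thesis draws (this sketch is not from the thesis; the replenishment policy and all parameters are invented), the same one-echelon inventory problem can be modelled both ways: as an aggregate SD-style stock-and-flow with a feedback policy, and as a discrete-event model in which individual demand events trigger the same feedback.

```python
import heapq

def sd_inventory(periods=20, demand=10.0, target=100.0, adjust=0.5, init=80.0):
    """System Dynamics view: inventory as a continuous stock; replenishment
    is a flow that closes the gap to a target (a material feedback loop)."""
    stock, history = init, []
    for _ in range(periods):
        order = adjust * (target - stock) + demand  # feedback policy
        stock += order - demand                     # net flow into the stock
        history.append(stock)
    return history

def des_inventory(horizon=20.0, interarrival=1.0, qty=10, target=100, init=80):
    """Discrete-event view: individual demand events drain the stock and
    trigger an order-up-to replenishment -- the same feedback, attached to
    discrete entities rather than aggregate flows."""
    stock, events = init, [(interarrival, "demand")]
    while events:
        t, _ = heapq.heappop(events)
        if t > horizon:
            break
        stock -= qty                    # one discrete demand arrives
        if stock < target:
            stock = target              # order up to target (instant lead time)
        heapq.heappush(events, (t + interarrival, "demand"))
    return stock
```

Both formulations drive the stock back to the target, which is the sense in which discrete methods can also "model material and information feedback successfully".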

Relevance: 30.00%

Abstract:

This paper suggests a data envelopment analysis (DEA) model for selecting the most efficient alternative in advanced manufacturing technology in the presence of both cardinal and ordinal data. The paper explains the problem with using an iterative method to find the most efficient alternative and proposes a new DEA model that does not require solving a series of LPs. A numerical example illustrates the model, and an application in technology selection with multiple inputs and outputs shows the usefulness of the proposed approach. © 2012 Springer-Verlag London Limited.
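
To fix the intuition behind DEA efficiency (this is not the paper's model, which handles multiple inputs/outputs and ordinal data in a single formulation; the data below are invented): with one input and one output, the classic efficiency score of each alternative reduces to its output/input ratio normalised by the best ratio, so no LP is needed at all.

```python
def dea_single_ratio(alternatives):
    """alternatives: {name: (input, output)}; returns {name: efficiency in (0, 1]}."""
    ratios = {k: out / inp for k, (inp, out) in alternatives.items()}
    best = max(ratios.values())
    return {k: r / best for k, r in ratios.items()}

# invented technology alternatives: (cost input, throughput output)
techs = {"A": (4.0, 8.0), "B": (5.0, 15.0), "C": (6.0, 12.0)}
scores = dea_single_ratio(techs)
most_efficient = max(scores, key=scores.get)
```

The paper's contribution lies precisely where this shortcut breaks down: multiple inputs and outputs, mixed cardinal/ordinal data, and a single model in place of a series of LPs.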

Relevance: 30.00%

Abstract:

Artifact selection decisions typically involve the selection of one from a number of possible/candidate options (decision alternatives). In order to support such decisions, it is important to identify and recognize relevant key issues of problem solving and decision making (Albers, 1996; Harris, 1998a, 1998b; Jacobs & Holten, 1995; Loch & Conger, 1996; Rumble, 1991; Sauter, 1999; Simon, 1986). Sauter classifies four problem solving/decision making styles: (1) left-brain style, (2) right-brain style, (3) accommodating, and (4) integrated (Sauter, 1999). The left-brain style employs analytical and quantitative techniques and relies on rational and logical reasoning. In an effort to achieve predictability and minimize uncertainty, problems are explicitly defined, solution methods are determined, orderly information searches are conducted, and analysis is increasingly refined. Left-brain style decision making works best when it is possible to predict/control, measure, and quantify all relevant variables, and when information is complete. In direct contrast, right-brain style decision making is based on intuitive techniques—it places more emphasis on feelings than facts. Accommodating decision makers use their non-dominant style when they realize that it will work best in a given situation. Lastly, integrated style decision makers are able to combine the left- and right-brain styles—they use analytical processes to filter information and intuition to contend with uncertainty and complexity.

Relevance: 30.00%

Abstract:

Purpose – The purpose of this research is to develop a holistic approach to maximizing the customer service level while minimizing the logistics cost, using an integrated multiple criteria decision making (MCDM) method for the contemporary transshipment problem. Unlike the prevalent optimization techniques, this paper proposes an integrated approach that considers both quantitative and qualitative factors in order to maximize the benefits of service deliverers and customers under uncertain environments. Design/methodology/approach – This paper proposes a fuzzy-based integer linear programming model, based on the existing literature and validated with an example case. The model integrates the developed fuzzy modification of the analytic hierarchy process (FAHP) and solves the multi-criteria transshipment problem. Findings – This paper provides several novel insights about how to transform a company from a cost-based model to a service-dominated model by using an integrated MCDM method. It suggests that the contemporary customer-driven supply chain can maintain and increase its competitiveness in two respects: optimizing cost and providing the best service simultaneously. Research limitations/implications – This research used one illustrative industry case to exemplify the developed method. Considering the generalizability of the research findings and the complexity of the transshipment service network, more cases across multiple industries are necessary to further enhance the validity of the research output. Practical implications – The paper includes implications for the evaluation and selection of transshipment service suppliers, the construction of an optimal transshipment network, and the management of that network. Originality/value – The major advantages of this generic approach are that both quantitative and qualitative factors under a fuzzy environment are considered simultaneously, and that the viewpoints of both service deliverers and customers are taken into account. It is therefore believed to be useful and applicable for transshipment service network design.
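
As a hedged sketch of the fuzzy-AHP ingredient (not the paper's exact FAHP formulation; the criteria and judgements below are invented): pairwise comparisons are expressed as triangular fuzzy numbers (l, m, u), aggregated per criterion by the fuzzy geometric mean, defuzzified by the centroid, and normalised into weights.

```python
def fuzzy_geo_mean(row):
    """Fuzzy geometric mean of a row of triangular fuzzy numbers (l, m, u)."""
    n = len(row)
    prod = [1.0, 1.0, 1.0]
    for l, m, u in row:
        prod = [prod[0] * l, prod[1] * m, prod[2] * u]
    return tuple(p ** (1.0 / n) for p in prod)

def fahp_weights(matrix):
    """matrix[i][j] = triangular fuzzy judgement of criterion i versus j."""
    fuzzy = [fuzzy_geo_mean(row) for row in matrix]
    crisp = [(l + m + u) / 3.0 for l, m, u in fuzzy]  # centroid defuzzification
    total = sum(crisp)
    return [c / total for c in crisp]

# two invented criteria: logistics cost vs customer service level,
# with cost judged "moderately more important" (fuzzy 3)
one = (1.0, 1.0, 1.0)
judge = (2.0, 3.0, 4.0)
inv = (1 / 4.0, 1 / 3.0, 1 / 2.0)   # reciprocal judgement
w_cost, w_service = fahp_weights([[one, judge], [inv, one]])
```

The resulting weights would then feed the qualitative side of the integer linear programming model.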

Relevance: 30.00%

Abstract:

Design of casting entails knowledge of various interacting factors that are unique to the casting process, and, quite often, product designers do not have the required foundry-specific knowledge. Casting designers normally have to liaise with casting experts to ensure that the product designed is castable and that the optimum casting method is selected. This two-way communication results in long design lead times, and its absence can easily lead to incorrect casting design. A computer-based system at the discretion of a design engineer can, however, alleviate this problem and enhance the prospect of casting design for manufacture. This paper proposes a knowledge-based expert system approach to assist casting product designers in selecting the most suitable casting process for specified casting design requirements during the design phase of product manufacture. A prototype expert system has been developed, based on the production-rules knowledge representation technique. The proposed system consists of a number of autonomous but interconnected levels, each dealing with a specific group of factors, namely casting alloy, shape and complexity parameters, accuracy requirements, and comparative costs based on production quantity. The user interface has been designed to give the user a clear view of how casting design parameters affect the selection of the various casting processes at each level; if necessary, appropriate design changes can be made to facilitate the castability of the product being designed, or to suit the design to a preferred casting method.
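
The production-rule, level-by-level filtering idea can be sketched as follows (the rules, thresholds, and process names below are invented for illustration and are not taken from the prototype system): each level examines one group of factors and rules out processes that cannot meet the specification.

```python
# Each rule: (attribute, predicate on its value, processes ruled out when true)
RULES = [
    ("alloy",     lambda v: v == "steel", {"die casting"}),       # alloy level
    ("min_wall",  lambda v: v < 2.0,      {"sand casting"}),      # shape level
    ("tolerance", lambda v: v < 0.1,      {"sand casting"}),      # accuracy level
    ("quantity",  lambda v: v < 1000,     {"die casting"}),       # cost level
]

ALL_PROCESSES = {"sand casting", "investment casting", "die casting"}

def select_processes(spec):
    """Apply each rule level in turn; return the surviving candidate processes."""
    candidates = set(ALL_PROCESSES)
    for attr, pred, excluded in RULES:
        if attr in spec and pred(spec[attr]):
            candidates -= excluded
    return candidates

# a hypothetical design: steel part, thin walls, small production run
spec = {"alloy": "steel", "min_wall": 1.5, "quantity": 500}
feasible = select_processes(spec)
```

Because each level is a separate rule group, a designer can see which requirement eliminated which process and adjust the design accordingly.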

Relevance: 30.00%

Abstract:

To solve multi-objective problems, multiple reward signals are often scalarized into a single value and further processed using established single-objective problem-solving techniques. While the field of multi-objective optimization has made many advances in applying scalarization techniques to obtain good solution trade-offs, the utility of applying these techniques in the multi-objective multi-agent learning domain has not yet been thoroughly investigated. Agents learn the value of their decisions by linearly scalarizing their reward signals at the local level, from which acceptable system-wide behaviour emerges. However, the non-linear relationship between the weighting parameters of the scalarization function and the learned policy makes the discovery of system-wide trade-offs time consuming. Our first contribution is a thorough analysis of well-known scalarization schemes within the multi-objective multi-agent reinforcement learning setup. The analysed approaches intelligently explore the weight space in order to find a wider range of system trade-offs. In our second contribution, we propose a novel adaptive weight algorithm which interacts with the underlying local multi-objective solvers and allows for a better coverage of the Pareto front. Our third contribution is the experimental validation of our approach by learning bi-objective policies in self-organising smart camera networks. We note that our algorithm (i) explores the objective space faster on many problem instances, (ii) obtains solutions that exhibit a larger hypervolume, and (iii) achieves a greater spread in the objective space.
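
Two of the building blocks mentioned above can be made concrete in a few lines (a sketch only; the weights and points are invented, and this is not the paper's adaptive algorithm): linear scalarization collapses a reward vector into a scalar, and the hypervolume indicator measures how much of the objective space a trade-off front dominates.

```python
def scalarize(rewards, weights):
    """Linear scalarization of a multi-objective reward vector."""
    return sum(r * w for r, w in zip(rewards, weights))

def hypervolume_2d(front, ref):
    """Area dominated by a 2-D front (maximisation) w.r.t. a reference point.
    Sweep points in descending order of objective 1, adding each strip of
    new height; dominated points contribute nothing."""
    hv, prev_y = 0.0, ref[1]
    for x, y in sorted(front, reverse=True):
        if y > prev_y:
            hv += (x - ref[0]) * (y - prev_y)
            prev_y = y
    return hv

# an invented bi-objective front and two weight settings
front = [(3.0, 1.0), (2.0, 2.0), (1.0, 3.0)]
hv = hypervolume_2d(front, (0.0, 0.0))
s = scalarize((3.0, 1.0), (0.5, 0.5))
```

A larger hypervolume, as in finding (ii), means the learned policies dominate more of the objective space relative to the reference point.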

Relevance: 30.00%

Abstract:

AMS subject classification: 90C31, 90A09, 49K15, 49L20.

Relevance: 30.00%

Abstract:

Relay selection has been considered an effective method for improving the performance of cooperative communication. However, the Channel State Information (CSI) used in relay selection can be outdated, yielding severe performance degradation of cooperative communication systems. In this paper, we investigate relay selection under outdated CSI in a Decode-and-Forward (DF) cooperative system to improve its outage performance. We formulate an optimization problem in which the set of relays that forwards data is optimized to minimize the probability of outage conditioned on the outdated CSI of all the decodable relays’ links. We then propose a novel multiple-relay selection strategy based on the solution of this optimization problem. Simulation results show that the proposed relay selection strategy achieves a large improvement in outage performance compared with existing relay selection strategies for outdated CSI reported in the literature.
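
A Monte Carlo sketch of the underlying effect (not the paper's strategy or its analytical solution; all parameters and the fading model are invented for illustration): the selector observes outdated Rayleigh channel gains correlated (rho) with the true ones, so selection on outdated CSI degrades as rho drops.

```python
import math
import random

def rayleigh_pair(rho, rng):
    """Return (outdated_gain, actual_gain) of one relay link: each gain is
    the squared magnitude of a complex Gaussian; components are correlated
    with coefficient rho between the outdated and actual observations."""
    xo, yo = rng.gauss(0, 1), rng.gauss(0, 1)
    xn = rho * xo + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
    yn = rho * yo + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
    return xo ** 2 + yo ** 2, xn ** 2 + yn ** 2

def outage_prob(n_relays=4, rho=0.9, snr=10.0, rate=2.0, trials=20000, seed=1):
    """Pick the relay with the best *outdated* gain; count outages on the
    *actual* gain (capacity below the target rate)."""
    rng = random.Random(seed)
    threshold = (2 ** rate - 1) / snr   # gain needed to support the rate
    outages = 0
    for _ in range(trials):
        links = [rayleigh_pair(rho, rng) for _ in range(n_relays)]
        _, actual = max(links)          # selection uses the outdated gain
        if actual < threshold:
            outages += 1
    return outages / trials

p_fresh = outage_prob(rho=0.95)   # nearly up-to-date CSI
p_stale = outage_prob(rho=0.1)    # badly outdated CSI: selection ~ random
```

The gap between `p_fresh` and `p_stale` is the degradation the paper's multiple-relay strategy is designed to combat.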

Relevance: 30.00%

Abstract:

Venture capitalists can be regarded as financiers of young, high-risk enterprises, seeking investments with high growth potential and offering professional support above and beyond their capital investment. The aim of this study is to analyse the occurrence of information asymmetry between venture capital investors and entrepreneurs, with special regard to the problem of adverse selection. In the course of my empirical research, I conducted in-depth interviews with 10 venture capital investors. The aim of the research was to elicit their opinions about the situation regarding information asymmetry, how they deal with problems arising from adverse selection, and what measures they take to manage these within the investment process. In the interviews we also touched upon how investors evaluate state intervention, and how much they believe company managers are influenced by state support.

Relevance: 30.00%

Abstract:

Environmental criteria have become increasingly prevalent both in the literature and in corporate practice, as shown by the growing number of studies and articles on green criteria. Researchers are also developing ever more complex methodologies, incorporating environmental criteria, for selecting the optimal supplier. The purpose of this paper is to present and systematize green criteria in supplier evaluation, and to show how large a toolset is already available to companies that wish to go beyond traditional criteria when evaluating their suppliers. It also discusses the main motivations for companies to integrate green criteria into their supplier evaluation systems. The research found that legal requirements are not the only driver for companies to assess their suppliers on environmental grounds. For the present, however, companies mostly check whether their suppliers have an environmental management system in place, and consider few of the other green criteria that have appeared in the literature. The same holds for methodology: corporate practice shows that companies rarely use the complex methods developed in the literature, preferring easily measurable tools that involve fewer criteria.

Relevance: 30.00%

Abstract:

This study gives a general overview of supplier development (SD). Taking stock of the established concepts, the main conceptual frameworks, the recommended organisational arrangements, and the factors supporting or impeding successful implementation serves to place lean supplier development in a broad context. Lean supplier development is a set of institutionalised organisational routines, based on the practice of large Japanese automotive companies. It combines ad hoc problem solving with process-oriented, broad, and strategic supplier development, focused on key suppliers. In working with suppliers, both bilateral relations (purchaser and supplier) and multilateral relations (purchaser and a group of suppliers) play an important role. Dozens of full-time employees carry out these activities; these employees are also experts in the purchasing company's production system and operating philosophy. At large global companies, several departments, sometimes working independently of one another, deal with supplier development, even within a single region. Successful implementation, participation, and commitment are supported by incentive routines that are likewise formalised and institutionalised (e.g. power issues, sharing of gains, partners' responsibilities).

Relevance: 30.00%

Abstract:

Fitting statistical models is computationally challenging when the sample size or the dimension of the dataset is huge. An attractive approach for down-scaling the problem size is to first partition the dataset into subsets and then fit using distributed algorithms. The dataset can be partitioned either horizontally (in the sample space) or vertically (in the feature space), and the challenge arises in defining an algorithm with low communication, theoretical guarantees, and excellent practical performance in general settings. For sample space partitioning, I propose a MEdian Selection Subset AGgregation Estimator (message) algorithm for solving these issues. The algorithm applies feature selection in parallel for each subset using regularized regression or a Bayesian variable selection method, calculates the 'median' feature inclusion index, estimates coefficients for the selected features in parallel for each subset, and then averages these estimates. The algorithm is simple, involves minimal communication, scales efficiently in sample size, and has theoretical guarantees. I provide extensive experiments to show excellent performance in feature selection, estimation, prediction, and computation time relative to the usual competitors.

While sample space partitioning is useful in handling datasets with large sample size, feature space partitioning is more effective when the data dimension is high. Existing methods for partitioning features, however, are either vulnerable to high correlations or inefficient in reducing the model dimension. In the thesis, I propose a new embarrassingly parallel framework named DECO for distributed variable selection and parameter estimation. In DECO, variables are first partitioned and allocated to m distributed workers. The decorrelated subset data within each worker are then fitted via any algorithm designed for high-dimensional problems. We show that by incorporating the decorrelation step, DECO can achieve consistent variable selection and parameter estimation on each subset with (almost) no assumptions. In addition, the convergence rate is nearly minimax optimal for both sparse and weakly sparse models and does not depend on the partition number m. Extensive numerical experiments are provided to illustrate the performance of the new framework.

For datasets with both large sample sizes and high dimensionality, I propose a new "divide-and-conquer" framework, DEME (DECO-message), which leverages both the DECO and the message algorithms. The new framework first partitions the dataset in the sample space into row cubes using message and then partitions the feature space of the cubes using DECO. This procedure is equivalent to partitioning the original data matrix into multiple small blocks, each with a feasible size that can be stored and fitted in a computer in parallel. The results are then synthesized via the DECO and message algorithms in reverse order to produce the final output. The whole framework is extremely scalable.
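
The 'median' aggregation step of message can be sketched in isolation (the per-subset fitting is abstracted away here, and the votes are invented): each subset worker reports a 0/1 inclusion indicator per feature, and a feature is kept when its median inclusion across subsets is 1, i.e. when a majority of subsets selected it.

```python
def median_inclusion(votes_per_subset):
    """votes_per_subset: list of 0/1 lists, one per subset worker.
    Returns indices of features whose median inclusion vote is 1."""
    n_subsets = len(votes_per_subset)
    n_feats = len(votes_per_subset[0])
    kept = []
    for j in range(n_feats):
        votes = sorted(v[j] for v in votes_per_subset)
        if votes[n_subsets // 2] == 1:   # median of the 0/1 votes
            kept.append(j)
    return kept

# 5 subset workers, 4 features: features 0 and 2 win a majority
votes = [[1, 0, 1, 0],
         [1, 1, 1, 0],
         [1, 0, 1, 1],
         [0, 0, 1, 0],
         [1, 0, 0, 0]]
selected = median_inclusion(votes)
```

The median vote is what makes the aggregation robust: a few subsets selecting a spurious feature cannot force it into the final model.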

Relevance: 30.00%

Abstract:

Resource Selection (or Query Routing) is an important step in P2P IR. Though analogous to document retrieval in the sense of choosing a relevant subset of resources, resource selection methods have evolved independently from those for document retrieval. Among the reasons for such divergence is that document retrieval targets scenarios where the underlying resources are semantically homogeneous, whereas peers may manage diverse content. We observe that semantic heterogeneity is mitigated in the resource selection layer of the clustered 2-tier P2P IR architecture through the use of clustering, and posit that this necessitates a re-examination of the applicability of document retrieval methods for resource selection within such a framework. This paper empirically benchmarks document retrieval models against the state-of-the-art resource selection models for the problem of resource selection in the clustered P2P IR architecture, using classical IR evaluation metrics. Our benchmarking study illustrates that document retrieval models significantly outperform other methods for the task of resource selection in the clustered P2P IR architecture. This indicates that the clustered P2P IR framework can exploit advancements in document retrieval methods to deliver corresponding improvements in resource selection, indicating potential convergence of these fields for the clustered P2P IR architecture.

Relevance: 30.00%

Abstract:

A Bayesian optimization algorithm for the nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. Unlike our previous work, which used GAs to implement implicit learning, the learning in the proposed algorithm is explicit, i.e. eventually we will be able to identify and mix building blocks directly. The Bayesian optimization algorithm implements such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance of each variable is generated using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, a new rule string has been obtained. Another set of rule strings is generated in this way, some of which replace previous strings based on fitness selection. If stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed approach might be suitable for other scheduling problems.
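
The learn-sample-select loop can be sketched in simplified form (the paper builds a full Bayesian network over the rule choices; to keep this example short the variables are treated as independent, i.e. a univariate estimation-of-distribution algorithm, and the fitness function is invented with rule 1 best for every nurse):

```python
import random

def eda_rule_strings(n_nurses=8, n_rules=3, pop=40, top=10, gens=30, seed=0):
    """Evolve rule strings (one scheduling rule index per nurse) by
    repeatedly sampling from learned probabilities, keeping the most
    promising strings, and re-estimating the probabilities from them."""
    rng = random.Random(seed)
    # probability of each rule per nurse (marginals; the paper uses
    # conditionals from a Bayesian network instead)
    probs = [[1.0 / n_rules] * n_rules for _ in range(n_nurses)]

    def sample():
        return [rng.choices(range(n_rules), weights=probs[i])[0]
                for i in range(n_nurses)]

    def fitness(s):                 # invented: rule 1 is best everywhere
        return sum(1 for r in s if r == 1)

    best = None
    for _ in range(gens):
        population = sorted((sample() for _ in range(pop)),
                            key=fitness, reverse=True)
        promising = population[:top]
        best = population[0]
        for i in range(n_nurses):   # re-estimate probabilities from the elite
            for r in range(n_rules):
                count = sum(1 for s in promising if s[i] == r)
                probs[i][r] = (count + 1) / (top + n_rules)  # Laplace smoothing
    return best

best = eda_rule_strings()
```

Over the generations, the probability mass concentrates on the rules that appear in the promising strings, which is the explicit-learning mechanism described above.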

Relevance: 30.00%

Abstract:

This paper presents a case-based heuristic selection approach for automated university course and exam timetabling. The method described in this paper is motivated by the goal of developing timetabling systems that are fundamentally more general than the current state of the art. Heuristics that worked well in previous similar situations are memorized in a case base and are retrieved for solving the problem at hand. Knowledge discovery techniques are employed in two distinct scenarios. Firstly, we model the problem and the problem-solving situations along with specific heuristics for those problems. Secondly, we refine the case base and discard cases which prove to be non-useful in solving new problems. Experimental results are presented and analyzed. It is shown that case-based reasoning can act effectively as an intelligent approach to learn which heuristics work well for particular timetabling situations. We conclude by outlining and discussing potential research issues in this critical area of knowledge discovery for different difficult timetabling problems.
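
The retrieval step can be sketched as nearest-neighbour lookup over problem features (the cases, features, and heuristic names below are invented for illustration and are not the paper's case representation): a case stores the feature vector of a previously solved timetabling problem together with the heuristic that worked, and the closest stored case supplies the heuristic for a new problem.

```python
import math

CASE_BASE = [
    # (features: [n_events, room_utilisation, constraint_density], heuristic)
    ([100.0, 0.9, 0.2], "largest-degree-first"),
    ([400.0, 0.5, 0.7], "saturation-degree"),
    ([250.0, 0.7, 0.4], "largest-enrolment-first"),
]

def retrieve_heuristic(features, case_base=CASE_BASE):
    """Return the heuristic of the nearest case by Euclidean distance.
    (A real system would normalise features so no single one dominates.)"""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, heuristic = min(((dist(features, f), h) for f, h in case_base),
                       key=lambda t: t[0])
    return heuristic

# a new, hypothetical timetabling problem close to the second stored case
h = retrieve_heuristic([390.0, 0.55, 0.65])
```

The case-base refinement described in the paper would then prune stored cases whose retrieved heuristics repeatedly perform poorly on new problems.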