969 resultados para Supplier selection problem


Relevância:

40.00%

Publicador:

Resumo:

The survival of organisations, especially SMEs, depends to a great extent on those who supply them with the required material inputs. If a supplier fails to deliver the right materials at the right time and place, and at the right price, the recipient organisation is bound to fail in its obligation to satisfy the needs of its customers and to stay in business. Hence the task of choosing, from a list of vendors, the supplier(s) that an organisation will trust with its very existence is not an easy one. This project investigated how purchasing personnel in organisations solve the problem of vendor selection, and went on to ascertain whether an Expert System model could be developed and used as a plausible solution to the problem. An extensive literature review indicated that very little research had been conducted on Expert Systems for vendor selection, even though many research theories in expert systems and in purchasing and supply chain management, respectively, had been reported. A survey questionnaire was designed and circulated to people in industry who actually perform vendor selection. Analysis of the collected data confirmed the various factors considered during the selection process and established the order in which those factors are ranked. Five of the factors, namely Production Methods Used, Vendor's Financial Background, Manufacturing Capacity, Size of Vendor Organisation, and Supplier's Position in the Industry, showed similar patterns in the way organisations ranked them: the bigger the organisation, the more importance it attached to these factors. Further investigation revealed that respondents agreed the most important factors were Product Quality, Product Price and Delivery Date. The most apparent pattern was observed for the Vendor's Financial Background.
This observation motivated the design and development of a prototype expert system, named ESfNS, for assessing the financial profile of potential suppliers. ESfNS determines whether a prospective supplier has a good financial background or not. The prototype was tested by potential users, who confirmed that expert systems have great prospects and commercial viability for solving vendor selection problems in this domain.
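The abstract does not describe ESfNS's actual rules; as a loose illustration of how a rule-based financial assessment might look, consider a minimal sketch (all indicators and thresholds below are invented for illustration):

```python
# Hypothetical rule-based check of a supplier's financial profile, loosely
# in the spirit of an expert system such as ESfNS. The three indicators and
# their thresholds are invented, not taken from the study.

def assess_financial_background(current_ratio, debt_to_equity, years_profitable):
    """Return 'good' or 'poor' from three illustrative financial indicators."""
    score = 0
    if current_ratio >= 1.5:       # can cover short-term liabilities
        score += 1
    if debt_to_equity <= 1.0:      # not over-leveraged
        score += 1
    if years_profitable >= 3:      # sustained profitability
        score += 1
    return "good" if score >= 2 else "poor"
```

A real expert system would encode many more rules elicited from purchasing experts; the point is only the if-then structure of the assessment.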

Relevância:

40.00%

Publicador:

Resumo:

This research is motivated by the need to consider lot sizing while accepting customer orders in a make-to-order (MTO) environment, in which each customer order must be delivered by its due date. The job shop is the typical operation model in MTO, where the production planner must make three concurrent decisions: order selection, lot sizing, and job scheduling. These decisions are usually treated separately in the literature and are mostly addressed with heuristic solutions. The first phase of the study focuses on a formal definition of the problem. Mathematical programming techniques are applied to model the problem in terms of its objective, decision variables, and constraints. A commercial solver, CPLEX, is applied to the resulting mixed-integer linear programming model on small instances to validate the mathematical formulation. The computational results show that a commercial solver is not practical for problems of industrial size. The second phase of the study focuses on developing an effective solution approach for large-scale instances. The proposed approach is an iterative process involving three sequential decision steps: order selection, lot sizing, and lot scheduling. A range of simple sequencing rules is identified for each of the three subproblems, and a computer-simulation experiment is designed to evaluate their performance against a set of system parameters. For order selection, the proposed weighted most profit rule performs best. The shifting bottleneck and earliest operation finish time rules are the best scheduling rules. For lot sizing, the proposed minimum cost increase heuristic, based on the Dixon-Silver method, performs best when the demand-to-capacity ratio at the bottleneck machine is high; the proposed minimum cost heuristic, based on the Wagner-Whitin algorithm, is the best lot-sizing heuristic for shops with a low demand-to-capacity ratio.
The proposed heuristic is applied to an industrial case to further evaluate its performance, improving total profit by an average of 16.62%. This research contributes to the production planning community a complete mathematical definition of the problem and an effective solution approach for solving it at industrial scale.
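A weighted most profit order-selection rule of the kind named above can be sketched as a greedy ranking by profit per unit of bottleneck capacity; the field layout and the exact weighting are assumptions, not the paper's formulation:

```python
# Sketch of a "weighted most profit" order-selection rule: rank candidate
# orders by profit per unit of bottleneck capacity consumed, then accept
# greedily until capacity runs out. Illustrative only.

def select_orders(orders, capacity):
    """orders: list of (name, profit, capacity_needed); returns accepted names."""
    ranked = sorted(orders, key=lambda o: o[1] / o[2], reverse=True)
    accepted, used = [], 0.0
    for name, profit, need in ranked:
        if used + need <= capacity:
            accepted.append(name)
            used += need
    return accepted
```

In the study this step feeds into lot sizing and lot scheduling, which are iterated; the sketch shows only the selection step in isolation.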

Relevância:

40.00%

Publicador:

Resumo:

Traditional heuristic approaches to the examination timetabling problem normally rely on a stochastic method during optimization to select the next examination to be considered within the neighbourhood search process. This paper presents a technique whereby the stochastic method is augmented with information from a weighted list gathered during the initial adaptive construction phase, with the purpose of intelligently directing examination selection. In addition, a reinforcement learning technique has been adapted to identify the portions of the weighted list that offer the greatest potential for overall solution improvement. The technique is tested against the 2007 International Timetabling Competition datasets, with solutions generated within the time frame specified by the competition organizers. The results are better than those of the competition winner on seven of the twelve datasets, and competitive on the remaining five. The paper also shows experimentally how reinforcement learning improves upon our previous technique.
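The weight-guided stochastic selection with a reinforcement update can be sketched roughly as follows; the uniform initialization and the additive update are placeholders, not the paper's exact scheme:

```python
import random

# Minimal sketch of weight-guided stochastic selection: candidate exams
# carry weights (the paper derives them from the adaptive construction
# phase; here they are just given), selection is biased toward heavier
# entries, and a simple reinforcement update strengthens choices that
# improved the solution.

def pick(weights, rng):
    """Choose an index with probability proportional to its weight."""
    return rng.choices(range(len(weights)), weights=weights, k=1)[0]

def reinforce(weights, index, improved, step=0.5):
    """Reward (or mildly penalize) the chosen index based on the outcome."""
    weights[index] = max(0.1, weights[index] + (step if improved else -step))
    return weights
```

The floor of 0.1 keeps every exam selectable, preserving the stochastic character of the original heuristic.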

Relevância:

40.00%

Publicador:

Resumo:

The problem of selecting suppliers/partners is a crucial part of the decision-making process for companies that intend to compete in their area of activity. Supplier/partner selection is a time- and resource-consuming task that involves data collection and a careful analysis of the factors that can positively or negatively influence the choice. It is nevertheless a critical process that significantly affects each company's operational performance. In this work, five broad supplier selection criteria were identified through the literature review: Quality, Financial, Synergies, Cost, and Production System. Within each of these criteria, five sub-criteria were also included. A survey was then prepared, and companies were contacted to determine which factors carry most weight in their supplier choices. After interpreting the results and processing the data, a linear weighting model was adopted to reflect the importance of each factor. The model has a hierarchical structure and can be applied with the Analytic Hierarchy Process (AHP) or the Simple Multi-Attribute Rating Technique (SMART). The result of the research undertaken by the authors is a reference model that supports decision making in the supplier/partner selection process.
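The SMART-style linear weighting described above reduces to a weighted sum over criterion ratings; a minimal sketch, where the five criteria come from the abstract but the weights and 0-10 ratings are illustrative placeholders, not the authors' survey results:

```python
# SMART-style linear weighting for supplier scoring. The criteria names are
# from the abstract; weights and ratings below are invented for illustration.

CRITERIA = ["Quality", "Financial", "Synergies", "Cost", "Production System"]

def smart_score(weights, ratings):
    """Weighted sum of criterion ratings (weights are assumed to sum to 1)."""
    return sum(w * r for w, r in zip(weights, ratings))

def best_supplier(weights, suppliers):
    """suppliers: dict name -> ratings list; returns the highest-scoring name."""
    return max(suppliers, key=lambda name: smart_score(weights, suppliers[name]))
```

With AHP, the weights would instead be derived from pairwise comparison matrices; the final aggregation step is the same weighted sum.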

Relevância:

40.00%

Publicador:

Resumo:

This thesis aims to develop and study a mathematical model for solving a real-world Hub Facility Location logistics problem: determining the optimal placement of one or more depots within a European distribution network and assigning the corresponding customers. The work concerns designing the logistics network to meet customer needs for a multi-product demand. The problem originates from a real industrial case, assessing whether it is worthwhile to replace four local warehouses with one or two logistics hubs able to serve all areas. The distribution model can also be used to evaluate, in economic terms, the effect of changes in the transport service and its tariffs. The optimal location and number of warehouses are determined by a mathematical model that accounts both for the fixed costs of running the warehouses (facility, personnel and inventory costs) and for the costs of transporting and shipping products to the different geographic areas. The formulation is an Integer Linear Programming model, solved in very short times by optimization software despite the large amount of input data. Particular attention is given to integrating different transport tariffs and economies of scale, to make the theoretical model realistic. Finally, in choosing the best among the solutions obtained, factors other than cost emerged, such as transit time, a key driver of customer satisfaction and loyalty, and the suitability of a geographic area to host a logistics platform, with an eye to future developments.
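For very small instances, the hub-location decision described above can even be solved by plain enumeration; a toy sketch (real instances of the ILP are handed to an optimization solver, and all costs here are illustrative):

```python
from itertools import combinations

# Toy enumeration for hub facility location: choose the subset of candidate
# depots (at most max_hubs) minimizing fixed costs plus the cost of serving
# each client from its cheapest open hub. Illustrative only; industrial
# instances use an integer-programming solver.

def best_hubs(fixed_cost, serve_cost, max_hubs):
    """fixed_cost: {hub: cost}; serve_cost: {(hub, client): cost}."""
    clients = {c for (_, c) in serve_cost}
    best = (float("inf"), ())
    for k in range(1, max_hubs + 1):
        for hubs in combinations(sorted(fixed_cost), k):
            total = sum(fixed_cost[h] for h in hubs)
            total += sum(min(serve_cost[(h, c)] for h in hubs) for c in clients)
            best = min(best, (total, hubs))
    return best
```

Enumeration is exponential in the number of candidate hubs, which is exactly why the thesis formulates the problem as an ILP instead.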

Relevância:

30.00%

Publicador:

Resumo:

Background: Feature selection is a pattern recognition approach for choosing important variables according to some criterion in order to distinguish or explain certain phenomena (i.e., for dimensionality reduction). Many genomic and proteomic applications rely on feature selection to answer questions such as selecting signature genes that are informative about some biological state, e.g., normal tissues and several types of cancer, or inferring a prediction network among elements such as genes, proteins and external stimuli. In these applications, a recurrent problem is the lack of samples for an adequate estimate of the joint probabilities between element states. A myriad of feature selection algorithms and criterion functions have been proposed, although it is difficult to point to the best solution for each application. Results: The intent of this work is to provide an open-source, multi-platform graphical environment for bioinformatics problems that supports many feature selection algorithms, criterion functions and graphic visualization tools such as scatterplots, parallel coordinates and graphs. A feature selection approach for growing genetic networks from seed genes (targets or predictors) is also implemented in the system. Conclusion: The proposed feature selection environment allows data analysis with several algorithms, criterion functions and graphic visualization tools. Our experiments have shown the software's effectiveness in two distinct types of biological problems. Moreover, although its main concern is bioinformatics tasks, the environment can be used in other pattern recognition applications.

Relevância:

30.00%

Publicador:

Resumo:

The received view of an ad hoc hypothesis is that it accounts for only the observation(s) it was designed to account for, and so non-adhocness is generally held to be necessary or important for an introduced hypothesis or modification to a theory. Attempts by Popper and several others to convincingly explicate this view, however, prove to be unsuccessful or of doubtful value, and familiar and firmer criteria for evaluating the hypotheses or modified theories so classified are characteristically available. These points are obscured largely because the received view fails to adequately separate psychology from methodology or to recognise ambiguities in the use of 'ad hoc'.

Relevância:

30.00%

Publicador:

Resumo:

In the context of cancer diagnosis and treatment, we consider the problem of constructing an accurate prediction rule on the basis of a relatively small number of tumor tissue samples of known type containing expression data on very many (possibly thousands of) genes. Recently, results have been presented in the literature suggesting that it is possible to construct a prediction rule from only a few genes such that it has a negligible prediction error rate. However, in these results the test error or the leave-one-out cross-validated error is calculated without allowance for the selection bias. There is no allowance because the rule is either tested on tissue samples that were used in the first instance to select the genes being used in the rule, or because the cross-validation of the rule is not external to the selection process; that is, gene selection is not performed in training the rule at each stage of the cross-validation. We describe how in practice the selection bias can be assessed and corrected for, by either performing a cross-validation or applying the bootstrap external to the selection process. We recommend using 10-fold rather than leave-one-out cross-validation and, concerning the bootstrap, we suggest the so-called .632+ bootstrap error estimate, designed to handle overfitted prediction rules. Using two published data sets, we demonstrate that when correction is made for the selection bias, the cross-validated error is no longer zero for a subset of only a few genes.
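External cross-validation can be sketched structurally as follows, with `select`, `train` and `error_rate` standing in for the real gene-selection and classification procedures (all three are assumptions); the essential point is that gene selection happens inside every training fold:

```python
# Structural sketch of "external" cross-validation: the gene-selection step
# is repeated inside each training fold, instead of once on the full data,
# so the error estimate is free of selection bias.

def external_cv(data, labels, k, select, train, error_rate):
    """k-fold CV with feature selection performed inside each fold."""
    n = len(data)
    folds = [list(range(i, n, k)) for i in range(k)]
    errors = []
    for fold in folds:
        train_idx = [i for i in range(n) if i not in fold]
        X_tr = [data[i] for i in train_idx]
        y_tr = [labels[i] for i in train_idx]
        genes = select(X_tr, y_tr)          # selection sees training fold only
        rule = train(X_tr, y_tr, genes)
        X_te = [data[i] for i in fold]
        y_te = [labels[i] for i in fold]
        errors.append(error_rate(rule, X_te, y_te, genes))
    return sum(errors) / k
```

Running `select` once on the full data and then cross-validating only `train` is exactly the internal-selection mistake the paper corrects.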

Relevância:

30.00%

Publicador:

Resumo:

In this paper we refer to the gene-to-phenotype modeling challenge as the GP problem. Integrating information across levels of organization within a genotype-environment system is a major challenge in computational biology. However, resolving the GP problem is a fundamental requirement if we are to understand and predict phenotypes given knowledge of the genome and model dynamic properties of biological systems. Organisms are consequences of this integration, and it is a major property of biological systems that underlies the responses we observe. We discuss the E(NK) model as a framework for investigation of the GP problem and the prediction of system properties at different levels of organization. We apply this quantitative framework to an investigation of the processes involved in genetic improvement of plants for agriculture. In our analysis, N genes determine the genetic variation for a set of traits that are responsible for plant adaptation to E environment-types within a target population of environments. The N genes can interact in epistatic NK gene-networks through the way that they influence plant growth and development processes within a dynamic crop growth model. We use a sorghum crop growth model, available within the APSIM agricultural production systems simulation model, to integrate the gene-environment interactions that occur during growth and development and to predict genotype-to-phenotype relationships for a given E(NK) model. Directional selection is then applied to the population of genotypes, based on their predicted phenotypes, to simulate the dynamic aspects of genetic improvement by a plant-breeding program. The outcomes of the simulated breeding are evaluated across cycles of selection in terms of the changes in allele frequencies for the N genes and the genotypic and phenotypic values of the populations of genotypes.
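The effect of directional selection on allele frequencies across breeding cycles can be illustrated with the standard one-locus recursion; the fitness values below are illustrative placeholders, not parameters of the E(NK) study:

```python
# One-locus sketch of directional selection over breeding cycles: the
# frequency p of a favourable allele A rises each cycle according to
# p' = p * (p*w_AA + q*w_Aa) / w_bar, where w_bar is mean fitness.

def next_freq(p, w_aa, w_ab, w_bb):
    """One cycle of selection on genotype fitnesses w_AA, w_Aa, w_aa."""
    q = 1.0 - p
    w_bar = p * p * w_aa + 2 * p * q * w_ab + q * q * w_bb
    return (p * p * w_aa + p * q * w_ab) / w_bar

def simulate(p0, cycles, w=(1.2, 1.1, 1.0)):
    """Track the favourable allele frequency across selection cycles."""
    p = p0
    for _ in range(cycles):
        p = next_freq(p, *w)
    return p
```

In the E(NK) setting the "fitness" of a genotype is not a fixed constant but emerges from the crop growth model; the recursion still conveys why allele frequencies shift under simulated breeding.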

Relevância:

30.00%

Publicador:

Resumo:

The financial literature and the financial industry often use zero-coupon yield curves as input for testing hypotheses, pricing assets or managing risk, and assume the provided data to be accurate. We analyse how the methodology and the sample selection criteria used to estimate the zero-coupon bond yield term structure affect the resulting volatility of spot rates at different maturities. We obtain the volatility term structure using historical volatilities and EGARCH volatilities. As input for these volatilities we consider our own spot-rate estimation from GovPX bond data and three popular interest rate data sets: from the Federal Reserve Board, from the US Department of the Treasury (H15), and from Bloomberg. We find strong evidence that the resulting zero-coupon bond yield volatility estimates, as well as the correlation coefficients among spot and forward rates, depend significantly on the data set. We observe economically relevant differences when the volatilities are used to price derivatives.
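The historical-volatility leg of such an analysis can be sketched as the annualized standard deviation of daily rate changes (the EGARCH estimates require a dedicated econometrics package; the series below is made up):

```python
import math

# Historical volatility of a spot-rate series in its simplest form: the
# annualized sample standard deviation of day-to-day changes. Illustrative;
# the paper also fits EGARCH models, which this sketch does not cover.

def historical_vol(rates, periods_per_year=252):
    """Annualized standard deviation of first differences of a rate series."""
    changes = [b - a for a, b in zip(rates, rates[1:])]
    mean = sum(changes) / len(changes)
    var = sum((c - mean) ** 2 for c in changes) / (len(changes) - 1)
    return math.sqrt(var) * math.sqrt(periods_per_year)
```

The paper's finding is that this number, computed per maturity, differs materially depending on which of the four data sets supplies the spot rates.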

Relevância:

30.00%

Publicador:

Resumo:

The supplier/partner selection problem is an integral and important part of any company aiming at a competitive and profitable performance in its field of activity. Choosing the best supplier/partner usually requires a careful analysis of the factors that can positively or negatively influence that choice. The problem has long been the subject of numerous studies, which focus mainly on the criteria to consider and on the methodologies to adopt to optimize the choice of partners. Among the studies carried out, many take product cost, quality, delivery and the supplier's reputation as key criteria; many other factors are mentioned, mostly as sub-criteria. In this work, five broad criteria were identified: Quality, Financial, Synergies, Cost and Production System. Within these, the need to include sub-criteria was felt, so each key criterion has five sub-criteria. Once the criteria were identified, it was necessary to understand how they are applied and which models are used to make the best use of the information. Knowing that some models favour mathematical programming while others use linear weighting to identify the best supplier, a survey was carried out and companies were contacted to find out which factors carry the most weight in their partner-selection decisions. After interpreting the results and processing the data, a linear weighting model was adopted to express the importance of each factor. The proposed model has a hierarchical structure and can be applied with Saaty's AHP method or with Value Analysis. It allows choosing the alternative(s) that best fit a company's requirements.

Relevância:

30.00%

Publicador:

Resumo:

Chrysonilia sitophila is a common mould in the cork industry and has been identified as a cause of IgE sensitization and occupational asthma. This fungal species has a fast growth rate that may inhibit the growth of other species, leading to underestimates in the characterization of occupational fungal exposure. To ascertain occupational exposure to fungi in the cork industry, papers published since 2000 were analyzed regarding the best air sampling method for quantifying and identifying all airborne culturable fungi, beyond the fast-growing species. The impaction method does not allow collection of a representative air volume: even with media that restrict colony growth, counting colonies is very difficult in environments with a high fungal load, such as the cork industry. The impinger method, by contrast, permits collection of a representative air volume, since the collected liquid can be diluted. Besides culture methods, which allow fungal identification through macro- and micro-morphology, growth features, thermotolerance and ecological data, molecular biology can be applied with the impinger method to detect non-viable particles and potential mycotoxin-producing strains, and mycotoxins themselves can be detected with ELISA or HPLC. Selecting the best air sampling method for each setting is crucial for characterizing occupational exposure to fungi; information about the prevalent fungal species in each setting, and about the expected fungal load, is needed for a judicious selection.
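The dilution step that makes the impinger method quantitative is a short back-calculation from plate counts to airborne concentration; a worked sketch with illustrative numbers (the standard arithmetic, not figures from this study):

```python
# Back-calculating airborne fungal concentration (CFU per cubic metre) from
# an impinger sample: plate count -> CFU/ml of the collection liquid
# (via the dilution factor) -> total CFU captured -> CFU per m3 of air.
# All numbers used in the example below are illustrative.

def cfu_per_m3(colonies, plated_ml, dilution, liquid_ml, air_litres):
    """colonies counted on a plate of `plated_ml` from a `dilution`-fold dilution."""
    cfu_per_ml = colonies * dilution / plated_ml   # concentration in the liquid
    total_cfu = cfu_per_ml * liquid_ml             # CFU captured in the impinger
    return total_cfu / (air_litres / 1000.0)       # per cubic metre of sampled air
```

It is this dilution factor that keeps plates countable even under the high fungal loads found in cork processing.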

Relevância:

30.00%

Publicador:

Resumo:

Research on the problem of feature selection for clustering continues to develop. This is a challenging task, mainly due to the absence of class labels to guide the search for relevant features. Categorical feature selection for clustering has rarely been addressed in the literature, with most of the proposed approaches having focused on numerical data. In this work, we propose an approach to simultaneously cluster categorical data and select a subset of relevant features. Our approach is based on a modification of a finite mixture model (of multinomial distributions), where a set of latent variables indicates the relevance of each feature. To estimate the model parameters, we implement a variant of the expectation-maximization algorithm that simultaneously selects the subset of relevant features, using a minimum message length criterion. The proposed approach compares favourably with two baseline methods: a filter based on an entropy measure and a wrapper based on mutual information. The results obtained on synthetic data illustrate the ability of the proposed expectation-maximization method to recover ground truth. An application to real data from official statistics shows its usefulness.
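The entropy-based filter used as a baseline can be sketched simply: a categorical feature whose value distribution has low entropy is nearly constant and thus a weak candidate for clustering (the baseline's exact formulation may differ in detail):

```python
import math

# Entropy-based filter for categorical features: rank features by the
# Shannon entropy of their value distribution; near-constant features
# (low entropy) carry little information for clustering. Illustrative
# sketch of the baseline idea, not the paper's mixture-model approach.

def entropy(values):
    """Shannon entropy (bits) of a list of categorical values."""
    counts = {}
    for v in values:
        counts[v] = counts.get(v, 0) + 1
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def rank_features(columns):
    """columns: dict name -> list of categorical values; high entropy first."""
    return sorted(columns, key=lambda f: entropy(columns[f]), reverse=True)
```

The paper's own method goes further, letting latent saliency variables inside the mixture model decide relevance jointly with the clustering.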

Relevância:

30.00%

Publicador:

Resumo:

Resource constraints are becoming a problem as many wireless mobile devices grow more general-purpose. Our work addresses this growing demand on resources and performance by proposing the dynamic selection of neighbour nodes for cooperative service execution. This selection is influenced by the user's quality-of-service requirements expressed in the request, tailoring the provided service to the user's specific needs. In this paper we improve our proposal's formulation algorithm with the ability to trade off time for the quality of the solution. At any given time a complete solution for service execution exists, and the quality of that solution is expected to improve over time.

Relevância:

30.00%

Publicador:

Resumo:

Feature selection is a central problem in machine learning and pattern recognition. On large datasets (in terms of dimension and/or number of instances), search-based or wrapper techniques can be computationally prohibitive. Moreover, many filter methods based on relevance/redundancy assessment also take a prohibitively long time on high-dimensional datasets. In this paper, we propose efficient unsupervised and supervised feature selection/ranking filters for high-dimensional datasets. These methods use low-complexity relevance and redundancy criteria, applicable to supervised, semi-supervised, and unsupervised learning, and can act as pre-processors for computationally intensive methods, focusing their attention on smaller subsets of promising features. The experimental results, with up to 10^5 features, show the time efficiency of our methods, with lower generalization error than state-of-the-art techniques, while being dramatically simpler and faster.
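A low-complexity relevance-redundancy ranking in the spirit described can be sketched as a greedy, mRMR-like scheme; this is an illustrative stand-in, not the paper's exact criteria:

```python
# Greedy relevance-redundancy feature ranking: pick the feature with the
# best relevance score, then repeatedly pick the feature whose relevance
# minus its average similarity to already-picked features is highest.
# Illustrative mRMR-like stand-in for the low-complexity criteria above.

def greedy_rank(relevance, similarity, k):
    """relevance: {f: score}; similarity: {(f, g): score}, keys sorted; pick k."""
    picked = []
    candidates = set(relevance)
    while candidates and len(picked) < k:
        def score(f):
            if not picked:
                return relevance[f]
            red = sum(similarity[tuple(sorted((f, g)))] for g in picked) / len(picked)
            return relevance[f] - red
        best = max(sorted(candidates), key=score)  # sorted() for a stable tie-break
        picked.append(best)
        candidates.remove(best)
    return picked
```

With precomputed relevance and pairwise similarity scores, each pick costs O(|features| x k), which is what makes such filters usable as pre-processors on very high-dimensional data.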