969 results for "Supplier selection problem"
Abstract:
This PhD dissertation is framed in the emergent fields of Reverse Logistics and Closed-Loop Supply Chain (CLSC) management. This subarea of supply chain management has gained researchers' and practitioners' attention over the last 15 years to become a fully recognized subdiscipline of the Operations Management field. More specifically, among all the activities included within the CLSC area, this dissertation focuses on direct reuse. The main contribution of this dissertation to current knowledge is twofold. First, a framework for the so-called reuse CLSC is developed. This conceptual model is grounded in a set of six case studies conducted by the author in real industrial settings. The model has also been contrasted with the existing literature as well as with academic and professional experts on the topic. The framework encompasses four building blocks. In the first block, a typology for reusable articles is put forward, distinguishing between Returnable Transport Items (RTI), Reusable Packaging Materials (RPM), and Reusable Products (RP). In the second block, the common characteristics that make reuse CLSCs difficult to manage from a logistical standpoint are identified, namely: fleet shrinkage, significant investment, and limited visibility. In the third block, the main problems arising in the management of reuse CLSCs are analyzed: (1) defining fleet size, (2) controlling cycle time and promoting article rotation, (3) controlling return rate and preventing shrinkage, (4) defining purchase policies for new articles, (5) planning and controlling reconditioning activities, and (6) balancing inventory between depots. Finally, in the fourth block, solutions to those issues are developed. Firstly, problems (2) and (3) are addressed through a comparative analysis of alternative strategies for controlling cycle time and return rate. Secondly, a methodology for calculating the required fleet size is elaborated (problem (1)).
This methodology is valid for different configurations of the physical flows in the reuse CLSC. Likewise, some directions are pointed out for further development of a similar method for defining purchase policies for new articles (problem (4)). The second main contribution of this dissertation is embedded in the solutions part (block 4) of the conceptual framework and comprises a two-level decision problem integrating two mixed integer linear programming (MILP) models, formulated and solved to optimality using AIMMS as the modeling language, CPLEX as the solver, and an Excel spreadsheet for data input and output presentation. The results obtained are analyzed in order to measure, in a client-supplier system, the economic impact of two alternative control strategies (recovery policies) in the context of reuse. In addition, the models support decision-making regarding the selection of the appropriate recovery policy given the characteristics of the demand pattern and the structure of the relevant costs in the system. The triangulation of methods used in this thesis has made it possible to address the same research topic with different approaches, thereby strengthening the robustness of the results obtained.
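The economic comparison of two recovery policies described above can be illustrated with a toy cost model. This is a minimal sketch under invented assumptions (the per-period demand, return rates, and unit costs below are hypothetical and not taken from the dissertation's MILP formulations): each period, a fraction of issued articles is recovered at a recovery cost, and the shortfall is bought new at a purchase cost.

```python
# Hypothetical sketch: comparing two recovery (return-control) policies for a
# fleet of reusable articles. All parameters are illustrative assumptions.

def policy_cost(demand, return_rate, unit_purchase_cost, unit_recovery_cost):
    """Total cost of serving a demand sequence when a fraction `return_rate`
    of issued articles comes back each period; the shortfall is bought new."""
    total = 0.0
    for d in demand:
        recovered = return_rate * d   # articles recovered this period
        purchased = d - recovered     # shortfall covered by new purchases
        total += recovered * unit_recovery_cost + purchased * unit_purchase_cost
    return total

demand = [100, 120, 80, 150, 110]     # articles issued per period (made up)

# Policy A: aggressive recovery (high return rate, higher cost per recovery)
cost_a = policy_cost(demand, return_rate=0.9,
                     unit_purchase_cost=5.0, unit_recovery_cost=1.5)
# Policy B: passive recovery (lower return rate, cheaper per recovered item)
cost_b = policy_cost(demand, return_rate=0.6,
                     unit_purchase_cost=5.0, unit_recovery_cost=1.0)

best = "A" if cost_a < cost_b else "B"
```

With these particular numbers the aggressive policy wins, but the point of the dissertation's models is precisely that the preferred policy flips depending on the demand pattern and the cost structure.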
Abstract:
Rhizobium leguminosarum bv. viciae is able to establish nitrogen-fixing symbioses with legumes of the genera Pisum, Lens, Lathyrus and Vicia. Classic studies using trap plants (Laguerre et al., Young et al.) provided evidence that different plant hosts select different rhizobial genotypes among those available in a given soil. However, these studies were necessarily limited by the paucity of relevant biodiversity markers. We have now reappraised this problem with the help of genomic tools. A well-characterized agricultural soil (INRA Bretennieres) was used as the source of rhizobia. Plants of Pisum sativum, Lens culinaris, Vicia sativa and V. faba were used as traps. Isolates from 100 nodules were pooled, and DNA from each pool was sequenced (BGI-Hong Kong; Illumina HiSeq 2000, 500 bp PE libraries, 100 bp reads, 12 Mreads). Reads were quality filtered (FastQC, Trimmomatic), mapped against reference R. leguminosarum genomes (Bowtie2, Samtools), and visualized (IGV). A substantial fraction of the filtered reads was not recruited by the reference genomes, suggesting that the plant isolates contain genes that are not present in those references. For this study, we focused on three conserved genomic regions (16S-23S rDNA, atpD and nodDABC), and a Single Nucleotide Polymorphism (SNP) analysis was carried out with the meta-/multigenomes from each plant. Although the level of polymorphism varied (lowest in the rRNA region), polymorphic sites could be identified that distinguish the specific soil population from the reference genomes. More importantly, a plant-specific SNP distribution was observed. This could be confirmed with many other regions extracted from the reference genomes (data not shown). Our results confirm at the genomic level previous observations regarding plant selection of specific genotypes. We expect that further, ongoing comparative studies on differential meta-/multigenomic sequences will identify specific gene components of the plant-selected genotypes.
Abstract:
Praying mantids use binocular cues to judge whether their prey is in striking distance. When there are several moving targets within their binocular visual field, mantids need to solve the correspondence problem. They must select between the possible pairings of retinal images in the two eyes so that they can strike at a single real target. In this study, mantids were presented with two targets in various configurations, and the resulting fixating saccades that precede the strike were analyzed. The distributions of saccades show that mantids consistently prefer one out of several possible matches. Selection is in part guided by the position and the spatiotemporal features of the target image in each eye. Selection also depends upon the binocular disparity of the images, suggesting that insects can perform local binocular computations. The pairing rules ensure that mantids tend to aim at real targets and not at “ghost” targets arising from false matches.
Abstract:
A major problem facing the effective treatment of patients with cancer is how to get the specific antitumor agent into every tumor cell. In this report we describe the use of a strategy that, by using retroviral vectors encoding a truncated human CD5 cDNA, allows the selection of only the infected cells, and we show the ability to obtain, before bone marrow transplantation, a population of 5-fluorouracil-treated murine bone marrow cells that are 100% marked. This marked population of bone marrow cells is able to reconstitute the hematopoietic system in lethally irradiated mice, indicating that the surface marker lacks deleterious effects on the functionality of bone marrow cells. No gross abnormalities in hematopoiesis were detected in mice repopulated with CD5-expressing cells. Nevertheless, a significant proportion of the hematopoietic cells no longer expresses the surface marker CD5 in the 9-month-old recipient mice. This transcriptional inactivity of the proviral long terminal repeat (LTR) was accompanied by de novo methylation of the proviral sequences. Our results show that the use of CD5 as a retrovirally encoded marker enables the rapid, efficient, and nontoxic selection in vitro of infected primary cells, which can entirely reconstitute the hematopoietic system in mice. These results should now greatly enhance the power of studies aimed at addressing questions such as generation of cancer-negative hematopoiesis.
Abstract:
The gene transfer efficiency of human hematopoietic stem cells is still inadequate for efficient gene therapy of most disorders. To overcome this problem, a selectable retroviral vector system has been developed for gene therapy of Gaucher disease. We constructed a bicistronic retroviral vector containing the human glucocerebrosidase (GC) cDNA and the human small cell surface antigen CD24 (243 bp). Expression of both cDNAs was controlled by the long terminal repeat enhancer/promoter of the Moloney murine leukemia virus. The CD24 selectable marker was placed downstream of the GC cDNA, and its translation was enhanced by inclusion of the long 5' untranslated region of the encephalomyocarditis virus internal ribosome entry site. Virus-producing GP+envAM12 vector producer cells were created by multiple supernatant transductions. The vector LGEC has a high titer and can drive expression of GC and the cell surface antigen CD24 simultaneously in transduced NIH 3T3 cells and Gaucher skin fibroblasts. These transduced cells have been successfully separated from untransduced cells by fluorescence-activated cell sorting, based on cell surface expression of CD24. Transduced and sorted NIH 3T3 cells showed higher GC enzyme activity than the unsorted population, demonstrating coordinated expression of both genes. Fibroblasts from Gaucher patients were transduced and sorted for CD24 expression, and GC enzyme activity was measured. The transduced, sorted Gaucher fibroblasts had a marked increase in enzyme activity (149%) compared with virgin Gaucher fibroblasts (17% of normal GC enzyme activity). Efficient transduction of CD34+ hematopoietic progenitors (20-40%) was accomplished, and fluorescence-activated cell-sorted CD24(+)-expressing progenitors generated colonies, all of which (100%) were vector positive.
The sorted, CD24-expressing progenitors generated erythroid burst-forming units, colony-forming units (CFU)-granulocyte, CFU-macrophage, CFU-granulocyte/macrophage, and CFU-mix hematopoietic colonies, demonstrating their ability to differentiate into these myeloid lineages in vitro. The transduced, sorted progenitors raised the GC enzyme levels in their progeny cells manyfold compared with untransduced CD34+ progenitors. Collectively, this demonstrates the development of high titer, selectable bicistronic vectors that allow isolation of transduced hematopoietic progenitors and cells that have been metabolically corrected.
Abstract:
Feature selection is an important and active issue in clustering and classification problems. By choosing an adequate feature subset, the dimensionality of a dataset can be reduced, which decreases the computational complexity of classification and improves classifier performance by avoiding redundant or irrelevant features. Although feature selection can be formally defined as an optimisation problem with a single objective (the classification accuracy obtained using the selected feature subset), several multi-objective approaches to this problem have been proposed in recent years. These either select features that improve not only the classification accuracy but also the generalisation capability, in the case of supervised classifiers, or counterbalance the bias toward lower or higher numbers of features exhibited by some of the methods used to validate the clustering/classification, in the case of unsupervised classifiers. The main contribution of this paper is a multi-objective approach to feature selection and its application to an unsupervised clustering procedure based on Growing Hierarchical Self-Organising Maps (GHSOMs), which includes a new method for unit labelling and efficient determination of the winning unit. In the network anomaly detection problem considered here, this multi-objective approach makes it possible not only to differentiate between normal and anomalous traffic but also to distinguish among different anomalies. The efficiency of our proposals has been evaluated using the well-known DARPA/NSL-KDD datasets, which contain extracted features and labelled attacks from around 2 million connections. The selected feature sets computed in our experiments provide detection rates of up to 99.8% for normal traffic and up to 99.6% for anomalous traffic, as well as accuracy values of up to 99.12%.
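The multi-objective view of feature selection can be sketched with a Pareto-dominance filter: candidate feature subsets are scored on two competing objectives (accuracy to maximise, subset size to minimise), and only non-dominated candidates are kept. The candidate scores below are made-up values for illustration, not results from the paper's GHSOM experiments.

```python
# Illustrative Pareto filter for two-objective feature selection.
# Each candidate is a pair (classification accuracy, number of features).

def dominates(a, b):
    """a dominates b if a is at least as good on both objectives
    (higher accuracy, fewer features) and strictly better on one."""
    acc_a, n_a = a
    acc_b, n_b = b
    return (acc_a >= acc_b and n_a <= n_b) and (acc_a > acc_b or n_a < n_b)

def pareto_front(candidates):
    # keep every candidate not dominated by any other
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

candidates = [(0.95, 12), (0.96, 20), (0.90, 5), (0.93, 12), (0.96, 25)]
front = pareto_front(candidates)
# (0.93, 12) is dominated by (0.95, 12); (0.96, 25) by (0.96, 20)
```

A multi-objective search (evolutionary or otherwise) then explores the space of feature subsets and returns this front rather than a single "best" subset.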
Abstract:
The emergence of antibiotic resistance among pathogenic and commensal bacteria has become a serious problem worldwide. The use and overuse of antibiotics in a number of settings are contributing to the development of antibiotic-resistant microorganisms. The class 1 and 2 integrase genes (intI1 and intI2, respectively) were identified in mixed bacterial cultures enriched from bovine feces by growth in buffered peptone water (BPW) followed by integrase-specific PCR. Integrase-positive bacterial colonies from the enrichment cultures were then isolated by using hydrophobic grid membrane filters and integrase-specific gene probes. Bacterial clones isolated by this technique were then confirmed to carry integrons by further testing by PCR and DNA sequencing. Integron-associated antibiotic resistance genes were detected in bacteria such as Escherichia coli, Aeromonas spp., Proteus spp., Morganella morganii, Shewanella spp., and urea-positive Providencia stuartii isolates from bovine fecal samples without the use of selective enrichment media containing antibiotics. Streptomycin and trimethoprim resistance were commonly associated with integrons. The advantages conferred by this methodology are that a wide variety of integron-containing bacteria may be simultaneously cultured in BPW enrichments and culture biases due to antibiotic selection can be avoided. Rapid and efficient identification, isolation, and characterization of antibiotic resistance-associated integrons are possible by this protocol. These methods will facilitate greater understanding of the factors that contribute to the presence and transfer of integron-associated antibiotic resistance genes in bacterial isolates from red meat production animals.
Abstract:
A formalism for describing the dynamics of Genetic Algorithms (GAs) using methods from statistical mechanics is applied to the problem of generalization in a perceptron with binary weights. The dynamics are solved for the case where a new batch of training patterns is presented to each population member each generation, which considerably simplifies the calculation. The theory is shown to agree closely with simulations of a real GA averaged over many runs, accurately predicting the mean best solution found. For weak selection and large problem size the difference equations describing the dynamics can be expressed analytically, and we find that the effects of noise due to the finite size of each training batch can be removed by increasing the population size appropriately. If this population resizing is used, one can deduce the most computationally efficient size of training batch each generation. For independent patterns this choice also gives the minimum total number of training patterns used. Although using independent patterns is a very inefficient use of training patterns in general, this work may also prove useful for determining the optimum batch size in the case where patterns are recycled.
Abstract:
A formalism recently introduced by Prugel-Bennett and Shapiro uses the methods of statistical mechanics to model the dynamics of genetic algorithms. To be of more general interest, the technique must be applicable beyond the test cases they consider. In this paper, it is applied to the subset sum problem, which is a combinatorial optimization problem with a strongly non-linear energy (fitness) function and many local minima under single-spin-flip dynamics. It is a problem which exhibits interesting dynamics, reminiscent of stabilizing selection in population biology. The dynamics are solved under certain simplifying assumptions and are reduced to a set of difference equations for a small number of relevant quantities. The quantities used are the population's cumulants, which describe its shape, and the mean correlation within the population, which measures the microscopic similarity of population members. Including the mean correlation allows a better description of the population than the cumulants alone would provide and represents a new and important extension of the technique. The formalism includes finite population effects and describes problems of realistic size. The theory is shown to agree closely with simulations of a real genetic algorithm, and the mean best energy is accurately predicted.
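The setting analysed above (a GA on subset sum, with an energy function to be maximised) can be sketched concretely. This is a minimal illustration; the instance, population size, operators, and the exact form of the energy are illustrative choices, not the paper's.

```python
# Minimal genetic algorithm on a toy subset sum instance: find a subset of
# `weights` whose sum is as close to `target` as possible. A bit string
# encodes membership; energy = -|subset sum - target|, so 0 is optimal.
import random

random.seed(0)
weights = [8, 3, 5, 7, 11, 2, 9, 4]
target = 20

def energy(bits):
    return -abs(sum(w for w, b in zip(weights, bits) if b) - target)

def mutate(bits, rate=0.1):
    # flip each bit independently with probability `rate`
    return [b ^ (random.random() < rate) for b in bits]

pop = [[random.randint(0, 1) for _ in weights] for _ in range(30)]
for _ in range(100):
    pop.sort(key=energy, reverse=True)
    parents = pop[:10]                                  # truncation selection
    pop = [mutate(random.choice(parents)) for _ in range(30)]

best = max(pop, key=energy)
```

The statistical-mechanics formalism predicts how population statistics (cumulants of the energy distribution, mean correlation between members) of exactly this kind of run evolve from generation to generation.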
Abstract:
Purpose – Developing countries are heavily dependent on the resources and commitment of foreign providers to ensure successful adoption of advanced manufacturing technology (AMT). The purpose of this paper is to describe the important role of buyer-supplier relationships (BSRs) in the process of technology selection, acquisition and implementation. Design/methodology/approach – A survey of 147 Malaysian manufacturing firms is the main instrument used in the research investigations and data analysis is carried out by the structural equation modelling (SEM) technique. In particular, the authors examine the impact on performance of different patterns of relationship between technology buyers and suppliers. Findings – Although the majority of the firms reported improvements in their performance since the acquisition of AMT, closer investigation reveals that those demonstrating a closer relationship with their suppliers are more likely to achieve higher levels of technology and implementation performance (IP) than those that do not. Research limitations/implications – The paper only assesses the strength of BSR from the buyers' perspective and they may have limited experience of acquisition, whereas suppliers may have more experience of selling AMT. Also, the research is undertaken in Malaysia and the findings may be different in other countries, especially where the technology being acquired is not imported but sourced locally. Practical implications – The findings relating to BSR, technology acquisition and IP have important implications both for customers and supplier firms as well as for industrial policy makers in developing countries. Originality/value – The result of the research provides useful insights that are especially pertinent to an improved understanding of BSRs in the procurement of capital equipment, about which the current research literature is limited.
Abstract:
The research was carried out in the Aviation Division of Dunlop Limited and was initiated as a search for more diverse uses for carbon/carbon composites. An assumed communication model of adoption was refined by introducing the concept of a two-way search after making cross-industry comparisons of supplier and consumer behaviour. This research has examined methods of searching for new uses for advanced technology materials. Two broad approaches were adopted. First, a case history approach investigated materials that had been in a similar position to carbon/carbon to see how other material-producing firms had tackled the problem. Second, a questionnaire survey among industrialists examined: the role and identity of material decision makers in different sized firms; the effectiveness of various information sources and channels; and the material adoption habits of different industries. The effectiveness of selected information channels was further studied by monitoring the response to publicity given to carbon/carbon. A flow chart has been developed from the results of this research which should help any material-producing firm that is contemplating the introduction of a new material to the world market. Further benefit to our understanding of the innovation and adoption of new materials would accrue from work in the following areas: "micro"-type case histories; understanding more fully the role of product champions or promoters; investigating the phase difference between incremental and radical type innovations for materials; examining the relationship between the adoption rate of new materials and the advance of technology; studying the development of cost-per-unit-function methods for material selection; and reviewing the benefits that economy-of-scale studies can have on material developments. These are all suggested areas for further work.
Abstract:
When composing stock portfolios, managers frequently choose among hundreds of stocks. The stocks' risk properties are analyzed with statistical tools, and managers try to combine these to meet the investors' risk profiles. A recently developed tool for performing such optimization is called full-scale optimization (FSO). This methodology is very flexible with respect to investor preferences, but because of computational limitations it has until now been infeasible to use when many stocks are considered. We apply the artificial intelligence technique of differential evolution to solve FSO-type stock selection problems of 97 assets. Differential evolution finds the optimal solutions by self-learning from randomly drawn candidate solutions. We show that this search technique makes the large-scale problem computationally feasible and that the solutions retrieved are stable. The study also gives further merit to the FSO technique, as it shows that the solutions suit investor risk profiles better than portfolios retrieved from traditional methods.
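Differential evolution itself is a simple population-based search: each candidate is mutated by adding a scaled difference of two other population members, crossed over with its parent, and kept only if it scores better. The sketch below applies it to a deliberately tiny portfolio-style objective (four made-up asset returns, matching a target return); the paper optimises full-scale investor utility functions over 97 assets, which this does not reproduce.

```python
# Minimal differential evolution (DE/rand/1/bin-style) sketch on a toy
# portfolio objective. Assets, returns, and the objective are assumptions.
import random

random.seed(1)
returns = [0.04, 0.07, 0.02, 0.10]        # expected returns of 4 toy assets
target = 0.06

def objective(w):
    # normalise raw weights to sum to 1, then penalise deviation from target
    s = sum(w) or 1.0
    port = sum(wi / s * r for wi, r in zip(w, returns))
    return abs(port - target)

NP, D, F, CR = 20, len(returns), 0.8, 0.9  # population size, dims, DE params
pop = [[random.random() for _ in range(D)] for _ in range(NP)]

for _ in range(200):
    for i in range(NP):
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        trial = [max(0.0, a[d] + F * (b[d] - c[d])) if random.random() < CR
                 else pop[i][d] for d in range(D)]
        if objective(trial) <= objective(pop[i]):      # greedy selection
            pop[i] = trial

best = min(pop, key=objective)
```

Because selection is greedy per individual and only needs objective evaluations, DE copes with the non-smooth, non-convex utility functions that make FSO hard for gradient-based optimisers.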
Abstract:
We address the important bioinformatics problem of predicting protein function from a protein's primary sequence. We consider the functional classification of G-Protein-Coupled Receptors (GPCRs), whose functions are specified in a class hierarchy. We tackle this task using a novel top-down hierarchical classification system where, for each node in the class hierarchy, the predictor attributes to be used in that node and the classifier to be applied to the selected attributes are chosen in a data-driven manner. Compared with a previous hierarchical classification system selecting classifiers only, our new system significantly reduced processing time without significantly sacrificing predictive accuracy.
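The top-down idea above can be sketched in a few lines: each internal node of the class hierarchy carries its own attribute subset and its own classifier, and an example is routed from the root down to a leaf. The two-level hierarchy, attribute choices, and threshold "classifiers" below are invented stand-ins for illustration; the paper selects both per node in a data-driven manner.

```python
# Sketch of top-down hierarchical classification with per-node attribute
# selection. Each node configuration names the attribute indices it uses and
# a toy classifier that maps those selected values to a child node.
hierarchy = {
    "root":   {"attrs": [0],
               "classify": lambda v: "classA" if v[0] > 0.5 else "classB"},
    "classA": {"attrs": [1],
               "classify": lambda v: "classA.1" if v[0] > 0.0 else "classA.2"},
}
leaves = {"classB", "classA.1", "classA.2"}

def predict(example):
    node = "root"
    while node not in leaves:
        cfg = hierarchy[node]
        selected = [example[i] for i in cfg["attrs"]]  # node-specific attributes
        node = cfg["classify"](selected)
    return node

label = predict([0.9, -1.2, 3.3])   # routed root -> classA -> classA.2
```

Choosing attributes and classifiers independently at each node is what lets each level of the GPCR hierarchy use whatever evidence discriminates best among its own children.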
Abstract:
Artifact selection decisions typically involve the selection of one from a number of possible/candidate options (decision alternatives). In order to support such decisions, it is important to identify and recognize relevant key issues of problem solving and decision making (Albers, 1996; Harris, 1998a, 1998b; Jacobs & Holten, 1995; Loch & Conger, 1996; Rumble, 1991; Sauter, 1999; Simon, 1986). Sauter classifies four problem solving/decision making styles: (1) left-brain style, (2) right-brain style, (3) accommodating, and (4) integrated (Sauter, 1999). The left-brain style employs analytical and quantitative techniques and relies on rational and logical reasoning. In an effort to achieve predictability and minimize uncertainty, problems are explicitly defined, solution methods are determined, orderly information searches are conducted, and analysis is increasingly refined. Left-brain style decision making works best when it is possible to predict/control, measure, and quantify all relevant variables, and when information is complete. In direct contrast, right-brain style decision making is based on intuitive techniques—it places more emphasis on feelings than facts. Accommodating decision makers use their non-dominant style when they realize that it will work best in a given situation. Lastly, integrated style decision makers are able to combine the left- and right-brain styles—they use analytical processes to filter information and intuition to contend with uncertainty and complexity.