969 results for Supplier selection problem

Relevance: 30.00%

Abstract:

In this paper, we consider the problem of topology design for optical networks. We investigate the problem of selecting switching sites so as to minimize the total cost of the optical network. This cost can be expressed as the sum of three main factors: the site cost, the link cost, and the switch cost. To the best of our knowledge, the problem has not previously been studied in the general form investigated here. We present a mixed-integer quadratic programming (MIQP) formulation of the problem to find the optimal value of the total network cost, and an efficient heuristic that approximates the solution in polynomial time. The experimental results show good performance of the heuristic: in experiments with 10-node networks, the total network cost computed by the heuristic lies within 2% to 21% of its optimal value, and in 51% of those experiments it lies within 8% of the optimal value. We also discuss the insight gained from our experiments.
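
The abstract's three-factor cost decomposition lends itself to a compact sketch. The Python fragment below is a minimal illustration, not the authors' MIQP model or heuristic: the input structures (site_cost, link_cost, switch_port_cost) and the facility-location-style greedy rule are assumptions made for the example.

```python
def total_cost(open_sites, nodes, site_cost, link_cost, switch_port_cost):
    """Sum of the three cost factors named in the abstract: site cost for each
    opened switching site, link cost from each node to its nearest open site,
    and switch cost (one port per attached node)."""
    cost = sum(site_cost[s] for s in open_sites)
    for v in nodes:
        s = min(open_sites, key=lambda t: link_cost[v][t])
        cost += link_cost[v][s] + switch_port_cost[s]
    return cost

def greedy_site_selection(candidates, nodes, site_cost, link_cost, switch_port_cost):
    """Facility-location-style greedy: repeatedly open the candidate site that
    lowers the total cost the most; stop when no single addition helps."""
    open_sites, best = set(), float("inf")
    improved = True
    while improved:
        improved, choice = False, None
        for s in set(candidates) - open_sites:
            c = total_cost(open_sites | {s}, nodes,
                           site_cost, link_cost, switch_port_cost)
            if c < best:
                best, choice, improved = c, s, True
        if improved:
            open_sites.add(choice)
    return open_sites, best
```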

Relevance: 30.00%

Abstract:

The starting point of this article is the question "How can we retrieve fingerprints of rhythm in written texts?" We address this problem in the case of Brazilian and European Portuguese. These two dialects of Modern Portuguese share the same lexicon, and most of the sentences they produce are superficially identical. Yet they are conjectured, on linguistic grounds, to implement different rhythms. We show that this linguistic question can be formulated as a problem of model selection in the class of variable-length Markov chains. To carry out this approach, we compare texts from European and Brazilian Portuguese. These texts are first encoded according to some basic rhythmic features of the sentences, which can be retrieved automatically. This is an entirely new approach from the linguistic point of view. Our statistical contribution is the introduction of the smallest maximizer criterion, a constant-free procedure for model selection. As a by-product, this provides a solution to the problem of optimally choosing the penalty constant when using the BIC to select a variable-length Markov chain. Besides proving the consistency of the smallest maximizer criterion as the sample size diverges, we also present a simulation study comparing our approach with both standard BIC selection and the Peres-Shields order estimation. Applied to the linguistic sample constituted for our case study, the smallest maximizer criterion assigns different context-tree models to the two dialects of Portuguese. The features of the selected models are compatible with current conjectures discussed in the linguistic literature.
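
The smallest maximizer criterion and variable-length context trees are beyond a short snippet, but the penalty-constant issue the abstract mentions can be illustrated with fixed-order Markov chains. The sketch below, with an explicit penalty constant c, is a deliberately simplified stand-in for the paper's procedure, not the procedure itself.

```python
from collections import Counter
import math

def markov_log_likelihood(seq, k):
    """Maximum-likelihood log-likelihood of seq under an order-k Markov model."""
    ctx_counts, trans_counts = Counter(), Counter()
    for i in range(k, len(seq)):
        ctx = tuple(seq[i - k:i])
        ctx_counts[ctx] += 1
        trans_counts[(ctx, seq[i])] += 1
    return sum(n * math.log(n / ctx_counts[ctx])
               for (ctx, _), n in trans_counts.items())

def select_order_bic(seq, alphabet_size, max_k=5, c=1.0):
    """Pick the order k minimizing -2*LL + c * (free parameters) * log n.
    The choice of the constant c is exactly what a constant-free criterion
    such as the smallest maximizer criterion is designed to avoid."""
    n = len(seq)
    best = None
    for k in range(max_k + 1):
        ll = markov_log_likelihood(seq, k)
        params = (alphabet_size ** k) * (alphabet_size - 1)
        bic = -2.0 * ll + c * params * math.log(n)
        if best is None or bic < best[0]:
            best = (bic, k)
    return best[1]
```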

Relevance: 30.00%

Abstract:

Most biological systems are formed by component parts that are to some degree interrelated. Groups of parts that are more associated among themselves and are relatively autonomous from others are called modules. One consequence of modularity is that biological systems usually present an unequal distribution of genetic variation among traits. Estimating the covariance matrix that describes these systems is a difficult problem due to a number of factors, such as small sample sizes and measurement error. We show that this problem is exacerbated whenever matrix inversion is required, as in directional selection reconstruction analysis. We explore the consequences of varying degrees of modularity and signal-to-noise ratio on selection reconstruction, and we then present and test the efficiency of available methods for controlling noise in matrix estimates. In our simulations, controlling matrices for noise vastly improves the reconstruction of selection gradients. We also perform a selection-gradient reconstruction analysis on a New World monkey skull database to illustrate the impact of noise on such analyses. Noise-controlled estimates yield far more plausible interpretations, in full agreement with previous results.
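
As a concrete illustration of why noise control matters before inversion: in reconstruction analyses the selection gradient is recovered as β = G⁻¹Δz, so noise in the estimated covariance matrix G is amplified by the inverse. The snippet below uses simple linear shrinkage toward the diagonal as a generic stand-in for the noise-control methods the paper actually evaluates; the function name and the shrinkage intensity are assumptions for the example.

```python
import numpy as np

def reconstruct_selection_gradient(G, dz, shrink=0.2):
    """beta = G^{-1} dz, with G first shrunk toward its diagonal so that small,
    noise-driven eigenvalues do not blow up the inverse."""
    target = np.diag(np.diag(G))
    G_shrunk = (1.0 - shrink) * G + shrink * target
    return np.linalg.solve(G_shrunk, dz)
```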

Relevance: 30.00%

Abstract:

Determination of the utility harmonic impedance from measurements is a significant task for utility power-quality improvement and management. Compared with the well-established, accurate invasive methods, noninvasive methods are more desirable because they work with the natural variations of the loads connected to the point of common coupling (PCC), so that no intentional disturbance is needed. However, the accuracy of these methods has to be improved. In this context, this paper first points out that the critical problem of the noninvasive methods is how to select the measurements that can be used with confidence for utility harmonic impedance calculation. The paper then presents a new measurement technique based on complex-data least-squares regression, combined with two data-selection techniques. Simulation and field-test results show that the proposed noninvasive method is practical and robust, so that it can be used with confidence to determine utility harmonic impedances.
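
A bare-bones version of the regression step can be sketched as follows. The Norton-equivalent sign convention, the variable names, and the crude "keep the snapshots with the largest current variation" filter are all assumptions standing in for the paper's two data-selection techniques.

```python
import numpy as np

def estimate_utility_impedance(V, I, keep=0.5):
    """Least-squares fit of a utility-side model V = V0 - Z_u * I at the PCC,
    from complex harmonic voltage/current snapshots, using only the fraction
    of snapshots with the largest current variation."""
    dI = np.abs(I - I.mean())
    idx = np.argsort(-dI)[: max(2, int(keep * len(I)))]
    A = np.column_stack([-I[idx], np.ones(len(idx), dtype=complex)])
    (z_u, v0), *_ = np.linalg.lstsq(A, V[idx], rcond=None)
    return z_u, v0
```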

Relevance: 30.00%

Abstract:

Although the automatic identification of nontechnical losses has been studied extensively, the problem of selecting the most representative features, in order to boost identification accuracy and to characterize possible illegal consumers, has not attracted much attention in this context. In this paper, we focus on this problem by reviewing three evolutionary techniques for feature selection, one of which we introduce in this context for the first time. The results demonstrate that selecting the most representative features can considerably improve the classification accuracy of possible frauds in datasets composed of industrial and commercial profiles.
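
As an illustration of the wrapper idea behind evolutionary feature selection, here is a toy genetic algorithm in Python; the encoding, operators, and the Naive Bayes fitness function are generic choices, not the specific techniques reviewed in the paper.

```python
import random
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

def ga_feature_selection(X, y, pop_size=20, generations=30, p_mut=0.05, seed=0):
    """Toy GA: each individual is a boolean feature mask; fitness is the
    3-fold cross-validated accuracy of a Naive Bayes classifier."""
    rng = random.Random(seed)
    n = X.shape[1]

    def fitness(mask):
        return (cross_val_score(GaussianNB(), X[:, mask], y, cv=3).mean()
                if mask.any() else 0.0)

    population = [np.array([rng.random() < 0.5 for _ in range(n)])
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]            # elitist truncation
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)                # one-point crossover
            cut = rng.randrange(1, n)
            child = np.concatenate([a[:cut], b[cut:]])
            flip = np.array([rng.random() < p_mut for _ in range(n)])
            children.append(child ^ flip)                # bit-flip mutation
        population = parents + children
    return max(population, key=fitness)
```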

Relevance: 30.00%

Abstract:

This thesis presents a creative and practical approach to dealing with the problem of selection bias. Selection bias may be the most vexing problem in program evaluation, or in any line of research that attempts to assert causality. Some of the greatest minds in economics and statistics have scrutinized the problem of selection bias, and the resulting approaches – Rubin's potential outcome approach (Rosenbaum and Rubin, 1983; Rubin, 1991, 2001, 2004) and Heckman's selection model (Heckman, 1979) – are widely accepted and used as the best fixes. These solutions to the bias that arises, in particular, from self-selection are imperfect, and many researchers, when feasible, reserve their strongest causal inference for data from experimental rather than observational studies. The innovative aspect of this thesis is to propose a data transformation that allows measuring and testing the presence of selection bias in an automatic and multivariate way. The approach involves the construction of a multidimensional conditional space of the X matrix in which the bias associated with the treatment assignment has been eliminated. Specifically, we propose a partial dependence analysis of the X-space as a tool for investigating the dependence relationship between a set of observable pre-treatment categorical covariates X and a treatment indicator variable T, in order to obtain a measure of bias according to their dependence structure. The measure of selection bias is then expressed in terms of the inertia due to the dependence between X and T that has been eliminated. Given this measure, we propose a multivariate test of imbalance to check whether the detected bias is significant, using the asymptotic distribution of the inertia due to T (Estadella et al., 2005) and preserving the multivariate nature of the data. Further, we propose a clustering procedure as a tool to find groups of comparable units on which to estimate local causal effects, with the multivariate test of imbalance serving as a stopping rule in choosing the best cluster solution. The method is nonparametric: it does not call for modeling the data based on some underlying theory or assumption about the selection process, but instead uses the existing variability within the data and lets the data speak. The idea of proposing this multivariate approach to measuring selection bias and testing balance comes from the observation that, in applied research, all aspects of multivariate balance not represented in the univariate variable-by-variable summaries are ignored. The first part contains an introduction to evaluation methods as part of public and private decision processes and a review of the literature on evaluation methods, focusing on Rubin's potential outcome approach, matching methods, and, briefly, Heckman's selection model. The second part focuses on the resulting limitations of conventional methods, with particular attention to the problem of how to test balance correctly. The third part contains the proposed original contribution, a simulation study that checks the performance of the method for a given dependence setting, and an application to a real data set. Finally, we discuss the results, conclude, and outline future perspectives.
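
The inertia-based imbalance idea can be caricatured in a few lines. The sketch below sums per-covariate chi-square inertias between X and T; it ignores the thesis' multivariate conditional-space construction and the asymptotic test of Estadella et al. (2005), so the function and its interpretation are illustrative assumptions only.

```python
import pandas as pd
from scipy.stats import chi2_contingency

def imbalance_inertia(df, covariates, treatment):
    """Rough imbalance measure: sum over covariates of chi-square / n for the
    contingency table of each categorical covariate against the treatment
    indicator T. Zero means the groups are perfectly balanced on X."""
    n = len(df)
    total = 0.0
    for c in covariates:
        table = pd.crosstab(df[c], df[treatment])
        chi2 = chi2_contingency(table)[0]
        total += chi2 / n
    return total
```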

Relevance: 30.00%

Abstract:

Intermodal rail/road freight transport constitutes an alternative to long-haul road transport for the distribution of large volumes of goods. The paper introduces the intermodal transportation problem for the tactical planning of mode and service selection. In rail mode, shippers either book train capacity on a per-unit basis or charter complete block trains. Road mode is used for short-distance haulage to intermodal terminals and for direct shipments to customers. We analyze, on a model basis, the competition between road and intermodal transportation with regard to freight consolidation and service cost. The approach is applied to the distribution system of an industrial company serving customers in eastern Europe. The case study investigates the impact of transport cost and consolidation on the optimal modal split.
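
The per-unit versus block-train trade-off described above can be made concrete with a back-of-the-envelope comparison. All parameters below (rates, capacities, drayage costs) are invented for illustration; the paper's tactical planning model is a full optimization, not this three-way comparison.

```python
import math

def cheapest_mode(units, unit_rail_rate, block_train_cost, train_capacity,
                  drayage_per_unit, direct_road_per_unit):
    """Compare three options for one origin-destination flow: booking rail
    capacity per unit, chartering whole block trains, or direct road haulage.
    Rail options also pay short-distance drayage to/from the terminals."""
    options = {
        "per-unit rail": units * (unit_rail_rate + drayage_per_unit),
        "block train": (math.ceil(units / train_capacity) * block_train_cost
                        + units * drayage_per_unit),
        "direct road": units * direct_road_per_unit,
    }
    best = min(options, key=options.get)
    return best, options

# Consolidation effect: chartering pays off once volumes nearly fill a train.
print(cheapest_mode(units=90, unit_rail_rate=300.0, block_train_cost=25000.0,
                    train_capacity=100, drayage_per_unit=80.0,
                    direct_road_per_unit=450.0))
```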

Relevance: 30.00%

Abstract:

Escherichia coli, Salmonella spp. and Acinetobacter spp. are important human pathogens. Serious infections due to these organisms are usually treated with extended-spectrum cephalosporins (ESCs). However, in the past two decades we have faced a rapid increase of infections and colonization caused by ESC-resistant (ESC-R) isolates due to the production of extended-spectrum β-lactamases (ESBLs), plasmid-mediated AmpCs (pAmpCs) and/or carbapenemase enzymes. This situation drastically limits our therapeutic armamentarium and puts human health in peril. Animals are considered potential reservoirs of multidrug-resistant (MDR) Gram-negative organisms. The massive and indiscriminate use of antibiotics in veterinary medicine has contributed to the selection of ESC-R E. coli, ESC-R Salmonella spp. and, to a lesser extent, MDR Acinetobacter spp. among animals, food, and the environment. This complex scenario is responsible for the expansion of these MDR organisms, which may have life-threatening clinical significance. Nowadays, the prevalence of food-producing animals carrying ESC-R E. coli and ESC-R Salmonella (especially isolates producing CTX-M-type ESBLs and the CMY-2 pAmpC) has reached worryingly high values. More recently, the appearance of carbapenem-resistant isolates (i.e., VIM-1-producing Enterobacteriaceae and NDM-1- or OXA-23-producing Acinetobacter spp.) in livestock has raised even greater concern. In this review, we describe the spread of the above MDR organisms among pigs, cattle, and poultry, focusing on epidemiology, molecular mechanisms of resistance, the impact of antibiotic use, and strategies to contain the overall problem. The link between ESC-R organisms of livestock origin and the human scenario, and their impact on it, are also discussed.

Relevance: 30.00%

Abstract:

The paper addresses the question of which factors drive the formation of policy preferences when there are remaining uncertainties about the causes and effects of the problem at stake. To answer this question, we examine policy preferences for reducing aquatic micropollutants, a specific case of water protection policy, across different actor groups (e.g., state, science, target groups). We contrast two types of policy preferences: a) preventive or source-directed policies, which reduce pollution at its source so that pollutants never reach the water; and b) reactive or end-of-pipe policies, which filter water already contaminated by pollutants. In a second step, we analyze the drivers of actors' policy preferences by focusing on three sets of explanations: participation, affectedness, and international collaborations. Our survey data, qualitative interviews, and regression analysis of the Swiss political elite show that participation in the policy-making process leads to knowledge exchange and reduces uncertainties about the policy problem, which promotes preferences for preventive policies. Likewise, actors affected by the consequences of micropollutants, such as consumer or environmental associations, opt for anticipatory policies. Interestingly, we find that uncertainties about the effectiveness of preventive policies can promote preferences for end-of-pipe policies: while preventive measures often rely on (uncertain) behavioral changes by target groups, reactive policies are more reliable when it comes to fulfilling defined policy goals. Finally, we find that in a transboundary water management context, actors with international collaborations prefer policies that produce immediate and reliable outcomes.

Relevance: 30.00%

Abstract:

Genetic adaptation to different environmental conditions is expected to lead to large differences between populations at selected loci, thus providing a signature of positive selection. Balancing selection, by contrast, can maintain polymorphisms over long evolutionary periods and even over large geographic scales, and thus leads to low levels of divergence between populations at selected loci. However, little is known about the relative importance of these two selective forces in shaping genomic diversity, partly due to difficulties in recognizing balancing selection in species showing low levels of differentiation. Here we address this problem by studying genomic diversity in the European common vole (Microtus arvalis), which presents high levels of differentiation between populations (average FST = 0.31). We studied 3,839 Amplified Fragment Length Polymorphism (AFLP) markers genotyped in 444 individuals from 21 populations distributed across the European continent, and hence across different environmental conditions. Our statistical approach to detecting markers under selection is based on a Bayesian method developed specifically for AFLP markers, which treats AFLPs as a nearly codominant marker system and therefore has increased power to detect selection. The high number of screened populations allowed us to detect the signature of balancing selection across a large geographic area. We detected 33 markers potentially under balancing selection, providing strong evidence of stabilizing selection across the 21 European populations. However, our analyses identified four times as many markers (138) under positive selection, and geographical patterns suggest that some of these markers are probably associated with alpine regions, whose environmental conditions seem to favour adaptation. We conclude that, despite conditions favourable to the detection of balancing selection in this study, this evolutionary force seems to play a relatively minor role in shaping the genomic diversity of the common vole, which is more influenced by positive selection and by neutral processes like drift and demographic history.
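
The AFLP-specific Bayesian machinery cannot be reproduced in a few lines, but the underlying divergence logic can be illustrated with Wright's F_ST computed from allele frequencies: loci with unusually low F_ST are candidates for balancing selection, and loci with unusually high F_ST for positive selection. The function below and this thresholding logic are illustrative assumptions, not the authors' method.

```python
import numpy as np

def wright_fst(freqs):
    """Per-locus Wright's F_ST from a (populations x loci) array of allele
    frequencies for biallelic loci: F_ST = (H_T - H_S) / H_T, where
    H = 2p(1 - p) is the expected heterozygosity."""
    p_bar = freqs.mean(axis=0)
    h_t = 2.0 * p_bar * (1.0 - p_bar)                 # total heterozygosity
    h_s = (2.0 * freqs * (1.0 - freqs)).mean(axis=0)  # mean within-population
    return np.where(h_t > 0,
                    (h_t - h_s) / np.where(h_t > 0, h_t, 1.0),
                    0.0)

# Loci far below the genome-wide F_ST suggest balancing selection;
# loci far above it suggest positive (diversifying) selection.
```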

Relevance: 30.00%

Abstract:

Bargaining is the building block of many economic interactions, ranging from bilateral to multilateral encounters and from situations in which the actors are individuals to negotiations between firms or countries. In all these settings, economists have long been intrigued by the fact that some projects, trades or agreements are not realized even though they are mutually beneficial. On the one hand, this has been explained by incomplete information. A firm may not be willing to offer a wage that is acceptable to a qualified worker, because it knows that there are also unqualified workers and cannot distinguish between the two types. This phenomenon is known as adverse selection. On the other hand, it has been argued that even with complete information, the presence of externalities may impede efficient outcomes. To see this, consider the example of climate change. If a subset of countries agrees to curb emissions, non-participant regions benefit from the signatories' efforts without incurring costs. These free-riding opportunities give rise to incentives to strategically improve one's bargaining power, which work against the formation of a global agreement.

This thesis is concerned with extending our understanding of both factors, adverse selection and externalities. The findings are based on empirical evidence from original laboratory experiments as well as game-theoretic modeling. On a very general note, it is demonstrated that the institutions through which agents interact matter to a large extent. Insights are provided about which institutions we should expect to perform better than others, at least in terms of aggregate welfare.

Chapters 1 and 2 focus on the problem of adverse selection. Effective operation of markets and other institutions often depends on good information transmission properties. In terms of the example introduced above, a firm is only willing to offer high wages if it receives enough positive signals about the worker's quality during the application and wage bargaining process. In Chapter 1, it will be shown that repeated interaction coupled with time costs facilitates information transmission. By making the wage bargaining process costly for the worker, the firm is able to obtain more accurate information about the worker's type. The cost could be pure time cost from delaying agreement or the cost of effort arising from a multi-step interviewing process. In Chapter 2, I abstract from time cost and show that communication can play a similar role. The simple fact that a worker claims to be of high quality may be informative. In Chapter 3, the focus is on a different source of inefficiency. Agents strive for bargaining power and thus may be motivated by incentives that are at odds with the socially efficient outcome. I have already mentioned the example of climate change. Other examples are coalitions within committees that are formed to secure voting power to block outcomes, or groups that commit to different technological standards although a single standard would be optimal (e.g. the format war between HD DVD and Blu-ray). It will be shown that such inefficiencies are directly linked to the presence of externalities and a certain degree of irreversibility in actions.

I now discuss the three articles in more detail. In Chapter 1, Olivier Bochet and I study a simple bilateral bargaining institution that eliminates trade failures arising from incomplete information. In this setting, a buyer makes offers to a seller in order to acquire a good. Whenever an offer is rejected by the seller, the buyer may submit a further offer. Bargaining is costly, because both parties suffer a (small) time cost after any rejection. The difficulties arise because the good can be of low or high quality and the quality of the good is only known to the seller. Indeed, without the possibility of making repeated offers, it is too risky for the buyer to offer prices that allow for trade of high-quality goods. When repeated offers are allowed, however, both types of goods trade with probability one at equilibrium. We provide an experimental test of these predictions. Buyers gather information about sellers using specific price offers, and rates of trade are high, in line with the model's qualitative predictions. We also observe a persistent over-delay before trade occurs, which reduces efficiency substantially. Possible channels for over-delay are identified in the form of two behavioral assumptions missing from the standard model, loss aversion (buyers) and haggling (sellers), which reconcile the data with the theoretical predictions.

Chapter 2 also studies adverse selection, but interaction between buyers and sellers now takes place within a market rather than in isolated pairs. Remarkably, in a market it suffices to let agents communicate in a very simple manner to mitigate trade failures. The key insight is that better-informed agents (sellers) are willing to truthfully reveal their private information, because by doing so they are able to reduce search frictions and attract more buyers. Behavior observed in the experimental sessions closely follows the theoretical predictions. As a consequence, costless and non-binding communication (cheap talk) significantly raises rates of trade and welfare. Previous experiments have documented that cheap talk alleviates inefficiencies due to asymmetric information. These findings are explained by pro-social preferences and lie aversion. I use appropriate control treatments to show that such considerations play only a minor role in our market. Instead, the experiment highlights the ability to organize markets as a new channel through which communication can facilitate trade in the presence of private information.

In Chapter 3, I theoretically explore coalition formation via multilateral bargaining under complete information. The environment studied is extremely rich in the sense that the model allows for all kinds of externalities. This is achieved by using so-called partition functions, which pin down a coalitional worth for each possible coalition in each possible coalition structure. It is found that although binding agreements can be written, efficiency is not guaranteed, because the negotiation process is inherently non-cooperative. The prospects of cooperation are shown to depend crucially on i) the degree to which players can renegotiate and gradually build up agreements, and ii) the absence of a certain type of externalities that can loosely be described as incentives to free ride. Moreover, the willingness to concede bargaining power is identified as a novel reason for gradualism. Another key contribution of the study is that it identifies a strong connection between the Core, one of the most important concepts in cooperative game theory, and the set of environments for which efficiency is attained even without renegotiation.
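
The adverse-selection mechanism at the heart of Chapters 1 and 2 can be shown with Akerlof-style arithmetic. The numbers below are invented for illustration and are not taken from the thesis or its experiments.

```python
def lemons_outcome(q, v_high=100.0, v_low=30.0, c_high=80.0, c_low=20.0):
    """One-shot pooled-price market: buyers pay at most their expected value,
    and high-quality sellers trade only if that covers their cost c_high."""
    pooled_price = q * v_high + (1.0 - q) * v_low
    if pooled_price >= c_high:
        return pooled_price, "both qualities trade"
    return pooled_price, "only low quality trades (adverse selection)"

# With few high-quality goods the pooled price is too low and the market
# unravels; repeated offers or credible communication can restore trade.
print(lemons_outcome(q=0.3))  # (51.0, 'only low quality trades ...')
print(lemons_outcome(q=0.8))  # (86.0, 'both qualities trade')
```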

Relevance: 30.00%

Abstract:

Mass spectrometry (MS) data provide a promising avenue for biomarker discovery, and the detection of relevant peakbins in MS data is currently under intense research. Data from mass spectrometry are challenging to analyze because of their high dimensionality and the generally low number of available samples. To tackle this problem, the scientific community is becoming increasingly interested in applying feature subset selection techniques based on specialized machine learning algorithms. In this paper, we present a performance comparison of several metaheuristics: best first (BF), genetic algorithm (GA), scatter search (SS) and variable neighborhood search (VNS). With the exception of GA, these algorithms are applied here for the first time to the detection of relevant peakbins in MS data. All the metaheuristic searches are embedded in two different schemes, filter and wrapper, coupled with Naive Bayes and SVM classifiers.
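
As a sketch of the wrapper scheme, here is a greedy best-first-style forward selection with a Naive Bayes evaluator; the stopping rule, fold count, and classifier choice are assumptions, and the paper's BF/GA/SS/VNS searches are considerably more sophisticated.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

def forward_wrapper(X, y, max_features=20):
    """Greedy forward wrapper: at each step add the peakbin that most improves
    3-fold cross-validated Naive Bayes accuracy; stop when nothing improves."""
    selected, best_score = [], 0.0
    remaining = set(range(X.shape[1]))
    while remaining and len(selected) < max_features:
        scores = {j: cross_val_score(GaussianNB(),
                                     X[:, selected + [j]], y, cv=3).mean()
                  for j in remaining}
        j_best = max(scores, key=scores.get)
        if scores[j_best] <= best_score:
            break
        selected.append(j_best)
        remaining.remove(j_best)
        best_score = scores[j_best]
    return selected, best_score
```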

Relevance: 30.00%

Abstract:

Geologic storage of carbon dioxide (CO2) has been proposed as a viable means of reducing anthropogenic CO2 emissions. Once injection begins, a program for measurement, monitoring, and verification (MMV) of the CO2 distribution is required in order to: a) research key features, effects and processes needed for risk assessment; b) manage the injection process; c) delineate and identify leakage risk and surface escape; d) provide early warnings of failure near the reservoir; and e) verify storage for accounting and crediting. The selection of the monitoring methodology (characterization of the site, and control and verification in the post-injection phase) is influenced by economic and technological variables. Multiple Criteria Decision Making (MCDM) refers to a methodology developed for making decisions in the presence of multiple criteria. As a discipline, MCDM has a relatively short history of about 40 years, closely tied to advances in computer technology. Evaluation methods and multicriteria decisions involve the selection of a set of feasible alternatives, the simultaneous optimization of several objective functions, and a decision-making process with evaluation procedures that must be rational and consistent. The application of a mathematical decision-making model helps to find the best solution, establishing mechanisms to facilitate the management of information generated by several disciplines of knowledge. Problems in which the decision alternatives are finite are called discrete multicriteria decision problems. Such problems are the most common in practice, and this is the scenario applied here to the problem of selecting sites for storing CO2. Discrete MCDM is used to assess and decide on issues that by nature or design support a finite number of alternative solutions. Recently, multicriteria decision analysis has been applied to rank policy incentives for CCS, to assess the role of CCS, and to select potential areas that could be suitable for storage. For these reasons, MCDM is considered here in the monitoring phase of CO2 storage, in order to select suitable technologies that are techno-economically viable. In this paper, we identify techniques for subsurface gas measurement that are currently applied in the characterization (pre-injection) phase; MCDM will help decision-makers rank the techniques best suited to monitoring each specific physico-chemical parameter.
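
A minimal discrete MCDM step is a weighted sum over normalized criteria. The sketch below assumes a decision matrix of monitoring techniques scored on benefit-type and cost-type criteria with user-supplied weights; real applications in this setting would typically use richer methods (e.g., AHP, ELECTRE, TOPSIS).

```python
import numpy as np

def weighted_sum_ranking(scores, weights, benefit):
    """Rank alternatives (rows) over criteria (columns) by weighted sum after
    min-max normalization; cost-type criteria (benefit=False) are inverted so
    that higher is always better."""
    s = np.asarray(scores, dtype=float)
    lo, hi = s.min(axis=0), s.max(axis=0)
    norm = (s - lo) / np.where(hi > lo, hi - lo, 1.0)
    norm = np.where(benefit, norm, 1.0 - norm)
    totals = norm @ np.asarray(weights, dtype=float)
    return np.argsort(-totals), totals

# Example: 3 hypothetical monitoring techniques scored on
# [accuracy, spatial coverage, cost]; cost is minimized.
rank, totals = weighted_sum_ranking(
    scores=[[0.9, 0.6, 120.0], [0.7, 0.8, 60.0], [0.8, 0.7, 90.0]],
    weights=[0.5, 0.3, 0.2],
    benefit=[True, True, False])
```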