954 results for Suppliers selection problem


Relevance: 90.00%

Abstract:

In this paper, we develop a game theoretic approach for clustering features in a learning problem. Feature clustering can serve as an important preprocessing step in many problems such as feature selection, dimensionality reduction, etc. In this approach, we view features as rational players of a coalitional game where they form coalitions (or clusters) among themselves in order to maximize their individual payoffs. We show how the Nash Stable Partition (NSP), a well-known concept in coalitional game theory, provides a natural way of clustering features. Through this approach, one can obtain some desirable properties of the clusters by choosing appropriate payoff functions. For a small number of features, the NSP-based clustering can be found by solving an integer linear program (ILP). However, for a large number of features, the ILP-based approach does not scale well, and hence we propose a hierarchical approach. Interestingly, a key result that we prove on the equivalence between a k-size NSP of a coalitional game and the minimum k-cut of an appropriately constructed graph comes in handy for large-scale problems. Throughout the paper, we use the feature selection problem (in a classification setting) as a running example to illustrate our approach, and we conduct experiments to illustrate its efficacy.
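
A minimal sketch of the min k-cut connection, not the paper's NSP algorithm: it clusters features by recursively applying a Stoer-Wagner minimum cut to a similarity graph whose edge weights are absolute pairwise correlations. The similarity measure, the recursive splitting scheme, and the toy data are assumptions made for the example.

```python
import numpy as np
import networkx as nx

def cluster_features_by_min_cut(X, k=2):
    """Split features into k clusters by recursively applying a minimum cut
    to a similarity graph (a rough stand-in for the NSP / min k-cut link)."""
    n_features = X.shape[1]
    sim = np.abs(np.corrcoef(X, rowvar=False))      # |correlation| as edge weight

    G = nx.Graph()
    for i in range(n_features):
        for j in range(i + 1, n_features):
            G.add_edge(i, j, weight=float(sim[i, j]))

    clusters = [list(G.nodes)]
    while len(clusters) < k:
        # split the largest splittable cluster by a Stoer-Wagner minimum cut
        largest = max((c for c in clusters if len(c) > 1), key=len)
        clusters.remove(largest)
        _, (part_a, part_b) = nx.stoer_wagner(G.subgraph(largest).copy())
        clusters.extend([list(part_a), list(part_b)])
    return clusters

# toy usage: 6 features forming two correlated groups
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 2))
X = np.hstack([base[:, [0]] + 0.1 * rng.normal(size=(200, 3)),
               base[:, [1]] + 0.1 * rng.normal(size=(200, 3))])
print(cluster_features_by_min_cut(X, k=2))
```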

Relevance: 90.00%

Abstract:

In geographical forwarding of packets in a large wireless sensor network (WSN) with sleep-wake cycling nodes, we are interested in the local decision problem faced by a node that has "custody" of a packet and has to choose one among a set of next-hop relay nodes to forward the packet toward the sink. Each relay is associated with a "reward" that summarizes the benefit of forwarding the packet through that relay. We seek a solution to this local problem, the idea being that such a solution, if adopted by every node, could provide a reasonable heuristic for the end-to-end forwarding problem. Toward this end, we propose a local relay selection problem consisting of a forwarding node and a collection of relay nodes, with the relays waking up sequentially at random times. At each relay wake-up instant, the forwarder can choose to probe a relay to learn its reward value, based on which the forwarder can then decide whether to stop (and forward its packet to the chosen relay) or to continue to wait for further relays to wake up. The forwarder's objective is to select a relay so as to optimize a trade-off among waiting delay, reward, and probing cost. The local decision problem can be considered as a variant of the asset selling problem studied in the operations research literature. We formulate the local problem as a Markov decision process (MDP) and characterize the solution in terms of stopping sets and probing sets. We provide results illustrating the structure of the stopping sets, namely, the (lower bound) threshold and the stage independence properties. Regarding the probing sets, we make an interesting conjecture that these sets are characterized by upper bounds. Through simulation experiments, we provide valuable insights into the performance of the optimal local forwarding and its use as an end-to-end forwarding heuristic.
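
As a stripped-down illustration of the asset-selling flavour of the local problem, the sketch below computes stage-wise stopping thresholds by backward induction for a simplified model with i.i.d. Uniform(0,1) relay rewards and a fixed delay cost per wake-up; the probing cost, random wake-up times, and the paper's actual reward model are omitted, so the thresholds are only indicative of the structure.

```python
import numpy as np

def stopping_thresholds(n_relays, delay_cost, n_samples=100_000, seed=0):
    """Backward induction for a simplified asset-selling view of relay selection:
    relay rewards are i.i.d. Uniform(0,1), every extra wake-up costs `delay_cost`,
    and the threshold at a stage is the expected value of continuing past it."""
    rng = np.random.default_rng(seed)
    rewards = rng.uniform(size=n_samples)      # Monte Carlo sample of relay rewards
    continue_value = 0.0                       # value once the last relay has woken up
    thresholds = []
    for _ in range(n_relays):                  # walk backwards from the final stage
        thresholds.append(continue_value)      # stop now iff reward >= this value
        continue_value = np.mean(np.maximum(rewards, continue_value)) - delay_cost
    return thresholds[::-1]                    # thresholds[0] belongs to the first wake-up

print(stopping_thresholds(n_relays=5, delay_cost=0.05))
```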

Relevance: 90.00%

Abstract:

We consider the problem of variable selection in regression modeling in high-dimensional spaces where there is known structure among the covariates. This is an unconventional variable selection problem for two reasons: (1) the dimension of the covariate space is comparable to, and often much larger than, the number of subjects in the study, and (2) the covariate space is highly structured, and in some cases it is desirable to incorporate this structural information into the model-building process. We approach this problem through the Bayesian variable selection framework, where we assume that the covariates lie on an undirected graph and formulate an Ising prior on the model space for incorporating structural information. Certain computational and statistical problems arise that are unique to such high-dimensional, structured settings, the most interesting being the phenomenon of phase transitions. We propose theoretical and computational schemes to mitigate these problems. We illustrate our methods on two different graph structures: the linear chain and the regular graph of degree k. Finally, we use our methods to study a specific application in genomics: the modeling of transcription factor binding sites in DNA sequences. © 2010 American Statistical Association.
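
The Ising prior on the model space is easy to write down explicitly. The sketch below evaluates an unnormalised log prior over binary inclusion indicators on a covariate graph; the {0,1} coding and the sparsity and smoothness parameters a and b are illustrative choices, not necessarily the paper's parameterisation.

```python
import numpy as np

def log_ising_prior(gamma, edges, a=-1.0, b=0.5):
    """Unnormalised log Ising prior over inclusion indicators gamma in {0,1}^p:
    `a` controls overall sparsity and `b` rewards jointly selecting covariates
    that are neighbours on the covariate graph."""
    gamma = np.asarray(gamma)
    sparsity = a * gamma.sum()
    smoothness = b * sum(gamma[i] * gamma[j] for i, j in edges)
    return sparsity + smoothness

# linear-chain covariate graph on 5 covariates, one of the structures in the abstract
edges = [(i, i + 1) for i in range(4)]
print(log_ising_prior([1, 1, 0, 0, 1], edges))   # two selected neighbours, one isolated pick
```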

Relevance: 90.00%

Abstract:

In this paper, we consider the variable selection problem for a nonlinear non-parametric system. Two approaches are proposed: a top-down approach and a bottom-up approach. The top-down algorithm selects a variable by detecting whether the corresponding partial derivative is zero or not at the point of interest. The algorithm is shown to enjoy not only parameter convergence but also set convergence. This is critical because the variable selection problem is binary: a variable is either selected or not. The bottom-up approach is based on forward/backward stepwise selection, which is designed to work when the data length is limited. Both approaches determine the most important variables locally and allow the unknown non-parametric nonlinear system to have different local dimensions at different points of interest. Further, two potential applications, along with numerical simulations, are provided to illustrate the usefulness of the proposed algorithms.
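
A rough sketch of the top-down idea, assuming a kernel-weighted local linear fit as the partial-derivative estimator: variables whose estimated local slope at the point of interest is clearly non-zero are retained. The Gaussian kernel, bandwidth, ridge jitter, and threshold are illustrative assumptions rather than the paper's estimator or its convergence machinery.

```python
import numpy as np

def select_local_variables(X, y, x0, bandwidth=0.5, threshold=0.1):
    """Fit a Gaussian-kernel-weighted local linear model around x0 and keep the
    variables whose estimated partial derivative (local slope) is clearly non-zero."""
    weights = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * bandwidth ** 2))
    Xc = np.hstack([np.ones((X.shape[0], 1)), X - x0])   # intercept + centred inputs
    W = np.diag(weights)
    beta = np.linalg.solve(Xc.T @ W @ Xc + 1e-8 * np.eye(Xc.shape[1]), Xc.T @ W @ y)
    slopes = beta[1:]                                     # estimated partial derivatives at x0
    return np.flatnonzero(np.abs(slopes) > threshold)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 4))
y = np.sin(X[:, 0]) + X[:, 1] ** 2          # variables 2 and 3 are locally irrelevant
print(select_local_variables(X, y, x0=np.array([0.5, 0.5, 0.0, 0.0])))  # expected: [0 1]
```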

Relevance: 90.00%

Abstract:

This paper investigates the gene selection problem for microarray data with small samples and variant correlation. Most existing algorithms require expensive computational effort, especially when thousands of genes are involved. The main objective of this paper is to effectively select the most informative genes from microarray data while keeping the computational expense affordable. This is achieved by proposing a novel forward gene selection algorithm (FGSA). To overcome the small-sample problem, an augmented data technique is first employed to produce an augmented data set. Taking inspiration from other gene selection methods, the L2-norm penalty is then introduced into the recently proposed fast regression algorithm to achieve the group selection ability. Finally, by defining a proper regression context, the proposed method can be implemented efficiently in software, which significantly reduces the computational burden. Both computational complexity analysis and simulation results confirm the effectiveness of the proposed algorithm in comparison with other approaches.
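
The two generic ingredients, forward selection and an L2 penalty, can be combined as in the sketch below. This is a plain greedy forward search scored by ridge regression, not the FGSA algorithm itself; it omits the augmented-data step and the fast-regression implementation described in the abstract.

```python
import numpy as np
from sklearn.linear_model import Ridge

def forward_ridge_selection(X, y, n_select, alpha=1.0):
    """Greedy forward selection sketch: at each step add the gene (column) that
    most reduces the ridge-penalised residual sum of squares."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(n_select):
        scores = []
        for j in remaining:
            cols = selected + [j]
            model = Ridge(alpha=alpha).fit(X[:, cols], y)
            residual = y - model.predict(X[:, cols])
            scores.append((residual @ residual, j))
        _, best = min(scores)
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 200))                 # small-sample, many-gene setting
y = 3 * X[:, 5] - 2 * X[:, 17] + 0.1 * rng.normal(size=40)
print(forward_ridge_selection(X, y, n_select=2))   # expected: [5, 17]
```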

Relevance: 90.00%

Abstract:

In this manuscript, we tackle the problem of semidistributed user selection with distributed linear precoding for sum rate maximization in multiuser multicell systems. A set of adjacent base stations (BS) form a cluster in order to perform coordinated transmission to cell-edge users, and coordination is carried out through a central processing unit (CU). However, the message exchange between BSs and the CU is limited to scheduling control signaling, and no user data or channel state information (CSI) exchange is allowed. In the considered multicell coordinated approach, each BS has its own set of cell-edge users and transmits only to one intended user, while interference to non-intended users at other BSs is suppressed by signal steering (precoding). We use two distributed linear precoding schemes, Distributed Zero Forcing (DZF) and Distributed Virtual Signal-to-Interference-plus-Noise Ratio (DVSINR). Considering multiple users per cell and the backhaul limitations, the BSs rely on local CSI to solve the user selection problem. First, we investigate how the signal-to-noise ratio (SNR) regime and the number of antennas at the BSs impact the effective channel gain (the magnitude of the channels after precoding) and its relationship with multiuser diversity. Considering that user selection must be based on the type of implemented precoding, we develop metrics of compatibility (estimations of the effective channel gains) that can be computed from local CSI at each BS and reported to the CU for scheduling decisions. Based on such metrics, we design user selection algorithms that can find a set of users that potentially maximizes the sum rate. Numerical results show the effectiveness of the proposed metrics and algorithms for different configurations of users and antennas at the base stations.
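
For intuition about effective channel gains after precoding, the sketch below uses the classical single-cell zero-forcing gain, 1 / [(H H^H)^{-1}]_kk, as a selection metric. The paper's DZF and DVSINR compatibility metrics for the clustered, local-CSI setting are different, so this is only a generic stand-in with invented channel dimensions.

```python
import numpy as np

def zf_effective_gains(H):
    """Effective channel gains after zero-forcing for the users in H
    (rows = users, columns = BS antennas): the classical 1 / [(H H^H)^{-1}]_kk value."""
    gram_inv = np.linalg.inv(H @ H.conj().T)
    return 1.0 / np.real(np.diag(gram_inv))

def select_user(H, noise_power=1.0):
    """Pick the cell-edge user with the largest estimated post-precoding rate."""
    rates = np.log2(1.0 + zf_effective_gains(H) / noise_power)
    return int(np.argmax(rates)), rates

rng = np.random.default_rng(0)
H = (rng.normal(size=(3, 4)) + 1j * rng.normal(size=(3, 4))) / np.sqrt(2)  # 3 users, 4 BS antennas
print(select_user(H))
```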

Relevance: 90.00%

Abstract:

A key problem in object recognition is selection, namely, the problem of identifying regions in an image within which to start the recognition process, ideally by isolating regions that are likely to come from a single object. Such a selection mechanism has been found to be crucial in reducing the combinatorial search involved in the matching stage of object recognition. Even though selection is of help in recognition, it has largely remained unsolved because of the difficulty in isolating regions belonging to objects under complex imaging conditions involving occlusions, changing illumination, and varying object appearances. This thesis presents a novel approach to the selection problem by proposing a computational model of visual attentional selection as a paradigm for selection in recognition. In particular, it proposes two modes of attentional selection, namely, the "attracted" and "pay attention" modes, as being appropriate for data- and model-driven selection in recognition. An implementation of this model has led to new ways of extracting color, texture, and line group information in images, and to their subsequent use in isolating areas of the scene likely to contain the model object. Among the specific results in this thesis are: a method of specifying color by perceptual color categories for fast color region segmentation and color-based localization of objects, and a result showing that the recognition of texture patterns on model objects is possible under changes in orientation and occlusions without detailed segmentation. The thesis also presents an evaluation of the proposed model by integrating it with a 3D-from-2D object recognition system and recording the improvement in performance. These results indicate that attentional selection can significantly overcome the computational bottleneck in object recognition, both through a reduction in the number of features and through a reduction in the number of matches during recognition using the information derived during selection. Finally, these studies have revealed a surprising use of selection, namely, in the partial solution of the pose of a 3D object.
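
A hedged sketch of the colour-category idea only (the thesis' perceptual categories and attentional model are richer): the snippet quantises hue into a few coarse named bins and keeps pixels whose category matches the model object's colours. The bin boundaries, the saturation cut-off, and the HSV convention (hue in degrees) are invented for the example.

```python
import numpy as np

def color_category_mask(hsv_image, target_categories):
    """Quantise hue into a few coarse named bins and keep pixels whose category
    matches one of the model object's colours (washed-out pixels are ignored)."""
    hue, sat = hsv_image[..., 0], hsv_image[..., 1]       # hue in degrees, saturation in [0, 1]
    bins = {"red": (345, 15), "yellow": (45, 75), "green": (75, 165),
            "blue": (165, 255), "purple": (255, 345)}
    mask = np.zeros(hue.shape, dtype=bool)
    for name in target_categories:
        lo, hi = bins[name]
        in_bin = ((hue >= lo) | (hue < hi)) if lo > hi else ((hue >= lo) & (hue < hi))
        mask |= in_bin & (sat > 0.2)
    return mask

rng = np.random.default_rng(0)
hsv = np.dstack([rng.uniform(0, 360, (4, 4)), np.ones((4, 4)), np.ones((4, 4))])
print(color_category_mask(hsv, ["red", "blue"]))
```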

Relevance: 90.00%

Abstract:

Life-cycle assessment (LCA) is a method for evaluating the environmental impacts of products holistically, including direct and supply chain impacts. The current LCA methodologies and the standards by the International Organization for Standardization (ISO) impose practical difficulties for drawing system boundaries; decisions on inclusion or exclusion of processes in an analysis (the cutoff criteria) are typically not made on a scientific basis. In particular, the requirement of deciding which processes could be excluded from the inventory can be rather difficult to meet because many excluded processes have often never been assessed by the practitioner, and therefore, their negligibility cannot be guaranteed. LCA studies utilizing economic input-output analysis have shown that, in practice, excluded processes can contribute as much to the product system under study as included processes; thus, the subjective determination of the system boundary may lead to invalid results. System boundaries in LCA are discussed herein with particular attention to outlining hybrid approaches as methods for resolving the boundary selection problem in LCA. An input-output model can be used to describe at least a part of a product system, and an ISO-compatible system boundary selection procedure can be designed by applying hybrid input-output-assisted approaches. There are several hybrid input-output analysis-based LCA methods that can be implemented in practice for broadening the system boundary and also for ISO compliance.
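
The input-output side of a hybrid model reduces to the Leontief system x = (I - A)^{-1} y, with impacts obtained by multiplying the resulting sector outputs by emission intensities. The toy sketch below propagates one unit of final demand through a three-sector economy; all matrices and coefficients are illustrative, not data from any real input-output table.

```python
import numpy as np

# Illustrative 3-sector input-output LCA: A is the technology matrix
# (inter-sector requirements per unit of output), y the final demand of the
# product under study, and f the emission intensity of each sector.
A = np.array([[0.10, 0.02, 0.00],
              [0.05, 0.20, 0.10],
              [0.00, 0.30, 0.05]])
y = np.array([1.0, 0.0, 0.0])                # one unit of the studied product
f = np.array([0.5, 2.0, 1.2])                # kg CO2-eq per unit of sector output

x = np.linalg.solve(np.eye(3) - A, y)        # Leontief inverse applied to final demand
total_impact = f @ x                         # supply-chain-wide impact of the product
print(x, total_impact)
```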

Relevance: 90.00%

Abstract:

This paper addresses a resource selection problem for applications that update data in enterprise grid systems. The problem is insufficiently addressed, as most of the existing resource selection approaches in grid environments primarily deal with read-only jobs. We propose a simple yet efficient algorithm that deals with the complexity of the resource selection problem in enterprise grid systems. The problem is formulated as a Multi Criteria Decision Making (MCDM) problem. Our proposed algorithm hides the complexity of the resource selection process without neglecting important components that affect job response time. The difficulty of estimating job response time is captured by representing it in terms of different QoS criteria levels at each resource. Our experiments show that the proposed algorithm achieves very good results and good system performance compared to existing algorithms.
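
As a generic stand-in for the MCDM formulation (the paper's own algorithm is not reproduced here), the sketch below ranks candidate resources by simple additive weighting over normalised QoS criteria; the criteria, weights, and benefit/cost flags are invented for the example.

```python
import numpy as np

def rank_resources(criteria_matrix, weights, benefit):
    """Simple additive weighting sketch for MCDM resource selection:
    rows = candidate resources, columns = QoS criteria; benefit[j] is True if
    larger values of criterion j are better (e.g. bandwidth) and False if
    smaller is better (e.g. queue length)."""
    m = np.asarray(criteria_matrix, dtype=float)
    norm = np.where(benefit, m / m.max(axis=0), m.min(axis=0) / m)
    scores = norm @ weights
    return np.argsort(-scores), scores

resources = [[100, 4, 0.95],    # bandwidth, queued jobs, availability
             [250, 9, 0.90],
             [180, 2, 0.99]]
order, scores = rank_resources(resources, weights=np.array([0.3, 0.4, 0.3]),
                               benefit=np.array([True, False, True]))
print(order, scores)
```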

Relevance: 90.00%

Abstract:

We construct a dynamic equilibrium model to quantitatively study sovereign debt with contingent services and country risk spreads, such that the benefits of defaulting are tempered by higher interest rates in the future. For a wide range of parameters, the only equilibrium of the model is one in which the sovereign defaults in all states, unless defaulting incurs additional costs. Due to the adverse selection problem, some countries choose to delay default in order to reduce reputation loss. Although equilibria with no default imply greater welfare levels, they are not sustainable in highly indebted and volatile countries.

Relevance: 90.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 90.00%

Abstract:

This paper studies feature subset selection in classification using a multiobjective estimation of distribution algorithm. We consider six functions, namely area under the ROC curve, sensitivity, specificity, precision, F1 measure, and Brier score, for the evaluation of feature subsets and as the objectives of the problem. One characteristic of these objective functions is the existence of noise in their values, which should be appropriately handled during optimization. Our proposed algorithm consists of two major techniques that are specially designed for the feature subset selection problem. The first is a solution ranking method based on interval values to handle the noise in the objectives of this problem. The second is a model estimation method for learning a joint probabilistic model of objectives and variables, which is used to generate new solutions and advance through the search space. To simplify model estimation, l1-regularized regression is used to select a subset of problem variables before model learning. The proposed algorithm is compared with a well-known ranking method for interval-valued objectives and a standard multiobjective genetic algorithm. In particular, the effects of the two new techniques are experimentally investigated. The experimental results show that the proposed algorithm is able to obtain comparable or better performance on the tested datasets.
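
The l1-regularised screening step can be illustrated in isolation: the sketch below regresses a single objective on the binary solution variables of the current population and keeps only the variables with non-zero Lasso coefficients for the subsequent model-estimation step. The regularisation strength and the single-objective simplification are assumptions for the example.

```python
import numpy as np
from sklearn.linear_model import Lasso

def screen_variables(population, objective_values, alpha=0.01):
    """l1-regularised screening sketch: regress an objective on the binary
    solution variables of the current population and keep only variables with
    non-zero coefficients for the subsequent model-estimation step."""
    lasso = Lasso(alpha=alpha).fit(population, objective_values)
    return np.flatnonzero(lasso.coef_)

rng = np.random.default_rng(0)
population = rng.integers(0, 2, size=(60, 30))          # 60 candidate feature subsets
objective = population[:, 3] + population[:, 12] + 0.05 * rng.normal(size=60)
print(screen_variables(population, objective))           # expected: [ 3 12]
```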

Relevance: 90.00%

Abstract:

Global competition requires that companies adapt rapidly to technological change, develop new products, reduce costs, shorten the time to market, and increase quality. In this context, supplier involvement in New Product Development (NPD) is determinant for a company to respond to the requirements of increasingly dynamic markets. The main purpose of the paper is to demonstrate the importance of supplier involvement in NPD and of buyer-supplier relationships and their effects on the buyer's NPD process, highlighting the benefits of supplier involvement, the barriers, the strategic aspects, and industry aspects. These issues are addressed with a case study from the semiconductor industry. Besides helping to understand NPD in the semiconductor industry, the contribution and findings of this work are clear: the results achieved confirm the findings of studies referred to in the literature review and confirm that the semiconductor industry requires a closer and more complex relationship structure with suppliers, given the specificities and challenges of the sector, such as rapid technological change, permanent innovation, global competition, reduction of cost and time-to-market cycles, and increased capacity, among others. The main contribution of the paper to the scientific literature and to managers is a better understanding of buyer-supplier relationships in NPD in the semiconductor industry.

Relevance: 90.00%

Abstract:

The selection of a set of requirements from all the requirements previously defined by customers is an important process, repeated at the beginning of each development step when an incremental or agile software development approach is adopted. The set of selected requirements will be developed during the current iteration. This selection problem can be reformulated as a search problem, allowing its treatment with metaheuristic optimization techniques. This paper studies how to apply Ant Colony Optimization algorithms to select requirements. First, we describe this problem formally, extending an earlier version of the formulation, and introduce a method based on the Ant Colony System to find a variety of efficient solutions. The performance achieved by the Ant Colony System is compared with that of the Greedy Randomized Adaptive Search Procedure and the Non-dominated Sorting Genetic Algorithm, by means of computational experiments carried out on two instances of the problem constructed from data provided by experts.
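
A toy ant-colony sketch of the requirement-selection idea, reduced to a single objective (maximise customer value under a development-cost budget) rather than the paper's formulation or its Ant Colony System variant; the pheromone update rule, parameters, and instance data are all invented for illustration.

```python
import numpy as np

def aco_select_requirements(values, costs, budget, n_ants=20, n_iter=100,
                            alpha=1.0, beta=2.0, rho=0.1, seed=0):
    """Toy ant-colony sketch for requirement selection: maximise total customer
    value subject to a development-cost budget."""
    rng = np.random.default_rng(seed)
    n = len(values)
    pheromone = np.ones(n)
    heuristic = np.asarray(values) / np.asarray(costs)      # value per unit cost
    best_set, best_value = [], 0.0

    for _ in range(n_iter):
        for _ant in range(n_ants):
            remaining_budget, chosen = budget, []
            candidates = list(range(n))
            while candidates:
                affordable = [j for j in candidates if costs[j] <= remaining_budget]
                if not affordable:
                    break
                weights = np.array([pheromone[j] ** alpha * heuristic[j] ** beta
                                    for j in affordable])
                j = rng.choice(affordable, p=weights / weights.sum())
                chosen.append(j)
                remaining_budget -= costs[j]
                candidates.remove(j)
            value = sum(values[j] for j in chosen)
            if value > best_value:
                best_value, best_set = value, sorted(chosen)
        pheromone *= (1 - rho)                               # evaporation
        pheromone[best_set] += rho * best_value              # reinforce the best solution
    return best_set, best_value

values = [10, 6, 8, 7, 3, 9]
costs = [4, 2, 5, 3, 1, 6]
print(aco_select_requirements(values, costs, budget=10))
```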

Relevance: 80.00%

Abstract:

Web service technology is increasingly being used to build various e-Applications, in domains such as e-Business and e-Science. Characteristic benefits of web service technology are its interoperability, decoupling, and just-in-time integration. Using web service technology, an e-Application can be implemented by web service composition, that is, by composing existing individual web services in accordance with the business process of the application. This means the application is provided to customers in the form of a value-added composite web service. An important and challenging issue of web service composition is how to meet Quality-of-Service (QoS) requirements. This includes customer-focused elements such as response time, price, throughput and reliability, as well as how to best provide QoS results for the composites. This in turn best fulfils customers' expectations and achieves their satisfaction. Fulfilling these QoS requirements, or addressing the QoS-aware web service composition problem, is the focus of this project. From a computational point of view, QoS-aware web service composition can be transformed into diverse optimisation problems. These problems are characterised as complex, large-scale, highly constrained and multi-objective problems. We therefore use genetic algorithms (GAs) to address QoS-based service composition problems. More precisely, this study addresses three important subproblems of QoS-aware web service composition: QoS-based web service selection for a composite web service accommodating constraints on inter-service dependence and conflict, QoS-based resource allocation and scheduling for multiple composite services on hybrid clouds, and performance-driven composite service partitioning for decentralised execution. Based on operations research theory, we model the three problems as a constrained optimisation problem, a resource allocation and scheduling problem, and a graph partitioning problem, respectively. Then, we present novel GAs to address these problems. We also conduct experiments to evaluate the performance of the new GAs. Finally, verification experiments are performed to show the correctness of the GAs. The major outcomes from the first problem are three novel GAs: a penalty-based GA, a min-conflict hill-climbing repairing GA, and a hybrid GA. These GAs adopt different constraint handling strategies to handle constraints on inter-service dependence and conflict, an important factor that has been largely ignored by existing algorithms and whose neglect can lead to the generation of infeasible composite services. Experimental results demonstrate the effectiveness of our GAs for handling the QoS-based web service selection problem with constraints on inter-service dependence and conflict, as well as their better scalability than the existing integer programming-based method for large-scale web service selection problems. The major outcomes from the second problem are two GAs: a random-key GA and a cooperative coevolutionary GA (CCGA). Experiments demonstrate the good scalability of the two algorithms. In particular, the CCGA scales well as the number of composite services involved in a problem increases, while no other algorithm demonstrates this ability. The findings from the third problem result in a novel GA for composite service partitioning for decentralised execution. Compared with existing heuristic algorithms, the new GA is more suitable for large-scale composite web service program partitioning problems. In addition, the GA outperforms existing heuristic algorithms, generating a better deployment topology for a composite web service for decentralised execution. These effective and scalable GAs can be integrated into QoS-based management tools to facilitate the delivery of feasible, reliable and high-quality composite web services.
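
To make the penalty-based constraint handling concrete, the sketch below is a generic GA, not one of the thesis' algorithms: it selects one candidate service per abstract task and maximises a summed QoS utility minus a penalty for each violated inter-service dependence or conflict constraint. The encoding, operators, penalty weight, and toy instance are illustrative assumptions.

```python
import numpy as np

def penalty_ga_service_selection(quality, dependencies, conflicts,
                                 pop_size=40, generations=200,
                                 penalty=5.0, mutation_rate=0.1, seed=0):
    """Penalty-based GA sketch for QoS-based service selection: one candidate
    service per abstract task, maximising summed QoS utility while penalising
    violated inter-service dependence and conflict constraints."""
    rng = np.random.default_rng(seed)
    n_tasks = len(quality)
    n_candidates = [len(q) for q in quality]

    def fitness(individual):
        utility = sum(quality[t][individual[t]] for t in range(n_tasks))
        violations = 0
        for (t1, s1), (t2, s2) in dependencies:     # choosing s1 for t1 requires s2 for t2
            if individual[t1] == s1 and individual[t2] != s2:
                violations += 1
        for (t1, s1), (t2, s2) in conflicts:        # s1 for t1 and s2 for t2 cannot co-occur
            if individual[t1] == s1 and individual[t2] == s2:
                violations += 1
        return utility - penalty * violations

    population = [np.array([rng.integers(n_candidates[t]) for t in range(n_tasks)])
                  for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(population, key=fitness, reverse=True)[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.choice(len(parents), size=2, replace=False)
            cut = rng.integers(1, n_tasks)                       # one-point crossover
            child = np.concatenate([parents[a][:cut], parents[b][cut:]])
            for t in range(n_tasks):                             # per-gene mutation
                if rng.random() < mutation_rate:
                    child[t] = rng.integers(n_candidates[t])
            children.append(child)
        population = parents + children
    best = max(population, key=fitness)
    return best, fitness(best)

# toy instance: 3 abstract tasks, 3 candidate services each, QoS utility per candidate
quality = [[0.9, 0.5, 0.7], [0.6, 0.8, 0.4], [0.7, 0.6, 0.9]]
dependencies = [((0, 0), (1, 1))]     # task 0 candidate 0 requires task 1 candidate 1
conflicts = [((0, 0), (2, 2))]        # task 0 candidate 0 conflicts with task 2 candidate 2
print(penalty_ga_service_selection(quality, dependencies, conflicts))
```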