975 results for Combinatorial auctions


Relevance:

100.00%

Publisher:

Abstract:

Web services are now a key ingredient of the software services offered by software enterprises, and many standardized web services are available as commodity offerings from web service providers. An important problem for a web service requester is the web service composition problem, which involves selecting the right mix of web service offerings to execute an end-to-end business process. Web service offerings are now available in bundled form as composite web services and, more recently, volume discounts are also on offer, based on the number of web service executions requested. In this paper, we develop efficient algorithms for the web service composition problem in the presence of composite web service offerings and volume discounts. We model this problem as a combinatorial auction with volume discounts and develop efficient polynomial-time algorithms, first for end-to-end services involving a linear workflow of web services and then for those involving a tree workflow.
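The linear-workflow case admits a shortest-path style dynamic program. The sketch below is illustrative only (not the paper's algorithm): it assumes offers are priced bundles of consecutive services in the workflow, ignores volume discounts, and all names and data are hypothetical.

```python
# Sketch: select bundled offers covering a linear workflow of services 1..n
# at minimum total cost, via dynamic programming over workflow prefixes.
# `offers` maps a contiguous service range (start, end) to a price.

def min_cost_linear(n, offers):
    """offers: dict {(start, end): price}, 1-based inclusive ranges."""
    INF = float("inf")
    cost = [0.0] + [INF] * n          # cost[i] = cheapest way to cover 1..i
    for i in range(1, n + 1):
        for (s, e), price in offers.items():
            if e == i and cost[s - 1] + price < cost[i]:
                cost[i] = cost[s - 1] + price
    return cost[n]

# Three services; a bundle for services 1-2 undercuts buying them separately.
offers = {(1, 1): 5.0, (2, 2): 4.0, (3, 3): 3.0, (1, 2): 7.0}
print(min_cost_linear(3, offers))  # 10.0 = bundle(1,2) + service 3
```

Each position of the workflow is closed off by the cheapest admissible offer ending there, so the run time is polynomial in the number of services and offers, in the spirit of the linear-workflow result above.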

Relevance:

100.00%

Publisher:

Abstract:

In settings where resources are perishable and the allocation of resources is repeated over time with the same, or a very similar, set of agents, recurrent auctions can be used. A recurrent auction is a sequence of auctions in which the outcome of one auction can influence the following ones. However, this type of auction suffers from particular problems when the agents' wealth is unbalanced and the resources are perishable. This thesis proposes several fair (equitable) mechanisms to minimize the effects of these problems. In a recurrent auction, a fair solution means that all participants achieve their goals in the long run to the same degree, or to as similar a degree as possible, regardless of their wealth. We have shown experimentally that introducing fairness gives bidders an incentive to remain in the auction, minimizing the problems of recurrent auctions.
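As an illustration of the idea (not the thesis's actual mechanism), the following sketch simulates a recurrent auction in which losing bidders accumulate a fairness credit, so that a low-wealth agent who keeps losing eventually wins; the agents, bids, and credit rule are all assumptions.

```python
# Recurrent auction sketch: losers accumulate a fairness credit that is
# added to their next bid, so persistent losers eventually win a round.

def run_recurrent_auction(bids_per_round, boost=1.0):
    """bids_per_round: list of {agent: bid}. Returns the winner of each round."""
    credit = {}
    winners = []
    for bids in bids_per_round:
        # Effective bid = monetary bid + accumulated fairness credit.
        scored = {a: b + credit.get(a, 0.0) for a, b in bids.items()}
        winner = max(scored, key=scored.get)
        winners.append(winner)
        for a in bids:
            if a == winner:
                credit[a] = 0.0                         # reset on a win
            else:
                credit[a] = credit.get(a, 0.0) + boost  # compensate losers
    return winners

rounds = [{"rich": 10, "poor": 8}] * 4   # identical bids every round
print(run_recurrent_auction(rounds))     # the poor agent wins the last round
```

Without the credit, the wealthy agent would win every round and the other bidder would have no reason to stay, which is exactly the drop-out problem the fairness mechanisms above address.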

Relevance:

100.00%

Publisher:

Abstract:

Combinatorial auction mechanisms have been used in many applications, such as resource and task allocation, planning, and scheduling in multi-agent systems, in which the items to be allocated are complementary or substitutable. Winner determination in combinatorial auctions is itself an NP-complete problem and has attracted considerable attention from researchers worldwide; notable achievements on this topic include the CPLEX and CABOB algorithms. To our knowledge, however, research into multi-unit combinatorial auctions with reserve prices has been largely neglected. To this end, we present a new algorithm for multi-unit combinatorial auctions with reserve prices, based on Sandholm's work, together with an efficient heuristic function. Experimental results show that, with our algorithm, the auctioneer agent can find the optimal solution efficiently for problems of reasonable scale.
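For intuition, here is a toy exhaustive winner determination with reserve prices (the paper's algorithm is a heuristic branch-and-bound in the style of Sandholm's; this brute force is illustrative only). A bid is admissible only if its price covers the total reserve of its bundle, and winning bids must claim disjoint bundles.

```python
from itertools import combinations

# Brute-force winner determination with reserve prices: maximize revenue
# over sets of admissible bids with pairwise-disjoint bundles.

def winner_determination(bids, reserve):
    """bids: list of (bundle, price); reserve: dict {item: reserve price}."""
    admissible = [(set(b), p) for b, p in bids
                  if p >= sum(reserve[i] for i in b)]
    best = (0, [])
    for r in range(1, len(admissible) + 1):
        for combo in combinations(admissible, r):
            bundles = [b for b, _ in combo]
            if sum(len(b) for b in bundles) == len(set().union(*bundles)):
                revenue = sum(p for _, p in combo)
                if revenue > best[0]:
                    best = (revenue, [tuple(sorted(b)) for b, _ in combo])
    return best

reserve = {"A": 2, "B": 2, "C": 2}
bids = [(("A", "B"), 7), (("B", "C"), 5), (("C",), 3), (("A",), 1)]
print(winner_determination(bids, reserve))  # (10, [('A', 'B'), ('C',)])
```

The bid of 1 for item A is filtered out because it falls below A's reserve of 2; the search then finds the revenue-maximal disjoint combination. Exhaustive search is exponential, which is why heuristic branch-and-bound matters at realistic scales.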

Relevance:

80.00%

Publisher:

Abstract:

The procurement of transportation services via large-scale combinatorial auctions involves several complex decisions whose outcomes strongly influence the performance of the tender process. This paper examines the shipper's task of selecting a subset of the submitted bids that efficiently trades off total procurement cost against expected carrier performance. To solve this bi-objective winner determination problem, we propose a Pareto-based greedy randomized adaptive search procedure (GRASP). As a post-optimizer we use a path-relinking procedure hybridized with branch-and-bound. Several variants of this algorithm are evaluated on artificial test instances that comply with important real-world characteristics. The two best variants prove superior to a previously published Pareto-based evolutionary algorithm.
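A core building block of any Pareto-based heuristic is the dominance filter. The sketch below keeps the non-dominated (cost, performance) solutions; it is a generic illustration, not the paper's GRASP, and the data are hypothetical.

```python
# Pareto filter for the bi-objective bid selection problem:
# minimize total procurement cost, maximize expected carrier performance.

def pareto_front(solutions):
    """solutions: list of (cost, performance) tuples."""
    front = []
    for cand in solutions:
        c, p = cand
        dominated = any(c2 <= c and p2 >= p and (c2, p2) != (c, p)
                        for c2, p2 in solutions)
        if not dominated:
            front.append(cand)
    return front

sols = [(100, 0.9), (120, 0.95), (110, 0.85), (130, 0.8)]
print(pareto_front(sols))  # [(100, 0.9), (120, 0.95)]
```

The shipper is then presented with this front rather than a single solution, making the cost/performance trade-off explicit.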

Relevance:

60.00%

Publisher:

Abstract:

This work focuses on obtaining truthful mechanisms that aim at maximizing both revenue and economic efficiency (social welfare) for the unit-demand combinatorial auction problem (UDCAP), in which a set of k items is auctioned to a set of n consumers. Although each consumer bids on all items, no consumer can purchase more than one item in the UDCAP. We present a framework for devising polynomial-time randomized competitive truthful mechanisms that can be tuned to favor either economic efficiency or revenue.
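Since each consumer receives at most one item, the welfare-maximizing allocation in the UDCAP is an assignment problem. The brute-force sketch below illustrates this (real mechanisms add truthful pricing on top; the data are hypothetical).

```python
from itertools import permutations

# Unit-demand welfare maximization: assign at most one item per consumer
# so that the sum of the winning bids is maximized (brute force over
# assignments; polynomial algorithms exist via weighted matching).

def max_welfare(values):
    """values[i][j] = consumer i's bid for item j. Returns (welfare, matching)."""
    n, k = len(values), len(values[0])
    m = min(n, k)
    best = (0, {})
    for perm in permutations(range(k), m):
        w = sum(values[i][perm[i]] for i in range(m))
        if w > best[0]:
            best = (w, {i: perm[i] for i in range(m)})
    return best

values = [[3, 5], [4, 1]]           # two consumers, two items
print(max_welfare(values))          # (9, {0: 1, 1: 0})
```

Here giving item 1 to consumer 0 and item 0 to consumer 1 yields welfare 9, beating the greedy choice of handing each consumer their favorite item.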

Relevance:

60.00%

Publisher:

Abstract:

The operational dock-door assignment problem (Torbelegungsproblem, TBP), e.g. at a distribution or cross-docking centre, is a logistics problem in which arriving and departing vehicles must be assigned, in time and space, to inbound and outbound dock doors so that handling is as cost-efficient as possible. Previous work on the TBP leaves aspects of cooperation out of consideration. This paper presents a procedure that overcomes the drawback of one-sidedly optimal dock assignments. It draws on combinatorial auctions and models the TBP as an allocation problem in which carriers compete for bundles of consecutive unit time intervals at the dock doors. A Vickrey-Clarke-Groves mechanism ensures both the incentive compatibility and the individual rationality of the auction procedure. The procedure was implemented in ILOG OPL Studio 3.6.1, and results obtained on test data show that running times are low enough to employ it for operational (short-term) planning, making transport logistics processes more economical for all parties involved.
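The Vickrey-Clarke-Groves payments mentioned above can be sketched on a toy instance: each carrier bids on one bundle of unit time slots, and a winner pays the welfare loss its presence imposes on the others (the Clarke pivot rule). This is illustrative only; the paper's actual model is implemented in ILOG OPL Studio.

```python
from itertools import combinations

# Toy VCG mechanism for bundle bids on unit time slots at a single door.
# bids: {carrier: (slot_bundle, valuation)}.

def best_welfare(bids, exclude=frozenset()):
    """Welfare-maximizing set of carriers with pairwise-disjoint bundles."""
    agents = [a for a in bids if a not in exclude]
    best = (0, ())
    for r in range(len(agents) + 1):
        for combo in combinations(agents, r):
            slots = [s for a in combo for s in bids[a][0]]
            if len(slots) == len(set(slots)):              # disjoint bundles
                w = sum(bids[a][1] for a in combo)
                if w > best[0]:
                    best = (w, combo)
    return best

def vcg_payment(bids, agent):
    """Clarke pivot: harm the agent's presence causes the other carriers."""
    w_without, _ = best_welfare(bids, exclude={agent})
    w_with, winners = best_welfare(bids)
    if agent not in winners:
        return 0
    others_with = w_with - bids[agent][1]
    return w_without - others_with

bids = {"c1": ((1, 2), 6), "c2": ((2, 3), 4), "c3": ((3,), 3)}
print(best_welfare(bids))        # (9, ('c1', 'c3'))
print(vcg_payment(bids, "c1"))   # 1
```

Charging pivot payments rather than the bids themselves is what makes truthful bidding a dominant strategy, i.e. the incentive compatibility claimed above.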

Relevance:

20.00%

Publisher:

Abstract:

Campylobacter jejuni, followed by Campylobacter coli, contributes substantially to the economic and public health burden attributed to food-borne infections in Australia. Genotypic characterisation of isolates has provided new insights into the epidemiology and pathogenesis of C. jejuni and C. coli. However, currently available methods are not conducive to the large-scale epidemiological investigations necessary to elucidate the global epidemiology of these common food-borne pathogens. This research aims to develop high-resolution C. jejuni and C. coli genotyping schemes that are convenient for high-throughput applications. Real-time PCR and High Resolution Melt (HRM) analysis are fundamental to the genotyping schemes developed in this study and enable rapid, cost-effective interrogation of a range of different polymorphic sites within the Campylobacter genome. While the sources and routes of transmission of campylobacters are unclear, handling and consumption of poultry meat are frequently associated with human campylobacteriosis in Australia. Therefore, chicken-derived C. jejuni and C. coli isolates were used to develop and verify the methods described in this study. The first aim of this study describes the application of MLST-SNP (Multi-Locus Sequence Typing Single Nucleotide Polymorphism) + binary typing to 87 chicken C. jejuni isolates using real-time PCR analysis. These typing schemes were developed previously by our research group using isolates from campylobacteriosis patients. The present study showed that SNP and binary typing, alone or in combination, are effective at detecting epidemiological linkage between chicken-derived Campylobacter isolates and enable data comparison with other MLST-based investigations. SNP + binary types obtained from chicken isolates in this study were compared with a previously SNP + binary and MLST typed set of human isolates.
Common genotypes between the two collections of isolates were identified, and ST-524 represented a clone that could be worth monitoring in the chicken meat industry. In contrast, ST-48, mainly associated with bovine hosts, was abundant in the human isolates but absent in the chicken isolates, indicating the role of non-poultry sources in causing human Campylobacter infections. This demonstrates the potential application of SNP + binary typing for epidemiological investigations and source tracing. While the MLST SNPs and binary genes comprise the more stable backbone of the Campylobacter genome and are indicative of long-term epidemiological linkage of the isolates, Aim 2 of this study describes the development of an HRM-based curve analysis method to interrogate the hypervariable Campylobacter flagellin-encoding gene (flaA). The flaA gene product appears to be an important pathogenicity determinant of campylobacters and is therefore a popular target for genotyping, especially for short-term epidemiological studies such as outbreak investigations. HRM curve analysis of flaA is a single-step, closed-tube method that provides portable data that can easily be shared and accessed. Critical to the development of flaA HRM was the use of flaA-specific primers that did not amplify the flaB gene. HRM curve analysis of flaA successfully discriminated the 47 sequence variants identified within the 87 C. jejuni and 15 C. coli isolates and correlated with the epidemiological background of the isolates. In the combinatorial format, the resolving power of flaA was additive to that of SNP + binary typing and CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) HRM, and fits the PHRANA (Progressive Hierarchical Resolving Assays Using Nucleic Acids) approach to genotyping. The use of statistical methods to analyse the HRM data enhanced the sophistication of the method.
Therefore, flaA HRM is a rapid and cost-effective alternative to gel- or sequence-based flaA typing schemes. Aim 3 of this study describes the development of a novel bioinformatics-driven method to interrogate Campylobacter MLST gene fragments using HRM, called ‘SNP Nucleated Minim MLST’ or ‘Minim typing’. The method involves HRM interrogation of MLST fragments that encompass highly informative ‘nucleating SNPs’ to ensure high resolution. Selection of fragments potentially suited to HRM analysis was conducted in silico using (i) ‘Minimum SNPs’ and (ii) the new ‘HRMtype’ software packages. Species-specific sets of six nucleating SNPs and six HRM fragments were identified for both C. jejuni and C. coli to ensure high typeability and resolution relevant to the MLST database. Minim typing was tested empirically by typing 15 C. jejuni and five C. coli isolates. The clonal complexes (CC) associated with each isolate by Minim typing and by SNP + binary typing were used to compare the two MLST interrogation schemes; the CCs linked with each C. jejuni isolate were consistent for both methods. Thus, Minim typing is an efficient and cost-effective method to interrogate MLST genes. However, it is not expected to be independent of, or to match the resolution of, sequence-based MLST gene interrogation. Minim typing in combination with flaA HRM is envisaged to comprise a highly resolving combinatorial typing scheme developed around the HRM platform, amenable to automation and multiplexing. The genotyping techniques described in this thesis involve the combinatorial interrogation of differentially evolving genetic markers on the unified real-time PCR and HRM platform. They provide high resolution and are simple, cost-effective and ideally suited to rapid, high-throughput genotyping of these common food-borne pathogens.

Relevance:

20.00%

Publisher:

Abstract:

A novel m-ary tree based approach is presented to solve asset management decision problems that are combinatorial in nature. The approach introduces a new dynamic constraint-based control mechanism capable of excluding infeasible solutions from the solution space. The approach also addresses the challenge of ordering asset decisions.
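As an illustration of the idea (the paper's dynamic constraint mechanism is not reproduced here), the sketch below enumerates asset decisions as an m-ary tree and prunes any subtree that violates a budget constraint; the assets, actions, and costs are hypothetical.

```python
# m-ary decision tree over assets: level i chooses one of m actions for
# asset i; a subtree is cut as soon as cumulative cost exceeds the budget.

def enumerate_decisions(costs, budget, level=0, spent=0, path=()):
    """costs[level] = list of action costs for asset `level` (m-ary branching)."""
    if level == len(costs):
        return [path]
    solutions = []
    for action, c in enumerate(costs[level]):
        if spent + c <= budget:                 # prune infeasible subtree
            solutions += enumerate_decisions(costs, budget, level + 1,
                                             spent + c, path + (action,))
    return solutions

# Two assets, three actions each (do-nothing / repair / replace), budget 5.
costs = [[0, 2, 4], [0, 3, 6]]
print(enumerate_decisions(costs, budget=5))
```

Of the nine leaves of the full 3-ary tree, only five survive the constraint, and infeasible branches are never expanded, which is the point of constraint-based control in a combinatorial decision space.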

Relevance:

20.00%

Publisher:

Abstract:

An array of substrates links the tryptic serine protease, kallikrein-related peptidase 14 (KLK14), to physiological functions including desquamation and activation of signaling molecules associated with inflammation and cancer. Recognition of protease cleavage sequences is driven by complementarity between exposed substrate motifs and the physicochemical signature of an enzyme's active-site cleft. However, conventional substrate screening methods have generated conflicting subsite profiles for KLK14. This study utilizes a recently developed screening technique, the sparse matrix library, to identify five novel high-efficiency sequences for KLK14. The optimal sequence, YASR, was cleaved with higher efficiency (kcat/Km = 3.81 ± 0.4 × 10⁶ M⁻¹ s⁻¹) than favored substrates from positional scanning and phage display, by 2- and 10-fold respectively. Binding-site cooperativity was prominent among preferred sequences, which enabled optimal interaction at all subsites, as indicated by predictive modeling of KLK14/substrate complexes. These simulations constitute the first molecular dynamics analysis of KLK14 and offer a structural rationale for the divergent subsite preferences evident between KLK14 and the closely related KLKs, KLK4 and KLK5. Collectively, these findings highlight the importance of binding-site cooperativity in protease substrate recognition, which has implications for the discovery of optimal substrates and the engineering of highly effective protease inhibitors.

Relevance:

20.00%

Publisher:

Abstract:

Secure communications in wireless sensor networks operating under adversarial conditions require providing pairwise (symmetric) keys to sensor nodes. In large-scale deployment scenarios, there is no prior knowledge of the post-deployment network configuration, since nodes may be randomly scattered over a hostile territory. Thus, shared keys must be distributed before deployment to provide each node with a key-chain. For large sensor networks it is infeasible to store a unique key for every other node in a sensor node's key-chain. Consequently, two nodes communicate securely either if they have a key in common in their key-chains and a wireless link between them, or if there is a path, called a key-path, between them on which each pair of neighboring nodes has a key in common. The length of the key-path is the key factor in the efficiency of the design. This paper presents novel deterministic and hybrid approaches based on combinatorial design for deciding how many and which keys to assign to each key-chain before the sensor network deployment. In particular, Balanced Incomplete Block Designs (BIBD) and Generalized Quadrangles (GQ) are mapped to obtain efficient key distribution schemes. Performance and security properties of the proposed schemes are studied both analytically and computationally. Comparison to related work shows that the combinatorial approach produces better connectivity with smaller key-chain sizes.
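The BIBD mapping can be illustrated with the smallest projective plane, the Fano plane, a symmetric (7, 7, 3, 3, 1)-design: seven key-chains of only three keys each, drawn from a pool of seven, such that any two key-chains share exactly one key, so every pair of nodes can communicate directly. (A toy instance; real deployments use much larger designs.)

```python
from itertools import combinations

# The seven lines of the Fano plane on points 0..6, used as key-chains.
# Any two lines of a projective plane meet in exactly one point, so any
# two nodes share exactly one key: key-path length is always one hop.

fano_blocks = [
    {0, 1, 2}, {0, 3, 4}, {0, 5, 6},
    {1, 3, 5}, {1, 4, 6}, {2, 3, 6}, {2, 4, 5},
]

assert all(len(a & b) == 1 for a, b in combinations(fano_blocks, 2))
print("every pair of key-chains shares exactly one key")
```

This is the deterministic guarantee that probabilistic key pre-distribution lacks: connectivity is certain, with a key-chain of size only about the square root of the pool size.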

Relevance:

20.00%

Publisher:

Abstract:

Key distribution is one of the most challenging security issues in wireless sensor networks, where sensor nodes are randomly scattered over a hostile territory. In such a deployment scenario, there is no prior knowledge of the post-deployment configuration. For security solutions requiring pairwise keys, it is impossible to decide how to distribute key pairs to sensor nodes before the deployment. Existing approaches to this problem assign more than one key, namely a key-chain, to each node; key-chains are randomly drawn from a key-pool. Either two neighboring nodes have a key in common in their key-chains, or there is a path, called a key-path, between them on which each pair of neighboring nodes has a key in common. The problem with such a solution is deciding on the key-chain size and key-pool size so that every pair of nodes can establish a session key, directly or through a path, with high probability. The length of the key-path is the key factor in the efficiency of the design. This paper presents novel deterministic and hybrid approaches based on combinatorial design for key distribution. In particular, several block design techniques are considered for generating the key-chains and the key-pools.
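For the random key-pool baseline described above, the chance that two key-chains of size k drawn from a pool of size P share at least one key is p = 1 − C(P−k, k)/C(P, k). A quick sizing sketch (the pool and chain sizes are illustrative):

```python
from math import comb

# Probability that two random key-chains of size `chain_size`, drawn
# without replacement from a pool of `pool_size` keys, intersect.

def share_probability(pool_size, chain_size):
    return 1 - comb(pool_size - chain_size, chain_size) / comb(pool_size, chain_size)

# Larger chains from the same pool connect far more reliably.
print(share_probability(100, 5))    # roughly 0.23
print(share_probability(100, 15))   # well above 0.9
```

Plugging in candidate sizes this way is how the key-chain/key-pool trade-off is tuned in probabilistic schemes, which is the baseline the deterministic block designs improve upon.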

Relevance:

20.00%

Publisher:

Abstract:

Key distribution is one of the most challenging security issues in wireless sensor networks, where sensor nodes are randomly scattered over a hostile territory. In such a deployment scenario, there is no prior knowledge of the post-deployment configuration. For security solutions requiring pairwise keys, it is impossible to decide how to distribute key pairs to sensor nodes before the deployment. Existing approaches to this problem assign more than one key, namely a key-chain, to each node; key-chains are randomly drawn from a key-pool. Either two neighbouring nodes have a key in common in their key-chains, or there is a path, called a key-path, between them on which each pair of neighbouring nodes has a key in common. The problem with such a solution is deciding on the key-chain size and key-pool size so that every pair of nodes can establish a session key, directly or through a path, with high probability. The length of the key-path is the key factor in the efficiency of the design. This paper presents novel deterministic and hybrid approaches based on combinatorial design for key distribution. In particular, several block design techniques are considered for generating the key-chains and the key-pools. Comparison to probabilistic schemes shows that our combinatorial approach produces better connectivity with smaller key-chain sizes.

Relevance:

20.00%

Publisher:

Abstract:

A set system (X, F) with X = {x_1, ..., x_m} and F = {B_1, ..., B_n}, where B_i ⊆ X, is called an (n, m) cover-free set system (or CF set system) if for any 1 ≤ i, j, k ≤ n with j ≠ k, |B_i| > 2|B_j ∩ B_k| + 1. In this paper, we show that CF set systems can be used to construct anonymous membership broadcast schemes (or AMB schemes), allowing a center to broadcast a secret identity among a set of users in such a way that the users can verify whether or not the broadcast message contains their valid identity. Our goal is to construct (n, m) CF set systems in which, for a given m, the value n is as large as possible. We give two constructions for CF set systems, the first from error-correcting codes and the other from combinatorial designs. We link CF set systems to the concept of cover-free families studied by Erdős et al. in the early 1980s to derive bounds on the parameters of CF set systems. We also discuss some possible extensions of the current work, motivated by different applications.
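The cover-free condition is easy to check mechanically. The toy set system below (illustrative, not one of the paper's constructions) has blocks of size 4 with pairwise intersections of at most one element, so |B_i| > 2|B_j ∩ B_k| + 1 holds throughout:

```python
from itertools import combinations

# Check the CF condition over all blocks: it suffices that the smallest
# block is strictly larger than twice the largest pairwise intersection
# plus one.

def is_cover_free(blocks):
    min_size = min(len(b) for b in blocks)
    max_meet = max(len(a & b) for a, b in combinations(blocks, 2))
    return min_size > 2 * max_meet + 1

blocks = [{1, 2, 3, 4}, {1, 5, 6, 7}, {2, 5, 8, 9}]
print(is_cover_free(blocks))  # True
```

Intuitively, no block can be covered by the overlap of two others, which is what lets a user recognize its own identity in the broadcast without revealing it to anyone else.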

Relevance:

20.00%

Publisher:

Abstract:

This paper introduces an integral approach to the study of plasma-surface interactions during the catalytic growth of selected nanostructures (NSs). The approach combines a basic understanding of plasma-specific effects in NS nucleation and growth, theoretical modelling, numerical simulations, plasma diagnostics, and surface microanalysis. Using the example of plasma-assisted growth of surface-supported single-walled carbon nanotubes, we discuss how the combination of these techniques may help improve the outcomes of the growth process. A specific focus is on the effects of nanoscale plasma-surface interactions on NS growth, and on how the available techniques may be used, both in situ and ex situ, to optimize the growth process and the structural parameters of NSs.