127 results for Constitutional guarantees


Relevance: 10.00%

Abstract:

Performance guarantees for online learning algorithms typically take the form of regret bounds, which express that the cumulative loss overhead compared to the best expert in hindsight is small. In the common case of large but structured expert sets, we typically wish to keep the regret especially small compared to simple experts, at the cost of a modest additional overhead compared to more complex ones. We study which such regret trade-offs can be achieved, and how. We analyse the regret with respect to each individual expert as a multi-objective criterion in the simple but fundamental case of absolute loss. We characterise the achievable and Pareto-optimal trade-offs and the corresponding optimal strategies, both exactly for each finite horizon and asymptotically.
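As a hedged illustration of the multi-objective criterion this abstract studies, the per-expert regret can be written as follows; the notation is assumed here for exposition, not taken from the paper:

```latex
% Per-expert regret under absolute loss (symbols assumed, not the paper's):
% the learner predicts p_t, expert k predicts e_t^k, the outcome is x_t.
R_T^k \;=\; \sum_{t=1}^{T} \lvert p_t - x_t \rvert
       \;-\; \sum_{t=1}^{T} \lvert e_t^k - x_t \rvert ,
\qquad k = 1, \dots, K .
% A vector (r_1, \dots, r_K) of regret targets is achievable if some strategy
% guarantees R_T^k <= r_k for all k simultaneously; the abstract concerns the
% Pareto frontier of such vectors.
```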

Relevance: 10.00%

Abstract:

Follow-the-Leader (FTL) is an intuitive sequential prediction strategy that guarantees constant regret in the stochastic setting, but has poor performance for worst-case data. Other hedging strategies have better worst-case guarantees but may perform much worse than FTL if the data are not maximally adversarial. We introduce the FlipFlop algorithm, which is the first method that provably combines the best of both worlds. As a stepping stone for our analysis, we develop AdaHedge, which is a new way of dynamically tuning the learning rate in Hedge without using the doubling trick. AdaHedge refines a method by Cesa-Bianchi, Mansour, and Stoltz (2007), yielding improved worst-case guarantees. By interleaving AdaHedge and FTL, FlipFlop achieves regret within a constant factor of the FTL regret, without sacrificing AdaHedge’s worst-case guarantees. AdaHedge and FlipFlop do not need to know the range of the losses in advance; moreover, unlike earlier methods, both have the intuitive property that the issued weights are invariant under rescaling and translation of the losses. The losses are also allowed to be negative, in which case they may be interpreted as gains.
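A minimal numpy sketch of the AdaHedge idea described above: tune the learning rate from the cumulative mixability gap instead of using the doubling trick. The function name and the handling of the initial infinite learning rate are simplifications, not the paper's definitive algorithm:

```python
import numpy as np

def adahedge(loss_matrix):
    """Simplified AdaHedge sketch: eta is set from the cumulative
    mixability gap Delta, so no doubling trick and no prior knowledge
    of the loss range is needed."""
    T, K = loss_matrix.shape
    L = np.zeros(K)          # cumulative loss of each expert
    Delta = 0.0              # cumulative mixability gap
    learner_loss = 0.0
    for t in range(T):
        ell = loss_matrix[t]
        if Delta == 0.0:
            # eta = infinity: follow the current leader(s)
            w = (L == L.min()).astype(float)
            w /= w.sum()
            h = w @ ell                       # Hedge loss this round
            m = ell[L == L.min()].min()       # limit of the mix loss
        else:
            eta = np.log(K) / Delta
            w = np.exp(-eta * (L - L.min()))  # shift for numerical stability
            w /= w.sum()
            h = w @ ell
            m = -np.log(w @ np.exp(-eta * ell)) / eta   # mix loss
        Delta += h - m       # per-round mixability gap is nonnegative
        learner_loss += h
        L += ell
    return learner_loss
```

Note how the weights depend only on loss differences, which is the source of the translation invariance mentioned in the abstract.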

Relevance: 10.00%

Abstract:

A secure protocol for electronic, sealed-bid, single-item auctions is presented. The protocol caters to both first-price and second-price (Vickrey) auctions and provides full price flexibility. Both the computational and communication costs are linear in the number of bidders, and the protocol uses only standard cryptographic primitives. The protocol strictly divides knowledge of the bidders' identities and their actual bids between, respectively, a registration authority and an auctioneer, who are assumed not to collude but may be separately corrupt. This assures strong bidder anonymity, though only weak bid privacy. The protocol is structured in two phases, each involving only off-line communication. Registration, which requires the use of a public key infrastructure, is simultaneous with hash-sealed bid commitment and generates a receipt to the bidder containing a pseudonym. This phase is followed by encrypted bid submission. Both phases involve the registration authority acting as a communication conduit, but the actual message size is quite small. It is argued that this structure guarantees non-repudiation by both the winner and the auctioneer. Second-price correctness is enforced either by observing the absence of registration of the claimed second-price bid or, where that bid is registered but lower than the actual second price, by cooperation from the second-price bidder, presumably motivated through self-interest. The use of the registration authority in other contexts is also considered, with a view to developing an architecture for efficient secure multiparty transactions.
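A minimal sketch of the hash-sealed bid-commitment step mentioned above, assuming SHA-256 as the hash; the actual protocol additionally involves PKI-based registration, pseudonymous receipts, and the registration authority acting as conduit:

```python
import hashlib
import secrets

def seal_bid(bid: int) -> tuple[bytes, bytes]:
    """Commit to a bid by hashing it with a random nonce. The commitment
    is registered; the nonce stays with the bidder until bid opening."""
    nonce = secrets.token_bytes(32)
    commitment = hashlib.sha256(nonce + bid.to_bytes(8, "big")).digest()
    return commitment, nonce

def open_bid(commitment: bytes, bid: int, nonce: bytes) -> bool:
    """Check an opened bid against its registered commitment; a mismatch
    exposes an attempt to repudiate the sealed bid."""
    return hashlib.sha256(nonce + bid.to_bytes(8, "big")).digest() == commitment
```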

Relevance: 10.00%

Abstract:

A parallel authentication and public-key encryption scheme is introduced and exemplified on joint encryption and signing; it compares favorably with sequential Encrypt-then-Sign (EtS) or Sign-then-Encrypt (StE) schemes as far as both efficiency and security are concerned. A security model for signcryption, and thus joint encryption and signing, has recently been defined which considers possible attacks and security goals. Such a scheme is considered secure if the encryption part guarantees indistinguishability and the signature part prevents existential forgeries, for outsider as well as insider adversaries. We propose two schemes of parallel signcryption, which are efficient alternatives to Commit-then-Sign-and-Encrypt (CtS&E). Both are provably secure in the random oracle model. The first, called generic parallel encrypt and sign, is secure if the encryption scheme is semantically secure against chosen-ciphertext attacks and the signature scheme prevents existential forgeries against random-message attacks. The second, called optimal parallel encrypt and sign, applies random oracles, similar to the OAEP technique, in order to achieve security using encryption and signature components with very weak security requirements: encryption is expected to be one-way under chosen-plaintext attacks, while the signature needs to be secure against universal forgeries under random-plaintext attacks, which is actually the case for both plain RSA encryption and signature under the usual RSA assumption. Both proposals are generic in the sense that any suitable encryption and signature schemes (i.e. ones which simply achieve the required security) can be used. Furthermore, they allow both parallel encryption and signing, as well as parallel decryption and verification. Properties of parallel encrypt-and-sign schemes are considered, and a new security standard for parallel signcryption is proposed.
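A schematic sketch of the parallel pattern the abstract describes: commit to the message, then run the signing and encryption components concurrently. The helper names and the exact split between the signed and encrypted parts are assumptions for illustration, not the paper's concrete construction:

```python
import hashlib
import secrets
from concurrent.futures import ThreadPoolExecutor

def commit(message: bytes) -> tuple[bytes, bytes]:
    """Hash commitment: returns (commitment, opening). Illustrative only."""
    opening = secrets.token_bytes(32)
    return hashlib.sha256(opening + message).digest(), opening

def parallel_encrypt_and_sign(message: bytes, sign, encrypt):
    """sign and encrypt are placeholders for any suitably secure signature
    and public-key encryption schemes. Both run concurrently, unlike the
    sequential EtS and StE compositions."""
    com, opening = commit(message)
    with ThreadPoolExecutor(max_workers=2) as pool:
        sig = pool.submit(sign, com)                  # sign the commitment
        ct = pool.submit(encrypt, opening + message)  # encrypt opening + message
    return ct.result(), sig.result(), com
```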

Relevance: 10.00%

Abstract:

This book analyses the principles underlying the construction and application of a number of boilerplate and other clauses commonly included in commercial contracts. The first Part of the work deals with general principles of interpretation. It then considers clauses which allocate commercial risk; clauses relating to performance; clauses introducing new parties by way of assignment, novation or nomination; clauses such as guarantees and indemnities which create liabilities in third parties; and dispute resolution clauses including governing law. The authors highlight common issues surrounding the application of these clauses in practice and, where appropriate, make drafting recommendations based on their analysis of case law and the operation of relevant statutes. This is a very accessible resource for all commercial practitioners.

Relevance: 10.00%

Abstract:

BACKGROUND: About 1-5% of cancer patients suffer from significant normal tissue reactions as a result of radiotherapy (RT). It is not possible at this time to predict how most patients' normal tissues will respond to RT. DNA repair dysfunction is implicated in sensitivity to RT particularly in genes that mediate the repair of DNA double-strand breaks (DSBs). Phosphorylation of histone H2AX (phosphorylated molecules are known as gammaH2AX) occurs rapidly in response to DNA DSBs, and, among its other roles, contributes to repair protein recruitment to these damaged sites. Mammalian cell lines have also been crucial in facilitating the successful cloning of many DNA DSB repair genes; yet, very few mutant cell lines exist for non-syndromic clinical radiosensitivity (RS). METHODS: Here, we survey DNA DSB induction and repair in whole cells from RS patients, as revealed by gammaH2AX foci assays, as potential predictive markers of clinical radiation response. RESULTS: With one exception, both DNA focus induction and repair in cell lines from RS patients were comparable with controls. Using gammaH2AX foci assays, we identified a RS cancer patient cell line with a novel ionising radiation-induced DNA DSB repair defect; these data were confirmed by an independent DNA DSB repair assay. CONCLUSION: gammaH2AX focus measurement has limited scope as a pre-RT predictive assay in lymphoblast cell lines from RT patients; however, the assay can successfully identify novel DNA DSB repair-defective patient cell lines, thus potentially facilitating the discovery of novel constitutional contributions to clinical RS.

Relevance: 10.00%

Abstract:

The Kyoto Protocol is remarkable among global multilateral environmental agreements for its efforts to depoliticize compliance. However, attempts to create autonomous, arm’s length and rule-based compliance processes with extensive reliance on putatively neutral experts were only partially realized in practice in the first commitment period from 2008 to 2012. In particular, the procedurally constrained facilitative powers vested in the Facilitative Branch were circumvented, and expert review teams (ERTs) assumed pivotal roles in compliance facilitation. The ad hoc diplomatic and facilitative practices engaged in by these small teams of technical experts raise questions about the reliability and consistency of the compliance process. For the future operation of the Kyoto compliance system, it is suggested that ERTs should be confined to more technical and procedural roles, in line with their expertise. There would then be greater scope for the Facilitative Branch to assume a more comprehensive facilitative role, safeguarded by due process guarantees, in accordance with its mandate. However, if – as appears likely – the future compliance trajectories under the United Nations Framework Convention on Climate Change will include a significant role for ERTs without oversight by the Compliance Committee, it is important to develop appropriate procedural safeguards that reflect and shape the various technical and political roles these teams currently play.

Relevance: 10.00%

Abstract:

We consider online trading in a single security with the objective of getting rich when its price ever exhibits a large upcrossing, without risking bankruptcy. We investigate payoff guarantees that are expressed in terms of the extremity of the upcrossings. We obtain an exact and elegant characterisation of the guarantees that can be achieved. Moreover, we derive a simple canonical strategy for each attainable guarantee.
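As a hedged toy illustration of a payoff tied to upcrossings (this is not the paper's canonical strategy, and the thresholds a and b are assumed parameters): a trader who buys at or below a and sells at or above b profits from each completed upcrossing of [a, b], and cannot go bankrupt since it only ever risks its own capital:

```python
def threshold_trader(prices, a, b, capital=1.0):
    """Toy upcrossing strategy (illustrative, not the paper's): go long
    with all capital at prices <= a, liquidate at prices >= b. Wealth
    grows by roughly a factor b/a per completed upcrossing of [a, b]."""
    assert 0 < a < b
    units = 0.0
    for p in prices:
        if units == 0.0 and p <= a:
            units = capital / p       # buy in
            capital = 0.0
        elif units > 0.0 and p >= b:
            capital = units * p       # sell out
            units = 0.0
    return capital + units * prices[-1]   # final mark-to-market wealth
```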

Relevance: 10.00%

Abstract:

Most standard algorithms for prediction with expert advice depend on a parameter called the learning rate. This learning rate needs to be large enough to fit the data well, but small enough to prevent overfitting. For the exponential weights algorithm, a sequence of prior work has established theoretical guarantees for higher and higher data-dependent tunings of the learning rate, which allow for increasingly aggressive learning. But in practice such theoretical tunings often still perform worse (as measured by their regret) than ad hoc tuning with an even higher learning rate. To close the gap between theory and practice, we introduce an approach to learn the learning rate. Up to a factor that is at most (poly)logarithmic in the number of experts and the inverse of the learning rate, our method performs as well as if we knew the empirically best learning rate from a large range that includes both conservative small values and values that are much higher than those for which formal guarantees were previously available. Our method employs a grid of learning rates, yet runs in linear time regardless of the size of the grid.
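A schematic sketch of the grid idea from the abstract; this naive version simply reruns exponential weights once per grid point and reports the best in hindsight, whereas the paper's method aggregates the grid online and runs in linear time regardless of grid size. All parameter values are assumptions:

```python
import numpy as np

def best_eta_on_grid(loss_matrix, eta_min=0.01, eta_max=10.0, factor=2.0):
    """Run exponential weights for each learning rate on a geometric grid
    and return the empirically best one (hindsight-only illustration)."""
    T, K = loss_matrix.shape
    results = {}
    eta = eta_min
    while eta <= eta_max:
        L = np.zeros(K)       # cumulative expert losses
        total = 0.0           # learner's loss at this learning rate
        for t in range(T):
            w = np.exp(-eta * (L - L.min()))   # shifted for stability
            w /= w.sum()
            total += w @ loss_matrix[t]
            L += loss_matrix[t]
        results[eta] = total
        eta *= factor
    best = min(results, key=results.get)
    return best, results[best]
```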

Relevance: 10.00%

Abstract:

A number of online algorithms have been developed that have small additional loss (regret) compared to the best “shifting expert”. In this model, there is a set of experts and the comparator is the best partition of the trial sequence into a small number of segments, where the expert of smallest loss is chosen in each segment. The regret is typically defined for worst-case data/loss sequences. There has been a recent surge of interest in online algorithms that combine good worst-case guarantees with much improved performance on easy data. A practically relevant class of easy data is the case when the loss of each expert is iid and the best and second-best experts have a gap between their mean losses. In the full information setting, the FlipFlop algorithm by De Rooij et al. (2014) combines the best of the iid-optimal Follow-The-Leader (FTL) and the worst-case-safe Hedge algorithms, whereas in the bandit information case SAO by Bubeck and Slivkins (2012) competes with the iid-optimal UCB and the worst-case-safe EXP3. We ask the same question for the shifting expert problem. First, we ask what simple and efficient algorithms exist for the shifting experts problem when the loss sequence in each segment is iid with respect to a fixed but unknown distribution. Second, we ask how to efficiently unite the performance of such algorithms on easy data with worst-case robustness. A particularly intriguing open problem is the case when the comparator shifts within a small subset of experts from a large set, under the assumption that the losses in each segment are iid.
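For context, the standard baseline for the shifting-expert comparator discussed above is the Fixed-Share update of Herbster and Warmuth (1998): exponential weights followed by a small uniform mixing step. A minimal sketch, with eta and alpha as illustrative values rather than tuned choices:

```python
import numpy as np

def fixed_share(loss_matrix, eta=0.5, alpha=0.01):
    """Fixed-Share: the mixing step keeps every expert's weight above
    alpha / K, so the algorithm can recover quickly after a shift."""
    T, K = loss_matrix.shape
    w = np.full(K, 1.0 / K)
    total = 0.0
    for t in range(T):
        total += w @ loss_matrix[t]            # learner's loss this round
        w = w * np.exp(-eta * loss_matrix[t])  # exponential-weights update
        w /= w.sum()
        w = (1 - alpha) * w + alpha / K        # share step handles shifts
    return total
```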

Relevance: 10.00%

Abstract:

Australian Commercial Law offers a concise yet comprehensive introduction to commercial law in Australia. The textbook provides a thorough and detailed discussion of a variety of topics in commercial law such as agency, bailment, the sale of goods, the transfer of property and the Personal Property Securities Act. The book also offers a detailed overview of topics within the Australian Consumer Law that are now relevant to commercial practice such as unconscionable conduct, consumer guarantees, and misleading and deceptive conduct. Written in a clear and accessible style, each chapter features key points and further reading to enhance students' understanding. Significant cases are discussed in detail and include excerpts from judgments to illustrate points of law. Australian Commercial Law is an indispensable resource for students who are seeking a comprehensive understanding of commercial law.

Relevance: 10.00%

Abstract:

Particle swarm optimization (PSO), a population-based algorithm, has recently been used on multi-robot systems. Although this algorithm is applied to solve many optimization problems as well as multi-robot systems, it has some drawbacks when applied to multi-robot search systems that must find a target in a search space containing big static obstacles. One of these defects is premature convergence: a property of basic PSO is that particles spread across a search space tend, as time increases, to converge in a small area. This shortcoming is also evident in a multi-robot search system, particularly when big static obstacles in the search space prevent the robots from finding the target easily; as time increases, the robots converge to a small area that may not contain the target and become entrapped in that area. Another shortcoming is that basic PSO cannot guarantee the global convergence of the algorithm. In other words, particles initially explore different areas, but in some cases they are not good at exploiting promising areas, which increases the search time. This study proposes a method based on the PSO technique for a multi-robot system to find a target in a search space containing big static obstacles. This method not only overcomes the premature convergence problem but also establishes an efficient balance between exploration and exploitation and guarantees global convergence, reducing the search time by combining with a local search method such as A-star. To validate the effectiveness and usefulness of the algorithms, a simulation environment has been developed for conducting simulation-based experiments in different scenarios and for reporting experimental results. These results demonstrate that the proposed method overcomes the premature convergence problem and guarantees global convergence.
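A minimal sketch of the basic PSO update the abstract builds on (not the proposed multi-robot variant); it also shows where premature convergence comes from, since every velocity update pulls particles toward the personal and global bests. All constants are conventional illustrative values:

```python
import numpy as np

def basic_pso(f, dim=2, n_particles=20, iters=100,
              w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic PSO: velocities are drawn toward pbest and gbest, which is
    exactly the mechanism that makes the swarm collapse into a small
    area over time (the premature convergence discussed above)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # positions
    v = np.zeros_like(x)                             # velocities
    pbest = x.copy()                                 # personal bests
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()         # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better] = x[better]
        pbest_val[better] = vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()
```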

Relevance: 10.00%

Abstract:

In 2012, the High Court of Australia handed down a landmark decision on the plain packaging of tobacco products. This chapter considers the historic ruling in the case of JT International SA v Commonwealth; British American Tobacco Australasia Ltd v Commonwealth. The chapter explores several themes in the decision. First, it highlights the historical work by the High Court of Australia on the role of health regulation, the use of health warnings, and tobacco control. Second, it considers the High Court of Australia's view that intellectual property law promotes the public interest. Third, it explores the High Court of Australia's analysis of the constitutional law on acquisition of property on just terms. Finally, this chapter contends that the High Court of Australia's ruling on plain packaging of tobacco products will spark an 'Olive Revolution', encouraging superior courts and policy-makers to follow suit.

Relevance: 10.00%

Abstract:

This paper considers the relationship between patent law and plant breeders' rights in light of modern developments in biotechnology. It examines how a number of superior courts have sought to manage the tensions and conflicts between these competing schemes of intellectual property protection. Part 1 considers the High Court of Australia case of Grain Pool of Western Australia v the Commonwealth dealing with Franklin barley. Part 2 examines the significance of the Supreme Court of the United States decision in JEM Ag Supply Inc v Pioneer Hi-Bred International Inc with respect to utility patents and hybrid seed. Part 3 considers the Supreme Court of Canada case of Harvard College v the Commissioner of Patents dealing with the transgenic animal, oncomouse, and discusses its implications for the forthcoming appeal from the Federal Court case of Percy Schmeiser v Monsanto.

Relevance: 10.00%

Abstract:

Whereas Lessig's recent work engages with questions of culture and creativity in society, this paper looks at the role of culture and creativity in the law. The paper evaluates the Napster, DeCSS, Felten and Sklyarov litigation in terms of the new social, legal, economic and cultural relations being produced. This involves a deep discussion of law's economic relations, and the implications of this for litigation strategy. The paper concludes with a critique of recent attempts to define copyright law in terms of first amendment rights and communicative freedom.