918 results for Random Regret Minimization


Relevance: 20.00%

Abstract:

Currently, open-circuit Bayer refineries pump seawater directly into their operations to neutralize the caustic fraction of the Bayer residue. The resulting supernatant has a reduced pH and is pumped back to the marine environment. This investigation assessed modified seawater sources generated by nanofiltration processes to compare their relative capacities to neutralize bauxite residues. The chemical stability of the neutralization products, neutralization efficiency, discharge water quality, bauxite residue composition, and associated economic benefits were considered to determine the most preferable seawater filtration process based on implementation costs, savings to operations, and environmental benefits. The mechanism of neutralization for each technology was determined to be predominantly the formation of Bayer hydrotalcite and calcium carbonate; however, variations in neutralization capacity and efficiency were observed. The neutralization efficiency of each feed source was found to depend on the concentrations of magnesium, aluminium, calcium, and carbonate. Nanofiltered seawater with approximately double the magnesium and calcium content required half the volume of seawater to achieve the same degree of neutralization. Characterization techniques such as X-ray diffraction (XRD), infrared (IR) spectroscopy, and inductively coupled plasma optical emission spectroscopy (ICP-OES) revealed that multiple neutralization steps occur throughout the process.
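
The reported scaling (roughly double the magnesium and calcium, half the seawater volume) amounts to a simple inverse-proportionality rule. A minimal sketch, assuming neutralization capacity scales linearly with the combined Mg/Ca enrichment factor; the function name and figures are illustrative, not from the study:

```python
def required_volume(base_volume_m3: float, enrichment_factor: float) -> float:
    """Estimate the seawater volume needed for a fixed degree of
    neutralization, assuming (as the abstract's observation suggests)
    that capacity scales linearly with the Mg + Ca enrichment factor
    relative to untreated seawater."""
    return base_volume_m3 / enrichment_factor

# Untreated seawater needing 1000 m^3 -> nanofiltered (2x Mg/Ca) needs ~500 m^3
print(required_volume(1000.0, 2.0))
```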

Relevance: 20.00%

Abstract:

Performance guarantees for online learning algorithms typically take the form of regret bounds, which express that the cumulative loss overhead compared to the best expert in hindsight is small. In the common case of large but structured expert sets, we typically wish to keep the regret especially small compared to simple experts, at the cost of a modest additional overhead compared to more complex ones. We study which such regret trade-offs can be achieved, and how. We analyse the regret with respect to each individual expert as a multi-objective criterion in the simple but fundamental case of absolute loss. We characterise the achievable and Pareto-optimal trade-offs, and the corresponding optimal strategies for each sample size, both exactly for finite horizons and asymptotically.
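
For concreteness, here is a minimal sketch of how per-expert regret arises in this setting: an exponentially weighted average forecaster under absolute loss, with the cumulative regret reported separately against each expert. The learning rate and data are illustrative; the paper's optimal strategies are not this forecaster.

```python
import numpy as np

def ewa_per_expert_regret(preds, outcomes, eta=0.5):
    """Exponentially weighted average forecaster under absolute loss;
    returns the cumulative regret against each individual expert."""
    K, T = preds.shape
    w = np.ones(K) / K                       # uniform prior over experts
    learner, experts = 0.0, np.zeros(K)
    for t in range(T):
        learner += abs(w @ preds[:, t] - outcomes[t])
        losses = np.abs(preds[:, t] - outcomes[t])
        experts += losses
        w *= np.exp(-eta * losses)           # multiplicative weight update
        w /= w.sum()
    return learner - experts                 # one regret value per expert

rng = np.random.default_rng(0)
preds = rng.random((3, 200))                 # 3 experts, 200 rounds
outcomes = rng.integers(0, 2, 200)
print(ewa_per_expert_regret(preds, outcomes))
```

A non-uniform prior shifts the whole regret vector: placing more initial weight on a simple expert buys a smaller regret against it at the cost of larger regret against the others, which is exactly the kind of trade-off the paper characterises.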

Relevance: 20.00%

Abstract:

In Crypto '95, Micali and Sidney proposed a method for the shared generation of a pseudo-random function f(·) among n players in such a way that, for all inputs x, any u players can compute f(x) while t or fewer players fail to do so, where 0 ≤ t < u ≤ n. The idea behind the Micali–Sidney scheme is to generate and distribute secret seeds S = {s1, . . . , sd} of a poly-random collection of functions among the n players, each player receiving a subset of S, in such a way that any u players together hold all the secret seeds in S while any t or fewer players lack at least one element of S. The pseudo-random function is then computed as f(x) = f_s1(x) ⊕ · · · ⊕ f_sd(x), where the f_si(·)'s are poly-random functions. One question raised by Micali and Sidney is how to distribute the secret seeds satisfying the above condition such that the number of seeds, d, is as small as possible. In this paper, we continue the work of Micali and Sidney. We first provide a general framework for the shared generation of pseudo-random functions using cumulative maps. We demonstrate that the Micali–Sidney scheme is a special case of this general construction. We then derive an upper and a lower bound for d. Finally, we give a simple, yet efficient, greedy approximation algorithm for generating the secret seeds S, for which d is within a factor of at most u ln 2 of the optimum.
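
To make the threshold structure concrete, here is a toy sketch, not the paper's cumulative-map construction or greedy algorithm: one seed per t-subset of players, handed to every player outside that subset, with HMAC-SHA256 standing in for the poly-random functions. This naive assignment uses d = C(n, t) seeds; the point of the paper's bounds and greedy algorithm is to get d much closer to optimal.

```python
import hashlib, hmac, os
from functools import reduce
from itertools import combinations

def distribute_seeds(n, t):
    """One fresh seed per t-subset B of players, given to every player
    outside B: any t players jointly miss 'their own' seed, while any
    t+1 or more players jointly hold all of them."""
    holdings = {p: {} for p in range(n)}
    for B in combinations(range(n), t):
        seed = os.urandom(16)
        for p in range(n):
            if p not in B:
                holdings[p][B] = seed
    return holdings

def shared_prf(players, holdings, x):
    """f(x) = XOR over all jointly held seeds s of f_s(x); HMAC-SHA256
    stands in for the poly-random functions f_s."""
    seeds = {}
    for p in players:
        seeds.update(holdings[p])
    outs = [hmac.new(s, x, hashlib.sha256).digest() for s in seeds.values()]
    return reduce(lambda a, b: bytes(i ^ j for i, j in zip(a, b)), outs)

holdings = distribute_seeds(n=5, t=2)
x = b"some input"
# any u = 3 players agree on f(x); any 2 players are missing one seed
assert shared_prf([0, 1, 2], holdings, x) == shared_prf([2, 3, 4], holdings, x)
print(shared_prf([0, 1, 2], holdings, x).hex())
```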

Relevance: 20.00%

Abstract:

This study analyses and compares the cost efficiency of Japanese steam power generation companies using fixed and random Bayesian frontier models. We show that it is essential to account for heterogeneity when modelling the performance of energy companies. Results from the model estimation also indicate that restricting CO2 emissions can lead to a decrease in total cost. Finally, the study discusses the efficiency variations between the energy companies under analysis and elaborates on the managerial and policy implications of the results.
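
The abstract does not spell out the specification, but a Bayesian cost frontier is typically a regression with an added non-negative inefficiency term. A minimal sketch in PyMC with an exponential inefficiency distribution and simulated data; all priors, variable names, and figures here are illustrative assumptions, not the paper's model:

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
n, k = 40, 2
X = rng.normal(size=(n, k))                       # e.g. log output, log fuel price
y = X @ np.array([0.6, 0.4]) + rng.exponential(0.3, n) + rng.normal(0, 0.1, n)

with pm.Model() as cost_frontier:
    beta = pm.Normal("beta", 0.0, 1.0, shape=k)   # frontier coefficients
    sigma_v = pm.HalfNormal("sigma_v", 0.5)       # symmetric noise
    lam = pm.Gamma("lam", 2.0, 2.0)
    u = pm.Exponential("u", lam, shape=n)         # firm inefficiency, u >= 0
    pm.Normal("ln_cost", pm.math.dot(X, beta) + u, sigma_v, observed=y)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

# posterior mean inefficiency per firm -> a cost-efficiency ranking
print(idata.posterior["u"].mean(dim=("chain", "draw")).values)
```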

Relevance: 20.00%

Abstract:

Live migration of multiple Virtual Machines (VMs) has become an integral management activity in data centers, used for power saving, load balancing, and system maintenance. While state-of-the-art live migration techniques focus on improving the migration performance of a single, independent VM, little work has investigated the live migration of multiple interacting VMs. Live migration is strongly influenced by network bandwidth, and arbitrarily migrating a VM that has data inter-dependencies with other VMs may increase bandwidth consumption and adversely affect the performance of subsequent migrations. In this paper, we propose a Random Key Genetic Algorithm (RKGA) that efficiently schedules the migration of a given set of VMs, accounting for both inter-VM dependencies and the data center communication network. The experimental results show that the RKGA schedules the migration of multiple VMs with significantly shorter total migration time and total downtime than a heuristic algorithm.
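
In a random-key GA, each chromosome is a vector of floats in [0, 1) that decodes to a permutation by sorting, so standard crossover always yields a valid migration order. A minimal sketch with a placeholder fitness; the dependency data and all parameters are hypothetical, and the paper's objective combines total migration time and downtime over the actual network:

```python
import random

def decode(keys):
    # sort VM indices by their random key -> a migration order (always valid)
    return sorted(range(len(keys)), key=lambda i: keys[i])

def evolve(fitness, n_vms, pop=50, elite=10, mutants=10, gens=200):
    P = [[random.random() for _ in range(n_vms)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda c: fitness(decode(c)))
        nxt = P[:elite]                                  # carry elites over
        nxt += [[random.random() for _ in range(n_vms)] for _ in range(mutants)]
        while len(nxt) < pop:
            a, b = random.choice(P[:elite]), random.choice(P)
            # biased uniform crossover: prefer the elite parent's keys
            nxt.append([x if random.random() < 0.7 else y for x, y in zip(a, b)])
        P = nxt
    return decode(min(P, key=lambda c: fitness(decode(c))))

deps = {2: [0], 3: [1, 2]}    # hypothetical: VM 3 talks to VMs 1 and 2, ...
def fitness(order):           # placeholder: count dependency-violating positions
    pos = {vm: i for i, vm in enumerate(order)}
    return sum(1 for vm, ds in deps.items() for d in ds if pos[vm] < pos[d])

print(evolve(fitness, n_vms=4))
```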

Relevance: 20.00%

Abstract:

A computationally efficient sequential Monte Carlo algorithm is proposed for the sequential design of experiments for the collection of block data described by mixed effects models. The difficulty in applying a sequential Monte Carlo algorithm in such settings is the need to evaluate the observed-data likelihood, which is typically intractable for all but linear Gaussian models. To overcome this difficulty, we propose to estimate the likelihood unbiasedly, and to perform inference and make decisions based on an exact-approximate algorithm. Two estimators are proposed: one using quasi-Monte Carlo methods and one using the Laplace approximation with importance sampling. Both approaches can be computationally expensive, so we propose exploiting parallel computational architectures to ensure designs can be derived in a timely manner. We also extend our approach to allow for model uncertainty. This research is motivated by important pharmacological studies related to the treatment of critically ill patients.
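
The intractable observed-data likelihood integrates out the random effects; replacing it with an unbiased Monte Carlo average is the basis of such exact-approximate methods. A minimal sketch for a toy random-intercept logistic model; the model and names are illustrative, and the paper's estimators would use quasi-Monte Carlo points or a Laplace-based importance proposal instead of plain draws:

```python
import numpy as np

def loglik_hat(beta, tau, y, x, M=2000, seed=0):
    """Log of an unbiased estimate of p(y | beta, tau) for a random-intercept
    logistic model, integrating the random effect b ~ N(0, tau^2) out by
    plain Monte Carlo: p(y) ~= mean over m of p(y | b_m)."""
    rng = np.random.default_rng(seed)
    b = rng.normal(0.0, tau, size=M)                 # M draws of the intercept
    eta = beta * x[None, :] + b[:, None]             # M x n linear predictors
    p = 1.0 / (1.0 + np.exp(-eta))
    lik = np.prod(np.where(y == 1, p, 1.0 - p), axis=1)
    return np.log(lik.mean())

x = np.linspace(-1, 1, 10)
y = (x + 0.2 > 0).astype(int)
print(loglik_hat(beta=1.0, tau=0.5, y=y, x=x))
```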

Relevance: 20.00%

Abstract:

Public acceptance is consistently identified as having an enormous impact on the implementation and success of a congestion charge scheme. This paper investigates public acceptance of such a scheme in Australia. Surveys were conducted in Brisbane and Melbourne, the two fastest-growing Australian cities. Using an ordered logit modeling approach, the survey data, including stated preferences, were analyzed to pinpoint the important factors influencing people's attitudes to a congestion charge and, in turn, their transport mode choices. To account for the heterogeneity inherent in the panel data, random effects were included in the models. As expected, this study found that the amount of the congestion charge and the financial benefits of implementing it have a significant influence on respondents' support for the charge and on the likelihood of their taking a bus to city areas. However, respondents' current primary transport mode for travelling to the city has a more pronounced impact. Respondents' perceptions of the congestion charge's role in protecting the environment by reducing vehicle emissions, and of the extent to which the charge would make them travel less frequently to the city for shopping or entertainment, also have a significant impact on their level of support for its implementation. We also found and explained notable differences between the two cities. Finally, the findings are discussed in relation to the literature.
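
As an illustration of the modeling approach (not the paper's data or specification), here is a pooled ordered logit on a synthetic survey frame; all variable names and coefficients are hypothetical, and statsmodels' OrderedModel does not fit random effects, so the panel structure is ignored in this sketch:

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "charge": rng.uniform(2, 10, n),          # hypothetical charge amount ($)
    "env_belief": rng.integers(0, 2, n),      # believes the charge cuts emissions
    "car_user": rng.integers(0, 2, n),        # currently drives to the city
})
# latent support with logistic noise, discretised into four ordered levels
latent = -0.3 * df["charge"] + 1.0 * df["env_belief"] - 0.8 * df["car_user"]
latent += rng.logistic(size=n)
df["support"] = pd.cut(latent, [-np.inf, -3, -1.5, 0, np.inf],
                       labels=[0, 1, 2, 3])   # ordered categorical outcome

model = OrderedModel(df["support"], df[["charge", "env_belief", "car_user"]],
                     distr="logit")
res = model.fit(method="bfgs", disp=False)
print(res.params)
```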

Relevance: 20.00%

Abstract:

Active learning approaches reduce the annotation cost required by traditional supervised approaches to reach the same effectiveness, by actively selecting informative instances during the learning phase. However, the effectiveness and robustness of the learnt models are influenced by a number of factors. In this paper we investigate the factors that affect the effectiveness, more specifically the stability and robustness, of active learning models built using conditional random fields (CRFs) for information extraction applications. Stability, defined as a small variation in performance when the training data or the parameters vary slightly, is a major issue for machine learning models, but even more so in the active learning framework, which aims to minimise the amount of training data required. The factors we investigate are (a) the choice of incremental vs. standard active learning, (b) the feature set used as a representation of the text (i.e., morphological, syntactic, or semantic features), and (c) the Gaussian prior variance, one of the important CRF parameters. Our empirical findings show that incremental learning and the Gaussian prior variance lead to more stable and robust models across iterations. Our study also demonstrates that orthographic, morphological, and contextual features, as a group of basic features, play an important role in learning effective models across all iterations.
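
A minimal sketch of the pool-based active learning loop under discussion, using least-confidence sampling. As an assumption for brevity, scikit-learn's LogisticRegression stands in for a CRF (its regularisation strength C plays a role analogous to the CRF Gaussian prior variance), and the model is retrained from scratch each round; the incremental variant would update it instead:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def uncertainty_sampling(X_pool, y_pool, X_seed, y_seed, rounds=10, batch=20):
    """Pool-based active learning with least-confidence sampling."""
    X_train, y_train = X_seed, y_seed
    for _ in range(rounds):
        clf = LogisticRegression(C=1.0, max_iter=1000).fit(X_train, y_train)
        conf = clf.predict_proba(X_pool).max(axis=1)   # top-class confidence
        pick = np.argsort(conf)[:batch]                # least confident first
        X_train = np.vstack([X_train, X_pool[pick]])
        y_train = np.concatenate([y_train, y_pool[pick]])
        keep = np.setdiff1d(np.arange(len(X_pool)), pick)
        X_pool, y_pool = X_pool[keep], y_pool[keep]
    return clf

X, y = make_classification(n_samples=1200, n_features=20, random_state=0)
clf = uncertainty_sampling(X[100:], y[100:], X[:100], y[:100])
print(clf.score(X[:100], y[:100]))
```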

Relevance: 20.00%

Abstract:

With the overwhelming increase in the amount of data on the web and in databases, many text mining techniques have been proposed for mining useful patterns in text documents. Extracting closed sequential patterns using the Pattern Taxonomy Model (PTM) is one pruning method for removing noisy, inconsistent, and redundant patterns. However, the PTM treats each extracted pattern as a whole, without considering its constituent terms, which can affect the quality of the extracted patterns. This paper proposes an innovative and effective method that extends random sets to accurately weight patterns based on their distribution in the documents and the distribution of terms within patterns. The proposed approach then finds the specific closed sequential patterns (SCSP) based on the newly calculated weights. Experimental results on the Reuters Corpus Volume 1 (RCV1) data collection and TREC topics show that the proposed method significantly outperforms other state-of-the-art methods on several popular measures.
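
To illustrate the general idea of weighting a pattern by the distribution of its terms rather than treating it as a whole, here is a toy scheme; it is not the paper's random-set formulation, and the example documents are made up:

```python
from collections import Counter

def pattern_weights(docs, patterns):
    """Toy term-distribution weighting: a pattern's score in a supporting
    document is the combined relative frequency of its terms; the final
    weight averages these scores over all documents."""
    weights = {}
    for pat in patterns:
        scores = []
        for doc in docs:
            tf = Counter(doc)
            if all(t in tf for t in pat):        # document supports the pattern
                scores.append(sum(tf[t] for t in pat) / sum(tf.values()))
        weights[pat] = sum(scores) / len(docs) if scores else 0.0
    return weights

docs = [["data", "mining", "text"], ["text", "pattern", "mining", "mining"]]
print(pattern_weights(docs, [("text", "mining"), ("data",)]))
```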