91 results for Random Regret Minimization
at Queensland University of Technology - ePrints Archive
Abstract:
We study the regret of optimal strategies for online convex optimization games. Using von Neumann's minimax theorem, we show that the optimal regret in this adversarial setting is closely related to the behavior of the empirical minimization algorithm in a stochastic process setting: it is equal to the maximum, over joint distributions of the adversary's action sequence, of the difference between a sum of minimal expected losses and the minimal empirical loss. We show that the optimal regret has a natural geometric interpretation, since it can be viewed as the gap in Jensen's inequality for a concave functional (the minimizer over the player's actions of expected loss) defined on a set of probability distributions. We use this expression to obtain upper and lower bounds on the regret of an optimal strategy for a variety of online learning problems. Our method provides upper bounds without the need to construct a learning algorithm; the lower bounds provide explicit optimal strategies for the adversary.
Peter L. Bartlett, Alexander Rakhlin
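As a sketch of the quantity under study (the notation here is assumed, not taken from the abstract): with player actions a_t drawn from a convex set A, adversary actions z_t, and a loss ℓ convex in its first argument, the regret after T rounds is

\[
R_T = \sum_{t=1}^{T} \ell(a_t, z_t) - \min_{a \in A} \sum_{t=1}^{T} \ell(a, z_t),
\]

and the characterization described above equates the minimax value of this game with the supremum, over joint distributions P of the adversary's sequence (z_1, ..., z_T), of

\[
\mathbb{E}_{P}\left[ \sum_{t=1}^{T} \inf_{a \in A} \mathbb{E}\big[\ell(a, z_t) \mid z_1, \ldots, z_{t-1}\big] - \min_{a \in A} \sum_{t=1}^{T} \ell(a, z_t) \right],
\]

i.e. a sum of minimal (conditionally) expected losses minus the minimal empirical loss.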
Abstract:
We investigate the behavior of the empirical minimization algorithm using various methods. We first analyze it by comparing the empirical (random) structure on the class with the original one, either in an additive sense, via the uniform law of large numbers, or in a multiplicative sense, using isomorphic coordinate projections. We then show that a direct analysis of the empirical minimization algorithm yields a significantly better bound, and that the estimates we obtain are essentially sharp. The method of proof we use is based on Talagrand’s concentration inequality for empirical processes.
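For reference, a sketch of the algorithm being analysed, in notation assumed here rather than taken from the abstract: given an i.i.d. sample X_1, ..., X_n and a class F of loss functions, empirical minimization returns

\[
\hat{f}_n \in \operatorname*{arg\,min}_{f \in F} \frac{1}{n} \sum_{i=1}^{n} f(X_i),
\]

and the object being bounded is its excess expected loss \( \mathbb{E}\hat{f}_n(X) - \inf_{f \in F} \mathbb{E}f(X) \).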
Abstract:
We study the influence of the choice of template in tensor-based morphometry. Using 3D brain MR images from 10 monozygotic twin pairs, we defined a tensor-based distance in the log-Euclidean framework [1] between each image pair in the study. Relative to this metric, twin pairs were found to be closer to each other on average than random pairings, consistent with evidence that brain structure is under strong genetic control. We also computed the intraclass correlation and the associated permutation p-value at each voxel for the determinant of the Jacobian matrix of the transformation. The cumulative distribution function (CDF) of these p-values was computed for each of the templates and compared to the null distribution. Surprisingly, there was very little difference between the CDFs of statistics computed from analyses using different templates. As the brain with the least log-Euclidean deformation cost, the mean template defined here avoids the blurring caused by creating a synthetic image from a population and, when selected from a large population, avoids bias by being geometrically centered, in a metric sensitive enough to anatomical similarity to detect even genetic affinity among anatomies.
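As a sketch of the metric referred to (notation assumed here): for symmetric positive-definite tensors S_1 and S_2, such as deformation tensors derived from the Jacobian of the registration, the log-Euclidean distance of [1] is

\[
d(S_1, S_2) = \left\| \log S_1 - \log S_2 \right\|_F,
\]

where log denotes the matrix logarithm and \( \|\cdot\|_F \) the Frobenius norm; a tensor-based distance between two images can then be obtained by aggregating this quantity over voxels.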
Abstract:
Channel measurements and simulations have been carried out to observe the effects of pedestrian movement on multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) channel capacity. An in-house built MIMO-OFDM packet transmission demonstrator equipped with four transmitters and four receivers has been utilized to perform channel measurements at 5.2 GHz. Variations in the channel capacity dynamic range have been analysed for 1 to 10 pedestrians and different antenna arrays (2 × 2, 3 × 3 and 4 × 4). Results show a predicted 5.5 bits/s/Hz and a measured 1.5 bits/s/Hz increase in the capacity dynamic range with the number of pedestrians and the number of antennas in the transmitter and receiver arrays.
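For context, the capacity whose dynamic range is tracked is typically computed from the measured channel matrix H (N_r receive by N_t transmit antennas, per OFDM subcarrier); assuming equal power allocation at signal-to-noise ratio ρ, a standard expression is

\[
C = \log_2 \det\left( I_{N_r} + \frac{\rho}{N_t} H H^{\mathsf{H}} \right) \quad \text{bits/s/Hz},
\]

and the dynamic range is the spread of C over the measurement period as pedestrians move; the exact definition used in the measurements may differ.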
Abstract:
Damage localization induced by strain softening can be predicted by the direct minimization of a global energy function. This article concerns the computational strategy for implementing this principle for softening materials such as concrete. Instead of using heuristic global optimization techniques, our strategy is a hybrid of local optimization methods with a path-finding approach to ensure a global optimum. With admissible nodal displacements as the independent variables, it is easy to deal with the geometric (mesh) constraint conditions. The direct search optimization methods recover the localized solutions for a range of softening lattice models which are representative of quasi-brittle structures.
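A minimal illustrative sketch of the principle, assuming a 1D chain of unit-length softening springs with a prescribed end displacement and using an off-the-shelf local optimizer from SciPy (the paper's hybrid local-optimization/path-finding strategy and its lattice models are not reproduced here):

import numpy as np
from scipy.optimize import minimize

# Assumed softening element energy: quadratic up to a peak strain,
# then an exponentially flattening (softening) branch. Illustrative only.
E, EPS_P = 1.0, 1.0
N_ELEMS, U_TOTAL = 10, 15.0   # 10 unit-length elements, prescribed end displacement

def element_energy(eps):
    eps = np.abs(eps)
    w_peak = 0.5 * E * EPS_P**2
    return np.where(eps <= EPS_P,
                    0.5 * E * eps**2,
                    w_peak + E * EPS_P * (1.0 - np.exp(-(eps - EPS_P))))

def total_energy(u_free):
    # Admissible displacement field: ends fixed at 0 and U_TOTAL (geometric constraints).
    u = np.concatenate(([0.0], u_free, [U_TOTAL]))
    return element_energy(np.diff(u)).sum()

# Start from the uniform-strain state plus a small imperfection.
u0 = np.linspace(0.0, U_TOTAL, N_ELEMS + 1)[1:-1]
u0 += 1e-3 * np.random.default_rng(0).standard_normal(u0.size)

res = minimize(total_energy, u0, method="L-BFGS-B")
strains = np.diff(np.concatenate(([0.0], res.x, [U_TOTAL])))
print(np.round(strains, 3))   # deformation typically concentrates in a few elements

Because a uniform post-peak state is unstable for a softening energy, direct minimization of the global energy drives the deformation to localize, which is the behaviour the article recovers for its lattice models.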
Abstract:
Random Indexing K-tree is the combination of two algorithms suited to large-scale document clustering.
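A minimal sketch of the Random Indexing half of that combination, assuming the usual sparse ternary index vectors; the K-tree half, a height-balanced tree of cluster centroids built by nearest-centroid insertion, is omitted here:

import numpy as np

def index_vector(dim, n_nonzero, rng):
    # Sparse ternary index vector: a handful of random +1/-1 entries, zeros elsewhere.
    v = np.zeros(dim)
    pos = rng.choice(dim, size=n_nonzero, replace=False)
    v[pos] = rng.choice([-1.0, 1.0], size=n_nonzero)
    return v

def document_vectors(docs, dim=1000, n_nonzero=10, seed=42):
    # Each document vector is the sum of the index vectors of its terms,
    # giving a fixed, reduced dimensionality regardless of vocabulary size.
    rng = np.random.default_rng(seed)
    term_index = {}
    out = np.zeros((len(docs), dim))
    for i, doc in enumerate(docs):
        for term in doc.lower().split():
            if term not in term_index:
                term_index[term] = index_vector(dim, n_nonzero, rng)
            out[i] += term_index[term]
    return out

docs = ["random indexing reduces dimensionality",
        "k-tree clusters large document collections"]
print(document_vectors(docs).shape)   # (2, 1000)

The fixed-width vectors produced this way are what the K-tree then clusters, which is what makes the combination scale to large collections.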
Abstract:
In this paper, we consider a time-space fractional diffusion equation of distributed order (TSFDEDO). The TSFDEDO is obtained from the standard advection-dispersion equation by replacing the first-order time derivative by the Caputo fractional derivative of order α ∈ (0,1], and the first-order and second-order space derivatives by the Riesz fractional derivatives of orders β₁ ∈ (0,1) and β₂ ∈ (1,2], respectively. We derive the fundamental solution for the TSFDEDO with an initial condition (TSFDEDO-IC). The fundamental solution can be interpreted as a spatial probability density function evolving in time. We also investigate a discrete random walk model based on an explicit finite difference approximation for the TSFDEDO-IC.
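As a sketch of the single-order form the abstract describes (coefficients and notation are assumed here; the distributed-order version additionally integrates over the fractional orders against a weight function), the standard advection-dispersion equation

\[
\frac{\partial u}{\partial t} = -v \frac{\partial u}{\partial x} + D \frac{\partial^2 u}{\partial x^2}
\]

becomes

\[
{}_0^{C} D_t^{\alpha} u(x,t) = -v \, \frac{\partial^{\beta_1} u(x,t)}{\partial |x|^{\beta_1}} + D \, \frac{\partial^{\beta_2} u(x,t)}{\partial |x|^{\beta_2}}, \qquad \alpha \in (0,1],\ \beta_1 \in (0,1),\ \beta_2 \in (1,2],
\]

with a Caputo derivative in time and Riesz fractional derivatives in space.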
Abstract:
This paper describes the approach taken to the clustering task at INEX 2009 by a group at the Queensland University of Technology. The Random Indexing (RI) K-tree has been used with a representation based on the semantic markup available in the INEX 2009 Wikipedia collection. The RI K-tree is a scalable approach to clustering large document collections. This approach produced quality clustering results when evaluated using two different methodologies.
Abstract:
In cloud computing, resource allocation and scheduling of multiple composite web services is an important challenge. This is especially so in a hybrid cloud, where some resources are available free of charge from private clouds while others are fee-paying resources from public clouds. Meeting this challenge involves two classical computational problems: one is assigning resources to each of the tasks in the composite web service; the other is scheduling the allocated resources when each resource may be used by more than one task and may be needed at different points in time. In addition, Quality-of-Service issues, such as execution time and running cost, must be considered. Existing approaches to resource allocation and scheduling in public clouds and grid computing are not applicable to this new problem. This paper presents a random-key genetic algorithm that solves this new resource allocation and scheduling problem. Experimental results demonstrate the effectiveness and scalability of the algorithm.
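A minimal sketch of the random-key idea in isolation (simplified; the paper's constraint handling, QoS model, and fitness function are not reproduced, and all names here are hypothetical): each chromosome is a vector of keys in [0, 1), decoded into a task priority order plus a resource choice per task, so ordinary real-valued crossover and mutation always produce feasible solutions.

import random

def decode(keys, n_tasks, n_resources):
    # First n_tasks keys give a scheduling priority order (sort tasks by key);
    # the remaining n_tasks keys each select a resource for the corresponding task.
    order = sorted(range(n_tasks), key=lambda i: keys[i])
    resources = [int(keys[n_tasks + i] * n_resources) for i in range(n_tasks)]
    return order, resources

def random_chromosome(n_tasks):
    return [random.random() for _ in range(2 * n_tasks)]

def crossover(a, b, p=0.7):
    # Parameterized uniform crossover: inherit each key from parent a with probability p.
    return [x if random.random() < p else y for x, y in zip(a, b)]

random.seed(0)
n_tasks, n_resources = 5, 3   # e.g. 3 candidate resources spanning private and public clouds
parent1, parent2 = random_chromosome(n_tasks), random_chromosome(n_tasks)
child = crossover(parent1, parent2)
print(decode(child, n_tasks, n_resources))

A fitness function would then evaluate each decoded schedule against execution time and running cost before selection; that part is omitted here.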