9 results for Convex Duality
at Duke University
Abstract:
We describe a general technique for determining upper bounds on maximal values (or lower bounds on minimal costs) in stochastic dynamic programs. In this approach, we relax the nonanticipativity constraints that require decisions to depend only on the information available at the time a decision is made and impose a "penalty" that punishes violations of nonanticipativity. In applications, the hope is that this relaxed version of the problem will be simpler to solve than the original dynamic program. The upper bounds provided by this dual approach complement lower bounds on values that may be found by simulating with heuristic policies. We describe the theory underlying this dual approach and establish weak duality, strong duality, and complementary slackness results that are analogous to the duality results of linear programming. We also study properties of good penalties. Finally, we demonstrate the use of this dual approach in an adaptive inventory control problem with an unknown and changing demand distribution and in valuing options with stochastic volatilities and interest rates. These are complex problems of significant practical interest that are quite difficult to solve to optimality. In these examples, our dual approach requires relatively little additional computation and leads to tight bounds on the optimal values. © 2010 INFORMS.
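A minimal sketch of the dual bound computation described above, assuming hypothetical user-supplied routines sample_path, solve_inner, and penalty (none of these names appear in the paper):

import numpy as np

def dual_upper_bound(sample_path, solve_inner, penalty, n_paths=1000, seed=0):
    # Monte Carlo estimate of the dual (upper) bound.  For each simulated
    # scenario the nonanticipativity constraints are dropped: the inner problem
    # is solved with full knowledge of the path, and its value is reduced by a
    # penalty that punishes that foresight.  Averaging over scenarios gives an
    # upper bound on the optimal value by weak duality.
    rng = np.random.default_rng(seed)
    bounds = []
    for _ in range(n_paths):
        path = sample_path(rng)             # one realization of all uncertainties
        actions, value = solve_inner(path)  # clairvoyant (relaxed) optimization
        bounds.append(value - penalty(path, actions))
    estimate = float(np.mean(bounds))
    std_err = float(np.std(bounds, ddof=1) / np.sqrt(n_paths))
    return estimate, std_err

With a zero penalty this reduces to the perfect-information bound; the "good penalties" studied in the paper tighten the bound toward the optimal value.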
Abstract:
Scheduling a set of jobs over a collection of machines to optimize a certain quality-of-service measure is one of the most important research topics in both computer science theory and practice. In this thesis, we design algorithms that optimize flow-time (or delay) of jobs for scheduling problems that arise in a wide range of applications. We consider the classical model of unrelated machine scheduling and resolve several long-standing open problems; we introduce new models that capture the novel algorithmic challenges in scheduling jobs in data centers or large clusters; we study the effect of selfish behavior in distributed and decentralized environments; and we design algorithms that strive to balance energy consumption and performance.
The technically interesting aspect of our work lies in the surprising connections we establish between approximation and online algorithms, economics, game theory, and queuing theory. It is the interplay of ideas from these different areas that lies at the heart of most of the algorithms presented in this thesis.
The main contributions of the thesis can be placed in one of the following categories.
1. Classical Unrelated Machine Scheduling: We give the first polylogarithmic approximation algorithms for minimizing the average flow-time and minimizing the maximum flow-time in the offline setting. In the online and non-clairvoyant setting, we design the first non-clairvoyant algorithm for minimizing the weighted flow-time in the resource augmentation model. Our work introduces the iterated rounding technique for offline flow-time optimization, and gives the first framework to analyze non-clairvoyant algorithms for unrelated machines.
2. Polytope Scheduling Problem: To capture the multidimensional nature of the scheduling problems that arise in practice, we introduce the Polytope Scheduling Problem (PSP). The PSP problem generalizes almost all classical scheduling models, and also captures hitherto unstudied scheduling problems such as routing multi-commodity flows, routing multicast (video-on-demand) trees, and multi-dimensional resource allocation. We design several competitive algorithms for the PSP problem and its variants for the objectives of minimizing the flow-time and completion time. Our work establishes many interesting connections between scheduling and market equilibrium concepts, fairness and non-clairvoyant scheduling, and the queuing-theoretic notion of stability and resource augmentation analysis.
3. Energy Efficient Scheduling: We give the first non-clairvoyant algorithm for minimizing the total flow-time + energy in the online and resource augmentation model for the most general setting of unrelated machines.
4. Selfish Scheduling: We study the effect of selfish behavior in scheduling and routing problems. We define a fairness index for scheduling policies called bounded stretch, and show that for the objective of minimizing the average (weighted) completion time, policies with small stretch lead to equilibrium outcomes with a small price of anarchy. Our work gives the first linear/convex programming duality-based framework to bound the price of anarchy for general equilibrium concepts such as coarse correlated equilibrium.
Abstract:
This paper describes a methodology for detecting anomalies from sequentially observed and potentially noisy data. The proposed approach consists of two main elements: 1) filtering, or assigning a belief or likelihood to each successive measurement based upon our ability to predict it from previous noisy observations and 2) hedging, or flagging potential anomalies by comparing the current belief against a time-varying and data-adaptive threshold. The threshold is adjusted based on the available feedback from an end user. Our algorithms, which combine universal prediction with recent work on online convex programming, do not require computing posterior distributions given all current observations and involve simple primal-dual parameter updates. At the heart of the proposed approach lie exponential-family models which can be used in a wide variety of contexts and applications, and which yield methods that achieve sublinear per-round regret against both static and slowly varying product distributions with marginals drawn from the same exponential family. Moreover, the regret against static distributions coincides with the minimax value of the corresponding online strongly convex game. We also prove bounds on the number of mistakes made during the hedging step relative to the best offline choice of the threshold with access to all estimated beliefs and feedback signals. We validate the theory on synthetic data drawn from a time-varying distribution over binary vectors of high dimensionality, as well as on the Enron email dataset. © 1963-2012 IEEE.
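A minimal sketch of the filtering-and-hedging loop described above, using a Gaussian with known variance as the exponential-family model; the step sizes and the feedback convention are illustrative assumptions rather than the paper's exact primal-dual updates:

import numpy as np

def detect_anomalies(stream, feedback, eta=0.1, rho=0.05, sigma=1.0, tau0=-5.0):
    # stream: iterable of scalar observations; feedback(t, flagged) returns
    # +1 (true anomaly), -1 (nominal), or 0 (no feedback) -- an assumed protocol.
    mu, tau = 0.0, tau0              # model parameter estimate and log-likelihood threshold
    flags = []
    for t, x in enumerate(stream):
        loglik = -0.5 * ((x - mu) / sigma) ** 2   # filtering: belief in x under current model
        flagged = loglik < tau                    # hedging: compare belief to adaptive threshold
        flags.append(flagged)
        label = feedback(t, flagged)
        if label != 0:
            tau += rho * label                    # threshold adapts to end-user feedback
        mu += eta * (x - mu)                      # simple online (gradient-style) parameter update
    return flags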
Abstract:
We apply a coded aperture snapshot spectral imager (CASSI) to fluorescence microscopy. CASSI records a two-dimensional (2D) spectrally filtered projection of a three-dimensional (3D) spectral data cube. We minimize a convex quadratic function with total variation (TV) constraints for data cube estimation from the 2D snapshot. We adapt the TV minimization algorithm for direct fluorescent bead identification from CASSI measurements by incorporating a priori knowledge of the spectra associated with each bead type. Our proposed method creates a 2D bead identity image. Simulated fluorescence CASSI measurements are used to evaluate the behavior of the algorithm. We also record real CASSI measurements of a ten-bead-type fluorescence scene and create a 2D bead identity map. A baseline image from a filtered-array imaging system verifies CASSI's 2D bead identity map.
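A minimal sketch of the TV-regularized quadratic minimization used for data-cube estimation, with a generic matrix A standing in for the CASSI projection operator and a smoothed TV penalty so plain gradient descent applies:

import numpy as np

def tv_grad(x, eps=1e-6):
    # Gradient of a smoothed isotropic total-variation penalty on a 2D array.
    dx = np.diff(x, axis=1, append=x[:, -1:])
    dy = np.diff(x, axis=0, append=x[-1:, :])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    px, py = dx / mag, dy / mag
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div

def reconstruct(A, y, shape, lam=0.05, step=1e-3, iters=500):
    # Minimize 0.5*||A x - y||^2 + lam * TV(x) by gradient descent (sketch).
    x = np.zeros(shape)
    for _ in range(iters):
        resid = A @ x.ravel() - y
        grad = (A.T @ resid).reshape(shape) + lam * tv_grad(x)
        x -= step * grad
    return x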
Abstract:
We describe an active millimeter-wave holographic imaging system that uses compressive measurements for three-dimensional (3D) tomographic object estimation. Our system records a two-dimensional (2D) digitized Gabor hologram by translating a single pixel incoherent receiver. Two approaches for compressive measurement are undertaken: nonlinear inversion of a 2D Gabor hologram for 3D object estimation and nonlinear inversion of a randomly subsampled Gabor hologram for 3D object estimation. The object estimation algorithm minimizes a convex quadratic problem using total variation (TV) regularization for 3D object estimation. We compare object reconstructions using linear backpropagation and TV minimization, and we present simulated and experimental reconstructions from both compressive measurement strategies. In contrast with backpropagation, which estimates the 3D electromagnetic field, TV minimization estimates the 3D object that produces the field. Despite undersampling, range resolution is consistent with the extent of the 3D object band volume.
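A minimal sketch of the two reconstruction routes contrasted above, assuming hypothetical routines propagate(field, z) (free-space propagation to range z) and tv_solver; the forward model is a schematic stand-in for the Gabor hologram measurement:

import numpy as np

def subsample(hologram, keep_fraction, rng):
    # Randomly retain a fraction of hologram pixels (compressive measurement).
    mask = rng.random(hologram.shape) < keep_fraction
    return hologram * mask, mask

def backpropagate(hologram, propagate, ranges):
    # Linear backpropagation: numerically refocus the recorded field at each
    # range plane; this estimates the 3D electromagnetic field.
    return np.stack([propagate(hologram, -z) for z in ranges])

def tv_reconstruct(hologram, mask, propagate, ranges, tv_solver):
    # Nonlinear inversion: fit a stack of object slices whose forward-propagated,
    # masked field matches the measured hologram under a TV regularizer.
    def forward(obj_slices):
        field = sum(propagate(obj_slices[i], z) for i, z in enumerate(ranges))
        return mask * field
    return tv_solver(forward, hologram * mask)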
Abstract:
In this paper, we propose a framework for robust optimization that relaxes the standard notion of robustness by allowing the decision maker to vary the protection level in a smooth way across the uncertainty set. We apply our approach to the problem of maximizing the expected value of a payoff function when the underlying distribution is ambiguous and therefore robustness is relevant. Our primary objective is to develop this framework and relate it to the standard notion of robustness, which deals with only a single guarantee across one uncertainty set. First, we show that our approach connects closely to the theory of convex risk measures. We show that the complexity of this approach is equivalent to that of solving a small number of standard robust problems. We then investigate the conservatism benefits and downside probability guarantees implied by this approach and compare them to those of the standard robust approach. Finally, we illustrate the methodology on an asset allocation example consisting of historical market data over a 25-year investment horizon and find, in every case we explore, that relaxing standard robustness with soft robustness yields a seemingly favorable risk-return trade-off: each case results in a higher out-of-sample expected return for a relatively minor degradation of out-of-sample downside performance. © 2010 INFORMS.
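A minimal sketch illustrating the complexity claim, assuming a hypothetical oracle robust_solve(eps) that returns the optimal value of the standard robust problem at protection level eps; the feasibility test below is schematic and depends on the exact formulation:

def soft_robust_search(robust_solve, eps_lo=0.0, eps_hi=1.0, tol=1e-3):
    # Bisection over the protection level eps; each step solves one standard
    # robust problem, so roughly log2((eps_hi - eps_lo) / tol) solves in total.
    while eps_hi - eps_lo > tol:
        eps = 0.5 * (eps_lo + eps_hi)
        if robust_solve(eps) >= -eps:   # schematic soft robust guarantee at level eps
            eps_lo = eps                # guarantee holds: try a larger protection level
        else:
            eps_hi = eps
    return eps_lo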
Abstract:
The goal of this study was to characterize the image quality of our dedicated, quasi-monochromatic spectrum, cone beam breast imaging system under scatter-corrected and non-scatter-corrected conditions for a variety of breast compositions. CT projections were acquired of a breast phantom containing two concentric sets of acrylic spheres that varied in size (1-8 mm) based on their polar position. The breast phantom was filled with 3 different concentrations of methanol and water, simulating a range of breast densities (0.79-1.0 g/cc); acrylic yarn was sometimes included to simulate the connective tissue of a breast. For each phantom condition, 2D scatter was measured for all projection angles. Scatter-corrected and uncorrected projections were then reconstructed with an iterative ordered subsets convex algorithm. Reconstructed image quality was characterized using SNR and contrast analysis, followed by a human observer detection task for the spheres in the different concentric rings. Results show that scatter correction effectively reduces the cupping artifact and improves image contrast and SNR. Results from the observer study indicate that there was no statistical difference in the number or sizes of lesions observed in the scatter-corrected versus uncorrected images for any density. Nonetheless, applying scatter correction for differing breast conditions improves overall image quality.
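A minimal sketch of the scatter-correction and image-quality steps described above; the reconstruction itself (the iterative ordered subsets convex algorithm) is not shown, and the ROI-based metric definitions follow common conventions rather than the paper's exact protocol:

import numpy as np

def scatter_correct(projections, scatter_estimates):
    # Subtract the measured 2D scatter from each projection angle; negative
    # values are clipped before reconstruction.
    return np.clip(projections - scatter_estimates, 0.0, None)

def contrast_and_snr(image, sphere_roi, background_roi):
    # Contrast and SNR from a sphere ROI and a nearby background ROI
    # (boolean masks or index arrays into the reconstructed image).
    s = image[sphere_roi].mean()
    b = image[background_roi].mean()
    noise = image[background_roi].std()
    return (s - b) / b, abs(s - b) / noise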
Abstract:
PURPOSE: A projection onto convex sets reconstruction of multiplexed sensitivity encoded MRI (POCSMUSE) is developed to reduce motion-related artifacts, including respiration artifacts in abdominal imaging and aliasing artifacts in interleaved diffusion-weighted imaging. THEORY: Images with reduced artifacts are reconstructed with an iterative projection onto convex sets (POCS) procedure that uses the coil sensitivity profile as a constraint. This method can be applied to data obtained with different pulse sequences and k-space trajectories. In addition, various constraints can be incorporated to stabilize the reconstruction of ill-conditioned matrices. METHODS: The POCSMUSE technique was applied to abdominal fast spin-echo imaging data, and its effectiveness in respiratory-triggered scans was evaluated. The POCSMUSE method was also applied to reduce aliasing artifacts due to shot-to-shot phase variations in interleaved diffusion-weighted imaging data corresponding to different k-space trajectories and matrix condition numbers. RESULTS: Experimental results show that the POCSMUSE technique can effectively reduce motion-related artifacts in data obtained with different pulse sequences, k-space trajectories and contrasts. CONCLUSION: POCSMUSE is a general post-processing algorithm for reduction of motion-related artifacts. It is compatible with different pulse sequences, and can also be used to further reduce residual artifacts in data produced by existing motion artifact reduction methods.
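A minimal sketch of a projection-onto-convex-sets iteration of this kind for Cartesian data, with simplified stand-ins for the sampling pattern and the coil-combination step; it is not the exact POCSMUSE formulation:

import numpy as np

def pocs_recon(kspace, mask, sens, iters=50):
    # kspace: (ncoils, ny, nx) acquired data with unsampled entries zero-filled;
    # mask: (ny, nx) boolean sampling pattern; sens: (ncoils, ny, nx) coil profiles.
    coil_imgs = np.fft.ifft2(kspace, axes=(-2, -1))
    for _ in range(iters):
        # Projection 1: enforce consistency with the coil sensitivity profiles.
        combined = (np.conj(sens) * coil_imgs).sum(axis=0) / (np.abs(sens)**2).sum(axis=0).clip(1e-8)
        coil_imgs = sens * combined
        # Projection 2: enforce consistency with the acquired k-space samples.
        k = np.fft.fft2(coil_imgs, axes=(-2, -1))
        k = np.where(mask, kspace, k)
        coil_imgs = np.fft.ifft2(k, axes=(-2, -1))
    return combined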