6 results for Optimization problems

in DRUM (Digital Repository at the University of Maryland)


Relevance:

70.00%

Publisher:

Abstract:

We propose a positive, accurate moment closure for linear kinetic transport equations based on a filtered spherical harmonic (FP_N) expansion in the angular variable. The FP_N moment equations are accurate approximations to linear kinetic equations, but they are known to suffer from the occurrence of unphysical, negative particle concentrations. The new positive filtered P_N (FP_N+) closure is developed to address this issue. The FP_N+ closure approximates the kinetic distribution by a spherical harmonic expansion that is non-negative on a finite, predetermined set of quadrature points. With an appropriate numerical PDE solver, the FP_N+ closure generates particle concentrations that are guaranteed to be non-negative. Under an additional, mild regularity assumption, we prove that as the moment order tends to infinity, the FP_N+ approximation converges, in the L2 sense, at the same rate as the FP_N approximation; numerical tests suggest that this assumption may not be necessary. Through numerical experiments on the challenging line source benchmark problem, we confirm that the FP_N+ method indeed produces accurate and non-negative solutions. To apply the FP_N+ closure to problems at large temporal-spatial scales, we develop a positive, asymptotic-preserving (AP) numerical PDE solver. We prove that the proposed AP scheme maintains stability and accuracy with standard mesh sizes at large temporal-spatial scales, whereas generic numerical schemes require excessive refinement of the temporal-spatial meshes. We also show that the proposed scheme preserves positivity of the particle concentration under a time-step restriction. Numerical results confirm that the proposed AP scheme is capable of solving linear transport equations at large temporal-spatial scales for which a generic scheme could fail. Constrained optimization problems are involved in the formulation of the FP_N+ closure to enforce non-negativity of the FP_N+ approximation on the set of quadrature points. These optimization problems can be written as strictly convex quadratic programs (CQPs) with a large number of inequality constraints. To solve the CQPs efficiently, we propose a constraint-reduced variant of a Mehrotra predictor-corrector algorithm with a novel constraint selection rule. We prove that, under appropriate assumptions, the proposed optimization algorithm converges globally to the solution at a locally q-quadratic rate. We test the algorithm on randomly generated problems, and the numerical results indicate that the combination of the proposed algorithm and the constraint selection rule outperforms other constraint-reduced algorithms, especially for problems with many more inequality constraints than variables.
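
The constrained optimization described above amounts to a strictly convex quadratic program: stay close to the raw moments while forcing the reconstructed expansion to be non-negative at a fixed set of quadrature points. Below is a minimal sketch of that QP shape, assuming a random stand-in matrix A for the spherical harmonics evaluated at quadrature points and using CVXPY as an off-the-shelf solver; it is not the constraint-reduced Mehrotra predictor-corrector algorithm developed in the dissertation.

```python
# Hypothetical sketch: project raw moments onto expansions that are non-negative
# at a fixed set of quadrature points (a strictly convex QP with many inequalities).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_moments, n_quad = 16, 200                    # toy sizes; real problems have far more constraints
A = rng.standard_normal((n_quad, n_moments))   # stand-in for spherical harmonics at quadrature points
c0 = rng.standard_normal(n_moments)            # stand-in for the unfiltered moment vector

c = cp.Variable(n_moments)
# Strictly convex objective: stay close to the raw moments ...
objective = cp.Minimize(cp.sum_squares(c - c0))
# ... subject to point-wise non-negativity of the reconstructed distribution.
constraints = [A @ c >= 0]
cp.Problem(objective, constraints).solve()

print("non-negative on quadrature set:", bool(np.all(A @ c.value >= -1e-8)))
```

A constraint-reduced interior-point method works with only a small, adaptively selected subset of the rows of A at each iteration, which is where the advantage comes from when there are many more inequality constraints than variables.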

Relevance:

60.00%

Publisher:

Abstract:

This dissertation investigates the connection between spectral analysis and frame theory. When considering the spectral properties of a frame, we present a few novel results relating to the spectral decomposition. We first show that scalable frames have the property that the inner product of the scaling coefficients and the eigenvectors must equal the inverse eigenvalues. From this, we prove a similar result when an approximate scaling is obtained. We then focus on the optimization problems inherent to scalable frames by first showing that there is an equivalence between scaling a frame and optimization problems with a non-restrictive objective function. Various objective functions are considered, and an analysis of the solution type is presented. With linear objectives we can encourage sparse scalings, and with barrier objective functions we force dense solutions. We further consider frames in high dimensions and derive various solution techniques. From here, we restrict ourselves to various frame classes to add more specificity to the results. Using frames generated from distributions allows for the placement of probabilistic bounds on scalability. For discrete distributions (Bernoulli and Rademacher), we bound the probability of encountering an ONB, and for continuous symmetric distributions (Uniform and Gaussian), we show that symmetry is retained in the transformed domain. We also prove several hyperplane-separation results. With the theory developed, we discuss graph applications of the scalability framework. We make a connection with graph conditioning and show the infeasibility of the problem in the general case. After a modification, we show that any complete graph can be conditioned. We then present a modification of standard PCA (robust PCA) developed by Candès, and give some background on Electron Energy-Loss Spectroscopy (EELS). We design a novel scheme for processing EELS data through robust PCA and least-squares regression, and test this scheme on biological samples. Finally, we take the idea of robust PCA and apply the technique of kernel PCA to perform robust manifold learning. We derive the problem and present an algorithm for its solution. We also discuss the differences from RPCA that make theoretical guarantees difficult.
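
To make the scaling problem concrete, the sketch below computes an approximate scaling in the least-squares sense: non-negative weights w are chosen so that the weighted frame operator sum_i w_i f_i f_i^T is as close as possible to the identity. The random frame and the use of SciPy's non-negative least squares are assumptions for illustration, not the optimization formulations analyzed in the dissertation.

```python
# Hypothetical sketch: approximate frame scaling as non-negative least squares,
# choosing weights w >= 0 so that sum_i w_i f_i f_i^T is close to the identity.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
d, m = 4, 12                         # ambient dimension and number of frame vectors (toy sizes)
F = rng.standard_normal((d, m))      # columns are the frame vectors f_i

# Column i of M is vec(f_i f_i^T); the target is vec(I_d).
M = np.column_stack([np.outer(F[:, i], F[:, i]).ravel() for i in range(m)])
target = np.eye(d).ravel()

w, residual = nnls(M, target)        # non-negative scaling weights
S = (F * w) @ F.T                    # scaled frame operator sum_i w_i f_i f_i^T
print("distance to identity:", np.linalg.norm(S - np.eye(d)), "residual:", residual)
```

Swapping the least-squares objective for a linear or barrier objective, with the same non-negativity constraints, is the kind of change the abstract contrasts when it speaks of sparse-encouraging versus dense-forcing scalings.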

Relevance:

60.00%

Publisher:

Abstract:

Datacenters have emerged as the dominant form of computing infrastructure over the last two decades. The tremendous increase in the requirements of data analysis has led to a proportional increase in power consumption, and datacenters are now one of the fastest growing electricity consumers in the United States. Another rising concern is the loss of throughput due to network congestion. Scheduling models that do not explicitly account for data placement may lead to the transfer of large amounts of data over the network, causing unacceptable delays. In this dissertation, we study different scheduling models that are inspired by the dual objectives of minimizing energy costs and network congestion in a datacenter. As datacenters are equipped to handle peak workloads, the average server utilization in most datacenters is very low. As a result, one can achieve huge energy savings by selectively shutting down machines when demand is low. In this dissertation, we introduce the network-aware machine activation problem to find a schedule that simultaneously minimizes the number of machines necessary and the congestion incurred in the network. Our model significantly generalizes well-studied combinatorial optimization problems such as hard-capacitated hypergraph covering and is thus strongly NP-hard. As a result, we focus on finding good approximation algorithms. Data-parallel computation frameworks such as MapReduce have popularized the design of applications that require a large amount of communication between different machines. Efficient scheduling of these communication demands is essential to guarantee efficient execution of the different applications. In the second part of the thesis, we study the approximability of the co-flow scheduling problem, which has recently been introduced to capture these application-level demands. Finally, we also study the question, "In what order should one process jobs?" Often, precedence constraints specify a partial order over the set of jobs, and the objective is to find suitable schedules that satisfy the partial order. However, in the presence of hard deadline constraints, it may be impossible to find a schedule that satisfies all precedence constraints. In this thesis, we formalize different variants of job scheduling with soft precedence constraints and conduct the first systematic study of these problems.
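
As a toy illustration of the network-aware machine activation idea, the integer program below opens machines and assigns jobs to them while charging for both the number of active machines and a crude data-movement proxy. The data, the congestion term, and the PuLP model are purely illustrative assumptions, not the dissertation's formulation or its approximation algorithms.

```python
# Hypothetical toy model: open machines (binary y) and assign jobs (binary x),
# minimizing machines opened plus a crude proxy for network traffic.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, PULP_CBC_CMD

machines = ["m1", "m2", "m3"]
jobs = {"j1": 2, "j2": 3, "j3": 1}          # job -> resource demand
capacity = {"m1": 4, "m2": 4, "m3": 4}
traffic = {("j1", "m1"): 0, ("j1", "m2"): 5, ("j1", "m3"): 5,
           ("j2", "m1"): 4, ("j2", "m2"): 0, ("j2", "m3"): 4,
           ("j3", "m1"): 3, ("j3", "m2"): 3, ("j3", "m3"): 0}   # data moved if j runs on m

prob = LpProblem("machine_activation", LpMinimize)
y = {m: LpVariable(f"open_{m}", cat=LpBinary) for m in machines}
x = {(j, m): LpVariable(f"assign_{j}_{m}", cat=LpBinary) for j in jobs for m in machines}

# Objective: number of active machines plus (scaled) total data moved over the network.
prob += lpSum(y[m] for m in machines) + 0.1 * lpSum(traffic[j, m] * x[j, m]
                                                    for j in jobs for m in machines)
for j in jobs:                               # every job is placed exactly once
    prob += lpSum(x[j, m] for m in machines) == 1
for m in machines:                           # capacity is available only on opened machines
    prob += lpSum(jobs[j] * x[j, m] for j in jobs) <= capacity[m] * y[m]

prob.solve(PULP_CBC_CMD(msg=False))
print({m: int(y[m].value()) for m in machines})
```

Exact solvers stop scaling quickly on models of this kind, which is why the dissertation focuses on approximation algorithms for the general problem.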

Relevance:

60.00%

Publisher:

Abstract:

Wireless power transfer (WPT) and radio frequency (RF)-based energy harvesting give rise to a new wireless network paradigm termed the wireless powered communication network (WPCN), in which some energy-constrained nodes are enabled to harvest energy from the RF signals transferred by other energy-sufficient nodes to support the communication operations in the network, a promising approach for future energy-constrained wireless network design. In this paper, we focus on optimal WPCN design. We consider a network composed of two communication groups, where the first group has a sufficient power supply but no available bandwidth, and the second group has licensed bandwidth but very limited power to perform the required information transmission. For such a system, we introduce power and bandwidth cooperation between the two groups so that both groups can accomplish their expected information delivery tasks. Multiple antennas are employed at the hybrid access point (H-AP) to enhance both energy and information transfer efficiency, and cooperative relaying is employed to help the power-limited group enhance its information transmission throughput. Compared with existing works, cooperative relaying, time assignment, power allocation, and energy beamforming are jointly designed in a single system. Firstly, we propose a cooperative transmission protocol for the considered system, where group 1 transmits some power to group 2 to help group 2 with information transmission, and group 2 then gives some bandwidth to group 1 in return. Secondly, to explore the information transmission performance limit of the system, we formulate two optimization problems to maximize the system weighted sum rate by jointly optimizing the time assignment, power allocation, and energy beamforming under two different power constraints, i.e., the fixed power constraint and the average power constraint, respectively. In order to make the cooperation between the two groups meaningful and to guarantee the quality of service (QoS) requirements of both groups, the minimal required data rates of the two groups are considered as constraints in the optimal system design. As both problems are non-convex and have no known solutions, we solve them by using proper variable substitutions and semi-definite relaxation (SDR). We theoretically prove that our proposed solution method is guaranteed to find the globally optimal solution. Thirdly, considering that the WPCN has promising application potential in future energy-constrained networks, e.g., wireless sensor networks (WSN), wireless body area networks (WBAN), and the Internet of Things (IoT), where power consumption is very critical, we investigate the minimal-power-consumption design for the considered cooperative WPCN. For this, we formulate an optimization problem to minimize the total consumed power by jointly optimizing the time assignment, power allocation, and energy beamforming under required data rate constraints. As this problem is also non-convex and has no known solution, we solve it by using variable substitutions and the SDR method. We also theoretically prove that our proposed solution method for the minimal power consumption design guarantees the globally optimal solution. Extensive experimental results are provided to discuss the system performance behavior and yield useful insights for future WPCN design.
The results show that the average-power-constrained system achieves a higher weighted sum rate than the fixed-power-constrained system. They also show that, in such a WPCN, the relay should be placed closer to the multi-antenna H-AP to achieve a higher weighted sum rate and lower total power consumption.
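
The sketch below shows the flavor of the SDR step on a stripped-down energy-beamforming subproblem: the rank-one beamformer w w^T is lifted to a PSD matrix W and the rank constraint is dropped, leaving a semi-definite program. The real-valued toy channel, the power budget, and the CVXPY model are assumptions for the example; the actual problems above jointly optimize time assignment, power allocation, and beamforming.

```python
# Hypothetical sketch: semi-definite relaxation (SDR) of an energy-beamforming problem.
# Lift w w^T into a PSD matrix W and drop the rank-1 constraint.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n_tx = 4                                    # antennas at the hybrid access point
h = rng.standard_normal(n_tx)               # toy (real-valued) channel to the energy harvester
H = np.outer(h, h)                          # h h^T
P = 1.0                                     # transmit power budget

W = cp.Variable((n_tx, n_tx), PSD=True)     # relaxed beamforming matrix
prob = cp.Problem(cp.Maximize(cp.trace(H @ W)), [cp.trace(W) <= P])
prob.solve()

# For a single power constraint the relaxation is tight, so the dominant eigenvector
# of the optimal W recovers a rank-1 beamformer.
eigval, eigvec = np.linalg.eigh(W.value)
w = np.sqrt(max(eigval[-1], 0.0)) * eigvec[:, -1]
print("harvested-energy objective:", float(prob.value))
```

In the joint designs described above, the same relaxation is combined with variable substitutions so that time shares and power levels can be optimized together with W.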

Relevance:

60.00%

Publisher:

Abstract:

In this dissertation, we apply mathematical programming techniques (i.e., integer programming and polyhedral combinatorics) to develop exact approaches for influence maximization on social networks. We study four combinatorial optimization problems that deal with maximizing influence at minimum cost over a social network. To our knowledge, all previous work to date on influence maximization problems has focused on heuristics and approximation. We start with the following viral marketing problem, which has attracted a significant amount of interest in the computer science literature. Given a social network, find a target set of customers to seed with a product. A cascade is then caused by these initial adopters, and other people start to adopt the product due to the influence they receive from earlier adopters. The idea is to find the minimum cost that results in the entire network adopting the product. We first study a problem called the Weighted Target Set Selection (WTSS) problem. In the WTSS problem, the diffusion can take place over as many time periods as needed, and a free product is given to the individuals in the target set. Restricting the diffusion to a single time period, we obtain a problem called the Positive Influence Dominating Set (PIDS) problem. Next, incorporating partial incentives, we consider a problem called the Least Cost Influence Problem (LCIP). The fourth problem studied is the One Time Period Least Cost Influence Problem (1TPLCIP), which is identical to the LCIP except that the diffusion is restricted to a single time period. We apply a common research paradigm to each of these four problems. First, we work on special graphs: trees and cycles. Based on the insights we obtain from special graphs, we develop efficient methods for general graphs. On trees, we first propose a polynomial time algorithm. More importantly, we present a tight and compact extended formulation. We also project the extended formulation onto the space of the natural variables, which gives the polytope on trees. Next, building upon the result for trees, we derive the polytope on cycles for the WTSS problem, as well as a polynomial time algorithm on cycles. This leads to our contribution on general graphs. For the WTSS problem and the LCIP, using the observation that the influence propagation network must be a directed acyclic graph (DAG), the strong formulation for trees can be embedded into a formulation on general graphs. We use this to design and implement a branch-and-cut approach for the WTSS problem and the LCIP. In our computational study, we are able to obtain high quality solutions for random graph instances with up to 10,000 nodes and 20,000 edges (40,000 arcs) within a reasonable amount of time.
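
As a small illustration of the one-time-period variants (PIDS/1TPLCIP) described above, the integer program below seeds a minimum-cost set of nodes so that every unseeded node has at least a threshold number of seeded neighbors. The toy graph, unit costs, majority-style thresholds, and PuLP model are assumptions for the example, not the extended formulations or the branch-and-cut implementation from the dissertation.

```python
# Hypothetical sketch of a one-time-period target set model: every node must either be
# seeded or have at least its threshold g[v] of seeded neighbors (PIDS-like constraint).
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, PULP_CBC_CMD

edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("a", "c")]
nodes = sorted({u for e in edges for u in e})
nbrs = {v: [u for e in edges for u in e if v in e and u != v] for v in nodes}
cost = {v: 1 for v in nodes}                       # cost of seeding each node
g = {v: max(1, len(nbrs[v]) // 2) for v in nodes}  # influence threshold (majority-style)

prob = LpProblem("one_period_target_set", LpMinimize)
x = {v: LpVariable(f"seed_{v}", cat=LpBinary) for v in nodes}

prob += lpSum(cost[v] * x[v] for v in nodes)       # minimize total seeding cost
for v in nodes:
    # If v is not seeded, at least g[v] of its neighbors must be seeded.
    prob += lpSum(x[u] for u in nbrs[v]) >= g[v] * (1 - x[v])

prob.solve(PULP_CBC_CMD(msg=False))
print("seed set:", [v for v in nodes if x[v].value() > 0.5])
```

The multi-period WTSS and LCIP variants additionally track when each node becomes active, which is what makes the polyhedral results on trees and cycles, and their lifting to DAGs, central to the exact approach.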

Relevance:

30.00%

Publisher:

Abstract:

Safe operation of unmanned aerial vehicles (UAVs) over populated areas requires reducing the risk posed by a UAV if it were to crash during its operation. We considered several types of UAV risk-based path planning problems and developed techniques for estimating the risk to third parties on the ground. The path planning problem requires making trade-offs between risk and flight time. Four optimization approaches for solving the problem were tested; a network-based approach that used a greedy algorithm to improve the original solution generated the best solutions with the least computational effort. Additionally, an approach for solving a combined design and path planning problem was developed and tested. This approach was extended to solve a robust risk-based path planning problem in which uncertainty about wind conditions would affect the risk posed by a UAV.
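
A minimal sketch of the network-based idea, assuming a uniform grid of waypoints, a synthetic ground-risk map, and a single trade-off weight lam between flight time and risk; it simply runs Dijkstra on the weighted grid with networkx and does not include the greedy improvement step or the risk-estimation models developed in the dissertation.

```python
# Hypothetical sketch of a network-based risk/time trade-off: build a grid graph over the
# area, weight each edge by flight time plus a risk penalty, and take the shortest path.
import networkx as nx
import numpy as np

rng = np.random.default_rng(3)
n = 10                                        # 10 x 10 grid of waypoints
risk = rng.random((n, n))                     # toy ground-risk estimate per cell
lam = 5.0                                     # weight on risk relative to flight time

G = nx.grid_2d_graph(n, n)
for u, v in G.edges():
    flight_time = 1.0                         # unit-length edges on the grid
    edge_risk = 0.5 * (risk[u] + risk[v])     # average risk of the two endpoints
    G[u][v]["cost"] = flight_time + lam * edge_risk

path = nx.dijkstra_path(G, (0, 0), (n - 1, n - 1), weight="cost")
total_risk = sum(risk[p] for p in path)
print("waypoints:", len(path), "accumulated risk:", round(float(total_risk), 3))
```

Sweeping lam traces out the trade-off between flight time and ground risk that the path planner has to negotiate.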