866 results for Lower Bounds
Abstract:
We address the problem of two-dimensional (2-D) phase retrieval from the magnitude of the Fourier spectrum. We consider 2-D signals that are characterized by first-order difference equations, which have a parametric representation in the Fourier domain. We show that, under appropriate stability conditions, such signals can be reconstructed uniquely from the Fourier transform magnitude. We formulate the phase retrieval problem as one of computing the parameters that uniquely determine the signal. We show that the problem can be solved by employing the annihilating filter method, particularly for the case when the parameters are distinct. For the more general case of repeated parameters, the annihilating filter method is not applicable. We circumvent the problem by employing the algebraically coupled matrix pencil (ACMP) method. In the noiseless measurement setup, exact phase retrieval is possible. We also establish a link between the proposed analysis and the 2-D cepstrum. In the noisy case, we derive Cramér-Rao lower bounds (CRLBs) on the estimates of the parameters and present a Monte Carlo performance analysis as a function of the noise level. Comparisons with state-of-the-art techniques in terms of signal reconstruction accuracy show that the proposed technique outperforms the Fienup and relaxed averaged alternating reflections (RAAR) algorithms in the presence of noise.
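Below is a minimal, hypothetical 1-D sketch of the annihilating-filter step that this class of methods relies on: recovering distinct parameters u_k of a signal x[n] = sum_k c_k u_k^n from noiseless samples. It only illustrates the core idea; the paper's 2-D, magnitude-only pipeline (and the ACMP method for repeated parameters) involves additional steps.

```python
# Minimal sketch (hypothetical 1-D example): the annihilating-filter step used to
# recover the parameters u_k of x[n] = sum_k c_k * u_k**n from noiseless samples.
# The paper's 2-D magnitude-only phase-retrieval method builds on this idea but
# requires further steps (and ACMP when parameters repeat).
import numpy as np
from scipy.linalg import toeplitz, null_space

K = 3                                   # model order (number of parameters)
u_true = np.array([0.9, 0.7, -0.5])     # hypothetical distinct parameters
c = np.array([1.0, 2.0, 0.5])           # amplitudes
n = np.arange(2 * K + 1)
x = (c[None, :] * u_true[None, :] ** n[:, None]).sum(axis=1)   # samples x[0..2K]

# A filter h of length K+1 with H(z) = prod_k (1 - u_k z^{-1}) annihilates x:
# sum_j h[j] x[m-j] = 0 for m >= K. Stack these equations into a Toeplitz system
# and take a null-space vector as the filter.
A = toeplitz(x[K:], x[K::-1])           # rows: [x[m], x[m-1], ..., x[m-K]]
h = null_space(A)[:, 0]                 # annihilating filter coefficients

u_est = np.roots(h)                     # roots of H(z) give the parameters
print(np.sort(u_est.real), np.sort(u_true))
```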
Abstract:
Motivated by the discrepancies noted recently between the theoretical calculations of the electromagnetic omega pi form factor and certain experimental data, we investigate this form factor using analyticity and unitarity in a framework known as the method of unitarity bounds. We use a QCD correlator computed on the spacelike axis by operator product expansion and perturbative QCD as input, and exploit unitarity and the positivity of its spectral function, including the two-pion contribution that can be reliably calculated using high-precision data on the pion form factor. From this information, we derive upper and lower bounds on the modulus of the omega pi form factor in the elastic region. The results provide a significant check on those obtained with standard dispersion relations, confirming the existence of a disagreement with experimental data in the region around 0.6 GeV.
Abstract:
A rainbow matching of an edge-colored graph G is a matching in which no two edges have the same color. There have been several studies regarding the maximum size of a rainbow matching in a properly edge-colored graph G in terms of its minimum degree δ(G). Wang (2011) asked whether there exists a function f such that a properly edge-colored graph G with at least f(δ(G)) vertices is guaranteed to contain a rainbow matching of size δ(G). This was answered in the affirmative later: the best function currently known, due to Lo and Tan (2014), is f(k) = 4k - 4 for k >= 4 and f(k) = 4k - 3 for k <= 3. Afterwards, the research was focused on finding lower bounds for the size of maximum rainbow matchings in properly edge-colored graphs with fewer than 4δ(G) - 4 vertices. A strong edge-coloring of a graph G is a restriction of proper edge-coloring in which every color class is required to be an induced matching, instead of just a matching. In this paper, we give lower bounds for the size of a maximum rainbow matching in a strongly edge-colored graph G in terms of δ(G). We show that for a strongly edge-colored graph G, if |V(G)| >= 2⌊3δ(G)/4⌋, then G has a rainbow matching of size ⌊3δ(G)/4⌋, and if |V(G)| < 2⌊3δ(G)/4⌋, then G has a rainbow matching of size ⌊|V(G)|/2⌋. In addition, we prove that if G is a strongly edge-colored graph that is triangle-free, then it contains a rainbow matching of size at least δ(G). (C) 2015 Elsevier B.V. All rights reserved.
Abstract:
Conditions for the existence of heterochromatic Hamiltonian paths and cycles in edge-colored graphs are well investigated in the literature. A related problem in this domain is to obtain good lower bounds for the length of a maximum heterochromatic path in an edge-colored graph G. This problem is also well explored by now, and the lower bounds are often specified as functions of the minimum color degree of G - the minimum number of distinct colors occurring at edges incident to any vertex of G - denoted by v(G). Initially, it was conjectured that the lower bound for the length of a maximum heterochromatic path in an edge-colored graph G would be ⌈2v(G)/3⌉. Chen and Li (2005) showed that the length of a maximum heterochromatic path in an edge-colored graph G is at least v(G) - 1 if 1 <= v(G) <= 7, and at least ⌈3v(G)/5⌉ + 1 if v(G) >= 8. They conjectured that the tight lower bound would be v(G) - 1 and demonstrated some examples which achieve this bound. An unpublished manuscript by the same authors (Chen, Li) reported to show that if v(G) >= 8, then G contains a heterochromatic path of length at least ⌈2v(G)/3⌉ + 1. In this paper, we give lower bounds for the length of a maximum heterochromatic path in edge-colored graphs without small cycles. We show that if G has no four-cycles, then it contains a heterochromatic path of length at least v(G) - o(v(G)), and if the girth of G is at least 4 log₂(v(G)) + 2, then it contains a heterochromatic path of length at least v(G) - 2, which is only one less than the bound conjectured by Chen and Li (2005). Other special cases considered include lower bounds for the length of a maximum heterochromatic path in edge-colored bipartite graphs and triangle-free graphs: for triangle-free graphs we obtain a lower bound of ⌈5v(G)/6⌉, and for bipartite graphs we obtain a lower bound of ⌈(6v(G) - 3)/7⌉. It is also shown that if the coloring is such that G has no heterochromatic triangles, then G contains a heterochromatic path of length at least ⌈13v(G)/17⌉. This improves the previously known ⌈3v(G)/4⌉ bound obtained by Chen and Li (2011). We also give a relatively shorter and simpler proof showing that any edge-colored graph G contains a heterochromatic path of length at least ⌈2v(G)/3⌉. (C) 2015 Elsevier Ltd. All rights reserved.
Abstract:
Groundwater management involves conflicting objectives, as maximization of discharge contradicts the criteria of minimum pumping cost and minimum piping cost. In addition, the available data contain uncertainties such as market fluctuations, variations in the water levels of wells, and variations in groundwater policies. A fuzzy model is to be evolved to tackle the uncertainties, and a multiobjective optimization is to be conducted to simultaneously satisfy the contradicting objectives. Towards this end, a multiobjective fuzzy optimization model is evolved. To obtain the upper and lower bounds of the individual objectives, particle swarm optimization (PSO) is adopted. The analytic element method (AEM) is employed to obtain the operating potentiometric head. In this study, a multiobjective fuzzy optimization model considering three conflicting objectives is developed using the PSO and AEM methods for obtaining a sustainable groundwater management policy. The developed model is applied to a case study, and it is demonstrated that the compromise solution satisfies all the objectives with adequate levels of satisfaction. Sensitivity analysis is carried out by varying the parameters, and it is shown that the effect of any such variation is quite significant. Copyright (c) 2015 John Wiley & Sons, Ltd.
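As an illustration of one building block, the following is a minimal particle swarm optimization sketch of the kind that could be used to obtain the best and worst values of a single objective, which then serve as the lower and upper bounds in fuzzy membership functions. The objective and bounds here are hypothetical toy quantities, not the paper's groundwater model (which couples PSO with the analytic element method).

```python
# Minimal PSO sketch with a hypothetical objective: find the minimum of one
# objective (its lower bound) and, by minimizing the negated objective, its
# maximum (upper bound). These bounds would feed the fuzzy membership functions.
import numpy as np

def pso_minimize(f, lb, ub, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(lb)
    x = rng.uniform(lb, ub, size=(n_particles, dim))        # particle positions
    v = np.zeros_like(x)                                     # particle velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()                     # global best position
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)                           # keep particles in the box
        val = np.apply_along_axis(f, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Hypothetical "pumping cost" objective over a 2-D decision space.
cost = lambda q: 3.0 * q[0] ** 2 + 2.0 * q[1] ** 2 + q[0] * q[1]
lb, ub = np.array([0.0, 0.0]), np.array([10.0, 10.0])
_, lower = pso_minimize(cost, lb, ub)                        # lower bound of the objective
_, neg_upper = pso_minimize(lambda q: -cost(q), lb, ub)      # upper bound via maximization
print(lower, -neg_upper)
```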
Abstract:
This paper considers decentralized spectrum sensing, i.e., detection of occupancy of the primary users' spectrum by a set of Cognitive Radio (CR) nodes, under a Bayesian set-up. The nodes use energy detection to make their individual decisions, which are combined at a Fusion Center (FC) using the K-out-of-N fusion rule. The channel from the primary transmitter to the CR nodes is assumed to undergo fading, while that from the nodes to the FC is assumed to be error-free. In this scenario, a novel concept termed the Error Exponent with a Confidence Level (EECL) is introduced to evaluate and compare the performance of different detection schemes. Expressions for the EECL under general fading conditions are derived. As a special case, it is shown that the conventional error exponent, both at the individual sensors and at the FC, is zero. Further, closed-form lower bounds on the EECL are derived under Rayleigh fading and lognormal shadowing. As an example application, the framework answers the question of whether to use pilot-signal based narrowband sensing, where the signal undergoes Rayleigh fading, or to sense over the entire bandwidth of a wideband signal, where the signal undergoes lognormal shadowing. Theoretical results are validated using Monte Carlo simulations. (C) 2015 Elsevier B.V. All rights reserved.
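The following Monte Carlo sketch, with purely illustrative parameters, shows the basic setup being analyzed: energy detection at N CR nodes over Rayleigh fading, with hard decisions fused by a K-out-of-N rule. Such a simulation is the kind of baseline against which analytical error-exponent/EECL expressions can be validated; it does not reproduce the paper's derivations.

```python
# Minimal Monte Carlo sketch (illustrative parameters, not the paper's setup):
# energy detection at N cognitive-radio nodes over Rayleigh fading, with hard
# decisions combined at the fusion center by a K-out-of-N rule.
import numpy as np

rng = np.random.default_rng(1)
N, K = 5, 3              # number of CR nodes, fusion threshold (K-out-of-N)
M = 20                   # samples per sensing interval at each node
snr_avg = 1.0            # average received SNR under Rayleigh fading
tau = 1.6 * M            # energy-detection threshold (illustrative)
prior_h1 = 0.5           # prior probability that the primary user is active
trials = 50_000

errors = 0
for _ in range(trials):
    h1 = rng.random() < prior_h1                       # true spectrum occupancy
    # Rayleigh fading => exponentially distributed instantaneous SNR per node.
    snr = rng.exponential(snr_avg, size=N) if h1 else np.zeros(N)
    # Received samples: (scaled) signal plus unit-variance Gaussian noise.
    sig = np.sqrt(snr)[:, None] * rng.standard_normal((N, M))
    noise = rng.standard_normal((N, M))
    energy = ((sig + noise) ** 2).sum(axis=1)          # energy statistic per node
    votes = (energy > tau).sum()                       # local hard decisions
    decision_h1 = votes >= K                           # K-out-of-N fusion at the FC
    errors += decision_h1 != h1

print("Monte Carlo probability of error at the FC:", errors / trials)
```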
Abstract:
The concept of a "Saturation Impulse" for rigid, perfectly plastic structures with finite deflections subjected to dynamic loading was put forward by Zhao, Yu and Fang (1994a). This paper extends the concept of the saturation impulse to the analysis of structures such as simply supported circular plates, simply supported and fully clamped square plates, and cylindrical shells subjected to rectangular pressure pulses in the medium load range. Both upper and lower bounds of nondimensional saturation impulses are presented.
Abstract:
The paper makes two major contributions to the theory of repeated games. First, we build a supergame oligopoly model where firms compete in supply functions, and we show how collusion sustainability is affected by the presence of a convex cost function, the magnitude of the slope of market demand, and the number of rivals. Then, we compare the results with those of the traditional Cournot reversion under the same structural characteristics. We find how, depending on the number of firms and the slope of the linear demand, collusion is easier to sustain under supply function competition than under Cournot competition. The conclusions of the models are simulated with data from the Spanish wholesale electricity market to predict lower bounds of the discount factors.
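For orientation only, the textbook grim-trigger benchmark (not the paper's supply-function model) states that collusion is sustainable in an infinitely repeated game whenever the common discount factor satisfies
\[
\delta \;\ge\; \delta^{*} \;=\; \frac{\pi^{D}-\pi^{C}}{\pi^{D}-\pi^{N}},
\]
where \(\pi^{C}\), \(\pi^{D}\) and \(\pi^{N}\) denote the per-period collusive, optimal-deviation and punishment (static Nash) profits; the lower bounds on discount factors predicted from the Spanish market data are critical thresholds of this kind, computed under supply function and Cournot reversion.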
Abstract:
In this paper we introduce four scenario Cluster based Lagrangian Decomposition (CLD) procedures for obtaining strong lower bounds on the (optimal) solution value of two-stage stochastic mixed 0-1 problems. At each iteration of the Lagrangian based procedures, the traditional aim consists of obtaining the solution value of the corresponding Lagrangian dual via solving scenario submodels once the nonanticipativity constraints have been dualized. Instead of considering a splitting-variable representation over the set of scenarios, we propose to decompose the model into a set of scenario clusters. We compare the computational performance of the four Lagrange multiplier updating procedures, namely the Subgradient Method, the Volume Algorithm, the Progressive Hedging Algorithm and the Dynamic Constrained Cutting Plane scheme, for different numbers of scenario clusters and different dimensions of the original problem. Our computational experience shows that the CLD bound and its computational effort depend on the number of scenario clusters considered. In any case, our results show that the CLD procedures outperform the traditional LD scheme for single scenarios both in the quality of the bounds and in computational effort. All the procedures have been implemented in an experimental C++ code. A broad computational experience is reported on a testbed of randomly generated instances by using the MIP solvers COIN-OR and CPLEX for the auxiliary mixed 0-1 cluster submodels, the latter solver within the open source engine COIN-OR. We also give computational evidence of the model-tightening effect that preprocessing techniques, cut generation and appending, and parallel computing tools have in stochastic integer optimization. Finally, we have observed that the plain use of both solvers does not provide the optimal solution, in affordable elapsed time, of the instances included in the testbed with which we have experimented, except for two toy instances. On the other hand, the proposed procedures provide strong lower bounds (or the same solution value) in a considerably shorter elapsed time than that required to obtain, by other means, a quasi-optimal solution of the original stochastic problem.
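As a toy illustration of the multiplier update at the heart of such procedures, the sketch below applies the subgradient method to the Lagrangian dual of a two-cluster problem in which the nonanticipativity (copy) constraint has been dualized. The cluster submodels here are convex with closed-form solutions, unlike the mixed 0-1 submodels in the paper that are handed to COIN-OR or CPLEX; the example only shows how the dual value yields a lower bound.

```python
# Minimal sketch (toy convex problem, not the paper's mixed 0-1 stochastic model):
# Lagrangian decomposition with the nonanticipativity ("copy") constraint x1 = x2
# dualized, and the multiplier updated by the subgradient method.
# Primal problem:  min (x - 1)^2 + (x - 3)^2   (optimal value 2 at x = 2)
# Split form:      min (x1 - 1)^2 + (x2 - 3)^2  s.t.  x1 = x2  (dualized with lam)

def solve_cluster_1(lam):           # argmin_x (x - 1)^2 + lam * x
    return 1.0 - lam / 2.0

def solve_cluster_2(lam):           # argmin_x (x - 3)^2 - lam * x
    return 3.0 + lam / 2.0

lam = 0.0
for t in range(1, 201):
    x1, x2 = solve_cluster_1(lam), solve_cluster_2(lam)
    dual_value = ((x1 - 1.0) ** 2 + lam * x1) + ((x2 - 3.0) ** 2 - lam * x2)
    subgrad = x1 - x2               # violation of the dualized constraint x1 = x2
    lam += (1.0 / t) * subgrad      # diminishing-step subgradient ascent on the dual

print("lower bound from the Lagrangian dual:", dual_value)   # approaches 2.0
```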
Abstract:
While some of the deepest results in nature are those that give explicit bounds between important physical quantities, some of the most intriguing and celebrated of such bounds come from fields where there is still a great deal of disagreement and confusion regarding even the most fundamental aspects of the theories. For example, in quantum mechanics, there is still no complete consensus as to whether the limitations associated with Heisenberg's Uncertainty Principle derive from an inherent randomness in physics, or rather from limitations in the measurement process itself, resulting from phenomena like back action. Likewise, the second law of thermodynamics makes a statement regarding the increase in entropy of closed systems, yet the theory itself has neither a universally-accepted definition of equilibrium, nor an adequate explanation of how a system with underlying microscopically Hamiltonian dynamics (reversible) settles into a fixed distribution.
Motivated by these physical theories, and perhaps their inconsistencies, in this thesis we use dynamical systems theory to investigate how the very simplest of systems, even with no physical constraints, are characterized by bounds that give limits to the ability to make measurements on them. Using an existing interpretation, we start by examining how dissipative systems can be viewed as high-dimensional lossless systems, and how taking this view necessarily implies the existence of a noise process that results from the uncertainty in the initial system state. This fluctuation-dissipation result plays a central role in a measurement model that we examine, in particular describing how noise is inevitably injected into a system during a measurement, noise that can be viewed as originating either from the randomness of the many degrees of freedom of the measurement device, or of the environment. This noise constitutes one component of measurement back action, and ultimately imposes limits on measurement uncertainty. Depending on the assumptions we make about active devices, and their limitations, this back action can be offset to varying degrees via control. It turns out that using active devices to reduce measurement back action leads to estimation problems that have non-zero uncertainty lower bounds, the most interesting of which arise when the observed system is lossless. One such lower bound, a main contribution of this work, can be viewed as a classical version of a Heisenberg uncertainty relation between the system's position and momentum. We finally also revisit the murky question of how macroscopic dissipation appears from lossless dynamics, and propose alternative approaches for framing the question using existing systematic methods of model reduction.
Abstract:
The current power grid is on the cusp of modernization due to the emergence of distributed generation and controllable loads, as well as renewable energy. On one hand, distributed and renewable generation is volatile and difficult to dispatch. On the other hand, controllable loads provide significant potential for compensating for the uncertainties. In a future grid where there are thousands or millions of controllable loads and a large portion of the generation comes from volatile sources like wind and solar, distributed control that shifts or reduces the power consumption of electric loads in a reliable and economic way would be highly valuable.
Load control needs to be conducted with network awareness. Otherwise, voltage violations and overloading of circuit devices are likely. To model these effects, network power flows and voltages have to be considered explicitly. However, the physical laws that determine power flows and voltages are nonlinear. Furthermore, while distributed generation and controllable loads are mostly located in distribution networks that are multiphase and radial, most of the power flow studies focus on single-phase networks.
This thesis focuses on distributed load control in multiphase radial distribution networks. In particular, we first study distributed load control without considering network constraints, and then consider network-aware distributed load control.
Distributed implementation of load control is the main challenge if network constraints can be ignored. In this case, we first ignore the uncertainties in renewable generation and load arrivals, and propose a distributed load control algorithm, Algorithm 1, that optimally schedules the deferrable loads to shape the net electricity demand. Deferrable loads refer to loads whose total energy consumption is fixed but whose energy usage can be shifted over time in response to network conditions. Algorithm 1 is a distributed gradient descent algorithm, and empirically converges to optimal deferrable load schedules within 15 iterations.
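A minimal sketch of this type of scheme (not the thesis's Algorithm 1 verbatim) is given below: projected gradient descent that shifts each load's fixed energy across time slots to flatten the aggregate net demand. For brevity only the per-load energy equality constraint is enforced in the projection; box constraints on instantaneous power are omitted.

```python
# Minimal sketch (illustrative data, not the thesis's algorithm verbatim):
# projected gradient descent that schedules deferrable loads to flatten net demand.
# Each load n must consume a fixed energy E[n] over T slots; the projection
# enforces only sum_t p_n(t) = E[n], and power limits are omitted for brevity.
import numpy as np

rng = np.random.default_rng(2)
T, N = 24, 50                            # time slots, number of deferrable loads
base = 5.0 + 2.0 * np.sin(np.linspace(0, 2 * np.pi, T))   # non-deferrable demand minus renewables
E = rng.uniform(1.0, 3.0, size=N)        # total energy each load must consume

p0 = np.repeat((E / T)[:, None], T, axis=1)   # uniform initial schedule
p = p0.copy()
step = 1.0 / (2.0 * N)                   # step size for the quadratic objective
for _ in range(15):                      # a handful of iterations suffices here
    agg = base + p.sum(axis=0)           # current net demand profile
    grad = 2.0 * agg                     # gradient of sum_t (net demand)^2 w.r.t. p_n(t)
    p -= step * grad                     # gradient step (identical for every load)
    p += (E[:, None] - p.sum(axis=1, keepdims=True)) / T   # project back onto sum_t p_n(t) = E[n]

print("net-demand variance before:", np.var(base + p0.sum(axis=0)))
print("net-demand variance after: ", np.var(base + p.sum(axis=0)))
```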
We then extend Algorithm 1 to a real-time setup where deferrable loads arrive over time, and only imprecise predictions about future renewable generation and load are available at the time of decision making. The real-time algorithm Algorithm 2 is based on model-predictive control: Algorithm 2 uses updated predictions on renewable generation as the true values, and computes a pseudo load to simulate future deferrable load. The pseudo load consumes 0 power at the current time step, and its total energy consumption equals the expectation of future deferrable load total energy request.
Network constraints, e.g., transformer loading constraints and voltage regulation constraints, bring significant challenges to the load control problem since power flows and voltages are governed by nonlinear physical laws. Moreover, distribution networks are usually multiphase and radial. Two approaches are explored to overcome this challenge: one based on convex relaxation and the other that seeks a locally optimal load schedule.
To explore the convex relaxation approach, a novel but equivalent power flow model, the branch flow model, is developed, and a semidefinite programming relaxation, called BFM-SDP, is obtained using the branch flow model. BFM-SDP is mathematically equivalent to a standard convex relaxation proposed in the literature, but numerically is much more stable. Empirical studies show that BFM-SDP is numerically exact for the IEEE 13-, 34-, 37-, 123-bus networks and a real-world 2065-bus network, while the standard convex relaxation is numerically exact for only two of these networks.
Theoretical guarantees on the exactness of convex relaxations are provided for two types of networks: single-phase radial alternative-current (AC) networks, and single-phase mesh direct-current (DC) networks. In particular, for single-phase radial AC networks, we prove that a second-order cone program (SOCP) relaxation is exact if voltage upper bounds are not binding; we also modify the optimal load control problem so that its SOCP relaxation is always exact. For single-phase mesh DC networks, we prove that an SOCP relaxation is exact if 1) voltage upper bounds are not binding, or 2) voltage upper bounds are uniform and power injection lower bounds are strictly negative; we also modify the optimal load control problem so that its SOCP relaxation is always exact.
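For concreteness, the sketch below sets up the second-order cone relaxation of the single-phase branch flow (DistFlow) equations on a toy three-bus radial feeder with made-up impedances and loads, using cvxpy. It is not the thesis's multiphase BFM-SDP code; it only illustrates how the nonconvex relation l = (P^2 + Q^2)/v on each line is relaxed to a second-order cone constraint.

```python
# Minimal sketch (toy single-phase radial feeder with made-up data; not the
# thesis's multiphase BFM-SDP implementation): SOCP relaxation of the branch flow
# (DistFlow) model, with l >= (P^2 + Q^2)/v written as a second-order cone.
import cvxpy as cp

# Feeder: bus 0 (substation) -> bus 1 -> bus 2, per-unit impedances and loads.
lines = [(0, 1), (1, 2)]
r = {(0, 1): 0.01, (1, 2): 0.02}
x = {(0, 1): 0.02, (1, 2): 0.04}
p_load = {1: 0.3, 2: 0.2}
q_load = {1: 0.1, 2: 0.05}

v = {i: cp.Variable(nonneg=True) for i in range(3)}   # squared voltage magnitudes
P = {e: cp.Variable() for e in lines}                 # sending-end real power
Q = {e: cp.Variable() for e in lines}                 # sending-end reactive power
l = {e: cp.Variable(nonneg=True) for e in lines}      # squared current magnitudes

cons = [v[0] == 1.0]                                  # substation voltage fixed
for i in (1, 2):
    cons += [v[i] >= 0.9 ** 2, v[i] <= 1.1 ** 2]      # voltage limits
for (i, j) in lines:
    down = [e for e in lines if e[0] == j]            # lines leaving bus j
    cons += [
        # real / reactive power balance at bus j
        P[(i, j)] - r[(i, j)] * l[(i, j)] == p_load[j] + sum(P[e] for e in down),
        Q[(i, j)] - x[(i, j)] * l[(i, j)] == q_load[j] + sum(Q[e] for e in down),
        # voltage drop along the line
        v[j] == v[i] - 2 * (r[(i, j)] * P[(i, j)] + x[(i, j)] * Q[(i, j)])
                + (r[(i, j)] ** 2 + x[(i, j)] ** 2) * l[(i, j)],
        # SOC relaxation of l * v = P^2 + Q^2
        cp.norm(cp.hstack([2 * P[(i, j)], 2 * Q[(i, j)], l[(i, j)] - v[i]])) <= l[(i, j)] + v[i],
    ]

loss = sum(r[e] * l[e] for e in lines)                # total line losses
prob = cp.Problem(cp.Minimize(loss), cons)
prob.solve()
print("relaxation optimal losses:", prob.value)
print("squared voltages:", {i: round(v[i].value, 4) for i in v})
```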
To seek a locally optimal load schedule, a distributed gradient-descent algorithm, Algorithm 9, is proposed. The suboptimality gap of the algorithm is rigorously characterized and is close to 0 for practical networks. Furthermore, unlike the convex relaxation approach, Algorithm 9 ensures a feasible solution. The gradients used in Algorithm 9 are estimated based on a linear approximation of the power flow, which is derived under the following assumptions: 1) line losses are negligible; and 2) voltages are reasonably balanced. Both assumptions are satisfied in practical distribution networks. Empirical results show that Algorithm 9 obtains a more than 70x speedup over the convex relaxation approach, at the cost of a suboptimality within numerical precision.
Abstract:
The matrices studied here are positive stable (or, briefly, stable). These are matrices, real or complex, whose eigenvalues have positive real parts. A theorem of Lyapunov states that A is stable if and only if there exists H > 0 such that AH + HA* = I. Let A be a stable matrix. Three aspects of the Lyapunov transformation L_A : H → AH + HA* are discussed.
1. Let C1(A) = {AH + HA* : H ≥ 0} and C2(A) = {H : AH + HA* ≥ 0}. The problems of determining the cones C1(A) and C2(A) are still unsolved. Using solvability theory for linear equations over cones, it is proved that C1(A) is the polar of C2(A*), and it is also shown that C1(A) = C1(A⁻¹). The inertia assumed by matrices in C1(A) is characterized.
2. The index of dissipation of A was defined to be the maximum number of equal eigenvalues of H, where H runs through all matrices in the interior of C2(A). Upper and lower bounds, as well as some properties of this index, are given.
3. We consider the minimal eigenvalue of the Lyapunov transform AH + HA*, where H varies over the set of all positive semi-definite matrices whose largest eigenvalue is less than or equal to one. Denote it by ψ(A). It is proved that if A is Hermitian and has eigenvalues μ1 ≥ μ2 ≥ … ≥ μn > 0, then ψ(A) = -(μ1 - μn)²/(4(μ1 + μn)). The value of ψ(A) is also determined in case A is a normal, stable matrix. Then ψ(A) can be expressed in terms of at most three of the eigenvalues of A. If A is an arbitrary stable matrix, then upper and lower bounds for ψ(A) are obtained.
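A small numerical illustration (not part of the thesis) of the Lyapunov criterion quoted above: for a positive stable A, solving AH + HA* = I with SciPy and checking that the solution H is positive definite.

```python
# Numerical check of the Lyapunov criterion stated above: for a positive stable A,
# the equation A H + H A* = I has a solution H, and that solution is positive definite.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[2.0, 1.0],
              [-1.0, 3.0]])                      # eigenvalues 2.5 +/- i*sqrt(3)/2, so positive stable
print("eigenvalues of A:", np.linalg.eigvals(A))

# solve_continuous_lyapunov(A, Q) solves A H + H A^H = Q; take Q = I.
H = solve_continuous_lyapunov(A, np.eye(2))
print("eigenvalues of H (should be > 0):", np.linalg.eigvalsh(H))
print("residual:", np.linalg.norm(A @ H + H @ A.conj().T - np.eye(2)))
```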
Abstract:
We give a one-pass, O~(m^{1-2/k})-space algorithm for estimating the k-th frequency moment of a data stream for any real k>2. Together with known lower bounds, this resolves the main problem left open by Alon, Matias, Szegedy, STOC'96. Our algorithm enables deletions as well as insertions of stream elements.
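For reference, the quantity being estimated is the standard k-th frequency moment of the stream,
\[
F_k \;=\; \sum_{i} f_i^{\,k},
\]
where \(f_i\) denotes the number of occurrences of item i in the stream.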
Abstract:
We consider a fault model of Boolean gates, both classical and quantum, where some of the inputs may not be connected to the actual gate hardware. This model is somewhat similar to the stuck-at model which is a very popular model in testing Boolean circuits. We consider the problem of detecting such faults; the detection algorithm can query the faulty gate and its complexity is the number of such queries. This problem is related to determining the sensitivity of Boolean functions. We show how quantum parallelism can be used to detect such faults. Specifically, we show that a quantum algorithm can detect such faults more efficiently than a classical algorithm for a Parity gate and an AND gate. We give explicit constructions of quantum detector algorithms and show lower bounds for classical algorithms. We show that the model for detecting such faults is similar to algebraic decision trees and extend some known results from quantum query complexity to prove some of our results.
Abstract:
We describe a general technique for determining upper bounds on maximal values (or lower bounds on minimal costs) in stochastic dynamic programs. In this approach, we relax the nonanticipativity constraints that require decisions to depend only on the information available at the time a decision is made and impose a "penalty" that punishes violations of nonanticipativity. In applications, the hope is that this relaxed version of the problem will be simpler to solve than the original dynamic program. The upper bounds provided by this dual approach complement lower bounds on values that may be found by simulating with heuristic policies. We describe the theory underlying this dual approach and establish weak duality, strong duality, and complementary slackness results that are analogous to the duality results of linear programming. We also study properties of good penalties. Finally, we demonstrate the use of this dual approach in an adaptive inventory control problem with an unknown and changing demand distribution and in valuing options with stochastic volatilities and interest rates. These are complex problems of significant practical interest that are quite difficult to solve to optimality. In these examples, our dual approach requires relatively little additional computation and leads to tight bounds on the optimal values. © 2010 INFORMS.
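To make the complementary-bounds idea concrete, the sketch below computes, for a toy Bermudan put with made-up parameters (not one of the paper's examples), the zero-penalty information-relaxation upper bound E[max over t of the discounted payoff] alongside a lower bound from a simple nonanticipative threshold heuristic. Good penalties, the subject of the paper, tighten the upper bound below this perfect-information value.

```python
# Minimal sketch (toy Bermudan put under made-up parameters; not the paper's
# examples): zero-penalty information relaxation. Dropping nonanticipativity and
# letting the decision maker see the whole path gives the pathwise bound
#     E[ max_t discounted payoff at t ]  >=  optimal value,
# which complements the lower bound from simulating any feasible heuristic policy.
import numpy as np

rng = np.random.default_rng(3)
S0, K, r_rate, sigma = 100.0, 100.0, 0.05, 0.2
T, steps, paths = 1.0, 50, 20_000
dt = T / steps
disc = np.exp(-r_rate * dt * np.arange(steps + 1))        # discount factor per exercise date

# Simulate geometric Brownian motion price paths.
z = rng.standard_normal((paths, steps))
logS = np.log(S0) + np.cumsum((r_rate - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
S = np.hstack([np.full((paths, 1), S0), np.exp(logS)])    # shape (paths, steps + 1)
payoff = np.maximum(K - S, 0.0) * disc                    # discounted exercise value at each date

# Upper bound: perfect-information (zero-penalty) relaxation.
upper = payoff.max(axis=1).mean()

# Lower bound: a simple nonanticipative heuristic -- exercise the first time the
# undiscounted payoff exceeds a fixed threshold, otherwise decide at maturity.
threshold = 10.0
exercise = np.maximum(K - S, 0.0) >= threshold
exercise[:, -1] = True                                    # forced decision at maturity
first = exercise.argmax(axis=1)
lower = payoff[np.arange(paths), first].mean()

print(f"heuristic lower bound: {lower:.3f}   perfect-information upper bound: {upper:.3f}")
```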