986 results for Subset Sum Problem
Abstract:
Motivated by the idea of designing a structure for a desired mode shape, aimed at applications such as resonant sensors, actuators and vibration confinement, we present the inverse mode shape problem for bars, beams and plates in this work. The objective is to determine the cross-sectional profile of these structures, given a mode shape, the boundary conditions and the mass. The contribution of this article is twofold: (i) a numerical method for solving this problem in the finite element framework when a valid mode shape is provided, for both the linear and nonlinear versions of the problem; (ii) an analytical result proving existence and uniqueness of the solution in the case of bars. This article also highlights the important question of the validity of a mode shape for a structure with given boundary conditions.
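For intuition, a minimal worked sketch (in LaTeX) of the bar case, assuming the standard longitudinal-vibration model with Young's modulus E, mass density rho, prescribed mode shape phi(x) and natural frequency omega (the paper's finite element formulation may differ in detail): the eigenvalue equation becomes a first-order equation for the unknown area profile A(x),

\frac{d}{dx}\!\left( E\,A(x)\,\phi'(x) \right) + \rho\,\omega^{2} A(x)\,\phi(x) = 0
\quad\Longrightarrow\quad
\frac{A'(x)}{A(x)} = -\,\frac{E\,\phi''(x) + \rho\,\omega^{2}\,\phi(x)}{E\,\phi'(x)},

so wherever phi'(x) is nonzero, A(x) is determined up to a multiplicative constant, which the prescribed total mass fixes; points where phi' vanishes are precisely where the validity of the prescribed mode shape becomes an issue.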
Abstract:
We consider a visual search problem studied by Sripati and Olson where the objective is to identify an oddball image embedded among multiple distractor images as quickly as possible. We model this visual search task as an active sequential hypothesis testing problem (ASHT problem). Chernoff (1959) proposed a policy for which the expected delay to decision is asymptotically optimal, the asymptotics being that of vanishing error probabilities. We first prove a stronger property on the moments of the delay until a decision, under the same asymptotics. Applying the result to the visual search problem, we then propose a "neuronal metric" on the measured neuronal responses that captures the discriminability between images. From an empirical study we obtain a remarkable correlation (r = 0.90) between the proposed neuronal metric and the speed of discrimination between the images. Although this correlation is lower than that of the L1 metric used by Sripati and Olson, our metric has the advantage of being firmly grounded in formal decision theory.
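For concreteness, a hedged sketch (in Python) of two candidate discriminability measures between mean firing-rate vectors of two images: the L1 distance used by Sripati and Olson, and a decision-theoretic index based on the Kullback-Leibler divergence rate between Poisson spike processes. The KL-based index is an illustrative assumption here, not necessarily the authors' exact neuronal metric.

import numpy as np

def l1_metric(r1, r2):
    # L1 distance between mean firing-rate vectors (the metric of Sripati and Olson).
    return float(np.sum(np.abs(np.asarray(r1, float) - np.asarray(r2, float))))

def poisson_kl_index(r1, r2):
    # Illustrative decision-theoretic alternative (an assumption, not the paper's
    # exact metric): sum over neurons of KL(Poisson(r1) || Poisson(r2)), which for
    # Poisson rates equals r1*log(r1/r2) - r1 + r2.  Rates must be strictly positive.
    r1, r2 = np.asarray(r1, float), np.asarray(r2, float)
    return float(np.sum(r1 * np.log(r1 / r2) - r1 + r2))

# Example: firing rates (spikes/s) of three neurons for two images
print(l1_metric([12.0, 5.0, 8.0], [9.0, 7.0, 8.5]),
      poisson_kl_index([12.0, 5.0, 8.0], [9.0, 7.0, 8.5]))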
Abstract:
In pay-per-click sponsored search auctions, which are currently extensively used by search engines, the auction for a keyword involves a certain number of advertisers (say k) competing for available slots (say m) to display their advertisements (ads for short). A sponsored search auction for a keyword is typically conducted for a number of rounds (say T). There are click probabilities mu(ij) associated with each agent-slot pair (agent i and slot j). The search engine would like to maximize the social welfare of the advertisers, that is, the sum of the values of the advertisers for the keyword. However, the search engine does not know the true values the advertisers have for a click to their respective advertisements, nor does it know the click probabilities. A key problem for the search engine therefore is to learn these click probabilities during the initial rounds of the auction and also to ensure that the auction mechanism is truthful. Mechanisms for addressing such learning and incentive issues have recently been introduced. These mechanisms, due to their connection to the multi-armed bandit problem, are aptly referred to as multi-armed bandit (MAB) mechanisms. When m = 1, exact characterizations of truthful MAB mechanisms are available in the literature. Recent work has focused on the more realistic but non-trivial general case m > 1, and a few promising results have started appearing. In this article, we consider this general case m > 1 and prove several interesting results. Our contributions include: (1) When the mu(ij) are unconstrained, we prove that any truthful mechanism must satisfy strong pointwise monotonicity and show that the regret will be Theta(T^(2/3)) for such mechanisms. (2) When the clicks on the ads follow a certain click precedence property, we show that weak pointwise monotonicity is necessary for MAB mechanisms to be truthful. (3) If the search engine has a certain coarse pre-estimate of the mu(ij) values and wishes to update them during the course of the T rounds, we show that weak pointwise monotonicity and type-I separatedness are necessary, while weak pointwise monotonicity and type-II separatedness are sufficient, for the MAB mechanisms to be truthful. (4) If the click probabilities are separable into agent-specific and slot-specific terms, we provide a characterization of MAB mechanisms that are truthful in expectation.
Abstract:
We derive exact expressions for the zeroth and the first three spectral moment sum rules for the retarded Green's function and for the zeroth and the first spectral moment sum rules for the retarded self-energy of the inhomogeneous Bose-Hubbard model in nonequilibrium, when the local on-site repulsion and the chemical potential are time-dependent, and in the presence of an external time-dependent electromagnetic field. We also evaluate these expressions for the homogeneous case in equilibrium, where all time dependence and external fields vanish. Unlike similar sum rules for the Fermi-Hubbard model, in the Bose-Hubbard model case, the sum rules often depend on expectation values that cannot be determined simply from parameters in the Hamiltonian like the interaction strength and chemical potential but require knowledge of equal-time many-body expectation values from some other source. We show how one can approximately evaluate these expectation values for the Mott-insulating phase in a systematic strong-coupling expansion in powers of the hopping divided by the interaction. We compare the exact moment relations to the calculated moments of spectral functions determined from a variety of different numerical approximations and use them to benchmark their accuracy. DOI: 10.1103/PhysRevA.87.013628
Abstract:
The n-interior point variant of the Erdős-Szekeres problem is to show the following: for any n, n ≥ 1, every point set in the plane with a sufficient number of interior points contains a convex polygon containing exactly n interior points. This has been proved only for n ≤ 3. In this paper, we prove it for point sets having at most a logarithmic number of convex layers. We also show that for any point set containing at least n interior points, there exists a 2-convex polygon that contains exactly n interior points.
Abstract:
In this paper, we study duty cycling and power management in a network of energy harvesting sensor (EHS) nodes. We consider a one-hop network, where K EHS nodes send data to a destination over a wireless fading channel. The goal is to find the optimum duty cycling and power scheduling across the nodes that maximizes the average sum data rate, subject to energy neutrality at each node. We adopt a two-stage approach to simplify the problem. In the inner stage, we solve the problem of optimal duty cycling of the nodes, subject to the short-term power constraint set by the outer stage. The outer stage sets the short-term power constraints on the inner stage to maximize the long-term expected sum data rate, subject to long-term energy neutrality at each node. Albeit suboptimal, our solutions turn out to have a surprisingly simple form: the duty cycle allotted to each node by the inner stage is simply the fractional allotted power of that node relative to the total allotted power. The sum power allotted is a clipped version of the sum harvested power across all the nodes. The average sum throughput thus ultimately depends only on the sum harvested power and its statistics. We illustrate the performance improvement offered by the proposed solution compared to other naive schemes via Monte-Carlo simulations.
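A minimal sketch (in Python) of the structure stated above: the inner stage makes each node's duty cycle its fraction of the total allotted power, and the outer stage clips the sum harvested power. The clipping level, the proportional split of the clipped power across nodes, and the rate expression are assumptions made purely for illustration.

import numpy as np

def duty_cycles(p_alloc):
    # Inner stage: duty cycle of node k = allotted power of node k / total allotted power.
    p_alloc = np.asarray(p_alloc, float)
    return p_alloc / p_alloc.sum()

def sum_rate(p_alloc, gains, noise=1.0):
    # TDMA-style sum rate: node k is active for a fraction tau_k of the slot and,
    # given average power p_alloc[k], transmits at instantaneous power equal to the
    # total allotted power during that fraction (an illustrative reading).
    tau = duty_cycles(p_alloc)
    p_sum = float(np.sum(p_alloc))
    return float(np.sum(tau * np.log2(1.0 + p_sum * np.asarray(gains, float) / noise)))

# Example: the outer stage clips the sum harvested power at an assumed level and
# splits it across nodes in proportion to their harvested power (also an assumption).
harvested = np.array([0.3, 0.8, 0.5])
p_total = min(harvested.sum(), 1.2)              # assumed clipping level
p_alloc = harvested / harvested.sum() * p_total
print(duty_cycles(p_alloc), sum_rate(p_alloc, gains=[1.0, 0.6, 1.4]))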
Abstract:
In this paper, we study the asymptotic behavior of an optimal control problem for the time-dependent Kirchhoff-Love plate whose middle surface has a very rough boundary. We identify the limit problem which is an optimal control problem for the limit equation with a different cost functional.
Abstract:
In this paper we consider the downlink of an OFDM cellular system. The objective is to maximise the system utility by means of fractional frequency reuse and interference planning. The problem is a joint scheduling and power allocation problem. Using a gradient scheduling scheme, the above problem is transformed into the problem of maximising a weighted sum-rate at each time slot. At each slot, an iterative scheduling and power allocation algorithm is employed to address the weighted sum-rate maximisation problem. The power allocation problem in the above algorithm is a nonconvex optimisation problem. We study several algorithms that can tackle this part of the problem. We propose two modifications to the above algorithms to address practical and computational feasibility. Finally, we compare the performance of our algorithm with some existing algorithms in terms of the achieved system utility. We show that the practical considerations do not affect the system performance adversely.
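As a simplified sketch (in Python) of the per-slot weighted sum-rate step: with user weights supplied by the gradient scheduler and equal power across subcarriers, each subcarrier is assigned to the user maximising weight times rate. This is a single-cell illustration only; the inter-cell power allocation, which is the nonconvex part studied in the paper, is omitted, and the names and parameters here are assumptions.

import numpy as np

def schedule_slot(weights, gains, total_power, noise=1.0):
    # weights[i]: gradient-scheduling weight of user i (marginal utility)
    # gains[i, n]: channel power gain of user i on subcarrier n
    weights = np.asarray(weights, float)
    gains = np.asarray(gains, float)
    n_sub = gains.shape[1]
    p = total_power / n_sub                      # equal power per subcarrier (assumed)
    rates = np.log2(1.0 + p * gains / noise)     # rate of each user on each subcarrier
    return np.argmax(weights[:, None] * rates, axis=0)   # chosen user per subcarrier

# Example: 3 users, 4 subcarriers
print(schedule_slot([1.0, 0.5, 2.0], np.random.rand(3, 4), total_power=4.0))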
Abstract:
The problem of designing good space-time block codes (STBCs) with low maximum-likelihood (ML) decoding complexity has gathered much attention in the literature. All the known low ML decoding complexity techniques utilize the same approach of exploiting either the multigroup decodable or the fast-decodable (conditionally multigroup decodable) structure of a code. We refer to this well-known technique of decoding STBCs as conditional ML (CML) decoding. In this paper, we introduce a new framework to construct ML decoders for STBCs based on the generalized distributive law (GDL) and the factor-graph-based sum-product algorithm. We say that an STBC is fast GDL decodable if the order of GDL decoding complexity of the code, with respect to the constellation size M, is strictly less than M^lambda, where lambda is the number of independent symbols in the STBC. We give sufficient conditions for an STBC to admit fast GDL decoding, and show that both multigroup and conditionally multigroup decodable codes are fast GDL decodable. For any STBC, whether fast GDL decodable or not, we show that the GDL decoding complexity is strictly less than the CML decoding complexity. For instance, for any STBC obtained from cyclic division algebras which is not multigroup or conditionally multigroup decodable, the GDL decoder provides about 12 times reduction in complexity compared to the CML decoder. Similarly, for the Golden code, which is conditionally multigroup decodable, the GDL decoder is only half as complex as the CML decoder.
Abstract:
Outlier detection in high-dimensional categorical data has been a problem of much interest due to the extensive use of qualitative features for describing data across various application areas. Though there exist various established methods for dealing with the dimensionality aspect through feature selection on numerical data, the categorical domain is still being actively explored. As outlier detection is generally considered an unsupervised learning problem due to the lack of knowledge about the nature of various types of outliers, the related feature selection task also needs to be handled in a similar manner. This motivates the need to develop an unsupervised feature selection algorithm for efficient detection of outliers in categorical data. Addressing this aspect, we propose a novel feature selection algorithm based on the mutual information measure and entropy computation. The redundancy among the features is characterized using the mutual information measure so as to identify a suitable feature subset with less redundancy. The performance of the proposed algorithm in comparison with information gain-based feature selection shows its effectiveness for outlier detection. The efficacy of the proposed algorithm is demonstrated on various high-dimensional benchmark data sets employing two existing outlier detection methods.
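A small sketch (in Python) of the kind of mutual-information-based redundancy pruning described above, for categorical features: repeatedly discard the feature with the highest average mutual information with the remaining ones. This is an illustrative variant under assumed conventions, not the paper's exact algorithm.

import numpy as np
from collections import Counter

def entropy(col):
    counts = np.array(list(Counter(col).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def mutual_info(a, b):
    # I(A; B) = H(A) + H(B) - H(A, B) for categorical columns a, b
    return entropy(a) + entropy(b) - entropy(list(zip(a, b)))

def select_features(columns, k):
    # columns: list of categorical columns (lists of labels); keep the k least-redundant features
    keep = list(range(len(columns)))
    while len(keep) > k:
        redundancy = {i: np.mean([mutual_info(columns[i], columns[j])
                                  for j in keep if j != i]) for i in keep}
        keep.remove(max(redundancy, key=redundancy.get))
    return keep

# Example: features 0 and 1 are duplicates, so one of them is dropped first
cols = [list("aabbab"), list("aabbab"), list("xyxyxx")]
print(select_features(cols, 2))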
Abstract:
Wilking has recently shown that one can associate a Ricci flow invariant cone of curvature operators, which are nonnegative in a suitable sense, to every invariant subset. In this article we show that if the invariant subset is such that the associated cone is closed, and one considers the corresponding cone of curvature operators that are positive in the appropriate sense, then one of the following two possibilities holds: (a) the connected sum of any two Riemannian manifolds with curvature operators in this cone also admits a metric with curvature operator in the cone; (b) the normalized Ricci flow on any compact Riemannian manifold with curvature operator in the cone converges to a metric of constant positive sectional curvature. We also point out that for an arbitrary subset, the associated cone is contained in the cone of curvature operators with nonnegative isotropic curvature.
Abstract:
We address the problem of sampling and reconstruction of two-dimensional (2-D) finite-rate-of-innovation (FRI) signals. We propose a three-channel sampling method for efficiently solving the problem. We consider the sampling of a stream of 2-D Dirac impulses and a sum of 2-D unit-step functions. We propose a 2-D causal exponential function as the sampling kernel. By causality in 2-D, we mean that the function has its support restricted to the first quadrant. The advantage of using a multichannel sampling method with a causal exponential sampling kernel is that standard annihilating-filter or root-finding algorithms are not required. Further, the proposed method has an inexpensive hardware implementation and is numerically stable as the number of Dirac impulses increases.
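A brief sketch (in Python) of the kind of kernel described above: a 2-D exponential supported on the first quadrant, used here to filter a stream of 2-D Dirac impulses and sample the result on a grid. The decay parameters and the single-channel measurement model are assumptions for illustration; the paper itself uses a three-channel scheme.

import numpy as np

def causal_exp_kernel(x, y, a=1.0, b=1.0):
    # 2-D "causal" exponential: support restricted to the first quadrant (x >= 0, y >= 0)
    return np.where((x >= 0) & (y >= 0), np.exp(-a * x - b * y), 0.0)

def sample_dirac_stream(impulses, grid_x, grid_y, a=1.0, b=1.0):
    # impulses: list of (x_k, y_k, c_k); each contributes c_k * kernel(X - x_k, Y - y_k)
    X, Y = np.meshgrid(grid_x, grid_y, indexing="ij")
    out = np.zeros_like(X, dtype=float)
    for xk, yk, ck in impulses:
        out += ck * causal_exp_kernel(X - xk, Y - yk, a, b)
    return out

# Example: two Dirac impulses observed on a 5x5 sampling grid
grid = np.arange(5, dtype=float)
print(sample_dirac_stream([(0.4, 1.2, 1.0), (2.5, 3.1, 0.7)], grid, grid).round(3))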
Exact internal controllability for a hyperbolic problem in a domain with highly oscillating boundary
Abstract:
In this paper, using the Hilbert Uniqueness Method (HUM), we study the exact controllability problem for the wave equation in a three-dimensional horizontal domain bounded at the bottom by a smooth wall and at the top by a rough wall. The latter is assumed to consist of a plane wall covered with periodically distributed asperities whose size depends on a small parameter epsilon > 0 and whose height is fixed. Our aim is to obtain exact controllability for the homogenized equation. In the process, we study the asymptotic analysis of the wave equation in two setups, namely the solution by the standard weak formulation and the solution by the transposition method.
Abstract:
The random eigenvalue problem arises in frequency and mode shape determination for a linear system with uncertainties in structural properties. Among several methods of characterizing this random eigenvalue problem, one computationally fast method that gives good accuracy is a weak formulation using polynomial chaos expansion (PCE). In this method, the eigenvalues and eigenvectors are expanded in PCE, and the residual is minimized by a Galerkin projection. The goals of the current work are (i) to implement this PCE-characterized random eigenvalue problem in dynamic response calculations under random loading and (ii) to explore the computational advantages and challenges. In the proposed method, the response quantities are also expressed in PCE, followed by a Galerkin projection. A numerical comparison with a perturbation method and Monte Carlo simulation shows that when the loading has a random amplitude but deterministic frequency content, the proposed method gives more accurate results than a first-order perturbation method and accuracy comparable to Monte Carlo simulation at a lower computational cost. However, as the frequency content of the loading becomes random, or for general random process loadings, the method loses its accuracy and computational efficiency. Issues in implementation, limitations, and further challenges are also addressed.
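For intuition, a toy comparison (in Python) between a first-order perturbation estimate and Monte Carlo for a small random eigenvalue problem with a single Gaussian stiffness parameter. This is not the paper's PCE formulation; the matrices and the perturbation size are assumptions for illustration.

import numpy as np
from scipy.linalg import eigh

# Toy problem: K(xi) = K0 + xi*K1 with M fixed and xi ~ N(0, s^2)
K0 = np.array([[3.0, -1.0], [-1.0, 2.0]])
K1 = np.array([[1.0,  0.0], [ 0.0, 0.5]])
M = np.eye(2)
s = 0.1

# First-order perturbation: lambda_i(xi) ~ lambda_i0 + xi * v_i0^T K1 v_i0
lam0, V0 = eigh(K0, M)                       # columns of V0 are M-orthonormal eigenvectors
dlam = np.einsum("ji,jk,ki->i", V0, K1, V0)  # sensitivities v_i0^T K1 v_i0
print("perturbation mean/std:", lam0, np.abs(dlam) * s)

# Monte Carlo reference
rng = np.random.default_rng(0)
samples = np.array([eigh(K0 + x * K1, M, eigvals_only=True)
                    for x in rng.normal(0.0, s, 5000)])
print("Monte Carlo mean/std: ", samples.mean(axis=0), samples.std(axis=0))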
Abstract:
After a brief discussion of the history of the problem, we propose a generalization of the map coloring problem to higher dimensions.