46 results for sociospatial inequality
Abstract:
This paper presents a novel Second Order Cone Programming (SOCP) formulation for large-scale binary classification tasks. Assuming that the class conditional densities are mixture distributions, where each component of the mixture has a spherical covariance, the second order statistics of the components can be estimated efficiently using clustering algorithms like BIRCH. For each cluster, the second order moments are used to derive a second order cone constraint via a Chebyshev-Cantelli inequality. This constraint ensures that any data point in the cluster is classified correctly with high probability. This leads to a large margin SOCP formulation whose size depends on the number of clusters rather than the number of training data points. Hence, the proposed formulation scales well to large datasets compared to the state-of-the-art classifiers, Support Vector Machines (SVMs). Experiments on real-world and synthetic datasets show that the proposed algorithm outperforms SVM solvers in terms of training time while achieving similar accuracies.
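For reference, the cluster-level constraint described above can be sketched as follows, with illustrative notation not taken from the paper: (μ_j, σ_j²I) the estimated mean and spherical covariance of cluster j with label y_j ∈ {−1, +1}, (w, b) the separating hyperplane, and η the required probability of correct classification. By the Chebyshev-Cantelli inequality, the chance constraint on the left is implied by the second order cone constraint on the right:

    \Pr\bigl( y_j (w^{\top} X_j + b) \ge 0 \bigr) \ge \eta
    \quad\Longleftarrow\quad
    y_j\,(w^{\top}\mu_j + b) \;\ge\; \kappa\,\sigma_j\,\lVert w \rVert_2,
    \qquad
    \kappa = \sqrt{\eta/(1-\eta)}.

One such cone constraint per cluster replaces the per-point constraints, which is why the size of the resulting SOCP grows with the number of clusters rather than the number of training points.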
Abstract:
Fuzzy multiobjective programming for a deterministic case involves maximizing the minimum goal satisfaction level among conflicting goals of different stakeholders using the max-min approach. Uncertainty due to randomness in a fuzzy multiobjective program may be addressed by modifying the constraints using a probabilistic inequality (e.g., Chebyshev's inequality) or by adding new constraints based on statistical moments (e.g., skewness). Such modifications may reduce the optimal value of the system performance. In the present study, a methodology is developed that allows some violation of the newly added and modified constraints and then minimizes this violation while maximizing the minimum goal satisfaction level. Fuzzy goal programming is used to solve the multiobjective model. The proposed methodology is demonstrated with an application in the field of Waste Load Allocation (WLA) in a river system.
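A minimal sketch of the max-min step and the Chebyshev-type constraint modification mentioned above, under illustrative notation (λ the minimum satisfaction level, μ_i the membership function of goal i, g(x, ξ) a random constraint function, ε the allowed violation probability):

    \max_{x \in X,\ 0 \le \lambda \le 1} \ \lambda
    \quad \text{s.t.} \quad \mu_i(x) \ \ge\ \lambda \quad \forall i,

and a random constraint required to hold with probability at least 1 − ε can be replaced, via the one-sided Chebyshev (Cantelli) inequality, by the deterministic surrogate

    \mathbb{E}[\,g(x,\xi)\,] \;+\; \sqrt{\tfrac{1-\varepsilon}{\varepsilon}}\;\sigma[\,g(x,\xi)\,] \;\le\; 0.

Such surrogates tighten the feasible region, which is the reduction in optimal system performance that the proposed violation-minimizing formulation is meant to relieve.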
Abstract:
This paper presents a method for the placement of Phasor Measurement Units (PMUs) that ensures the monitoring of vulnerable buses, which are identified from a transient stability analysis of the overall system. Real-time monitoring of phase angles across different nodes indicates proximity to instability, and this purpose is best served if the PMUs are placed at the most vulnerable buses. The issue is to identify the key buses where the PMUs should be placed when transient stability prediction under various disturbances is taken into account. An Integer Linear Programming technique with equality and inequality constraints is used to find the optimal placement set, with the key buses identified from the transient stability analysis. Results on the IEEE 14-bus system are presented to illustrate the proposed approach.
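Placement problems of this kind are commonly written as a small binary program; a generic sketch under assumed notation (x_j = 1 if a PMU is installed at bus j, a_ij = 1 if buses i and j coincide or are adjacent, V the set of vulnerable buses from the transient stability analysis; the paper's exact constraint set is not reproduced here):

    \min_{x \in \{0,1\}^{n}} \ \sum_{j=1}^{n} x_j
    \quad \text{s.t.} \quad
    \sum_{j=1}^{n} a_{ij}\, x_j \ \ge\ 1 \qquad \forall\, i \in V,

i.e., every vulnerable bus must be observed by at least one installed PMU.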
Abstract:
We develop an online actor-critic reinforcement learning algorithm with function approximation for a problem of control under inequality constraints. We consider the long-run average cost Markov decision process (MDP) framework in which both the objective and the constraint functions are suitable policy-dependent long-run averages of certain sample path functions. The Lagrange multiplier method is used to handle the inequality constraints. We prove the asymptotic almost sure convergence of our algorithm to a locally optimal solution. We also provide the results of numerical experiments on a problem of routing in a multi-stage queueing network with constraints on long-run average queue lengths. We observe that our algorithm exhibits good performance in this setting and converges to a feasible point.
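For orientation, the Lagrangian relaxation referred to here has the generic form below (illustrative notation: J(π) the long-run average cost of policy π, G_k(π) the long-run averages appearing in the constraints, α_k their bounds):

    L(\pi, \lambda) \;=\; J(\pi) \;+\; \sum_{k} \lambda_k \bigl( G_k(\pi) - \alpha_k \bigr),
    \qquad \lambda_k \ge 0,

with the critic, the policy (actor), and the multipliers λ typically updated on separate timescales of the stochastic approximation scheme.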
Abstract:
We present a novel multi-timescale Q-learning algorithm for average cost control in a Markov decision process subject to multiple inequality constraints. We formulate a relaxed version of this problem through the Lagrange multiplier method. Our algorithm differs from Q-learning in that it updates two parameters: a Q-value parameter and a policy parameter. The Q-value parameter is updated on a slower timescale than the policy parameter. Whereas Q-learning with function approximation can diverge in some cases, our algorithm is seen to be convergent as a result of the aforementioned timescale separation. We show the results of experiments on a problem of constrained routing in a multistage queueing network. Our algorithm is seen to exhibit good performance, and the various inequality constraints are seen to be satisfied upon convergence of the algorithm.
Abstract:
Chebyshev-inequality-based convex relaxations of Chance-Constrained Programs (CCPs) are shown to be useful for learning classifiers on massive datasets. In particular, an algorithm that integrates efficient clustering procedures and CCP approaches for computing classifiers on large datasets is proposed. The key idea is to identify high density regions or clusters from individual class conditional densities and then use a CCP formulation to learn a classifier on the clusters. The CCP formulation ensures that most of the data points in a cluster are correctly classified by employing a Chebyshev-inequality-based convex relaxation. This relaxation is heavily dependent on the second-order statistics. However, this formulation, and in general such relaxations that depend on the second-order moments, are susceptible to moment estimation errors. One of the contributions of the paper is to propose several formulations that are robust to such errors. In particular, a generic way of making such formulations robust to moment estimation errors is illustrated using two novel confidence sets. An important contribution is to show that when either of the confidence sets is employed, for the special case of a spherical normal distribution of clusters, the robust variant of the formulation can be posed as a second-order cone program. Empirical results show that the robust formulations achieve accuracies comparable to those obtained with the true moments, even when moment estimates are erroneous. Results also illustrate the benefits of employing the proposed methodology for robust classification of large-scale datasets.
Abstract:
We address the question: does a system A being entangled with another system B put any constraints on the Heisenberg uncertainty relation (or the Schrödinger-Robertson inequality)? We find that equality in the uncertainty relation cannot be reached for any two noncommuting observables in finite-dimensional Hilbert spaces if the Schmidt rank of the entangled state is maximal. One consequence is that the lower bound of the uncertainty relation can never be attained for any two observables for qubits if the state is entangled. For infinite-dimensional Hilbert spaces too, we show that there is a class of physically interesting entangled states for which no two noncommuting observables can attain the minimum uncertainty equality.
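For reference, the Schrödinger-Robertson relation in question reads, for observables A and B with variances ΔA² and ΔB² in a given state,

    \Delta A^{2}\, \Delta B^{2} \;\ge\;
    \Bigl| \tfrac{1}{2}\langle \{A,B\} \rangle - \langle A \rangle \langle B \rangle \Bigr|^{2}
    \;+\;
    \Bigl| \tfrac{1}{2i}\langle [A,B] \rangle \Bigr|^{2},

which reduces to the Heisenberg-Robertson bound when the anticommutator (covariance) term is dropped.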
Abstract:
Estimating program worst case execution time (WCET) accurately and efficiently is a challenging task. Several programs exhibit phase behavior, wherein cycles per instruction (CPI) varies in phases during execution. Recent work has suggested the use of phases in such programs to estimate WCET with minimal instrumentation. However, the suggested model uses a function of mean CPI that has no probabilistic guarantees. We propose to use Chebyshev's inequality, which can be applied to any arbitrary distribution of CPI samples, to probabilistically bound the CPI of a phase. Applying Chebyshev's inequality to phases that exhibit high CPI variation leads to pessimistic upper bounds. We propose a mechanism that refines such phases into sub-phases based on program counter (PC) signatures collected using profiling, and that also allows the user to control the variance of CPI within a sub-phase. We describe a WCET analyzer built on these lines and evaluate it with standard WCET and embedded benchmark suites on two different architectures for three chosen probabilities, p = {0.9, 0.95, 0.99}. For p = 0.99, refinement based on PC signatures alone reduces the average pessimism of the WCET estimate by 36% (77%) on Arch1 (Arch2). Compared to Chronos, an open source static WCET analyzer, the average improvement in estimates obtained by refinement is 5% (125%) on Arch1 (Arch2). On limiting the variance of CPI within a sub-phase to {50%, 10%, 5%, 1%} of its original value, the average accuracy of the WCET estimate improves further to {9%, 11%, 12%, 13%} respectively on Arch1. On Arch2, the average accuracy of WCET improves to 159% when CPI variance is limited to 50% of its original value, and the improvement is marginal beyond that point.
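A minimal sketch of the basic per-phase bound implied here (the PC-signature refinement into sub-phases is not reproduced; function and variable names are illustrative): by the two-sided Chebyshev inequality, with probability at least p the phase CPI does not exceed mean + k*stddev for k = 1/sqrt(1 - p).

    import math
    import statistics

    def chebyshev_cpi_bound(cpi_samples, p=0.99):
        # Two-sided Chebyshev: P(|X - mu| >= k*sigma) <= 1/k^2.
        # Choosing k = 1/sqrt(1 - p) makes the upper-tail probability at most 1 - p,
        # so mu + k*sigma upper-bounds the phase CPI with probability >= p.
        mu = statistics.mean(cpi_samples)
        sigma = statistics.pstdev(cpi_samples)   # sigma estimated from profiled samples
        k = 1.0 / math.sqrt(1.0 - p)
        return mu + k * sigma

    # Illustrative use on profiled CPI samples of one phase:
    print(chebyshev_cpi_bound([1.2, 1.3, 1.1, 1.6, 1.4], p=0.99))

High CPI variance inflates sigma and hence the bound, which is exactly why the paper refines high-variance phases into sub-phases before applying the inequality.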
Abstract:
The curvature K_T(w) of a contraction T in the Cowen-Douglas class B_1(D) is bounded above by the curvature K_{S*}(w) of the backward shift operator. However, in general, an operator satisfying the curvature inequality need not be contractive. In this paper, we characterize a slightly smaller class of contractions using a stronger form of the curvature inequality. Along the way, we find conditions on the metric of the holomorphic Hermitian vector bundle E_T corresponding to the operator T in the Cowen-Douglas class B_1(D) which ensure negative definiteness of the curvature function. We obtain a generalization for commuting tuples of operators in the class B_1(Ω) for a bounded domain Ω in C^m.
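For context, with a holomorphic frame γ of the line bundle E_T, the curvature and the curvature inequality for a contraction T in B_1(D) (D the unit disc; standard normalization assumed) take the form

    \mathcal{K}_T(w) \;=\; -\,\frac{\partial^{2}}{\partial w \, \partial \bar{w}} \log \lVert \gamma(w) \rVert^{2},
    \qquad
    \mathcal{K}_T(w) \;\le\; \mathcal{K}_{S^{*}}(w) \;=\; -\,\frac{1}{\bigl(1 - |w|^{2}\bigr)^{2}},

where S* is the backward shift on the Hardy space.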
Abstract:
The maximum entropy approach to classification is very well studied in applied statistics and machine learning, and almost all the methods that exist in the literature are discriminative in nature. In this paper, we introduce a maximum entropy classification method with feature selection for high-dimensional data, such as text datasets, that is generative in nature. To tackle the curse of dimensionality of large datasets, we employ a conditional independence assumption (Naive Bayes) and perform feature selection simultaneously by enforcing 'maximum discrimination' between the estimated class conditional densities. For two-class problems, the proposed method uses the Jeffreys (J) divergence to discriminate between the class conditional densities. To extend our method to the multi-class case, we propose a completely new approach by considering a multi-distribution divergence: we replace the Jeffreys divergence by the Jensen-Shannon (JS) divergence to discriminate the conditional densities of multiple classes. In order to reduce computational complexity, we employ a modified Jensen-Shannon divergence (JS-GM), based on the AM-GM inequality. We show that the resulting divergence is a natural generalization of the Jeffreys divergence to the multiple-distribution case. As far as theoretical justification is concerned, we show that when one intends to select the best features in a generative maximum entropy approach, maximum discrimination using the J-divergence emerges naturally in binary classification. The performance of the proposed algorithms is demonstrated through a comparative study on high-dimensional text and gene expression datasets, which shows that our methods scale well to high-dimensional data.
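A small sketch of the two divergences involved, under illustrative helper names (the paper's geometric-mean variant JS-GM is not reproduced here):

    import numpy as np

    def kl(p, q):
        # Kullback-Leibler divergence KL(p || q) between discrete distributions.
        p, q = np.asarray(p, float), np.asarray(q, float)
        mask = p > 0
        return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

    def jeffreys(p, q):
        # Jeffreys (J) divergence: the symmetrised KL divergence used for two classes.
        return kl(p, q) + kl(q, p)

    def jensen_shannon(dists, weights=None):
        # JS divergence of several class conditional densities (arithmetic-mean mixture).
        dists = [np.asarray(d, float) for d in dists]
        if weights is None:
            weights = [1.0 / len(dists)] * len(dists)
        mix = sum(w * d for w, d in zip(weights, dists))
        return float(sum(w * kl(d, mix) for w, d in zip(weights, dists)))

    # Example: word distributions over a three-word vocabulary.
    print(jeffreys([0.7, 0.2, 0.1], [0.3, 0.3, 0.4]))
    print(jensen_shannon([[0.7, 0.2, 0.1], [0.3, 0.3, 0.4], [0.1, 0.1, 0.8]]))

The multi-class feature scores in the paper are built from divergences of this kind between estimated class conditional densities.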
Abstract:
The Cubic Sieve Method for solving the Discrete Logarithm Problem in prime fields requires a nontrivial solution to the Cubic Sieve Congruence (CSC) x^3 ≡ y^2 z (mod p), where p is a given prime number. A nontrivial solution must also satisfy x^3 ≠ y^2 z and 1 ≤ x, y, z < p^α, where α is a given real number such that 1/3 < α ≤ 1/2. The CSC problem is to find an efficient algorithm to obtain a nontrivial solution to CSC. CSC can be parametrized as x ≡ v^2 z (mod p) and y ≡ v^3 z (mod p). In this paper, we give a deterministic polynomial-time (O(ln^3 p) bit-operations) algorithm to determine, for a given v, a nontrivial solution to CSC, if one exists. Previously it took Õ(p^α) time in the worst case to determine this. We relate the CSC problem to the gap problem of fractional part sequences, where we need to determine the non-negative integers N satisfying the fractional part inequality {θN} < φ (θ and φ are given real numbers). The correspondence between the CSC problem and the gap problem is that determining the parameter z in the former problem corresponds to determining N in the latter problem. We also show, in the α = 1/2 case of CSC, that for a certain class of primes the CSC problem can be solved deterministically in Õ(p^{1/3}) time compared to the previous best of Õ(p^{1/2}). It is empirically observed that about one out of three primes is covered by the above class.
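As a quick illustration of the parametrization (the values of p, v, z below are arbitrary and only for illustration): taking x ≡ v^2 z and y ≡ v^3 z automatically satisfies the congruence, since x^3 ≡ v^6 z^3 ≡ y^2 z (mod p); the difficulty handled by the algorithm is meeting the additional nontriviality and size conditions.

    # Illustrative check of the CSC parametrization x = v^2*z, y = v^3*z (mod p).
    p = 1009                      # a prime, chosen only for illustration
    v, z = 123, 456
    x = pow(v, 2, p) * z % p      # x = v^2 * z (mod p)
    y = pow(v, 3, p) * z % p      # y = v^3 * z (mod p)
    assert pow(x, 3, p) == pow(y, 2, p) * z % p   # x^3 = y^2 * z (mod p)
    print(x, y, z)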
Abstract:
All triangulated d-manifolds satisfy the inequality \binom{f_0 - d - 1}{2} \geq \binom{d+2}{2}\,\beta_1 for d ≥ 3. A triangulated d-manifold is called tight neighborly if it attains equality in this bound. For each d ≥ 3, a (2d+3)-vertex tight neighborly triangulation of the S^{d-1}-bundle over S^1 with β_1 = 1 was constructed by Kühnel in 1986. In this paper, it is shown that there does not exist a tight neighborly triangulated manifold with β_1 = 2. In other words, there is no tight neighborly triangulation of (S^{d-1} × S^1)^{#2} or of the twisted product (S^{d-1} ⋊ S^1)^{#2} for d ≥ 3. A short proof of the uniqueness of Kühnel's complexes for d ≥ 4 under the assumption β_1 ≠ 0 is also presented.
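A quick check of why Kühnel's complexes meet the bound: with f_0 = 2d + 3 vertices and β_1 = 1,

    \binom{f_0 - d - 1}{2} \;=\; \binom{(2d+3) - d - 1}{2} \;=\; \binom{d+2}{2} \;=\; \binom{d+2}{2}\,\beta_1,

so the inequality is attained with equality.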
Abstract:
In this paper, we propose an eigen framework for transmit beamforming for single-hop and dual-hop network models with single-antenna receivers. In cases where the number of receivers is not more than three, the proposed eigen approach is vastly superior in terms of ease of implementation and computational complexity compared with the existing convex-relaxation-based approaches. The essential premise is that the precoding problems can be posed as equivalent optimization problems of searching for an optimal vector in the joint numerical range of Hermitian matrices. We show that the latter problem has two convex approximations: the first one is a semi-definite program that yields a lower bound on the solution, and the second one is a linear matrix inequality that yields an upper bound on the solution. We study the performance of the proposed and existing techniques using numerical simulations.
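For reference, the joint numerical range referred to here is, for Hermitian matrices A_1, ..., A_m of size n (notation assumed for illustration),

    W(A_1, \dots, A_m) \;=\;
    \bigl\{\, ( x^{*} A_1 x, \; \dots, \; x^{*} A_m x ) \;:\; x \in \mathbb{C}^{n},\ \lVert x \rVert_2 = 1 \,\bigr\},

and the two convex approximations mentioned bracket the optimum of the search over this set from below (the semi-definite program) and from above (the linear matrix inequality).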
Abstract:
A triangulated d-manifold K satisfies the inequality \binom{f_0(K) - d - 1}{2} \geq \binom{d+2}{2}\,\beta_1(K) for d ≥ 3. The triangulated d-manifolds that meet the bound with equality are called tight neighbourly. In this paper, we present tight neighbourly triangulations of 4-manifolds on 15 vertices with ℤ_15 as an automorphism group. One such example was constructed by Bagchi and Datta (Discrete Math. 311 (2011), 986-995). We show that there are exactly 12 such triangulations up to isomorphism, 10 of which are orientable.
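A quick arithmetic check of what equality forces in this 15-vertex, d = 4 case:

    \binom{f_0 - d - 1}{2} \;=\; \binom{10}{2} \;=\; 45
    \qquad\text{and}\qquad
    \binom{d+2}{2}\,\beta_1 \;=\; 15\,\beta_1,

so attaining equality in the bound forces β_1 = 3 for these triangulations.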