869 results for Cauchy-Schwarz Inequality
Abstract:
We address the question: does a system A being entangled with another system B put any constraints on the Heisenberg uncertainty relation (or the Schrödinger-Robertson inequality)? We find that equality in the uncertainty relation cannot be reached for any two noncommuting observables in finite-dimensional Hilbert spaces if the Schmidt rank of the entangled state is maximal. One consequence is that the lower bound of the uncertainty relation can never be attained for any two observables for qubits if the state is entangled. For infinite-dimensional Hilbert spaces too, we show that there is a class of physically interesting entangled states for which no two noncommuting observables can attain the minimum uncertainty equality.
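For reference, the Schrödinger-Robertson relation discussed above is, in its textbook form (our restatement, not quoted from the paper),

$$ \Delta A^2\,\Delta B^2 \;\ge\; \Big|\tfrac{1}{2}\langle\{A,B\}\rangle - \langle A\rangle\langle B\rangle\Big|^2 \;+\; \Big|\tfrac{1}{2i}\langle[A,B]\rangle\Big|^2, $$

and the paper's claim is that equality here is unreachable for any noncommuting pair A, B when the shared state has maximal Schmidt rank.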
Abstract:
Estimating program worst-case execution time (WCET) accurately and efficiently is a challenging task. Several programs exhibit phase behavior, wherein cycles per instruction (CPI) varies in phases during execution. Recent work has suggested the use of phases in such programs to estimate WCET with minimal instrumentation. However, the suggested model uses a function of mean CPI that carries no probabilistic guarantees. We propose to use Chebyshev's inequality, which applies to any arbitrary distribution of CPI samples, to probabilistically bound the CPI of a phase. Applying Chebyshev's inequality to phases that exhibit high CPI variation leads to pessimistic upper bounds. We propose a mechanism that refines such phases into sub-phases based on program counter (PC) signatures collected using profiling, and that also allows the user to control the variance of CPI within a sub-phase. We describe a WCET analyzer built on these lines and evaluate it with standard WCET and embedded benchmark suites on two different architectures for three chosen probabilities, p = {0.9, 0.95, 0.99}. For p = 0.99, refinement based on PC signatures alone reduces the average pessimism of the WCET estimate by 36% (77%) on Arch1 (Arch2). Compared to Chronos, an open-source static WCET analyzer, the average improvement in estimates obtained by refinement is 5% (125%) on Arch1 (Arch2). On limiting the variance of CPI within a sub-phase to {50%, 10%, 5%, 1%} of its original value, the average accuracy of the WCET estimate improves further to {9%, 11%, 12%, 13%}, respectively, on Arch1. On Arch2, the average accuracy of WCET improves to 159% when CPI variance is limited to 50% of its original value, and the improvement is marginal beyond that point.
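As a rough illustration of the core bounding step, the sketch below applies the one-sided Chebyshev (Cantelli) inequality to CPI samples of a phase; the function name and sample data are our own placeholders, and the paper's estimator may differ in its details.

```python
import math
import statistics

def cpi_upper_bound(cpi_samples, p):
    """Upper bound on phase CPI that holds with probability >= p.

    Cantelli's one-sided Chebyshev inequality holds for any sample
    distribution:  P(X >= mu + k*sigma) <= 1 / (1 + k^2).
    Setting 1/(1+k^2) = 1-p gives k = sqrt(p / (1-p)).
    """
    mu = statistics.mean(cpi_samples)
    sigma = statistics.stdev(cpi_samples)
    k = math.sqrt(p / (1.0 - p))
    return mu + k * sigma

# Placeholder CPI samples for one phase; high variance inflates the bound,
# which is what motivates refining phases into sub-phases.
samples = [1.8, 2.1, 1.9, 2.4, 2.0, 2.2, 1.7, 2.3]
for p in (0.9, 0.95, 0.99):
    print(p, round(cpi_upper_bound(samples, p), 3))
```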
Abstract:
In this article, we obtain explicit solutions of a system of forced Burgers equations subject to some classes of bounded and compactly supported initial data, and also subject to certain unbounded initial data. In a series of papers, Rao and Yadav (2010) [1-3] obtained explicit solutions of a nonhomogeneous Burgers equation in one dimension subject to certain classes of bounded and unbounded initial data. Earlier, Kloosterziel (1990) [4] represented the solution of an initial value problem for the heat equation, with initial data in $L^2(\mathbb{R}^n, e^{|x|^2/2})$, as a series of self-similar solutions of the heat equation in $\mathbb{R}^n$. Here we express the solutions of certain classes of Cauchy problems for a system of forced Burgers equations in terms of self-similar solutions of some linear partial differential equations.
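For orientation, the forced (nonhomogeneous) Burgers equation referred to above has, in one dimension, the generic form

$$ u_t + u\,u_x = \nu\,u_{xx} + f(x,t), $$

where $f$ is the forcing term and $\nu$ the viscosity; this is standard notation, not quoted from the article.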
Abstract:
The curvature $\mathcal{K}_T(w)$ of a contraction $T$ in the Cowen-Douglas class $B_1(\mathbb{D})$ is bounded above by the curvature $\mathcal{K}_{S^*}(w)$ of the backward shift operator. However, in general, an operator satisfying the curvature inequality need not be contractive. In this paper, we characterize a slightly smaller class of contractions using a stronger form of the curvature inequality. Along the way, we find conditions on the metric of the holomorphic Hermitian vector bundle $E_T$ corresponding to the operator $T$ in the Cowen-Douglas class $B_1(\mathbb{D})$ which ensure negative definiteness of the curvature function. We obtain a generalization for commuting tuples of operators in the class $B_1(\Omega)$ for a bounded domain $\Omega$ in $\mathbb{C}^m$.
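In standard notation for the unit disc, and assuming the usual normalization for the backward shift $S^*$, the curvature inequality in question reads

$$ \mathcal{K}_T(w) \;\le\; \mathcal{K}_{S^*}(w) \;=\; -\frac{1}{(1-|w|^2)^2}, \qquad w \in \mathbb{D}. $$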
Abstract:
Sensory receptors determine the type and the quantity of information available for perception. Here, we quantified and characterized the information transferred by primary afferents in the rat whisker system using neural system identification. Quantification of "how much" information is conveyed by primary afferents, using the direct method (DM), a classical information-theoretic tool, revealed that primary afferents transfer huge amounts of information (up to 529 bits/s). Information-theoretic analysis of instantaneous spike-triggered kinematic stimulus features was used to gain functional insight on "what" is coded by primary afferents. Amongst the kinematic variables tested (position, velocity, and acceleration), primary afferent spikes encoded velocity best. The other two variables contributed to information transfer, but only if combined with velocity. We further revealed three additional characteristics that play a role in information transfer by primary afferents. Firstly, primary afferent spikes show a preference for well-separated multiple stimuli (i.e., well-separated sets of combinations of the three instantaneous kinematic variables). Secondly, neurons are sensitive to short strips of the stimulus trajectory (up to 10 ms pre-spike time), and thirdly, they show spike patterns (precise doublet and triplet spiking). In order to deal with these complexities, we used a flexible probabilistic neuron model fitting mixtures of Gaussians to the spike-triggered stimulus distributions, which quantitatively captured the contribution of the mentioned features and allowed us to achieve a full functional analysis of the total information rate indicated by the DM. We found that instantaneous position, velocity, and acceleration explained about 50% of the total information rate. Adding a 10 ms pre-spike interval of the stimulus trajectory achieved 80-90%. The final 10-20% were found to be due to non-linear coding by spike bursts.
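A minimal sketch of the model class described above: fitting a mixture of Gaussians to spike-triggered kinematic features. The arrays and cluster layout are synthetic placeholders, not data from the study.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic spike-triggered features: one row per spike, columns are the
# instantaneous (position, velocity, acceleration) at spike time.
rng = np.random.default_rng(0)
spike_triggered = np.vstack([
    rng.normal([0.0,  5.0, 0.0], 1.0, size=(200, 3)),  # one stimulus cluster
    rng.normal([0.0, -5.0, 0.0], 1.0, size=(200, 3)),  # a well-separated one
])

# A mixture of Gaussians captures the preference for multiple
# well-separated stimuli described in the abstract.
gmm = GaussianMixture(n_components=2, covariance_type="full").fit(spike_triggered)
print(gmm.means_)  # recovered cluster centers in (pos, vel, acc) space
```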
Abstract:
The maximum entropy approach to classification is very well studied in applied statistics and machine learning, and almost all the methods that exist in the literature are discriminative in nature. In this paper, we introduce a maximum entropy classification method with feature selection for large-dimensional data, such as text datasets, that is generative in nature. To tackle the curse of dimensionality of large datasets, we employ a conditional independence assumption (Naive Bayes), and we perform feature selection simultaneously by enforcing 'maximum discrimination' between the estimated class conditional densities. For two-class problems, in the proposed method, we use the Jeffreys (J) divergence to discriminate between the class conditional densities. To extend our method to the multi-class case, we propose a completely new approach by considering a multi-distribution divergence: we replace the Jeffreys divergence by the Jensen-Shannon (JS) divergence to discriminate the conditional densities of multiple classes. In order to reduce computational complexity, we employ a modified Jensen-Shannon divergence ($JS_{GM}$), based on the AM-GM inequality. We show that the resulting divergence is a natural generalization of the Jeffreys divergence to the multiple-distribution case. As far as theoretical justification is concerned, we show that when one intends to select the best features in a generative maximum entropy approach, maximum discrimination using the J-divergence emerges naturally in binary classification. The performance of the proposed algorithms has been demonstrated, with comparative studies, on large-dimensional text and gene expression datasets, showing that our methods scale up very well with large-dimensional datasets.
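The sketch below contrasts the divergences involved; the $JS_{GM}$ variant replaces the arithmetic-mean mixture in Jensen-Shannon with a renormalized geometric mean, which is one natural reading of the AM-GM-based modification (the paper's exact construction may differ).

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) for discrete distributions."""
    return float(np.sum(p * np.log(p / q)))

def jeffreys(p, q):
    """Jeffreys (J) divergence: the symmetrized KL divergence."""
    return kl(p, q) + kl(q, p)

def js(dists):
    """Jensen-Shannon divergence for multiple distributions (AM mixture)."""
    dists = np.asarray(dists, dtype=float)
    m = dists.mean(axis=0)                      # arithmetic-mean mixture
    return float(np.mean([kl(p, m) for p in dists]))

def js_gm(dists):
    """JS variant using a renormalized geometric-mean reference."""
    dists = np.asarray(dists, dtype=float)
    g = np.exp(np.log(dists).mean(axis=0))      # geometric mean
    g = g / g.sum()                             # renormalize to a distribution
    return float(np.mean([kl(p, g) for p in dists]))

p = np.array([0.7, 0.2, 0.1]); q = np.array([0.1, 0.3, 0.6])
print(jeffreys(p, q), js([p, q]), js_gm([p, q]))
```

For two distributions, a short computation shows this geometric-mean variant equals half the Jeffreys divergence up to the normalizing constant of the geometric mean, consistent with the generalization claim in the abstract.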
Abstract:
The Cubic Sieve Method for solving the Discrete Logarithm Problem in prime fields requires a nontrivial solution to the Cubic Sieve Congruence (CSC) $x^3 \equiv y^2 z \pmod{p}$, where $p$ is a given prime number. A nontrivial solution must also satisfy $x^3 \neq y^2 z$ and $1 \le x, y, z < p^{\alpha}$, where $\alpha$ is a given real number such that $1/3 < \alpha \le 1/2$. The CSC problem is to find an efficient algorithm to obtain a nontrivial solution to CSC. CSC can be parametrized as $x \equiv v^2 z \pmod{p}$ and $y \equiv v^3 z \pmod{p}$. In this paper, we give a deterministic polynomial-time ($O(\ln^3 p)$ bit-operations) algorithm to determine, for a given $v$, a nontrivial solution to CSC, if one exists. Previously it took $\tilde{O}(p^{\alpha})$ time in the worst case to determine this. We relate the CSC problem to the gap problem of fractional part sequences, where we need to determine the non-negative integers $N$ satisfying the fractional part inequality $\{\theta N\} < \phi$ ($\theta$ and $\phi$ are given real numbers). The correspondence between the CSC problem and the gap problem is that determining the parameter $z$ in the former problem corresponds to determining $N$ in the latter problem. We also show, in the $\alpha = 1/2$ case of CSC, that for a certain class of primes the CSC problem can be solved deterministically in $\tilde{O}(p^{1/3})$ time compared to the previous best of $\tilde{O}(p^{1/2})$. It is empirically observed that about one out of three primes is covered by the above class.
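As a toy illustration of the parametrization (a brute-force scan, not the paper's polynomial-time algorithm), the following searches for a nontrivial CSC solution:

```python
def find_csc(p, alpha=0.5):
    """Brute-force search for a nontrivial CSC solution.

    Uses the parametrization x = v^2 z (mod p), y = v^3 z (mod p), under
    which x^3 = y^2 z (mod p) holds automatically; nontriviality demands
    x^3 != y^2 z over the integers. Illustrative only: the paper gives a
    deterministic O(ln^3 p)-time test for each fixed v.
    """
    bound = int(p ** alpha)
    for v in range(2, p):
        v2, v3 = (v * v) % p, pow(v, 3, p)
        for z in range(1, bound):
            x, y = (v2 * z) % p, (v3 * z) % p
            if 1 <= x < bound and 1 <= y < bound and x ** 3 != y * y * z:
                return v, (x, y, z)
    return None

# A known nontrivial solution for p = 1009 is (x, y, z) = (15, 13, 14),
# since 15^3 - 13^2 * 14 = 3375 - 2366 = 1009.
print(find_csc(1009))
```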
Abstract:
The parent compound of iron chalcogenide superconductors, Fe1+yTe, with a range of excess Fe concentrations, exhibits intriguing structural and magnetic properties. Here, the interplay of magnetic and structural properties of Fe1.12Te single crystals has been probed by low-temperature synchrotron X-ray powder diffraction, magnetization, and specific heat measurements. Thermodynamic measurements reveal two distinct phase transitions, considered unique to samples possessing excess Fe content in the range $0.11 \le y \le 0.13$. On cooling, an antiferromagnetic transition at $T_N \approx 57$ K is observed. A closer examination of the powder diffraction data suggests that the transition at $T_N$ is not purely magnetic, but is accompanied by the onset of a structural phase transition from tetragonal to orthorhombic symmetry. This is followed by a second, prominent first-order structural transition at $T_S$, with $T_S < T_N$, where an onset of monoclinic distortion is observed. The results point to a strong magneto-structural coupling in this material.
Abstract:
All triangulated $d$-manifolds satisfy the inequality $\binom{f_0-d-1}{2} \ge \binom{d+2}{2}\beta_1$ for $d \ge 3$. A triangulated $d$-manifold is called tight neighborly if it attains equality in this bound. For each $d \ge 3$, a $(2d+3)$-vertex tight neighborly triangulation of the $S^{d-1}$-bundle over $S^1$ with $\beta_1 = 1$ was constructed by Kühnel in 1986. In this paper, it is shown that there does not exist a tight neighborly triangulated manifold with $\beta_1 = 2$. In other words, there is no tight neighborly triangulation of $(S^{d-1} \times S^1)^{\#2}$ or $(S^{d-1} \dot{\times} S^1)^{\#2}$ (the twisted bundle case) for $d \ge 3$. A short proof of the uniqueness of Kühnel's complexes for $d \ge 4$ under the assumption $\beta_1 \neq 0$ is also presented.
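For instance, at equality with $\beta_1 = 1$ the bound pins down the vertex count:

$$ \binom{f_0-d-1}{2} = \binom{d+2}{2} \;\Longrightarrow\; f_0 - d - 1 = d + 2 \;\Longrightarrow\; f_0 = 2d + 3, $$

which matches the $(2d+3)$-vertex Kühnel complexes mentioned above.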
Abstract:
We address the problem of reconstructing a sparse signal from its DFT magnitude. We refer to this problem as the sparse phase retrieval (SPR) problem, which finds applications in tomography, digital holography, electron microscopy, etc. We develop a Fienup-type iterative algorithm, referred to as the Max-K algorithm, to enforce sparsity and successively refine the estimate of the phase. We show that the Max-K algorithm possesses Cauchy convergence properties under certain conditions, that is, the MSE of reconstruction does not increase with iterations. We also formulate the SPR problem as a feasibility problem, where the goal is to find a signal that is sparse in a known basis and whose Fourier transform magnitude is consistent with the measurement. Subsequently, we interpret the Max-K algorithm as alternating projections onto the object-domain and measurement-domain constraint sets and generalize it to a parameterized relaxation, known as the relaxed averaged alternating reflections (RAAR) algorithm. On the application front, we work with measurements acquired using a frequency-domain optical coherence tomography (FDOCT) experimental setup. Experimental results on measured data show that the proposed algorithms exhibit good reconstruction performance compared with the direct inversion technique, the homomorphic technique, and the classical Fienup algorithm without a sparsity constraint; specifically, the autocorrelation artifacts and background noise are suppressed to a significant extent. We also demonstrate that the RAAR algorithm offers a broader framework for FDOCT reconstruction, of which the direct inversion technique and the proposed Max-K algorithm become special instances corresponding to specific values of the relaxation parameter.
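A compact sketch of a Fienup-type iteration with a keep-K-largest sparsity step, in the spirit of the Max-K algorithm described above; initialization, support handling, and stopping rules are simplified stand-ins.

```python
import numpy as np

def max_k_phase_retrieval(mag, k, n_iter=500, seed=0):
    """Recover a k-sparse real signal from its DFT magnitudes `mag`.

    Alternates two projections: impose the measured Fourier magnitudes
    (measurement domain), then keep only the k largest-magnitude entries
    (object domain), a Fienup-style scheme. Recovery is up to the usual
    trivial ambiguities of phase retrieval (shift, flip).
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(len(mag))
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X = mag * np.exp(1j * np.angle(X))    # enforce measured magnitudes
        x = np.real(np.fft.ifft(X))
        small = np.argsort(np.abs(x))[:-k]    # all but the k largest entries
        x[small] = 0.0                        # enforce k-sparsity
    return x

# Toy usage: a 4-sparse signal of length 64.
true = np.zeros(64)
true[[3, 17, 30, 52]] = [1.0, -0.7, 0.5, 1.2]
est = max_k_phase_retrieval(np.abs(np.fft.fft(true)), k=4)
```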
Abstract:
We have studied the influence of Al doping on the microstructural, optical, and electrical properties of spray-deposited WO3 thin films. XRD analyses confirm that all the films are polycrystalline WO3 possessing a monoclinic structure. EDX profiles of the Al-doped films show aluminum peaks, implying incorporation of Al ions into the WO3 lattice. On Al doping, the average crystallite size decreases due to an increase in the density of nucleation centers at the time of film growth. The observed variation in the lattice parameter values on Al doping is attributed to the incorporation of Al ions into the WO3 lattice. An enhancement of the direct optical band gap compared to the undoped film has been observed on Al doping, due to a decrease in the width of the allowed energy states near the conduction band edge. The refractive indices of the films follow the Cauchy relation of normal dispersion. The electrical resistivity has been found to increase on Al doping compared to the undoped film.
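The Cauchy relation of normal dispersion referred to above is the standard empirical form

$$ n(\lambda) = A + \frac{B}{\lambda^2} + \frac{C}{\lambda^4}, $$

with $A$, $B$, $C$ fitted material constants.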
Abstract:
Smoothed functional (SF) schemes for gradient estimation are known to be efficient in stochastic optimization algorithms, especially when the objective is to improve the performance of a stochastic system. However, the performance of these methods depends on several parameters, such as the choice of a suitable smoothing kernel. Different kernels have been studied in the literature, which include Gaussian, Cauchy, and uniform distributions, among others. This article studies a new class of kernels based on the q-Gaussian distribution, which has gained popularity in statistical physics over the last decade. Though the importance of this family of distributions is attributed to its ability to generalize the Gaussian distribution, we observe that this class encompasses almost all existing smoothing kernels. This motivates us to study SF schemes for gradient estimation using the q-Gaussian distribution. Using the derived gradient estimates, we propose two-timescale algorithms for optimization of a stochastic objective function in a constrained setting with a projected gradient search approach. We prove the convergence of our algorithms to the set of stationary points of an associated ODE. We also demonstrate their performance numerically through simulations on a queuing model.
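In its simplest one-simulation form, an SF gradient estimate looks like the sketch below; a Gaussian kernel is used as a stand-in for the q-Gaussian family studied in the article, and the objective is a toy placeholder.

```python
import numpy as np

def sf_gradient(f, theta, beta=0.1, n_samples=1000, rng=None):
    """Smoothed functional estimate of grad f at theta.

    Averages (eta / beta) * f(theta + beta * eta) over kernel samples eta;
    for a Gaussian kernel this converges to the gradient of a smoothed
    version of f as n_samples grows and beta shrinks.
    """
    rng = rng or np.random.default_rng(0)
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        eta = rng.standard_normal(theta.shape)
        grad += (eta / beta) * f(theta + beta * eta)
    return grad / n_samples

# Toy objective f(theta) = ||theta||^2, whose true gradient is 2*theta.
f = lambda th: float(np.dot(th, th))
print(sf_gradient(f, np.array([1.0, -2.0]), n_samples=20000))
```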
Abstract:
Motivated by the recent proposal for the S-matrix in $AdS_3 \times S^3$ with mixed three-form fluxes, we study classical folded strings spinning in $AdS_3$ with both Ramond and Neveu-Schwarz three-form fluxes. We solve the equations of motion of these strings and obtain their dispersion relation to leading order in the Neveu-Schwarz flux $b$. We show that the dispersion relation for spinning strings with large spin $S$ acquires a term given by $-\frac{\sqrt{\lambda}}{2\pi} b^2 \log^2 S$ in addition to the usual $\frac{\sqrt{\lambda}}{\pi} \log S$ term, where $\sqrt{\lambda}$ is proportional to the square of the radius of $AdS_3$. Using $SO(2,2)$ transformations and reparametrizations, we show that these spinning strings can be related to light-like Wilson loops in $AdS_3$ with Neveu-Schwarz flux $b$. We observe that the logarithmic divergence in the area of the light-like Wilson loop is also deformed by precisely the same coefficient of the $b^2 \log^2 S$ term in the dispersion relation of the spinning string. This result indicates that the coefficient of $b^2 \log^2 S$ has a property similar to that of the coefficient of the $\log S$ term, known as the cusp anomalous dimension, and can possibly be determined to all orders in the coupling $\lambda$ using the recent proposal for the S-matrix.
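Written out, with the anomalous combination conventionally denoted $E - S$ for folded spinning strings (our notation; the abstract states only the two terms), the dispersion relation reads

$$ E - S \;\simeq\; \frac{\sqrt{\lambda}}{\pi}\,\log S \;-\; \frac{\sqrt{\lambda}}{2\pi}\,b^2 \log^2 S . $$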
Abstract:
We present the first q-Gaussian smoothed functional (SF) estimator of the Hessian and the first Newton-based stochastic optimization algorithm that estimates both the Hessian and the gradient of the objective function using q-Gaussian perturbations. Our algorithm requires only two system simulations (regardless of the parameter dimension) and estimates both the gradient and the Hessian at each update epoch using these. We also present a proof of convergence of the proposed algorithm. In a related recent work (Ghoshdastidar, Dukkipati, & Bhatnagar, 2014), we presented gradient SF algorithms based on q-Gaussian perturbations. Our work extends prior work on SF algorithms by generalizing the class of perturbation distributions, as most distributions reported in the literature for which SF algorithms are known to work turn out to be special cases of the q-Gaussian distribution. Besides studying the convergence properties of our algorithm analytically, we also show results of numerical simulations on a model of a queuing network, which illustrate the significance of the proposed method. In particular, we observe that our algorithm performs better in most cases, over a wide range of q-values, in comparison to Newton SF algorithms with Gaussian and Cauchy perturbations, as well as the gradient q-Gaussian SF algorithms.
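Schematically, a two-simulation step can produce both estimates at once via Stein-type identities; the sketch below uses Gaussian perturbations, whereas the paper's estimator is built on the q-Gaussian family, so the exact weights differ.

```python
import numpy as np

def sf_newton_estimates(f, theta, beta=0.1, rng=None):
    """Gradient and Hessian estimates of f at theta from exactly two
    simulations, f(theta + beta*eta) and f(theta - beta*eta).

    For Gaussian eta, Stein's identity gives (in expectation)
      grad f ~ eta * (f+ - f-) / (2*beta)
      Hess f ~ (eta eta^T - I) * (f+ + f-) / (2*beta^2).
    Single-sample estimates are noisy; in practice they are averaged
    across iterations on two timescales.
    """
    rng = rng or np.random.default_rng(0)
    d = theta.shape[0]
    eta = rng.standard_normal(d)
    fp, fm = f(theta + beta * eta), f(theta - beta * eta)
    grad = eta * (fp - fm) / (2.0 * beta)
    hess = (np.outer(eta, eta) - np.eye(d)) * (fp + fm) / (2.0 * beta ** 2)
    return grad, hess
```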
Abstract:
In this paper, we propose an eigen framework for transmit beamforming for single-hop and dual-hop network models with single-antenna receivers. In cases where the number of receivers is not more than three, the proposed eigen approach is vastly superior in terms of ease of implementation and computational complexity compared with the existing convex-relaxation-based approaches. The essential premise is that the precoding problems can be posed as equivalent optimization problems of searching for an optimal vector in the joint numerical range of Hermitian matrices. We show that the latter problem has two convex approximations: the first one is a semidefinite program that yields a lower bound on the solution, and the second one is a linear matrix inequality that yields an upper bound on the solution. We study the performance of the proposed and existing techniques using numerical simulations.
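For context, the convex-relaxation baseline the paper compares against looks roughly like the following semidefinite relaxation of a single-group beamforming power minimization; this is a generic textbook formulation with random placeholder channels, not the paper's eigen approach.

```python
import numpy as np
import cvxpy as cp

# Placeholder channels: N transmit antennas, M single-antenna receivers.
N, M = 4, 3
rng = np.random.default_rng(1)
H = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)

# SDR: minimize transmit power tr(W) s.t. per-receiver SNR >= gamma,
# with W standing in for w w^H after dropping the rank-one constraint.
gamma, sigma2 = 1.0, 1.0
W = cp.Variable((N, N), hermitian=True)
constraints = [W >> 0]
constraints += [cp.real(cp.trace(np.outer(h, h.conj()) @ W)) >= gamma * sigma2
                for h in H]
prob = cp.Problem(cp.Minimize(cp.real(cp.trace(W))), constraints)
prob.solve()
print(prob.value)  # a lower bound on the optimal beamforming power
```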