982 results for Eigenvalue Bounds


Relevance:

20.00%

Publisher:

Abstract:

In this paper, we treat some eigenvalue problems in periodically perforated domains and study the asymptotic behaviour of the eigenvalues and eigenvectors as the number of holes in the domain increases to infinity. Using the method of asymptotic expansion, we give explicit formulas for the homogenized coefficients and expansions for the eigenvalues and eigenvectors. If ε denotes the size of each hole in the domain, we obtain the following asymptotic expansions for the eigenvalues: Dirichlet: λ^ε = ε⁻²λ + λ₀ + O(ε); Stekloff: λ^ε = ελ₁ + O(ε²); Neumann: λ^ε = λ₀ + ελ₁ + O(ε²). Using the energy method, we prove a convergence theorem in each case considered here. We briefly study correctors in the case of the Neumann eigenvalue problem.


The eigenvalues and eigenfunctions corresponding to the three-dimensional equations for the linear elastic equilibrium of a clamped plate of thickness 2ϵ are shown to converge (in a specific sense) to the eigenvalues and eigenfunctions of the well-known two-dimensional biharmonic operator of plate theory as ϵ approaches zero. In the process, it is found in particular that the displacements and stresses are indeed of the specific forms usually assumed a priori in the literature. It is also shown that the limit eigenvalues and eigenfunctions can be equivalently characterized as the leading terms in an asymptotic expansion of the three-dimensional solutions in terms of powers of ϵ. The method presented here applies equally well to the stationary problem of linear plate theory, as shown elsewhere by P. Destuynder.


Proving the unsatisfiability of propositional Boolean formulas has applications in a wide range of fields. Minimal Unsatisfiable Sets (MUS) are signatures of the property of unsatisfiability in formulas and our understanding of these signatures can be very helpful in answering various algorithmic and structural questions relating to unsatisfiability. In this paper, we explore some combinatorial properties of MUS and use them to devise a classification scheme for MUS. We also derive bounds on the sizes of MUS in Horn, 2-SAT and 3-SAT formulas.
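
The defining property of a MUS can be made concrete with a small brute-force sketch (illustrative only; the clause encoding and helper names here are mine, not the paper's): a Minimal Unsatisfiable Set is an unsatisfiable set of clauses every proper subset of which is satisfiable.

```python
from itertools import product, combinations

def satisfiable(clauses):
    """Brute-force SAT check: literals are signed ints, clauses are tuples."""
    vars_ = sorted({abs(lit) for c in clauses for lit in c})
    for bits in product([False, True], repeat=len(vars_)):
        assign = dict(zip(vars_, bits))
        if all(any(assign[abs(lit)] == (lit > 0) for lit in c) for c in clauses):
            return True
    return False

def is_mus(clauses):
    """MUS: unsatisfiable, but every proper subset of clauses is satisfiable."""
    if satisfiable(clauses):
        return False
    return all(satisfiable(list(sub))
               for k in range(len(clauses))
               for sub in combinations(clauses, k))

# (x) AND (NOT x OR y) AND (NOT y) is a classic three-clause MUS.
mus = [(1,), (-1, 2), (-2,)]
print(is_mus(mus))             # True
print(is_mus(mus + [(1, 2)]))  # False: a proper unsatisfiable subset exists
```

Exhaustive checking is exponential, of course; the point of the structural bounds in the paper is precisely to reason about MUS sizes without such enumeration.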


The repeated or closely spaced eigenvalues and corresponding eigenvectors of a matrix are usually very sensitive to a perturbation of the matrix, which makes capturing the behavior of these eigenpairs very difficult. Similar difficulty is encountered in solving the random eigenvalue problem when a matrix with random elements has a set of clustered eigenvalues in its mean. In addition, the methods to solve the random eigenvalue problem often differ in characterizing the problem, which leads to different interpretations of the solution. Thus, the solutions obtained from different methods become mathematically incomparable. These two issues, the difficulty of solving and the non-unique characterization, are addressed here. A different approach is used where instead of tracking a few individual eigenpairs, the corresponding invariant subspace is tracked. The spectral stochastic finite element method is used for analysis, where the polynomial chaos expansion is used to represent the random eigenvalues and eigenvectors. However, the main concept of tracking the invariant subspace remains mostly independent of any such representation. The approach is successfully implemented in response prediction of a system with repeated natural frequencies. It is found that tracking only an invariant subspace could be sufficient to build a modal-based reduced-order model of the system. Copyright (C) 2012 John Wiley & Sons, Ltd.
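
The sensitivity of clustered eigenpairs, versus the stability of their invariant subspace, can be seen in a 2x2 toy example (numbers are hypothetical, not from the paper): for the symmetric matrix [[a, b], [b, c]] the eigenvector angle satisfies tan(2θ) = 2b/(a − c), so when a − c is tiny a minute perturbation b rotates the individual eigenvectors by almost 45 degrees while the eigenvalues barely move, and the subspace spanned by the cluster (here all of R²) does not move at all.

```python
import math

def sym2x2_eig(a, b, c):
    """Closed-form eigenvalues and eigenvector angle of [[a, b], [b, c]]."""
    mean, half = (a + c) / 2.0, (a - c) / 2.0
    r = math.hypot(half, b)
    theta = 0.5 * math.atan2(2.0 * b, a - c)   # angle of the top eigenvector
    return mean + r, mean - r, theta

# Nearly repeated eigenvalues (gap 2e-10), then a tiny off-diagonal perturbation.
l1, l2, th0 = sym2x2_eig(1.0 + 1e-10, 0.0, 1.0 - 1e-10)
m1, m2, th1 = sym2x2_eig(1.0 + 1e-10, 1e-6, 1.0 - 1e-10)

print(abs(m1 - l1))              # eigenvalue shift: about 1e-6
print(math.degrees(th1 - th0))   # eigenvector rotation: almost 45 degrees
```

This is the motivation for tracking the invariant subspace rather than individual eigenvectors when eigenvalues cluster.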


The availability of a reliable bound on an integral involving the square of the modulus of a form factor on the unitarity cut allows one to constrain the form factor at points inside the analyticity domain and its shape parameters, and also to isolate domains on the real axis and in the complex energy plane where zeros are excluded. In this lecture note, we review the mathematical techniques of this formalism in its standard form, known as the method of unitarity bounds, and recent developments which allow us to include information on the phase and modulus along a part of the unitarity cut. We also provide a brief summary of some results that we have obtained in the recent past, which demonstrate the usefulness of the method for precision predictions on the form factors.
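
The core of the standard formalism can be summarized schematically (the notation here is illustrative: ρ is the weight, I the known bound, z(t) a conformal map of the cut plane onto the unit disk, w an outer function, and the z_k and g(z_k) are taken real):

```latex
% A weighted L^2 integral of the form factor is bounded on the unitarity cut:
\frac{1}{\pi}\int_{t_+}^{\infty}\rho(t)\,\lvert F(t)\rvert^{2}\,dt \;\le\; I .
% Mapping the cut t-plane onto the unit disk and absorbing the weight into an
% outer function w(z) turns this into a norm condition on an analytic function:
\frac{1}{2\pi}\int_{0}^{2\pi}\bigl\lvert g(e^{i\theta})\bigr\rvert^{2}\,d\theta
\;\le\; I, \qquad g(z) = w(z)\,F\bigl(t(z)\bigr).
% Analyticity then constrains the values g(z_k) at interior points z_k
% (e.g. spacelike data or Taylor coefficients at t = 0) through the
% positivity of a Gram-type determinant:
\det
\begin{pmatrix}
I & g(z_1) & \cdots & g(z_N)\\
g(z_1) & \dfrac{1}{1-z_1^{2}} & \cdots & \dfrac{1}{1-z_1 z_N}\\
\vdots & \vdots & \ddots & \vdots\\
g(z_N) & \dfrac{1}{1-z_N z_1} & \cdots & \dfrac{1}{1-z_N^{2}}
\end{pmatrix}
\;\ge\; 0 .
```

The determinant inequality is what yields the bounds at interior points and the exclusion regions for zeros mentioned in the abstract.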


We calculate upper and lower bounds on the modulus of the pion electromagnetic form factor on the unitarity cut below the ωπ inelastic threshold, using as input the phase in the elastic region, known via the Fermi-Watson theorem from the ππ P-wave phase shift, and a suitably weighted integral of the modulus squared above the inelastic threshold. The normalization at t = 0, the pion charge radius and experimental values at spacelike momenta are used as additional input information. The bounds are model independent, in the sense that they do not rely on specific parametrizations and do not require assumptions on the phase of the form factor above the inelastic threshold. The results provide nontrivial consistency checks on the recent experimental data on the modulus available below the ωπ threshold from e⁺e⁻ annihilation and τ-decay experiments. In particular, at low energies the calculated bounds offer a more precise description of the modulus than the experimental data.


In this paper, we derive Hybrid, Bayesian and Marginalized Cramer-Rao lower bounds (HCRB, BCRB and MCRB) for the single and multiple measurement vector Sparse Bayesian Learning (SBL) problem of estimating compressible vectors and their prior distribution parameters. We assume the unknown vector to be drawn from a compressible Student-t prior distribution. We derive CRBs that encompass the deterministic or random nature of the unknown parameters of the prior distribution and the regression noise variance. We extend the MCRB to the case where the compressible vector is distributed according to a general compressible prior distribution, of which the generalized Pareto distribution is a special case. We use the derived bounds to uncover the relationship between compressibility and the Mean Square Error (MSE) of the estimates. Further, we illustrate the tightness and utility of the bounds through simulations, by comparing them with the MSE performance of two popular SBL-based estimators. We find that the MCRB is generally the tightest among the bounds derived and that the MSE performance of the Expectation-Maximization (EM) algorithm coincides with the MCRB for the compressible vector. We also illustrate the dependence of the MSE performance of SBL-based estimators on the compressibility of the vector for several values of the number of observations and at different signal powers.
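
The role a Cramer-Rao bound plays as a yardstick for MSE can be illustrated in a much simpler setting than the SBL model of the paper (a toy Gaussian-mean problem with hypothetical numbers): with n i.i.d. samples from N(μ, σ²), the Fisher information for μ is n/σ², so any unbiased estimator has MSE ≥ σ²/n, and the sample mean attains this bound.

```python
import random

random.seed(0)
mu, sigma, n, trials = 2.0, 1.5, 50, 2000

crb = sigma ** 2 / n   # Cramer-Rao lower bound on the MSE of unbiased estimators

mse = 0.0
for _ in range(trials):
    samples = [random.gauss(mu, sigma) for _ in range(n)]
    estimate = sum(samples) / n        # sample mean: efficient for this model
    mse += (estimate - mu) ** 2
mse /= trials

print(crb)   # 0.045
print(mse)   # empirical MSE of the sample mean, close to 0.045
```

Comparing an estimator's simulated MSE against the bound in exactly this way is how the paper assesses the tightness of the HCRB, BCRB and MCRB.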


We propose a novel numerical method based on a generalized eigenvalue decomposition for solving the diffusion equation governing the correlation diffusion of photons in turbid media. Medical imaging modalities such as diffuse correlation tomography and ultrasound-modulated optical tomography have the (elliptic) diffusion equation parameterized by a time variable as the forward model. Hitherto, for the computation of the correlation function, the diffusion equation is solved repeatedly over the time parameter. We show that the use of a certain time-independent generalized eigenfunction basis results in the decoupling of the spatial and time dependence of the correlation function, thus allowing greater computational efficiency in arriving at the forward solution. Besides presenting the mathematical analysis of the generalized eigenvalue problem on the basis of spectral theory, we put forth the numerical results that compare the proposed numerical method with the standard technique for solving the diffusion equation.
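
The decoupling idea can be sketched in a 2x2 toy analogue (illustrative only; the matrices and parameter are hypothetical, not the paper's discretized operator): to solve (A + τB)x = b for many values of the parameter τ, one generalized eigendecomposition A v = λ B v with B-orthonormal eigenvectors gives x(τ) = Σᵢ ((vᵢ·b)/(λᵢ + τ)) vᵢ, so each new τ costs only a few scalar operations instead of a fresh linear solve.

```python
import math

A = [[2.0, 1.0], [1.0, 3.0]]
Bd = [1.0, 2.0]                # B = diag(Bd), symmetric positive definite
b = [1.0, 1.0]

# Generalized eigenvalues from det(A - lam*B) = 0 (a quadratic in lam).
p = Bd[0] * Bd[1]
q = -(A[0][0] * Bd[1] + A[1][1] * Bd[0])
r = A[0][0] * A[1][1] - A[0][1] ** 2
disc = math.sqrt(q * q - 4.0 * p * r)
lams = [(-q - disc) / (2.0 * p), (-q + disc) / (2.0 * p)]

def eigvec(lam):
    v = [A[0][1], lam * Bd[0] - A[0][0]]          # solves (A - lam*B) v = 0
    nrm = math.sqrt(Bd[0] * v[0] ** 2 + Bd[1] * v[1] ** 2)
    return [v[0] / nrm, v[1] / nrm]               # normalized so v.B.v = 1

vs = [eigvec(l) for l in lams]

def x_of_tau(tau):
    """Solution of (A + tau*B) x = b via the precomputed eigenpairs."""
    x = [0.0, 0.0]
    for lam, v in zip(lams, vs):
        c = (v[0] * b[0] + v[1] * b[1]) / (lam + tau)
        x[0] += c * v[0]
        x[1] += c * v[1]
    return x

print(x_of_tau(0.5))   # matches a direct solve of the 2x2 system: [1/3, 1/6]
```

In the paper the same structure lets the time parameter of the correlation diffusion equation be swept cheaply after a single generalized eigenproblem is solved.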


We consider bounds for the capacity region of the Gaussian X channel (XC), a system consisting of two transmit-receive pairs, where each transmitter communicates with both the receivers. We first classify the XC into two classes, the strong XC and the mixed XC. In the strong XC, either the direct channels are stronger than the cross channels or vice-versa, whereas in the mixed XC, one of the direct channels is stronger than the corresponding cross channel and vice-versa. After this classification, we give outer bounds on the capacity region for each of the two classes. This is based on the idea that when one of the messages is eliminated from the XC, the rate region of the remaining three messages is enlarged. We make use of the Z channel, a system obtained by eliminating one message and its corresponding channel from the X channel, to bound the rate region of the remaining messages. The outer bound to the rate region of the remaining messages defines a subspace in R₊⁴ and forms an outer bound to the capacity region of the XC. Thus, the outer bound to the capacity region of the XC is obtained as the intersection of the outer bounds to the four combinations of the rate triplets of the XC. Using these outer bounds on the capacity region of the XC, we derive new sum-rate outer bounds for both strong and mixed Gaussian XCs and compare them with those existing in the literature. We show that the sum-rate outer bound for the strong XC gives the sum-rate capacity in three out of the four sub-regions of the strong Gaussian XC capacity region. In the case of the mixed Gaussian XC, we recover the recent results in [11] which showed that the sum-rate capacity is achieved in two out of the three sub-regions of the mixed XC capacity region, and give a simple alternate proof of the same.


We consider the MIMO X channel (XC), a system consisting of two transmit-receive pairs, where each transmitter communicates with both the receivers. Both the transmitters and receivers are equipped with multiple antennas. First, we derive an upper bound on the sum-rate capacity of the MIMO XC under individual power constraint at each transmitter. The sum-rate capacity of the two-user multiple access channel (MAC) that results when receiver cooperation is assumed forms an upper bound on the sum-rate capacity of the MIMO XC. We tighten this bound by considering noise correlation between the receivers and deriving the worst noise covariance matrix. It is shown that the worst noise covariance matrix is a saddle-point of a zero-sum, two-player convex-concave game, which is solved through a primal-dual interior point method that solves the maximization and the minimization parts of the problem simultaneously. Next, we propose an achievable scheme which employs dirty paper coding at the transmitters and successive decoding at the receivers. We show that the derived upper bound is close to the achievable region of the proposed scheme at low to medium SNRs.
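
The receiver-cooperation argument can be illustrated in a scalar toy setting (powers, noise level and channel model here are hypothetical, not the MIMO setup of the paper): letting the two receivers cooperate merges them into a single two-user MAC receiver, and since cooperation can only enlarge the rate region, the MAC sum capacity upper-bounds the XC sum rate.

```python
import math

def c(snr):
    """AWGN capacity in bits per channel use."""
    return math.log2(1.0 + snr)

P1, P2 = 10.0, 10.0                 # transmit powers, unit noise variance
interference_free = c(P1) + c(P2)   # each pair alone on a clean channel
mac_bound = c(P1 + P2)              # cooperating receivers, powers pooled

print(round(interference_free, 3))  # 6.919
print(round(mac_bound, 3))          # 4.392
```

In this toy instance the cooperation bound is far below the interference-free sum, showing why such relaxations can be nontrivial; the paper tightens the analogous MIMO bound further via a worst-case noise covariance between the cooperating receivers.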


The random eigenvalue problem arises in frequency and mode shape determination for a linear system with uncertainties in structural properties. Among several methods of characterizing this random eigenvalue problem, one computationally fast method that gives good accuracy is a weak formulation using polynomial chaos expansion (PCE). In this method, the eigenvalues and eigenvectors are expanded in PCE, and the residual is minimized by a Galerkin projection. The goals of the current work are (i) to implement this PCE-characterized random eigenvalue problem in the dynamic response calculation under random loading and (ii) to explore the computational advantages and challenges. In the proposed method, the response quantities are also expressed in PCE followed by a Galerkin projection. A numerical comparison with a perturbation method and the Monte Carlo simulation shows that when the loading has a random amplitude but deterministic frequency content, the proposed method gives more accurate results than a first-order perturbation method and a comparable accuracy as the Monte Carlo simulation in a lower computational time. However, as the frequency content of the loading becomes random, or for general random process loadings, the method loses its accuracy and computational efficiency. Issues in implementation, limitations, and further challenges are also addressed.
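
The PCE-and-Galerkin-projection step can be caricatured on a one-degree-of-freedom system (all numbers hypothetical, not from the paper): the random eigenvalue λ(ξ) = k(ξ)/m with stiffness k = k₀(1 + δξ), ξ ~ N(0, 1), is expanded in probabilists' Hermite polynomials He₀ = 1, He₁ = ξ, and the coefficients are obtained by projecting λ onto each basis polynomial.

```python
import math

k0, m, delta = 4.0, 1.0, 0.1

def lam(xi):
    return k0 * (1.0 + delta * xi) / m

# 3-point Gauss-Hermite rule for a standard normal: nodes 0, +/-sqrt(3).
nodes = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
weights = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

# Galerkin projection: c_j = E[lam(xi) * He_j(xi)]; He0 and He1 have unit norm.
c0 = sum(w * lam(x) for x, w in zip(nodes, weights))
c1 = sum(w * lam(x) * x for x, w in zip(nodes, weights))

print(round(c0, 12))  # 4.0 : mean eigenvalue k0/m
print(round(c1, 12))  # 0.4 : first-order sensitivity k0*delta/m
```

Here the expansion is exact with two terms because λ is linear in ξ; in the paper's matrix setting the same projection is applied to eigenvalues, eigenvectors and response quantities, with a residual minimized over the full PCE basis.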


Estimating program worst case execution time (WCET) accurately and efficiently is a challenging task. Several programs exhibit phase behavior wherein cycles per instruction (CPI) varies in phases during execution. Recent work has suggested the use of phases in such programs to estimate WCET with minimal instrumentation. However, the suggested model uses a function of mean CPI that has no probabilistic guarantees. We propose to use Chebyshev's inequality, which can be applied to any arbitrary distribution of CPI samples, to probabilistically bound the CPI of a phase. Applying Chebyshev's inequality to phases that exhibit high CPI variation leads to pessimistic upper bounds. We propose a mechanism that refines such phases into sub-phases based on program counter (PC) signatures collected using profiling, and also allows the user to control the variance of CPI within a sub-phase. We describe a WCET analyzer built on these lines and evaluate it with standard WCET and embedded benchmark suites on two different architectures for three chosen probabilities, p = {0.9, 0.95, 0.99}. For p = 0.99, refinement based on PC signatures alone reduces the average pessimism of the WCET estimate by 36% (77%) on Arch1 (Arch2). Compared to Chronos, an open source static WCET analyzer, the average improvement in estimates obtained by refinement is 5% (125%) on Arch1 (Arch2). On limiting the variance of CPI within a sub-phase to {50%, 10%, 5% and 1%} of its original value, the average accuracy of the WCET estimate improves further to {9%, 11%, 12% and 13%}, respectively, on Arch1. On Arch2, the average accuracy of WCET improves to 159% when CPI variance is limited to 50% of its original value, and the improvement is marginal beyond that point.
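
The phase-level bound can be sketched directly from Chebyshev's inequality (the CPI samples below are hypothetical): since P(|X − μ| ≥ kσ) ≤ 1/k² for any distribution, the phase CPI stays below μ + kσ with probability at least p when 1/k² = 1 − p, and splitting a high-variance phase into lower-variance sub-phases tightens the resulting bound.

```python
import statistics

def cpi_upper_bound(samples, p):
    """Chebyshev bound: CPI <= mu + k*sigma with probability >= p, k = 1/sqrt(1-p)."""
    mu = statistics.mean(samples)
    sigma = statistics.pstdev(samples)
    k = (1.0 / (1.0 - p)) ** 0.5
    return mu + k * sigma

phase = [1.1, 1.2, 1.0, 1.3, 3.0, 3.2, 2.9, 3.1]   # phase mixing two behaviors
sub_a, sub_b = phase[:4], phase[4:]                 # refined into sub-phases

whole = cpi_upper_bound(phase, 0.99)
split = max(cpi_upper_bound(sub_a, 0.99), cpi_upper_bound(sub_b, 0.99))
print(round(whole, 2))   # 11.67 : pessimistic bound from the mixed phase
print(round(split, 2))   # 4.17  : much tighter after refinement
```

This mirrors the paper's mechanism: the refinement criterion there is driven by PC signatures rather than by a manual split, but the payoff is the same variance reduction.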


An n-length block code C is said to be r-query locally correctable if, for any codeword x ∈ C, one can probabilistically recover any one of the n coordinates of x by querying at most r coordinates of a possibly corrupted version of x. It is known that linear codes whose duals contain 2-designs are locally correctable. In this article, we consider linear codes whose duals contain t-designs for larger t. It is shown here that for such codes, for a given number of queries r, under linear decoding, one can in general handle a larger number of corrupted bits. We exhibit, to our knowledge for the first time, a finite-length code, whose dual contains 4-designs, which can tolerate a fraction of up to 0.567/r corrupted symbols as against a maximum of 0.5/r in prior constructions. We also present an upper bound that shows that 0.567 is the best possible for this code length and query complexity over this symbol alphabet, thereby establishing optimality of this code in this respect. A second result in the article is a finite-length bound which relates the number of queries r and the fraction of errors that can be tolerated, for a locally correctable code that employs a randomized algorithm in which each instance of the algorithm involves t-error correction.
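
Local correction in general can be illustrated with the classic 2-query example (the Hadamard code, not the 4-design code of the article; the particular codeword and corruption pattern are hypothetical): each codeword is the truth table of a GF(2)-linear map f(x) = a·x, and since f(y) = f(r) + f(y + r) for any mask r, a majority vote over random masks recovers any coordinate from a lightly corrupted word.

```python
import random

random.seed(1)
n = 4
points = [tuple((i >> j) & 1 for j in range(n)) for i in range(1 << n)]

a = (1, 0, 1, 1)                                    # defines f(x) = a.x over GF(2)
word = {x: sum(ai & xi for ai, xi in zip(a, x)) % 2 for x in points}

received = dict(word)
for x in random.sample(points, 2):                  # corrupt 2 of 16 positions (< 1/4)
    received[x] ^= 1

def local_correct(y, tries=101):
    """2-query local correction: vote on received[r] + received[y + r]."""
    votes = 0
    for _ in range(tries):
        r = tuple(random.randint(0, 1) for _ in range(n))
        yr = tuple((yi + ri) % 2 for yi, ri in zip(y, r))
        votes += (received[r] + received[yr]) % 2
    return int(votes > tries / 2)

ok = all(local_correct(y) == word[y] for y in points)
print(ok)
```

With fewer than a 1/4 fraction of corruptions, each 2-query vote is correct with probability above 1/2, so the majority is right with overwhelming probability; the article's contribution is pushing the tolerable fraction beyond the analogous 0.5/r barrier using duals containing 4-designs.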


We use the recently measured accurate BaBar data on the modulus of the pion electromagnetic form factor, Fπ(t), up to an energy of 3 GeV, the I = 1 P-wave phase of the ππ scattering amplitude up to the ωπ threshold, the pion charge radius known from Chiral Perturbation Theory, and the recently measured JLab value of Fπ in the spacelike region at t = −2.45 GeV² as inputs in a formalism that leads to bounds on Fπ in the intermediate spacelike region. We compare our constraints with experimental data and with perturbative QCD, along with the results of several theoretical models for the non-perturbative contributions proposed in the literature.


In this paper, we revisit the combinatorial error model of Mazumdar et al. that models errors in high-density magnetic recording caused by lack of knowledge of grain boundaries in the recording medium. We present new upper bounds on the cardinality/rate of binary block codes that correct errors within this model. All our bounds, except for one, are obtained using combinatorial arguments based on hypergraph fractional coverings. The exception is a bound derived via an information-theoretic argument. Our bounds significantly improve upon existing bounds from the prior literature.