273 results for FINITE SETS


Relevance: 20.00%

Abstract:

We consider the problem of finding optimal energy sharing policies that maximize the network performance of a system comprising multiple sensor nodes and a single energy harvesting (EH) source. Sensor nodes periodically sense the random field and generate data, which is stored in the corresponding data queues. The EH source harnesses energy from ambient energy sources, and the generated energy is stored in an energy buffer. Sensor nodes receive energy for data transmission from the EH source, which has to share the stored energy efficiently among the nodes to minimize the long-run average delay in data transmission. We formulate the problem of energy sharing between the nodes in the framework of average-cost infinite-horizon Markov decision processes (MDPs). We develop efficient energy sharing algorithms, namely Q-learning algorithms with exploration mechanisms based on the epsilon-greedy method as well as on upper confidence bounds (UCB). We extend these algorithms by incorporating state and action space aggregation to tackle the state-action space explosion in the MDP. We also develop a cross-entropy-based method that incorporates policy parameterization to find near-optimal energy sharing policies. Through simulations, we show that our algorithms yield energy sharing policies that outperform the heuristic greedy method.
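
As a rough illustration of the tabular learning component, the sketch below implements epsilon-greedy Q-learning for a generic finite MDP. The state encoding, action set, and cost signal of the paper (data-queue and energy-buffer levels, energy allocations, delay cost) are not reproduced, so `env`, its interface, and the parameter values are placeholder assumptions; for simplicity the sketch also uses a discounted objective rather than the paper's average-cost formulation.

import random
from collections import defaultdict

def q_learning(env, num_steps=100_000, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular epsilon-greedy Q-learning on a generic finite MDP.

    `env` is assumed to expose `reset() -> state`, `actions(state) -> list`,
    and `step(state, action) -> (next_state, cost)`; costs are minimized.
    """
    q = defaultdict(float)          # Q-values, keyed by (state, action)
    state = env.reset()
    for _ in range(num_steps):
        actions = env.actions(state)
        if random.random() < epsilon:                     # explore
            action = random.choice(actions)
        else:                                             # exploit: lowest estimated cost
            action = min(actions, key=lambda a: q[(state, a)])
        next_state, cost = env.step(state, action)
        best_next = min(q[(next_state, a)] for a in env.actions(next_state))
        # Standard Q-learning update, written in cost-minimization form.
        q[(state, action)] += alpha * (cost + gamma * best_next - q[(state, action)])
        state = next_state
    return q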

Relevance: 20.00%

Abstract:

We study the phase diagram of the ionic Hubbard model (IHM) at half filling on a Bethe lattice of infinite connectivity using dynamical mean-field theory (DMFT), with two impurity solvers, namely, iterated perturbation theory (IPT) and continuous-time quantum Monte Carlo (CTQMC). The physics of the IHM is governed by the competition between the staggered ionic potential Δ and the on-site Hubbard repulsion U. We find that for a finite Δ and at zero temperature, long-range antiferromagnetic (AFM) order sets in beyond a threshold U = U_AF via a first-order phase transition; for U smaller than U_AF the system is a correlated band insulator. Both methods show clear evidence for a quantum transition to a half-metal (HM) phase just after the AFM order is turned on, followed by the formation of an AFM insulator on further increasing U. We show that the results obtained within both methods have good qualitative and quantitative consistency in the intermediate-to-strong-coupling regime, at zero as well as at finite temperature. On increasing the temperature, the AFM order is lost via a first-order phase transition at a transition temperature T_AF(U, Δ) [or, equivalently, on decreasing U below U_AF(T, Δ)] within both methods, for weak to intermediate values of U/t. In the strongly correlated regime, where the effective low-energy Hamiltonian is the Heisenberg model, IPT is unable to capture the thermal (Néel) transition from the AFM phase to the paramagnetic phase, but CTQMC does. At a finite temperature T, DMFT+CTQMC shows a second phase transition (not seen within DMFT+IPT) on increasing U beyond U_AF: at U_N > U_AF, when the Néel temperature T_N of the effective Heisenberg model drops below T, the AFM order is lost via a second-order transition. For U >> Δ, T_N ~ t²/[U(1 − x²)], where x = 2Δ/U, and thus T_N increases with increasing Δ/U. In the three-dimensional parameter space (U/t, T/t, Δ/t), as T increases, the surface of first-order transitions at U_AF(T, Δ) and that of second-order transitions at U_N(T, Δ) approach each other, shrinking the range over which the AFM order is stable. A line of tricritical points separates the surfaces of first- and second-order phase transitions.
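
For reference, the strong-coupling scaling quoted above is consistent with standard second-order perturbation theory in the hopping t: taking the ionic potential as ±Δ on the two sublattices, the two virtual charge-transfer processes cost U ∓ 2Δ, giving (a textbook-style estimate, not a derivation taken from the paper)

\[
  J \;\sim\; \frac{2t^2}{U - 2\Delta} + \frac{2t^2}{U + 2\Delta}
  \;=\; \frac{4t^2}{U\,(1 - x^2)},
  \qquad x = \frac{2\Delta}{U},
  \qquad T_N \;\sim\; J \;\sim\; \frac{t^2}{U\,(1 - x^2)} .
\]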

Relevance: 20.00%

Abstract:

A recent approach for the construction of constant-dimension subspace codes, designed for error correction in random networks, is to consider the codes as orbits of suitable subgroups of the general linear group. In particular, a cyclic orbit code is the orbit of a subspace under a cyclic subgroup. Hence a possible method to construct large cyclic orbit codes with a given minimum subspace distance is to select a subspace whose orbit under the Singer subgroup satisfies the distance constraint. In this paper we propose a method in which some basic properties of difference sets are employed to select such a subspace, thereby providing a systematic way of constructing cyclic orbit codes with specified parameters. We also present an explicit example of such a construction.
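
The sketch below illustrates the underlying machinery on a toy scale: it builds GF(2^4), takes the orbit of a small subspace under the Singer cycle (repeated multiplication by a primitive element), and computes the minimum subspace distance of the resulting orbit code. The generators chosen here are arbitrary placeholders; selecting them via difference-set properties, which is the contribution of the paper, is not reproduced.

from itertools import combinations

MOD = 0b10011   # primitive polynomial x^4 + x + 1 for GF(2^4)
NBITS = 4

def gf_mul(a, b):
    """Multiply two GF(2^4) elements stored as 4-bit integers."""
    res = 0
    while b:
        if b & 1:
            res ^= a
        a <<= 1
        b >>= 1
    for i in range(2 * NBITS - 2, NBITS - 1, -1):   # reduce modulo MOD
        if res & (1 << i):
            res ^= MOD << (i - NBITS)
    return res

def span(gens):
    """All GF(2)-linear combinations of the generators (the subspace itself)."""
    elems = {0}
    for g in gens:
        elems |= {e ^ g for e in elems}
    return frozenset(elems)

def dim(subspace):
    """Dimension of a subspace given as its full set of 2^k elements."""
    return len(subspace).bit_length() - 1

def subspace_distance(u, w):
    """d(U, W) = dim U + dim W - 2 dim(U intersect W)."""
    return dim(u) + dim(w) - 2 * dim(u & w)

# Orbit of an (arbitrarily chosen) 2-dimensional subspace under the Singer cycle,
# i.e. under repeated multiplication by the primitive element alpha = x (0b0010).
gens = [0b0001, 0b0110]            # placeholder generators, not from the paper
orbit = set()
current = list(gens)
for _ in range(2 ** NBITS - 1):    # the Singer cycle has order 2^4 - 1 = 15
    orbit.add(span(current))
    current = [gf_mul(0b0010, g) for g in current]

dmin = min(subspace_distance(u, w) for u, w in combinations(orbit, 2))
print(f"orbit size = {len(orbit)}, minimum subspace distance = {dmin}")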

Relevance: 20.00%

Abstract:

A residual-based a posteriori error estimator is derived for a quadratic finite element method (FEM) for the elliptic obstacle problem. The error estimator involves various residuals consisting of the data of the problem, the discrete solution, and a Lagrange multiplier related to the obstacle constraint. The choice of the discrete Lagrange multiplier yields an error estimator that is comparable with the error estimator in the case of linear FEM. Further, an a priori error estimate is derived to show that the discrete Lagrange multiplier converges at the same rate as the discrete solution of the obstacle problem. Numerical experiments with adaptive FEM show optimal-order convergence, demonstrating that the quadratic FEM for the obstacle problem exhibits optimal performance.
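
For orientation, the continuous problem behind this discussion can be stated as follows (standard background on the obstacle problem, not reproduced from the paper): find u in the convex set K of functions lying above the obstacle χ such that the variational inequality holds, and define the Lagrange multiplier as the residual functional,

\[
  u \in K := \{ v \in H^1_0(\Omega) : v \ge \chi \ \text{a.e. in } \Omega \}, \qquad
  \int_\Omega \nabla u \cdot \nabla (v - u)\, dx \;\ge\; \int_\Omega f\,(v - u)\, dx \quad \forall\, v \in K,
\]
\[
  \lambda := -\Delta u - f \;\ge\; 0 \quad \text{(as a functional)}, \qquad
  \langle \lambda,\, u - \chi \rangle = 0 .
\]

A discrete multiplier λ_h mimicking these sign and complementarity conditions is what enters the residuals of the estimator.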

Relevance: 20.00%

Abstract:

We present results for a finite variant of the one-dimensional Toom model with closed boundaries. We show that the steady state distribution is not of product form, but is nonetheless simple. In particular, we give explicit formulas for the densities and some nearest neighbour correlation functions. We also give exact results for eigenvalues and multiplicities of the transition matrix using the theory of R-trivial monoids in joint work with A. Schilling, B. Steinberg and N. M. Thiery.

Relevance: 20.00%

Abstract:

The ultimate bearing capacity of a circular footing placed over a rock mass is evaluated by using the lower bound theorem of limit analysis in conjunction with finite elements and nonlinear optimization. The generalized Hoek-Brown (HB) failure criterion is used, but with a constant value of the exponent, α = 0.5. The failure criterion is smoothed both in the meridional and π planes. The nonlinear optimization is carried out by employing an interior point method based on the logarithmic barrier function. The bearing capacity results are presented in non-dimensional form for different values of GSI, m_i, σ_ci/(γb), and q/σ_ci. Failure patterns are also examined for a few cases. For validation, computations are also performed for a strip footing. The results obtained from the analysis compare well with the data reported in the literature. Since the equilibrium conditions are precisely satisfied only at the centroids of the elements, not everywhere in the domain, the lower bound solution obtained is approximate rather than a strict (true) lower bound. (C) 2015 Elsevier Ltd. All rights reserved.
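
As context for the strength model (a standard formulation stated from general background, not code from the paper), a minimal sketch of the generalized Hoek-Brown criterion with its usual GSI-based parameter relations (2002-edition form, disturbance factor D = 0 by default) might look like the following; note that the paper fixes the exponent a at 0.5 rather than using the GSI-dependent expression.

import math

def hoek_brown_strength(sigma3, sigma_ci, m_i, gsi, disturbance=0.0):
    """Major principal stress at failure per the generalized Hoek-Brown criterion.

    sigma3    : minor principal effective stress
    sigma_ci  : uniaxial compressive strength of the intact rock
    m_i       : intact-rock material constant
    gsi       : Geological Strength Index
    """
    # Commonly used parameter relations (D = disturbance factor).
    m_b = m_i * math.exp((gsi - 100.0) / (28.0 - 14.0 * disturbance))
    s   = math.exp((gsi - 100.0) / (9.0 - 3.0 * disturbance))
    a   = 0.5 + (math.exp(-gsi / 15.0) - math.exp(-20.0 / 3.0)) / 6.0
    return sigma3 + sigma_ci * (m_b * sigma3 / sigma_ci + s) ** a

# Illustrative numbers only: GSI = 60, m_i = 10, sigma_ci = 50 MPa, sigma3 = 1 MPa
print(hoek_brown_strength(1.0, 50.0, 10.0, 60.0))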

Relevance: 20.00%

Abstract:

The Exact Cover problem takes a universe U of n elements, a family F of m subsets of U, and a positive integer k, and decides whether there exists a subfamily (set cover) F' of size at most k such that each element is covered by exactly one set. The Unique Cover problem takes the same input and decides whether there is a subfamily F' ⊆ F such that at least k of the elements covered by F' are covered uniquely (by exactly one set). Both problems are known to be NP-complete. In the parameterized setting, when parameterized by k, Exact Cover is W[1]-hard. While Unique Cover is FPT under the same parameter, it is known not to admit a polynomial kernel under standard complexity-theoretic assumptions. In this paper, we investigate these two problems under the assumption that every set satisfies a given geometric property Π. Specifically, we consider the universe to be a set of n points in real space R^d, d being a positive integer. When d = 2, we consider the problem when Π requires all sets to be unit squares or lines. When d > 2, we consider the problem where Π requires all sets to be hyperplanes in R^d. These special versions of the problems are also known to be NP-complete. When parameterized by k, the Unique Cover problem has a polynomial-size kernel for all the above geometric versions. The Exact Cover problem turns out to be W[1]-hard for squares, but FPT for lines and hyperplanes. Further, we also consider the Unique Set Cover problem, which takes the same input and decides whether there is a set cover which covers at least k elements uniquely. To the best of our knowledge, this is a new problem, and we show that it is NP-complete (even for the case of lines). In fact, the problem turns out to be W[1]-hard in the abstract setting when parameterized by k. However, when we restrict ourselves to the lines and hyperplanes versions, we obtain FPT algorithms.
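
To make the objectives concrete, here is a small illustrative helper (not from the paper) that, given a universe and a chosen subfamily, reports whether the subfamily is an exact cover and how many elements it covers uniquely.

from collections import Counter

def cover_statistics(universe, subfamily):
    """Return (is_exact_cover, uniquely_covered_count) for a chosen subfamily.

    universe  : iterable of elements
    subfamily : iterable of sets over the universe
    """
    counts = Counter(e for s in subfamily for e in s)
    uniquely_covered = sum(1 for e in universe if counts[e] == 1)
    is_exact = all(counts[e] == 1 for e in universe)
    return is_exact, uniquely_covered

# Tiny example: U = {1, 2, 3, 4}, subfamily = {{1, 2}, {2, 3}}
print(cover_statistics({1, 2, 3, 4}, [{1, 2}, {2, 3}]))   # (False, 2)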

Relevance: 20.00%

Abstract:

In this article, an abstract framework for the error analysis of discontinuous Galerkin methods for control-constrained optimal control problems is developed. The analysis establishes a best-approximation result from the a priori point of view and delivers a reliable and efficient a posteriori error estimator. The results are applicable to a variety of problems under only the minimal regularity guaranteed by the well-posedness of the problem. Subsequently, applications of C^0 interior penalty methods to a boundary control problem, as well as to a distributed control problem governed by the biharmonic equation subject to simply supported boundary conditions, are discussed through the abstract analysis. Numerical experiments illustrate the theoretical findings.

Relevance: 20.00%

Abstract:

Bearing capacity factors N_c, N_q, and N_γ for a conical footing are determined by using the lower and upper bound axisymmetric formulations of limit analysis in combination with finite elements and optimization. These factors are obtained in a bound form for a wide range of values of the cone apex angle (β) and the soil friction angle (φ), with the footing-soil interface friction angle δ = 0, 0.5φ, and φ. The bearing capacity factors for a perfectly rough (δ = φ) conical footing generally increase with a decrease in β. On the contrary, for δ = 0°, the factors N_c and N_q reduce gradually with a decrease in β. For δ = 0°, the factor N_γ for φ ≥ 35° attains a minimum at β ≈ 90°. For δ = 0° and φ ≤ 30°, N_γ, as in the case of δ = φ, generally reduces with an increase in β. The failure and nodal velocity patterns are also examined. The results compare well with different numerical solutions and centrifuge test data available in the literature.
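
For context, these factors are the axisymmetric counterparts of the factors appearing in the classical Terzaghi superposition for the ultimate bearing pressure of a footing on soil with cohesion c, surcharge q, and unit weight γ, stated here for a strip footing of width B (the exact geometric convention for the conical case follows the paper and is not reproduced here):

\[
  q_u \;=\; c\,N_c \;+\; q\,N_q \;+\; \tfrac{1}{2}\,\gamma\, B\, N_\gamma .
\]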

Relevance: 20.00%

Abstract:

In this paper, we consider the problem of power allocation in the MIMO wiretap channel for secrecy in the presence of multiple eavesdroppers. Perfect knowledge of the destination channel state information (CSI) and only statistical knowledge of the eavesdroppers' CSI are assumed. We first consider the MIMO wiretap channel with Gaussian input. Using Jensen's inequality, we transform the secrecy rate max-min optimization problem into a single maximization problem. We use the generalized singular value decomposition (GSVD) and transform the problem into a concave maximization problem which maximizes the sum secrecy rate of scalar wiretap channels subject to linear constraints on the transmit covariance matrix. We then consider the MIMO wiretap channel with finite-alphabet input. We show that the transmit covariance matrix obtained for the case of Gaussian input, when used in the MIMO wiretap channel with finite-alphabet input, can lead to zero secrecy rate at high transmit powers. We then propose a power allocation scheme with an additional power constraint which alleviates this secrecy rate loss and gives non-zero secrecy rates at high transmit powers.
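
For reference, the Gaussian-input sum secrecy rate of a bank of parallel scalar wiretap channels, which is the kind of objective that remains after a GSVD-based decomposition, can be evaluated as below; the channel gains and power split used here are placeholders, and the paper's specific constraints and finite-alphabet analysis are not reproduced.

import math

def sum_secrecy_rate(powers, main_gains, eve_gains):
    """Sum secrecy rate (bits/channel use) of parallel scalar Gaussian wiretap channels.

    Each subchannel contributes [log2(1 + p*g_main) - log2(1 + p*g_eve)]^+ .
    """
    rate = 0.0
    for p, g_m, g_e in zip(powers, main_gains, eve_gains):
        rate += max(0.0, math.log2(1.0 + p * g_m) - math.log2(1.0 + p * g_e))
    return rate

# Placeholder example: three subchannels with an equal power split.
print(sum_secrecy_rate([1.0, 1.0, 1.0], [2.0, 1.5, 0.4], [0.5, 0.2, 1.0]))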

Relevance: 20.00%

Abstract:

In this work, we study the well-known r-DIMENSIONAL k-MATCHING ((r, k)-DM) and r-SET k-PACKING ((r, k)-SP) problems. Given a universe U := U_1 ∪ ... ∪ U_r and an r-uniform family F ⊆ U_1 × ... × U_r, the (r, k)-DM problem asks if F admits a collection of k mutually disjoint sets. Given a universe U and an r-uniform family F ⊆ 2^U, the (r, k)-SP problem asks if F admits a collection of k mutually disjoint sets. We employ techniques based on dynamic programming and representative families. This leads to a deterministic algorithm with running time O(2.851^((r-1)k) · |F| · n log²n · log W) for the weighted version of (r, k)-DM, where W is the maximum weight in the input, and a deterministic algorithm with running time O(2.851^((r-0.5501)k) · |F| · n log²n · log W) for the weighted version of (r, k)-SP. Thus, we significantly improve the previous best known deterministic running times for (r, k)-DM and (r, k)-SP and the previous best known running times for their weighted versions. We rely on structural properties of (r, k)-DM and (r, k)-SP to develop algorithms that are faster than those that can be obtained by a standard use of representative sets. Incorporating the principles of iterative expansion, we obtain a better algorithm for (3, k)-DM, running in time O(2.004^(3k) · |F| · n log²n). We believe that this algorithm demonstrates an interesting application of representative families in conjunction with more traditional techniques. Furthermore, we present kernels of size O(e^r r (k-1)^r log W) for the weighted versions of (r, k)-DM and (r, k)-SP, improving the previous best known kernels of size O(r! r (k-1)^r log W) for these problems.

Relevance: 20.00%

Abstract:

Diffuse optical tomography (DOT) using near-infrared light is a promising tool for non-invasive imaging of deep tissue. The technique is capable of quantitative reconstruction of absorption coefficient (μ_a) and scattering coefficient (μ_s) inhomogeneities in the tissue. The rationale for reconstructing the optical property map is that the variation of the absorption coefficient provides diagnostic information about metabolic and disease states of the tissue. The aim of DOT is to reconstruct the internal tissue cross section non-invasively, with good spatial resolution and contrast, from noisy measurements. We develop a region-of-interest scanning system based on DOT principles. Modulated light is injected into the phantom/tissue through one of four light-emitting diode sources. The light traversing the tissue is partially absorbed and scattered multiple times, and the intensity and phase of the exiting light are measured using a set of photodetectors. Light transport through tissue is diffusive in nature and is modeled using the radiative transfer equation. However, a simplified model based on the diffusion equation (DE) can be used if the system satisfies the following conditions: (a) the optical properties of the inhomogeneity are close to those of the background, and (b) μ_s of the medium is much greater than μ_a (μ_s >> μ_a). Light transport through a highly scattering tissue satisfies both of these conditions. A discrete version of the DE based on the finite element method is used for solving the inverse problem. The probing depth of light inside the tissue depends on the wavelength of the light, the absorption and scattering coefficients of the medium, and the separation between the source and detector locations. Extensive simulation studies have been carried out, and the results are validated using two sets of experimental measurements. The utility of the system can be further improved by using light sources at multiple wavelengths; in such a scheme, the spectroscopic variation of the absorption coefficient in the tissue can be used to estimate oxygenation changes in the tissue. (C) 2016 AIP Publishing LLC.
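
For reference, the frequency-domain diffusion approximation commonly used in DOT forward modeling takes the form below (standard background, not equations quoted from the paper); Φ is the photon fluence, ω the modulation frequency, c the speed of light in the medium, q_0 the source term, and μ_s' the reduced scattering coefficient:

\[
  -\nabla \cdot \big( \kappa(\mathbf{r})\, \nabla \Phi(\mathbf{r}, \omega) \big)
  \;+\; \Big( \mu_a(\mathbf{r}) + \frac{i\omega}{c} \Big)\, \Phi(\mathbf{r}, \omega)
  \;=\; q_0(\mathbf{r}, \omega),
  \qquad
  \kappa = \frac{1}{3\,\big(\mu_a + \mu_s'\big)} .
\]

The finite element discretization of an equation of this type is what is inverted to recover the μ_a and μ_s maps.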

Relevance: 20.00%

Abstract:

Schemes that can be proven to be unconditionally stable in the linear context can yield unstable solutions when used to solve nonlinear dynamical problems. Hence, the formulation of numerical strategies for nonlinear dynamical problems can be particularly challenging. In this work, we show that time finite element methods, because of their inherent energy-momentum conserving property (in the case of linear and nonlinear elastodynamics), provide a robust time-stepping method for nonlinear dynamic equations (including chaotic systems). We also show that most of the existing schemes that are known to be robust for parabolic or hyperbolic problems can be derived within the time finite element framework; thus, the time finite element framework provides a unification of time-stepping schemes used in diverse disciplines. We demonstrate the robust performance of the time finite element method on several challenging examples from the literature where the solution behavior is known to be chaotic. (C) 2015 Elsevier Inc. All rights reserved.
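
As a toy illustration of the kind of scheme such a framework recovers (an assumption on my part: the lowest-order continuous-Galerkin time finite element with one-point Gauss quadrature reduces to the implicit midpoint rule, which conserves quadratic invariants and keeps the energy error bounded), here is a minimal pendulum integrator; it is not code from the paper.

import math

def implicit_midpoint_pendulum(theta0, omega0, dt, num_steps, tol=1e-12):
    """Integrate theta'' = -sin(theta) with the implicit midpoint rule."""
    theta, omega = theta0, omega0
    trajectory = [(theta, omega)]
    for _ in range(num_steps):
        # Solve the implicit midpoint equations by fixed-point iteration.
        theta_new, omega_new = theta, omega
        for _ in range(100):
            theta_mid = 0.5 * (theta + theta_new)
            omega_mid = 0.5 * (omega + omega_new)
            theta_next = theta + dt * omega_mid
            omega_next = omega - dt * math.sin(theta_mid)
            converged = abs(theta_next - theta_new) + abs(omega_next - omega_new) < tol
            theta_new, omega_new = theta_next, omega_next
            if converged:
                break
        theta, omega = theta_new, omega_new
        trajectory.append((theta, omega))
    return trajectory

def energy(theta, omega):
    """Total energy of the unit pendulum."""
    return 0.5 * omega ** 2 - math.cos(theta)

traj = implicit_midpoint_pendulum(theta0=2.0, omega0=0.0, dt=0.05, num_steps=2000)
drift = abs(energy(*traj[-1]) - energy(*traj[0]))
print(f"energy drift after {len(traj) - 1} steps: {drift:.2e}")   # remains bounded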


Relevance: 20.00%

Abstract:

A discussion is provided of the comments raised by the discusser (Clausen, 2015) [1] on the article recently published by the authors (Chakraborty and Kumar, 2015). The effect of the exponent α becomes more critical for values of GSI smaller than approximately 30. On the other hand, for greater values of GSI, the results obtained earlier by the authors remain essentially independent of α and can be used directly. (C) 2015 Elsevier Ltd. All rights reserved.