909 results for Rademacher complexity bound
Abstract:
We study the problem of finding small s-t separators that induce graphs having certain properties. It is known that finding a minimum clique s-t separator is polynomial-time solvable (Tarjan in Discrete Math. 55:221-232, 1985), while, for example, the problems of finding a minimum s-t separator that induces a connected graph or forms an independent set are fixed-parameter tractable when parameterized by the size of the separator (Marx et al. in ACM Trans. Algorithms 9(4): 30, 2013). Motivated by these results, we study properties that generalize cliques, independent sets, and connected graphs, and determine the complexity of finding separators satisfying these properties. We also investigate these problems on bounded-degree graphs. Our results are as follows: Finding a minimum c-connected s-t separator is FPT for c = 2 and W[1]-hard for any c >= 3. Finding a minimum s-t separator with diameter at most d is W[1]-hard for any d >= 2. Finding a minimum r-regular s-t separator is W[1]-hard for any r >= 1. For any decidable graph property, finding a minimum s-t separator with this property is FPT parameterized jointly by the size of the separator and the maximum degree. Finding a connected s-t separator of minimum size does not have a polynomial kernel, even when restricted to graphs of maximum degree at most 3, unless NP ⊆ coNP/poly.
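To make the objects above concrete, here is a minimal sketch (not the paper's FPT algorithms) that computes one minimum s-t vertex separator with networkx and then checks which of the properties discussed above, clique, independent set, or connected, its induced subgraph happens to satisfy; the example graph and the choice of s and t are made up for illustration.

    # Illustrative only: one minimum s-t vertex separator via networkx, plus
    # property checks on the induced subgraph; not the paper's algorithms.
    import networkx as nx

    G = nx.cycle_graph(6)        # vertices 0..5 on a cycle (made-up example)
    G.add_edge(1, 4)             # extra chord so the cut is less trivial
    s, t = 0, 3

    cut = nx.minimum_node_cut(G, s, t)   # a minimum s-t vertex separator
    H = G.subgraph(cut)                  # graph induced by the separator
    k = len(cut)

    print("separator:", cut)
    print("is clique:", H.number_of_edges() == k * (k - 1) // 2)
    print("is independent set:", H.number_of_edges() == 0)
    print("is connected:", k > 0 and nx.is_connected(H))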
Abstract:
In this paper we consider the issue of the Froissart bound on the high-energy behaviour of total cross sections. This bound, originally derived using principles of analyticity of scattering amplitudes, is seen to be satisfied by all the available experimental data on total hadronic cross sections. At strong coupling, gauge/gravity duality has been used to provide some insights into this behaviour. In this work, we find the subleading terms to the so-derived Froissart bound from AdS/CFT. We find that a (ln s/s0) term is obtained, with a negative coefficient. Fits to the currently available data confirm that including such a term, with the appropriate sign, improves the fit. (C) 2015 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY license.
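For orientation, the classic Froissart-Martin bound and a schematic fit form including the subleading logarithm discussed above can be written as follows; the coefficient names A, B, C and the scale s_0 are placeholders rather than the paper's notation.

    % Placeholders A, B, C, s_0; schematic forms, not the paper's notation.
    \sigma_{\mathrm{tot}}(s) \;\le\; \frac{\pi}{m_\pi^{2}}\,\ln^{2}\frac{s}{s_0}
        \qquad \text{(classic Froissart--Martin bound)}

    \sigma_{\mathrm{tot}}(s) \;\simeq\; A + B\,\ln^{2}\frac{s}{s_0} + C\,\ln\frac{s}{s_0},
        \qquad C < 0 \qquad \text{(fit form with the subleading term)}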
Abstract:
Generalized spatial modulation (GSM) uses n_t transmit antenna elements but fewer transmit radio frequency (RF) chains, n_rf. Spatial modulation (SM) and spatial multiplexing are special cases of GSM with n_rf = 1 and n_rf = n_t, respectively. In GSM, in addition to conveying information bits through n_rf conventional modulation symbols (for example, QAM), the indices of the n_rf active transmit antennas also convey information bits. In this paper, we investigate GSM for large-scale multiuser MIMO communications on the uplink. Our contributions in this paper include: 1) an average bit error probability (ABEP) analysis for maximum-likelihood detection in multiuser GSM-MIMO on the uplink, where we derive an upper bound on the ABEP, and 2) low-complexity algorithms for GSM-MIMO signal detection and channel estimation at the base station receiver based on message passing. The analytical upper bounds on the ABEP are found to be tight at moderate to high signal-to-noise ratios (SNR). The proposed receiver algorithms are found to scale very well in complexity while achieving near-optimal performance in large dimensions. Simulation results show that, for the same spectral efficiency, multiuser GSM-MIMO can outperform multiuser SM-MIMO as well as conventional multiuser MIMO, by about 2 to 9 dB at a bit error rate of 10^-3. Such SNR gains in GSM-MIMO compared to SM-MIMO and conventional MIMO can be attributed to the fact that, because of a larger number of spatial index bits, GSM-MIMO can use a lower-order QAM alphabet which is more power efficient.
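As a back-of-the-envelope illustration of the rate bookkeeping described above (antenna-index bits plus conventional modulation bits), the sketch below uses the standard GSM counting; the antenna and alphabet numbers are made-up examples, not the paper's system parameters.

    # Standard GSM bits-per-channel-use counting; example numbers are made up.
    from math import comb, floor, log2

    def gsm_bits_per_channel_use(n_t: int, n_rf: int, M: int) -> int:
        """n_t antennas, n_rf RF chains, M-ary QAM."""
        index_bits = floor(log2(comb(n_t, n_rf)))   # antenna-index bits
        symbol_bits = n_rf * int(log2(M))           # conventional modulation bits
        return index_bits + symbol_bits

    # Same spectral efficiency, but GSM gets there with a lower-order alphabet:
    print(gsm_bits_per_channel_use(4, 2, 4))    # GSM, 4-QAM:  2 + 4 = 6 bpcu
    print(gsm_bits_per_channel_use(4, 1, 16))   # SM, 16-QAM:  2 + 4 = 6 bpcu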
Abstract:
The K-user multiple input multiple output (MIMO) Gaussian symmetric interference channel, where each transmitter has M antennas and each receiver has N antennas, is studied from a generalized degrees of freedom (GDOF) perspective. An inner bound on the GDOF is derived using a combination of techniques such as treating interference as noise, zero forcing (ZF) at the receivers, interference alignment (IA), and extending the Han-Kobayashi (HK) scheme to K users, as a function of the number of antennas and the log INR/log SNR level. Several interesting conclusions are drawn from the derived bounds. It is shown that when K > N/M + 1, a combination of the HK and IA schemes performs the best among the schemes considered. When N/M < K <= N/M + 1, the HK scheme outperforms the other schemes and is found to be GDOF optimal in many cases. In addition, when the SNR and INR are at the same level, zero forcing at the receivers and the HK scheme achieve the same GDOF performance.
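The GDOF framework referred to above is usually set up as follows (generic notation, assumed here rather than taken from the paper): the interference-to-noise ratio is tied to the SNR through an exponent alpha, and the per-user GDOF is the prelog of the symmetric-rate capacity.

    % Standard GDOF definition; C_sym denotes the symmetric rate per user.
    \alpha \;=\; \frac{\log \mathrm{INR}}{\log \mathrm{SNR}}, \qquad
    d(\alpha) \;=\; \lim_{\mathrm{SNR}\to\infty}
        \frac{C_{\mathrm{sym}}\!\left(\mathrm{SNR},\, \mathrm{SNR}^{\alpha}\right)}{\log \mathrm{SNR}}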
Abstract:
Kinases are ubiquitous enzymes that are pivotal to many biochemical processes. There are contrasting views on the phosphoryl-transfer mechanism in propionate kinase, an enzyme that reversibly transfers a phosphoryl group from propionyl phosphate to ADP in the final step of the non-oxidative catabolism of L-threonine to propionate. Here, X-ray crystal structures of propionate- and nucleotide-bound Salmonella typhimurium propionate kinase are reported at 1.8-2.0 angstrom resolution. Although the mode of nucleotide binding is comparable to those of other members of the ASKHA superfamily, propionate is bound at a distinct site deeper in the hydrophobic pocket defining the active site. The propionate carboxyl is at a distance of approximately 5 angstrom from the gamma-phosphate of the nucleotide, supporting a direct in-line transfer mechanism. The phosphoryl-transfer reaction is likely to occur via an associative SN2-like transition state that involves a pentacoordinate, trigonal bipyramidal structure with the axial positions occupied by the nucleophile of the substrate and the O atom between the beta- and gamma-phosphates, respectively. The proximity of the strictly conserved His175 and Arg236 to the carboxyl group of the propionate and the gamma-phosphate of ATP suggests their involvement in catalysis. Moreover, ligand binding does not induce global domain movement as reported in some other members of the ASKHA superfamily. Instead, residues Arg86, Asp143 and Pro116-Leu117-His118 that define the active-site pocket move towards the substrate and expel water molecules from the active site. The role of Ala88, previously proposed to be the residue determining substrate specificity, was examined by determining the crystal structures of the propionate-bound Ala88 mutants A88V and A88G. Kinetic analysis and structural data are consistent with a significant role of Ala88 in substrate-specificity determination. The active-site pocket-defining residues Arg86, Asp143 and the Pro116-Leu117-His118 segment are also likely to contribute to substrate specificity.
Abstract:
Let a set of points in the plane be given. A geometric graph on this point set is said to be locally Gabriel if, for every edge, the Euclidean disk with the segment joining the edge's endpoints as diameter does not contain any points of the set that are neighbors of either endpoint. A locally Gabriel graph (LGG) is a generalization of the Gabriel graph and is motivated by applications in wireless networks. Unlike a Gabriel graph, there is no unique LGG on a given point set, since no edge in an LGG is necessarily included or excluded. Thus the edge set of the graph can be customized to optimize certain network parameters depending on the application. The unit distance graph (UDG), introduced by Erdős, is also an LGG. In this paper, we show the following combinatorial bounds on the edge complexity and independent sets of LGGs: (i) For any n, there exists an LGG with ... edges. This improves upon the previous best bound of .... (ii) For various subclasses of convex point sets, we show tight linear bounds on the maximum edge complexity of LGGs. (iii) For any LGG on any point set, there exists an independent set of size ....
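A minimal sketch of the local Gabriel test for a single edge is given below, under the assumptions that the disk is taken closed and that the graph's adjacency is supplied as a dictionary mapping each point to the set of its neighbors; both choices are illustrative rather than the paper's conventions.

    # Locally Gabriel check for one edge (u, v): no neighbor of u or v may lie
    # in the disk having the segment uv as diameter (closed disk assumed here).
    def in_diametral_disk(p, u, v) -> bool:
        """By Thales, p lies in the closed disk with diameter uv iff the angle
        u-p-v is at least 90 degrees, i.e. (u - p) . (v - p) <= 0."""
        return (u[0] - p[0]) * (v[0] - p[0]) + (u[1] - p[1]) * (v[1] - p[1]) <= 0

    def edge_is_locally_gabriel(u, v, adj) -> bool:
        """adj maps each point (an (x, y) tuple) to the set of its neighbors."""
        return all(not in_diametral_disk(p, u, v)
                   for p in (adj[u] | adj[v]) - {u, v})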
Abstract:
A lower-bound limit analysis formulation using two-dimensional finite elements, the three-dimensional Mohr-Coulomb yield criterion, and nonlinear optimization is presented for dealing with an axisymmetric geomechanics stability problem. The optimization was performed using an interior point method based on the logarithmic barrier function. The yield surface was smoothed (1) by removing the tip singularity at the apex of the pyramid in the meridian plane and (2) by eliminating the stress discontinuities at the corners of the yield hexagon in the pi-plane. The circumferential stress (sigma_theta) need not be assumed. With the proposed methodology, for a circular footing, the bearing-capacity factors N_c, N_q, and N_gamma for different values of phi have been computed. For phi = 0, the variation of N_c with changes in the factor m, which accounts for a linear increase of cohesion with depth, has been evaluated. Failure patterns for a few cases have also been drawn. The results from the formulation provide a good match with the solutions available in the literature. (C) 2014 American Society of Civil Engineers.
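For readers unfamiliar with the N_c and N_q notation, the classical strip-footing closed forms (Prandtl/Reissner) are sketched below purely as a familiar benchmark; they are not the circular-footing values computed in the paper, and N_gamma has no comparable closed form.

    # Classical strip-footing bearing-capacity factors, shown as background only.
    from math import exp, pi, radians, tan

    def N_q(phi_deg: float) -> float:
        phi = radians(phi_deg)
        return exp(pi * tan(phi)) * tan(pi / 4 + phi / 2) ** 2

    def N_c(phi_deg: float) -> float:
        if phi_deg == 0:
            return 2 + pi                      # limiting value for phi = 0
        return (N_q(phi_deg) - 1) / tan(radians(phi_deg))

    print(N_c(0), N_q(0))      # 5.14..., 1.0
    print(N_c(30), N_q(30))    # approx. 30.1, 18.4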
Abstract:
The ultimate bearing capacity of a circular footing placed over a rock mass is evaluated by using the lower bound theorem of limit analysis in conjunction with finite elements and nonlinear optimization. The generalized Hoek-Brown (HB) failure criterion was used, keeping the value of the exponent constant at alpha = 0.5. The failure criterion was smoothed both in the meridian and pi planes. The nonlinear optimization was carried out by employing an interior point method based on the logarithmic barrier function. The obtained bearing capacity is presented in a non-dimensional form for different values of GSI, m_i, sigma_ci/(gamma b) and q/sigma_ci. Failure patterns were also examined for a few cases. For validation, computations were also performed for a strip footing. The results obtained from the analysis compare well with the data reported in the literature. Since the equilibrium conditions are satisfied exactly only at the centroids of the elements, and not everywhere in the domain, the obtained lower bound solution is approximate rather than a true (rigorous) lower bound. (C) 2015 Elsevier Ltd. All rights reserved.
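As background for the GSI and m_i parameters mentioned above, the sketch below evaluates the generalized Hoek-Brown criterion for an undisturbed rock mass (disturbance factor D = 0) with the exponent held at alpha = 0.5 as in the abstract; the notation follows the usual GSI/m_i form and the example numbers are made up, so this is not the paper's limit-analysis formulation.

    # Generalized Hoek-Brown strength criterion (D = 0, alpha fixed at 0.5).
    from math import exp

    def hoek_brown_sigma1(sigma3, sigma_ci, GSI, m_i, alpha=0.5):
        """Major principal stress at failure for a given minor principal stress."""
        m_b = m_i * exp((GSI - 100.0) / 28.0)   # reduced material constant
        s = exp((GSI - 100.0) / 9.0)            # rock-mass constant
        return sigma3 + sigma_ci * (m_b * sigma3 / sigma_ci + s) ** alpha

    # Example (made-up numbers): sigma_ci = 50 MPa, GSI = 60, m_i = 10, sigma3 = 2 MPa
    print(hoek_brown_sigma1(2.0, 50.0, 60, 10))   # approx. 18.4 MPa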
Abstract:
This paper presents a lower bound limit analysis approach for solving an axisymmetric stability problem by using the Drucker-Prager (D-P) yield cone in conjunction with finite elements and nonlinear optimization. In principal stress space, the tip of the yield cone has been smoothed by applying a hyperbolic approximation. The nonlinear optimization has been performed by employing an interior point method based on the logarithmic barrier function. A new proposal has also been given for matching the D-P yield cone with the Mohr-Coulomb hexagonal yield pyramid. For the sake of illustration, bearing capacity factors N_c, N_q and N_gamma have been computed, as a function of phi, for both smooth and rough circular foundations. The results obtained from the analysis compare quite well with the solutions reported in the literature.
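The apex smoothing mentioned above can be pictured as follows: the sketch replaces the Drucker-Prager cone by a hyperboloid that removes the tip singularity. The symbols eta, xi and the smoothing parameter a, as well as the sign convention, are generic placeholders rather than the paper's notation.

    # Schematic Drucker-Prager yield function and its hyperbolic apex smoothing.
    from math import sqrt

    def dp_yield(I1: float, J2: float, eta: float, xi: float) -> float:
        """Original D-P cone: singular gradient at the apex (J2 = 0)."""
        return sqrt(J2) + eta * I1 - xi

    def dp_yield_smoothed(I1: float, J2: float, eta: float, xi: float,
                          a: float = 1e-3) -> float:
        """Hyperbolic approximation: smooth everywhere, tends to the cone as a -> 0."""
        return sqrt(J2 + a * a) + eta * I1 - xi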
Abstract:
Bearing capacity factors, N_c, N_q, and N_gamma, for a conical footing are determined by using the lower and upper bound axisymmetric formulation of limit analysis in combination with finite elements and optimization. These factors are obtained in a bound form for a wide range of values of the cone apex angle (beta) and phi, with delta = 0, 0.5 phi, and phi. The bearing capacity factors for a perfectly rough (delta = phi) conical footing generally increase with a decrease in beta. On the contrary, for delta = 0 degrees, the factors N_c and N_q reduce gradually with a decrease in beta. For delta = 0 degrees, the factor N_gamma for phi >= 35 degrees becomes a minimum at beta of approximately 90 degrees. For delta = 0 degrees, N_gamma for phi <= 30 degrees, as in the case of delta = phi, generally reduces with an increase in beta. The failure and nodal velocity patterns are also examined. The results compare well with different numerical solutions and centrifuge test data available from the literature.
Abstract:
We consider near-optimal policies for a single user transmitting on a wireless channel that minimize the average queue length under an average power constraint. Power is consumed in the transmission of data only, and the power used in a transmission is assumed to be a linear function of the data transmitted. The transmission channel may experience multipath fading. We later extend these results to the multiuser case. We show that our policies can be used in a system with energy harvesting sources at the transmitter. Next we consider data users that require minimum rate guarantees. Finally we consider a system that has both data and real-time users. Our policies have low computational complexity, admit closed-form expressions for mean delays, and require only the mean arrival rate, with no queue-length information.
Abstract:
The boxicity (respectively, cubicity) of a graph G is the least integer k such that G can be represented as an intersection graph of axis-parallel k-dimensional boxes (respectively, k-dimensional unit cubes) and is denoted by box(G) (respectively, cub(G)). It was shown by Adiga and Chandran (2010) that for any graph G, cub(G) <= box(G) ⌈log_2 alpha(G)⌉, where alpha(G) is the maximum size of an independent set in G. In this note we show that cub(G) <= 2 ⌈log_2 chi(G)⌉ box(G) + chi(G) ⌈log_2 alpha(G)⌉, where chi(G) is the chromatic number of G. This result can provide a much better upper bound than that of Adiga and Chandran for graph classes with bounded chromatic number. For example, for bipartite graphs we obtain cub(G) <= 2(box(G) + ⌈log_2 alpha(G)⌉). Moreover, we show that for every positive integer k, there exist graphs with chromatic number k such that for every epsilon > 0, the value given by our upper bound is at most (1 + epsilon) times their cubicity. Thus, our upper bound is almost tight. (c) 2015 Elsevier B.V. All rights reserved.
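Written out with the ceiling brackets restored from context (here chi(G) is the chromatic number and alpha(G) the independence number), the bounds above read as follows; the bipartite case is just the general bound with chi(G) = 2.

    % Ceiling placement reconstructed from the surrounding text.
    \operatorname{cub}(G) \le \operatorname{box}(G)\,\lceil \log_2 \alpha(G) \rceil
        \qquad \text{(Adiga and Chandran, 2010)}

    \operatorname{cub}(G) \le 2\lceil \log_2 \chi(G) \rceil \operatorname{box}(G)
        + \chi(G)\,\lceil \log_2 \alpha(G) \rceil
        \qquad \text{(this note)}

    \operatorname{cub}(G) \le 2\bigl(\operatorname{box}(G) + \lceil \log_2 \alpha(G) \rceil\bigr)
        \qquad \text{(bipartite case, } \chi(G) = 2\text{)}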
Abstract:
In the POSSIBLE WINNER problem in computational social choice theory, we are given a set of partial preferences and the question is whether a distinguished candidate can be made a winner by extending the partial preferences to linear preferences. Previous work has provided, for many common voting rules, fixed parameter tractable algorithms for the POSSIBLE WINNER problem, with the number of candidates as the parameter. However, the corresponding kernelization question is still open and, in fact, has been mentioned as a key research challenge [10]. In this paper, we settle this open question for many common voting rules. We show that the POSSIBLE WINNER problem for maximin, Copeland, Bucklin, ranked pairs, and a class of scoring rules that includes the Borda voting rule does not admit a polynomial kernel with the number of candidates as the parameter. We show, however, that the COALITIONAL MANIPULATION problem, which is an important special case of the POSSIBLE WINNER problem, does admit a polynomial kernel for maximin, Copeland, ranked pairs, and a class of scoring rules that includes the Borda voting rule, when the number of manipulators is polynomial in the number of candidates. A significant conclusion of our work is that the POSSIBLE WINNER problem is harder than the COALITIONAL MANIPULATION problem, since the COALITIONAL MANIPULATION problem admits a polynomial kernel whereas the POSSIBLE WINNER problem does not. (C) 2015 Elsevier B.V. All rights reserved.
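As a small illustration of the Borda scoring rule named above (complete preferences only, so it does not address the POSSIBLE WINNER question itself, which starts from partial preferences), the sketch below scores a made-up profile and returns the winner.

    # Borda rule on complete rankings; the profile is a made-up example.
    def borda_winner(profile, candidates):
        """profile: list of rankings, each listing candidates from best to worst."""
        m = len(candidates)
        score = {c: 0 for c in candidates}
        for ranking in profile:
            for pos, c in enumerate(ranking):
                score[c] += (m - 1) - pos      # Borda score vector (m-1, ..., 1, 0)
        return max(candidates, key=lambda c: score[c]), score

    votes = [["a", "b", "c"], ["b", "c", "a"], ["a", "c", "b"]]
    print(borda_winner(votes, ["a", "b", "c"]))   # ('a', {'a': 4, 'b': 3, 'c': 2})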
Abstract:
A discussion is provided of the comments raised by the discusser (Clausen, 2015) [1] on the article recently published by the authors (Chakraborty and Kumar, 2015). The effect of the exponent alpha becomes more critical for values of GSI smaller than approximately 30. On the other hand, for larger values of GSI, the results obtained earlier by the authors remain largely independent of alpha and can be used directly. (C) 2015 Elsevier Ltd. All rights reserved.
Abstract:
Boldyreva, Palacio and Warinschi introduced a multiple forking game as an extension of general forking. The notion of (multiple) forking is a useful abstraction from the actual simulation of a cryptographic scheme to the adversary in a security reduction, and is achieved through the intermediary of a so-called wrapper algorithm. Multiple forking has turned out to be a useful tool in the security argument of several cryptographic protocols. However, a reduction employing multiple forking incurs a significant degradation of ..., where ... denotes the upper bound on the underlying random oracle calls and ... the number of forkings. In this work we take a closer look at the reasons for the degradation, with a tighter security bound in mind. We nail down the exact set of conditions for success in the multiple forking game. A careful analysis of the cryptographic schemes and corresponding security reductions employing multiple forking leads to the formulation of `dependence' and `independence' conditions pertaining to the output of the wrapper in different rounds. Based on the (in)dependence conditions we propose a general framework of multiple forking and a General Multiple Forking Lemma. Leveraging (in)dependence to the full allows us to improve the degradation factor in the multiple forking game by a factor of .... By implication, the cost of a single forking involving two random oracles (augmented forking) matches that involving a single random oracle (elementary forking). Finally, we study the effect of these observations on the concrete security of existing schemes employing multiple forking. We conclude that, by careful design of the protocol (and the wrapper in the security reduction), it is possible to harness our observations to the full extent.
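For reference, the single general forking lemma that multiple forking extends is usually stated as below; acc denotes the accepting probability of the wrapper, q the bound on its random oracle queries, and h the size of the set from which the oracle replies are drawn. This is the standard background statement, not a bound taken from the abstract above.

    % General forking lemma (single forking), standard form.
    \mathrm{frk} \;\ge\; \mathrm{acc}\left(\frac{\mathrm{acc}}{q} - \frac{1}{h}\right)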