909 results for Rademacher complexity bound
Abstract:
A rigorous lower-bound solution, using finite element limit analysis, has been obtained for the ultimate bearing capacity of two interfering strip footings placed on a sandy medium. Both smooth and rough footing-soil interfaces are considered in the analysis. The failure load for an interfering footing is always greater than that for a single isolated footing. The effect of the interference on the failure load (i) is greater for rough footings than for smooth footings, (ii) increases with an increase in the friction angle φ, and (iii) becomes almost negligible beyond S/B > 3. Compared with various theoretical and experimental results reported in the literature, the present analysis generally provides the lowest magnitude of the collapse load. Copyright (c) 2011 John Wiley & Sons, Ltd.
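The comparison between the interfering and isolated failure loads above is commonly expressed through an efficiency (interference) factor; the statement below is the standard convention in the geotechnical literature (the symbol \xi_\gamma and the normalization are not taken from this abstract):

\xi_\gamma \;=\; \frac{q_{u,\,\mathrm{interfering}}(S/B)}{q_{u,\,\mathrm{single}}} \;\ge\; 1, \qquad \xi_\gamma \to 1 \ \text{for } S/B \gtrsim 3,

where S is the clear spacing between the two footings and B is the footing width.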
Abstract:
In this paper we discuss a novel procedure for constructing clusters of bound particles in the case of a quantum integrable derivative delta-function Bose gas in one dimension. It is shown that clusters of bound particles can be constructed for this Bose gas for some special values of the coupling constant by taking the quasi-momenta associated with the corresponding Bethe state to be equidistant points on a single circle in the complex momentum plane. We also establish a connection between these special values of the coupling constant and some fractions belonging to the Farey sequences in number theory. This connection leads to a classification of the clusters of bound particles associated with the derivative delta-function Bose gas and allows us to study various properties of these clusters, such as their size and their stability under variation of the coupling constant. (C) 2013 Elsevier B.V. All rights reserved.
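As a purely geometric illustration of the quasi-momentum configuration described above, a cluster of n quasi-momenta equally spaced on a circle in the complex plane can be generated as follows; the radius, centre and overall phase are placeholder assumptions, since in the paper they are fixed by the Bethe equations and the coupling constant.

import numpy as np

def cluster_quasi_momenta(n, radius=1.0, center=0.0 + 0.0j, phase=0.0):
    """Return n quasi-momenta at equidistant points on a circle in the
    complex momentum plane (illustrative only; radius, centre and phase
    are not the values determined by the Bethe equations)."""
    angles = phase + 2.0 * np.pi * np.arange(n) / n
    return center + radius * np.exp(1j * angles)

# Example: a 4-particle cluster on the unit circle
print(cluster_quasi_momenta(4))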
Abstract:
Dominance and subordinate behaviors are important ingredients in the social organizations of group-living animals. Behavioral observations on the two eusocial species Ropalidia marginata and Ropalidia cyathiformis suggest varying complexities in their social systems. The queen of R. cyathiformis is an aggressive individual who usually holds the top position in the dominance hierarchy, although she does not necessarily show the maximum number of acts of dominance, while the R. marginata queen rarely shows aggression and usually does not hold the top position in the dominance hierarchy of her colony. In R. marginata, more workers are involved in dominance-subordinate interactions than in R. cyathiformis. These differences are reflected in the distribution of dominance-subordinate interactions among the hierarchically ranked individuals in both species. The percentage of dominance interactions decreases gradually with hierarchical rank in R. marginata, while in R. cyathiformis it first increases and then decreases. We use an agent-based model to investigate the underlying mechanism that could give rise to the observed patterns in both species. The model assumes that, apart from some non-interacting individuals, the interaction probabilities of the agents depend on their pre-differentiated winning abilities. Our simulations show that if the queen adopts a strategy of engaging in a moderate number of dominance interactions, one obtains a pattern similar to that of R. cyathiformis, whereas a strategy of very few interactions by the queen leads to the pattern of R. marginata. We infer that both species follow a common interaction pattern, and that the differences in their social organization are due to slight changes in queen as well as worker strategies. These changes in strategies are expected to accompany the evolution of more complex societies from simpler ones.
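A minimal sketch of the kind of agent-based model described above is given below, assuming a toy rule in which each agent has a fixed winning ability, pairwise interactions are drawn with probability proportional to the agents' interaction propensities, and the queen's involvement is throttled by a single strategy parameter; all names, parameter values and functional forms are illustrative assumptions, not the authors' model.

import numpy as np

rng = np.random.default_rng(0)

def simulate(n_agents=20, n_interactions=5000, queen_activity=0.05):
    """Toy dominance-interaction simulation (illustrative assumptions only).

    Agent 0 plays the role of the 'queen'; each agent i has a
    pre-differentiated winning ability w[i].  A pair is picked with
    probability proportional to interaction propensity, and the agent
    with the larger ability records one act of dominance.
    """
    w = np.sort(rng.random(n_agents))[::-1]          # abilities, queen first
    propensity = w.copy()
    propensity[0] *= queen_activity                  # queen strategy parameter
    dominance_acts = np.zeros(n_agents, dtype=int)
    for _ in range(n_interactions):
        i, j = rng.choice(n_agents, size=2, replace=False,
                          p=propensity / propensity.sum())
        winner = i if w[i] > w[j] else j
        dominance_acts[winner] += 1
    return dominance_acts / dominance_acts.sum()     # share of dominance acts by rank

print(simulate(queen_activity=0.05))   # very low queen involvement (cf. R. marginata strategy)
print(simulate(queen_activity=0.5))    # moderate queen involvement (cf. R. cyathiformis strategy)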
Abstract:
String theory and gauge/gravity duality suggest a lower bound on the ratio of shear viscosity (η) to entropy density (s) for any matter of μℏ/(4πk_B), where ℏ and k_B are the reduced Planck and Boltzmann constants respectively and μ <= 1. Motivated by this, we explore η/s in black hole accretion flows in order to understand whether such exotic flows could be a natural site for the lowest η/s. Accretion flows play an important role in black hole physics in identifying the existence of the underlying black hole. An accretion flow is a rotating shear flow with insignificant molecular viscosity, which could nevertheless have a significant turbulent viscosity, generating transport, heat and hence entropy in the flow. However, in the presence of a strong magnetic field, magnetic stresses can help transport matter independently of viscosity via the celebrated Blandford-Payne mechanism. In such cases, energy, and hence entropy, is produced via Ohmic dissipation. In addition, certain optically thin, hot accretion flows, with temperatures greater than or similar to 10^9 K, may be favourable for nuclear burning, which could generate/absorb enormous amounts of energy, much higher than that in a star. We find that η/s in accretion flows appears to be close to the lower bound suggested by theory if they are embedded in a strong magnetic field or produce nuclear energy, so that the source of energy is not viscous effects. A lower bound on η/s also leads to an upper bound on the Reynolds number of the flow.
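For reference, the bound quoted above is the Kovtun-Son-Starinets (KSS) bound; taking μ = 1, its numerical value follows directly from the constants (a routine evaluation, not a figure from the paper):

\frac{\eta}{s} \;\ge\; \frac{\hbar}{4\pi k_B} \;=\; \frac{1.055\times 10^{-34}\ \mathrm{J\,s}}{4\pi \times 1.381\times 10^{-23}\ \mathrm{J\,K^{-1}}} \;\approx\; 6.1\times 10^{-13}\ \mathrm{K\,s}.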
Abstract:
While it is well known that extremely long low-density parity-check (LDPC) codes perform exceptionally well in error correction applications, short-length codes are preferable in practice. However, short-length LDPC codes suffer from performance degradation owing to graph-based impairments such as short cycles, trapping sets and stopping sets in the bipartite graph of the LDPC matrix. In particular, performance degradation at moderate to high E_b/N_0 is caused by oscillations in the bit node a posteriori probabilities induced by short cycles and trapping sets in the bipartite graph. In this study, a computationally efficient algorithm is proposed to improve the performance of short-length LDPC codes at moderate to high E_b/N_0. This algorithm makes use of the information generated by the belief propagation (BP) algorithm in previous iterations before a decoding failure occurs. Using this information, a reliability-based estimation is performed on each bit node to supplement the BP algorithm. The proposed algorithm gives an appreciable coding gain compared with BP decoding for LDPC codes with code rates of 1/2 or less. The coding gains are modest to significant for regular LDPC codes optimised for bipartite-graph conditioning, whereas they are huge for unoptimised codes. Hence, this algorithm is useful for relaxing some stringent constraints on the graphical structure of the LDPC code and for developing hardware-friendly designs.
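The abstract does not spell out the reliability rule, so the sketch below is only one plausible instantiation: it assumes the decoder stores the per-iteration bit-node LLRs and, on decoding failure, averages them to damp the oscillations before making hard decisions. The function name, the averaging rule and the oscillation penalty are illustrative assumptions, not the paper's algorithm.

import numpy as np

def reliability_estimate(llr_history):
    """llr_history: array of shape (n_iterations, n_bits) holding the
    a posteriori LLR of every bit node at each BP iteration.

    Returns hard decisions and a reliability score based on a
    time-averaged LLR, which smooths the oscillations that short cycles
    and trapping sets induce in the per-iteration values."""
    llr_history = np.asarray(llr_history, dtype=float)
    avg_llr = llr_history.mean(axis=0)                 # average over iterations
    oscillating = np.any(np.sign(llr_history) != np.sign(llr_history[-1]), axis=0)
    reliability = np.abs(avg_llr) * np.where(oscillating, 0.5, 1.0)
    hard_decisions = (avg_llr < 0).astype(int)         # LLR < 0  ->  bit = 1
    return hard_decisions, reliability

# Toy usage: 3 iterations, 4 bits (the second bit oscillates)
history = [[+2.0, +1.0, -3.0, +0.5],
           [+2.5, -0.8, -3.2, +0.7],
           [+2.2, +0.9, -3.1, +0.6]]
bits, rel = reliability_estimate(history)
print(bits, rel)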
Abstract:
The linearization of the Drucker-Prager yield criterion associated with an axisymmetric problem has been achieved by simulating a sphere with a truncated icosahedron having 32 faces and 60 vertices. On this basis, a numerical formulation has been proposed for solving axisymmetric stability problems using lower-bound limit analysis, finite elements, and linear optimization. For comparison, the linearization of the Mohr-Coulomb yield criterion obtained by replacing the three cones with an interior polyhedron, as proposed earlier by Pastor and Turgeman for axisymmetric problems, has also been implemented. The two formulations have been applied to determine the collapse loads for a circular footing resting on a cohesive-frictional material with nonzero unit weight. The computational results are found to be quite convincing. (C) 2013 American Society of Civil Engineers.
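For context, the yield criterion being linearized can be written, in one common textbook convention (the symbols and sign convention below are standard usage, not taken from the paper), as

\sqrt{J_2} + \alpha I_1 - k \;\le\; 0,

where I_1 is the first invariant of the stress tensor, J_2 the second invariant of the deviatoric stress, and α, k material constants tied to the friction angle and cohesion. The term \sqrt{J_2} is what makes the constraint nonlinear; approximating the quadratic (spherical) constraint surface by the planar facets of a truncated icosahedron turns the yield condition into a finite set of linear inequalities, which is what allows the lower-bound problem to be solved by linear optimization.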
Abstract:
In this letter, we propose a reduced-complexity implementation of a partial interference cancellation group decoder with successive interference cancellation (PIC-GD-SIC) by employing the theory of displacement structures. The proposed algorithm exploits the block-Toeplitz structure of the effective matrix and chooses an ordering of the groups such that the zero-forcing matrices associated with the various groups are obtained through Schur recursions without any approximations. We show, using an example, that the proposed implementation offers a significantly lower computational complexity than the direct approach, without any loss in performance.
On the sphere decoding complexity of high-rate multigroup decodable STBCs in asymmetric MIMO systems
Abstract:
A space-time block code (STBC) is said to be multigroup decodable if the information symbols encoded by it can be partitioned into two or more groups such that each group of symbols can be maximum-likelihood (ML) decoded independently of the other symbol groups. In this paper, we show that the upper triangular matrix encountered during the sphere decoding of a linear dispersion STBC can be rank-deficient even when the rate of the code is less than the minimum of the number of transmit and receive antennas. We then show that all known families of high-rate (rate greater than 1) multigroup decodable codes have a rank-deficient upper triangular matrix even when the rate is less than the number of transmit and receive antennas, and that this rank-deficiency problem arises only in asymmetric MIMO systems, where the number of receive antennas is strictly less than the number of transmit antennas. Unlike for codes with a full-rank matrix, the complexity of the sphere decoding-based ML decoder for STBCs with a rank-deficient matrix is polynomial in the constellation size, and hence is high. We derive the ML sphere decoding complexity of most of the known high-rate multigroup decodable codes and show that, for each code, the complexity is a decreasing function of the number of receive antennas.
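A one-line way to see why rank deficiency drives up the cost (a standard argument for under-determined sphere decoding, stated here under the assumption that the code carries K symbols drawn from a constellation of size M, not as the paper's derivation): if the upper triangular matrix R has rank r < K, then K - r of its diagonal entries are zero, the corresponding symbols are never constrained by the sphere radius, and the decoder must enumerate them exhaustively, so the worst-case cost grows roughly as M^{K-r} times that of a full-rank search over the remaining r symbols, i.e. polynomially in M rather than independently of it.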
Abstract:
Generalizing a result (the case k = 1) due to M. A. Perles, we show that any polytopal upper bound sphere of odd dimension 2k + 1 belongs to the generalized Walkup class K_k(2k + 1), i.e., all its vertex links are k-stacked spheres. This is surprising since it is far from obvious that the vertex links of polytopal upper bound spheres should have any special combinatorial structure. It has been conjectured that for d ≠ 2k + 1, all (k + 1)-neighborly members of the class K_k(d) are tight. The result of this paper shows that the hypothesis d ≠ 2k + 1 is essential for every value of k ≥ 1.
Abstract:
Construction of high-rate Space Time Block Codes (STBCs) with low decoding complexity has been studied widely using techniques such as sphere decoding and non-Maximum-Likelihood (ML) decoders such as the QR decomposition decoder with M paths (QRDM decoder). Recently, Ren et al. presented a new class of STBCs known as block orthogonal STBCs (BOSTBCs), which can be exploited by QRDM decoders to achieve a significant reduction in decoding complexity without performance loss. The block orthogonal property of the constructed codes was, however, only shown via simulations. In this paper, we give analytical proofs of the block orthogonal structure of various existing codes in the literature, including the codes constructed by Ren et al. We show that codes formed as the sum of Clifford Unitary Weight Designs (CUWDs) or Coordinate Interleaved Orthogonal Designs (CIODs) exhibit block orthogonal structure. We also provide new constructions of block orthogonal codes from Cyclic Division Algebras (CDAs) and Crossed-Product Algebras (CPAs). In addition, we show how the block orthogonal property of the STBCs can be exploited to reduce the decoding complexity of a sphere decoder using a depth-first search approach. Simulation results of the decoding complexity show a 30% reduction in the number of floating point operations (FLOPS) of BOSTBCs compared with STBCs without the block orthogonal structure.
Abstract:
Transmit antenna selection (AS) is a popular technique with low hardware complexity that improves the performance of an underlay cognitive radio system, in which a secondary transmitter can transmit while the primary is on, but under tight constraints on the interference it causes to the primary. The underlay interference constraint fundamentally changes the criterion used to select the antenna because the channel gains to both the secondary and primary receivers must be taken into account. We develop a novel, optimal joint AS and transmit power adaptation policy that minimizes a Chernoff upper bound on the symbol error probability (SEP) at the secondary receiver subject to an average transmit power constraint and an average primary interference constraint. Explicit expressions for the optimal antenna and power are provided in terms of the channel gains to the primary and secondary receivers. The SEP of the optimal policy is at least an order of magnitude lower than that achieved by several ad hoc selection rules proposed in the literature, and even lower than that of the optimal antenna selection rule for the case where the transmit power is either zero or a fixed value.
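As a rough illustration of the structure of such a policy (not the paper's closed-form solution: the Chernoff-bound constant, the Lagrange multipliers, and the per-antenna metric below are all assumptions), one can picture the selection as minimizing, over antennas and admissible powers, a Lagrangian that weights an error-bound term against the average-power and average-interference constraints:

import numpy as np

def select_antenna_and_power(h_sec, g_pri, c=1.0, lam=0.1, mu=0.5, p_max=10.0):
    """Illustrative joint antenna-selection / power-adaptation rule.

    h_sec[i]: power gain from antenna i to the secondary receiver
    g_pri[i]: power gain from antenna i to the primary receiver
    Minimizes exp(-c*p*h) + lam*p + mu*p*g per antenna (a stand-in Chernoff
    bound plus Lagrangian penalties for the power and interference
    constraints), then picks the best antenna.  All constants are assumptions.
    """
    best = (None, 0.0, np.inf)                       # (antenna, power, metric)
    for i, (h, g) in enumerate(zip(h_sec, g_pri)):
        # Unconstrained minimizer of exp(-c*p*h) + (lam + mu*g)*p over p >= 0
        p_star = max(0.0, np.log(c * h / (lam + mu * g)) / (c * h))
        p_star = min(p_star, p_max)
        metric = np.exp(-c * p_star * h) + (lam + mu * g) * p_star
        if metric < best[2]:
            best = (i, p_star, metric)
    return best[:2]

# Toy usage with 4 antennas
rng = np.random.default_rng(1)
h = rng.exponential(1.0, size=4)   # secondary-link gains
g = rng.exponential(0.3, size=4)   # primary-link (interference) gains
print(select_antenna_and_power(h, g))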
Abstract:
Decoding of linear space-time block codes (STBCs) with sphere decoding (SD) is well known. A fast version of SD, known as fast sphere decoding (FSD), was introduced by Biglieri, Hong and Viterbo. Viewing a linear STBC as a vector space spanned by its defining weight matrices over the real number field, we define a quadratic form (QF), called the Hurwitz-Radon QF (HRQF), on this vector space and give a QF interpretation of the FSD complexity of a linear STBC. It is shown that the FSD complexity is a function only of the weight matrices defining the code and their ordering, and not of the channel realization (even though the equivalent channel used by the SD depends on the channel realization) or the number of receive antennas. It is also shown that the FSD complexity is completely captured in a single matrix obtained from the HRQF. Moreover, for a given set of weight matrices, an algorithm to obtain an optimal ordering of them leading to the least FSD complexity is presented. The well-known classes of low-FSD-complexity codes (multigroup decodable codes, fast decodable codes and fast group decodable codes) are presented in the framework of the HRQF.
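A small numerical sketch of the kind of object involved: below, the matrix entry m_ij is taken to be ||A_i A_j^H + A_j A_i^H||_F^2 for the weight matrices A_i of the Alamouti code. This particular formula for the HRQF matrix is an assumption about the definition, and the code is only an illustration; for Alamouti the off-diagonal entries vanish, reflecting the Hurwitz-Radon orthogonality that makes the code single-symbol decodable.

import numpy as np

# Weight matrices of the Alamouti code X = [[x1, x2], [-x2*, x1*]],
# with x1 = s1 + j*s2 and x2 = s3 + j*s4 (s_k real).
A = [np.array([[1, 0], [0, 1]], dtype=complex),             # s1
     np.array([[1j, 0], [0, -1j]], dtype=complex),          # s2
     np.array([[0, 1], [-1, 0]], dtype=complex),            # s3
     np.array([[0, 1j], [1j, 0]], dtype=complex)]           # s4

def hrqf_matrix(weights):
    """Candidate HRQF matrix: m_ij = ||A_i A_j^H + A_j A_i^H||_F^2
    (assumed form, used here only to illustrate the idea)."""
    k = len(weights)
    m = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            s = weights[i] @ weights[j].conj().T + weights[j] @ weights[i].conj().T
            m[i, j] = np.linalg.norm(s, 'fro') ** 2
    return m

print(hrqf_matrix(A))   # off-diagonal entries are zero for the Alamouti code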
Abstract:
Low-complexity joint estimation of synchronization impairments and the channel in a single-user MIMO-OFDM system is presented in this paper. Based on a system model that takes into account the effects of synchronization impairments such as carrier frequency offset, sampling frequency offset, and symbol timing error, as well as the channel, a Maximum Likelihood (ML) algorithm for the joint estimation is proposed. To reduce the complexity of the ML grid search, the number of received signal samples used for estimation needs to be reduced. Conventional channel estimation techniques using Least-Squares (LS) or Maximum a posteriori (MAP) methods fail for the resulting reduced-sample, under-determined system, which leads to poor performance of the joint estimator. The proposed ML algorithm instead uses a Compressed Sensing (CS) based channel estimation method in a sparse fading scenario, where the received samples used for estimation are fewer than required for LS or MAP based estimation. The performance of the estimation method is studied through numerical simulations, and it is observed that the CS based joint estimator performs better than the LS and MAP based joint estimators. (C) 2013 Elsevier GmbH. All rights reserved.
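To make the compressed-sensing step concrete, here is a minimal sketch that recovers a sparse channel impulse response from fewer measurements than unknowns, where an LS solution would be under-determined. The abstract does not say which CS recovery algorithm is used; orthogonal matching pursuit below, the random dictionary, and the dimensions are all assumptions chosen for illustration.

import numpy as np

def omp(Phi, y, sparsity):
    """Orthogonal matching pursuit: recover a sparse x from y = Phi @ x."""
    m, n = Phi.shape
    residual, support = y.copy(), []
    x_hat = np.zeros(n, dtype=complex)
    for _ in range(sparsity):
        idx = int(np.argmax(np.abs(Phi.conj().T @ residual)))   # best-matching atom
        if idx not in support:
            support.append(idx)
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs
    x_hat[support] = coeffs
    return x_hat

rng = np.random.default_rng(0)
n_taps, n_meas, n_nonzero = 64, 20, 3          # under-determined: 20 < 64
h = np.zeros(n_taps, dtype=complex)
taps = rng.choice(n_taps, n_nonzero, replace=False)
h[taps] = rng.standard_normal(n_nonzero) + 1j * rng.standard_normal(n_nonzero)
Phi = (rng.standard_normal((n_meas, n_taps))
       + 1j * rng.standard_normal((n_meas, n_taps))) / np.sqrt(n_meas)
y = Phi @ h + 0.01 * (rng.standard_normal(n_meas) + 1j * rng.standard_normal(n_meas))
h_hat = omp(Phi, y, n_nonzero)
print(np.linalg.norm(h - h_hat) / np.linalg.norm(h))   # small relative error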
Abstract:
The horizontal pullout capacity of vertical anchors embedded in sand has been determined by using an upper bound theorem of limit analysis in combination with finite elements. The numerical results are presented in nondimensional form to give the pullout resistance for various combinations of the embedment ratio of the anchor (H/B), the internal friction angle (ϕ) of the sand, and the anchor-soil interface friction angle (δ). The pullout resistance increases with increases in the embedment ratio, the friction angle of the sand, and the anchor-soil interface friction angle. Compared with solutions reported earlier in the literature, the present solution provides a better upper bound on the ultimate collapse load.