959 results for componentwise ultimate bounds


Relevance:

10.00%

Publisher:

Abstract:

This paper discusses a novel high-speed approach for human action recognition in the H.264/AVC compressed domain. The proposed algorithm uses cues from the quantization parameters and motion vectors extracted from the compressed video sequence for feature extraction, followed by classification using Support Vector Machines (SVM). The ultimate goal of our work is to provide a much faster algorithm than its pixel-domain counterparts, with comparable accuracy, using only the sparse information available in compressed video. Partial decoding avoids the complexity of full decoding and minimizes computational load and memory usage, which can result in reduced hardware utilization and fast recognition. The proposed approach can handle illumination changes and scale and appearance variations, and is robust in both outdoor and indoor testing scenarios. We have tested our method on two benchmark action datasets and achieved more than 85% accuracy. The proposed algorithm classifies actions at a speed (>2000 fps) approximately 100 times faster than existing state-of-the-art pixel-domain algorithms.
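
As a rough illustration of the pipeline this abstract describes (compressed-domain features fed to an SVM), here is a minimal, self-contained sketch. The motion-vector histogram and QP statistics used as features, and the synthetic data, are assumptions for illustration, not the paper's actual descriptors:

```python
# Minimal sketch of a compressed-domain action-recognition pipeline:
# motion-vector (MV) histograms plus quantization-parameter (QP) statistics
# as features, classified with an SVM. The feature layout is a guess for
# illustration; the paper's exact descriptors are not specified here.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def clip_features(mv, qp, n_bins=8):
    """mv: (N, 2) motion vectors; qp: (M,) QP values for one clip."""
    angles = np.arctan2(mv[:, 1], mv[:, 0])            # MV orientations
    hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi))
    hist = hist / max(len(mv), 1)                      # normalized orientation histogram
    mags = np.linalg.norm(mv, axis=1)
    return np.concatenate([hist, [mags.mean(), mags.std(), qp.mean(), qp.std()]])

# Toy training data: two synthetic "actions" with different MV statistics.
rng = np.random.default_rng(0)
X, y = [], []
for label, angle in [(0, 0.0), (1, np.pi / 2)]:        # horizontal vs vertical motion
    for _ in range(50):
        mv = rng.normal([np.cos(angle), np.sin(angle)], 0.3, size=(200, 2))
        qp = rng.integers(20, 40, size=30).astype(float)
        X.append(clip_features(mv, qp)); y.append(label)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(np.array(X), np.array(y))
print("training accuracy:", clf.score(np.array(X), np.array(y)))
```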

Relevance:

10.00%

Publisher:

Abstract:

Let P be a set of n points in R^d. A point x is said to be a centerpoint of P if x is contained in every convex object that contains more than dn/(d+1) points of P. We call a point x a strong centerpoint for a family of objects 𝒞 if x ∈ P is contained in every object C ∈ 𝒞 that contains more than a constant fraction of the points of P. A strong centerpoint does not exist even for halfspaces in R^2. We prove that a strong centerpoint exists for axis-parallel boxes in R^d and give exact bounds. We then extend this to small strong ε-nets in the plane. Let ε_S(i) represent the smallest real number in (0, 1] such that there exists an ε_S(i)-net of size i with respect to S. We prove upper and lower bounds for ε_S(i) where S is the family of axis-parallel rectangles, halfspaces and disks. (C) 2014 Elsevier B.V. All rights reserved.
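
A brute-force check makes the strong-centerpoint definition concrete. The sketch below tests whether some point of P lies in every axis-parallel box that contains more than a fraction f of the points; the threshold f = 3/4 is a placeholder assumption, not the paper's exact constant:

```python
# Brute-force check of the strong-centerpoint property for axis-parallel
# boxes in the plane. Candidate boxes can be restricted to those whose
# sides pass through point coordinates without loss of generality.
import itertools
import numpy as np

def heavy_boxes(P, f):
    """Yield boolean masks of points inside each axis-parallel box
    (defined by point coordinates) containing more than f*len(P) points."""
    xs, ys = np.unique(P[:, 0]), np.unique(P[:, 1])
    n = len(P)
    for x0, x1 in itertools.combinations_with_replacement(xs, 2):
        for y0, y1 in itertools.combinations_with_replacement(ys, 2):
            inside = ((P[:, 0] >= x0) & (P[:, 0] <= x1) &
                      (P[:, 1] >= y0) & (P[:, 1] <= y1))
            if inside.sum() > f * n:
                yield inside

def strong_centerpoints(P, f):
    """Points of P contained in every f-heavy axis-parallel box."""
    ok = np.ones(len(P), dtype=bool)
    for inside in heavy_boxes(P, f):
        ok &= inside
    return P[ok]

rng = np.random.default_rng(1)
P = rng.random((12, 2))
print(strong_centerpoints(P, f=0.75))   # candidates for f = 3/4 (assumed)
```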

Relevance:

10.00%

Publisher:

Abstract:

The rainbow connection number, rc(G), of a connected graph G is the minimum number of colors needed to color its edges so that every pair of vertices is connected by at least one path in which no two edges are colored the same (note that the coloring need not be proper). In this paper we study the rainbow connection number with respect to three important graph product operations (namely the Cartesian product, the lexicographic product and the strong product) and the operation of taking the power of a graph. In this direction, we show that if G is a graph obtained by applying any of the operations mentioned above to non-trivial graphs, then rc(G) ≤ 2r(G) + c for a small additive constant c, where r(G) denotes the radius of G. In general the rainbow connection number of a bridgeless graph can be as high as the square of its radius [1]. This is an attempt to identify some graph classes whose rainbow connection number is very close to the obvious lower bound of the diameter (and thus the radius). The bounds reported are tight up to additive constants. The proofs are constructive and hence yield polynomial-time constant-factor approximation algorithms.
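
For intuition, the following sketch decides whether a given edge coloring makes a small graph rainbow connected, by searching over (vertex, used-color-set) states; the example graph and coloring are made up, and the approach is exponential in general:

```python
# Check whether a given edge coloring makes a small graph rainbow connected:
# every vertex pair must be joined by a path whose edges have distinct colors.
def rainbow_reachable(adj, coloring, s):
    """Vertices reachable from s along a rainbow path; states are
    (vertex, frozenset of colors used so far)."""
    seen = {(s, frozenset())}
    stack = [(s, frozenset())]
    reach = {s}
    while stack:
        v, used = stack.pop()
        for w in adj[v]:
            c = coloring[frozenset((v, w))]
            if c not in used:
                state = (w, used | {c})
                if state not in seen:
                    seen.add(state)
                    stack.append(state)
                    reach.add(w)
    return reach

def is_rainbow_connected(adj, coloring):
    verts = list(adj)
    return all(set(verts) <= rainbow_reachable(adj, coloring, s) for s in verts)

# Cycle C5 with 3 colors: rc(C5) = 3, so this coloring should succeed.
adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
coloring = {frozenset((i, (i + 1) % 5)): i % 3 for i in range(5)}
print(is_rainbow_connected(adj, coloring))
```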

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we study the inverse mode shape problem for an Euler-Bernoulli beam using an analytical approach. The mass and stiffness variations are determined for a beam, having various boundary conditions, which has a prescribed polynomial second mode shape with an internal node. It is found that physically feasible rectangular cross-section beams which satisfy the inverse problem exist for a variety of boundary conditions. The effect of the location of the internal node on the mass and stiffness variations and on the deflection of the beam is studied. The derived functions are used to verify the p-version finite element code for the cantilever boundary condition. The paper also presents the bounds on the location of the internal node, for a valid mass and stiffness variation, for any given boundary condition. The derived property variations, corresponding to a given mode shape and boundary condition, also provide a simple closed-form solution for a class of non-uniform Euler-Bernoulli beams. These closed-form solutions can also be used to check optimization algorithms proposed for modal tailoring.
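
For reference, the relation being inverted is the standard Euler-Bernoulli free-vibration equation; the block below states it and the inverse reading (standard beam theory, not the paper's specific derivation):

```latex
% Euler-Bernoulli free vibration: mode shape \phi(x), natural frequency
% \omega, flexural stiffness EI(x), mass per unit length m(x):
\frac{d^2}{dx^2}\!\left( EI(x)\,\frac{d^2\phi}{dx^2} \right)
  \;=\; \omega^2\, m(x)\,\phi(x)
% Forward problem: given EI(x) and m(x), solve for (\omega, \phi) subject
% to the boundary conditions. Inverse mode shape problem: prescribe a
% polynomial \phi(x) (here, a second mode with one internal node) and
% recover EI(x) and m(x) consistent with the boundary conditions.
```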

Relevance:

10.00%

Publisher:

Abstract:

The ultimate bearing capacity of a circular footing, placed over a soil mass reinforced with horizontal layers of circular reinforcement sheets, has been determined by using the upper-bound theorem of limit analysis in conjunction with finite elements and linear optimization. For performing the analysis, three different soil media have been considered separately, namely, (i) fully granular, (ii) cohesive-frictional, and (iii) fully cohesive with an additional provision to account for an increase of cohesion with depth. The reinforcement sheets are assumed to be structurally strong enough to resist axial tension but without any resistance to bending; such an approximation usually holds for geogrid sheets. Shear failure between the reinforcement sheet and the adjoining soil mass has been considered. The increase in the magnitudes of the bearing capacity factors (N_c and N_γ) with the inclusion of the reinforcement has been computed in terms of the efficiency factors η_c and η_γ. The results have been obtained (i) for different values of φ in the case of fully granular (c = 0) and c-φ soils, and (ii) for different rates (m) at which the cohesion increases with depth for a purely cohesive soil (φ = 0°). The critical positions and the corresponding optimum diameter of the reinforcement sheets, for achieving the maximum bearing capacity, have also been established. The gain in bearing capacity due to the reinforcement increases continuously with an increase in φ. The improvement in the bearing capacity becomes quite extensive for two layers of reinforcement as compared to a single layer. The results obtained from the study are found to compare well with the available theoretical and experimental data reported in the literature. (C) 2014 The Japanese Geotechnical Society. Production and hosting by Elsevier B.V. All rights reserved.
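
One common way to read the efficiency factors is as multipliers on the classical bearing-capacity terms. The form below is a standard Terzaghi-type expression for a surface footing, given as a reading aid; the paper's exact expression for a circular footing (shape and depth effects) may differ:

```latex
% Unreinforced ultimate bearing capacity of a surface footing (no surcharge):
q_u = c\,N_c + \tfrac{1}{2}\,\gamma\,B\,N_\gamma
% With reinforcement, the abstract's efficiency factors scale each term:
q_{u,\mathrm{reinforced}}
  = \eta_c\, c\, N_c + \eta_\gamma\, \tfrac{1}{2}\,\gamma\, B\, N_\gamma ,
% where \eta_c, \eta_\gamma \ge 1 quantify the gain from the reinforcement,
% B is the footing width (diameter), \gamma the soil unit weight, c the
% cohesion.
```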

Relevance:

10.00%

Publisher:

Abstract:

Two of the aims of laboratory one-dimensional consolidation tests are prediction of the end of primary settlement, and determination of the coefficient of consolidation of soils required for the time rate of consolidation analysis from time-compression data. Of the many methods documented in the literature to achieve these aims, Asaoka's method is a simple and useful tool, and yet the most neglected one since its inception in the geotechnical engineering literature more than three decades ago. This paper appraises Asaoka's method, originally proposed for the field prediction of ultimate settlement, from the perspective of laboratory consolidation analysis along with recent developments. It is shown through experimental illustrations that Asaoka's method is simpler than the conventional and popular methods, and makes a satisfactory prediction of both the end of primary compression and the coefficient of consolidation from laboratory one-dimensional consolidation test data.
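
Since the abstract leans on Asaoka's construction, a compact sketch may help: sample the settlement at equal time steps, regress s_i on s_{i-1}, and read the ultimate settlement off the fixed point of the fitted line. The c_v back-calculation below uses the standard late-time Terzaghi relation; the drainage path H, time step and data are assumed, synthetic values:

```python
# Sketch of Asaoka's method on laboratory consolidation data: sample the
# settlement s at equal intervals dt, fit s_i = b0 + b1 * s_{i-1}, and read
# the ultimate settlement off the fixed point s_ult = b0 / (1 - b1).
# The c_v formula uses the late-time Terzaghi relation b1 = exp(-lam*dt)
# with lam = pi^2 c_v / (4 H^2), H = drainage path length (assumed).
import numpy as np

dt, H = 60.0, 0.01                      # interval [s], drainage path [m] (assumed)
cv_true, s_ult_true = 2e-7, 1.5e-3      # synthetic "truth" [m^2/s], [m]
lam = np.pi**2 * cv_true / (4 * H**2)
t = np.arange(1, 40) * dt
s = s_ult_true * (1 - (8 / np.pi**2) * np.exp(-lam * t))   # late-time Terzaghi

b1, b0 = np.polyfit(s[:-1], s[1:], 1)   # Asaoka plot: s_i versus s_{i-1}
s_ult = b0 / (1 - b1)                   # fixed point of the recursion
cv = -4 * H**2 * np.log(b1) / (np.pi**2 * dt)

print(f"s_ult = {s_ult:.4e} m (true {s_ult_true:.4e})")
print(f"c_v   = {cv:.3e} m^2/s (true {cv_true:.3e})")
```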

Relevance:

10.00%

Publisher:

Abstract:

We address the problem of two-dimensional (2-D) phase retrieval from the magnitude of the Fourier spectrum. We consider 2-D signals that are characterized by first-order difference equations, which have a parametric representation in the Fourier domain. We show that, under appropriate stability conditions, such signals can be reconstructed uniquely from the Fourier transform magnitude. We formulate the phase retrieval problem as one of computing the parameters that uniquely determine the signal. We show that the problem can be solved by employing the annihilating filter method, particularly for the case when the parameters are distinct. For the more general case of repeating parameters, the annihilating filter method is not applicable. We circumvent the problem by employing the algebraically coupled matrix pencil (ACMP) method. In the noiseless measurement setup, exact phase retrieval is possible. We also establish a link between the proposed analysis and the 2-D cepstrum. In the noisy case, we derive Cramér-Rao lower bounds (CRLBs) on the estimates of the parameters and present Monte Carlo performance analysis as a function of the noise level. Comparisons with state-of-the-art techniques in terms of signal reconstruction accuracy show that the proposed technique outperforms the Fienup and relaxed averaged alternating reflections (RAAR) algorithms in the presence of noise.
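
The annihilating-filter step the abstract invokes is easy to illustrate in one dimension: a sum of exponentials is annihilated by the filter whose roots are its modes, so the modes fall out of a null-space computation. The toy below shows only that core step, with assumed modes; the paper's 2-D, Fourier-magnitude setting is more involved:

```python
# 1-D illustration of the annihilating-filter idea: s[n] = sum_k c_k u_k^n
# (distinct u_k) is annihilated by the length-(K+1) filter whose roots are
# the u_k, so the u_k are recovered from a null-space vector.
import numpy as np

u_true = np.array([0.9 * np.exp(1j * 0.4), 0.7 * np.exp(-1j * 1.1)])  # assumed modes
c = np.array([1.0, 0.5])
N, K = 16, len(u_true)
n = np.arange(N)
s = (c[None, :] * u_true[None, :] ** n[:, None]).sum(axis=1)  # s[n] = sum c_k u_k^n

# Convolution system A h = 0 for the annihilating filter h of length K+1:
# row i encodes sum_j h[j] * s[i+K-j] = 0.
A = np.array([s[i:i + K + 1][::-1] for i in range(N - K)])
_, _, Vh = np.linalg.svd(A)
h = Vh[-1].conj()                       # null-space vector = filter coefficients
u_est = np.roots(h)                     # filter roots recover the modes u_k

print(np.sort_complex(u_est))
print(np.sort_complex(u_true))
```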

Relevance:

10.00%

Publisher:

Abstract:

Motivated by the discrepancies noted recently between the theoretical calculations of the electromagnetic omega pi form factor and certain experimental data, we investigate this form factor using analyticity and unitarity in a framework known as the method of unitarity bounds. We use a QCD correlator computed on the spacelike axis by operator product expansion and perturbative QCD as input, and exploit unitarity and the positivity of its spectral function, including the two-pion contribution that can be reliably calculated using high-precision data on the pion form factor. From this information, we derive upper and lower bounds on the modulus of the omega pi form factor in the elastic region. The results provide a significant check on those obtained with standard dispersion relations, confirming the existence of a disagreement with experimental data in the region around 0.6 GeV.
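
For readers unfamiliar with the method of unitarity bounds, its generic shape is an integral constraint on the form factor along the cut, derived from a QCD correlator; the schematic below is the standard setup of the method, not the paper's specific derivation:

```latex
% Schematic unitarity-bound setup: positivity of the spectral function of a
% QCD correlator, computed on the spacelike axis by OPE/perturbative QCD,
% yields an integral constraint of the form
\frac{1}{\pi} \int_{t_+}^{\infty} \rho(t)\,
  \left| F_{\omega\pi}(t) \right|^2 dt \;\le\; I(Q^2),
% where \rho(t) is a known positive weight and I(Q^2) comes from the
% spacelike calculation. Combined with reliably known contributions on parts
% of the cut (here, the two-pion piece from pion form factor data), this
% constraint translates into pointwise upper and lower bounds on
% |F_{\omega\pi}(t)| in the elastic region.
```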

Relevance:

10.00%

Publisher:

Abstract:

A method is presented for determining the ultimate bearing capacity of a circular footing reinforced with a horizontal circular sheet of reinforcement placed over granular and cohesive-frictional soils. It was assumed that the reinforcement sheet could bear axial tension but not bending moment. The analysis was performed based on the lower-bound theorem of limit analysis in combination with finite elements and linear optimization. The present research is an extension of recent work on strip foundations reinforced with different layers of reinforcement. To incorporate the effect of the reinforcement, the efficiency factors η_γ and η_c, which need to be multiplied by the bearing capacity factors N_γ and N_c, were established. Results were obtained for different values of the soil internal friction angle (φ). The optimal positions of the reinforcement, which would lead to a maximum improvement in the bearing capacity, were also determined. The variations of the axial tensile force in the reinforcement sheet at different radial distances from the center were also studied. The results of the analysis were compared with those available in the literature. (C) 2014 American Society of Civil Engineers.

Relevance:

10.00%

Publisher:

Abstract:

We update the constraints on two-Higgs-doublet models (2HDMs), focusing on the parameter space relevant for explaining the present muon g−2 anomaly, Δa_μ, in four different types of models: type I, type II, "lepton specific" (or X) and "flipped" (or Y). We show that the strong constraints placed by the electroweak precision data on the mass of the pseudoscalar Higgs, whose contribution may account for Δa_μ, are evaded in regions where the charged scalar is degenerate with the heavy neutral one and the mixing angles α and β satisfy the Standard Model limit β − α ≈ π/2. We combine theoretical constraints from vacuum stability and perturbativity with direct and indirect bounds arising from collider and B physics. Possible future constraints from the electron g−2 are also considered. If the 126 GeV resonance discovered at the LHC is interpreted as the light CP-even Higgs boson of the 2HDM, we find that only models of type X can satisfy all the considered theoretical and experimental constraints.

Relevance:

10.00%

Publisher:

Abstract:

The problem of delay-constrained, energy-efficient broadcast in cooperative wireless networks is NP-complete. While the centralised setting admits some heuristic solutions, designing heuristics for a distributed implementation poses significant challenges. This is all the more so in wireless sensor networks (WSNs), where nodes are deployed randomly and the topology changes dynamically due to node failures/joins and environmental conditions. This paper demonstrates that careful design of the network infrastructure can achieve guaranteed delay bounds and energy efficiency, and even meet quality-of-service requirements during broadcast. The paper makes three prime contributions. First, we present a lower bound on the energy consumption of broadcast that is tighter than previously proposed bounds. Next, iSteiner, a lightweight, distributed and deterministic algorithm for the creation of the network infrastructure, is discussed. iPercolate is the algorithm that exploits this structure to cooperatively broadcast information with guaranteed delivery and delay bounds, while allowing real-time traffic to pass undisturbed.

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we study codes with locality that can recover from two erasures via a sequence of two local parity-check computations. By a local parity-check computation, we mean recovery via a single parity-check equation of small Hamming weight. Earlier approaches considered recovery in parallel; the sequential approach allows us to potentially construct codes with improved minimum distance. These codes, which we refer to as locally 2-reconstructible codes, are a natural generalization, along one direction, of codes with all-symbol locality introduced by Gopalan et al., in which recovery from a single erasure is considered. By studying the generalized Hamming weights of the dual code, we derive upper bounds on the minimum distance of locally 2-reconstructible codes and provide constructions, based on Turán graphs, for a family of codes that are optimal with respect to this bound. The minimum distance bound derived here is universal in the sense that no code which permits all-symbol local recovery from 2 erasures can have a larger minimum distance, regardless of the approach adopted. Our approach also leads to a new bound on the minimum distance of codes with all-symbol locality for the single-erasure case.
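
A toy example makes the sequential-recovery idea concrete. The two weight-3 checks below are invented for illustration (they are not a construction from the paper): with two symbols erased, one check touches a single erasure and restores it, after which the second check becomes usable:

```python
# Toy sequential two-erasure recovery with local parity checks over GF(2).
# Checks: h1: c1+c2+c5 = 0 and h2: c2+c3+c6 = 0 (1-indexed). With c2 and c3
# erased, h2 is useless in parallel (it touches two erasures), but
# sequentially h1 first restores c2, after which h2 restores c3.
import numpy as np

H_local = np.array([[1, 1, 0, 0, 1, 0],    # h1: c1 + c2 + c5
                    [0, 1, 1, 0, 0, 1]])   # h2: c2 + c3 + c6

def sequential_recover(word, erased, H):
    """Repeatedly find a check touching exactly one erased symbol, solve it."""
    erased = set(erased)
    while erased:
        for h in H:
            touched = [i for i in np.flatnonzero(h) if i in erased]
            if len(touched) == 1:          # locally solvable check
                i = touched[0]
                known = [j for j in np.flatnonzero(h) if j != i]
                word[i] = int(sum(word[j] for j in known) % 2)
                erased.remove(i)
                break
        else:
            raise ValueError("stuck: no check touches exactly one erasure")
    return word

codeword = np.array([1, 0, 1, 1, 1, 1])    # satisfies h1 and h2 (mod 2)
assert not (H_local @ codeword % 2).any()
corrupted = codeword.copy()
corrupted[[1, 2]] = -1                     # erase c2, c3 (0-indexed 1, 2)
print(sequential_recover(corrupted, [1, 2], H_local), codeword)
```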

Relevance:

10.00%

Publisher:

Abstract:

The effects of combined additions of Ca and Sb on the microstructure and tensile properties of squeeze-cast AZ91D alloy have been investigated. For comparison, the same has also been studied with and without individual additions of Ca and Sb. The results indicate that both individual and combined additions refine the grain size and the beta-Mg17Al12 phase, the effect being more pronounced with combined additions. Besides the alpha-Mg and beta-Mg17Al12 phases, new reticular Al2Ca and rod-shaped Mg3Sb2 phases are formed following individual additions of Ca and Sb to the AZ91D alloy. With combined additions, an additional Ca2Sb phase is formed, suppressing the Mg3Sb2 phase. Additions of both Ca and Sb increase the yield strength (YS) at both ambient and elevated temperatures up to 200 degrees C. However, both ductility and ultimate tensile strength (UTS) first decrease up to 150 degrees C and then increase at 200 degrees C. The increase in YS is attributed to the refinement of the grain size, whereas ductility and UTS are degraded by the presence of the brittle Al2Ca, Mg3Sb2 and Ca2Sb phases. The best tensile properties are obtained in the AZXY9110 alloy, owing to the presence of a smaller amount of brittle Al2Ca and Ca2Sb phases resulting from the optimum content of 1.0Ca and 0.3Sb (wt%). The fracture surface of the tensile specimen tested at ambient temperature reveals cleavage failure, which changes to quasi-cleavage at 200 degrees C. The squeeze-cast alloys exhibited better tensile properties than the gravity-cast alloys, nullifying the detrimental effects of Ca and/or Sb additions. (C) 2014 Elsevier B.V. All rights reserved.

Relevance:

10.00%

Publisher:

Abstract:

India needs to significantly increase its electricity consumption levels, in a sustainable manner, if it is to ensure rapid economic development, a goal that remains the most potent tool for delivering adaptive capacity to its poor, who will suffer the worst consequences of climate change. Resource and supply constraints on conventional energy sources, techno-economic constraints on renewable energy sources, and the bounds imposed by climate change on fossil-fuel use are likely to undermine India's quest for a robust electricity system that can effectively contribute to achieving accelerated, sustainable and inclusive economic growth. One possible way out could be a transition to a sustainable electricity system, a trade-off solution that takes into account economic, social and environmental concerns. As a first step toward understanding this transition, we contribute an indicator-based hierarchical multidimensional framework as an analytical tool for the sustainability assessment of electricity systems, and validate it on India's national electricity system. We evaluate the Indian electricity system with this framework by comparing it against a hypothetical benchmark sustainable electricity system, created using the best indicator values realized across national electricity systems in the world. This framework, we believe, can be used to examine the social, economic and environmental implications of the current Indian electricity system as well as to set targets for future development. The analysis with the indicator framework provides a deeper understanding of the system, identifies and quantifies the prevailing sustainability gaps and generates specific targets for interventions. We use this framework to compute a national electricity system sustainability index (NESSI) for India. (C) 2014 Elsevier Ltd. All rights reserved.
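
Purely to illustrate how an indicator-based composite index of this kind can be aggregated, here is a minimal sketch; the indicators, their values, the benchmark figures and the equal-weight hierarchical averaging are all invented for illustration and are not the paper's framework or data:

```python
# Illustrative-only aggregation of a hierarchical sustainability index:
# normalize each indicator against a benchmark (best value observed across
# national systems), average within each dimension, then average dimensions.
dimensions = {
    "economic":      {"cost_per_kwh": (5.0, 3.0, False),      # (value, benchmark,
                      "t_and_d_loss_pct": (20.0, 5.0, False)},  # higher_is_better)
    "social":        {"electrification_pct": (80.0, 100.0, True)},
    "environmental": {"co2_g_per_kwh": (800.0, 50.0, False)},
}

def normalize(value, benchmark, higher_is_better):
    """Score in (0, 1]: 1 means the benchmark is matched or beaten."""
    ratio = value / benchmark if higher_is_better else benchmark / value
    return min(ratio, 1.0)

dim_scores = {
    dim: sum(normalize(*v) for v in inds.values()) / len(inds)
    for dim, inds in dimensions.items()
}
index = sum(dim_scores.values()) / len(dim_scores)   # equal-weight aggregation
print(dim_scores, f"composite index = {index:.3f}")
```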

Relevance:

10.00%

Publisher:

Abstract:

A rainbow matching of an edge-colored graph G is a matching in which no two edges have the same color. There have been several studies regarding the maximum size of a rainbow matching in a properly edge-colored graph G in terms of its minimum degree δ(G). Wang (2011) asked whether there exists a function f such that a properly edge-colored graph G with at least f(δ(G)) vertices is guaranteed to contain a rainbow matching of size δ(G). This was answered in the affirmative later; the best currently known function, due to Lo and Tan (2014), is f(k) = 4k − 4 for k ≥ 4, and f(k) = 4k − 3 for k ≤ 3. Afterwards, the research focused on finding lower bounds for the size of maximum rainbow matchings in properly edge-colored graphs with fewer than 4δ(G) − 4 vertices. A strong edge-coloring of a graph G is a restriction of proper edge-coloring in which every color class is required to be an induced matching, instead of just a matching. In this paper, we give lower bounds for the size of a maximum rainbow matching in a strongly edge-colored graph G in terms of δ(G). We show that for a strongly edge-colored graph G, if |V(G)| ≥ 2⌊3δ(G)/4⌋, then G has a rainbow matching of size ⌊3δ(G)/4⌋, and if |V(G)| < 2⌊3δ(G)/4⌋, then G has a rainbow matching of size ⌊|V(G)|/2⌋. In addition, we prove that if G is a strongly edge-colored graph that is triangle-free, then it contains a rainbow matching of size at least δ(G). (C) 2015 Elsevier B.V. All rights reserved.
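
The bounds concern maximum rainbow matchings; for tiny instances one can find them exactly by brute force, as in the sketch below (the example graph and its strong edge-coloring are made up):

```python
# Brute-force maximum rainbow matching in a small edge-colored graph:
# enumerate edge subsets from largest to smallest and return the first
# subset that is a matching with pairwise distinct colors. Exponential,
# so suitable only for toy instances.
from itertools import combinations

def max_rainbow_matching(edges):
    """edges: list of ((u, v), color). Returns a largest rainbow matching."""
    for k in range(len(edges), 0, -1):
        for subset in combinations(edges, k):
            verts = [x for (u, v), _ in subset for x in (u, v)]
            colors = [c for _, c in subset]
            if len(set(verts)) == 2 * k and len(set(colors)) == k:
                return list(subset)       # first hit at size k is maximum
    return []

# C6 with a strong edge-coloring (each color class is an induced matching).
edges = [((0, 1), "a"), ((1, 2), "b"), ((2, 3), "c"),
         ((3, 4), "a"), ((4, 5), "b"), ((5, 0), "c")]
print(max_rainbow_matching(edges))
```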