978 results for Zero-lower bound


Relevance:

80.00%

Publisher:

Abstract:

The effects of crack depth (a/W) and specimen width (W) on the fracture toughness and ductile-brittle transition have been investigated using three-point bend specimens. Finite element analysis is employed to obtain the stress-strain fields ahead of the crack tip. The results show that both the normalized crack depth (a/W) and the specimen width (W) affect the fracture toughness and the ductile-brittle fracture transition. The measured crack tip opening displacement decreases and the ductile-brittle transition occurs as the crack depth (a/W) increases from 0.1 to 0.2 and 0.3. At a fixed a/W (0.2 or 0.3), all specimens fail by cleavage prior to ductile tearing when the specimen width W increases from 25 to 40 and 50 mm. The lower bound fracture toughness is not sensitive to crack depth or specimen width. Finite element analysis shows that the opening stress in the remaining ligament is elevated with increasing crack depth or specimen width due to the increase of in-plane constraint. The average local cleavage stress depends on both crack depth and specimen width, but its lower bound value is not sensitive to constraint level. No fixed distance can be found from the cleavage initiation site to the crack tip, and this distance increases gradually with decreasing in-plane constraint.

Relevance:

80.00%

Publisher:

Abstract:

In Crypto '95, Micali and Sidney proposed a method for shared generation of a pseudo-random function f(·) among n players in such a way that for all inputs x, any u players can compute f(x) while t or fewer players fail to do so, where 0 ≤ t < u ≤ n. The idea behind the Micali-Sidney scheme is to generate and distribute secret seeds S = {s_1, ..., s_d} of a poly-random collection of functions among the n players, each player receiving a subset of S, in such a way that any u players together hold all the secret seeds in S while any t or fewer players lack at least one element of S. The pseudo-random function is then computed as f(·) = f_{s_1}(·) ⊕ ... ⊕ f_{s_d}(·), where the f_{s_i}(·) are poly-random functions. One question raised by Micali and Sidney is how to distribute the secret seeds satisfying the above condition such that the number of seeds, d, is as small as possible. In this paper, we continue the work of Micali and Sidney. We first provide a general framework for shared generation of pseudo-random functions using cumulative maps, and demonstrate that the Micali-Sidney scheme is a special case of this general construction. We then derive an upper and a lower bound for d. Finally, we give a simple yet efficient greedy approximation algorithm for generating the secret seeds S in which d is within a factor of at most u ln 2 of the optimum.
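
To make the combinatorics concrete, here is a minimal Python sketch of the seed-distribution idea under the natural construction in which each t-subset of players is missing exactly one dedicated seed. The use of HMAC-SHA256 as a stand-in for the poly-random functions f_{s_i} is our illustrative assumption, not the paper's construction.

    import hmac, hashlib, os
    from itertools import combinations

    def distribute_seeds(n, t):
        """One seed per t-subset of players, held by that subset's complement.
        Any u > t players jointly hold all seeds; any t players miss at least
        one (the seed dedicated to the t-subset they form)."""
        players = set(range(n))
        shares = {p: [] for p in players}
        seeds = []
        for subset in combinations(players, t):
            s = os.urandom(32)
            seeds.append(s)
            for p in players - set(subset):
                shares[p].append(s)
        return seeds, shares

    def shared_prf(seeds, x):
        # f(x) = f_{s_1}(x) XOR ... XOR f_{s_d}(x)
        out = bytes(32)
        for s in seeds:
            y = hmac.new(s, x, hashlib.sha256).digest()
            out = bytes(a ^ b for a, b in zip(out, y))
        return out

    seeds, shares = distribute_seeds(n=5, t=2)
    print(len(seeds), shared_prf(seeds, b"input x").hex()[:16])

This naive distribution needs d = C(n, t) seeds; the paper's bounds and greedy algorithm concern how much smaller d can be made.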

Relevance:

80.00%

Publisher:

Abstract:

Cold-formed high strength steel members are increasingly used as primary load bearing components in low-rise buildings. The lipped channel beam (LCB) is one of the most commonly used flexural members in these applications. In this research, an experimental study was undertaken to investigate the shear behaviour and strength of LCB sections. Simply supported test specimens of back-to-back LCBs with aspect ratios of 1.0 and 1.5 were loaded at mid-span until failure. Test specimens were chosen such that all three types of shear failure (shear yielding, inelastic shear buckling and elastic shear buckling) occurred in the tests. The ultimate shear capacity results obtained from the tests were compared with the predictions of the current design rules in the Australian/New Zealand and American cold-formed steel design standards. This comparison showed that these shear design rules are very conservative, as they include neither the post-buckling strength observed in the shear tests nor the higher shear buckling coefficient arising from the additional fixity along the web-flange juncture. Improved shear design equations are proposed in this paper by including the above beneficial effects. Suitable lower bound design rules were also developed in the direct strength method format. This paper presents the details of this experimental study and the results, including the improved design rules for the shear capacity of LCBs. It also includes the details of tests of LCBs subject to combined shear and flange distortion, and combined bending and shear actions, and proposes suitable design rules to predict the capacities in these cases.
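
For context, the sketch below paraphrases the three-regime shear capacity classification used by the AISI/AS-NZS-style rules the paper evaluates (without post-buckling strength, and with the conventional k_v = 5.34 for an unstiffened web). The section dimensions are illustrative, not the test values.

    import math

    def nominal_shear_capacity(h, t, fy, E=200e3, kv=5.34):
        """h: flat web depth (mm), t: web thickness (mm), fy: yield stress (MPa).
        Returns V_n in N, classified as yielding / inelastic / elastic buckling."""
        slenderness = h / t
        limit1 = math.sqrt(E * kv / fy)          # end of shear yielding range
        limit2 = 1.51 * limit1                   # end of inelastic buckling range
        if slenderness <= limit1:                # shear yielding
            return 0.60 * fy * h * t
        elif slenderness <= limit2:              # inelastic shear buckling
            return 0.60 * t**2 * math.sqrt(kv * fy * E)
        else:                                    # elastic shear buckling
            return math.pi**2 * E * kv * t**3 / (12 * (1 - 0.3**2) * h)

    print(nominal_shear_capacity(h=150.0, t=1.9, fy=450.0))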

Relevance:

80.00%

Publisher:

Abstract:

PURPOSE: The prevalence of anaplastic lymphoma kinase (ALK) gene fusion (ALK positivity) in early-stage non-small-cell lung cancer (NSCLC) varies by the population examined and the detection method used. The Lungscape ALK project was designed to address the prevalence and prognostic impact of ALK positivity in resected lung adenocarcinoma in a primarily European population. METHODS: Analysis of ALK status was performed by immunohistochemistry (IHC) and fluorescence in situ hybridization (FISH) in tissue sections of 1,281 patients with adenocarcinoma in the European Thoracic Oncology Platform Lungscape iBiobank. Positive patients were matched with negative patients in a 1:2 ratio, both for IHC and for FISH testing. Testing was performed in 16 participating centers, using the same protocol after passing external quality assessment. RESULTS: Positive ALK IHC staining was present in 80 patients (prevalence of 6.2%; 95% CI, 4.9% to 7.6%). Of these, 28 patients were ALK FISH positive, corresponding to a lower bound of 2.2% for the prevalence of FISH positivity. FISH specificity was 100%, and FISH sensitivity was 35.0% (95% CI, 24.7% to 46.5%), with a sensitivity of 81.3% (95% CI, 63.6% to 92.8%) for IHC 2+/3+ patients. The hazard of death for FISH-positive patients was lower than for IHC-negative patients (P = .022). Multivariable models, adjusted for patient, tumor, and treatment characteristics, and a matched cohort analysis confirmed that ALK FISH positivity is a predictor of better overall survival (OS). CONCLUSION: In this large cohort of surgically resected lung adenocarcinomas, the prevalence of ALK positivity was 6.2% using IHC and at least 2.2% using FISH. A screening strategy based on IHC or H-score could be envisaged. ALK positivity (by either IHC or FISH) was related to better OS.
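
The headline proportions can be checked directly from the counts given (80 IHC-positive of 1,281; 28 of those FISH-positive). The small sketch below does so, using a Wilson interval as an illustrative choice that need not match the paper's exact CI method.

    import math

    def wilson_ci(k, n, z=1.96):
        # Wilson score interval for a binomial proportion k/n
        p = k / n
        centre = (p + z * z / (2 * n)) / (1 + z * z / n)
        half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / (1 + z * z / n)
        return centre - half, centre + half

    print(80 / 1281)            # IHC prevalence ~0.062 (6.2%)
    print(28 / 1281)            # lower bound on FISH prevalence ~0.022 (2.2%)
    print(28 / 80)              # FISH sensitivity vs IHC ~0.35
    print(wilson_ci(28, 80))    # roughly (0.25, 0.46), cf. the reported 24.7%-46.5%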

Relevance:

80.00%

Publisher:

Abstract:

In this paper we present concrete collision and preimage attacks on a large class of compression function constructions making two calls to the underlying ideal primitives. The complexity of the collision attack is above the theoretical lower bound for constructions of this type, but below the birthday complexity; the complexity of the preimage attack, however, meets the theoretical lower bound. We also present undesirable properties of some of Stam's compression functions proposed at CRYPTO '08. We show that when one of the n-bit to n-bit components of the proposed 2n-bit to n-bit compression function is replaced by a fixed-key cipher in the Davies-Meyer mode, the complexity of finding a preimage is 2^(n/3). We also show that the complexity of finding a collision in a variant of the 3n-bit to 2n-bit scheme with its output truncated to 3n/2 bits is 2^(n/2). The complexity of our preimage attack on this hash function is about 2^n. Finally, we present a collision attack on a variant of the proposed (m + s)-bit to s-bit scheme, truncated to s − 1 bits, with a complexity of O(1). However, none of our results compromise Stam's security claims.
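
As background for the Davies-Meyer variant mentioned above, here is a minimal sketch of the mode itself, h' = E_m(h) XOR h, built over a deliberately toy (non-cryptographic) 32-bit keyed permutation of our own; it illustrates the feed-forward structure only.

    MASK = 0xFFFFFFFF

    def toy_cipher(key, block):
        # NOT cryptographic: a few rounds of keyed add-xor-rotate mixing,
        # standing in for an ideal block cipher E_key(block).
        x = block
        for r in range(8):
            x = (x + (key ^ r)) & MASK
            x ^= ((x << 7) | (x >> 25)) & MASK   # rotate-left by 7 on 32 bits
        return x

    def davies_meyer(h, m):
        # The message block keys the cipher; the chaining value is fed forward.
        return toy_cipher(m, h) ^ h

    h = 0x01234567
    for m in [0xDEADBEEF, 0xCAFEBABE]:
        h = davies_meyer(h, m)
    print(hex(h))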

Relevance:

80.00%

Publisher:

Abstract:

We study the rates of growth of the regret in online convex optimization. First, we show that a simple extension of the algorithm of Hazan et al. eliminates the need for a priori knowledge of the lower bound on the second derivatives of the observed functions. We then provide an algorithm, Adaptive Online Gradient Descent, which interpolates between the results of Zinkevich for linear functions and of Hazan et al. for strongly convex functions, achieving intermediate rates between √T and log T. Furthermore, we show strong optimality of the algorithm. Finally, we provide an extension of our results to general norms.
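
A simplified one-dimensional sketch of the adaptive idea (not the paper's exact algorithm): accumulate the observed strong convexity and use step sizes inversely proportional to it, falling back to the O(1/√t) schedule of the linear case when no curvature has been observed.

    import math

    def adaptive_ogd(grad_and_curvature, x0, T, G=1.0, D=1.0):
        """grad_and_curvature(x, t) -> (gradient, strong-convexity estimate).
        G bounds the gradients, [-D, D] is the feasible interval."""
        x, H = x0, 0.0
        for t in range(1, T + 1):
            g, h_t = grad_and_curvature(x, t)
            H += h_t                              # accumulated curvature
            eta = 1.0 / H if H > 0 else D / (G * math.sqrt(t))
            x = max(-D, min(D, x - eta * g))      # project back onto [-D, D]
        return x

    # Example: f_t(x) = (x - 1)^2 is 2-strongly convex, so eta_t ~ 1/(2t).
    print(adaptive_ogd(lambda x, t: (2 * (x - 1), 2.0), 0.0, 1000))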

Relevance:

80.00%

Publisher:

Abstract:

This paper presents a novel three-dimensional hybrid smoothed finite element method (H-SFEM) for solid mechanics problems. In 3D H-SFEM, the strain field is assumed to be the weighted average of the compatible strains from the finite element method (FEM) and the smoothed strains from the node-based smoothed FEM, combined via a parameter α. By adjusting α, the upper and lower bound solutions in the strain energy norm and eigenfrequencies can always be obtained. The optimized α value in 3D H-SFEM using a tetrahedron mesh possesses a close-to-exact stiffness of the continuous system, and produces ultra-accurate solutions in terms of displacement, strain energy and eigenfrequencies in linear and nonlinear problems. A novel domain-based selective scheme is also proposed, leading to a combined selective H-SFEM model that is immune from volumetric locking and hence works well for nearly incompressible materials. The proposed 3D H-SFEM has great potential for application to solid mechanics problems.
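
The blending at the heart of the method reduces to one line; the sketch below shows it, under our assumption that α weights the FEM strain (so α = 1 recovers the stiff FEM limit and α = 0 the soft NS-FEM limit), with toy strain values.

    import numpy as np

    def hybrid_strain(strain_fem, strain_ns, alpha):
        """Weighted average of the compatible FEM strain and the node-based
        smoothed strain; tuning alpha moves the model between the overly
        stiff FEM limit and the overly soft NS-FEM limit."""
        return alpha * strain_fem + (1.0 - alpha) * strain_ns

    eps_fem = np.array([1.0e-3, -0.3e-3, 0.2e-3])   # compatible strain (toy)
    eps_ns  = np.array([1.2e-3, -0.4e-3, 0.1e-3])   # smoothed strain (toy)
    print(hybrid_strain(eps_fem, eps_ns, alpha=0.6))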

Relevance:

80.00%

Publisher:

Abstract:

Let G = (V, E) be a finite, simple and undirected graph. For S ⊆ V, let δ(S, G) = {(u, v) ∈ E : u ∈ S and v ∈ V − S} be the edge boundary of S. Given an integer i, 1 ≤ i ≤ |V|, let the edge isoperimetric value of G at i be defined as b_e(i, G) = min_{S ⊆ V, |S| = i} |δ(S, G)|. The edge isoperimetric peak of G is defined as b_e(G) = max_{1 ≤ j ≤ |V|} b_e(j, G). Let b_v(G) denote the vertex isoperimetric peak defined in a corresponding way. The problem of determining a lower bound for the vertex isoperimetric peak in complete t-ary trees was recently considered in [Y. Otachi, K. Yamazaki, A lower bound for the vertex boundary-width of complete k-ary trees, Discrete Mathematics, in press (doi: 10.1016/j.disc.2007.05.014)]. In this paper we provide bounds which improve those in the above cited paper. Our results can be generalized to arbitrary (rooted) trees. The depth d of a tree is the number of nodes on the longest path starting from the root and ending at a leaf. In this paper we show that for a complete binary tree of depth d (denoted T_d^2), c_1 d ≤ b_e(T_d^2) ≤ d and c_2 d ≤ b_v(T_d^2) ≤ d, where c_1, c_2 are constants. For a complete t-ary tree of depth d (denoted T_d^t) with d ≥ c log t, where c is a constant, we show that c_1 √t d ≤ b_e(T_d^t) ≤ td and c_2 d/√t ≤ b_v(T_d^t) ≤ d, where c_1, c_2 are constants. At the heart of our proof is the following theorem, which works for an arbitrary rooted tree and not just for a complete t-ary tree. Let T = (V, E, r) be a finite, connected and rooted tree, the root being the vertex r. Define a weight function w : V → N, where the weight w(u) of a vertex u is the number of its successors (including itself), and let the weight index η(T) be defined as the number of distinct weights in the tree, i.e. η(T) = |{w(u) : u ∈ V}|. For a positive integer k, let ℓ(k) = |{i ∈ N : 1 ≤ i ≤ |V|, b_e(i, T) ≤ k}|. We show that ℓ(k) ≤ 2·C(2η + k, k), where C(·, ·) denotes the binomial coefficient.
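
The weight function and weight index are easy to compute; the sketch below does so for a small complete binary tree (our own encoding of the tree as child lists), illustrating that for a complete t-ary tree of depth d the weight index equals d, which is what makes the ℓ(k) bound above effective.

    def subtree_weights(children, root=0):
        """w(u) = number of successors of u, including u itself."""
        w = {}
        def visit(u):
            w[u] = 1 + sum(visit(c) for c in children.get(u, []))
            return w[u]
        visit(root)
        return w

    # Complete binary tree of depth 3 (7 nodes): weights {7, 3, 1}, so eta = 3 = d.
    children = {0: [1, 2], 1: [3, 4], 2: [5, 6]}
    w = subtree_weights(children)
    print(w, len(set(w.values())))   # weight map, weight index eta(T)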

Relevance:

80.00%

Publisher:

Abstract:

To detect errors in decision tables one needs to decide whether a given set of constraints is feasible. This paper describes an algorithm to do so when the constraints are linear in variables that take only integer values. Decision tables with such constraints occur frequently in business data processing and in nonnumeric applications. The aim of the algorithm is to exploit the abundance of very simple constraints that occur in typical decision table contexts. Essentially, the algorithm is a backtrack procedure where the solution space is pruned using the set of simple constraints. After some simplifications, the simple constraints are captured in an acyclic directed graph with weighted edges. Further, only those partial vectors are considered for extension which can be extended to assignments that at least satisfy the simple constraints; this is how pruning of the solution space is achieved. For every partial assignment considered, the graph representation of the simple constraints provides a lower bound for each variable which is not yet assigned a value. These lower bounds play a vital role in the algorithm, and they are obtained efficiently by updating older lower bounds. The algorithm also incorporates a check of whether or not an (m − 2)-ary vector can be extended to a solution vector of m components, whereby backtracking is reduced by one component.
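
A hedged sketch of the backtracking pattern described (the constraint encoding and the toy example are ours): partial assignments are extended only while an optimistic completion of the constraints remains satisfiable, and per-variable lower bounds derived from the simple constraints prune the search.

    def feasible(n_vars, domain, simple_lb, check_partial):
        """simple_lb(assignment) -> dict of lower bounds for unassigned vars,
        or None if the simple constraints are already violated.
        check_partial(assignment) -> False prunes the whole subtree."""
        def extend(assignment):
            if len(assignment) == n_vars:
                return assignment
            bounds = simple_lb(assignment)
            if bounds is None or not check_partial(assignment):
                return None
            i = len(assignment)
            lo = max(domain[0], bounds.get(i, domain[0]))
            for v in range(lo, domain[1] + 1):
                result = extend(assignment + [v])
                if result is not None:
                    return result
            return None
        return extend([])

    # Toy system: x0 < x1 < x2 in {0..3} with x0 + x1 + x2 >= 5.
    print(feasible(
        3, (0, 3),
        lambda a: {len(a): a[-1] + 1} if a else {0: 0},   # x_i > x_{i-1}
        lambda a: sum(a) + 3 * (3 - len(a)) >= 5,         # optimistic completion
    ))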

Relevance:

80.00%

Publisher:

Abstract:

We study the secondary structure of RNA determined by Watson-Crick pairing without pseudo-knots using Milnor invariants of links. We focus on the first non-trivial invariant, which we call the Heisenberg invariant. The Heisenberg invariant, which is an integer, can be interpreted in terms of the Heisenberg group as well as in terms of lattice paths. We show that the Heisenberg invariant gives a lower bound on the number of unpaired bases in an RNA secondary structure. We also show that the Heisenberg invariant can predict allosteric structures for RNA: namely, if the Heisenberg invariant is large, then there are widely separated local maxima (i.e., allosteric structures) for the number of Watson-Crick pairs found.

Relevance:

80.00%

Publisher:

Abstract:

The bearing capacity factor N_c for axially loaded piles in clays whose cohesion increases linearly with depth has been estimated numerically under the undrained (φ = 0) condition. The study follows the lower bound limit analysis in conjunction with finite elements and linear programming. A new formulation is proposed for solving an axisymmetric geotechnical stability problem. The variation of N_c with embedment ratio is obtained for several rates of increase of soil cohesion with depth; a special case is also examined in which the pile base is placed on a stiff clay stratum overlain by a soft clay layer. It was noticed that the magnitude of N_c reaches an almost constant value for embedment ratios greater than unity. The roughness of the pile base and shaft affects the magnitude of N_c only marginally. The results obtained from the present study compare quite well with the different numerical solutions reported in the literature.

Relevance:

80.00%

Publisher:

Abstract:

This paper investigates the problem of designing reverse channel training sequences for a TDD-MIMO spatial-multiplexing system. Assuming perfect channel state information at the receiver and spatial multiplexing at the transmitter with equal power allocation to the m dominant modes of the estimated channel, the pilot is designed to ensure an estimate of the channel which improves the forward link capacity. Using perturbation techniques, a lower bound on the forward link capacity is derived, with respect to which the training sequence is optimized. Thus, the reverse channel training sequence makes use of the channel knowledge at the receiver. The performance of an orthogonal training sequence with MMSE estimation at the transmitter is compared with that of the proposed training sequence. Simulation results show a significant improvement in performance.
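
A hedged sketch of the receiver-side ingredient implied above: take the m dominant right singular vectors of the current channel estimate and use them, with equal power, to steer the reverse training. The dimensions and random channel are illustrative; this is not the paper's optimized sequence.

    import math
    import numpy as np

    def dominant_mode_training(H, m, energy=1.0):
        """Pilot directions from the m dominant input modes of the channel
        estimate H, with equal power split across the modes."""
        _, _, Vh = np.linalg.svd(H)
        V_m = Vh.conj().T[:, :m]                 # m dominant right singular vectors
        return math.sqrt(energy / m) * V_m

    rng = np.random.default_rng(0)
    H_est = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    pilots = dominant_mode_training(H_est, m=2)
    print(np.round(pilots.conj().T @ pilots, 6))  # ~ (energy/m) * identity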

Relevance:

80.00%

Publisher:

Abstract:

Scan circuits generally cause excessive switching activity compared to normal circuit operation. The higher switching activity in turn causes higher peak power supply current, which results in supply voltage droop and eventually yield loss. This paper proposes an efficient methodology for test vector re-ordering to achieve the minimum peak power supported by the given test vector set. The proposed methodology also minimizes average power under the minimum peak power constraint. A methodology to further reduce the peak power below the minimum supported peak power, by including a minimum number of additional vectors, is also discussed. The paper defines the lower bound on peak power for a given test set. Results on several benchmarks show that the method can reduce peak power by up to 27%.
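
The reordering idea lends itself to a short sketch: switching activity between consecutive scan vectors scales with their Hamming distance, so peak power is governed by the largest adjacent distance in the ordering. The greedy nearest-neighbour heuristic below is our illustration, not the paper's exact method.

    def hamming(a, b):
        # number of bit positions in which two vectors differ
        return sum(x != y for x, y in zip(a, b))

    def greedy_reorder(vectors):
        # repeatedly append the remaining vector closest to the last one
        remaining = list(vectors)
        order = [remaining.pop(0)]
        while remaining:
            nxt = min(remaining, key=lambda v: hamming(order[-1], v))
            remaining.remove(nxt)
            order.append(nxt)
        return order

    tests = ["0000", "1111", "0011", "1100", "0101"]
    order = greedy_reorder(tests)
    peak = max(hamming(a, b) for a, b in zip(order, order[1:]))
    print(order, "peak adjacent toggles:", peak)   # peak 2 vs 4 in the given order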

Relevance:

80.00%

Publisher:

Abstract:

In this paper, we present numerical evidence that supports the notion of minimization in the sequence space of proteins for a target conformation. We use the conformations of real proteins in the Protein Data Bank (PDB) and present computationally efficient methods to identify the sequences with minimum energy. We use an edge-weighted connectivity graph for ranking the residue sites with a reduced amino acid alphabet, and then use continuous optimization to obtain the energy-minimizing sequences. Our methods enable the computation of a lower bound as well as a tight upper bound for the energy of a given conformation. We validate our results by using three different inter-residue energy matrices for five proteins from the PDB, and by comparing our energy-minimizing sequences with 80 million diverse sequences that are generated based on different considerations in each case. When we submitted some of our chosen energy-minimizing sequences to the Basic Local Alignment Search Tool (BLAST), we obtained some sequences from the non-redundant protein sequence database that are similar to ours, with an E-value of the order of 10^(-7). In summary, we conclude that proteins show a trend towards minimizing energy in the sequence space but do not seem to adopt the global energy-minimizing sequence. The reason could be either that the existing energy matrices are unable to accurately represent the inter-residue interactions in the context of the protein environment, or that Nature does not push the optimization in the sequence space once it is able to perform the function.
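
The bounding step has a simple structure worth making explicit: with a pairwise contact energy E(seq) = Σ over contact edges of e(a_i, a_j), minimizing every edge independently yields a lower bound, and any concrete sequence yields an upper bound. The sketch below uses a toy two-letter alphabet and contact map of our own; the paper's energy matrices and connectivity graphs are far richer.

    from itertools import product

    # Toy H/P-style pairwise energies and a toy conformation's contact map.
    e = {("H", "H"): -1.0, ("H", "P"): 0.0, ("P", "H"): 0.0, ("P", "P"): 0.3}
    alphabet = ["H", "P"]
    contacts = [(0, 2), (1, 3), (0, 3)]

    # Lower bound: each contact edge minimized independently.
    lower = sum(min(e[p] for p in product(alphabet, repeat=2)) for _ in contacts)

    def energy(seq):
        return sum(e[(seq[i], seq[j])] for i, j in contacts)

    # Exhaustive search is exact at this toy size; any sequence upper-bounds.
    best = min(product(alphabet, repeat=4), key=energy)
    print(lower, energy(best), "".join(best))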