Abstract:
Cold-formed high strength steel members are increasingly used as primary load-bearing components in low-rise buildings. The lipped channel beam (LCB) is one of the most commonly used flexural members in these applications. In this research, an experimental study was undertaken to investigate the shear behaviour and strength of LCB sections. Simply supported test specimens of back-to-back LCBs with aspect ratios of 1.0 and 1.5 were loaded at mid-span until failure. Test specimens were chosen such that all three types of shear failure (shear yielding, inelastic and elastic shear buckling) occurred in the tests. The ultimate shear capacity results obtained from the tests were compared with the predictions of the current design rules in the Australian/New Zealand and American cold-formed steel design standards. This comparison showed that these shear design rules are very conservative, as they include neither the post-buckling strength observed in the shear tests nor the higher shear buckling coefficient due to the additional fixity along the web-flange juncture. Improved shear design equations are proposed in this paper by including the above beneficial effects. Suitable lower bound design rules were also developed in the direct strength method format. This paper presents the details of this experimental study and the results, including the improved design rules for the shear capacity of LCBs. It also includes the details of tests of LCBs subject to combined shear and flange distortion, and combined bending and shear actions, and proposes suitable design rules to predict the capacities in these cases.
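For context, the direct strength method expresses the nominal shear capacity as a function of the section's shear yield and elastic shear buckling capacities. A minimal sketch of the standard DSM shear curve without tension field action (the curve that rules of this kind build on; the 0.815 and 1.227 slenderness limits come from the standard DSM format, not from this paper's proposed equations):

```python
import math

def dsm_shear_capacity(V_y: float, V_cr: float) -> float:
    """Nominal shear capacity V_n from the standard DSM shear curve
    (no tension field action), shown only to illustrate the format;
    the paper's improved rules modify this curve and the buckling load V_cr."""
    lam = math.sqrt(V_y / V_cr)          # shear slenderness
    if lam <= 0.815:                     # shear yielding
        return V_y
    elif lam <= 1.227:                   # inelastic shear buckling
        return 0.815 * math.sqrt(V_cr * V_y)
    else:                                # elastic shear buckling
        return V_cr

# Example (illustrative values in kN): falls in the inelastic range.
print(dsm_shear_capacity(V_y=50.0, V_cr=40.0))   # ~36.4
```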
Abstract:
PURPOSE: The prevalence of anaplastic lymphoma kinase (ALK) gene fusion (ALK positivity) in early-stage non-small-cell lung cancer (NSCLC) varies by population examined and detection method used. The Lungscape ALK project was designed to address the prevalence and prognostic impact of ALK positivity in resected lung adenocarcinoma in a primarily European population. METHODS: Analysis of ALK status was performed by immunohistochemistry (IHC) and fluorescence in situ hybridization (FISH) in tissue sections of 1,281 patients with adenocarcinoma in the European Thoracic Oncology Platform Lungscape iBiobank. Positive patients were matched with negative patients in a 1:2 ratio, both for IHC and for FISH testing. Testing was performed in 16 participating centers, using the same protocol after passing external quality assessment. RESULTS: Positive ALK IHC staining was present in 80 patients (prevalence of 6.2%; 95% CI, 4.9% to 7.6%). Of these, 28 patients were ALK FISH positive, corresponding to a lower bound for the prevalence of FISH positivity of 2.2%. FISH specificity was 100%, and FISH sensitivity was 35.0% (95% CI, 24.7% to 46.5%), with a sensitivity value of 81.3% (95% CI, 63.6% to 92.8%) for IHC 2+/3+ patients. The hazard of death for FISH-positive patients was lower than for IHC-negative patients (P = .022). Multivariable models, adjusted for patient, tumor, and treatment characteristics, and matched cohort analysis confirmed that ALK FISH positivity is a predictor for better overall survival (OS). CONCLUSION: In this large cohort of surgically resected lung adenocarcinomas, the prevalence of ALK positivity was 6.2% using IHC and at least 2.2% using FISH. A screening strategy based on IHC or H-score could be envisaged. ALK positivity (by either IHC or FISH) was related to better OS.
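The headline rates follow directly from the counts given in the abstract (80 IHC-positive and 28 FISH-positive cases among 1,281 patients); a quick arithmetic check:

```python
n_patients = 1281
ihc_pos = 80    # ALK IHC-positive cases
fish_pos = 28   # of the IHC-positive cases, those also ALK FISH-positive

print(f"IHC prevalence:          {ihc_pos / n_patients:.1%}")   # 6.2%
print(f"FISH prevalence (lower): {fish_pos / n_patients:.1%}")  # 2.2%
print(f"FISH-positive among IHC+: {fish_pos / ihc_pos:.1%}")    # 35.0%
```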
Abstract:
In this paper we present concrete collision and preimage attacks on a large class of compression function constructions making two calls to the underlying ideal primitives. The complexity of the collision attack is above the theoretical lower bound for constructions of this type, but below the birthday complexity; the complexity of the preimage attack, however, is equal to the theoretical lower bound. We also present undesirable properties of some of Stam's compression functions proposed at CRYPTO '08. We show that when one of the n-bit to n-bit components of the proposed 2n-bit to n-bit compression function is replaced by a fixed-key cipher in the Davies-Meyer mode, the complexity of finding a preimage would be $2^{n/3}$. We also show that the complexity of finding a collision in a variant of the 3n-bit to 2n-bit scheme with its output truncated to 3n/2 bits is $2^{n/2}$. The complexity of our preimage attack on this hash function is about $2^n$. Finally, we present a collision attack on a variant of the proposed (m+s)-bit to s-bit scheme, truncated to s-1 bits, with a complexity of O(1). However, none of our results compromise Stam's security claims.
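For reference, the fixed-key Davies-Meyer component mentioned above computes f(x) = E(x) XOR x, where E is an n-bit permutation obtained by fixing the key of a block cipher. A toy sketch (the stand-in for E below is non-cryptographic and purely illustrative, not a real fixed-key cipher):

```python
import hashlib

N_BYTES = 8  # toy 64-bit block width; a real instance would use a block cipher

def E(x: int) -> int:
    """Stand-in for an n-bit fixed-key cipher (a public permutation).
    Modeled here, non-cryptographically, by truncated SHA-256 for illustration."""
    return int.from_bytes(hashlib.sha256(x.to_bytes(N_BYTES, "big")).digest()[:N_BYTES], "big")

def davies_meyer_fixed_key(x: int) -> int:
    """f(x) = E(x) XOR x: the fixed-key Davies-Meyer component discussed above."""
    return E(x) ^ x

print(hex(davies_meyer_fixed_key(0x0123456789ABCDEF)))
```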
Abstract:
We study the rates of growth of the regret in online convex optimization. First, we show that a simple extension of the algorithm of Hazan et al. eliminates the need for a priori knowledge of the lower bound on the second derivatives of the observed functions. We then provide an algorithm, Adaptive Online Gradient Descent, which interpolates between the results of Zinkevich for linear functions and of Hazan et al. for strongly convex functions, achieving intermediate regret rates between $\sqrt{T}$ and $\log T$. Furthermore, we show strong optimality of the algorithm. Finally, we provide an extension of our results to general norms.
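A minimal sketch of the adaptive step-size idea (illustrative only, not the paper's exact update rule): the learning rate decays like 1/(cumulative curvature) when strong convexity is observed, and like $1/\sqrt{t}$ otherwise, which is the interpolation between $\log T$ and $\sqrt{T}$ regret described above.

```python
import numpy as np

def adaptive_ogd(grad, curv, T, dim, radius=1.0):
    """Illustrative adaptive online gradient descent (a sketch, not the
    paper's exact rule).
    grad(t, x): gradient of the t-th observed function at the current iterate
    curv(t):    a lower bound on the t-th function's curvature (0 if unknown)"""
    x = np.zeros(dim)
    H_sum = 0.0
    for t in range(1, T + 1):
        g = grad(t, x)
        H_sum += curv(t)
        # strongly convex so far -> 1/(cumulative curvature); else 1/sqrt(t)
        eta = 1.0 / H_sum if H_sum > 0 else radius / np.sqrt(t)
        x = x - eta * g
        norm = np.linalg.norm(x)         # project back onto the feasible ball
        if norm > radius:
            x *= radius / norm
    return x

# Strongly convex stream f_t(x) = 0.5 * ||x - 0.5||^2: curvature 1 per round,
# so eta_t = 1/t and the iterate converges quickly to 0.5.
print(adaptive_ogd(grad=lambda t, x: x - 0.5, curv=lambda t: 1.0, T=100, dim=1))
```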
Abstract:
This paper presents a novel three-dimensional hybrid smoothed finite element method (H-SFEM) for solid mechanics problems. In 3D H-SFEM, the strain field is assumed to be a weighted average of the compatible strains from the finite element method (FEM) and the smoothed strains from the node-based smoothed FEM, combined via a parameter α. By adjusting α, upper and lower bound solutions in the strain energy norm and eigenfrequencies can always be obtained. The optimized α value in 3D H-SFEM using a tetrahedron mesh possesses a close-to-exact stiffness of the continuous system, and produces ultra-accurate solutions in terms of displacement, strain energy and eigenfrequencies in linear and nonlinear problems. A novel domain-based selective scheme is proposed, leading to a combined selective H-SFEM model that is immune from volumetric locking and hence works well for nearly incompressible materials. The proposed 3D H-SFEM is a distinctive numerical method with great potential for application to solid mechanics problems.
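The α-weighted strain field can be written in one line; a sketch assuming a convex combination of the two strain fields (the paper's exact weighting may differ):

```python
import numpy as np

def hybrid_strain(eps_fem: np.ndarray, eps_ns: np.ndarray, alpha: float) -> np.ndarray:
    """Blend compatible FEM strains with node-based smoothed (NS-FEM) strains.
    alpha = 0 recovers standard FEM (overly stiff, lower-bound-type strain energy);
    alpha = 1 recovers NS-FEM (overly soft, upper-bound-type); intermediate alpha
    is tuned toward the close-to-exact stiffness the abstract describes.
    The convex-combination form here is an assumption for illustration."""
    assert 0.0 <= alpha <= 1.0
    return (1.0 - alpha) * eps_fem + alpha * eps_ns
```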
Abstract:
Let $G = (V, E)$ be a finite, simple and undirected graph. For $S \subseteq V$, let $\delta(S, G) = \{(u, v) \in E : u \in S, v \in V - S\}$ be the edge boundary of $S$. Given an integer $i$, $1 \le i \le |V|$, let the edge isoperimetric value of $G$ at $i$ be defined as $b_e(i, G) = \min_{S \subseteq V, |S| = i} |\delta(S, G)|$. The edge isoperimetric peak of $G$ is defined as $b_e(G) = \max_{1 \le j \le |V|} b_e(j, G)$. Let $b_v(G)$ denote the vertex isoperimetric peak, defined in a corresponding way. The problem of determining a lower bound for the vertex isoperimetric peak in complete $t$-ary trees was recently considered in [Y. Otachi, K. Yamazaki, A lower bound for the vertex boundary-width of complete $k$-ary trees, Discrete Mathematics, in press (doi: 10.1016/j.disc.2007.05.014)]. In this paper we provide bounds which improve those in the above cited paper. Our results can be generalized to arbitrary (rooted) trees. The depth $d$ of a tree is the number of nodes on the longest path starting from the root and ending at a leaf. In this paper we show that for a complete binary tree of depth $d$ (denoted $T_d^2$), $c_1 d \le b_e(T_d^2) \le d$ and $c_2 d \le b_v(T_d^2) \le d$, where $c_1, c_2$ are constants. For a complete $t$-ary tree of depth $d$ (denoted $T_d^t$) and $d \ge c \log t$, where $c$ is a constant, we show that $c_1 \sqrt{t}\, d \le b_e(T_d^t) \le t d$ and $c_2 d / \sqrt{t} \le b_v(T_d^t) \le d$, where $c_1, c_2$ are constants. At the heart of our proof we have the following theorem, which works for an arbitrary rooted tree and not just for a complete $t$-ary tree. Let $T = (V, E, r)$ be a finite, connected and rooted tree, the root being the vertex $r$. Define a weight function $w : V \to \mathbb{N}$ where the weight $w(u)$ of a vertex $u$ is the number of its successors (including itself), and let the weight index $\eta(T)$ be defined as the number of distinct weights in the tree, i.e. $\eta(T) = |\{w(u) : u \in V\}|$. For a positive integer $k$, let $\ell(k) = |\{i \in \mathbb{N} : 1 \le i \le |V|, b_e(i, G) \le k\}|$. We show that $\ell(k) \le 2\binom{2\eta + k}{k}$.
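For small graphs, $b_e(i, G)$ and $b_e(G)$ can be computed directly from the definitions; a brute-force sketch (exponential in $|V|$, intended only as a sanity check on small trees):

```python
from itertools import combinations
import networkx as nx

def edge_boundary_size(G, S):
    """|delta(S, G)|: number of edges with exactly one endpoint in S."""
    S = set(S)
    return sum(1 for u, v in G.edges() if (u in S) != (v in S))

def edge_isoperimetric_value(G, i):
    """b_e(i, G) = min over all S with |S| = i of |delta(S, G)| (brute force)."""
    return min(edge_boundary_size(G, S) for S in combinations(G.nodes(), i))

def edge_isoperimetric_peak(G):
    """b_e(G) = max over 1 <= i <= |V| of b_e(i, G)."""
    return max(edge_isoperimetric_value(G, i) for i in range(1, G.number_of_nodes() + 1))

# Complete binary tree with 7 nodes (depth 3 in the abstract's node-counting
# convention); the abstract bounds its peak by Theta(d).
T = nx.balanced_tree(r=2, h=2)
print(edge_isoperimetric_peak(T))
```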
Abstract:
To detect errors in decision tables one needs to decide whether a given set of constraints is feasible or not. This paper describes an algorithm to do so when the constraints are linear in variables that take only integer values. Decision tables with such constraints occur frequently in business data processing and in nonnumeric applications. The aim of the algorithm is to exploit the abundance of very simple constraints that occur in typical decision table contexts. Essentially, the algorithm is a backtrack procedure where the solution space is pruned by using the set of simple constraints. After some simplifications, the simple constraints are captured in an acyclic directed graph with weighted edges. Further, only those partial vectors are considered for extension which can be extended to assignments that will at least satisfy the simple constraints. This is how pruning of the solution space is achieved. For every partial assignment considered, the graph representation of the simple constraints provides a lower bound for each variable which is not yet assigned a value. These lower bounds play a vital role in the algorithm, and they are obtained in an efficient manner by updating older lower bounds. Our present algorithm also incorporates an idea by which it can be checked whether or not an (m-2)-ary vector can be extended to a solution vector of m components, thereby reducing backtracking by one component.
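A schematic of the backtracking-with-lower-bounds idea (generic, not the paper's exact graph-based bound updates): simple constraints of the form x_j >= x_i + c, which correspond to the weighted acyclic graph above, are propagated after each assignment, and a partial vector is extended only if the resulting lower bounds remain within range.

```python
def feasible(n, domains, simple, general):
    """Backtracking feasibility check for integer constraints (a sketch).
    domains: list of (lo, hi) value ranges for x_0 .. x_{n-1}
    simple:  list of (i, j, c) encoding the simple constraints x_j >= x_i + c
    general: predicates over a complete assignment (the remaining, harder
             constraints, checked only at the leaves)"""

    def propagate(lb, k):
        # Raise lower bounds of unassigned variables from the simple
        # constraints; positions < k hold already-assigned values.
        changed = True
        while changed:
            changed = False
            for i, j, c in simple:
                if lb[i] + c > lb[j]:
                    if j < k or lb[i] + c > domains[j][1]:
                        return None   # contradicts an assignment or a range
                    lb[j] = lb[i] + c
                    changed = True
        return lb

    def extend(x, lb):
        k = len(x)
        if k == n:
            return all(p(x) for p in general)
        for v in range(lb[k], domains[k][1] + 1):   # values below lb[k] pruned
            new_lb = propagate(lb[:k] + [v] + lb[k + 1:], k + 1)
            if new_lb is not None and extend(x + [v], new_lb):
                return True
        return False

    lb0 = propagate([lo for lo, _ in domains], 0)
    return lb0 is not None and extend([], lb0)

# x1 >= x0 + 2, x2 >= x1 + 1, all in 0..5, with x0 + x1 + x2 <= 7.
print(feasible(3, [(0, 5)] * 3, [(0, 1, 2), (1, 2, 1)],
               [lambda x: sum(x) <= 7]))   # True (e.g. x = [0, 2, 3])
```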
Abstract:
We study the secondary structure of RNA determined by Watson-Crick pairing without pseudo-knots using Milnor invariants of links. We focus on the first non-trivial invariant, which we call the Heisenberg invariant. The Heisenberg invariant, which is an integer, can be interpreted in terms of the Heisenberg group as well as in terms of lattice paths. We show that the Heisenberg invariant gives a lower bound on the number of unpaired bases in an RNA secondary structure. We also show that the Heisenberg invariant can predict allosteric structures for RNA. Namely, if the Heisenberg invariant is large, then there are widely separated local maxima (i.e., allosteric structures) for the number of Watson-Crick pairs found.
Abstract:
The bearing capacity factor $N_c$ for axially loaded piles in clays whose cohesion increases linearly with depth has been estimated numerically under undrained ($\phi = 0$) conditions. The study follows the lower bound limit analysis in conjunction with finite elements and linear programming. A new formulation is proposed for solving an axisymmetric geotechnical stability problem. The variation of $N_c$ with embedment ratio is obtained for several rates of increase of soil cohesion with depth; a special case is also examined in which the pile base is placed on a stiff clay stratum overlaid by a soft clay layer. It was noticed that the magnitude of $N_c$ reaches an almost constant value for embedment ratios greater than unity. The roughness of the pile base and shaft affects the magnitude of $N_c$ only marginally. The results obtained from the present study are found to compare quite well with the different numerical solutions reported in the literature.
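The static (lower bound) theorem underlying such an analysis leads to a linear program: maximize the load multiplier subject to equilibrium and yield constraints on the stress field. A toy truss example showing only the LP structure (the paper's axisymmetric finite-element formulation is far richer):

```python
import numpy as np
from scipy.optimize import linprog

# Static (lower-bound) limit analysis on a toy 3-bar truss: maximize the
# load factor lam subject to nodal equilibrium and member yield |s_i| <= s_y.
c45 = np.sqrt(2) / 2
# variables v = [s1, s2, s3, lam]; bars at 45, 90, 45 degrees to horizontal
A_eq = np.array([[-c45, 0.0, c45,  0.0],    # horizontal equilibrium
                 [ c45, 1.0, c45, -1.0]])   # vertical equilibrium vs lam * load
b_eq = np.zeros(2)
s_y = 1.0                                   # member yield force (illustrative)
res = linprog(c=[0, 0, 0, -1],              # minimize -lam, i.e. maximize lam
              A_eq=A_eq, b_eq=b_eq,
              bounds=[(-s_y, s_y)] * 3 + [(0, None)])
print(f"collapse load factor (lower bound): {res.x[-1]:.4f}")   # 1 + sqrt(2)
```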
Abstract:
This paper investigates the problem of designing reverse channel training sequences for a TDD-MIMO spatial-multiplexing system. Assuming perfect channel state information at the receiver and spatial multiplexing at the transmitter with equal power allocation to the m dominant modes of the estimated channel, the pilot is designed to ensure an estimate of the channel which improves the forward link capacity. Using perturbation techniques, a lower bound on the forward link capacity is derived, with respect to which the training sequence is optimized. Thus, the reverse channel training sequence makes use of the channel knowledge at the receiver. The performance of an orthogonal training sequence with MMSE estimation at the transmitter is compared with that of the proposed training sequence. Simulation results show a significant improvement in performance.
Abstract:
Scan circuits generally cause excessive switching activity compared to normal circuit operation. The higher switching activity in turn causes a higher peak power supply current, which results in supply voltage droop and eventually yield loss. This paper proposes an efficient methodology for test vector re-ordering to achieve the minimum peak power supported by the given test vector set. The proposed methodology also minimizes average power under the minimum peak power constraint. A methodology to further reduce the peak power below the minimum supported peak power, by inclusion of a minimum number of additional vectors, is also discussed. The paper defines the lower bound on peak power for a given test set. Results on several benchmarks show that the method can reduce peak power by up to 27%.
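The minimum achievable peak power is governed by the costliest adjacent-vector transition that any ordering must contain. A greedy reordering sketch under the common simplification that transition power is proportional to the Hamming distance between consecutive vectors (the paper's method attains the exact minimum; this heuristic is only illustrative):

```python
def hamming(a: str, b: str) -> int:
    """Bit transitions when vector b follows a (proxy for switching power)."""
    return sum(x != y for x, y in zip(a, b))

def greedy_reorder(vectors):
    """Greedy nearest-neighbor ordering that heuristically reduces the peak
    adjacent-pair transition count."""
    rest = list(vectors)
    order = [rest.pop(0)]
    while rest:
        nxt = min(rest, key=lambda v: hamming(order[-1], v))
        order.append(nxt)
        rest.remove(nxt)
    return order

tests = ["0000", "1111", "0011", "1100", "0101"]
order = greedy_reorder(tests)
peak = max(hamming(a, b) for a, b in zip(order, order[1:]))
print(order, "peak transitions:", peak)   # peak 2 vs. 4 in the original order
```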
Abstract:
In this paper, we present numerical evidence that supports the notion of minimization in the sequence space of proteins for a target conformation. We use the conformations of real proteins in the Protein Data Bank (PDB) and present computationally efficient methods to identify the sequences with minimum energy. We use an edge-weighted connectivity graph for ranking the residue sites with a reduced amino acid alphabet, and then use continuous optimization to obtain the energy-minimizing sequences. Our methods enable the computation of a lower bound as well as a tight upper bound for the energy of a given conformation. We validate our results by using three different inter-residue energy matrices for five proteins from the PDB, and by comparing our energy-minimizing sequences with 80 million diverse sequences that are generated based on different considerations in each case. When we submitted some of our chosen energy-minimizing sequences to the Basic Local Alignment Search Tool (BLAST), we obtained some sequences from the non-redundant protein sequence database that are similar to ours, with an E-value of the order of $10^{-7}$. In summary, we conclude that proteins show a trend towards minimizing energy in the sequence space but do not seem to adopt the global energy-minimizing sequence. The reason could be either that the existing energy matrices are not able to accurately represent the inter-residue interactions in the context of the protein environment, or that Nature does not push the optimization in the sequence space once it is able to perform the function.
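A sketch of the first step described above: ranking residue sites by weighted degree in the residue contact graph (the toy weights below are made up; the paper's graph is built from conformation-specific inter-residue interactions):

```python
import numpy as np

def rank_sites_by_connectivity(contact_weights: np.ndarray) -> np.ndarray:
    """Rank residue sites by weighted degree in the residue contact graph,
    as a proxy for their influence on the conformation's energy.
    contact_weights: symmetric (n x n) matrix of inter-residue contact weights."""
    weighted_degree = contact_weights.sum(axis=1)
    return np.argsort(-weighted_degree)        # most connected sites first

# Toy 5-residue contact graph (symmetric, zero diagonal).
W = np.array([[0, 2, 0, 1, 0],
              [2, 0, 3, 0, 0],
              [0, 3, 0, 1, 2],
              [1, 0, 1, 0, 0],
              [0, 0, 2, 0, 0]], dtype=float)
print(rank_sites_by_connectivity(W))   # site 2 (weighted degree 6) ranks first
```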
Abstract:
An acyclic edge coloring of a graph is a proper edge coloring such that there are no bichromatic cycles. The acyclic chromatic index of a graph is the minimum number $k$ such that there is an acyclic edge coloring using $k$ colors, and is denoted by $a'(G)$. It was conjectured by Alon, Sudakov and Zaks (and earlier by Fiamcik) that $a'(G) \le \Delta + 2$, where $\Delta = \Delta(G)$ denotes the maximum degree of the graph. Alon et al. also raised the question whether the complete graphs of even order are the only regular graphs which require $\Delta + 2$ colors to be acyclically edge colored. In this article, using a simple counting argument, we observe not only that this is not true, but in fact that all $d$-regular graphs with $2n$ vertices and $d > n$ require at least $d + 2$ colors. We also show that $a'(K_{n,n}) \ge n + 2$ when $n$ is odd, using a more non-trivial argument. (Here $K_{n,n}$ denotes the complete bipartite graph with $n$ vertices on each side.) This lower bound for $K_{n,n}$ can be shown to be tight for some families of complete bipartite graphs and for small values of $n$. We also infer that for every $d, n$ such that $d \ge 5$, $n \ge 2d + 3$ and $dn$ is even, there exist $d$-regular graphs which require at least $d + 2$ colors to be acyclically edge colored. (C) 2009 Wiley Periodicals, Inc. J Graph Theory 63: 226-230, 2010.
Abstract:
The Hadwiger number $\eta(G)$ of a graph $G$ is the largest integer $n$ for which the complete graph $K_n$ on $n$ vertices is a minor of $G$. Hadwiger conjectured that for every graph $G$, $\eta(G) \ge \chi(G)$, where $\chi(G)$ is the chromatic number of $G$. In this paper, we study the Hadwiger number of the Cartesian product $G \square H$ of graphs. As the main result of this paper, we prove that $\eta(G_1 \square G_2) \ge h\sqrt{l}\,(1 - o(1))$ for any two graphs $G_1$ and $G_2$ with $\eta(G_1) = h$ and $\eta(G_2) = l$. We show that the above lower bound is asymptotically best possible when $h \ge l$. This asymptotically settles a question of Z. Miller (1978). As consequences of our main result, we show the following: 1. Let $G$ be a connected graph. Let $G = G_1 \square G_2 \square \cdots \square G_k$ be the (unique) prime factorization of $G$. Then $G$ satisfies Hadwiger's conjecture if $k \ge 2 \log \log \chi(G) + c'$, where $c'$ is a constant. This improves the $2 \log \chi(G) + 3$ bound in [2]. 2. Let $G_1$ and $G_2$ be two graphs such that $\chi(G_1) \ge \chi(G_2) \ge c \log^{1.5}(\chi(G_1))$, where $c$ is a constant. Then $G_1 \square G_2$ satisfies Hadwiger's conjecture. 3. Hadwiger's conjecture is true for $G^d$ (the Cartesian product of $G$ taken $d$ times) for every graph $G$ and every $d \ge 2$. This settles a question by Chandran and Sivadasan [2]. (They had shown that Hadwiger's conjecture is true for $G^d$ if $d \ge 3$.)
Abstract:
The problem of sensor-network-based distributed intrusion detection in the presence of clutter is considered. It is argued that sensing is best regarded as a local phenomenon, in that only sensors in the immediate vicinity of an intruder are triggered. In such a setting, lack of knowledge of intruder location gives rise to correlated sensor readings. A signal-space viewpoint is introduced in which the noise-free sensor readings associated with intruder and clutter appear as surfaces $\mathcal{S_I}$ and $\mathcal{S_C}$, and the problem reduces to one of determining, in distributed fashion, whether the current noisy sensor reading is best classified as intruder or clutter. Two approaches to distributed detection are pursued. In the first, a decision surface separating $\mathcal{S_I}$ and $\mathcal{S_C}$ is identified using Neyman-Pearson criteria. Thereafter, the individual sensor nodes interactively exchange bits to determine whether the sensor readings are on one side or the other of the decision surface. Bounds on the number of bits needed to be exchanged are derived, based on communication complexity (CC) theory. A lower bound derived for the two-party average case CC of general functions is compared against the performance of a greedy algorithm. The average case CC of the relevant greater-than (GT) function is characterized within two bits. In the second approach, each sensor node broadcasts a single bit arising from an appropriate two-level quantization of its own sensor reading, keeping in mind the fusion rule to be subsequently applied at a local fusion center. The optimality of a threshold test as a quantization rule is proved under simplifying assumptions. Finally, results from a QualNet simulation of the algorithms are presented, including intruder tracking using a naive polynomial-regression algorithm.
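A sketch of the second approach: each sensor broadcasts one bit from a threshold test on its own reading, and the local fusion center applies a counting rule (the threshold value and the k-out-of-n fusion rule below are illustrative choices, not the paper's optimized parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

def sensor_bits(readings: np.ndarray, threshold: float) -> np.ndarray:
    """Two-level (one-bit) quantization at each sensor: 1 if the local reading
    exceeds the threshold. The abstract proves a threshold test optimal under
    simplifying assumptions; the threshold value here is ad hoc."""
    return (readings > threshold).astype(int)

def fuse(bits: np.ndarray, k: int) -> int:
    """Counting (k-out-of-n) fusion rule at the local fusion center."""
    return int(bits.sum() >= k)

n_sensors = 10
noise = rng.normal(0.0, 1.0, n_sensors)
# Sensing is local: only the few sensors near the intruder see a signal.
intruder_signal = np.where(np.arange(n_sensors) < 3, 2.0, 0.0)
bits = sensor_bits(intruder_signal + noise, threshold=1.0)
print("bits:", bits, "-> decision:", fuse(bits, k=2))
```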