943 results for diagonal constrained decorrelation


Abstract:

Conformational energy calculations on the model system N-acetyl-1-aminocyclohexanecarboxylic acid N'-methylamide (Ac-Acc6-NHMe), using an average geometry derived from 13 crystallographic observations, establish that the Acc6 residue is constrained to adopt conformations in the helical regions of φ,ψ space. In contrast, the α,α-dialkylated residue with linear hydrocarbon side chains, α,α-di-n-propylglycine, favors fully extended backbone structures (φ = ψ = 180°). The crystal structures of two model peptides, Boc-(Acc6)3-OMe (type III β-turn at -Acc6(1)-Acc6(2)-) and Boc-Pro-Acc6-Ala-OMe (type II β-turn at -Pro-Acc6-), establish that Acc6 residues can occupy either position of type III β-turns and the i + 2 position of type II β-turns. The stereochemical rigidity of these peptides is demonstrated in solution by NMR studies, which establish the presence of one intramolecular hydrogen bond in each peptide in CDCl3 and (CD3)2SO. Nuclear Overhauser effects permit characterization of the β-turn conformations in solution and establish their similarity to the solid-state structures. The implications for the use of Acc6 residues in conformational design are considered.

Abstract:

The ability of DNA sequences to adopt unusual structures under superhelical torsional stress has been studied. Sequences that are forced to adopt unusual conformations in topologically constrained pBR322 form V DNA (Lk = 0) were mapped using restriction enzymes as probes. Restriction enzymes such as BamHI, PstI, AvaI and HindIII could not cleave their recognition sequences; removal of the topological constraint relieved this inhibition. The influence of neighbouring sequences on the ability of a given sequence to adopt an unusual DNA structure, presumably a left-handed Z conformation, was studied through single-hit analysis. Using multiple-cut restriction enzymes such as NarI and FspI, it could be shown that under identical topological strain, the extent of structural alteration is greatly influenced by the neighbouring sequences. In light of the variety of sequences and locations that could be mapped as adopting non-B conformations in pBR322 form V DNA, restriction enzymes appear to be potential structural probes for natural DNA sequences.

Abstract:

The design of folded structures in peptides containing the higher homologues of α-amino acid residues requires restriction of the range of local conformational choices. In α-amino acids, stereochemically constrained residues such as the α,α-dialkylated residue α-aminoisobutyric acid (Aib) and D-proline (DPro) have proved extremely useful in the design of helices and hairpins in short peptides. Extending this approach, backbone substitution and cyclization are anticipated to be useful in generating conformationally constrained β- and γ-residues. This brief review provides a survey of work on hybrid peptide sequences containing the conformationally constrained γ-amino acid residue 1-(aminomethyl)cyclohexaneacetic acid, gabapentin (Gpn). This achiral, β,β-disubstituted γ-residue strongly favors gauche-gauche conformations about the Cα-Cβ (θ2) and Cβ-Cγ (θ1) bonds, facilitating local folding. The Gpn residue can adopt both C7 (NH(i) → CO(i)) and C9 (CO(i-1) ← NH(i+1)) hydrogen bonds, which are analogous to the C5 and C7 (γ-turn) conformations at α-residues. In conjunction with adjacent residues, Gpn may be used in αγ and γα segments to generate C12 hydrogen-bonded conformations, which may be considered expanded analogs of conventional β-turns. The structural characterization of C12 helices, C12/C10 helices with mixed hydrogen-bond directionalities, and β-hairpins incorporating Gpn residues at the turn segment is illustrated. © 2010 Wiley Periodicals, Inc. Biopolymers (Pept Sci) 94: 733-741, 2010.

Abstract:

We develop four algorithms for simulation-based optimization under multiple inequality constraints. Both the cost and the constraint functions are considered to be long-run averages of certain state-dependent single-stage functions. We pose the problem in the simulation optimization framework by using the Lagrange multiplier method. Two of our algorithms estimate only the gradient of the Lagrangian, while the other two estimate both its gradient and its Hessian; in the process, we also develop various new estimators for the gradient and the Hessian. All four algorithms use two simulations each. Two of them are based on the smoothed functional (SF) technique, while the other two are based on the simultaneous perturbation stochastic approximation (SPSA) method. We prove the convergence of our algorithms and present numerical experiments on a setting involving an open Jackson network. The Newton-based SF algorithm is seen to show the best overall performance.
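To make the two-simulation SPSA idea concrete, here is a minimal Python sketch (not the paper's exact algorithm): the Lagrangian is formed from simulated long-run averages of the cost and constraint functions, its gradient is estimated from two perturbed simulations, and the parameter and multiplier updates run on different step sizes. The `simulate` callback, step sizes and perturbation size are illustrative assumptions.

```python
import numpy as np

def spsa_lagrangian_step(theta, lam, simulate, step_theta=0.01, step_lam=0.001,
                         delta=0.1, horizon=10_000, rng=np.random.default_rng(0)):
    """One iteration of a two-simulation SPSA update on the Lagrangian
    L(theta, lam) = cost(theta) + lam . constraint(theta), where `simulate`
    returns finite-horizon estimates of the long-run average cost and of the
    constraint functions (written as g(theta) <= 0)."""
    # Random +/-1 perturbation of every parameter component (SPSA).
    perturb = rng.choice([-1.0, 1.0], size=theta.shape)

    # The two simulations, at theta + delta*perturb and theta - delta*perturb.
    cost_p, cons_p = simulate(theta + delta * perturb, horizon)
    cost_m, cons_m = simulate(theta - delta * perturb, horizon)
    lagr_p = cost_p + lam @ cons_p
    lagr_m = cost_m + lam @ cons_m

    # SPSA gradient estimate: one scalar difference divided componentwise.
    grad = (lagr_p - lagr_m) / (2.0 * delta * perturb)

    # Descent in theta (faster timescale), projected ascent in lam (slower timescale).
    theta_new = theta - step_theta * grad
    lam_new = np.maximum(0.0, lam + step_lam * 0.5 * (cons_p + cons_m))
    return theta_new, lam_new
```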

Abstract:

In this article we develop the first actor-critic reinforcement learning algorithm with function approximation for a control problem under multiple inequality constraints. We consider the infinite-horizon discounted-cost framework, in which both the objective and the constraint functions are suitable expected policy-dependent discounted sums of certain sample-path functions. We apply the Lagrange multiplier method to handle the inequality constraints. Our algorithm makes use of multi-timescale stochastic approximation and incorporates a temporal difference (TD) critic and an actor that performs a gradient search in the space of policy parameters using efficient simultaneous perturbation stochastic approximation (SPSA) gradient estimates. We prove the asymptotic almost-sure convergence of our algorithm to a locally optimal policy. © 2010 Elsevier B.V. All rights reserved.
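The following Python sketch illustrates, under simplifying assumptions, the ingredients named in this abstract: a TD(0) critic with linear function approximation, an SPSA-based actor update on the policy parameters, and a slow Lagrange-multiplier ascent on the constraint, each on its own timescale (step size). It is not the paper's algorithm; `run_episode`, the feature map `phi`, the step sizes, and the use of two extra rollouts for the SPSA estimate are all assumptions.

```python
import numpy as np

def constrained_ac_update(theta, v, lam, phi, run_episode, c_limit,
                          gamma=0.99, a_v=0.05, a_theta=0.005, a_lam=0.0005,
                          delta=0.05, rng=np.random.default_rng(0)):
    """One round of a simplified multi-timescale update: fast TD(0) critic,
    slower SPSA actor, slowest Lagrange-multiplier ascent.  run_episode(theta)
    is assumed to return a list of (state, cost, constraint_cost, next_state)."""
    # --- Critic: TD(0) on the Lagrangian single-stage cost c + lam * g ---
    for s, c, g, s_next in run_episode(theta):
        td_error = (c + lam * g) + gamma * v @ phi(s_next) - v @ phi(s)
        v = v + a_v * td_error * phi(s)

    # --- Actor: SPSA estimate of the gradient of the discounted Lagrangian ---
    perturb = rng.choice([-1.0, 1.0], size=theta.shape)
    j_plus = discounted_lagrangian(run_episode(theta + delta * perturb), lam, gamma)
    j_minus = discounted_lagrangian(run_episode(theta - delta * perturb), lam, gamma)
    theta = theta - a_theta * (j_plus - j_minus) / (2.0 * delta * perturb)

    # --- Lagrange multiplier: slow projected ascent on constraint violation ---
    g_total = discounted_constraint(run_episode(theta), gamma)
    lam = max(0.0, lam + a_lam * (g_total - c_limit))
    return theta, v, lam

def discounted_lagrangian(transitions, lam, gamma):
    return sum((gamma ** t) * (c + lam * g) for t, (_, c, g, _) in enumerate(transitions))

def discounted_constraint(transitions, gamma):
    return sum((gamma ** t) * g for t, (_, _, g, _) in enumerate(transitions))
```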

Abstract:

We propose the Frobenius norm (F-norm) of the cross-correlation part of the array covariance matrix as a measure of the correlation between the impinging signals, and use it to study the performance of different decorrelation methods in the broadband case. We first show that the dimensionality of the composite signal subspace, defined as the number of significant eigenvectors of the source sample covariance matrix, collapses in the presence of multipath, and that spatial smoothing recovers this dimensionality. Using an upper bound on the proposed measure, we then study the decorrelation of broadband signals under spatial smoothing and the effect of the spacing and directions of the sources on the rate of decorrelation with progressive smoothing. Next, we introduce a weighted smoothing method based on Toeplitz-block-Toeplitz (TBT) structuring of the data covariance matrix, which decorrelates the signals much faster than spatial smoothing. Computer simulations are included to demonstrate the performance of the two methods.
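As an illustration of the quantities involved, the sketch below implements plain forward spatial smoothing over the subarrays of a uniform linear array and the Frobenius norm of the off-diagonal (cross-correlation) part of a covariance matrix, which serves as the decorrelation measure. The weighted Toeplitz-block-Toeplitz smoothing and the broadband processing of the paper are not reproduced; function names and array shapes are assumptions.

```python
import numpy as np

def forward_spatial_smoothing(x, sub_len):
    """Average the sample covariances of all overlapping subarrays of length
    `sub_len` taken from a uniform linear array data matrix x of shape
    (num_sensors, num_snapshots)."""
    m, n = x.shape
    num_sub = m - sub_len + 1
    r = np.zeros((sub_len, sub_len), dtype=complex)
    for k in range(num_sub):
        xs = x[k:k + sub_len, :]
        r += xs @ xs.conj().T / n          # sample covariance of subarray k
    return r / num_sub

def cross_correlation_fnorm(s):
    """Frobenius norm of the off-diagonal (cross-correlation) part of a source
    covariance matrix s -- zero when the sources are fully decorrelated."""
    return np.linalg.norm(s - np.diag(np.diag(s)), ord='fro')
```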

Abstract:

This paper studies the problem of constructing robust classifiers when the training data are plagued with uncertainty. The problem is posed as a Chance-Constrained Program (CCP) which ensures that the uncertain data points are classified correctly with high probability. Unfortunately, such a CCP turns out to be intractable. The key novelty is in employing Bernstein bounding schemes to relax the CCP as a convex second-order cone program whose solution is guaranteed to satisfy the probabilistic constraint. Prior to this work, only Chebyshev-based relaxations had been exploited in learning algorithms. Bernstein bounds employ richer partial information and hence can be far less conservative than Chebyshev bounds. Owing to this more efficient modeling of uncertainty, the resulting classifiers achieve higher classification margins and hence better generalization. Methodologies for classifying uncertain test data points and error measures for evaluating classifiers robust to uncertain data are discussed. Experimental results on synthetic and real-world datasets show that the proposed classifiers are better equipped to handle data uncertainty and outperform the state of the art in many cases.
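For orientation, here is a hedged cvxpy sketch of the generic second-order cone relaxation of such chance constraints, using the classical Chebyshev-style multiplier kappa = sqrt(eta/(1-eta)); the Bernstein bounding scheme of the paper yields a tighter, partial-information-dependent constant in place of this kappa, which is not reproduced here. All names and the soft-margin form are assumptions.

```python
import numpy as np
import cvxpy as cp

def robust_svm_socp(mu, sigma_sqrt, y, eta=0.9, C=1.0):
    """Chance-constrained classifier relaxed to a second-order cone program:
    each uncertain point, given by its mean mu[i] and a covariance factor
    sigma_sqrt[i], must be classified correctly with probability >= eta."""
    n, d = mu.shape
    kappa = np.sqrt(eta / (1.0 - eta))          # Chebyshev-style multiplier
    w, b = cp.Variable(d), cp.Variable()
    xi = cp.Variable(n, nonneg=True)            # slack for soft margins
    cons = [y[i] * (mu[i] @ w + b) >= 1 - xi[i] + kappa * cp.norm(sigma_sqrt[i] @ w, 2)
            for i in range(n)]
    prob = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(xi)), cons)
    prob.solve()
    return w.value, b.value
```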

Abstract:

Optimizing a shell-and-tube heat exchanger for a given duty is an important and relatively difficult task, and there is a need for a simple, general and reliable method for carrying it out. The authors present one such method for optimizing single-phase shell-and-tube heat exchangers subject to given geometric and thermohydraulic constraints. After discussing the problem in detail, they introduce a basic algorithm for optimizing the exchanger, based on data from an earlier study of a large collection of feasible designs generated for different process specifications. The algorithm ensures a near-optimal design satisfying the given heat duty and geometric constraints. The authors also provide several sub-algorithms to satisfy imposed velocity limitations, and illustrate their usefulness with several examples in which the exchanger weight is minimized.

Abstract:

A numerical study of the ductile rupture in a metal foil constrained between two stiff ceramic blocks is performed. The finite element analysis is carried out under the conditions of mode I, plane strain, small-scale yielding. The rate-independent version of the Gurson model that accounts for the ductile failure mechanisms of microvoid nucleation, growth and coalescence is employed to represent the behavior of the metal foil. Different distributions of void nucleating sites in the metal foil are considered for triggering the initiation of discrete voids. The results clearly show that far-field triaxiality-induced cavitation is the dominant failure mode when the spacing of the void nucleating sites is large. On the contrary, void coalescence near the notch tip is found to be the operative failure mechanism when closely spaced void nucleating sites are considered.

Abstract:

In this paper, we study the problem of wireless sensor network design by deploying a minimum number of additional relay nodes (to minimize network design cost) at a subset of given potential relay locations, in order to convey the data from already existing sensor nodes (hereafter called source nodes) to a Base Station within a certain specified mean delay bound. We formulate this problem in two different ways, and show that the problem is NP-Hard. For a problem in which the number of existing sensor nodes and potential relay locations is n, we propose an O(n)-approximation algorithm of polynomial time complexity. Results show that the algorithm performs efficiently in various randomly generated network scenarios: in over 90% of the tested scenarios, it gave solutions that were either optimal or exceeded the optimum by just one relay.

Abstract:

A wireless Energy Harvesting Sensor (EHS) needs to send data packets arriving in its queue over a fading channel at the maximum possible throughput while ensuring acceptable packet delays. At the same time, it needs to ensure that energy neutrality is satisfied, i.e., the average energy drawn from the battery should equal the amount of energy deposited in it minus the energy lost to battery inefficiency. In this work, a framework is developed under which a system designer can optimize the performance of the EHS node using power control based on the current channel state information, when the EHS node employs a single modulation and coding scheme and the channel is Rayleigh fading. Optimal system parameters for throughput-optimal, delay-optimal and delay-constrained throughput-optimal policies that ensure energy neutrality are derived. It is seen that a throughput-optimal (maximal-throughput) policy leads to unbounded packet delay, while an average-delay-optimal (minimal-delay) policy achieves negligibly small throughput. Finally, the influence of the harvested energy profile on the performance of the EHS is illustrated through the example of solar energy harvesting.
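As a toy illustration of an energy-neutral power-control policy (not one of the policies derived in the paper), the sketch below transmits at a fixed power only when the Rayleigh-fading channel gain exceeds a threshold, with the threshold chosen so that the long-run average power drawn matches the usable harvested power. All parameter names are assumptions.

```python
import numpy as np

def energy_neutral_threshold_policy(harvest_rate, efficiency, tx_power, mean_gain=1.0,
                                    num_slots=200_000, rng=np.random.default_rng(0)):
    """Transmit in a slot only if the channel power gain exceeds a threshold;
    the threshold is set so the average power drawn stays within the
    energy-neutral budget efficiency * harvest_rate."""
    budget = efficiency * harvest_rate                 # usable average power
    duty_cycle = min(1.0, budget / tx_power)           # affordable fraction of slots
    gains = rng.exponential(mean_gain, num_slots)      # Rayleigh fading -> exponential power gain
    threshold = np.quantile(gains, 1.0 - duty_cycle)   # transmit in the best slots only
    transmit = gains >= threshold
    avg_power = tx_power * transmit.mean()             # ~= budget (energy neutrality)
    throughput = transmit.mean()                       # packets per slot with one MCS
    return threshold, avg_power, throughput
```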

Abstract:

Technology scaling has caused Negative Bias Temperature Instability (NBTI) to emerge as a major circuit reliability concern. Simultaneously, leakage power is becoming a greater fraction of the total power dissipated by logic circuits. As both NBTI and leakage power depend strongly on the vectors applied at the circuit's inputs, they can be minimized by applying carefully chosen input vectors during periods when the circuit is in standby or idle mode. Unfortunately, input vectors that minimize leakage power are not the ones that minimize NBTI degradation, so there is a need for a methodology to generate input vectors that minimize both. This paper proposes such a systematic methodology for generating input vectors that minimize leakage power under the constraint that NBTI degradation does not exceed a specified limit. These input vectors can be applied at the primary inputs of a circuit when it is in standby/idle mode; they are chosen so that the gates dissipate only a small amount of leakage power while a large majority of the transistors on critical paths are in the "recovery" phase of NBTI degradation. The advantage of this methodology is that allowing circuit designers to constrain NBTI degradation to below a specified limit enables tighter guardbanding, increasing performance. Our methodology guarantees that the generated input vector dissipates the least leakage power among all input vectors that satisfy the degradation constraint. We formulate the problem as a zero-one integer linear program and show that this formulation produces input vectors whose leakage power is within 1% of a minimum leakage vector selected by a search algorithm, while simultaneously reducing NBTI-induced degradation by about 5.75% of the maximum circuit delay compared to the worst-case NBTI degradation. The paper also proposes two new algorithms for identifying the circuit paths most affected by NBTI degradation; the number of paths identified by these algorithms is an order of magnitude smaller than with previously proposed heuristics.
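A drastically simplified zero-one ILP in the spirit of this formulation is sketched below using PuLP: binary variables select the standby value of each primary input, the objective is total leakage, and the single constraint is an NBTI-stress budget. The real formulation encodes the logic network so that gate-level leakage and transistor stress follow from the chosen inputs; the independent per-input coefficients used here are purely illustrative assumptions.

```python
import pulp

def min_leakage_vector(leak_if_one, leak_if_zero, stress_if_one, stress_if_zero, stress_limit):
    """Toy zero-one ILP: choose a standby input vector that minimizes total
    leakage while keeping total NBTI stress under a budget."""
    n = len(leak_if_one)
    prob = pulp.LpProblem("standby_vector", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(n)]
    # Leakage objective: each input bit adds its 'one' or 'zero' contribution.
    prob += pulp.lpSum(leak_if_one[i] * x[i] + leak_if_zero[i] * (1 - x[i]) for i in range(n))
    # NBTI constraint: total stress stays under the specified limit.
    prob += pulp.lpSum(stress_if_one[i] * x[i] + stress_if_zero[i] * (1 - x[i])
                       for i in range(n)) <= stress_limit
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [int(v.value()) for v in x]
```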

Abstract:

In this paper, we study how TCP and UDP flows interact with each other when the end system is a CPU-resource-constrained thin client. The problem addressed is twofold: 1) the throughput of TCP flows degrades severely in the presence of heavily loaded UDP flows, and 2) fairness and minimum QoS requirements of the UDP flows are not maintained. First, we identify the factors affecting TCP throughput through an in-depth analysis of end-to-end delay and packet loss variations. The results from this first part lead us to our second contribution: we propose and study an algorithm that ensures fairness across flows. The algorithm improves the performance of TCP flows in the presence of multiple UDP flows admitted under an admission algorithm, while maintaining the minimum QoS requirements of the UDP flows. The advantage of the algorithm is that it requires no changes to the TCP/IP stack; control is achieved entirely through receiver window control.
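The receiver-window mechanism can be made concrete with a small sketch: since a TCP flow's throughput is bounded by rwnd/RTT, advertising a window of roughly fair_share × RTT caps the flow near its fair share without modifying the TCP/IP stack. The fair-share computation and the UDP admission control are assumed to happen elsewhere; names and the MSS rounding are illustrative.

```python
import math

def receiver_window_bytes(fair_share_bps, rtt_seconds, mss_bytes=1460):
    """Advertised receiver window that caps a TCP flow near its fair share:
    TCP throughput <= rwnd / RTT, so rwnd ~= fair_share * RTT limits the flow."""
    rwnd = fair_share_bps / 8.0 * rtt_seconds          # bytes in flight per RTT
    # Round up to whole segments so at least one MSS can always be sent.
    segments = max(1, math.ceil(rwnd / mss_bytes))
    return segments * mss_bytes

# Example: a 2 Mbit/s fair share over a 100 ms RTT path -> ~26 kB window.
print(receiver_window_bytes(2_000_000, 0.100))
```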