120 results for Branch and bounds
Abstract:
We derive bounds on leptonic double mass insertions of the type $\delta^{l}_{i4}\delta^{l}_{4j}$ in the four-generation MSSM, using the present limits on $l_i \to l_j + \gamma$. Two main features distinguish the rates of these processes in MSSM4 from those in MSSM3: (a) $\tan\beta$ is restricted to be very small ($\lesssim 3$), and (b) the fourth-generation leptons are heavy. In spite of the small $\tan\beta$, there is an enhancement in amplitudes with LL-RR ($\delta^{LL}_{i4}\delta^{RR}_{4j}$) type insertions, which pick up the mass of the fourth-generation lepton, $m_{\tau'}$. We find these bounds to be at least two orders of magnitude more stringent than those in MSSM3.
Abstract:
To a reasonable approximation, the secondary structure of an RNA molecule is determined by Watson-Crick pairing without pseudo-knots, in such a way as to minimise the number of unpaired bases. We show that this minimal number is determined by the maximal conjugacy-invariant pseudo-norm on the free group on two generators, subject to bounds on the generators. This allows us to construct lower bounds on the minimal number of unpaired bases by constructing conjugacy-invariant pseudo-norms. We show that one such construction, based on isometric actions on metric spaces, gives a sharp lower bound. A major goal here is to formulate a purely mathematical question, based on considering orthogonal representations, which we believe is of some interest independent of its biological roots.
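For readers unfamiliar with the central object, here is a hedged restatement in generic notation (the generator names $x, y$ and the bounds $c_x, c_y$ are illustrative, not the paper's):

```latex
\nu\colon F_2 = \langle x, y \rangle \to [0,\infty), \qquad
\nu(gh) \le \nu(g) + \nu(h), \qquad
\nu\bigl(ghg^{-1}\bigr) = \nu(h) \quad \forall\, g, h \in F_2,
```

subject to generator bounds of the form $\nu(x) \le c_x$ and $\nu(y) \le c_y$; the maximal such $\nu$, presumably evaluated on a word encoding the RNA sequence, determines the minimal number of unpaired bases.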
Abstract:
In this article, we consider the single-machine scheduling problem with past-sequence-dependent (p-s-d) setup times and a learning effect. The setup times are proportional to the lengths of the jobs already scheduled, i.e., they are p-s-d setup times. The learning effect reduces the actual processing time of a job because the workers do the same job or activity repeatedly; hence, the processing time of a job depends on its position in the sequence. In this study, we consider the total absolute difference in completion times (TADC) as the objective function. This problem is denoted 1/LE, s_psd/TADC in Kuo and Yang (2007) ('Single Machine Scheduling with Past-sequence-dependent Setup Times and Learning Effects', Information Processing Letters, 102, 22-26). Two parameters a and b denote the constant learning index and the normalising index, respectively. A parametric analysis of b on the 1/LE, s_psd/TADC problem for a given value of a is carried out in this study. In addition, a computational algorithm is developed to obtain the number of optimal sequences and the range of b in which each of the sequences is optimal, for a given value of a. We derive two bounds, b* for the normalising constant b and a* for the learning index a. We also show that, when a < a* or b > b*, the optimal sequence is obtained by arranging the longest job in the first position and the rest of the jobs in shortest-processing-time order.
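A minimal brute-force sketch of the model as described in the abstract (position-based learning $p\,r^{a}$ and setups proportional to past actual work); the function names and the tiny instance are mine, and the exact Kuo-Yang formulas should be checked against the paper:

```python
from itertools import permutations

def completion_times(seq, a, b):
    """Completion times under a position-based learning effect and
    past-sequence-dependent (p-s-d) setup times.

    seq : normal processing times, in scheduled order
    a   : learning index (a <= 0); actual time in position r is p * r**a
    b   : normalising constant (b >= 0) scaling the p-s-d setup time
    """
    C, elapsed, past = [], 0.0, 0.0
    for r, p in enumerate(seq, start=1):
        actual = p * r ** a        # learning effect
        setup = b * past           # setup proportional to past actual work
        elapsed += setup + actual
        C.append(elapsed)
        past += actual
    return C

def tadc(seq, a, b):
    """Total absolute difference in completion times."""
    C = completion_times(seq, a, b)
    return sum(abs(Ci - Cj) for i, Ci in enumerate(C) for Cj in C[i + 1:])

def best_sequences(jobs, a, b):
    """Brute-force the optimal sequences (small instances only)."""
    best = min(tadc(list(s), a, b) for s in permutations(jobs))
    return [s for s in permutations(jobs)
            if abs(tadc(list(s), a, b) - best) < 1e-9]

if __name__ == "__main__":
    print(best_sequences([3.0, 1.0, 4.0, 2.0], a=-0.2, b=0.1))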
Abstract:
Localization of underwater acoustic sources is a problem of great interest in ocean acoustics. Several algorithms exist for source localization based on array signal processing, and it is of interest to know the theoretical performance limits of these estimators. In this paper we develop expressions for the Cramér-Rao bound (CRB) on the variance of direction-of-arrival (DOA) and range-depth estimators of underwater acoustic sources in a shallow range-independent ocean for the case of generalized Gaussian noise. We then study the performance of some popular source localization techniques, through simulations, for DOA/range-depth estimation of underwater acoustic sources in a shallow ocean by comparing the variance of the estimators with the corresponding CRBs.
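For reference, the generic form of the bound the paper specializes (the standard Fisher-information statement for an unbiased estimator; the shallow-ocean expressions themselves are in the paper):

```latex
\operatorname{var}\bigl(\hat{\theta}_i\bigr) \;\ge\;
\bigl[\mathbf{F}^{-1}(\boldsymbol{\theta})\bigr]_{ii},
\qquad
[\mathbf{F}(\boldsymbol{\theta})]_{kl} \;=\; -\,\mathbb{E}\!\left[
\frac{\partial^{2}\ln p(\mathbf{x};\boldsymbol{\theta})}
     {\partial\theta_k\,\partial\theta_l}\right].
```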
Abstract:
We consider the problem of providing mean delay and average throughput guarantees in random-access fading wireless channels using the CSMA/CA algorithm. This problem becomes much more challenging when the scheduling is distributed, as is the case in a typical wireless local area network. We model the CSMA network using a novel queueing-network-based approach. The optimal throughput per device and the throughput-optimal policy in an M-device network are obtained. We provide a simple contention control algorithm that adapts the attempt probability based on the network load, and we obtain bounds on the packet transmission delay. The only information used is the number of devices in the network and the (delayed) queue length at each device. The proposed algorithms stay within the requirements of the IEEE 802.11 standard.
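A toy sketch of the kind of load-adaptive contention control described; the update rule and all names below are illustrative assumptions, not the paper's algorithm:

```python
def attempt_probability(num_devices, delayed_queue_lengths,
                        p_min=0.001, p_max=0.1):
    """Illustrative attempt-probability adaptation for CSMA/CA.

    Backs off as the network grows and as the (delayed) backlog reported
    by the devices builds up; clipped to [p_min, p_max] so the behaviour
    stays within what an 802.11-style contention window can realise.
    """
    backlog = sum(delayed_queue_lengths)
    load = num_devices + backlog          # crude load estimate
    return min(p_max, max(p_min, 1.0 / max(load, 1)))

# Example: 10 devices with delayed queue-length reports.
print(attempt_probability(10, [3, 0, 1, 2, 0, 0, 5, 1, 0, 2]))
```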
Abstract:
Evaluating the probability of error in decision feedback equalizers is difficult due to the presence of a hard limiter in the feedback path. This paper derives upper and lower bounds on the probabilities of a single error and of multiple error patterns. The bounds are fairly tight, and they can also be used to select proper tap gains for the equalizer.
Abstract:
Upper bounds on the probability of error due to co-channel interference are proposed in this correspondence. The bounds are easy to compute and can be fairly tight.
Abstract:
A reliable method for estimating the service life of a structural element is a prerequisite for service life design. A new methodology is proposed for durability-based service life estimation of reinforced concrete flexural elements with respect to chloride-induced corrosion of reinforcement. The methodology takes into consideration the fuzzy and random uncertainties associated with the variables involved in service life estimation by using a hybrid method that combines the vertex method of fuzzy set theory with Monte Carlo simulation. It is also shown how to determine, with minimal computational effort, the bounds on the characteristic value of failure probability from the resulting fuzzy set for failure probability. Using the methodology, the bounds on the characteristic value of failure probability for a reinforced concrete T-beam bridge girder have been determined. The service life of the structural element is determined by comparing the upper bound of the characteristic value of failure probability with the target failure probability. The methodology will be useful for durability-based service life design and for making decisions regarding in-service inspections.
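A schematic of the hybrid vertex-method/Monte-Carlo idea at a single alpha-cut; the limit state, the variable names, and the one-variable example are illustrative assumptions, not the paper's model:

```python
import itertools
import random

def failure_probability_bounds(limit_state, fuzzy_intervals,
                               random_sampler, n_mc=10_000):
    """Bounds on failure probability via the vertex method + Monte Carlo.

    fuzzy_intervals : (lo, hi) interval bounds of the fuzzy variables at
                      one alpha-cut
    limit_state     : g(fuzzy_values, random_values); failure when g < 0
    random_sampler  : draws one realisation of the random variables
    """
    probs = []
    for vertex in itertools.product(*fuzzy_intervals):  # all interval corners
        fails = sum(limit_state(vertex, random_sampler()) < 0
                    for _ in range(n_mc))
        probs.append(fails / n_mc)
    return min(probs), max(probs)   # bounds for this alpha-cut

# Illustrative use: fuzzy resistance R vs. Gaussian load S, g = R - S.
if __name__ == "__main__":
    g = lambda fz, rnd: fz[0] - rnd[0]
    lo, hi = failure_probability_bounds(g, [(3.0, 3.5)],
                                        lambda: (random.gauss(2.0, 0.5),))
    print(lo, hi)
```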
Abstract:
The questions one should answer in engineering computations - deterministic, probabilistic/randomized, as well as heuristic - are (i) how good the computed results/outputs are and (ii) how much the outputs cost in terms of the amount of computation and the amount of storage used. Absolutely error-free quantities, as well as the completely errorless computations done in a natural process, can never be captured by any means at our disposal. While the computations in nature/natural processes, including the input real quantities, are exact, the computations that we do using a digital computer, or that are carried out in embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error, as a matter of hypothesis and not as a matter of assumption, is not less than 0.005 per cent. Here by error we mean relative error bounds. The fact that the exact error is never known, under any circumstances and in any context, implies that the term error means nothing but error-bounds. Further, in engineering computations, it is the relative error, or equivalently the relative error-bounds (and not the absolute error), that is supremely important in providing information about the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created from nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency due to human error, due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable or more easily solvable in practice. Thus, if we discover any inconsistency, or possibly any near-inconsistency, in a mathematical model, it is certainly due to one or more of the three foregoing factors. We do, however, go ahead and solve such inconsistent/near-inconsistent problems, and we get results that can be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It highlights the quality of the results/outputs by specifying relative error-bounds along with the associated confidence level, and the cost, viz. the amount of computation and of storage, through complexity. It points out the limitations of error-free computations (wherever possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of the use of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
Abstract:
Dendritic microenvironments defined by the dynamic internal cavities of a dendrimer were probed through the geometric isomerization of stilbene and azobenzene. A third-generation poly(alkyl aryl ether) dendrimer with a hydrophilic exterior and a hydrophobic interior was used as a reaction cavity in aqueous medium. The dynamic inner cavity sizes were varied by changing the alkyl linkers that connect the branch junctures from ethyl to n-pentyl moieties (C2G3-C5G3). Dendrimers built with the n-pentyl linker were found to afford higher solubilities of stilbene and azobenzene. Direct irradiation of trans-stilbene showed that the C5G3 and C4G3 dendrimers afforded considerable phenanthrene formation, in addition to cis-stilbene, whereas C3G3 and C2G3 gave only cis-stilbene. An electron-transfer-sensitized trans-cis isomerization, using cresyl violet perchlorate as the sensitizer, also led to similar results. Thermal isomerization of cis-azobenzene to trans-azobenzene within the dendritic microenvironments revealed that the activation energy of the cis- to trans-isomerization increased in the series C5G3 < C4G3 < C3G3.
Abstract:
In this paper, the diversity-multiplexing gain tradeoff (DMT) of single-source, single-sink (ss-ss), multihop relay networks having slow-fading links is studied. In particular, the two end-points of the DMT of ss-ss full-duplex networks are determined, by showing that the maximum achievable diversity gain is equal to the min-cut and that the maximum multiplexing gain is equal to the min-cut rank, the latter by using an operational connection to a deterministic network. Also included in the paper are several results that aid in the computation of the DMT of networks operating under amplify-and-forward (AF) protocols. In particular, it is shown that the colored noise encountered in amplify-and-forward protocols can be treated as white for the purpose of DMT computation, lower bounds on the DMT of lower-triangular channel matrices are derived, and the DMT of parallel MIMO channels is computed. All protocols appearing in the paper are explicit and rely only upon AF relaying. Half-duplex networks and explicit coding schemes are studied in a companion paper.
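For reference, the standard DMT definition whose end-points the paper pins down (generic notation): a family of schemes with rate $R(\mathrm{SNR}) = r \log \mathrm{SNR}$ and error probability $P_e(\mathrm{SNR})$ achieves diversity gain

```latex
d(r) \;=\; -\lim_{\mathrm{SNR}\to\infty}
\frac{\log P_e(\mathrm{SNR})}{\log \mathrm{SNR}},
```

so the maximum diversity gain is $d(0)$ and the maximum multiplexing gain is the largest $r$ with $d(r) > 0$; the abstract identifies these two end-points with the min-cut and the min-cut rank, respectively.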
Abstract:
The throughput-optimal discrete-rate adaptation policy, when nodes are subject to constraints on the average power and bit error rate, is governed by a power control parameter for which a closed-form characterization has remained an open problem. The parameter is essential in determining the rate adaptation thresholds and the transmit rate and power at any time, and in ensuring adherence to the power constraint. We derive novel, insightful bounds and approximations that characterize the power control parameter and the throughput in closed form. The results are comprehensive, as they apply to the general class of Nakagami-m ($m \ge 1$) fading channels, which includes Rayleigh fading, to uncoded and coded modulation, and to single- and multi-node systems with selection. The results are appealing, as they are provably tight in the asymptotic large-average-power regime, and are designed and verified to be accurate even for smaller average powers.
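In the usual formulation of such problems, the power control parameter plays the role of a Lagrange multiplier on the average-power constraint; a plausible generic statement of the underlying optimization (this notation is mine, not necessarily the paper's):

```latex
\max_{\{\gamma_k\},\,\{P_k\}} \;
\sum_k r_k \,\Pr\bigl(\gamma_k \le \gamma < \gamma_{k+1}\bigr)
\quad\text{s.t.}\quad
\mathbb{E}\bigl[P(\gamma)\bigr] \le \bar{P},
\qquad
\mathrm{BER} \le \mathrm{BER}_0,
```

where $\gamma$ is the channel power gain, $r_k$ is the rate used when $\gamma$ falls in the $k$-th threshold interval, and the thresholds $\gamma_k$ are set by the multiplier.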
Abstract:
Let G be a simple, undirected, finite graph with vertex set V(G) and edge set E(G). A k-dimensional box is a Cartesian product of closed intervals $[a_1, b_1] \times [a_2, b_2] \times \cdots \times [a_k, b_k]$. The boxicity of G, box(G), is the minimum integer k such that G can be represented as the intersection graph of k-dimensional boxes; i.e., each vertex is mapped to a k-dimensional box and two vertices are adjacent in G if and only if their corresponding boxes intersect. Let $\mathcal{P} = (S, P)$ be a poset, where S is the ground set and P is a reflexive, antisymmetric and transitive binary relation on S. The dimension of $\mathcal{P}$, $\dim(\mathcal{P})$, is the minimum integer t such that $\mathcal{P}$ can be expressed as the intersection of t total orders. Let $G_{\mathcal{P}}$ be the underlying comparability graph of $\mathcal{P}$; i.e., S is the vertex set and two vertices are adjacent if and only if they are comparable in $\mathcal{P}$. It is a well-known fact that posets with the same underlying comparability graph have the same dimension. The first result of this paper links the dimension of a poset to the boxicity of its underlying comparability graph. In particular, we show that for any poset $\mathcal{P}$, $\mathrm{box}(G_{\mathcal{P}})/(\chi(G_{\mathcal{P}}) - 1) \le \dim(\mathcal{P}) \le 2\,\mathrm{box}(G_{\mathcal{P}})$, where $\chi(G_{\mathcal{P}})$ is the chromatic number of $G_{\mathcal{P}}$ and $\chi(G_{\mathcal{P}}) \ne 1$. It immediately follows that if $\mathcal{P}$ is a height-2 poset, then $\mathrm{box}(G_{\mathcal{P}}) \le \dim(\mathcal{P}) \le 2\,\mathrm{box}(G_{\mathcal{P}})$, since the underlying comparability graph of a height-2 poset is a bipartite graph. The second result of the paper relates the boxicity of a graph G to a natural partial order associated with the extended double cover of G, denoted $G_c$. Note that $G_c$ is a bipartite graph with partite sets A and B, which are copies of V(G), such that, corresponding to every $u \in V(G)$, there are two vertices $u_A \in A$ and $u_B \in B$, and $\{u_A, v_B\}$ is an edge in $G_c$ if and only if either $u = v$ or u is adjacent to v in G. Let $\mathcal{P}_c$ be the natural height-2 poset associated with $G_c$, obtained by making A the set of minimal elements and B the set of maximal elements. We show that $\mathrm{box}(G)/2 \le \dim(\mathcal{P}_c) \le 2\,\mathrm{box}(G) + 4$. These results have some immediate and significant consequences. The upper bound $\dim(\mathcal{P}) \le 2\,\mathrm{box}(G_{\mathcal{P}})$ allows us to derive hitherto unknown upper bounds for poset dimension, such as $\dim(\mathcal{P}) \le 2\,\mathrm{treewidth}(G_{\mathcal{P}}) + 4$, since the boxicity of any graph is known to be at most its treewidth plus 2. In the other direction, using the already known bounds for partial order dimension we get the following: (1) The boxicity of any graph with maximum degree $\Delta$ is $O(\Delta \log^2 \Delta)$, which is an improvement over the best-known upper bound of $\Delta^2 + 2$. (2) There exist graphs with boxicity $\Omega(\Delta \log \Delta)$. This disproves the conjecture that the boxicity of a graph is $O(\Delta)$. (3) There exists no polynomial-time algorithm to approximate the boxicity of a bipartite graph on n vertices within a factor of $O(n^{0.5 - \epsilon})$ for any $\epsilon > 0$, unless NP = ZPP.
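As an illustration of the extended double cover construction $G_c$ described above, here is a small self-contained sketch (the function and variable names are mine):

```python
def extended_double_cover(adj):
    """Extended double cover G_c of a graph G (a bipartite graph).

    adj : dict mapping each vertex u of G to the set of its neighbours.
    Vertices of G_c are (u, 'A') and (u, 'B'); {u_A, v_B} is an edge
    iff u == v or u is adjacent to v in G.
    """
    edges = set()
    for u in adj:
        edges.add(((u, 'A'), (u, 'B')))        # the u = v edges
        for v in adj[u]:
            edges.add(((u, 'A'), (v, 'B')))    # the adjacency edges
    return edges

# Example: the path a - b - c.
if __name__ == "__main__":
    G = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
    for e in sorted(extended_double_cover(G)):
        print(e)
```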
Abstract:
Atmospheric chemistry is the branch of atmospheric science whose major focus is the composition of the Earth's atmosphere. Knowledge of atmospheric composition is essential because of its interaction with (solar and terrestrial) radiation and the interactions of atmospheric species (gaseous and particulate matter) with living organisms. Since atmospheric chemistry covers a vast range of topics, this article focuses on the chemistry of atmospheric aerosols, with special emphasis on the Indian region. I present a review of the current state of knowledge of aerosol chemistry in India and propose future directions.
Abstract:
Given two independent Poisson point processes $\Phi^{(1)}, \Phi^{(2)}$ in $\mathbb{R}^d$, the AB Poisson Boolean model is the graph with the points of $\Phi^{(1)}$ as vertices and with edges between any pair of points for which the intersection of balls of radius $2r$ centered at these points contains at least one point of $\Phi^{(2)}$. This is a generalization of the AB percolation model on discrete lattices. We show the existence of percolation for all $d \ge 2$ and derive bounds for a critical intensity. We also provide a characterization of this critical intensity when $d = 2$. To study the connectivity problem, we consider independent Poisson point processes of intensities $n$ and $\tau n$ in the unit cube. The AB random geometric graph is defined as above, but with balls of radius $r$. We derive a weak law result for the largest nearest-neighbor distance and almost-sure asymptotic bounds for the connectivity threshold.
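A small simulation sketch of the AB random geometric graph in the unit square; the sampling shortcut (fixed point counts in place of Poisson-distributed ones) and all names are my assumptions:

```python
import math
import random

def ab_random_geometric_graph(n, tau, r, dim=2, seed=0):
    """Sample an AB random geometric graph in the unit cube (a sketch).

    Points of Phi1 (intensity n) are the vertices; u, v in Phi1 are joined
    iff some point of Phi2 (intensity tau * n) lies within distance r of
    both, i.e. in the intersection of the two balls of radius r.
    Poisson point counts are approximated by their means for simplicity.
    """
    rng = random.Random(seed)
    phi1 = [tuple(rng.random() for _ in range(dim)) for _ in range(n)]
    phi2 = [tuple(rng.random() for _ in range(dim))
            for _ in range(int(tau * n))]
    edges = [(i, j)
             for i in range(len(phi1)) for j in range(i + 1, len(phi1))
             if any(math.dist(phi1[i], w) <= r and math.dist(phi1[j], w) <= r
                    for w in phi2)]
    return phi1, edges

if __name__ == "__main__":
    pts, E = ab_random_geometric_graph(n=200, tau=1.0, r=0.08)
    print(len(E), "edges")
```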