943 results for Branch and bounds
Abstract:
The problem of sensor-network-based distributed intrusion detection in the presence of clutter is considered. It is argued that sensing is best regarded as a local phenomenon, in that only sensors in the immediate vicinity of an intruder are triggered. In such a setting, lack of knowledge of intruder location gives rise to correlated sensor readings. A signal-space viewpoint is introduced in which the noise-free sensor readings associated with intruder and clutter appear as surfaces f(s) and f(g), and the problem reduces to one of determining, in distributed fashion, whether the current noisy sensor reading is best classified as intruder or clutter. Two approaches to distributed detection are pursued. In the first, a decision surface separating f(s) and f(g) is identified using Neyman-Pearson criteria. Thereafter, the individual sensor nodes interactively exchange bits to determine whether the sensor readings lie on one side or the other of the decision surface. Bounds on the number of bits that need to be exchanged are derived, based on communication-complexity (CC) theory. A lower bound derived for the two-party average-case CC of general functions is compared against the performance of a greedy algorithm. Extension to the multi-party case is straightforward and is briefly discussed. The average-case CC of the relevant greater-than (GT) function is characterized to within two bits. Under the second approach, each sensor node broadcasts a single bit arising from appropriate two-level quantization of its own sensor reading, keeping in mind the fusion rule to be subsequently applied at a local fusion center. The optimality of a threshold test as a quantization rule is proved under simplifying assumptions. Finally, results from a QualNet simulation of the algorithms are presented, including intruder tracking using a naive polynomial-regression algorithm.
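A minimal sketch of the second approach under illustrative assumptions: the threshold, the Gaussian noise model, the sensor gains, and the k-out-of-n counting rule at the fusion center are all stand-ins, not the paper's choices.

import random

def sensor_bit(reading, threshold=0.5):
    """Two-level (one-bit) quantization of a sensor reading."""
    return 1 if reading > threshold else 0

def fuse(bits, k=2):
    """k-out-of-n counting fusion rule at the local fusion center."""
    return sum(bits) >= k

random.seed(0)
intruder_present = True
signal = 1.0 if intruder_present else 0.0
# Sensing is local: only sensors near the intruder see appreciable gain g.
gains = (0.9, 0.7, 0.1, 0.05)
readings = [signal * g + random.gauss(0, 0.3) for g in gains]
bits = [sensor_bit(r) for r in readings]
print(bits, "->", "intruder" if fuse(bits) else "clutter")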
Abstract:
We study the trade-off between delivery delay and energy consumption in delay-tolerant mobile wireless networks that use two-hop relaying. The source may not have perfect knowledge of the delivery status at every instant. We formulate the problem as a stochastic control problem with partial information and study structural properties of the optimal policy. We also propose a simple suboptimal policy. We then compare the performance of the suboptimal policy against that of the optimal control with perfect information; the latter provides a bound on the performance of the proposed partial-information policy. Several other related open-loop policies are also compared against these bounds.
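The trade-off can be made concrete with a toy simulation; the meeting model, the parameter values, and the fixed copy-probability policy below are assumptions for illustration, not the paper's stochastic control formulation.

import random

def simulate(copy_prob, p_meet=0.3, p_dst=0.02, seed=0, trials=2000):
    """Mean delivery delay and mean energy (copies made) under a fixed policy."""
    rng = random.Random(seed)
    delays, energies = [], []
    for _ in range(trials):
        copies, t = 1, 0          # the source itself always holds the packet
        while True:
            t += 1
            # Some holder of the packet meets the destination this step.
            if rng.random() < 1 - (1 - p_dst) ** copies:
                break
            # Source meets a fresh relay and may spend energy on a copy.
            if rng.random() < p_meet and rng.random() < copy_prob:
                copies += 1
        delays.append(t)
        energies.append(copies)
    return sum(delays) / trials, sum(energies) / trials

for u in (0.1, 0.5, 1.0):    # more aggressive copying: lower delay, more energy
    print(u, simulate(u))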
Abstract:
Given two independent Poisson point processes Φ(1), Φ(2) in R^d, the AB Poisson Boolean model is the graph with points of Φ(1) as vertices and with edges between any pair of points for which the intersection of balls of radius 2r centred at these points contains at least one point of Φ(2). This is a generalization of the AB percolation model on discrete lattices. We show the existence of percolation for all d ≥ 2 and derive bounds for a critical intensity. We also provide a characterization for this critical intensity when d = 2. To study the connectivity problem, we consider independent Poisson point processes of intensities n and cn in the unit cube. The AB random geometric graph is defined as above but with balls of radius r. We derive a weak law result for the largest nearest-neighbour distance and almost sure asymptotic bounds for the connectivity threshold.
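The graph definition translates directly into code. The sketch below builds the AB random geometric graph (the radius-r version) in the unit square for illustrative intensities; an edge joins two points of the first process iff some point of the second process lies within distance r of both.

import math, random

def poisson_points(mean, rng, dim=2):
    """Poisson(mean) many uniform points in the unit cube, by inversion sampling."""
    n, p, target = 0, math.exp(-mean), rng.random()
    cum = p
    while cum < target:
        n += 1
        p *= mean / n
        cum += p
    return [tuple(rng.random() for _ in range(dim)) for _ in range(n)]

def ab_rgg(n1, n2, r, seed=0):
    """AB random geometric graph: vertices from Phi1, witnesses from Phi2."""
    rng = random.Random(seed)
    phi1, phi2 = poisson_points(n1, rng), poisson_points(n2, rng)
    edges = [(i, j)
             for i in range(len(phi1)) for j in range(i + 1, len(phi1))
             if any(math.dist(w, phi1[i]) <= r and math.dist(w, phi1[j]) <= r
                    for w in phi2)]
    return phi1, phi2, edges

phi1, phi2, edges = ab_rgg(n1=50, n2=100, r=0.15)
print(len(phi1), "vertices,", len(edges), "edges")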
Abstract:
We consider single-source single-sink (ss-ss) multi-hop relay networks, with slow-fading links and single-antenna half-duplex relay nodes. While two-hop cooperative relay networks have been studied in great detail in terms of the diversity-multiplexing tradeoff (DMT), few results are available for more general networks. In this paper, we identify two families of networks that are multi-hop generalizations of the two-hop network: K-Parallel-Path (KPP) networks and layered networks. KPP networks can be viewed as the union of K node-disjoint parallel relaying paths, each of length greater than one. KPP networks are then generalized to KPP(I) networks, which permit interference between paths, and to KPP(D) networks, which possess a direct link from source to sink. We characterize the DMT of these families of networks completely for K > 3. Layered networks are networks comprising layers of relays, with edges existing only between adjacent layers and with more than one relay in each layer. We prove that a linear DMT between the maximum diversity d_max and the maximum multiplexing gain of 1 is achievable for single-antenna fully-connected layered networks. This is shown to be equal to the optimal DMT if the number of relaying layers is less than 4. For multiple-antenna KPP and layered networks, we provide an achievable DMT, which is significantly better than known lower bounds for half-duplex networks. For arbitrary multi-terminal wireless networks with multiple source-sink pairs, the maximum achievable diversity is shown to be equal to the min-cut between the corresponding source and sink, irrespective of whether the network has half-duplex or full-duplex relays. For arbitrary ss-ss single-antenna directed acyclic networks with full-duplex relays, we prove that a linear tradeoff between maximum diversity and maximum multiplexing gain is achievable. Along the way, we derive the optimal DMT of a generalized parallel channel and derive lower bounds for the DMT of triangular channel matrices, which are useful in DMT computation of various protocols. We also give alternative and often simpler proofs of several existing results, and show that codes achieving full diversity on a MIMO Rayleigh fading channel achieve full diversity on arbitrary fading channels. All protocols in this paper are explicit and use only amplify-and-forward (AF) relaying. We also construct codes with short block lengths based on cyclic division algebras that achieve the optimal DMT for all the proposed schemes. Two key implications of the results in this paper are that the half-duplex constraint does not entail any rate loss for a large class of cooperative networks, and that simple AF protocols are often sufficient to attain the optimal DMT.
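Spelled out, the linear tradeoff asserted above is simply the straight line joining the point (r, d) = (0, d_max) to (1, 0):

d(r) = d_max * (1 - r),  for 0 <= r <= 1,

where r is the multiplexing gain and d(r) the diversity gain. This is the straight-line interpolation between the two extremes the abstract names, not a new result.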
Abstract:
Given an unweighted undirected or directed graph with n vertices, m edges and edge connectivity c, we present a new deterministic algorithm for edge splitting. Our algorithm splits off any specified subset S of vertices satisfying standard conditions (even degree for the undirected case and in-degree ≥ out-degree for the directed case) while maintaining connectivity c for vertices outside S, in Õ(m + nc^2) time for an undirected graph and Õ(mc) time for a directed graph. This improves the current best deterministic time bounds due to Gabow [8], who splits off a single vertex in Õ(nc^2 + m) time for an undirected graph and Õ(mc) time for a directed graph. Further, for appropriate ranges of n, c, |S|, it improves the current best randomized bounds due to Benczúr and Karger [2], who split off a single vertex in an undirected graph in Õ(n^2) Monte Carlo time. We give two applications of our edge splitting algorithms. Our first application is a sub-quadratic (in n) algorithm to construct Edmonds' arborescences. A classical result of Edmonds [5] shows that an unweighted directed graph with c edge-disjoint paths from any particular vertex r to every other vertex has exactly c edge-disjoint arborescences rooted at r. For a c-edge-connected unweighted undirected graph, the same theorem holds on the digraph obtained by replacing each undirected edge by two directed edges, one in each direction. The current fastest construction of these arborescences, due to Gabow [7], takes Õ(n^2 c^2) time. Our algorithm takes Õ(nc^3 + m) time for the undirected case and Õ(nc^4 + mc) time for the directed case. The second application of our splitting algorithm is a new Steiner edge connectivity algorithm for undirected graphs, which matches the best known bound of Õ(nc^2 + m) time due to Bhalgat et al. [3]. Finally, our algorithm can also be viewed as an alternative proof of the existential edge splitting theorems due to Lovász [9] and Mader [11].
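For intuition, the splitting-off operation itself (not the paper's fast algorithm) can be prototyped by brute force: repeatedly replace an admissible pair of edges (s,u), (s,v) by a single edge (u,v), accepting a pair only if every pairwise connectivity among the remaining vertices is preserved. Function names and the max-flow connectivity check are illustrative, and the running time is nowhere near the bounds above.

import itertools
import networkx as nx

def local_conn(G, u, v):
    """Local edge connectivity of a multigraph via max-flow."""
    D = nx.DiGraph()
    for a, b in G.edges():
        for x, y in ((a, b), (b, a)):
            if D.has_edge(x, y):
                D[x][y]["capacity"] += 1
            else:
                D.add_edge(x, y, capacity=1)
    return nx.maximum_flow_value(D, u, v)

def split_off(G, s):
    """Split off all edges at s (even degree assumed), preserving pairwise
    connectivity among the remaining vertices, by brute-force search."""
    G = nx.MultiGraph(G)
    rest = [v for v in G if v != s]
    req = {(u, v): local_conn(G, u, v)
           for u, v in itertools.combinations(rest, 2)}
    while G.degree(s) > 0:
        for u, v in itertools.combinations(sorted(G.neighbors(s)), 2):
            H = G.copy()
            H.remove_edge(s, u)
            H.remove_edge(s, v)
            H.add_edge(u, v)   # replace the pair (s,u),(s,v) by (u,v)
            if all(local_conn(H, x, y) >= k for (x, y), k in req.items()):
                G = H
                break
        else:
            raise RuntimeError("no admissible pair (hypotheses violated)")
    G.remove_node(s)
    return G

G = nx.MultiGraph([(1, 2), (2, 3), (3, 1), (1, 0), (0, 2)])  # deg(0) = 2, even
print(sorted(split_off(G, 0).edges()))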
Abstract:
We derive bounds on leptonic double mass insertions of the type (δ^l)_{i4}(δ^l)_{4j} in the four-generation MSSM, using the present limits on l_i → l_j + γ. Two main features distinguish the rates of these processes in MSSM4 from MSSM3: (a) tan β is restricted to be very small (≲ 3), and (b) the fourth-generation leptons have large masses. In spite of the small tan β, there is an enhancement in amplitudes with LL-RR ((δ^l_{LL})_{i4}(δ^l_{RR})_{4j}) type insertions, which pick up the mass m_{τ'} of the fourth-generation lepton. We find these bounds to be at least two orders of magnitude more stringent than those in MSSM3.
Abstract:
To a reasonable approximation, the secondary structure of an RNA molecule is determined by Watson-Crick pairing without pseudo-knots, in such a way as to minimise the number of unpaired bases. We show that this minimal number is determined by the maximal conjugacy-invariant pseudo-norm on the free group on two generators, subject to bounds on the generators. This allows us to construct lower bounds on the minimal number of unpaired bases by constructing conjugacy-invariant pseudo-norms. We show that one such construction, based on isometric actions on metric spaces, gives a sharp lower bound. A major goal here is to formulate a purely mathematical question, based on considering orthogonal representations, which we believe is of some interest independent of its biological roots.
Abstract:
In this article, we consider the single-machine scheduling problem with past-sequence-dependent (p-s-d) setup times and a learning effect. The setup times are proportional to the length of the jobs already scheduled, i.e. p-s-d setup times. The learning effect reduces the actual processing time of a job because the workers are repeatedly involved in the same job or activity. Hence, the processing time of a job depends on its position in the sequence. In this study, we consider the total absolute difference in completion times (TADC) as the objective function. This problem is denoted 1/LE, s_psd/TADC in Kuo and Yang (2007) ('Single Machine Scheduling with Past-sequence-dependent Setup Times and Learning Effects', Information Processing Letters, 102, 22-26). Two parameters a and b denote the constant learning index and the normalising index, respectively. We carry out a parametric analysis of b on the 1/LE, s_psd/TADC problem for a given value of a. In addition, a computational algorithm is developed to obtain the number of optimal sequences and the range of b in which each of the sequences is optimal, for a given value of a. We derive two bounds: b* for the normalising constant b and a* for the learning index a. We also show that, when a < a* or b > b*, the optimal sequence is obtained by placing the longest job in the first position and the remaining jobs in shortest-processing-time order.
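Under the common formulation assumed here (actual processing time p_j * r^a for a job in position r, setup time equal to b times the total actual processing already completed), TADC for a given sequence can be evaluated directly; the brute-force search below is only meant for tiny illustrative instances.

from itertools import permutations

def tadc(seq, a, b):
    """Total absolute difference in completion times for a job sequence."""
    completions, elapsed, processed = [], 0.0, 0.0
    for r, p in enumerate(seq, start=1):
        actual = p * r ** a          # position-based learning effect (a <= 0)
        setup = b * processed        # past-sequence-dependent setup time
        elapsed += setup + actual
        processed += actual
        completions.append(elapsed)
    return sum(abs(ci - cj) for i, ci in enumerate(completions)
               for cj in completions[i + 1:])

jobs = [5.0, 2.0, 8.0, 3.0]
best = min(permutations(jobs), key=lambda s: tadc(s, a=-0.2, b=0.5))
print(best, tadc(best, a=-0.2, b=0.5))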
Abstract:
Localization of underwater acoustic sources is a problem of great interest in the area of ocean acoustics. Several algorithms exist for source localization based on array signal processing, and it is of interest to know the theoretical performance limits of these estimators. In this paper we develop expressions for the Cramer-Rao bound (CRB) on the variance of direction-of-arrival (DOA) and range-depth estimators of underwater acoustic sources in a shallow range-independent ocean, for the case of generalized Gaussian noise. We then study, through simulations, the performance of some popular source localization techniques for DOA/range-depth estimation in a shallow ocean, by comparing the variance of the estimators with the corresponding CRBs.
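As a generic illustration of how such bounds are computed (for a deterministic signal in real white Gaussian noise, not the abstract's shallow-ocean propagation model or generalized Gaussian noise), the CRB is the diagonal of the inverse Fisher information matrix; the plane-wave array model below is a textbook stand-in.

import numpy as np

def crb(mu, theta, sigma2, eps=1e-6):
    """Numerical CRB for a real-valued model mu(theta) in white Gaussian noise."""
    theta = np.asarray(theta, dtype=float)
    base = mu(theta)
    G = np.empty((base.size, theta.size))
    for k in range(theta.size):            # forward-difference Jacobian of mu
        step = np.zeros_like(theta)
        step[k] = eps
        G[:, k] = (mu(theta + step) - base) / eps
    J = (G.T @ G) / sigma2                 # Fisher information matrix
    return np.diag(np.linalg.inv(J))       # variance lower bounds

# Toy model: real snapshot of a unit-amplitude plane wave on a 10-element
# half-wavelength-spaced uniform linear array; theta[0] is the DOA in radians.
pos = np.arange(10)
model = lambda th: np.cos(np.pi * np.sin(th[0]) * pos)
print(crb(model, [0.3], sigma2=0.01))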
Abstract:
We consider the problem of providing mean-delay and average-throughput guarantees in random access fading wireless channels using the CSMA/CA algorithm. This problem becomes much more challenging when the scheduling is distributed, as is the case in a typical wireless local area network. We model the CSMA network using a novel queueing-network-based approach. The optimal throughput per device and the throughput-optimal policy in an M-device network are obtained. We provide a simple contention control algorithm that adapts the attempt probability based on the network load, and we obtain bounds on the packet transmission delay. The information we make use of is the number of devices in the network and the (delayed) queue length at each device. The proposed algorithms stay within the requirements of the IEEE 802.11 standard.
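A minimal caricature of load-adaptive contention control (not the paper's algorithm): in a slotted model where each of n backlogged devices attempts with probability p and a slot succeeds iff exactly one device attempts, setting p = 1/n maximizes the success probability n*p*(1-p)^(n-1), and adapting p to the load keeps throughput from collapsing as n grows.

import random

def run(n_devices, slots, adapt=True):
    """Fraction of slots with exactly one attempt (a success)."""
    successes = 0
    for _ in range(slots):
        p = 1.0 / n_devices if adapt else 0.5   # adapted vs. fixed attempt prob.
        attempts = sum(random.random() < p for _ in range(n_devices))
        successes += (attempts == 1)            # more than one attempt: collision
    return successes / slots

random.seed(1)
for n in (2, 10, 50):
    print(n, round(run(n, 20000), 3), round(run(n, 20000, adapt=False), 3))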
Abstract:
Evaluation of the probability of error in decision feedback equalizers is difficult due to the presence of a hard limiter in the feedback path. This paper derives upper and lower bounds on the probabilities of a single error and of multiple error patterns. The bounds are fairly tight, and they can also be used to select proper tap gains for the equalizer.
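The difficulty the bounds address can be seen in a Monte Carlo stand-in: a toy one-tap decision feedback equalizer in which a wrong decision feeds back through the hard limiter and corrupts later decisions. The channel, tap, and noise values below are illustrative assumptions.

import random

def dfe_ber(h1=0.6, sigma=0.4, n=100_000, seed=0):
    """Bit error rate of a one-tap DFE over channel [1, h1], BPSK, AWGN."""
    rng = random.Random(seed)
    prev_sym, prev_dec, errors = 1.0, 1.0, 0
    for _ in range(n):
        sym = rng.choice((-1.0, 1.0))
        y = sym + h1 * prev_sym + rng.gauss(0.0, sigma)  # ISI from previous symbol
        dec = 1.0 if y - h1 * prev_dec >= 0 else -1.0    # cancel ISI with past decision
        errors += (dec != sym)                            # error may now propagate
        prev_sym, prev_dec = sym, dec
    return errors / n

print(dfe_ber())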
Abstract:
Upper bounds on the probability of error due to co-channel interference are proposed in this correspondence. The bounds are easy to compute and can be fairly tight.
Abstract:
A reliable method for estimating the service life of a structural element is a prerequisite for service life design. A new methodology is proposed for durability-based service life estimation of reinforced concrete flexural elements with respect to chloride-induced corrosion of reinforcement. The methodology takes into consideration the fuzzy and random uncertainties associated with the variables involved in service life estimation, by using a hybrid method that combines the vertex method of fuzzy set theory with the Monte Carlo simulation technique. It is also shown how to determine, with minimal computational effort, the bounds for the characteristic value of failure probability from the resulting fuzzy set for failure probability. Using the methodology, the bounds for the characteristic value of failure probability for a reinforced concrete T-beam bridge girder have been determined. The service life of the structural element is determined by comparing the upper bound of the characteristic value of failure probability with the target failure probability. The methodology will be useful for durability-based service life design and also for making decisions regarding in-service inspections.
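A hedged sketch of the hybrid idea: fuzzy inputs are reduced to intervals at a chosen alpha-cut, a Monte Carlo estimate of the failure probability is computed at every vertex (corner) of the resulting interval box, and the extremes over the vertices bound the failure probability. The limit state g = R - S with normal R and S is a generic stand-in, not the chloride-corrosion model of the abstract.

from itertools import product
import random

def pf_mc(mu_R, mu_S, n=50_000, seed=0):
    """P(failure) = P(R - S < 0) for normal R, S with coefficient of variation 0.15."""
    rng = random.Random(seed)
    fails = sum(rng.gauss(mu_R, 0.15 * mu_R) < rng.gauss(mu_S, 0.15 * mu_S)
                for _ in range(n))
    return fails / n

# Fuzzy means reduced to intervals at some alpha-cut (illustrative numbers).
R_cut, S_cut = (90.0, 110.0), (55.0, 70.0)
estimates = [pf_mc(r, s) for r, s in product(R_cut, S_cut)]   # all 4 vertices
print("failure probability bounds:", min(estimates), max(estimates))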
Abstract:
The questions that one should answer in engineering computations - deterministic, probabilistic/randomized, as well as heuristic - are (i) how good the computed results/outputs are and (ii) what the cost is, in terms of the amount of computation and the amount of storage used to obtain them. The absolutely error-free quantities, as well as the completely errorless computations occurring in a natural process, can never be captured by any means at our disposal. While the computations in nature/natural processes, including their input real quantities, are exact, the computations that we perform on a digital computer, or that are carried out in embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error - as a matter of hypothesis, not assumption - is not less than 0.005 per cent. Here, by error we mean relative error bounds. The fact that the exact error is never known, under any circumstances or in any context, implies that the term error denotes nothing but error bounds. Further, in engineering computations it is the relative error or, equivalently, the relative error bounds (and not the absolute error) that is supremely important in providing information about the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created from nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency due to human error, due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable, or more easily solvable, in practice. Thus if we discover any inconsistency, or possibly any near-inconsistency, in a mathematical model, it is certainly due to one or more of the three foregoing factors. We do, however, go ahead and solve such inconsistent/near-inconsistent problems, and we do obtain results that can be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It highlights the quality of the results/outputs by specifying relative error bounds along with the associated confidence level, and the cost, viz. the amount of computation and of storage, through complexity. It points out the limitations of error-free computation (wherever possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of the usage of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.