904 results for Lipschitzian bounds
Abstract:
The boxicity of a graph H, denoted by box(H), is the minimum integer k such that H is an intersection graph of axis-parallel k-dimensional boxes in R^k. In this paper we show that for a line graph G of a multigraph, box(G) <= 2 Delta(G) (⌈log_2 log_2 Delta(G)⌉ + 3) + 1, where Delta(G) denotes the maximum degree of G. Since G is a line graph, Delta(G) <= 2(chi(G) - 1), where chi(G) denotes the chromatic number of G, and therefore box(G) = O(chi(G) log_2 log_2 chi(G)). For the d-dimensional hypercube Q_d, we prove that box(Q_d) >= (1/2)(⌈log_2 log_2 d⌉ + 1). The question of finding a nontrivial lower bound for box(Q_d) was left open by Chandran and Sivadasan in [L. Sunil Chandran, Naveen Sivadasan, The cubicity of hypercube graphs, Discrete Mathematics 308 (23) (2008) 5795-5800]. The above results are consequences of bounds that we obtain for the boxicity of a fully subdivided graph (a graph that can be obtained by subdividing every edge of a graph exactly once). (C) 2011 Elsevier B.V. All rights reserved.
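A minimal numerical sketch of the two bounds stated above, assuming Delta(G) > 2 and d > 2 so that the iterated logarithm is defined; the function names are illustrative, not from the paper:

import math

def line_graph_boxicity_upper_bound(max_degree):
    # box(G) <= 2*Delta(G)*(ceil(log2 log2 Delta(G)) + 3) + 1, as stated in the abstract
    return 2 * max_degree * (math.ceil(math.log2(math.log2(max_degree))) + 3) + 1

def hypercube_boxicity_lower_bound(d):
    # box(Q_d) >= (ceil(log2 log2 d) + 1) / 2, as stated in the abstract
    return (math.ceil(math.log2(math.log2(d))) + 1) / 2

print(line_graph_boxicity_upper_bound(16))   # Delta = 16  -> 161
print(hypercube_boxicity_lower_bound(1024))  # d = 1024    -> 2.5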
Abstract:
A k-dimensional box is a Cartesian product R_1 x ... x R_k where each R_i is a closed interval on the real line. The boxicity of a graph G, denoted as box(G), is the minimum integer k such that G can be represented as the intersection graph of a collection of k-dimensional boxes. That is, two vertices are adjacent if and only if their corresponding boxes intersect. A circular arc graph is a graph that can be represented as the intersection graph of arcs on a circle. We show that if G is a circular arc graph which admits a circular arc representation in which no arc has length at least pi(alpha-1)/alpha for some integer alpha >= 2, then box(G) <= alpha (here the arcs are considered with respect to a unit circle). From this result we show that if G has maximum degree Delta < ⌊n(alpha-1)/(2 alpha)⌋ for some integer alpha >= 2, then box(G) <= alpha. We also demonstrate a graph having box(G) > alpha but with Delta = n(alpha-1)/(2 alpha) + n/(2 alpha(alpha+1)) + (alpha+2). For a proper circular arc graph G, we show that if Delta < ⌊n(alpha-1)/alpha⌋ for some integer alpha >= 2, then box(G) <= alpha. Let r be the cardinality of the minimum overlap set, i.e. the minimum number of arcs passing through any point on the circle, with respect to some circular arc representation of G. We show that for any circular arc graph G, box(G) <= r + 1 and this bound is tight. We show that if G admits a circular arc representation in which no family of k <= 3 arcs covers the circle, then box(G) <= 3, and if G admits a circular arc representation in which no family of k <= 4 arcs covers the circle, then box(G) <= 2. We also show that both these bounds are tight.
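A small sketch of how the degree-based result above could be applied, assuming only the inequality Delta < ⌊n(alpha-1)/(2 alpha)⌋ quoted in the abstract; the function name and the search cap are illustrative:

import math

def box_bound_from_degree(n, max_degree, alpha_max=64):
    # Smallest alpha >= 2 with max_degree < floor(n*(alpha-1)/(2*alpha));
    # by the abstract's result, box(G) <= alpha for that alpha.
    for alpha in range(2, alpha_max + 1):
        if max_degree < math.floor(n * (alpha - 1) / (2 * alpha)):
            return alpha
    return None  # no alpha up to alpha_max satisfies the condition

print(box_bound_from_degree(100, 30))  # -> 3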
Abstract:
This paper addresses a search problem with multiple limited-capability search agents in a partially connected dynamic networked environment under different information structures. A self-assessment-based decision-making scheme for multiple agents is proposed that uses a modified negotiation scheme with low communication overheads. The scheme has the attractive features of fast decision-making and scalability to a large number of agents without increasing the complexity of the algorithm. Two models of the self-assessment scheme are developed to study the effect of increased information exchange during decision-making. Analytical results on the maximum number of self-assessment cycles, the effect of increasing the communication range, the completeness of the algorithm, and lower and upper bounds on the search time are also obtained. The performance of the various self-assessment schemes, in terms of total uncertainty reduction in the search region under different information structures, is studied. It is shown that the communication requirement of the self-assessment scheme is almost half that of the negotiation schemes and its performance is close to the optimal solution. Comparisons with different sequential search schemes are also carried out. Note to Practitioners: In futuristic military and civilian applications such as search and rescue, surveillance, patrol, and oil-spill monitoring, a swarm of UAVs can be deployed to carry out the information-collection mission. These UAVs have limited sensor and communication ranges. In order to enhance the performance of the mission and to complete it quickly, cooperation between UAVs is important. Designing cooperative search strategies for multiple UAVs under these constraints is a difficult task. A further requirement in hostile territory is to minimize communication while making decisions, which adds complexity to the decision-making algorithms. In this paper, a self-assessment-based decision-making scheme for multiple UAVs performing a search mission is proposed. The agents make their decisions based on the information acquired through their sensors and by cooperation with neighbors. The complexity of the decision-making scheme is very low: it arrives at decisions quickly with low communication overheads, while accommodating various information structures used to increase the fidelity of the uncertainty maps. Theoretical results proving the completeness of the algorithm and giving lower and upper bounds on the search time are also provided.
Abstract:
To a reasonable approximation, the secondary structure of RNA is determined by Watson-Crick pairing without pseudo-knots in such a way as to minimise the number of unpaired bases. We show that this minimal number is determined by the maximal conjugacy-invariant pseudo-norm on the free group on two generators subject to bounds on the generators. This allows us to construct lower bounds on the minimal number of unpaired bases by constructing conjugacy-invariant pseudo-norms. We show that one such construction, based on isometric actions on metric spaces, gives a sharp lower bound. A major goal here is to formulate a purely mathematical question, based on considering orthogonal representations, which we believe is of some interest independent of its biological roots.
Abstract:
In this article, we consider the single-machine scheduling problem with past-sequence-dependent (p-s-d) setup times and a learning effect. The setup times are proportional to the length of the jobs that are already scheduled, i.e. p-s-d setup times. The learning effect reduces the actual processing time of a job because the workers are involved in doing the same job or activity repeatedly. Hence, the processing time of a job depends on its position in the sequence. In this study, we consider the total absolute difference in completion times (TADC) as the objective function. This problem is denoted as 1/LE, s_psd/TADC in Kuo and Yang (2007) ('Single Machine Scheduling with Past-sequence-dependent Setup Times and Learning Effects', Information Processing Letters, 102, 22-26). Two parameters, a and b, denote the constant learning index and the normalising index, respectively. A parametric analysis of b on the 1/LE, s_psd/TADC problem for a given value of a is carried out in this study. In addition, a computational algorithm is developed to obtain the number of optimal sequences and the range of b in which each of the sequences is optimal, for a given value of a. We derive two bounds: b* for the normalising constant b and a* for the learning index a. We also show that, when a < a* or b > b*, the optimal sequence is obtained by arranging the longest job in the first position and the rest of the jobs in shortest processing time (SPT) order.
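As a rough illustration of the ordering rule quoted in the last sentence above (a sketch only; it does not reproduce the paper's parametric analysis of a* and b*):

def tadc_sequence(processing_times):
    # Longest job in the first position, remaining jobs in shortest
    # processing time (SPT) order; returns job indices.
    jobs = sorted(range(len(processing_times)), key=lambda j: processing_times[j])
    return [jobs[-1]] + jobs[:-1]

print(tadc_sequence([4, 9, 2, 7]))  # -> [1, 2, 0, 3]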
Abstract:
A methodology is presented for the synthesis of analog circuits using piecewise linear (PWL) approximations. The function to be synthesized is divided into PWL segments such that each segment can be realized using elementary MOS current-mode programmable-gain circuits. By connecting a number of these elementary current-mode circuits in parallel, it is possible to realize a piecewise linear approximation of any arbitrary analog function within the allowed approximation error bounds. Simulation results show close agreement between the desired function and the synthesized output. The number of PWL segments used for the approximation, and hence the circuit area, is determined by the required accuracy and the smoothness of the resulting function.
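A hedged software sketch of bounded-error piecewise linear segmentation in the spirit of the methodology above (a greedy illustration only; the paper's segmentation for MOS current-mode circuits may differ):

import numpy as np

def pwl_breakpoints(f, lo, hi, max_err, n_grid=1000):
    # Extend each linear segment until its interpolation error on a dense
    # grid exceeds max_err, then start a new segment.
    x = np.linspace(lo, hi, n_grid)
    y = f(x)
    breaks, i = [0], 0
    while i < n_grid - 1:
        j = i + 1
        while j < n_grid - 1:
            xs, ys = x[i:j + 2], y[i:j + 2]
            approx = ys[0] + (ys[-1] - ys[0]) * (xs - xs[0]) / (xs[-1] - xs[0])
            if np.max(np.abs(approx - ys)) > max_err:
                break
            j += 1
        breaks.append(j)
        i = j
    return x[breaks]

print(pwl_breakpoints(np.tanh, -3.0, 3.0, 0.02))  # breakpoint abscissae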
Abstract:
In a typical sensor network scenario the goal is to monitor a spatio-temporal process through a number of inexpensive sensing nodes, the key parameter being the fidelity at which the process has to be estimated at distant locations. We study such a scenario in which multiple encoders transmit their correlated data at finite rates to a distant and common decoder. In particular, we derive inner and outer bounds on the rate region for the random field to be estimated with a given mean distortion.
Abstract:
We consider the problem of providing mean delay and average throughput guarantees in random access fading wireless channels using the CSMA/CA algorithm. This problem becomes much more challenging when the scheduling is distributed, as is the case in a typical local area wireless network. We model the CSMA network using a novel queueing-network-based approach. The optimal throughput per device and the throughput-optimal policy in an M-device network are obtained. We provide a simple contention control algorithm that adapts the attempt probability based on the network load, and we obtain bounds on the packet transmission delay. The information we make use of is the number of devices in the network and the (delayed) queue length at each device. The proposed algorithms stay within the requirements of the IEEE 802.11 standard.
Abstract:
In terabit-density magnetic recording, several bits of data can be replaced by the values of their neighbors in the storage medium. As a result, errors in the medium are dependent on each other and also on the data written. We consider a simple 1-D combinatorial model of this medium. In our model, we assume a setting where binary data is sequentially written on the medium and a bit can erroneously change to the immediately preceding value. We derive several properties of codes that correct this type of error, focusing on bounds on their cardinality. We also define a probabilistic finite-state channel model of the storage medium and derive lower and upper estimates of its capacity. A lower bound is derived by evaluating the symmetric capacity of the channel, i.e., the maximum transmission rate under the assumption of a uniform input distribution. An upper bound is found by showing that the original channel is a stochastic degradation of another, related channel model whose capacity we can compute explicitly.
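A toy simulation of the 1-D error model described above (an assumption-laden sketch: here each bit is independently replaced by the written value of its predecessor with probability p, and the first bit is never corrupted):

import random

def bitshift_channel(word, p, seed=None):
    # word: list of 0/1 bits written to the medium; p: per-bit error probability.
    rng = random.Random(seed)
    out = [word[0]]
    for i in range(1, len(word)):
        out.append(word[i - 1] if rng.random() < p else word[i])
    return out

print(bitshift_channel([1, 0, 1, 1, 0, 0], 0.2, seed=1))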
Abstract:
Spectral efficiency is a key characteristic of cellular communications systems, as it quantifies how well the scarce spectrum resource is utilized. It is influenced by the scheduling algorithm as well as the signal and interference statistics, which, in turn, depend on the propagation characteristics. In this paper we derive analytical expressions for the short-term and long-term channel-averaged spectral efficiencies of the round robin, greedy Max-SINR, and proportional fair schedulers, which are popular and cover a wide range of system performance and fairness trade-offs. A unified spectral efficiency analysis is developed to highlight the differences among these schedulers. The analysis differs from previous work in the literature in the following aspects: (i) it does not assume the co-channel interferers to be identically distributed, in line with realistic cellular layouts; (ii) it avoids the loose spectral efficiency bounds used in the literature, which only considered the worst-case and best-case locations of identical co-channel interferers; (iii) it explicitly includes the effect of multi-tier interferers in the cellular layout and uses a more accurate model for handling the total co-channel interference; and (iv) it captures the impact of using small modulation constellation sizes, which are typical of cellular standards. The analytical results are verified using extensive Monte Carlo simulations.
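For intuition only, a Monte Carlo toy comparison of round-robin and greedy Max-SINR scheduling over i.i.d. Rayleigh-faded users (the paper's analysis handles non-identical multi-tier interferers and finite constellations, which this sketch does not; the mean SNR value is an arbitrary assumption):

import numpy as np

def scheduler_spectral_efficiency(n_users=8, n_slots=10000, mean_snr=10.0, seed=0):
    rng = np.random.default_rng(seed)
    sinr = rng.exponential(mean_snr, size=(n_slots, n_users))  # Rayleigh fading -> exponential SINR
    rr = np.log2(1 + sinr[np.arange(n_slots), np.arange(n_slots) % n_users])  # round robin
    greedy = np.log2(1 + sinr.max(axis=1))                                    # greedy Max-SINR
    return rr.mean(), greedy.mean()

print(scheduler_spectral_efficiency())  # (round-robin, Max-SINR) average bits/s/Hz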
Abstract:
Homogenization of partial differential equations is a relatively new area with tremendous applications in various branches of the engineering sciences: material science, porous media, the study of vibrations of thin structures, and composite materials, to name a few. Though material scientists and others had a reasonable idea about the homogenization process, it lacked a good mathematical theory until the early seventies. The first proper mathematical procedure was developed in the seventies, and over the last 30 years or so the subject has flourished in various ways, both in applications and mathematically. This is not a full survey article, nor do we concentrate on a specialized problem. We do indicate certain specialized problems of our interest, without much detail, but that is not the main theme of the article. I plan to give an introductory presentation with the aim of catering to a wider audience. We go through a few examples to understand the homogenization procedure in a general perspective, together with applications. We also present the various mathematical techniques available and, where possible, some details about some of them. A possible definition of homogenization would be that it is a process of understanding a heterogeneous (inhomogeneous) medium, where the heterogeneities are at the microscopic level, as in composite materials, by a homogeneous medium. In other words, one would like to obtain a homogeneous description of a highly oscillating inhomogeneous medium. We also present generalizations to nonlinear problems, porous media, and so on. Finally, we would like to look at the closely related issue of optimal bounds, which is itself an independent area of research.
Abstract:
A reliable method for estimating the service life of a structural element is a prerequisite for service life design. A new methodology for durability-based service life estimation of reinforced concrete flexural elements with respect to chloride-induced corrosion of reinforcement is proposed. The methodology takes into consideration the fuzzy and random uncertainties associated with the variables involved in service life estimation by using a hybrid method combining the vertex method of fuzzy set theory with Monte Carlo simulation. It is also shown how to determine the bounds on the characteristic value of the failure probability from the resulting fuzzy set for the failure probability with minimal computational effort. Using the methodology, the bounds on the characteristic value of the failure probability for a reinforced concrete T-beam bridge girder have been determined. The service life of the structural element is determined by comparing the upper bound of the characteristic value of the failure probability with the target failure probability. The methodology will be useful for durability-based service life design and also for making decisions regarding in-service inspections.
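A compact sketch of the hybrid vertex-method / Monte Carlo idea described above (all names and the toy limit state are illustrative, not taken from the paper):

import itertools
import numpy as np

def failure_probability_bounds(fuzzy_intervals, simulate_failure, n_mc=10000, seed=0):
    # Evaluate the Monte Carlo failure probability at every vertex
    # (combination of interval end points) of the fuzzy variables and
    # return the min/max as bounds on the characteristic value.
    rng = np.random.default_rng(seed)
    probs = [simulate_failure(v, rng, n_mc) for v in itertools.product(*fuzzy_intervals)]
    return min(probs), max(probs)

def toy_failure(vertex, rng, n_mc):
    # Toy limit state: resistance R ~ Normal(mu_R, 0.1*mu_R) against a unit load.
    (mu_r,) = vertex
    r = rng.normal(mu_r, 0.1 * mu_r, n_mc)
    return float(np.mean(r < 1.0))

print(failure_probability_bounds([(1.1, 1.3)], toy_failure))  # (lower, upper) bounds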
Abstract:
The questions that one should answer in engineering computations - deterministic, probabilistic/randomized, as well as heuristic - are (i) how good the computed results/outputs are and (ii) what the cost is, in terms of the amount of computation and the amount of storage used in getting the outputs. The absolutely error-free quantities, as well as the completely errorless computations occurring in a natural process, can never be captured by any means at our disposal. While the computations in nature/natural processes, including their input real quantities, are exact, the computations that we perform using a digital computer, or that are carried out in embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error, as a matter of hypothesis and not as a matter of assumption, is not less than 0.005 per cent. Here, by error we mean relative error bounds. The fact that the exact error is never known, under any circumstances and in any context, implies that the term error is nothing but error-bounds. Further, in engineering computations it is the relative error or, equivalently, the relative error-bounds (and not the absolute error) that is supremely important in providing information about the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created from nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency due to human error, due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable or more easily solvable in practice. Thus, if we discover any inconsistency or possibly any near-inconsistency in a mathematical model, it is certainly due to one or more of the three foregoing factors. We do, however, go ahead and solve such inconsistent/near-inconsistent problems and obtain results that could be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It highlights the quality of the results/outputs by specifying relative error-bounds along with the associated confidence level, and the cost, viz. the amount of computation and of storage, through complexity. It points out the limitations of error-free computation (wherever possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of the use of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
Abstract:
In this paper, the diversity-multiplexing gain tradeoff (DMT) of single-source, single-sink (ss-ss), multihop relay networks having slow-fading links is studied. In particular, the two end-points of the DMT of ss-ss full-duplex networks are determined, by showing that the maximum achievable diversity gain is equal to the min-cut and that the maximum multiplexing gain is equal to the min-cut rank, the latter by using an operational connection to a deterministic network. Also included in the paper are several results that aid in the computation of the DMT of networks operating under amplify-and-forward (AF) protocols. In particular, it is shown that the colored noise encountered in amplify-and-forward protocols can be treated as white for the purpose of DMT computation, lower bounds on the DMT of lower-triangular channel matrices are derived, and the DMT of parallel MIMO channels is computed. All protocols appearing in the paper are explicit and rely only upon AF relaying. Half-duplex networks and explicit coding schemes are studied in a companion paper.
Abstract:
The throughput-optimal discrete-rate adaptation policy, when nodes are subject to constraints on the average power and bit error rate, is governed by a power control parameter for which a closed-form characterization has remained an open problem. The parameter is essential in determining the rate adaptation thresholds and the transmit rate and power at any time, and in ensuring adherence to the power constraint. We derive novel, insightful bounds and approximations that characterize the power control parameter and the throughput in closed form. The results are comprehensive as they apply to the general class of Nakagami-m (m >= 1) fading channels, which includes Rayleigh fading, to uncoded and coded modulation, and to single- and multi-node systems with selection. The results are appealing as they are provably tight in the asymptotic large average power regime, and are designed and verified to be accurate even for smaller average powers.