972 results for Strictly hyperbolic polynomial
Abstract:
We consider a dense ad hoc wireless network comprising n nodes confined to a given two-dimensional region of fixed area. For the Gupta-Kumar random traffic model and a realistic interference and path loss model (i.e., the channel power gains are bounded above, and are bounded below by a strictly positive number), we study the scaling of the aggregate end-to-end throughput with respect to the network average power constraint, $\bar{P}$, and the number of nodes, n. The network power constraint $\bar{P}$ is related to the per-node power constraint, P, as $\bar{P} = nP$. For large $\bar{P}$, we show that the throughput saturates as $\Theta(\log \bar{P})$, irrespective of the number of nodes in the network. For moderate $\bar{P}$, which can accommodate spatial reuse to improve end-to-end throughput, we observe that the amount of spatial reuse feasible in the network is limited by the diameter of the network. In fact, we observe that the end-to-end path loss in the network and the amount of spatial reuse feasible in the network are inversely proportional. This puts a restriction on the gains achievable using the cooperative communication techniques studied in the literature, as these rely on direct long-distance communication over the network.
Abstract:
A geometric and non-parametric procedure for testing whether two finite sets of points are linearly separable is proposed. The Linear Separability Test is equivalent to a test that determines whether a strictly positive point h > 0 exists in the range of a matrix A (related to the points in the two finite sets). The algorithm proposed in the paper iteratively checks whether a strictly positive point exists in a subspace by projecting a strictly positive vector with equal coordinates (p) onto the subspace. At the end of each iteration, the subspace is reduced to a lower-dimensional subspace. The test is completed within r ≤ min(n, d + 1) steps, for both linearly separable and non-separable problems (r is the rank of A, n is the number of points and d is the dimension of the space containing the points). The worst-case time complexity of the algorithm is O(nr^3) and the space complexity is O(nd). A short review of some of the prominent algorithms and their time complexities is included. The worst-case computational complexity of our algorithm is lower than the worst-case computational complexity of the Simplex, Perceptron, Support Vector Machine and Convex Hull algorithms, if d
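As a rough illustration of the separability condition stated above (a strictly positive point in the range of a matrix A built from the two point sets), the following Python sketch checks the condition with an off-the-shelf linear program rather than the paper's projection-based iteration; the construction of A and all names are illustrative assumptions.

import numpy as np
from scipy.optimize import linprog

def linearly_separable(X, Y):
    """Test whether point sets X and Y (one point per row) are linearly
    separable by checking for a strictly positive point in the range of A.
    Assumed construction of A: rows [x, 1] for x in X and [-y, -1] for y in Y,
    so that A @ c > 0 iff a separating hyperplane c = (w, b) exists.
    Uses a plain LP, not the projection-based iteration of the paper."""
    A = np.vstack([np.hstack([X, np.ones((len(X), 1))]),
                   np.hstack([-Y, -np.ones((len(Y), 1))])])
    n, m = A.shape
    # Variables: c (m entries) and a margin t.  Maximise t subject to
    # A @ c >= t * 1, with t <= 1 to keep the LP bounded.
    obj = np.r_[np.zeros(m), -1.0]              # minimise -t
    A_ub = np.hstack([-A, np.ones((n, 1))])     # t - A @ c <= 0
    b_ub = np.zeros(n)
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * m + [(None, 1.0)])
    return res.success and res.x[-1] > 1e-9     # separable iff optimum t > 0

# Example: two well-separated clusters in the plane
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
Y = np.array([[3.0, 3.0], [4.0, 3.0], [3.0, 4.0]])
print(linearly_separable(X, Y))                 # expected: True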
Abstract:
To realistically simulate the motion of flexible objects such as ropes, strings, snakes, or human hair, one strategy is to discretise the object into a large number of small rigid links connected by rotary or spherical joints. The discretised system is highly redundant and the rotations at the joints (or the motion of the other links) for a desired Cartesian motion of the end of a link cannot be solved for uniquely. In this paper, we propose a novel strategy to resolve the redundancy in such hyper-redundant systems. We make use of the classical tractrix curve and its attractive features. For a desired Cartesian motion of the `head' of a link, the `tail' of the link is moved according to a tractrix, and recursively all links of the discretised object are moved along different tractrix curves. We show that the use of a tractrix curve leads to a more `natural' motion of the entire object, since the motion is distributed uniformly along the entire object with the displacements tending to diminish from the `head' to the `tail'. We also show that the computation of the motion of the links can be done in real time since it involves evaluation of simple algebraic, trigonometric and hyperbolic functions. The strategy is illustrated by simulations of a snake, the tying of knots with a rope and a solution of the inverse kinematics of a planar hyper-redundant manipulator.
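As a minimal sketch of the tractrix idea described above (a small-step approximation only; the paper's closed-form solution in terms of hyperbolic functions is not reproduced, and all names are illustrative), each link's tail is pulled along the line joining it to the displaced head, and the update is propagated down the chain:

import numpy as np

def follow_head(head_new, tail_old, length):
    """Move the tail of one rigid link so that the link keeps its length and
    the tail is pulled along the line joining it to the new head position.
    For small head displacements this approximates motion along a tractrix."""
    d = head_new - tail_old
    return head_new - length * d / np.linalg.norm(d)

def move_chain(joints, head_target, link_len):
    """Propagate a head displacement down a discretised chain: each link's
    tail follows its head, and that tail becomes the head of the next link.
    Displacements diminish from head to tail, as noted in the abstract."""
    new = joints.copy()
    new[0] = head_target
    for i in range(1, len(joints)):
        new[i] = follow_head(new[i - 1], joints[i], link_len)
    return new

# Example: drag the head of a 10-link planar chain lying along the x-axis
chain = np.array([[float(i), 0.0] for i in range(11)])
chain = move_chain(chain, np.array([-0.5, 0.5]), link_len=1.0)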
Abstract:
We give an efficient randomized algorithm to construct a box representation of any graph G on n vertices in $1.5 (\Delta + 2) \ln n$ dimensions, where $\Delta$ is the maximum degree of G. We also show that $\boxi(G) \le (\Delta + 2) \ln n$ for any graph G. Our bound is tight up to a factor of $\ln n$. We also show that our randomized algorithm can be derandomized to get a polynomial time deterministic algorithm. Though our general upper bound is in terms of the maximum degree $\Delta$, we show that for almost all graphs on n vertices, the boxicity is upper bounded by $c\cdot(d_{av} + 1) \ln n$, where $d_{av}$ is the average degree and c is a small constant. Also, we show that for any graph G, $\boxi(G) \le \sqrt{8 n d_{av} \ln n}$, which is tight up to a factor of $b \sqrt{\ln n}$ for a constant b.
Abstract:
We first review a general formulation of ray theory and write down the conservation forms of the equations of a weakly nonlinear ray theory (WNLRT) and a shock ray theory (SRT) for a weak shock in a polytropic gas. We then present a formulation of the problem of the sonic boom produced by a maneuvering aerofoil as a one-parameter family of Cauchy problems. The system of equations in conservation form is hyperbolic for a range of values of the parameter and is of elliptic nature elsewhere, showing that, unlike the leading shock, the trailing shock is always smooth.
Abstract:
Modeling the performance behavior of parallel applications to predict their execution times for larger problem sizes and numbers of processors has been an active area of research for several years. The existing curve-fitting strategies for performance modeling utilize data from experiments that are conducted under uniform loading conditions. Hence, the accuracy of these models degrades when the load conditions on the machines and the network change. In this paper, we analyze a curve-fitting model that attempts to predict execution times for any load conditions that may exist on the systems during application execution. Based on the experiments conducted with the model for a parallel eigenvalue problem, we propose a multi-dimensional curve-fitting model based on rational polynomials for performance prediction of parallel applications in non-dedicated environments. We used the rational-polynomial-based model to predict execution times for two other parallel applications on systems with large load dynamics. In all the cases, the model gave good predictions of execution times, with average percentage prediction errors of less than 20%.
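The abstract does not give the model form, so the sketch below only illustrates the fitting step with a hypothetical two-variable rational polynomial in problem size and a load measure; the data, variables and polynomial orders are assumptions, not the paper's calibrated model.

import numpy as np
from scipy.optimize import curve_fit

def rational_model(X, a0, a1, a2, b1, b2):
    """Hypothetical rational-polynomial model of execution time as a function
    of problem size s and a scalar load measure l."""
    s, l = X
    return (a0 + a1 * s + a2 * s * l) / (1.0 + b1 * l + b2 * s)

# Fit to illustrative (made-up) measurements and extrapolate to a larger run.
sizes = np.array([1000, 2000, 4000, 8000, 1000, 2000, 4000, 8000], float)
loads = np.array([0.1, 0.1, 0.1, 0.1, 0.8, 0.8, 0.8, 0.8])
times = np.array([2.1, 4.3, 8.9, 18.0, 3.4, 7.1, 14.6, 29.8])
params, _ = curve_fit(rational_model, (sizes, loads), times, p0=np.ones(5))
print(rational_model((np.array([16000.0]), np.array([0.5])), *params))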
Abstract:
Measurements of the Gibbs energy, enthalpy and entropy of formation of chromites, vanadites and aluminates of Fe, Ni, Co, Mn, Zn, Mg and Cd, using solid oxide galvanic cells over a temperature range extending over approximately 1000°C, have shown that the entropy of formation of cubic 2-3 oxide spinel phases ($MX_2O_4$), from the component oxides (MO) with rock-salt and $X_2O_3$ with corundum structures, can be represented by a semi-empirical correlation, $\Delta S^\circ = -\Delta S + \Delta S_M + \Delta S^\circ_{rand}\ (\pm 0.3)$ cal deg⁻¹ mol⁻¹, where $\Delta S_M$ is the entropy of cation mixing on the tetrahedral and octahedral sites of the spinel and $\Delta S^\circ_{rand}$ is the entropy associated with the randomization of the Jahn-Teller distortions. A review of the methods for evaluating the cation distribution in spinels suggests that the most promising scheme is based on octahedral site preference energies from crystal field theory for the transition metal ions. For non-transition metal cations, site preference energies are derived relative to those for transition metal ions from measured high-temperature cation distributions in spinel phases that contain one transition metal and one non-transition metal cation. For 2-3 spinels, computations based on ideal Temkin mixing on each cation sublattice predict distributions that are in fair agreement with X-ray and neutron diffraction, magnetic and electrical properties, and spectroscopic measurements. In 2-4 spinels, the mixing of ions does not follow strictly ideal statistical laws. The entropy associated with the randomization of the Jahn-Teller distortions appears to be significant only in spinels with Jahn-Teller-active 3d ions in tetrahedral positions and 3d⁴ and 3d⁹ ions in octahedral positions. Applications of this structural model for predicting the thermodynamic properties of spinel solid solutions are illustrated. For complex systems, additional contributions arising from strain fields, redox equilibria and off-center ions have to be quantified. The entropy correlation for spinels provides a method for evaluating structure transformation entropies in simple oxides. Information on the relative stabilities of oxides in different crystal structures is useful for computer calculation of phase diagrams of interest, by methods similar to those used by Kaufman and Bernstein for refractory alloy systems. Examples of technological application include the prediction of deoxidation equilibria in the Fe-Mn-Al-O system at 1600°C and the computation of phase relations in the Fe-Ni-Cr-S system.
Abstract:
In this paper, we develop a low-complexity message passing algorithm for joint support and signal recovery of approximately sparse signals. The problem of recovery of strictly sparse signals from noisy measurements can be viewed as a problem of recovery of approximately sparse signals from noiseless measurements, making the approach applicable to strictly sparse signal recovery from noisy measurements. The support recovery embedded in the approach makes it suitable for recovery of signals with the same sparsity profile, as in the problem of multiple measurement vectors (MMV). Simulation results show that the proposed algorithm, termed the JSSR-MP (joint support and signal recovery via message passing) algorithm, achieves performance comparable to that of the sparse Bayesian learning (M-SBL) algorithm in the literature, at an order of magnitude lower complexity than the M-SBL algorithm.
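One standard way to make the stated equivalence concrete (a sketch only; not necessarily the exact formulation used in the paper) is to absorb the noise into the unknown vector:

$$ y = Ax + n = \begin{bmatrix} A & I \end{bmatrix} \begin{bmatrix} x \\ n \end{bmatrix}, $$

so that the stacked vector has a few large entries (the support of the strictly sparse x) and many small entries (the noise n), i.e., it is approximately sparse, and it is observed through the noiseless measurement matrix [A I].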
Abstract:
The Generalized Distributive Law (GDL) is a message passing algorithm that can efficiently solve a certain class of computational problems, and includes as special cases the Viterbi algorithm, the BCJR algorithm, the Fast Fourier Transform, and Turbo and LDPC decoding algorithms. In this paper, GDL-based maximum-likelihood (ML) decoding of Space-Time Block Codes (STBCs) is introduced and a sufficient condition for an STBC to admit low GDL decoding complexity is given. Fast decoding and multigroup decoding are the two algorithms used in the literature to ML decode STBCs with low complexity. An algorithm which exploits the advantages of both is called Conditional ML (CML) decoding. It is shown in this paper that the GDL decoding complexity of any STBC is upper bounded by its CML decoding complexity, and that there exist codes for which the GDL complexity is strictly less than the CML complexity. Explicit examples of two such families of STBCs are given in this paper. Thus CML is in general suboptimal in reducing the ML decoding complexity of a code, and one should design codes with low GDL complexity rather than low CML complexity.
Abstract:
In this article we review classical and modern Galois theory together with its historical evolution, prove a criterion of Galois for the solvability of an irreducible separable polynomial of prime degree over an arbitrary field k, and give many illustrative examples.
Abstract:
A novel procedure to determine the series capacitance of a transformer winding, based on frequency-response measurements, is reported. It is based on converting the measured driving-point impedance magnitude response into a rational function and thereafter exploiting the ratio of a specific coefficient in the numerator and denominator polynomials, which leads to the direct estimation of the series capacitance. The theoretical formulations are derived for a mutually coupled ladder-network model, followed by sample calculations. The results obtained are accurate, and the method's feasibility is demonstrated by experiments on a model coil and on actual, single, isolated transformer windings (layered, continuous-disc, and interleaved-disc). The authors believe that the proposed method is the closest one can get to indirectly measuring series capacitance.
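The specific coefficient ratio used to extract the series capacitance is not reproduced here; the sketch below only illustrates the first step of converting a measured driving-point impedance magnitude response into a rational function by linear least squares (model orders and names are assumptions).

import numpy as np

def fit_rational(omega, zmag, num_order=4, den_order=4):
    """Least-squares fit of a measured impedance magnitude |Z(w)| to a
    rational function N(w)/D(w), with the constant term of D fixed to 1.
    Linearised form: N(w) - |Z(w)| * (D(w) - 1) = |Z(w)|."""
    N = np.vander(omega, num_order + 1, increasing=True)         # 1, w, w^2, ...
    D = np.vander(omega, den_order + 1, increasing=True)[:, 1:]  # w, w^2, ...
    A = np.hstack([N, -zmag[:, None] * D])
    coef, *_ = np.linalg.lstsq(A, zmag, rcond=None)
    num = coef[:num_order + 1]
    den = np.r_[1.0, coef[num_order + 1:]]
    # A specific numerator/denominator coefficient ratio would then be used,
    # as in the paper, to estimate the series capacitance.
    return num, den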
Abstract:
The characteristic function of a contraction is a classical complete unitary invariant devised by Sz.-Nagy and Foias. Just as a contraction is related to the Szego kernel $k_S(z, w) = (1 - z\bar{w})^{-1}$ for $|z|, |w| < 1$, by means of $(1/k_S)(T, T^*) \ge 0$, we consider an arbitrary open connected domain $\Omega$ in $\mathbb{C}^n$, a kernel $k$ on $\Omega$ such that $1/k$ is a polynomial, and a tuple $T = (T_1, T_2, \ldots, T_n)$ of commuting bounded operators on a complex separable Hilbert space $H$ such that $(1/k)(T, T^*) \ge 0$. Under some standard assumptions on $k$, it turns out that whether a characteristic function can be associated with $T$ or not depends not only on $T$, but also on the kernel $k$. We give a necessary and sufficient condition. When this condition is satisfied, a functional model can be constructed. Moreover, the characteristic function is then a complete unitary invariant for a suitable class of tuples $T$.
Abstract:
We address the problem of estimating the instantaneous frequency (IF) of a real-valued constant-amplitude time-varying sinusoid. Estimation of polynomial IF is formulated using the zero-crossings of the signal. We propose an algorithm to estimate nonpolynomial IF by local approximation using a low-order polynomial over a short segment of the signal. This involves the choice of window length to minimize the mean square error (MSE). The optimal window length found by directly minimizing the MSE is a function of the higher-order derivatives of the IF, which are not available a priori. However, an optimum solution is formulated using an adaptive window technique based on the concept of intersection of confidence intervals. The adaptive algorithm enables minimum-MSE IF (MMSE-IF) estimation without requiring a priori information about the IF. Simulation results show that the adaptive-window zero-crossing-based IF estimation method is superior to fixed-window methods and is also better than adaptive spectrogram and adaptive Wigner-Ville distribution (WVD) based IF estimators at different signal-to-noise ratios (SNRs).
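A minimal sketch of the zero-crossing idea with a fixed analysis window follows (the adaptive window selection via intersection of confidence intervals is not reproduced; names and parameters are illustrative).

import numpy as np

def if_from_zero_crossings(x, fs, poly_order=2):
    """Estimate the instantaneous frequency of a real sinusoid from its
    zero-crossings: successive crossings are half a period apart, giving
    local frequency samples to which a low-order polynomial is fitted."""
    t = np.arange(len(x)) / fs
    s = np.signbit(x)
    idx = np.flatnonzero(s[:-1] != s[1:])       # samples bracketing a crossing
    # Linear interpolation for sub-sample crossing instants
    tz = t[idx] - x[idx] * (t[idx + 1] - t[idx]) / (x[idx + 1] - x[idx])
    f = 0.5 / np.diff(tz)                       # half a period between crossings
    tm = 0.5 * (tz[1:] + tz[:-1])               # midpoints between crossings
    coeffs = np.polyfit(tm, f, poly_order)      # low-order polynomial IF model
    return np.polyval(coeffs, t)

# Example: linear chirp from 50 Hz to 150 Hz over 1 s, sampled at 8 kHz
fs = 8000.0
t = np.arange(int(fs)) / fs
x = np.cos(2 * np.pi * (50 * t + 50 * t ** 2))
if_est = if_from_zero_crossings(x, fs)          # true IF is 50 + 100 t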
Abstract:
We consider a network in which several service providers offer wireless access to their respective subscribed customers through potentially multihop routes. If providers cooperate by jointly deploying and pooling their resources, such as spectrum and infrastructure (e.g., base stations), and agree to serve each other's customers, their aggregate payoffs, and individual shares, may substantially increase through opportunistic utilization of resources. The potential of such cooperation can, however, be realized only if each provider intelligently determines with whom it would cooperate, when it would cooperate, and how it would deploy and share its resources during such cooperation. Also, developing a rational basis for sharing the aggregate payoffs is imperative for the stability of the coalitions. We model such cooperation using the theory of transferable-payoff coalitional games. We show that the optimum cooperation strategy, which involves the acquisition, deployment, and allocation of the channels and base stations (to customers), can be computed as the solution of a concave or an integer optimization. We next show that the grand coalition is stable in many different settings, i.e., if all providers cooperate, there is always an operating point that maximizes the providers' aggregate payoff while offering each a share that removes any incentive to split from the coalition. The optimal cooperation strategy and the stabilizing payoff shares can be obtained in polynomial time by respectively solving the primals and the duals of the above optimizations, using distributed computations and a limited exchange of confidential information among the providers. Numerical evaluations reveal that cooperation substantially enhances individual providers' payoffs under the optimal cooperation strategy and several different payoff-sharing rules.
Abstract:
Nonlinear analysis of batter piles in soft clay is performed using the finite element technique. As batter piles are governed not only by lateral load but also by axial load, the effect of the P-Delta moment and the geometric stiffness matrix is included in the analysis. To implement the nonlinear soil behavior, the reduction in soil strength (degradation), and the formation of a gap with the number of load cycles, a numerical model is developed in which a hyperbolic relation is adopted for the soil under static conditions and a hyperbolic relation incorporating degradation and gapping under cyclic load conditions. The numerical model is validated against published experimental results for cyclic lateral loading, and hysteresis loops are developed to predict the load-deflection behavior and the soil resistance behavior during consecutive cycles of loading. This paper highlights the importance of a rigorous degradation model for subsequent cycles of loading on the pile-soil system by a hysteretic representation.
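As an illustration of the kind of hyperbolic soil relation mentioned above, the sketch below uses a Kondner-type hyperbola with a simple cycle-dependent degradation factor; the parameter values and the degradation form are assumptions, not the model calibrated in the paper.

import numpy as np

def soil_resistance(y, k_i, p_ult, n_cycles=1, deg_rate=0.0):
    """Kondner-type hyperbolic soil resistance p(y): initial stiffness k_i,
    ultimate resistance p_ult, with an illustrative degradation factor
    n_cycles**(-deg_rate) applied to represent strength loss over cycles."""
    factor = float(n_cycles) ** (-deg_rate)
    return factor * y / (1.0 / k_i + np.abs(y) / p_ult)

# Example: static backbone curve versus the degraded curve after 50 cycles
y = np.linspace(0.0, 0.05, 200)                 # pile deflection (m)
p_static = soil_resistance(y, k_i=2.0e4, p_ult=150.0)
p_cyclic = soil_resistance(y, k_i=2.0e4, p_ult=150.0, n_cycles=50, deg_rate=0.1)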