227 results for "Prove"
Abstract:
The aim of this paper is to develop a computationally efficient decentralized rendezvous algorithm for a group of autonomous agents. The algorithm generalizes the notions of sensor domain and decision domain of agents to enable implementation of simple computational algorithms. Specifically, the algorithm proposed in this paper uses a rectilinear decision domain (RDD) instead of the circular decision domain assumed in earlier work. Because of this, the computational complexity of the algorithm reduces considerably, and when compared to the standard Ando's algorithm available in the literature, the RDD algorithm shows a very significant improvement in convergence time. Analytical results proving convergence and supporting simulation results are presented in the paper.
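To illustrate why a rectilinear domain is computationally cheaper, the following is a minimal sketch (our own illustration, not the authors' algorithm; the function names are hypothetical): membership in an axis-aligned square, i.e. an L-infinity ball, needs only comparisons of coordinate differences, whereas a circular domain needs squared-distance arithmetic.

```python
def rdd_neighbours(agent, others, r):
    """Agents within the axis-aligned square of half-side r around `agent`."""
    ax, ay = agent
    return [(x, y) for (x, y) in others
            if abs(x - ax) <= r and abs(y - ay) <= r]

def circular_neighbours(agent, others, r):
    """Agents within the Euclidean disc of radius r around `agent`."""
    ax, ay = agent
    return [(x, y) for (x, y) in others
            if (x - ax) ** 2 + (y - ay) ** 2 <= r ** 2]
```

Note that the two tests admit different neighbour sets: the square of half-side r strictly contains the disc of radius r, so an agent at (1, 1) is an RDD neighbour of (0, 0) for r = 1 but not a circular one.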
Abstract:
We define lacunary Fourier series on a compact connected semisimple Lie group G. If f ∈ L^1(G) has a lacunary Fourier series and f vanishes on a non-empty open subset of G, then we prove that f vanishes identically. This result can be viewed as a qualitative uncertainty principle.
Abstract:
In this article we consider the semigroup ring R = K[Γ] of a numerical semigroup Γ and study the Cohen-Macaulayness of the associated graded ring G(Γ) := gr_m(R) := ⊕_{n∈N} m^n/m^{n+1} and the behaviour of the Hilbert function H_R of R. We define a certain (finite) subset B(Γ) ⊆ Γ and prove that G(Γ) is Cohen-Macaulay if and only if B(Γ) = ∅. Therefore the subset B(Γ) is called the Cohen-Macaulay defect of G(Γ). Further, we prove that if the degree sequence of elements of the standard basis of Γ is non-decreasing, then B(Γ) = ∅ and hence G(Γ) is Cohen-Macaulay. We consider a class of numerical semigroups Γ = Σ_{i=0}^{3} N m_i generated by 4 elements m_0, m_1, m_2, m_3 such that m_1 + m_2 = m_0 + m_3, the so-called "balanced semigroups". We study the structure of the Cohen-Macaulay defect B(Γ) of Γ, and in particular we give an estimate on the cardinality |B(Γ, r)| for every r ∈ N. We use these estimates to prove that the Hilbert function of R is non-decreasing. Further, we prove that every balanced "unitary" semigroup Γ is "2-good" and is not "1-good"; in particular, in this case, G(Γ) is not Cohen-Macaulay. We consider a certain special subclass of balanced semigroups Γ. For this subclass we try to determine the Cohen-Macaulay defect B(Γ) using the explicit description of the standard basis of Γ; in particular, we prove that these balanced semigroups are 2-good and determine when exactly G(Γ) is Cohen-Macaulay. (C) 2011 Published by Elsevier B.V.
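The objects above can be made concrete with a small helper (our illustration, not from the paper): enumerate the elements of a numerical semigroup Γ = N m_0 + N m_1 + N m_2 + N m_3 up to a bound, and check the balanced condition m_1 + m_2 = m_0 + m_3.

```python
def semigroup_elements(gens, bound):
    """All elements of the numerical semigroup generated by `gens`, up to `bound`."""
    elems = {0}
    changed = True
    while changed:
        changed = False
        for e in list(elems):
            for g in gens:
                s = e + g
                if s <= bound and s not in elems:
                    elems.add(s)
                    changed = True
    return sorted(elems)

def is_balanced(m0, m1, m2, m3):
    """The 'balanced' condition on a 4-generated semigroup: m1 + m2 = m0 + m3."""
    return m1 + m2 == m0 + m3
```

For example, the semigroup generated by {3, 5} contains 0, 3, 5, 6, 8 and every integer from 8 onward, so 1, 2, 4 and 7 are its gaps.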
Abstract:
Recently, we reported a low-complexity likelihood ascent search (LAS) detection algorithm for large MIMO systems with several tens of antennas that can achieve high spectral efficiencies of the order of tens to hundreds of bps/Hz. Through simulations, we showed that this algorithm achieves increasingly near-SISO AWGN performance for an increasing number of antennas in i.i.d. Rayleigh fading. However, no bit error performance analysis of the algorithm was reported. In this paper, we extend our work on this low-complexity large-MIMO detector in two directions: i) we report an asymptotic bit error probability analysis of the LAS algorithm in the large system limit, where N_t, N_r → ∞ keeping N_t = N_r, where N_t and N_r are the numbers of transmit and receive antennas, respectively. Specifically, we prove that the error performance of the LAS detector for V-BLAST with 4-QAM in i.i.d. Rayleigh fading converges to that of the maximum-likelihood (ML) detector as N_t, N_r → ∞ keeping N_t = N_r. ii) We present simulated BER and nearness-to-capacity results for V-BLAST as well as high-rate non-orthogonal STBCs from Division Algebras (DA), in a more realistic spatially correlated MIMO channel model. Our simulation results show that a) at an uncoded BER of 10^-3, the performance of the LAS detector in decoding the 16 x 16 STBC from DA with N_t = N_r = 16 and 16-QAM degrades in spatially correlated fading by about 7 dB compared to that in i.i.d. fading, and b) with a rate-3/4 outer turbo code and 48 bps/Hz spectral efficiency, the performance degrades by about 6 dB at a coded BER of 10^-4. Our results further show that by providing asymmetry in the number of antennas such that N_r > N_t, keeping the total receiver array length the same as for N_r = N_t, the detector is able to pick up the extra receive diversity, thereby significantly improving the BER performance.
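The greedy idea behind likelihood ascent search can be sketched as follows for real BPSK (+/-1) symbols (a simplification of our own; the paper treats 4-QAM and a specific neighbourhood structure): starting from an initial vector, keep flipping any single symbol that reduces the ML cost ||y - Hx||^2, and stop when no single flip helps.

```python
import numpy as np

def las_detect(H, y, x0):
    """One-symbol-flip likelihood ascent search for BPSK symbols."""
    x = x0.copy()
    cost = np.sum((y - H @ x) ** 2)
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            x[i] = -x[i]                       # trial flip of symbol i
            c = np.sum((y - H @ x) ** 2)
            if c < cost:
                cost = c                       # keep the flip
                improved = True
            else:
                x[i] = -x[i]                   # revert
    return x
```

Each iteration only needs the cost change due to one flip, which is why the per-symbol complexity stays low even for tens of antennas; the initial vector x0 would typically come from a matched-filter or zero-forcing stage.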
Abstract:
Tutte (1979) proved that the disconnected spanning subgraphs of a graph can be reconstructed from its vertex deck. This result is used to prove that if we can reconstruct a set of connected graphs from the shuffled edge deck (SED), then the vertex reconstruction conjecture is true. It is proved that a set of connected graphs can be reconstructed from the SED when all the graphs in the set are claw-free or all are P_4-free. Such a problem is also solved for a large subclass of the class of chordal graphs. This subclass contains the maximal outerplanar graphs. Finally, two new conjectures, which imply the edge reconstruction conjecture, are presented. Conjecture 1 demands a construction of a stronger k-edge hypomorphism (to be defined later) from the edge hypomorphism. It is well known that Nash-Williams' theorem applies to a variety of structures. To prove Conjecture 2, we need to incorporate more graph-theoretic information into Nash-Williams' theorem.
Abstract:
The problem of estimating the time-dependent statistical characteristics of a random dynamical system is studied under two different settings. In the first, the system dynamics is governed by a differential equation parameterized by a random parameter, while in the second, this is governed by a differential equation with an underlying parameter sequence characterized by a continuous time Markov chain. We propose, for the first time in the literature, stochastic approximation algorithms for estimating various time-dependent process characteristics of the system. In particular, we provide efficient estimators for quantities such as the mean, variance and distribution of the process at any given time as well as the joint distribution and the autocorrelation coefficient at different times. A novel aspect of our approach is that we assume that information on the parameter model (i.e., its distribution in the first case and transition probabilities of the Markov chain in the second) is not available in either case. This is unlike most other work in the literature that assumes availability of such information. Also, most of the prior work in the literature is geared towards analyzing the steady-state system behavior of the random dynamical system while our focus is on analyzing the time-dependent statistical characteristics which are in general difficult to obtain. We prove the almost sure convergence of our stochastic approximation scheme in each case to the true value of the quantity being estimated. We provide a general class of strongly consistent estimators for the aforementioned statistical quantities with regular sample average estimators being a specific instance of these. We also present an application of the proposed scheme on a widely used model in population biology. Numerical experiments in this framework show that the time-dependent process characteristics as obtained using our algorithm in each case exhibit excellent agreement with exact results. 
(C) 2010 Elsevier Inc. All rights reserved.
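The abstract's remark that regular sample averages are a specific instance of the proposed estimators can be made concrete with a minimal sketch (our illustration, not the paper's algorithm): a stochastic approximation recursion theta_n = theta_{n-1} + a_n (x_n - theta_{n-1}) estimates the mean of samples of the process at a fixed time, and with step sizes a_n = 1/n it reduces exactly to the running sample average.

```python
def sa_mean(samples):
    """Stochastic approximation estimate of a mean with step sizes a_n = 1/n."""
    theta = 0.0
    for n, x in enumerate(samples, start=1):
        theta += (x - theta) / n   # theta_n = theta_{n-1} + a_n * (x_n - theta_{n-1})
    return theta
```

Other strongly consistent choices of step size (e.g. a_n decaying more slowly than 1/n) fit the same recursion, which is what makes the sample average one member of a general class.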
Abstract:
Combining the principles of dynamic inversion and optimization theory, a new approach is presented for stable control of a class of one-dimensional nonlinear distributed parameter systems, assuming the availability of a continuous actuator in the spatial domain. Unlike the existing approximate-then-design and design-then-approximate techniques, here there is no need for any approximation either of the system dynamics or of the resulting controller. Rather, the control synthesis approach is fairly straightforward and simple. The controller formulation has the added elegance that we can prove the convergence of the controller to its steady-state value. To demonstrate the potential of the proposed technique, a real-life temperature control problem for a heat transfer application is solved. It has been demonstrated that a desired temperature profile can be achieved starting from any arbitrary initial temperature profile.
Abstract:
CD-ROMs have proliferated as a distribution medium for desktop machines for a large variety of multimedia applications (targeted at a single-user environment) like encyclopedias, magazines and games. With CD-ROM capacities of up to 3 GB becoming available in the near future, they will form an integral part of Video on Demand (VoD) servers to store full-length movies and multimedia. In the first section of this paper we look at issues related to the single-user desktop environment. Since these multimedia applications are highly interactive in nature, we take a pragmatic approach and have made a detailed study of multimedia application behavior in terms of the I/O request patterns generated to the CD-ROM subsystem by tracing these patterns. We discuss prefetch buffer design and seek time characteristics in the context of the analysis of these traces. We also propose an adaptive main-memory-hosted cache that receives caching hints from the application to reduce the latency when the user moves from one node of the hypergraph to another. In the second section we look at the use of CD-ROMs in a VoD server and discuss the problem of scheduling multiple request streams and buffer management in this scenario. We adapt the C-SCAN (Circular SCAN) algorithm to suit the CD-ROM drive characteristics and prove that it is optimal in terms of buffer size management. We provide computationally inexpensive relations by which this algorithm can be implemented. We then propose an admission control algorithm which admits new request streams without disrupting the continuity of playback of the previous request streams. The algorithm also supports operations such as fast forward and replay. Finally, we discuss the problem of optimal placement of MPEG streams on CD-ROMs in the third section.
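The baseline C-SCAN policy that the paper adapts can be sketched in a few lines (a textbook illustration, not the paper's CD-ROM-specific variant): the head serves outstanding requests in increasing position order, then wraps around to the lowest outstanding request and sweeps up again.

```python
def c_scan_order(head, requests):
    """Service order under C-SCAN: sweep upward from `head`, then wrap to the bottom."""
    ahead = sorted(r for r in requests if r >= head)   # served on the current sweep
    behind = sorted(r for r in requests if r < head)   # served after the wrap-around
    return ahead + behind
```

The one-directional sweep is what makes service times predictable, which in turn bounds how much playback data each stream must buffer while waiting for its next turn.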
Abstract:
We prove that CdS nanocrystals can be thermodynamically stabilized in both the wurtzite and zinc-blende crystallographic phases at will, just by the proper choice of the capping ligand. As a striking demonstration of this, the largest CdS nanocrystals (~15 nm diameter) ever formed with the zinc-blende structure have been synthesized at a high reaction temperature of 310 °C, in contrast to previous reports suggesting the formation of zinc-blende CdS only in the small size limit (< 4.5 nm) or at a lower reaction temperature (≤ 240 °C). Theoretical analysis establishes that the binding energy of trioctylphosphine molecules on the (001) surface of zinc-blende CdS is significantly larger than that for any of the wurtzite planes. Consequently, trioctylphosphine as a capping agent stabilizes the zinc-blende phase by influencing the surface energy, which plays an important role in the overall energetics of a nanocrystal. Besides achieving giant zinc-blende CdS nanocrystals, this new understanding allows us to prepare CdSe and CdSe/CdS core/shell nanocrystals in the zinc-blende structure.
Abstract:
Genetic Algorithms are robust search and optimization techniques. A Genetic Algorithm based approach for determining the optimal input distributions for generating random test vectors is proposed in this paper. A cost function based on the COP testability measure for determining the efficacy of the input distributions is discussed. A brief overview of Genetic Algorithms (GAs) and the specific details of our implementation are described. Experimental results based on the ISCAS-85 benchmark circuits are presented. The performance of our GA-based approach is compared with previous results. While the GA generates more efficient input distributions than the previous methods, which are based on gradient descent search, the overheads of the GA in computing the input distributions are larger. To account for the relatively quick convergence of the gradient descent methods, we analyze the landscape of the COP-based cost function. We prove that the cost function is unimodal in the search space. This feature makes the cost function amenable to optimization by gradient descent techniques as compared to random search methods such as Genetic Algorithms.
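The COP-style signal-probability propagation underlying such a cost function can be sketched as follows (our illustration of the standard COP idea, not the paper's exact cost function): assuming statistically independent gate inputs, the probability that a gate output is 1 follows directly from the input 1-probabilities, and the GA searches over the primary-input probabilities that feed this computation.

```python
from functools import reduce

def cop_prob(gate, input_probs):
    """COP 1-probability of a gate output, given independent input 1-probabilities."""
    if gate == "AND":
        # Output is 1 only if every input is 1.
        return reduce(lambda a, b: a * b, input_probs)
    if gate == "OR":
        # Output is 0 only if every input is 0.
        return 1.0 - reduce(lambda a, b: a * (1.0 - b), input_probs, 1.0)
    if gate == "NOT":
        return 1.0 - input_probs[0]
    raise ValueError(f"unknown gate type: {gate}")
```

Evaluating such probabilities over a whole netlist is cheap (one pass in topological order), which is part of why the gradient descent alternatives converge quickly on this cost function.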
Abstract:
In contrast to earlier observations on various solitary wave propagations, especially those bifurcated into compressive and rarefactive solitary waves, the existence of spiky and explosive solitary waves is here believed to arise because of the presence of free and trapped electrons. So far, very few studies have been carried out to satisfactorily explain the presence of the solitary waves in space as observed by satellites. We also attempt to highlight the probable impact of a relativistic treatment on the various solitary wave propagations in a generalized multi-component, inhomogeneous plasma. It is expected that such a treatment will prove the existence of the solitary waves most expeditiously and exhibit the presence of chaos therein, thus giving a suitable explanation for the observations of various forms of spiky and explosive solitary waves in space plasma. Copyright (C) 1996 Elsevier Science Ltd
Abstract:
We build on the formulation developed in S. Sridhar and N. K. Singh [J. Fluid Mech. 664, 265 (2010)] and present a theory of the shear dynamo problem for small magnetic and fluid Reynolds numbers, but for arbitrary values of the shear parameter. Specializing to the case of a mean magnetic field that is slowly varying in time, explicit expressions for the transport coefficients α_il and η_iml are derived. We prove that when the velocity field is nonhelical, the transport coefficient α_il vanishes. We then consider forced, stochastic dynamics for the incompressible velocity field at low Reynolds number. An exact, explicit solution for the velocity field is derived, and the velocity spectrum tensor is calculated in terms of the Galilean-invariant forcing statistics. We consider forcing statistics that are nonhelical, isotropic, and delta-correlated in time, and specialize to the case when the mean field is a function only of the spatial coordinate X_3 and time τ; this reduction is necessary for comparison with the numerical experiments of A. Brandenburg, K. H. Radler, M. Rheinhardt, and P. J. Kapyla [Astrophys. J. 676, 740 (2008)]. Explicit expressions are derived for all four components of the magnetic diffusivity tensor η_ij(τ). These are used to prove that the shear-current effect cannot be responsible for dynamo action at small Re and Rm, for any value of the shear parameter.
Abstract:
In this paper, we look at the problem of scheduling expression trees with reusable registers on delayed load architectures. Reusable registers come into the picture when the compiler has a data-flow analyzer which is able to estimate the extent of use of the registers. Earlier work considered the same problem without allowing for register variables. Subsequently, Venugopal considered non-reusable registers in the tree. We further extend these efforts to consider a much more general form of the tree. We describe an approximate algorithm for the problem. We formally prove that the code schedule produced by this algorithm will, in the worst case, generate one interlock and use just one more register than that used by the optimal schedule. Spilling is minimized. The approximate algorithm is simple and has linear complexity.
Abstract:
We give a simple linear algebraic proof of the following conjecture of Frankl and Furedi [7, 9, 13]. (Frankl-Furedi Conjecture) If F is a hypergraph on X = {1, 2, 3, ..., n} such that 1 ≤ |E ∩ F| ≤ k for all E, F ∈ F, E ≠ F, then |F| ≤ Σ_{i=0}^{k} (n-1 choose i). We generalise a method of Palisse, and our proof technique can be viewed as a variant of the technique used by Tverberg to prove a result of Graham and Pollak [10, 11, 14]. Our proof technique is easily described. First, we derive an identity satisfied by a hypergraph F using its intersection properties. From this identity, we obtain a set of homogeneous linear equations. We then show that this defines the zero subspace of R^|F|. Finally, the desired bound on |F| is obtained from the bound on the number of linearly independent equations. This proof technique can also be used to prove a more general theorem (Theorem 2). We conclude by indicating how this technique can be generalised to uniform hypergraphs by proving the uniform Ray-Chaudhuri-Wilson theorem. (C) 1997 Academic Press.
Abstract:
There are p heterogeneous objects to be assigned to n competing agents (n > p), each with unit demand. It is required to design a Groves mechanism for this assignment problem satisfying weak budget balance and individual rationality while minimizing the budget imbalance. This calls for designing an appropriate rebate function. When the objects are identical, this problem has been solved by what we refer to as the WCO mechanism. We measure the performance of such mechanisms by the redistribution index. We first prove an impossibility theorem which rules out linear rebate functions with non-zero redistribution index in heterogeneous object assignment. Motivated by this theorem, we explore two approaches to get around this impossibility. In the first approach, we show that linear rebate functions with non-zero redistribution index are possible when the valuations for the objects have a certain type of relationship, and we design a mechanism with a linear rebate function that is worst-case optimal. In the second approach, we show that rebate functions with non-zero efficiency are possible if linearity is relaxed. We extend the rebate functions of the WCO mechanism to heterogeneous object assignment and conjecture them to be worst-case optimal.
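For the identical-objects baseline that the abstract builds on, a simple rebate scheme in the same family can be sketched (this is the Bailey-Cavallo-style rebate, a simpler cousin of the worst-case-optimal WCO mechanism, shown purely for illustration): each of the p winners pays the (p+1)-th highest bid, and each agent receives a rebate computed from the other agents' bids only, which preserves truthfulness and weak budget balance.

```python
def vcg_with_cavallo_rebates(bids, p):
    """p identical objects, unit-demand agents; requires n >= p + 2 bidders."""
    n = len(bids)
    order = sorted(range(n), key=lambda i: -bids[i])
    winners = set(order[:p])
    price = sorted(bids, reverse=True)[p]        # VCG price: (p+1)-th highest bid
    rebates = []
    for i in range(n):
        # Rebate depends only on the other agents' bids, so it cannot
        # distort agent i's incentives.
        others = sorted((b for j, b in enumerate(bids) if j != i), reverse=True)
        rebates.append(p * others[p] / n)
    payments = [price - rebates[i] if i in winners else -rebates[i]
                for i in range(n)]
    return winners, payments
```

The sum of payments stays non-negative (weak budget balance) but is smaller than the raw VCG revenue, which is exactly the budget imbalance that rebate design tries to minimize.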