93 results for Discrete analytic function theory
Abstract:
The specific objective of this paper is to develop direct digital control strategies for an ammonia reactor using quadratic regulator theory and to compare the performance of the resulting control system with that under conventional PID regulators. The controller design studies are based on a ninth-order state-space model obtained from the exact nonlinear distributed model using linearization and lumping approximations. The evaluation of these controllers with respect to their disturbance-rejection capabilities and transient response characteristics is carried out using hybrid computer simulation.
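As a brief illustration of the quadratic-regulator design step, the steady-state discrete-time LQR gain can be obtained by iterating the Riccati difference equation. The matrices below are hypothetical (the paper uses a ninth-order linearized ammonia-reactor model):

```python
import numpy as np

# Hypothetical 2-state discrete-time model, for illustration only.
A = np.array([[0.95, 0.10],
              [0.00, 0.90]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)          # state weighting
R = np.array([[1.0]])  # input weighting

# Iterate the discrete Riccati difference equation to convergence.
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# u[k] = -K x[k] is the steady-state LQR feedback law.
print(K)
```

The resulting closed-loop matrix A - BK has all eigenvalues inside the unit circle, which is the stability property the comparison with PID regulation rests on.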
Abstract:
The use of Wiener–Lee transforms to construct one of the frequency characteristics (magnitude or phase) of a network function, when the other characteristic is given graphically, is described. This application is useful in finding a realisable network function whose magnitude or phase curve is given. A discrete version of the transform is presented so that a digital computer can be employed for the computation.
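A minimal sketch of one discrete magnitude-to-phase construction, assuming a minimum-phase network function and using the real-cepstrum folding trick. This is a discrete Hilbert-transform relation in the same spirit as the Wiener–Lee construction, not necessarily the paper's exact algorithm:

```python
import numpy as np

def min_phase_from_log_mag(log_mag):
    """Recover the minimum-phase phase curve from uniformly sampled
    log-magnitude via the real cepstrum (a discrete Hilbert-transform
    relation; assumes the underlying network function is minimum phase)."""
    n = len(log_mag)
    cep = np.fft.ifft(log_mag).real          # real cepstrum (even sequence)
    fold = np.zeros(n)
    fold[0] = 1.0
    fold[1:(n + 1) // 2] = 2.0               # fold negative quefrencies
    if n % 2 == 0:
        fold[n // 2] = 1.0
    spec = np.fft.fft(fold * cep)            # log H(e^{jw}) of min-phase H
    return spec.imag                         # the phase curve

# Example: H(z) = 1/(1 - 0.5 z^{-1}) is minimum phase, so the phase
# reconstructed from |H| alone should match the true phase.
n = 512
wgrid = 2 * np.pi * np.arange(n) / n
H = 1.0 / (1.0 - 0.5 * np.exp(-1j * wgrid))
phase = min_phase_from_log_mag(np.log(np.abs(H)))
print(np.max(np.abs(phase - np.angle(H))))
```

For non-minimum-phase functions the magnitude does not determine the phase, which is why the realisability question in the abstract matters.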
Abstract:
The setting considered in this paper is one of distributed function computation. More specifically, there is a collection of N sources possessing correlated information and a destination that would like to acquire a specific linear combination of the N sources. We address both the case when the common alphabet of the sources is a finite field and the case when it is a finite, commutative principal ideal ring with identity. The goal is to minimize the total amount of information that the N sources need to transmit while enabling reliable recovery at the destination of the linear combination sought. One means of achieving this goal is for each of the sources to compress all the information it possesses and transmit this to the receiver. The Slepian-Wolf theorem of information theory governs the minimum rate at which each source must transmit while enabling all data to be reliably recovered at the receiver. However, recovering all the data at the destination is often wasteful of resources, since the destination is only interested in computing a specific linear combination. An alternative explored here is one in which each source is compressed using a common linear mapping and then transmitted to the destination, which then uses linearity to directly recover the needed linear combination. The article is part review and in part presents new results: the portion dealing with finite fields is previously known material, while that dealing with rings is mostly new. Attempting to find the best linear map that will enable function computation forces us to consider the linear compression of a single source. While in the finite field case it is known that a source can be linearly compressed down to its entropy, it turns out that the same does not hold in the case of rings. An explanation for this curious interplay between algebra and information theory is also provided in this paper.
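The linearity argument can be illustrated with a small GF(2) sketch (the sizes and the map A below are arbitrary choices for illustration): each source applies the same linear compression, and the destination recovers the compression of the mod-2 sum without recovering the individual sources:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: two length-8 binary sources compressed by a
# common 5x8 linear map over GF(2).
n, m, N = 8, 5, 2
A = rng.integers(0, 2, size=(m, n))
sources = [rng.integers(0, 2, size=n) for _ in range(N)]

# Each source transmits only its compressed vector A x_i (mod 2).
transmissions = [(A @ x) % 2 for x in sources]

# By linearity, summing the transmissions equals the compression of the
# desired linear combination (here the mod-2 sum of the sources).
lhs = sum(transmissions) % 2
rhs = (A @ (sum(sources) % 2)) % 2
print(np.array_equal(lhs, rhs))
```

Whether the destination can then decode the combination reliably depends on choosing A well relative to the joint source statistics, which is the rate question the abstract addresses.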
Abstract:
The characteristic function of a contraction is a classical complete unitary invariant devised by Sz.-Nagy and Foias. Just as a contraction is related to the Szego kernel k_S(z, w) = (1 - z w̄)^(-1) for |z|, |w| < 1, by means of (1/k_S)(T, T*) = 0, we consider an arbitrary open connected domain Omega in C^n, a kernel k on Omega such that 1/k is a polynomial, and a tuple T = (T_1, T_2, ..., T_n) of commuting bounded operators on a complex separable Hilbert space H such that (1/k)(T, T*) >= 0. Under some standard assumptions on k, it turns out that whether a characteristic function can be associated with T or not depends not only on T, but also on the kernel k. We give a necessary and sufficient condition. When this condition is satisfied, a functional model can be constructed. Moreover, the characteristic function is then a complete unitary invariant for a suitable class of tuples T.
Abstract:
Water brings its remarkable thermodynamic and dynamic anomalies from the pure liquid state into the biological world, where water molecules face a multitude of additional interactions that frustrate their hydrogen-bond network. Yet water molecules participate in and control an enormous number of biological processes in ways that are yet to be understood at the molecular level. We discuss the thermodynamics, structure, dynamics and properties of water around proteins and DNA, along with those in reverse micelles. We discuss the roles of water in enzyme kinetics, in drug-DNA intercalation and in kinetic proofreading (the theory of how errors are avoided in biosynthesis). We also discuss how water may play an important role in the natural selection of biomolecules. (C) 2011 Elsevier B.V. All rights reserved.
Abstract:
An attempt is made to study the two-dimensional (2D) effective electron mass (EEM) in quantum wells (QWs), inversion layers (ILs) and NIPI superlattices of Kane-type semiconductors in the presence of strong external photoexcitation, on the basis of newly formulated electron dispersion laws within the framework of the k.p formalism. It has been found, taking InAs and InSb as examples, that the EEM in QWs, ILs and superlattices increases with increasing concentration, light intensity and wavelength of the incident light waves, respectively, and the numerical magnitudes in each case are band-structure dependent. The EEM in ILs is quantum-number dependent, exhibiting quantum jumps for specified values of the surface electric field, while in NIPI superlattices it is a function of the Fermi energy and the subband index characterizing such 2D structures. The humps in the respective curves are due to the redistribution of electrons among the quantized energy levels when the quantum number corresponding to the highest occupied level changes from one fixed value to another. Although the EEM varies in various ways with all the variables, as evident from the curves, the rates of variation depend entirely on the specific dispersion relation of the particular 2D structure. Under certain limiting conditions, all the results derived in this paper reduce to well-known formulas for the EEM and the electron statistics in the absence of external photoexcitation, thus providing a compatibility test. The results of this paper find three applications in the field of microstructures. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
High-temperature superconductivity in the cuprates remains one of the most widely investigated, constantly surprising and poorly understood phenomena in physics. Here, we briefly describe a new phenomenological theory inspired by the celebrated description of superconductivity due to Ginzburg and Landau, which is believed to capture its essence. This posits a free-energy functional for the superconductor in terms of a complex order parameter characterizing it. We propose that there is, for superconducting cuprates, a similar functional of the complex, in-plane, nearest-neighbor spin-singlet bond (or Cooper) pair amplitude psi_ij. Further, we suggest that a crucial part of it is a (short-range) positive interaction between nearest-neighbor bond pairs, of strength J'. Such an interaction leads to nonzero long-wavelength phase stiffness, i.e., superconductive long-range order with the observed d-wave symmetry, below a temperature T_c ~ zJ', where z is the number of nearest neighbors; d-wave superconductivity is thus an emergent, collective consequence. Using the functional, we calculate a large range of properties, e.g., the pseudogap transition temperature T* as a function of hole doping x, the transition curve T_c(x), the superfluid stiffness rho_s(x, T), the specific heat (with and without a magnetic field) due to the fluctuating pair degrees of freedom, and the zero-temperature vortex structure. We find remarkable agreement with experiment. We also calculate the self-energy of electrons hopping on the square cuprate lattice and coupled to electrons of nearly opposite momenta via the inevitable long-wavelength Cooper pair fluctuations formed of these electrons. The resulting electron spectral densities compare successfully with recent angle-resolved photoemission spectroscopy (ARPES) measurements, and comprehensively explain strange features such as the temperature-dependent Fermi arcs above T_c and the "bending" of the superconducting gap below T_c.
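For orientation, the standard Ginzburg–Landau free-energy functional that the abstract cites as its inspiration has the form below; the paper's proposal replaces the continuum order parameter psi(r) by nearest-neighbor bond-pair amplitudes psi_ij on the lattice:

```latex
F[\psi] = \int d^3r \left[ \alpha\,|\psi|^2 + \frac{\beta}{2}\,|\psi|^4
        + \frac{\hbar^2}{2m^*}\,|\nabla\psi|^2 \right]
```

Here the gradient term is the continuum analogue of the nearest-neighbor bond-pair coupling of strength J' that produces the long-wavelength phase stiffness.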
Abstract:
We develop an online actor-critic reinforcement learning algorithm with function approximation for a problem of control under inequality constraints. We consider the long-run average-cost Markov decision process (MDP) framework, in which both the objective and the constraint functions are suitable policy-dependent long-run averages of certain sample-path functions. The Lagrange multiplier method is used to handle the inequality constraints. We prove the asymptotic almost-sure convergence of our algorithm to a locally optimal solution. We also provide the results of numerical experiments on a problem of routing in a multi-stage queueing network with constraints on long-run average queue lengths. We observe that our algorithm exhibits good performance in this setting and converges to a feasible point.
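The role of the Lagrange multiplier can be sketched on a toy two-action average-cost problem. This is a simple dual-ascent iteration, not the paper's actor-critic algorithm, and all numbers are illustrative: action 1 is cheaper but loads the constraint, and the multiplier settles so the constraint is met on average:

```python
import numpy as np

# Toy two-action average-cost problem (a stand-in for the paper's
# queueing MDP): per-action costs and constraint values.
cost = np.array([1.0, 0.2])
cons = np.array([0.0, 1.0])
bound = 0.4                      # require long-run average of cons <= bound

# Minimal dual-ascent sketch of the Lagrange-multiplier idea.
lam, eta = 0.0, 0.01
history = []
for _ in range(100000):
    # Primal step: exact minimizer of the Lagrangian cost over p1 in [0, 1]
    # (the Lagrangian is linear in p1, so the minimizer is at an endpoint).
    slope = (cost[1] - cost[0]) + lam * (cons[1] - cons[0])
    p1 = 1.0 if slope < 0 else 0.0
    history.append(p1)
    # Dual step: ascend on the average constraint violation.
    lam = max(0.0, lam + eta * (p1 * cons[1] + (1 - p1) * cons[0] - bound))

print(np.mean(history), lam)     # time-average of p1 near 0.4, lam near 0.8
```

The time-averaged policy meets the constraint with equality while the multiplier hovers at the price that makes the two actions indifferent; the paper replaces the exact primal step with online actor-critic updates and proves almost-sure convergence.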
Abstract:
We revisit the extraction of alpha_s(M_tau^2) from the QCD perturbative corrections to the hadronic tau branching ratio, using an improved fixed-order perturbation theory based on the explicit summation of all renormalization-group accessible logarithms, proposed some time ago in the literature. In this approach, the powers of the coupling in the expansion of the QCD Adler function are multiplied by a set of functions D_n, which depend themselves on the coupling and can be written in closed form by iteratively solving a sequence of differential equations. We find that the new expansion has an improved behavior in the complex energy plane compared to that of the standard fixed-order perturbation theory (FOPT), and is similar but not identical to the contour-improved perturbation theory (CIPT). With five terms in the perturbative expansion we obtain, in the MS-bar scheme, alpha_s(M_tau^2) = 0.338 +/- 0.010, using as input a precise value for the perturbative contribution to the hadronic width of the tau lepton reported recently in the literature.
Abstract:
Recent simulations of the stretching of tethered biopolymers at a constant speed v (Ponmurugan and Vemparala, 2011, Phys. Rev. E 84 060101(R)) have suggested that for any time t, the distribution of the fluctuating forces f responsible for chain deformation is governed by a relation of the form P(+f)/P(-f) = exp[gamma f], gamma being a coefficient that is solely a function of v and the temperature T. This result, which is reminiscent of the fluctuation theorems applicable to stochastic trajectories involving thermodynamic variables, is derived in this paper from an analytical calculation based on a generalization of Mazonka and Jarzynski's classic model of dragged particle dynamics (Mazonka and Jarzynski, 1999, arXiv:cond-mat/9912121v1). However, the analytical calculations suggest that the result holds only if t >> 1 and the force fluctuations are driven by white rather than colored noise; they further suggest that the coefficient gamma in the purported theorem varies not as v^0.15 T^(-0.7), as indicated by the simulations, but as v T^(-1).
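A minimal simulation sketch of the dragged-particle picture (illustrative parameters, not the paper's calculation): for an overdamped particle pulled by a harmonic trap at speed v under white noise, the stationary trap force is Gaussian, so ln[P(+f)/P(-f)] is linear in f with slope gamma = 2 v zeta / (k T), consistent with the v T^(-1) scaling:

```python
import numpy as np

rng = np.random.default_rng(42)

# Overdamped particle dragged by a harmonic trap moving at speed v
# (in the spirit of the Mazonka-Jarzynski model; parameters illustrative).
k, zeta, T, v, dt = 1.0, 1.0, 1.0, 0.5, 0.01
nsteps, burn = 200000, 20000

noise = np.sqrt(2 * T * dt / zeta) * rng.standard_normal(nsteps)
y = 0.0                          # trap-frame coordinate x - v t
forces = np.empty(nsteps - burn)
for i in range(nsteps):
    y += (-(k / zeta) * y - v) * dt + noise[i]
    if i >= burn:
        forces[i - burn] = -k * y    # fluctuating trap force

f = forces
# For a stationary Gaussian force, ln[P(+f)/P(-f)] = gamma * f with
# gamma = 2<f>/var(f); the model predicts gamma = 2 v zeta / (k T).
gamma_est = 2 * f.mean() / f.var()
print(gamma_est)                 # should be near 2*v*zeta/(k*T) = 1.0
```

Replacing the white noise with colored noise breaks the Gaussian steady state's simple mean/variance relation, which is the regime where the abstract says the theorem fails.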
Abstract:
The Morse-Smale complex is a topological structure that captures the behavior of the gradient of a scalar function on a manifold. This paper discusses scalable techniques to compute the Morse-Smale complex of scalar functions defined on large three-dimensional structured grids. Computing the Morse-Smale complex of three-dimensional domains is challenging compared to two-dimensional domains because of the non-trivial structure introduced by the two types of saddle criticalities. We present a parallel shared-memory algorithm to compute the Morse-Smale complex based on Forman's discrete Morse theory. The algorithm achieves scalability via synergistic use of the CPU and the GPU. We first prove that the discrete gradient on the domain can be computed independently for each cell and hence can be implemented on the GPU. Second, we describe a two-step graph traversal algorithm to compute the 1-saddle-2-saddle connections efficiently and in parallel on the CPU. Simultaneously, the extrema-saddle connections are computed using a tree traversal algorithm on the GPU.
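The per-cell independence that makes the discrete gradient GPU-friendly can be illustrated with a simpler data-parallel kernel (a hypothetical example, not Forman's pairing rule): each interior vertex is classified using only its own neighborhood, with no shared state, so all vertices can be processed in parallel:

```python
import numpy as np

def classify_critical(field):
    """Label interior vertices of a 2D scalar grid: 1 = local minimum,
    2 = local maximum, 0 = regular (4-neighborhood; illustrative only).
    Each vertex reads only its own neighbors, so the kernel is
    embarrassingly parallel, like the per-cell discrete gradient."""
    c = field[1:-1, 1:-1]
    nbrs = np.stack([field[:-2, 1:-1], field[2:, 1:-1],
                     field[1:-1, :-2], field[1:-1, 2:]])
    labels = np.zeros(c.shape, dtype=int)
    labels[(c < nbrs).all(axis=0)] = 1
    labels[(c > nbrs).all(axis=0)] = 2
    return labels

x, y = np.meshgrid(np.linspace(-2, 2, 41), np.linspace(-2, 2, 41))
f = x**2 + y**2                    # paraboloid: one minimum at the center
labels = classify_critical(f)
print((labels == 1).sum())         # exactly one local minimum
```

The saddle-saddle connections, by contrast, require graph traversal with dependencies between cells, which is why the paper routes that step through the CPU.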
Abstract:
The van der Waals and Platteeuw (vdWP) theory has been successfully used to model the thermodynamics of gas hydrates. However, earlier studies have shown that this success could be due to the presence of a large number of adjustable parameters whose values are obtained through regression against experimental data. To test this assertion, we carry out a systematic and rigorous study of the performance of various models of vdWP theory that have been proposed over the years. The hydrate phase equilibrium data used for this study are obtained from Monte Carlo molecular simulations of methane hydrates. The parameters of the vdWP theory are regressed from these equilibrium data and compared with their true values obtained directly from simulations. This comparison reveals that (i) methane-water interactions beyond the first cage and methane-methane interactions make a significant contribution to the partition function and thus cannot be neglected, (ii) rigorous Monte Carlo integration should be used to evaluate the Langmuir constant instead of the spherical smoothed-cell approximation, (iii) the parameter values describing the methane-water interactions cannot be correctly regressed from the equilibrium data using the vdWP theory in its present form, (iv) the regressed empty-hydrate property values closely match their true values irrespective of the level of rigor in the theory, and (v) the flexibility of the water lattice forming the hydrate phase needs to be incorporated in the vdWP theory. Since methane is among the simplest of hydrate-forming molecules, these conclusions should also hold for more complicated hydrate guest molecules.
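A hedged sketch of point (ii), using a toy spherically symmetric guest-cage potential with illustrative parameters: a Langmuir-constant-type configurational integral is evaluated by Monte Carlo over the full cage volume and checked against a radial quadrature. The two agree here only because the toy potential is exactly spherical; for a real, non-spherical cage the Monte Carlo value and the smoothed-cell value differ:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy configurational integral C ~ (1/kT) * Int_cage exp(-w(r)/kT) d^3r.
kT, a, R = 1.0, 2.0, 1.0          # temperature, well stiffness, cage radius

def w(pts):
    return a * (pts**2).sum(axis=1)   # harmonic guest-cage potential

# Uniform samples inside the sphere via rejection from the bounding cube.
n = 400000
pts = rng.uniform(-R, R, size=(n, 3))
pts = pts[(pts**2).sum(axis=1) <= R**2]
vol = 4.0 / 3.0 * np.pi * R**3
C_mc = vol * np.exp(-w(pts) / kT).mean() / kT

# Reference: the same integral reduced to a 1D radial (trapezoid) quadrature.
r = np.linspace(0.0, R, 20001)
g = r**2 * np.exp(-a * r**2 / kT)
C_quad = 4.0 * np.pi * np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(r)) / kT
print(C_mc, C_quad)
```

The Monte Carlo route generalizes directly to an atomistic, non-spherical cage potential, which is the point of recommendation (ii).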
Abstract:
Wireless sensor networks can often be viewed in terms of a uniform deployment of a large number of nodes in a region of Euclidean space. Following deployment, the nodes self-organize into a mesh topology with a key aspect being self-localization. Having obtained a mesh topology in a dense, homogeneous deployment, a frequently used approximation is to take the hop distance between nodes to be proportional to the Euclidean distance between them. In this work, we analyze this approximation through two complementary analyses. We assume that the mesh topology is a random geometric graph on the nodes; and that some nodes are designated as anchors with known locations. First, we obtain high probability bounds on the Euclidean distances of all nodes that are h hops away from a fixed anchor node. In the second analysis, we provide a heuristic argument that leads to a direct approximation for the density function of the Euclidean distance between two nodes that are separated by a hop distance h. This approximation is shown, through simulation, to very closely match the true density function. Localization algorithms that draw upon the preceding analyses are then proposed and shown to perform better than some of the well-known algorithms present in the literature. Belief-propagation-based message-passing is then used to further enhance the performance of the proposed localization algorithms. To our knowledge, this is the first usage of message-passing for hop-count-based self-localization.
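The hop-distance approximation can be probed with a short simulation (the node count and connection range below are illustrative): build a random geometric graph, run BFS from an anchor, and compare hop counts with Euclidean distances:

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(3)

# Dense random geometric graph on the unit square: nodes connect when
# within range r; hop counts from anchor node 0 via BFS.
n, r = 2000, 0.08
pts = rng.uniform(0, 1, size=(n, 2))
d2 = ((pts[:, None, :] - pts[None, :, :])**2).sum(-1)
adj = (d2 <= r**2) & ~np.eye(n, dtype=bool)

hops = np.full(n, -1)
hops[0] = 0
q = deque([0])
while q:
    u = q.popleft()
    for v in np.flatnonzero(adj[u]):
        if hops[v] < 0:
            hops[v] = hops[u] + 1
            q.append(v)

# Sanity checks on the approximation: Euclidean distance from the anchor
# is at most hops * r (each hop covers at most r), and in a dense
# deployment it correlates strongly with the hop count.
dist = np.sqrt(d2[0])
reached = hops > 0
print(np.corrcoef(hops[reached], dist[reached])[0, 1])
```

The high-probability bounds in the abstract sharpen exactly this picture: they bound where a node h hops from an anchor can lie, and the density approximation describes the spread around h*r.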