381 results for Linear quadratic regulator (LQR)
Abstract:
We provide new analytical results concerning the spread of information or influence under the linear threshold social network model introduced by Kempe et al., in the context of information dissemination. The seeder starts by providing the message to a set of initial nodes and is interested in maximizing the number of nodes that ultimately receive the message. A node's decision to forward the message depends on the set of nodes from which it has received the message: under the linear threshold model, a node forwards the information when the total influence of the nodes from which it has received the packet exceeds its own influence threshold. We derive analytical expressions for the expected number of nodes that ultimately receive the message, as a function of the initial set of nodes, for a generic network. We show that the problem can be recast in the framework of Markov chains. We then use the analytical expression to gain insights into information dissemination in some simple network topologies such as the star, ring, and mesh, and on acyclic graphs. We also derive the optimal initial set for the above networks, and hint at general heuristics for picking a good initial set.
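The linear threshold dynamics summarized above can be sketched with a small Monte Carlo simulation (a generic illustration under the standard random-threshold formulation, not the paper's analytical derivation; the example graph, influence weights, and seed set are hypothetical):

```python
import random

def linear_threshold_spread(graph, weights, seeds, trials=1000, rng=None):
    """Monte Carlo estimate of the expected number of nodes that
    ultimately receive the message under the linear threshold model.

    graph   : dict node -> list of in-neighbours (potential influencers)
    weights : dict (u, v) -> influence of u on v, with sum over u <= 1
    seeds   : initial set of informed nodes
    """
    rng = rng or random.Random(0)
    total = 0
    for _ in range(trials):
        # each node draws its influence threshold uniformly at random
        theta = {v: rng.random() for v in graph}
        active = set(seeds)
        changed = True
        while changed:
            changed = False
            for v in graph:
                if v in active:
                    continue
                influence = sum(weights[(u, v)]
                                for u in graph[v] if u in active)
                if influence >= theta[v]:
                    active.add(v)
                    changed = True
        total += len(active)
    return total / trials

# hypothetical 4-node star: node 0 is the hub influencing leaves 1..3
graph = {0: [], 1: [0], 2: [0], 3: [0]}
weights = {(0, 1): 0.5, (0, 2): 0.5, (0, 3): 0.5}
print(linear_threshold_spread(graph, weights, seeds={0}))
```

With the hub seeded, each leaf forwards with probability 1/2, so the estimate should be close to the analytical expectation of 2.5 nodes.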
Abstract:
This paper presents methodologies for incorporating phasor measurements into a conventional state estimator. The angle measurements obtained from Phasor Measurement Units (PMUs) are handled as angle-difference measurements rather than being incorporated directly; handling them in this manner overcomes the problems arising from the choice of reference bus. Current measurements obtained from PMUs are treated as equivalent pseudo-voltage measurements at the neighboring buses. Two solution approaches, namely a normal-equations approach and a linear-programming approach, are presented to show how the PMU measurements can be handled, together with a comparative evaluation of the two. Test results on the IEEE 14-bus system validate both approaches.
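The normal-equations approach mentioned above can be illustrated with a minimal weighted-least-squares sketch in which a PMU angle-difference row augments the measurement model (the two-state system, measurement values, and weights below are invented for illustration and are not the paper's IEEE 14-bus test case):

```python
def wls_estimate(H, z, weights):
    """Weighted least squares via the normal equations:
    (H^T W H) x = H^T W z, solved with Gaussian elimination."""
    m, n = len(H), len(H[0])
    # build A = H^T W H and b = H^T W z
    A = [[sum(weights[k] * H[k][i] * H[k][j] for k in range(m))
          for j in range(n)] for i in range(n)]
    b = [sum(weights[k] * H[k][i] * z[k] for k in range(m))
         for i in range(n)]
    # forward elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][j] * x[j]
                           for j in range(i + 1, n))) / A[i][i]
    return x

# states: angles (rad) at buses 2 and 3, with bus 1 as reference;
# rows: conventional estimates of theta2 and theta3, plus one PMU
# angle-difference measurement theta2 - theta3 (reference-free)
H = [[1, 0], [0, 1], [1, -1]]
z = [0.102, -0.049, 0.149]      # noisy measurements (illustrative)
weights = [1.0, 1.0, 10.0]      # PMU row weighted more heavily
theta = wls_estimate(H, z, weights)
print(theta)
```

Because the PMU row measures theta2 - theta3 directly, its contribution is unchanged by any shift of the reference angle, which is the point of treating PMU angles as differences.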
Abstract:
Let X_1, ..., X_m be a set of m statistically dependent sources over the common alphabet F_q that are linearly independent when considered as functions over the sample space. We consider a distributed function computation setting in which the receiver is interested in the lossless computation of the elements of an s-dimensional subspace W spanned by the elements of the row vector [X_1, ..., X_m]Gamma, in which the (m x s) matrix Gamma has rank s. A sequence of three increasingly refined approaches is presented, all based on linear encoders. The first approach uses a common matrix to encode all the sources and a Korner-Marton-like receiver to directly compute W. The second improves upon the first by showing that it is often more efficient to compute a carefully chosen superspace U of W. The superspace is identified by showing that the joint distribution of the {X_i} induces a unique decomposition of the set of all linear combinations of the {X_i} into a chain of subspaces identified by a normalized measure of entropy. This subspace chain also suggests a third approach, one that employs nested codes. For any joint distribution of the {X_i} and any W, the sum rate of the nested-code approach is no larger than that under the Slepian-Wolf (SW) approach, under which W is computed by first recovering each of the {X_i}. For a large class of joint distributions and subspaces W, the nested-code approach is shown to improve upon SW. Additionally, a class of source distributions and subspaces is identified for which the nested-code approach is sum-rate optimal.
Abstract:
In the document classification community, support vector machines and the naive Bayes classifier are known for their simple yet excellent performance. The feature subsets used by these two approaches normally complement each other; however, little has been done to combine them. The essence of this paper is a linear classifier, closely related to both, obtained by a novel way of combining the two approaches that synthesizes the best of each into a hybrid model. We evaluate the proposed approach on the 20 Newsgroups (20ng) dataset and compare it with its counterparts. Our results strongly corroborate the effectiveness of the approach.
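As context for the abstract above, multinomial naive Bayes is itself a linear classifier in log space, which is what makes combining it with a linear SVM natural. A minimal two-class sketch (the toy documents and labels are invented for illustration; this is not the authors' hybrid model):

```python
import math
from collections import Counter

def train_nb_weights(docs, labels, alpha=1.0):
    """Multinomial naive Bayes for two classes, expressed as a linear
    classifier: predict class 1 iff w . counts + b > 0."""
    vocab = sorted({t for d in docs for t in d})
    counts = {0: Counter(), 1: Counter()}
    n_docs = Counter(labels)
    for d, y in zip(docs, labels):
        counts[y].update(d)
    total = {c: sum(counts[c].values()) for c in (0, 1)}
    # per-term weight: log-likelihood ratio with Laplace smoothing
    w = {}
    for t in vocab:
        p1 = (counts[1][t] + alpha) / (total[1] + alpha * len(vocab))
        p0 = (counts[0][t] + alpha) / (total[0] + alpha * len(vocab))
        w[t] = math.log(p1) - math.log(p0)
    b = math.log(n_docs[1] / n_docs[0])   # log prior odds
    return w, b

def predict(w, b, doc):
    score = b + sum(w.get(t, 0.0) * c for t, c in Counter(doc).items())
    return int(score > 0)

docs = [["good", "great"], ["good", "fine"],
        ["bad", "awful"], ["bad", "poor"]]
labels = [1, 1, 0, 0]
w, b = train_nb_weights(docs, labels)
print(predict(w, b, ["good"]), predict(w, b, ["awful"]))  # -> 1 0
```

A hybrid of the kind described could then, for example, combine these weights with SVM weights in a single linear model; the combination rule itself is specific to the paper and is not reproduced here.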
Abstract:
We present a study correlating uniaxial stress in a polymer with its underlying structure as it is strained. The uniaxial stress is significantly influenced by the mean-square bond length and mean bond angle. In contrast, the size and shape of the polymer, typically represented by the end-to-end length, mass ratio, and radius of gyration, contribute negligibly. Among externally set control variables, density and polymer chain length play a critical role in influencing the anisotropic uniaxial stress. Short-chain polymers behave more or less like rigid molecules. Temperature and rate of loading, in the range considered, have a very mild effect on the uniaxial stress.
Abstract:
In this study, we determined the molecular mechanisms by which homocysteine differentially affects receptor activator of nuclear factor-kappa B ligand (RANKL) and osteoprotegerin (OPG) synthesis in the bone. The results showed that oxidative stress induced by homocysteine deranges insulin-sensitive FOXO1 and MAP kinase signaling cascades to decrease OPG and increase RANKL synthesis in osteoblast cultures. We observed that downregulation of insulin/FOXO1 and p38 MAP kinase signaling mechanisms due to phosphorylation of protein phosphatase 2A (PP2A) was the key event that inhibited OPG synthesis in homocysteine-treated osteoblast cultures. siRNA knockdown experiments confirmed that FOXO1 is integral to OPG and p38 synthesis. Conversely, homocysteine increased RANKL synthesis in osteoblasts through c-Jun/JNK MAP kinase signaling mechanisms independent of FOXO1. In the rat bone milieu, high-methionine diet-induced hyperhomocysteinemia lowered FOXO1 and OPG expression and increased synthesis of proresorptive and inflammatory cytokines such as RANKL, M-CSF, IL-1 alpha, IL-1 beta, G-CSF, GM-CSF, MIP-1 alpha, IFN-gamma, IL-17, and TNF-alpha. Such pathophysiological conditions were exacerbated by ovariectomy. Lowering the serum homocysteine level by simultaneous supplementation with N-acetylcysteine improved OPG and FOXO1 expression and partially antagonized RANKL and proresorptive cytokine synthesis in the bone milieu. These results emphasize that hyperhomocysteinemia alters the redox regulatory mechanism in the osteoblast by activating PP2A and deranging FOXO1 and MAPK signaling cascades, eventually shifting the OPG:RANKL ratio toward increased osteoclast activity and decreased bone quality.
Abstract:
We propose an eigenvalue-based technique to solve the homogeneous quadratically constrained quadratic programming (HQCQP) problem with at most three constraints, which arises in many signal processing problems. Semi-definite relaxation (SDR) is the only known approach and is computationally intensive. We study the performance of the proposed fast eigen approach through simulations in the context of MIMO relays and show that its solution converges to the one obtained using the SDR approach, with a significant reduction in complexity.
Abstract:
The random eigenvalue problem arises in frequency and mode shape determination for a linear system with uncertainties in structural properties. Among several methods of characterizing this random eigenvalue problem, one computationally fast method that gives good accuracy is a weak formulation using polynomial chaos expansion (PCE). In this method, the eigenvalues and eigenvectors are expanded in PCE, and the residual is minimized by a Galerkin projection. The goals of the current work are (i) to implement this PCE-characterized random eigenvalue problem in the dynamic response calculation under random loading and (ii) to explore the computational advantages and challenges. In the proposed method, the response quantities are also expressed in PCE, followed by a Galerkin projection. A numerical comparison with a perturbation method and Monte Carlo simulation shows that when the loading has a random amplitude but deterministic frequency content, the proposed method gives more accurate results than a first-order perturbation method and accuracy comparable to that of Monte Carlo simulation in a lower computational time. However, as the frequency content of the loading becomes random, or for general random process loadings, the method loses its accuracy and computational efficiency. Issues in implementation, limitations, and further challenges are also addressed.
Abstract:
We study the absorption spectra and two-photon absorption coefficients of expanded porphyrins (EPs) by the density matrix renormalization group (DMRG) technique. We employ the Pariser-Parr-Pople (PPP) Hamiltonian, which includes long-range electron-electron interactions. We find that, in the 4n+2 EPs, there are two prominent low-lying one-photon excitations, while in 4n EPs there is only one such excitation. We also find that 4n+2 EPs have large two-photon absorption cross sections compared to 4n EPs. The charge density rearrangement in the one-photon excited state is mostly at the pyrrole nitrogen site and at the meso carbon sites, while in the two-photon states it occurs mostly at the aza-ring sites. In the one-photon state, the C-C bond lengths in the aza rings show a tendency to become uniform. In the two-photon state, the bond distortions are on the C-N bonds of the pyrrole ring and the adjoining C-C bonds which connect the pyrrole ring to the aza or meso carbon sites.
Abstract:
The algebraic formulation of linear network coding in acyclic networks in which each link has an integer delay is well known. Based on this formulation, for a given set of connections over an arbitrary acyclic network with integer link delays, the output symbols at the sink nodes at any given time instant are an F_q-linear combination of input symbols across different generations, where F_q denotes the field over which the network operates. We use the finite-field discrete Fourier transform (DFT) to convert the output symbols at the sink nodes at any given time instant into an F_q-linear combination of the input symbols generated during the same generation. We refer to this as transforming the acyclic network with delay into n instantaneous networks (for sufficiently large n). We show that, under certain conditions, there exists a network code satisfying the sink demands in the usual (non-transform) approach if and only if there exists one in the transform approach. Furthermore, assuming time-invariant local encoding kernels, we show that the transform method, combined with alignment strategies, can achieve half the rate corresponding to the individual source-destination min-cuts (assumed to be equal to 1) for some classes of three-source three-destination multiple unicast networks with delays when the zero-interference condition is not satisfied.
Abstract:
Visual search in real life involves complex displays with a target among multiple types of distracters, but in the laboratory, it is often tested using simple displays with identical distracters. Can complex search be understood in terms of simple searches? This link may not be straightforward if complex search has emergent properties. One such property is linear separability, whereby search is hard when a target cannot be separated from its distracters using a single linear boundary. However, evidence in favor of linear separability is based on testing stimulus configurations in an external parametric space that need not be related to their true perceptual representation. We therefore set out to assess whether linear separability influences complex search at all. Our null hypothesis was that complex search performance depends only on classical factors such as target-distracter similarity and distracter homogeneity, which we measured using simple searches. Across three experiments involving a variety of artificial and natural objects, differences between linearly separable and nonseparable searches were explained using target-distracter similarity and distracter heterogeneity. Further, simple searches accurately predicted complex search regardless of linear separability (r = 0.91). Our results show that complex search is explained by simple search, refuting the widely held belief that linear separability influences visual search.
Abstract:
Earlier work on cyclic pursuit systems has shown that, by using heterogeneous gains for the agents in linear cyclic pursuit, the point of convergence (rendezvous point) can be chosen arbitrarily, subject to some restrictions on the set of reachable points. The use of deviated cyclic pursuit, as discussed in this paper, expands this set to include points that are not reachable by any known linear cyclic pursuit scheme. The limits on the deviations are determined by stability considerations; these limits are obtained analytically in this paper, along with results on the expansion of the reachable set, the latter also being verified through simulations.
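The rendezvous behaviour of linear cyclic pursuit referred to above can be checked numerically. A minimal sketch assuming 1-D single-integrator agents with positive heterogeneous gains (the gains and initial positions are illustrative):

```python
def cyclic_pursuit(x0, gains, dt=0.001, steps=20000):
    """Simulate 1-D linear cyclic pursuit: agent i chases agent i+1
    (mod n) with its own gain, x_i' = k_i * (x_{i+1} - x_i).
    Returns the final positions after Euler integration."""
    n = len(x0)
    x = list(x0)
    for _ in range(steps):
        x = [x[i] + dt * gains[i] * (x[(i + 1) % n] - x[i])
             for i in range(n)]
    return x

x0 = [0.0, 4.0, 10.0]
gains = [1.0, 2.0, 4.0]          # heterogeneous positive gains
xf = cyclic_pursuit(x0, gains)

# With positive gains, (1/k_1, ..., 1/k_n) is a left null vector of
# the pursuit matrix, so sum(x_i / k_i) is conserved and fixes the
# rendezvous point:
predicted = (sum(xi / ki for xi, ki in zip(x0, gains))
             / sum(1 / ki for ki in gains))
print(predicted, xf)
```

Changing the gains moves this conserved-quantity rendezvous point, which is the degree of freedom that deviated cyclic pursuit, as discussed in the paper, enlarges further.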
Abstract:
Three codes that can solve three-dimensional linear elastostatic problems using constant boundary elements, while ignoring body forces, are provided here. The file 'bemconst.m' contains a MATLAB code for solving such problems. The file 'bemconst.f90' is a Fortran translation of the MATLAB code in 'bemconst.m', and 'bemconstp.f90' is a parallelized version of that Fortran code. The file 'inbem96.txt' is the input file for the Fortran codes in 'bemconst.f90' and 'bemconstp.f90'. The author hereby declares that the present codes are the original works of the author, and that none of them, in full or in part, is a translation or a copy of any existing code written by someone else. The author's institution (Indian Institute of Science) has informed the author in writing that it is not interested in claiming any copyright on the present codes. The author is hereby distributing the codes under the MIT License; the full text of the license is included in each of the files that contain the codes.
Abstract:
An epoch is defined as the instant of significant excitation within a pitch period of voiced speech. Epoch extraction continues to attract the interest of researchers because of its significance in speech analysis. Existing high-performance epoch extraction algorithms require either dynamic programming techniques or a priori information about the average pitch period. An algorithm without such requirements is proposed, based on the integrated linear prediction residual (ILPR), which resembles the voice source signal. The half-wave rectified and negated ILPR (or the Hilbert transform of the ILPR) is used as the pre-processed signal. A new non-linear temporal measure, the plosion index (PI), is proposed for detecting `transients' in the speech signal, and an extension of it, the dynamic plosion index (DPI), is applied to the pre-processed signal to estimate the epochs. The proposed DPI algorithm is validated on six large databases that provide simultaneous EGG recordings; creaky and singing voice samples are also analyzed. The algorithm has been tested for robustness in the presence of additive white and babble noise and on simulated telephone-quality speech. Its performance is found to be comparable to or better than that of five state-of-the-art techniques for the experiments considered.