111 results for LARGE SYSTEMS
at Indian Institute of Science - Bangalore - India
Abstract:
In this paper, we consider the synthesis of decentralized dynamic compensators for large systems. The eliminant approach is used to obtain sufficient conditions for the existence of proper, stable, decentralized observer-controllers for stabilizing a large system. An illustrative example is given.
Abstract:
In this paper, we propose low-complexity algorithms based on Monte Carlo sampling for signal detection and channel estimation on the uplink in large-scale multiuser multiple-input-multiple-output (MIMO) systems with tens to hundreds of antennas at the base station (BS) and a similar number of uplink users. A BS receiver that employs a novel mixed sampling technique (which makes a probabilistic choice between Gibbs sampling and random uniform sampling in each coordinate update) for detection and a Gibbs-sampling-based method for channel estimation is proposed. The algorithm proposed for detection alleviates the stalling problem encountered at high signal-to-noise ratios (SNRs) in conventional Gibbs-sampling-based detection and achieves near-optimal performance in large systems with M-ary quadrature amplitude modulation (M-QAM). A novel ingredient in the detection algorithm that is responsible for achieving near-optimal performance at low complexity is the joint use of a mixed Gibbs sampling (MGS) strategy coupled with a multiple restart (MR) strategy with an efficient restart criterion. Near-optimal detection performance is demonstrated for a large number of BS antennas and users (e.g., 64 and 128 BS antennas and users). The proposed Gibbs-sampling-based channel estimation algorithm refines an initial estimate of the channel obtained during the pilot phase through iterations with the proposed MGS-based detection during the data phase. In time-division duplex systems where channel reciprocity holds, these channel estimates can be used for multiuser MIMO precoding on the downlink. The proposed receiver is shown to achieve good performance and scale well for large dimensions.
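As a concrete illustration of the mixed sampling update described above, the following sketch implements a per-coordinate choice between the Gibbs conditional and a uniform random draw. It is a minimal sketch, not the authors' code: the mixing probability q = 1/K, the random initialization, and the best-vector tracking are assumptions.

```python
import numpy as np

def mgs_detect(y, H, alphabet, sigma2, n_iters=100, q=None, rng=None):
    """Mixed Gibbs sampling (MGS) detection sketch for y = Hx + n.

    In each coordinate update, with probability q the symbol is drawn
    uniformly at random (to escape high-SNR stalling); otherwise it is
    drawn from the Gibbs conditional. Illustrative assumptions: q = 1/K
    and a random initial vector.
    """
    rng = np.random.default_rng() if rng is None else rng
    K = H.shape[1]
    q = 1.0 / K if q is None else q
    x = rng.choice(alphabet, size=K)                 # random initial vector
    best_x = x.copy()
    best_cost = np.linalg.norm(y - H @ x) ** 2
    for _ in range(n_iters):
        for i in range(K):
            if rng.random() < q:                     # random uniform sampling
                x[i] = rng.choice(alphabet)
            else:                                    # Gibbs conditional draw
                costs = []
                for s in alphabet:
                    x[i] = s
                    costs.append(np.linalg.norm(y - H @ x) ** 2)
                costs = np.asarray(costs)
                w = np.exp(-(costs - costs.min()) / sigma2)
                x[i] = rng.choice(alphabet, p=w / w.sum())
            cost = np.linalg.norm(y - H @ x) ** 2
            if cost < best_cost:                     # track best visited vector
                best_x, best_cost = x.copy(), cost
    return best_x
```

For 4-QAM, for instance, one could pass alphabet = np.array([1+1j, 1-1j, -1+1j, -1-1j]) (an illustrative constellation).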
Abstract:
In this paper, we propose a low-complexity algorithm based on the Markov chain Monte Carlo (MCMC) technique for signal detection on the uplink in large-scale multiuser multiple-input multiple-output (MIMO) systems with tens to hundreds of antennas at the base station (BS) and a similar number of uplink users. The algorithm employs a randomized sampling method (which makes a probabilistic choice between Gibbs sampling and random sampling in each iteration) for detection. The proposed algorithm alleviates the stalling problem encountered at high SNRs in the conventional MCMC algorithm and achieves near-optimal performance in large systems with M-QAM. A novel ingredient in the algorithm that is responsible for achieving near-optimal performance at low complexity is the joint use of a randomized MCMC (R-MCMC) strategy coupled with a multiple restart strategy with an efficient restart criterion. Near-optimal detection performance is demonstrated for a large number of BS antennas and users (e.g., 64, 128, 256 BS antennas/users).
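The restart strategy can be layered on top of any such sampler. Below is a minimal sketch reusing the mgs_detect sketch above; the noise-floor stopping rule is a simplified stand-in for the paper's efficient restart criterion, not the criterion itself.

```python
import numpy as np

def mgs_with_restarts(y, H, alphabet, sigma2, max_restarts=10, n_iters=50, rng=None):
    """Multiple-restart wrapper around the MGS sketch above.

    Illustrative restart criterion (an assumption, simpler than the
    paper's): stop once the residual cost is consistent with the noise
    power, i.e. ||y - Hx||^2 near N*sigma2; otherwise restart from a
    fresh random point and keep the best candidate found so far.
    """
    rng = np.random.default_rng() if rng is None else rng
    N = H.shape[0]
    best_x, best_cost = None, np.inf
    for _ in range(max_restarts):
        x = mgs_detect(y, H, alphabet, sigma2, n_iters=n_iters, rng=rng)
        cost = np.linalg.norm(y - H @ x) ** 2
        if cost < best_cost:
            best_x, best_cost = x, cost
        if best_cost <= N * sigma2:      # residual at the noise floor: accept
            break
    return best_x
```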
Abstract:
A model of the precipitation process in reverse micelles has been developed to calculate the size of fine particles obtained therein. While the method shares several features of particle nucleation and growth common to precipitation in large systems, complexities arise in describing the processes of nucleation, due to the extremely small size of a micelle, and of particle growth, caused by fusion among the micelles. Occupancy of micelles by solubilized molecules is governed by Poisson statistics, implying that most of them are empty and cannot nucleate on their own. The model therefore specifies the minimum number of solubilized molecules required to form a nucleus, which is used to calculate the homogeneous nucleation rate. Simultaneously, interaction between micelles is assumed to occur by Brownian collision and instantaneous fusion. Analysis of the time scales of the various events shows the growth of particles to be very fast compared to the other phenomena occurring. This implies that nonempty micelles are either supersaturated or contain a single precipitated particle, and allows application of deterministic population balance equations to describe the evolution of the system with time. The model successfully predicts the experimental measurements of Kandori et al. (3) on the size of precipitated CaCO3 particles obtained by carbonation of reverse micelles containing aqueous Ca(OH)2 solution.
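The Poisson occupancy argument above can be made concrete: the fraction of micelles able to nucleate on their own is the Poisson tail probability at the minimum nucleus size. A small sketch follows; the mean occupancy and minimum number used are hypothetical illustration values, not the paper's.

```python
import math

def nucleating_fraction(mean_occupancy, n_min):
    """Fraction of micelles holding at least n_min solubilized molecules,
    under the Poisson occupancy statistics described in the abstract.
    Both arguments are hypothetical illustration values."""
    below = sum(math.exp(-mean_occupancy) * mean_occupancy**k / math.factorial(k)
                for k in range(n_min))
    return 1.0 - below

# Example: with a mean of 2 molecules per micelle and n_min = 5,
# only about 5% of micelles can nucleate on their own.
print(nucleating_fraction(2.0, 5))   # ~0.0527
```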
Abstract:
Since the time of Kirkwood, observed deviations in the magnitude of the dielectric constant of aqueous protein solutions from that of neat water (~80) and the slower decay of polarization have been subjects of enormous interest, controversy, and debate. Most common proteins have large permanent dipole moments (often more than 100 D) that can influence the structure and dynamics of even distant water molecules, thereby affecting the collective polarization fluctuation of the solution, which in turn can significantly alter the solution's dielectric constant. Therefore, the distance dependence of polarization fluctuation can provide important insight into the nature of biological water. We explore these aspects by studying aqueous solutions of four proteins of different characteristics and varying sizes: chicken villin headpiece subdomain (HP-36), immunoglobulin binding domain protein G (GB1), hen-egg white lysozyme (LYS), and myoglobin (MYO). We simulate fairly large systems consisting of a single protein molecule and 20000-30000 water molecules (varied according to the protein size), providing a concentration in the range of ~2-3 mM. We find that the calculated dielectric constant of the system shows a noticeable increment in all cases compared to that of neat water. The total dipole moment time autocorrelation function of water, ⟨δM_W(0)·δM_W(t)⟩, is found to be sensitive to the nature of the protein. Surprisingly, the dipole moment of the protein and the total dipole moment of the water molecules are found to be only weakly coupled. Shellwise decomposition of water molecules around the protein reveals a higher density in the first layer compared to the succeeding ones. We also calculate a heuristic effective dielectric constant of successive layers and find that the layer adjacent to the protein has a much lower value (~50). However, progressive layers exhibit successive increments of the dielectric constant, finally reaching a value close to that of the bulk 4-5 layers away. We also calculate the shellwise orientational correlation function and tetrahedral order parameter to understand the local dynamics and structural rearrangement of water. A theoretical analysis providing a simple method for the calculation of the shellwise local dielectric constant, and the implications of these findings, are discussed in detail in the present work.
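The dielectric increment reported here is typically extracted from total dipole moment fluctuations. As a point of reference (not necessarily the exact estimator used in the paper), the standard fluctuation formula under conducting (tin-foil) boundary conditions, as commonly used with Ewald summation, reads:

```latex
\varepsilon \;=\; 1 \;+\; \frac{4\pi}{3\,V\,k_{\mathrm{B}}T}
\left(\langle \mathbf{M}^{2}\rangle - \langle \mathbf{M}\rangle^{2}\right)
```

where M is the total dipole moment of the simulation cell, V its volume, and T the temperature; the shellwise "effective" dielectric constants mentioned above would restrict M to the dipoles of a given hydration layer.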
Abstract:
We study the equilibrium properties of the nearest-neighbor Ising antiferromagnet on a triangular lattice in the presence of a staggered field conjugate to one of the degenerate ground states. Using a mapping of the ground states of the model without the staggered field to dimer coverings on the dual lattice, we classify the ground states into sectors specified by the number of "strings." We show that the effect of the staggered field is to generate long-range interactions between strings. In the limiting case of the antiferromagnetic coupling constant J becoming infinitely large, we prove the existence of a phase transition in this system and obtain a finite lower bound for the transition temperature. For finite J, we study the equilibrium properties of the system using Monte Carlo simulations with three different dynamics. We find that in all three cases, equilibration times for low field values increase rapidly with system size at low temperatures. Due to this difficulty in equilibrating sufficiently large systems at low temperatures, our finite-size scaling analysis of the numerical results does not permit a definite conclusion about the existence of a phase transition for finite values of J. A surprising feature of the system is that, unlike usual glassy systems, a zero-temperature quench almost always leads to the ground state, while a slow cooling does not.
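For readers who want to experiment, a minimal Metropolis sketch for this Hamiltonian, E = J Σ_<ij> s_i s_j − h Σ_i η_i s_i, is given below. The periodic boundaries and the choice of a "stripe" reference ground state η(i,j) = (−1)^j (which has exactly one frustrated bond per elementary triangle, as any ground state must) are illustrative assumptions, not the paper's specific setup or dynamics.

```python
import numpy as np

def metropolis_triangular_afm(L=24, J=1.0, h=0.1, T=0.5, sweeps=1000, rng=None):
    """Metropolis sketch: triangular-lattice Ising antiferromagnet in a
    staggered field conjugate to a 'stripe' reference ground state.
    Energy: E = J * sum_<ij> s_i s_j - h * sum_i eta_i s_i, with J > 0."""
    rng = np.random.default_rng() if rng is None else rng
    s = rng.choice([-1, 1], size=(L, L))
    eta = np.empty((L, L), dtype=int)          # reference ground state:
    eta[:, ::2], eta[:, 1::2] = 1, -1          # stripes along one lattice axis
    nbrs = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]  # 6 neighbors
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = int(rng.integers(L)), int(rng.integers(L))
            field = sum(s[(i + di) % L, (j + dj) % L] for di, dj in nbrs)
            dE = -2 * s[i, j] * (J * field - h * eta[i, j])  # cost of flipping
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i, j] = -s[i, j]
    return s
```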
Abstract:
The machine replication of human reading has been the subject of intensive research for more than three decades. A large number of research papers and reports have already been published on this topic. Many commercial establishments have manufactured recognizers of varying capabilities. Handheld, desktop, medium-size, and large systems costing as much as half a million dollars are available and are in use for various applications. However, the ultimate goal of developing a reading machine with the same reading capabilities as humans still remains unachieved. So there is still a great gap between human reading and machine reading capabilities, and a great amount of further effort is required to narrow down this gap, if not bridge it. This review is organized into six major sections covering a general overview (an introduction), applications of character recognition techniques, methodologies in character recognition, research work in character recognition, some practical OCRs, and the conclusions.
Abstract:
Floquet analysis is widely used for small-order systems (say, order M < 100) to find trim results of control inputs and periodic responses, and stability results of damping levels and frequencies. Presently, however, it is practical neither for design applications nor for comprehensive analysis models that lead to large systems (M > 100); the run time on a sequential computer is simply prohibitive. Accordingly, a massively parallel Floquet analysis is developed with emphasis on large systems, and it is implemented on two SIMD or single-instruction, multiple-data computers with 4096 and 8192 processors. The focus of this development is a parallel shooting method with damped Newton iteration to generate trim results; the Floquet transition matrix (FTM) comes out as a byproduct. The eigenvalues and eigenvectors of the FTM are computed by a parallel QR method, and thereby stability results are generated. For illustration, flap and flap-lag stability of isolated rotors are treated by the parallel analysis and by a corresponding sequential analysis with the conventional shooting and QR methods; linear quasisteady airfoil aerodynamics and a finite-state three-dimensional wake model are used. Computational reliability is quantified by the condition numbers of the Jacobian matrices in Newton iteration, the condition numbers of the eigenvalues, and the residual errors of the eigenpairs; reliability figures are comparable in both the parallel and sequential analyses. Compared to the sequential analysis, the parallel analysis reduces the run time of large systems dramatically, and the reduction increases with increasing system order; this finding offers considerable promise for design and comprehensive-analysis applications.
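The core kernel that the paper parallelizes can be sketched as follows: a damped-Newton shooting iteration for the periodic solution of x' = f(t, x) with period T, with the FTM assembled column by column (the naturally parallel part) and its eigenvalues yielding the Floquet exponents. This is an illustrative sequential sketch using finite differences, not the authors' implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def floquet_shooting(f, T, x0, tol=1e-8, max_newton=20, eps=1e-6):
    """Shooting method with damped Newton iteration for a periodic
    solution x(T) = x(0); the Floquet transition matrix (FTM) is built
    by finite differences as a byproduct. Illustrative sketch only."""
    n = len(x0)

    def propagate(x):
        sol = solve_ivp(f, (0.0, T), x, rtol=1e-10, atol=1e-12)
        return sol.y[:, -1]

    for _ in range(max_newton):
        xT = propagate(x0)
        ftm = np.empty((n, n))
        for k in range(n):                    # one FTM column per perturbation;
            dx = np.zeros(n); dx[k] = eps     # this loop maps onto parallel
            ftm[:, k] = (propagate(x0 + dx) - xT) / eps   # processors
        r = xT - x0                           # periodicity residual
        if np.linalg.norm(r) < tol:
            break
        step = np.linalg.solve(ftm - np.eye(n), -r)
        lam = 1.0                             # damped Newton: backtrack until
        while lam > 1e-4 and np.linalg.norm(
                propagate(x0 + lam * step) - (x0 + lam * step)) > np.linalg.norm(r):
            lam /= 2.0                        # the residual actually shrinks
        x0 = x0 + lam * step
    mults = np.linalg.eigvals(ftm)            # Floquet multipliers
    exps = np.log(mults.astype(complex)) / T  # exponents: damping + i*frequency
    return x0, ftm, exps
```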
Abstract:
Moore's Law has driven the semiconductor revolution, enabling over four decades of scaling in frequency, size, complexity, and power. However, the limits of physics are preventing further scaling of speed, forcing a paradigm shift towards multicore computing and parallelization. In effect, the system is taking over the role that the single CPU was playing: high-speed signals running through chips, packages, and boards connect ever more complex systems. High-speed signals making their way through the entire system cause new challenges in the design of computing hardware. Inductance, phase shifts and velocity-of-light effects, material resonances, and wave behavior not only become prevalent but also need to be calculated accurately and rapidly to enable short design cycle times. In essence, continued scaling with Moore's Law requires the incorporation of Maxwell's equations in the design process. Incorporating Maxwell's equations into the design flow is only possible through the combined power of new algorithms, parallelization, and high-speed computing. At the same time, incorporation of Maxwell-based models into circuit- and system-level simulation presents a massive accuracy, passivity, and scalability challenge. In this tutorial, we navigate through the often confusing terminology and concepts behind field solvers, show how advances in field solvers enable integration into EDA flows, present novel methods for model generation and passivity assurance in large systems, and demonstrate the power of cloud computing in enabling the next generation of scalable Maxwell solvers and the next generation of Moore's Law scaling of systems. We intend to show the truly symbiotic, growing relationship between Maxwell and Moore!
Abstract:
Current methods for molecular simulations of Electric Double Layer Capacitors (EDLCs) have both the electrodes and the electrolyte region in a single simulation box. This necessitates simulation of the electrode-electrolyte region interface. Typical capacitors have macroscopic dimensions where the fraction of the molecules at the electrode-electrolyte region interface is very low. Hence, large system sizes are needed to minimize the electrode-electrolyte region interfacial effects. To overcome these problems, a new technique based on the Gibbs Ensemble is proposed for simulation of an EDLC. In the proposed technique, each electrode is simulated in a separate simulation box. Application of periodic boundary conditions eliminates the interfacial effects. This, in addition to the use of a constant-voltage ensemble, allows for a more convenient comparison of simulation results with experimental measurements on typical EDLCs.
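To ground the proposal in the underlying machinery: in a conventional Gibbs-ensemble move, a particle transfer between the two boxes is accepted with the standard Panagiotopoulos probability, sketched below. The paper's constant-voltage variant would add an electrode-potential work term for charged species; that term is omitted here, so this is only the baseline rule.

```python
import math, random

def accept_transfer(dU_src, dU_dst, N_src, N_dst, V_src, V_dst, beta):
    """Standard Gibbs-ensemble particle-transfer acceptance rule.

    dU_src / dU_dst are the energy changes from removing a particle
    from the source box and inserting it into the destination box;
    N and V are particle counts and volumes. The constant-voltage
    modification used in the paper is not shown (an omitted detail)."""
    ratio = (N_src * V_dst) / ((N_dst + 1) * V_src)
    ratio *= math.exp(-beta * (dU_src + dU_dst))
    return random.random() < min(1.0, ratio)
```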
Abstract:
The field of micro-/nano-mechanics of materials has been driven, on the one hand, by the development of ever smaller structures in devices and, on the other, by the need to map property variations in large systems that are microstructurally graded. Observations of 'smaller is stronger' have also brought in questions of accompanying fracture property changes in the materials. In the wake of scattered articles on micro-scale fracture testing of various material classes, this review attempts to provide a holistic picture of the current state of the art. In the process, various reliable micro-scale geometries are shown, challenges with respect to instrumentation to probe ever smaller length scales are discussed, and examples from recent literature are put together to exhibit the expanse of unusual fracture response of materials, from ductility in Si to brittleness in Pt. Outstanding issues related to fracture mechanics of small structures are critically examined for plausible solutions.
Abstract:
A minimax filter is derived to estimate the state of a system, using observations corrupted by colored noise, when large uncertainties in the plant dynamics and process noise are present.
Abstract:
This paper presents a method of designing a minimax filter in the presence of large plant uncertainties and constraints on the mean squared values of the estimates. The minimax filtering problem is reformulated in the framework of a deterministic optimal control problem, and the method of solution employed invokes the matrix Minimum Principle. The constrained linear filter and its relation to singular control problems are illustrated. For the class of problems considered here, it is shown that the filter can be constrained separately after carrying out the minimaximization. Numerical examples are presented to illustrate the results.
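Schematically, the problem treated here can be written as a constrained minimax program. The symbols below (the uncertainty set Ω for the plant/noise pair and the bound c on the mean-squared estimate) are introduced purely for illustration and are not the paper's notation:

```latex
\hat{x}^{*} \;=\; \arg\min_{\hat{x}}\; \max_{(A,\,Q)\in\Omega}\;
\mathbb{E}\!\left[(x-\hat{x})^{\mathsf{T}}(x-\hat{x})\right]
\quad \text{subject to} \quad
\mathbb{E}\!\left[\hat{x}^{\mathsf{T}}\hat{x}\right] \le c .
```

The abstract's observation that the filter "can be constrained separately after carrying out the minimaximization" corresponds to handling the constraint after the inner maximization over Ω.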
Abstract:
This paper deals with low maximum-likelihood (ML)-decoding complexity, full-rate and full-diversity space-time block codes (STBCs), which also offer large coding gain, for the 2 transmit antenna, 2 receive antenna (2 x 2) and the 4 transmit antenna, 2 receive antenna (4 x 2) MIMO systems. Presently, the best known STBC for the 2 x 2 system is the Golden code and that for the 4 x 2 system is the DjABBA code. Following the approach by Biglieri, Hong, and Viterbo, a new STBC is presented in this paper for the 2 x 2 system. This code matches the Golden code in performance and ML-decoding complexity for square QAM constellations, while it has lower ML-decoding complexity with the same performance for non-rectangular QAM constellations. This code is also shown to be information-lossless and diversity-multiplexing gain (DMG) tradeoff optimal. This design procedure is then extended to the 4 x 2 system and a code, which outperforms the DjABBA code for QAM constellations with lower ML-decoding complexity, is presented. So far, the Golden code has been reported to have an ML-decoding complexity of the order of M^4 for square QAM of size M. In this paper, a scheme that reduces its ML-decoding complexity to M^2√M is presented.
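To put the reduction in perspective, a quick arithmetic check of the orders quoted above (taking the M^4 baseline and the M^2√M figure at face value):

```python
# Scale of the quoted ML-decoding complexity reduction for square M-QAM.
for M in (4, 16, 64):
    brute = M ** 4                  # baseline order of the joint ML search
    reduced = M ** 2 * M ** 0.5     # reduced order: M^2 * sqrt(M)
    print(f"M={M}: {brute} vs {reduced:.0f}  (factor {brute / reduced:.0f})")
# e.g. for 64-QAM: 16777216 vs 32768 metric evaluations, a factor of 512.
```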
Abstract:
In this paper, we present a low-complexity algorithm for detection in high-rate, non-orthogonal space-time block coded (STBC) large-multiple-input multiple-output (MIMO) systems that achieve high spectral efficiencies of the order of tens of bps/Hz. We also present a training-based iterative detection/channel estimation scheme for such large STBC MIMO systems. Our simulation results show that excellent bit error rate and nearness-to-capacity performance are achieved by the proposed multistage likelihood ascent search (M-LAS) detector in conjunction with the proposed iterative detection/channel estimation scheme at low complexities. The fact that we could show such good results for large STBCs like 16 x 16 and 32 x 32 STBCs from Cyclic Division Algebras (CDA) operating at spectral efficiencies in excess of 20 bps/Hz (even after accounting for the overheads meant for pilot based training for channel estimation and turbo coding) establishes the effectiveness of the proposed detector and channel estimator. We decode perfect codes of large dimensions using the proposed detector. With the feasibility of such a low-complexity detection/channel estimation scheme, large-MIMO systems with tens of antennas operating at several tens of bps/Hz spectral efficiencies can become practical, enabling interesting high data rate wireless applications.
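The likelihood ascent search underlying the detector can be sketched in its simplest one-symbol-update form. The multistage (multi-symbol) escape moves of M-LAS are omitted, and the initialization is assumed to come from a linear estimate (e.g., MMSE) quantized to the symbol alphabet.

```python
import numpy as np

def las_detect(y, H, alphabet, x_init):
    """One-symbol likelihood ascent search (LAS) sketch for large-MIMO
    detection: starting from x_init, repeatedly apply the single symbol
    change that most reduces the ML cost ||y - Hx||^2, stopping at a
    local minimum. The paper's multistage M-LAS additionally escapes
    such minima with multi-symbol updates (not shown here)."""
    x = x_init.copy()
    cost = np.linalg.norm(y - H @ x) ** 2
    improved = True
    while improved:
        improved = False
        best = (None, None, cost)
        for i in range(len(x)):                 # scan all single-symbol changes
            for s in alphabet:
                if s == x[i]:
                    continue
                trial = x.copy(); trial[i] = s
                c = np.linalg.norm(y - H @ trial) ** 2
                if c < best[2]:
                    best = (i, s, c)
        if best[0] is not None:                 # take the steepest-descent move
            x[best[0]], cost, improved = best[1], best[2], True
    return x
```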