187 results for physically-based simulation
Abstract:
This paper proposes a method of sharing power/energy between multiple sources and multiple loads using an integrated magnetic circuit as a junction between sources and sinks. It also presents a particular use of the magnetic circuit as an ac power supply, delivering sinusoidal voltage to the load irrespective of the presence of the grid while drawing only active power from the grid. The proposed magnetic circuit is a three-energy-port unit, viz.: 1) power/energy from the grid; 2) power/energy from the battery-inverter unit; and 3) power/energy delivery to the load, in its particular application as a quality ac power supply (QPS). The circuit provides sinusoidal regulated output voltage, input power-factor correction, electrical isolation between the sources and loads, low battery voltage, and control simplicity. Unlike conventional series-shunt-compensated uninterruptible power supply topologies with low battery voltage, the isolation is provided using a single magnetic circuit, which results in smaller size and lower cost. The circuit operating principles and analysis, as well as simulation and experimental results, are presented for this QPS.
Abstract:
In this paper, we present robust semi-blind (SB) algorithms for the estimation of beamforming vectors for multiple-input multiple-output (MIMO) wireless communication. The transmitted symbol block is assumed to comprise a known sequence of training (pilot) symbols followed by information-bearing blind (unknown) data symbols. Analytical expressions are derived for the robust SB estimators of the MIMO receive and transmit beamforming vectors. These robust SB estimators employ a preliminary estimate obtained from the pilot symbol sequence and leverage the second-order statistical information from the blind data symbols. We employ the theory of Lagrangian duality to derive the robust estimate of the receive beamforming vector by maximizing an inner product while constraining the channel estimate to lie in a confidence sphere centered at the initial pilot estimate. Two different schemes are then proposed for computing the robust estimate of the MIMO transmit beamforming vector. Simulation results illustrate the superior performance of the robust SB estimators.
Abstract:
A new linear algebraic approach is proposed for the identification of a nonminimum-phase FIR system of known order using only higher-order (>2) cumulants of the output process. It is first shown that a matrix formed from a set of cumulants of arbitrary order can be expressed as a product of structured matrices. The subspaces of this matrix are then used to obtain the parameters of the FIR system from a set of linear equations. Theoretical analysis and numerical simulation studies are presented to characterize the performance of the proposed method.
Abstract:
This paper studies the performance characteristics of a subspace-based algorithm for source localization in shallow water, such as coastal waters. Specifically, we study the performance of the Multi Image Subspace Algorithm (MISA). Through first-order perturbation analysis and computer simulation, it is shown that MISA is unbiased and statistically efficient. Further, we bring out the role of multipaths (or images) in reducing the localization error: the presence of multipaths is found to improve the range and depth estimates. This may be attributed to the increased curvature of the wavefront caused by interference from many coherent multipaths.
Abstract:
In the past few years there have been attempts to develop subspace methods for DoA (direction of arrival) estimation using fourth-order cumulants, which are known to de-emphasize Gaussian background noise. To gauge the relative performance of cumulant MUSIC (MUltiple SIgnal Classification) (c-MUSIC) and the standard MUSIC, based on the covariance function, an extensive numerical study has been carried out, in which a narrow-band signal source was considered and Gaussian noise sources, which produce a spatially correlated background noise, were distributed. These simulations indicate that, even though the cumulant approach is capable of de-emphasizing the Gaussian noise, both the bias and the variance of the DoA estimates are higher than those for MUSIC. To achieve comparable results the cumulant approach requires much more data, three to ten times that for MUSIC, depending upon the number of sources and how closely they are spaced. This is attributed to the fact that estimating a cumulant requires averaging a product of four random variables. Therefore, compared with the evaluation of the covariance function, there are more cross terms which do not go to zero unless the data length is very large. It is felt that these cross terms contribute to the large bias and variance observed in c-MUSIC. However, the ability to de-emphasize Gaussian noise, white or colored, is of great significance, since the standard MUSIC fails when there is colored background noise. Through simulation it is shown that c-MUSIC does yield good results, but only at the cost of more data.
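The covariance-based MUSIC baseline discussed above can be sketched in a few lines. The array geometry, source angles, snapshot count and noise level below are illustrative choices, not the study's setup:

```python
import numpy as np

def music_spectrum(X, n_sources, d=0.5, grid=np.linspace(-90, 90, 361)):
    """Covariance-based MUSIC pseudospectrum for a uniform linear array.

    X: (n_sensors, n_snapshots) complex snapshot matrix.
    d: element spacing in wavelengths.
    """
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]           # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)      # eigenvalues in ascending order
    En = eigvecs[:, : M - n_sources]          # noise-subspace eigenvectors
    spectrum = []
    for theta in grid:
        a = np.exp(-2j * np.pi * d * np.arange(M) * np.sin(np.deg2rad(theta)))
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return grid, np.asarray(spectrum)

# two uncorrelated sources at -20 and 30 degrees, 8-element half-wavelength ULA
rng = np.random.default_rng(0)
M, N, doas = 8, 500, np.deg2rad([-20.0, 30.0])
A = np.exp(-2j * np.pi * 0.5 * np.outer(np.arange(M), np.sin(doas)))
S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = A @ S + noise

grid, P = music_spectrum(X, n_sources=2)
interior = P[1:-1]
mask = (interior > P[:-2]) & (interior > P[2:])     # local maxima
cand, vals = grid[1:-1][mask], interior[mask]
est = np.sort(cand[np.argsort(vals)[-2:]])          # two strongest peaks
```

A c-MUSIC variant would replace the sample covariance R with a fourth-order cumulant matrix, at the cost of the longer data records discussed in the abstract.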
Abstract:
It is well known that increasing space activity poses a serious threat to future missions. This is mainly due to the presence of spent stages, rockets, spacecraft and fragments, which can lead to collisions. The calculation of the collision probability of future space vehicles with orbital debris is necessary for estimating the risk. There is a lack of adequately catalogued and openly available detailed information on the explosion characteristics of trackable and untrackable debris. Such a situation compels one to develop a suitable mathematical model of the explosion and the resultant debris environment. Based on a study of the available information regarding the fragmentation, subsequent evolution and observation, it turns out to be possible to develop such a mathematical model, connecting the dynamical features of the fragmentation with the geometrical/orbital characteristics of the debris and representing the environment through the idea of an equivalent breakup. (C) 1997 COSPAR.
Abstract:
Service discovery is vital in ubiquitous applications, where a large number of devices and software components collaborate unobtrusively and provide numerous services without user intervention. Existing service discovery schemes use a service matching process in order to offer services of interest to the users. Potentially, the context information of the users and the surrounding environment can be used to improve the quality of service matching. To make use of context information in service matching, a service discovery technique needs to address certain challenges. First, the context information must have an unambiguous representation. Second, the devices in the environment must be able to disseminate high-level and low-level context information seamlessly across different networks. Third, the dynamic nature of the context information must be taken into account. We propose a C-IOB (Context-Information, Observation and Belief) based service discovery model which addresses these challenges by processing the context information and formulating beliefs based on observations. With these formulated beliefs, the required services are provided to the users. The method has been tested with a typical ubiquitous museum guide application over different cases. The simulation results show the method to be time-efficient and are quite encouraging.
Abstract:
The compositional evolution in sputter-deposited LiCoO2 thin films is influenced by the process parameters involved during deposition. The electrochemical performance of these films strongly depends on their microstructure, preferential orientation and stoichiometry. The transport of sputtered Li and Co atoms from the LiCoO2 target to the substrate, through Ar plasma in a planar magnetron configuration, was investigated using the Monte Carlo technique. The effects of sputtering gas pressure and substrate-target distance (d_st) on the Li/Co ratio, as well as on the energy and angular distributions of sputtered atoms at the substrate, were examined. Stable Li/Co ratios were obtained at 5 Pa pressure and d_st in the range 5-11 cm. The kinetic energy and incident angular distribution of Li and Co atoms reaching the substrate were found to depend on the sputtering pressure. Simulations were extended to predict compositional variations in films prepared under various process conditions. These results were compared with the composition of films determined experimentally using x-ray photoelectron spectroscopy (XPS). The Li/Co ratio calculated using XPS was in moderate agreement with the simulated value. The measured film thickness followed the same trend as predicted by the simulation. These studies are shown to be useful in understanding the complexities of multicomponent sputtering. (C) 2011 American Institute of Physics. [doi:10.1063/1.3597829]
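The flavour of such a transport calculation can be conveyed by a deliberately crude 1-D Monte Carlo sketch: mean free path inversely proportional to pressure, a fixed fractional energy loss per gas-phase collision, and isotropic re-direction. All constants here are assumptions for illustration only; the actual simulation would track 3-D collision kinematics with Ar and realistic sputtered-atom energy spectra.

```python
import math
import random

def transport(n_atoms, dst_cm, pressure_pa, e0_ev=5.0, seed=1):
    """Toy 1-D Monte Carlo for sputtered-atom transport (illustration only)."""
    rng = random.Random(seed)
    mfp_cm = 5.0 / pressure_pa             # assumed pressure scaling, not fitted
    arrived, energies = 0, []
    for _ in range(n_atoms):
        x, e, mu = 0.0, e0_ev, 1.0         # position, energy, direction cosine
        while e > 0.03:                    # below ~thermal energy: thermalized
            x += mu * -mfp_cm * math.log(rng.random())  # exponential free path
            if x >= dst_cm:                # reached the substrate plane
                arrived += 1
                energies.append(e)
                break
            if x < 0.0:                    # scattered back onto the target
                break
            e *= 0.5                       # assumed energy loss per collision
            mu = rng.uniform(-1.0, 1.0)    # isotropic re-direction
    frac = arrived / n_atoms
    mean_e = sum(energies) / len(energies) if energies else 0.0
    return frac, mean_e

f_low, e_low = transport(20000, dst_cm=8.0, pressure_pa=1.0)
f_high, e_high = transport(20000, dst_cm=8.0, pressure_pa=5.0)
```

Even this toy model reproduces the qualitative trend that fewer atoms survive the gas-phase transit at higher pressure.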
Abstract:
Approximate deconvolution modeling is a very recent approach to large eddy simulation of turbulent flows; it has been applied with success to compressible flows. Here, a premixed flame which forms in the wake of a flameholder has been selected to examine the subgrid-scale modeling of reaction rate by this new method, because a previous plane two-dimensional simulation of this wake flame, using a wrinkling function and artificial flame thickening, had revealed discrepancies when compared with experiment. The present simulation is of the temporal evolution of a round wake-like flow at two Reynolds numbers, Re = 2000 and 10,000, based on wake defect velocity and wake diameter. A Fourier-spectral code has been used. The reaction is single-step and irreversible, and the rate follows an Arrhenius law. The reference simulation at the lower Reynolds number is fully resolved. At Re = 10,000, subgrid-scale contributions are significant. Subgrid-scale modeling in the present simulation was found to agree more closely with the subgrid-scale effects observed in experiment: the highest contributions appeared in thin folded regions created by vortex convection, regions in which the wrinkling-function approach had not registered subgrid-scale effects.
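The core of approximate deconvolution, the truncated van Cittert series u* = sum over k of (I - G)^k applied to the filtered field, can be illustrated on a 1-D periodic signal. The three-point discrete filter and the test field below are illustrative choices, not the flame simulation's filter:

```python
import numpy as np

def box_filter(u):
    """Discrete three-point filter G with weights (1/4, 1/2, 1/4), periodic."""
    return 0.25 * (np.roll(u, 1) + 2.0 * u + np.roll(u, -1))

def approx_deconv(u_bar, n_iter=5):
    """van Cittert series: u* = sum_{k=0}^{n_iter} (I - G)^k u_bar."""
    v = u_bar.copy()
    u_star = u_bar.copy()
    for _ in range(n_iter):
        v = v - box_filter(v)      # apply (I - G) once more
        u_star = u_star + v        # accumulate the next series term
    return u_star

x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(8.0 * x)    # large scale plus a finer scale
u_bar = box_filter(u)                    # "resolved" filtered field
u_star = approx_deconv(u_bar, n_iter=5)  # approximately deconvolved field

err_filtered = np.abs(u_bar - u).max()
err_deconv = np.abs(u_star - u).max()
```

The deconvolved field recovers the attenuated scales far better than the filtered field, which is what lets nonlinear terms such as the reaction rate be evaluated on u* rather than on the filtered field.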
Abstract:
This paper presents the capability of neural networks as a computational tool for solving the constrained optimization problems arising in routing algorithms for present-day communication networks. We address the application of neural networks to the optimum routing problem in packet-switched computer networks, where the goal is to minimize the average communication delay. The effectiveness of the neural network is shown by simulation results for a neural design that solves the shortest path problem. The neural network model is then utilized within an optimum routing algorithm known as the flow deviation algorithm. It is also shown that the model enables the routing algorithm to be implemented in real time and to adapt to changes in link costs and network topology. (C) 2002 Elsevier Science Ltd. All rights reserved.
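The flow deviation algorithm repeatedly solves shortest-path problems over link costs derived from delay derivatives. The neural formulation is not reproduced here, but the classical computation it emulates can be sketched on a made-up five-node topology:

```python
import heapq

def dijkstra(adj, src):
    """Single-source shortest paths; adj maps node -> list of (neighbour, cost)."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# illustrative network; the costs stand in for the delay-derivative link
# lengths that flow deviation assigns before each shortest-path invocation
net = {
    "A": [("B", 1.0), ("C", 4.0)],
    "B": [("C", 2.0), ("D", 6.0)],
    "C": [("D", 3.0)],
    "D": [("E", 1.0)],
}
dist = dijkstra(net, "A")
```

Each flow deviation iteration recomputes these costs from the current flows and shifts traffic toward the new shortest paths, which is why a fast (here classical, in the paper neural) shortest-path solver is the performance-critical kernel.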
Abstract:
A pseudo-spectral method based on Fourier expansions in a Cartesian coordinate system is shown to be an economical method for direct numerical simulation studies of transitional round jets. Several characteristics of the solutions are presented to establish their validity in spite of these unnatural choices. We show that neither the periodicity nor the use of a Cartesian system has adversely affected the simulations. Instead, there are benefits in terms of ease of computing and the absence of the usual restrictions due to grid structure near the jet axis. By computing the simultaneous evolution of passive scalars, the process of reaction in round jet burners, between a fuel-laden jet and an ambient oxidizer, was also simulated. Some typical solutions are shown and the results of analysis of these data are summarized. (C) 2001 Elsevier Science Ltd. All rights reserved.
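The economy of the Fourier pseudo-spectral approach rests on spectrally accurate differentiation of periodic fields via the FFT. A minimal 1-D sketch (grid size and test field are illustrative):

```python
import numpy as np

def spectral_derivative(u, L=2.0 * np.pi):
    """d/dx of a periodic field, exact to round-off for resolved wavenumbers."""
    n = u.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

# differentiating sin(3x) should return 3*cos(3x) to machine precision
x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
err = np.abs(spectral_derivative(np.sin(3.0 * x)) - 3.0 * np.cos(3.0 * x)).max()
```

In a full simulation the same multiply-by-ik operation is applied along each Cartesian direction, which is why no special treatment is needed near the jet axis.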
Abstract:
We are concerned with the situation in which a wireless sensor network is deployed in a region for the purpose of detecting an event occurring at a random time and at a random location. The sensor nodes periodically sample their environment (e.g., for acoustic energy), process the observations (in our case, using a CUSUM-based algorithm) and send a local decision (which is binary in nature) to the fusion centre. The fusion centre collects these local decisions and uses a fusion rule to process them and infer the state of nature, i.e., whether an event has occurred or not. Our main contribution is in analyzing two local detection rules in combination with a simple fusion rule. The local detection algorithms are based on the nonparametric CUSUM procedure from sequential statistics. We also propose two ways to operate the local detectors after an alarm. These alternatives, combined in various ways, yield several approaches. We provide analytical techniques to calculate false alarm measures, by the use of which the local detector thresholds can be set. Simulation results are provided to evaluate the accuracy of our analysis, and a design example is given as an illustration. We also use simulations to compare the detection delays incurred by these algorithms.
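A minimal sketch of the nonparametric CUSUM statistic a local detector would run; the drift, threshold and signal statistics below are made-up values, not the ones analyzed in the paper:

```python
import random

def cusum_alarm(samples, drift, threshold):
    """Nonparametric CUSUM: first index k at which the statistic
    S_k = max(0, S_{k-1} + x_k - drift) crosses the threshold, else None."""
    s = 0.0
    for k, x in enumerate(samples):
        s = max(0.0, s + x - drift)
        if s > threshold:
            return k
    return None

# illustrative run: pre-change samples have mean 0, post-change mean 1;
# the drift is set between the two means so S drifts down before the
# change and up after it
rng = random.Random(7)
change_point = 200
pre = [rng.gauss(0.0, 1.0) for _ in range(change_point)]
post = [rng.gauss(1.0, 1.0) for _ in range(300)]
alarm_at = cusum_alarm(pre + post, drift=0.5, threshold=8.0)
```

In the paper's setting the threshold is what the false-alarm analysis lets one set: a larger threshold lowers the false-alarm rate at the cost of a longer detection delay.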
Abstract:
In this paper, analytical expressions for the optimal Vdd and Vth that minimize energy under a given speed constraint are derived. These expressions are based on the EKV transistor model and are valid in both the strong-inversion and subthreshold regions. The effect of gate leakage on the optimal Vdd and Vth is analyzed. A new gradient-based algorithm for controlling Vdd and Vth based on delay and power monitoring results is proposed. A Vdd-Vth controller which uses this algorithm to dynamically control the supply and threshold voltages of a representative logic block (the sum-of-absolute-differences computation of an MPEG decoder) is designed. Simulation results using 65 nm predictive technology models are given.
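The idea of a gradient-based Vdd/Vth controller can be sketched with a toy energy model; the alpha-power delay law, subthreshold leakage term and every constant below are assumptions for illustration, not the paper's EKV-derived expressions:

```python
import math

def energy_delay(vdd, vth, c=1.0, i0=1e-3, n_vt=0.05):
    """Toy per-operation energy: CV^2 dynamic term plus subthreshold leakage
    integrated over an alpha-power-law gate delay (all constants assumed)."""
    delay = vdd / (vdd - vth) ** 1.3
    leak = i0 * math.exp(-vth / n_vt) * vdd * delay
    return c * vdd ** 2 + leak, delay

def tune(vdd, vth, t_max, steps=5000, lr=2e-4, penalty=100.0):
    """Numerical-gradient descent on energy plus a delay-violation penalty,
    mimicking a controller that nudges Vdd/Vth from delay/power monitors."""
    def cost(vd, vt):
        e, d = energy_delay(vd, vt)
        return e + penalty * max(0.0, d - t_max) ** 2
    h = 1e-6
    for _ in range(steps):
        g_vdd = (cost(vdd + h, vth) - cost(vdd - h, vth)) / (2.0 * h)
        g_vth = (cost(vdd, vth + h) - cost(vdd, vth - h)) / (2.0 * h)
        vdd = min(1.2, max(0.3, vdd - lr * g_vdd))       # clamp supply range
        vth = min(vdd - 0.05, max(0.05, vth - lr * g_vth))
    return vdd, vth

vdd0, vth0 = 1.2, 0.1
vdd_opt, vth_opt = tune(vdd0, vth0, t_max=2.0)
e_opt, d_opt = energy_delay(vdd_opt, vth_opt)
e0, d0 = energy_delay(vdd0, vth0)
```

Starting from a fast, energy-hungry operating point, the controller lowers Vdd until the delay approaches the speed constraint, trading speed margin for energy exactly as the monitored-feedback scheme in the abstract intends.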
Abstract:
To establish itself within the host, Mycobacterium tuberculosis (Mtb) has developed various means of attacking the host system. One crucial strategy is the exploitation of the host's iron resources. Obtaining and maintaining the required concentration of iron becomes a matter of contest between the host and the pathogen, both trying to achieve this through complex molecular networks. The extent of this complexity makes it important to obtain a systems perspective of the interplay between the host and the pathogen with respect to iron homeostasis. We have reconstructed a systems model comprising 92 components and 85 protein-protein or protein-metabolite interactions, captured as a set of 194 rules. Apart from the interactions, these rules also account for protein synthesis and decay, red blood cell (RBC) circulation, and bacterial production and death rates. We have used a rule-based modelling approach, Kappa, to simulate the system separately under infection and non-infection conditions. Various perturbations, including knock-outs and dual perturbations, were also carried out to monitor the behavioural changes of important proteins and metabolites. From this, the key components, as well as the controlling factors in the model that are critical for maintaining iron homeostasis, were identified. The model re-establishes the importance of the iron-dependent regulator (ideR) in Mtb and of transferrin (Tf) in the host. Perturbations in which iron storage is increased appear to enhance nutritional immunity, and the analysis indicates how they can be harmful to the host; instead, decreasing the rate of iron uptake by Tf may prove helpful. The simulation and perturbation studies identify Tf as a possible drug target, and regulating the mycobactin (myB) concentration was also identified as a possible strategy to control bacterial growth. The simulations thus provide significant insight into iron homeostasis and help identify possible drug targets for tuberculosis.
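The flavour of rule-based stochastic simulation can be conveyed by a toy Gillespie-style model of the contest for free iron. The three rules and all rate constants below are invented for illustration and bear no relation to the paper's 194-rule Kappa model:

```python
import random

def simulate(t_end, k_tf=0.8, k_myb=0.3, k_rel=0.05, seed=3):
    """Toy Gillespie simulation of a host-pathogen contest for free iron.

    Three illustrative rules: free iron binds transferrin (host uptake),
    free iron binds mycobactin (bacterial uptake, irreversible here), and
    transferrin-bound iron is slowly released back to the free pool.
    """
    rng = random.Random(seed)
    fe_free, fe_tf, fe_myb = 200, 0, 0       # iron atom counts per pool
    t = 0.0
    while t < t_end:
        a = [k_tf * fe_free, k_myb * fe_free, k_rel * fe_tf]  # propensities
        a0 = sum(a)
        if a0 == 0.0:
            break                            # all iron captured by mycobactin
        t += rng.expovariate(a0)             # time to the next rule firing
        r = rng.random() * a0                # pick a rule proportionally
        if r < a[0]:
            fe_free, fe_tf = fe_free - 1, fe_tf + 1
        elif r < a[0] + a[1]:
            fe_free, fe_myb = fe_free - 1, fe_myb + 1
        else:
            fe_free, fe_tf = fe_free + 1, fe_tf - 1
    return fe_free, fe_tf, fe_myb

fe_free, fe_tf, fe_myb = simulate(t_end=50.0)
```

Even this toy captures the modelling pattern used in the paper: perturbations such as raising k_tf (faster Tf uptake) or lowering k_myb (less mycobactin) are just rate-constant changes, and their effect on each pool can be read off directly.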