62 results for Error correction model


Relevance:

30.00%

Publisher:

Abstract:

The error introduced in depolarisation measurements due to the convergence of the incident beam has been investigated theoretically as well as experimentally for the case of colloid scattering, where the particles are not small compared to the wavelength of light. Assuming the scattering particles to be anisotropic rods, it is shown that, when the incident unpolarised light is condensed by means of a lens with a circular aperture, the observed depolarisation ratio ϱ_u is given by ϱ_u = ϱ_u0 + (5/3)θ², where ϱ_u0 is the true depolarisation for incident parallel light and θ the semi-angle of convergence. Appropriate formulae are derived when the incident beam is polarised vertically and horizontally. Experiments performed on six typical colloids support the theoretical conclusions. Other immediate consequences of the theory are also discussed.
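A minimal numerical illustration of the stated correction, with invented measurement values, is:

```python
def true_depolarisation(rho_u_obs, theta):
    """Recover the parallel-light depolarisation ratio from an observed value,
    using the stated convergence correction rho_u = rho_u0 + (5/3) * theta**2,
    where theta is the semi-angle of convergence in radians."""
    return rho_u_obs - (5.0 / 3.0) * theta ** 2

# Invented example: measured ratio 0.012 with a 0.05 rad semi-angle of convergence
print(true_depolarisation(0.012, 0.05))   # ~0.00783, the corrected rho_u0
```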

Relevance:

30.00%

Publisher:

Abstract:

An a priori error analysis of discontinuous Galerkin methods for a general elliptic problem is derived under a mild elliptic regularity assumption on the solution. This is accomplished by using some techniques from a posteriori error analysis. The model problem is assumed to satisfy a Gårding-type inequality. Optimal-order L² norm a priori error estimates are derived for an adjoint-consistent interior penalty method.

Relevance:

30.00%

Publisher:

Abstract:

A state-of-the-art model of the coupled ocean-atmosphere system, the Climate Forecast System (CFS) from the National Centers for Environmental Prediction (NCEP), USA, has been ported onto the PARAM Padma parallel computing system at the Centre for Development of Advanced Computing (CDAC), Bangalore, and retrospective predictions for the summer monsoon (June-September) season of 2009 have been generated using five initial conditions for the atmosphere and one initial condition for the ocean for May 2009. Whereas a large deficit in the Indian summer monsoon rainfall (ISMR; June-September) was experienced over the Indian region (with the all-India rainfall 22% below the average), the ensemble average prediction was for above-average rainfall during the summer monsoon. The retrospective predictions of ISMR with the CFS from NCEP for 1981-2008 have been analysed. The retrospective predictions from NCEP for the summer monsoon of 1994 and those from CDAC for 2009 have been compared with simulations for each of the seasons with the stand-alone atmospheric component of the model, the Global Forecast System (GFS), and with observations. It is shown that the simulation with GFS for 2009 produced deficit rainfall, as observed. The large error in the prediction for the monsoon of 2009 can be attributed to a positive Indian Ocean Dipole event seen in the prediction from July onwards, which was not present in the observations. This suggests that the error could be reduced by improving the ocean model over the equatorial Indian Ocean.

Relevance:

30.00%

Publisher:

Abstract:

A swarm is a temporary structure formed when several thousand honey bees leave their hive and settle on some object such as the branch of a tree. They remain in this position until a suitable site for a new home is located by the scout bees. A continuum model based on heat conduction and heat generation is used to predict temperature profiles in swarms. Since internal convection is neglected, the model is applicable only at low values of the ambient temperature T_a. Guided by the experimental observations of Heinrich (1981a-c, J. Exp. Biol. 91, 25-55; Science 212, 565-566; Sci. Am. 244, 147-160), the analysis is carried out mainly for non-spherical swarms. The effective thermal conductivity is estimated using the data of Heinrich (1981a, J. Exp. Biol. 91, 25-55) for dead bees. For T_a = 5 and 9 degrees C, results based on a modified version of the heat generation function due to Southwick (1991, The Behaviour and Physiology of Bees, pp. 28-47. C.A.B. International, London) are in reasonable agreement with measurements. Results obtained with the heat generation function of Myerscough (1993, J. Theor. Biol. 162, 381-393) are qualitatively similar to those obtained with Southwick's function, but the error is larger in the former case. The results suggest that the bees near the periphery generate more heat than those near the core, in accord with the conjecture of Heinrich (1981c, Sci. Am. 244, 147-160). On the other hand, for T_a = 5 degrees C, the heat generation function of Omholt and Lonvik (1986, J. Theor. Biol. 120, 447-456) leads to a trivial steady state where the entire swarm is at the ambient temperature. Therefore an acceptable heat generation function must result in a steady state which is both non-trivial and stable with respect to small perturbations. Omholt and Lonvik's function satisfies the first requirement, but not the second. For T_a = 15 degrees C, there is a considerable difference between predicted and measured values, probably due to the neglect of internal convection in the model.
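The abstract does not reproduce the heat generation functions themselves, but the sketch below shows the kind of conduction-with-generation calculation described, under an assumed spherical geometry, an invented linear generation law q(T) = q0(T_set - T) and invented parameter values; it is illustrative only, not the fitted model of the paper.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Steady conduction with temperature-dependent generation in a spherical swarm:
#   k * (T'' + (2/r) T') + q(T) = 0,   T'(0) = 0,   T(R) = T_a.
k = 0.1          # effective thermal conductivity, W m^-1 K^-1 (assumed)
R = 0.08         # swarm radius, m (assumed)
T_a = 5.0        # ambient temperature, deg C
T_set = 36.0     # set-point of the assumed generation law, deg C
q0 = 400.0       # generation coefficient, W m^-3 K^-1 (assumed)

def q(T):
    # cooler bees generate more heat under this assumed law
    return q0 * np.clip(T_set - T, 0.0, None)

def ode(r, y):
    T, dT = y
    r_safe = np.maximum(r, 1e-6)            # avoid the r = 0 singularity
    return np.vstack([dT, -(2.0 / r_safe) * dT - q(T) / k])

def bc(ya, yb):
    return np.array([ya[1], yb[0] - T_a])   # zero flux at centre, ambient at surface

r = np.linspace(1e-6, R, 200)
y0 = np.zeros((2, r.size))
y0[0] = T_a + 10.0                          # crude initial guess
sol = solve_bvp(ode, bc, r, y0)
print("core temperature ~", sol.y[0, 0], "deg C")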

Relevance:

30.00%

Publisher:

Abstract:

The presence of residual chlorine and organic matter governs bacterial regrowth within a water distribution system. A bacterial growth model is essential to predict the spatial and temporal variation of all these substances throughout the system. The parameters governing bacterial growth and biodegradable dissolved organic carbon (BDOC) utilization are difficult to determine by experimentation. In the present study, the estimation of these parameters is addressed by using a simulation-optimization procedure. The optimal solution by genetic algorithm (GA) has indicated that the proper combination of parameter values is significant rather than the correct individual values. The applicability of the model is illustrated using synthetic data generated by introducing noise into the error-free measurements. The GA was found to be a potential tool in estimating the parameters controlling bacterial growth and BDOC utilization. Further, the GA was also used for evaluating the sensitivity issues relating the parameter values to the objective function. It was observed that μ and k_cl are more significant and dominating compared to the other parameters, but the magnitude of a parameter is also an important issue in deciding its dominance. The GA is found to be a useful tool in autocalibration of the bacterial growth model and in a sensitivity study of its parameters.
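A minimal sketch of such a simulation-optimization calibration is given below; the Monod-type growth model, the two fitted parameters and all numerical values are invented stand-ins for the detailed regrowth/BDOC model used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the bacterial-regrowth/BDOC model (Monod-type kinetics).
def simulate(mu, K, t, B0=0.05, S0=2.0, Y=0.5):
    B, S, out = B0, S0, []
    dt = t[1] - t[0]
    for _ in t:
        out.append(B)
        growth = mu * S / (K + S) * B
        B += dt * growth
        S = max(S - dt * growth / Y, 0.0)
    return np.array(out)

# Synthetic "observations": true parameters plus measurement noise.
t = np.linspace(0.0, 24.0, 49)
true_mu, true_K = 0.25, 0.8
obs = simulate(true_mu, true_K, t) + rng.normal(0.0, 0.01, t.size)

def fitness(ind):                      # negative sum of squared errors
    mu, K = ind
    return -np.sum((simulate(mu, K, t) - obs) ** 2)

# A minimal real-coded GA: tournament selection, blend crossover, Gaussian mutation.
bounds = np.array([[0.01, 1.0], [0.01, 5.0]])
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 2))
for gen in range(100):
    scores = np.array([fitness(p) for p in pop])
    new_pop = [pop[np.argmax(scores)]]                     # elitism
    while len(new_pop) < len(pop):
        i, j = rng.integers(0, len(pop), 2)
        a = pop[i] if scores[i] > scores[j] else pop[j]    # tournament pick 1
        i, j = rng.integers(0, len(pop), 2)
        b = pop[i] if scores[i] > scores[j] else pop[j]    # tournament pick 2
        w = rng.uniform(size=2)
        child = w * a + (1 - w) * b                        # blend crossover
        child += rng.normal(0.0, 0.02, 2)                  # mutation
        new_pop.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
    pop = np.array(new_pop)

best = pop[np.argmax([fitness(p) for p in pop])]
print("estimated mu, K:", best, "true:", (true_mu, true_K))
```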

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a simple current-error space-vector based hysteresis controller for two-level inverter fed Induction Motor (IM) drives. The proposed hysteresis controller retains all the advantages of conventional current-error space-vector based hysteresis controllers, such as fast dynamic response, simple implementation and adjacent voltage-vector switching. Its additional advantage is that it gives a phase-voltage frequency spectrum matching that of a constant switching frequency space vector pulse width modulated (SVPWM) inverter. In the proposed controller the hysteresis boundary is computed online using stator voltages estimated along the alpha and beta axes, completely eliminating the look-up tables used for the parabolic hysteresis boundary proposed in earlier work. The estimation of the stator voltage is carried out using the current errors along the alpha and beta axes and the steady-state model of the induction motor. The proposed scheme is simple and capable of taking the inverter up to six-step mode operation, if demanded by the drive system. The proposed hysteresis-controller based inverter-fed drive scheme is simulated extensively using the SIMULINK toolbox of MATLAB for steady-state and transient performance. Experimental verification of the steady-state performance of the proposed scheme is carried out on a 3.7 kW IM.

Relevance:

30.00%

Publisher:

Abstract:

Traditional subspace based speech enhancement (SSE) methods use linear minimum mean square error (LMMSE) estimation, which is optimal if the Karhunen-Loève transform (KLT) coefficients of speech and noise are Gaussian distributed. In this paper, we investigate the use of a Gaussian mixture (GM) density for modeling the non-Gaussian statistics of the clean speech KLT coefficients. Using a Gaussian mixture model (GMM), the optimum minimum mean square error (MMSE) estimator is found to be nonlinear, and the traditional LMMSE estimator is shown to be a special case. Experimental results show that the proposed method provides better enhancement performance than the traditional subspace based methods. Index Terms: subspace based speech enhancement, Gaussian mixture density, MMSE estimation.
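A hedged sketch of such a nonlinear MMSE estimator for a single KLT coefficient is given below; the three-component mixture prior and the noise variance are invented for illustration, whereas in the paper they would be fitted to clean-speech KLT coefficients and estimated from the noisy signal.

```python
import numpy as np

# MMSE estimate of a clean coefficient x from y = x + n, with a GMM prior on x
# and zero-mean Gaussian noise n of variance sigma_n2.
w   = np.array([0.6, 0.3, 0.1])       # mixture weights (invented)
mu  = np.array([0.0, 0.0, 0.0])       # component means
var = np.array([0.05, 0.5, 4.0])      # component variances
sigma_n2 = 0.3                        # noise variance in this KLT bin (invented)

def mmse_estimate(y):
    # responsibility of each component given y ~ N(mu_k, var_k + sigma_n2)
    evid = w * np.exp(-0.5 * (y - mu) ** 2 / (var + sigma_n2)) / np.sqrt(var + sigma_n2)
    resp = evid / evid.sum()
    # per-component Wiener (LMMSE) estimate, combined with posterior weights
    wiener = mu + var / (var + sigma_n2) * (y - mu)
    return np.dot(resp, wiener)       # nonlinear in y; reduces to LMMSE for one component

print(mmse_estimate(1.2))
```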

Relevance:

30.00%

Publisher:

Abstract:

In a statistical downscaling model, it is important to remove the bias of General Circulation Model (GCM) outputs resulting from various assumptions about the geophysical processes. One conventional method for correcting such bias is standardisation, which is used prior to statistical downscaling to reduce systematic bias in the mean and variance of GCM predictors relative to observations or National Centers for Environmental Prediction / National Center for Atmospheric Research (NCEP/NCAR) reanalysis data. A major drawback of standardisation is that, while it may reduce the bias in the mean and variance of the predictor variable, it is much harder to accommodate the bias in large-scale patterns of atmospheric circulation in GCMs (e.g. shifts in the dominant storm track relative to observed data) or unrealistic inter-variable relationships. While predicting hydrologic scenarios, such uncorrected bias must be accounted for; otherwise it propagates into the computations for subsequent years. In this study, a statistical method based on an equi-probability transformation is applied after downscaling to remove the bias in the predicted hydrologic variable relative to the observed hydrologic variable over a baseline period. The model is applied to the prediction of the monsoon streamflow of the Mahanadi River in India from GCM-generated large-scale climatological data.
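A minimal sketch of the equi-probability (quantile-mapping) transformation, using synthetic placeholder data in place of the downscaled and observed streamflow series, is:

```python
import numpy as np

rng = np.random.default_rng(1)
obs_baseline  = rng.gamma(shape=2.0, scale=300.0, size=600)   # observed streamflow, baseline period
pred_baseline = rng.gamma(shape=2.5, scale=220.0, size=600)   # downscaled (biased) streamflow, baseline
pred_future   = rng.gamma(shape=2.5, scale=240.0, size=120)   # downscaled values to be corrected

def quantile_map(x, pred_ref, obs_ref):
    # non-exceedance probability of x under the predicted baseline distribution ...
    p = np.searchsorted(np.sort(pred_ref), x) / len(pred_ref)
    # ... mapped to the observed value with the same probability
    return np.quantile(obs_ref, np.clip(p, 0.0, 1.0))

corrected = quantile_map(pred_future, pred_baseline, obs_baseline)
print(corrected[:5])
```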

Relevance:

30.00%

Publisher:

Abstract:

Lifetime calculations for large, dense sensor networks with fixed energy resources, together with the remaining residual energy, have shown that for a constant energy resource the fault rate at the cluster head is invariant with network size when using the network layer with no MAC losses. Even after increasing the battery capacity of the nodes, the total lifetime does not increase beyond a limit of about 8 times. As this is a serious limitation, much research has been done at the MAC layer, which allows adaptation to the specific connectivity, traffic and channel-polling needs of sensor networks. Many MAC protocols allow control of the channel polling of the new radios available to sensor nodes, further reducing the communication overhead through idling and sleep scheduling and thus extending the lifetime of the monitoring application. We address two issues that affect the distributed characteristics and performance of connected MAC nodes: (1) determining the theoretical minimum rate based on joint coding for a correlated data source at the single hop; (2a) estimating cluster-head errors using a Bayesian rule for routing with persistence clustering, when node densities are the same and stored as prior probabilities at the network layer; (2b) estimating the upper bound of routing errors when using passive clustering, where the node densities at the multi-hop MACs are unknown and not stored at the multi-hop nodes a priori. In this paper we evaluate many MAC-based sensor network protocols and study their effects on sensor network lifetime. A renewable-energy MAC routing protocol is designed for the case where the probabilities of active nodes are not known a priori. From theoretical derivations we show that, for a Bayesian rule with known class densities ω1 and ω2 and expected error P*, the error rate is bounded by a maximum of P = 2P* for the single hop. We study the effects of energy losses using cross-layer simulation of a large sensor-network MAC setup, and the error rate that affects finding sufficient node densities for reliable multi-hop communication when the node densities are unknown. The simulation results show that, even though the lifetime is comparable, the expected Bayesian posterior probability of error is close to or higher than the bound 2P*.

Relevance:

30.00%

Publisher:

Abstract:

This letter proposes a simple tuning algorithm for digital deadbeat control based on error correlation. By injecting a square-wave reference input and calculating the correlation of the control error, a gain correction for deadbeat control is obtained. The proposed solution is simple, requires a short tuning time, and is suitable for different DC-DC converter topologies. Simulation and experimental results on synchronous buck converters confirm the properties of the proposed tuning algorithm.
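The abstract does not spell out the correlation rule; the sketch below is one plausible, purely illustrative realisation of correlation-based gain tuning on a toy first-order plant, not the authors' algorithm. Same-sign successive error samples (positive lag-1 correlation) indicate a sluggish, over-estimated gain parameter; alternating signs indicate ringing.

```python
import numpy as np

# Toy plant y[k+1] = y[k] + b*u[k] under the deadbeat law u = (r - y) / b_hat.
# The closed-loop error obeys e[k+1] = (1 - b/b_hat) * e[k], so its lag-1
# correlation reveals whether b_hat is too large (same sign) or too small
# (alternating sign), and can drive a gain correction.
b_true, b_hat, step = 0.5, 1.0, 0.4
y, window, errors = 0.0, 20, []

for k in range(2000):
    r = 1.0 if (k // 50) % 2 == 0 else -1.0      # square-wave reference
    e = r - y
    u = e / b_hat                                # deadbeat law with current estimate
    y = y + b_true * u
    errors.append(e)
    if len(errors) == window:
        e_arr = np.array(errors)
        corr = np.mean(e_arr[1:] * e_arr[:-1]) / (np.mean(e_arr ** 2) + 1e-12)
        b_hat -= step * b_hat * corr             # positive correlation -> shrink b_hat
        errors = []

print("tuned b_hat:", b_hat, "true b:", b_true)
```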

Relevance:

30.00%

Publisher:

Abstract:

Designing and optimizing high performance microprocessors is an increasingly difficult task due to the size and complexity of the processor design space, the high cost of detailed simulation and several constraints that a processor design must satisfy. In this paper, we propose the use of empirical non-linear modeling techniques to assist processor architects in making design decisions and resolving complex trade-offs. We propose a procedure for building accurate non-linear models that consists of the following steps: (i) selection of a small set of representative design points spread across the processor design space using Latin hypercube sampling, (ii) obtaining performance measures at the selected design points using detailed simulation, (iii) building non-linear models for performance using the function approximation capabilities of radial basis function networks, and (iv) validating the models using an independently and randomly generated set of design points. We evaluate our model building procedure by constructing non-linear performance models for programs from the SPEC CPU2000 benchmark suite with a microarchitectural design space that consists of 9 key parameters. Our results show that the models, built using a relatively small number of simulations, achieve high prediction accuracy (only 2.8% error in CPI estimates on average) across a large processor design space. Our models can potentially replace detailed simulation for common tasks such as the analysis of key microarchitectural trends or searches for optimal processor design points.
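A compact sketch of steps (i)-(iv) is given below; scipy's Latin hypercube sampler and RBF interpolant stand in for the radial basis function network, and a cheap analytic function stands in for detailed simulation, so it illustrates the procedure rather than reproducing the paper's models.

```python
import numpy as np
from scipy.stats import qmc
from scipy.interpolate import RBFInterpolator

def simulate_cpi(x):
    # placeholder "CPI" surface over a normalised design space in [0, 1]^d
    return 1.0 + 0.8 * x[:, 0] ** 2 + 0.5 * np.sin(3 * x[:, 1]) + 0.3 * x[:, 2] * x[:, 0]

d = 3                                            # 9 key parameters in the paper; 3 keeps the sketch small
sampler = qmc.LatinHypercube(d=d, seed=0)
x_train = sampler.random(n=60)                   # (i) representative design points via LHS
y_train = simulate_cpi(x_train)                  # (ii) "detailed simulation" at those points

model = RBFInterpolator(x_train, y_train, kernel="thin_plate_spline")   # (iii) RBF model

rng = np.random.default_rng(1)
x_test = rng.random((200, d))                    # (iv) independent random validation points
err = np.abs(model(x_test) - simulate_cpi(x_test)) / simulate_cpi(x_test)
print(f"mean relative CPI error: {100 * err.mean():.2f}%")
```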

Relevance:

30.00%

Publisher:

Abstract:

With the emergence of voltage scaling as one of the most powerful power reduction techniques, it has become important to support voltage-scalable statistical static timing analysis (SSTA) in deep submicrometer process nodes. In this paper, we propose a single delay model of a logic gate using a neural network which comprehensively captures process, voltage, and temperature variation along with input slew and output load. The number of SPICE (Simulation Program with Integrated Circuit Emphasis) simulations required to create this model over a large voltage and temperature range is found to be modest, and 4x less than that required for a conventional table-based approach of comparable accuracy. We show how the model can be used to derive the sensitivities required for linear SSTA at an arbitrary voltage and temperature. Our experimentation on ISCAS 85 benchmarks across a voltage range of 0.9-1.1 V shows that the average error in mean delay is less than 1.08% and the average error in standard deviation is less than 2.85%. The errors in predicting the 99% and 1% probability points are 1.31% and 1%, respectively, with respect to SPICE. Two potential applications of voltage-aware SSTA are presented: one for improving the accuracy of timing analysis by considering instance-specific voltage drops in power grids, and the other for determining the optimum supply voltage for a target yield in dynamic voltage scaling applications.
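A hedged sketch of the underlying idea is shown below: fit a single neural-network delay model over voltage, temperature, slew and load, then differentiate the fitted model to obtain the sensitivities that linear SSTA needs. The analytic "delay" function and all numbers are synthetic stand-ins for SPICE characterization data, and process variation is omitted.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

def spice_like_delay(v, t, slew, load):
    # synthetic stand-in for SPICE-characterized gate delay (ns)
    return (0.4 / (v - 0.3)) * (1 + 0.002 * (t - 25)) + 0.5 * slew + 2.0 * load

X = np.column_stack([
    rng.uniform(0.9, 1.1, 2000),     # supply voltage (V)
    rng.uniform(-40, 125, 2000),     # temperature (deg C)
    rng.uniform(0.01, 0.2, 2000),    # input slew (ns)
    rng.uniform(0.01, 0.1, 2000),    # output load (pF)
])
y = spice_like_delay(*X.T)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0),
).fit(X, y)

# Delay sensitivity to supply voltage at a nominal corner, via central differences
nominal = np.array([[1.0, 25.0, 0.05, 0.03]])
dv = 1e-3
hi, lo = nominal.copy(), nominal.copy()
hi[0, 0] += dv
lo[0, 0] -= dv
sens_v = (model.predict(hi) - model.predict(lo)) / (2 * dv)
print("d(delay)/dV at nominal ~", sens_v[0], "ns/V")
```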

Relevance:

30.00%

Publisher:

Abstract:

The Effective Exponential SNR Mapping (EESM) is an indispensable tool for analyzing and simulating next generation orthogonal frequency division multiplexing (OFDM) based wireless systems. It converts the different gains of multiple subchannels, over which a codeword is transmitted, into a single effective flat-fading gain with the same codeword error rate. It facilitates link adaptation by helping each user to compute an accurate channel quality indicator (CQI), which is fed back to the base station to enable downlink rate adaptation and scheduling. However, the highly non-linear nature of EESM makes a performance analysis of adaptation and scheduling difficult; even the probability distribution of EESM is not known in closed-form. This paper shows that EESM can be accurately modeled as a lognormal random variable when the subchannel gains are Rayleigh distributed. The model is also valid when the subchannel gains are correlated in frequency or space. With some simplifying assumptions, the paper then develops a novel analysis of the performance of LTE's two CQI feedback schemes that use EESM to generate CQI. The comprehensive model and analysis quantify the joint effect of several critical components such as scheduler, multiple antenna mode, CQI feedback scheme, and EESM-based feedback averaging on the overall system throughput.
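A short sketch of the EESM map and of the lognormal approximation it motivates is given below; the number of subcarriers, the EESM parameter β and the mean SNR are invented, and the subchannel gains are taken as i.i.d. Rayleigh (the paper also treats correlated gains).

```python
import numpy as np

# EESM: gamma_eff = -beta * ln( (1/N) * sum_i exp(-gamma_i / beta) )
rng = np.random.default_rng(0)
N, beta, snr_avg = 12, 4.0, 10.0                  # subcarriers per codeword, EESM parameter, mean SNR

gains = rng.exponential(1.0, size=(100_000, N))   # |h|^2 for Rayleigh fading
snr = snr_avg * gains
eesm = -beta * np.log(np.mean(np.exp(-snr / beta), axis=1))

# Moment-matched lognormal model of the effective SNR
log_eff = np.log(eesm)
mu, sigma = log_eff.mean(), log_eff.std()
print(f"lognormal fit: mu={mu:.3f}, sigma={sigma:.3f}")
print("empirical vs lognormal 5th percentile:",
      np.percentile(eesm, 5), np.exp(mu - 1.645 * sigma))
```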

Relevance:

30.00%

Publisher:

Abstract:

A class of model reference adaptive control systems which makes use of an augmented error signal was introduced by Monopoli. Convergence problems in this attractive class of systems are investigated in this paper using concepts from hyperstability theory. It is shown that the condition on the linear part of the system has to be stronger than the one given earlier. A boundedness condition on the input to the linear part of the system is taken into account in the analysis; this condition appears to have been missed in previous applications of hyperstability theory. Sufficient conditions for the convergence of the adaptive gain to the desired value are also given.

Relevance:

30.00%

Publisher:

Abstract:

The near-infrared diffuse optical tomography (DOT) technique is capable of providing good quantitative reconstruction of tissue absorption and scattering properties with additional inputs such as the input and output modulation depths and a correction for photon leakage. We have calculated the two-dimensional (2D) input modulation depth from three-dimensional (3D) diffusion to model the 2D diffusion of photons. The photon leakage when light traverses from the phantom to the fiber tip is estimated using a solid-angle model. Experiments are carried out for single (5 and 6 mm) as well as multiple inhomogeneities (6 and 8 mm) with a higher absorption coefficient in a homogeneous phantom. The diffusion equation for photon transport is solved using the finite element method, and the Jacobian is modeled for reconstructing the optical parameters. We study the development and performance of a DOT system using a modulated single light source and multiple detectors. Dual-source methods are reported to have better reconstruction capability to resolve and localize single as well as multiple inhomogeneities because of their superior noise rejection. However, an experimental setup with dual sources is much more difficult to implement, because two out-of-phase identical light probes must be adjusted symmetrically on either side of the detector during scanning. Our work shows that, with a relatively simpler single-source system, the results are better in terms of resolution and localization. The experiments are carried out with 5 and 6 mm inhomogeneities separately and with 6 and 8 mm inhomogeneities together, with an absorption coefficient almost three times that of the background. The results show that our experimental single-source system with additional inputs, such as the 2D input/output modulation depth and the air-fiber interface correction, is capable of detecting the 5 and 6 mm inhomogeneities separately and can identify the size difference of multiple inhomogeneities such as 6 and 8 mm. The localization error is zero. The recovered absorption coefficient is 93% of that of the inhomogeneity embedded in the experimental phantom.