904 results for BARTLETT CORRECTION
Abstract:
In this work, we introduce convolutional codes for network-error correction in the context of coherent network coding. We give a construction of convolutional codes that correct a given set of error patterns, as long as consecutive errors are separated by a certain interval. We also give some bounds on the field size and on the number of errors that can be corrected in a given interval. Compared to previous network error correction schemes, convolutional codes are seen to offer advantages in field size and decoding technique. Several examples are discussed to illustrate the different situations that arise in this context.
Abstract:
First, the non-linear response of a gyrostabilized platform to a small constant input torque is analyzed with respect to the effect of the time delay (inherent or deliberately introduced) in the correction torque supplied by the servomotor, which may itself be non-linear to a certain extent. The equation of motion of the platform system is a third-order non-linear non-homogeneous differential equation. An approximate analytical method of solution of this equation is utilized, and the value of the delay at which the platform response becomes unstable is calculated with it. The procedure is illustrated by means of a numerical example. Second, the non-linear response of the platform to a random input is obtained. The effects of several types of non-linearity on reducing the level of the mean square response are investigated by applying the technique of equivalent linearization and solving the resulting integral equations with Laguerre or Gaussian integration techniques. The mean square responses to white noise and band-limited white noise are obtained for various values of the non-linear parameter and for different types of non-linearity function. For positive values of the non-linear parameter, the levels of the non-linear mean square responses to both white noise and band-limited white noise are low compared to the linear mean square response. For negative values, the level of the non-linear mean square response at first increases slowly with increasing magnitude of the non-linear parameter and then suddenly jumps to a high level at a certain value of that parameter.
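To make the equivalent-linearization step above concrete, here is a minimal sketch for the textbook case of a cubic (Duffing-type) stiffness under white-noise excitation: the cubic term is replaced by an equivalent linear stiffness that depends on the (unknown) mean square response, and the resulting self-consistency condition is solved by fixed-point iteration. The specific oscillator, parameter values and iteration scheme are illustrative assumptions, not taken from the paper.

```python
import math

# x'' + 2*zeta*w0*x' + w0**2 * (x + eps*x**3) = white noise with
# two-sided spectral density S0. Equivalent linearization replaces the
# cubic stiffness by w_eq**2 = w0**2 * (1 + 3*eps*sigma2), where sigma2
# is the stationary mean square response, giving a fixed-point problem.

def mean_square_response(S0, zeta, w0, eps, iters=200):
    # start from the linear (eps = 0) mean square response
    sigma2 = math.pi * S0 / (2 * zeta * w0**3)
    for _ in range(iters):
        k_eq = w0**2 * (1 + 3 * eps * sigma2)       # equivalent stiffness
        sigma2 = math.pi * S0 / (2 * zeta * w0 * k_eq)
    return sigma2

lin = mean_square_response(S0=1.0, zeta=0.05, w0=1.0, eps=0.0)
hard = mean_square_response(S0=1.0, zeta=0.05, w0=1.0, eps=0.5)
```

For a positive (hardening) non-linear parameter `eps`, the computed mean square response `hard` comes out below the linear value `lin`, matching the trend reported in the abstract.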
Abstract:
In correlation filtering we attempt to remove the component of the aeromagnetic field that is closely related to the topography. The magnetization vector is assumed to be spatially variable, but it can be estimated successively under the additional assumption that the magnetic component due to topography is uncorrelated with the magnetic signal of deeper origin. The correlation filtering was tested on a synthetic example; the filtered field compares very well with the known signal of deeper origin. We have also applied the method to real data from the South Indian Shield. It is demonstrated that correlation filtering performs particularly well where the direction of magnetization is variable, for example, where the remanent magnetization is dominant.
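The core idea can be sketched in a few lines: regress the observed field on a topography-derived magnetic signal and subtract the correlated part, leaving the signal of deeper origin. The paper estimates a spatially variable magnetization; this simplified sketch uses a single scalar least-squares coefficient, and the data are made up for the test.

```python
def correlation_filter(observed, topo_signal):
    """Remove the topography-correlated component of `observed`."""
    n = len(observed)
    mo = sum(observed) / n
    mt = sum(topo_signal) / n
    cov = sum((o - mo) * (t - mt) for o, t in zip(observed, topo_signal))
    var = sum((t - mt) ** 2 for t in topo_signal)
    alpha = cov / var                     # effective magnetization scale
    return [o - alpha * t for o, t in zip(observed, topo_signal)]

# Synthetic check: a constant deep signal of 5 plus 2x the topographic signal.
topo = [0.0, 1.0, 2.0, 1.0, 0.0, -1.0]
deep = [5.0] * 6
obs = [d + 2.0 * t for d, t in zip(deep, topo)]
filtered = correlation_filter(obs, topo)
```

Because the deep signal is uncorrelated with (here, constant over) the topography, the fitted coefficient recovers the factor 2 exactly and the filtered field reproduces the deep signal.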
Abstract:
We discuss three methods to correct spherical aberration in a point-to-point imaging system. First, results obtained using Fermat's principle and the ray-tracing method are described briefly. Next, we obtain solutions using Lie algebraic techniques. Even though one cannot always obtain analytical results with this method, it is often more powerful than the first. The result obtained with this approach is compared with, and found to agree with, the exact result of the first method.
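As a small illustration of what the ray-tracing route quantifies, the sketch below traces axis-parallel rays through a single refracting spherical surface (vertex at z = 0, radius R, index n behind it) using exact Snell refraction: marginal rays cross the axis short of the paraxial focus z = nR/(n - 1), and the gap is the longitudinal spherical aberration. The surface and parameter values are generic textbook assumptions, not the system studied in the paper.

```python
import math

def axis_crossing(h, R=1.0, n=1.5):
    """Axial crossing point of a ray at height h, exactly refracted once."""
    theta1 = math.asin(h / R)                 # angle of incidence
    theta2 = math.asin(math.sin(theta1) / n)  # Snell's law
    z_hit = R - math.sqrt(R * R - h * h)      # axial position of hit point
    # refracted ray bends toward the axis by (theta1 - theta2)
    return z_hit + h / math.tan(theta1 - theta2)

paraxial = 1.5 * 1.0 / (1.5 - 1.0)   # n*R/(n-1) = 3.0 for these values
marginal = axis_crossing(0.5)        # a marginal ray at h = 0.5
aberration = paraxial - marginal     # positive: marginal ray focuses short
```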
Abstract:
A single-source network is said to be memory-free if all of the internal nodes (those except the source and the sinks) do not employ memory but merely send linear combinations of the symbols received at their incoming edges on their outgoing edges. In this work, we introduce network-error correction for single-source, acyclic, unit-delay, memory-free networks with coherent network coding for multicast. A convolutional code is designed at the source, based on the network code, in order to correct network-errors that correspond to any of a given set of error patterns, as long as consecutive errors are separated by a certain interval that depends on the convolutional code selected. Bounds on this interval and on the field size required for constructing the convolutional code with the required free distance are also obtained. We illustrate the performance of convolutional network-error correcting codes (CNECCs) designed for unit-delay networks using simulations of CNECCs on an example network under a probabilistic error model.
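For readers unfamiliar with the source-side ingredient, here is a minimal sketch of a rate-1/2 binary convolutional encoder (the classic generators 7 and 5 in octal, constraint length 3). This is a generic textbook encoder shown only to fix ideas; it is not the paper's CNECC construction, which is built from the network code itself.

```python
def conv_encode(bits, g1=0b111, g2=0b101, K=3):
    """Encode a bit list with a rate-1/2 feedforward convolutional code."""
    state = 0
    out = []
    for b in bits + [0] * (K - 1):               # tail bits flush the encoder
        state = ((state << 1) | b) & ((1 << K) - 1)
        out.append(bin(state & g1).count("1") % 2)  # first parity stream
        out.append(bin(state & g2).count("1") % 2)  # second parity stream
    return out

codeword = conv_encode([1, 0, 1, 1])   # 4 message bits -> 12 coded bits
```

The free distance of the chosen generators determines how far apart error bursts must be for the decoder to recover, which is exactly the "interval" the abstract bounds.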
Abstract:
In a statistical downscaling model, it is important to remove the bias of General Circulation Model (GCM) outputs resulting from various assumptions about the geophysical processes. One conventional method for correcting such bias is standardisation, which is used prior to statistical downscaling to reduce systematic bias in the means and variances of GCM predictors relative to the observations or to National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis data. A major drawback of standardisation is that, while it may reduce the bias in the mean and variance of a predictor variable, it is much harder to accommodate the bias in large-scale patterns of atmospheric circulation in GCMs (e.g. shifts in the dominant storm track relative to observed data) or unrealistic inter-variable relationships. When predicting hydrologic scenarios, such uncorrected bias must be taken care of; otherwise it propagates into the computations for subsequent years. In this study, a statistical method based on equi-probability transformation is applied after downscaling to remove the bias of the predicted hydrologic variable relative to the observed hydrologic variable for a baseline period. The model is applied to the prediction of monsoon streamflow of the Mahanadi River in India from GCM-generated large-scale climatological data.
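An equi-probability transformation (quantile mapping) can be sketched very compactly: each predicted value is mapped to the observed value that has the same empirical non-exceedance probability over the baseline period. The empirical-CDF implementation and the toy data below are illustrative assumptions, not the paper's exact formulation.

```python
def quantile_map(value, model_baseline, obs_baseline):
    """Map a model value to the observed value of equal empirical probability."""
    m = sorted(model_baseline)
    o = sorted(obs_baseline)
    # non-exceedance probability of `value` in the model baseline
    p = sum(1 for x in m if x <= value) / len(m)
    # observed value occupying the same rank
    idx = min(int(p * len(o)), len(o) - 1)
    return o[idx]

obs = list(range(1, 11))       # observed baseline: 1..10
model = list(range(11, 21))    # model baseline biased high by 10
corrected = quantile_map(15, model, obs)
```

A model value of 15 sits at the median of the biased model baseline and is therefore mapped near the median of the observations, removing the systematic offset.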
Optimised form of acceleration correction algorithm within SPH-based simulations of impact mechanics
Abstract:
In the context of SPH-based simulations of impact dynamics, an optimised and automated form of the acceleration correction algorithm (Shaw and Reid, 2009a) is developed so as to remove spurious high-frequency oscillations in computed responses whilst retaining the stabilizing characteristics of the artificial viscosity in the presence of shocks and layers with sharp gradients. A rational framework for an insightful characterisation of the erstwhile acceleration correction method is first set up. This is followed by an optimised version of the method, wherein the strength of the correction term in the momentum balance and energy equations is optimised. For the first time, this leads to an automated procedure for arriving at the artificial viscosity term. In particular, this is achieved by taking a spatially varying, response-dependent support size for the kernel function through which the correction term is computed. The optimum value of the support size is deduced by minimising the (spatially localised) total variation of the high-frequency oscillation in the acceleration term with respect to its (local) mean. The derivation of the method, its advantages over the heuristic method and issues related to its numerical implementation are discussed in detail. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
Motivated by applications to distributed storage, Gopalan et al. recently introduced the interesting notion of information-symbol locality in a linear code. By this it is meant that each message symbol appears in a parity-check equation of small Hamming weight, thereby enabling recovery of the message symbol by examining a small number of other code symbols. This notion is expanded to the case when all code symbols, not just the message symbols, are covered by such "local" parity. In this paper, we extend the results of Gopalan et al. so as to permit recovery of an erased code symbol even in the presence of errors in local parity symbols. We present tight bounds on the minimum distance of such codes and exhibit codes that are optimal with respect to the local error-correction property. As a corollary, we obtain an upper bound on the minimum distance of a concatenated code.
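A toy illustration of the locality notion itself: code symbols are split into small groups, each protected by an XOR parity, so an erased symbol is repaired from its own group rather than from the whole codeword. This generic example shows only plain locality; the paper's contribution (recovery despite *errors* in the local parities, and the associated distance bounds) is not reproduced here.

```python
def add_local_parities(symbols, group_size):
    """Attach one XOR parity to each group of `group_size` symbols."""
    groups = []
    for i in range(0, len(symbols), group_size):
        group = symbols[i:i + group_size]
        parity = 0
        for s in group:
            parity ^= s
        groups.append((group, parity))
    return groups

def recover(groups, g, j):
    """Recover erased symbol j of group g from the other group members."""
    group, parity = groups[g]
    value = parity
    for k, s in enumerate(group):
        if k != j:
            value ^= s
    return value

groups = add_local_parities([3, 7, 4, 9, 1, 6], group_size=3)
restored = recover(groups, 0, 1)   # rebuild the symbol 7 from its group
```

Repair touches only `group_size` symbols instead of the whole codeword, which is exactly the benefit for distributed storage.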
Abstract:
Using a Girsanov change of measures, we propose novel variations within a particle-filtering algorithm, as applied to the inverse problem of state and parameter estimation of nonlinear dynamical systems of engineering interest. The aim is to weakly correct for the linearization or integration errors that almost invariably occur whilst numerically propagating the process dynamics, typically governed by nonlinear stochastic differential equations (SDEs). Specifically, the correction for linearization, provided by the likelihood or the Radon-Nikodym derivative, is incorporated within the evolving flow in two steps. Once the likelihood, an exponential martingale, is split into a product of two factors, the correction owing to the first factor is implemented via rejection sampling in the first step. The second factor, which is directly computable, is accounted for via two different schemes: one employing resampling, and the other using a gain-weighted innovation term added to the drift field of the process dynamics, thereby overcoming the problem of sample dispersion posed by resampling. The proposed strategies, employed as add-ons to existing particle filters (the bootstrap and auxiliary SIR filters in this work), are found to non-trivially improve the convergence and accuracy of the estimates and also to yield reduced mean square errors of such estimates vis-a-vis those obtained through the parent filtering schemes.
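To fix the baseline that these add-ons modify, here is a bare bootstrap particle filter for a scalar linear-Gaussian toy model (propagate, weight by the observation likelihood, resample). The Girsanov-based correction steps themselves (rejection sampling on the first likelihood factor, the gain-weighted innovation for the second) are not reproduced; the model and parameters are illustrative assumptions.

```python
import math
import random

def bootstrap_filter(ys, n=500, a=0.9, q=0.1, r=0.1, seed=1):
    """Bootstrap filter for x_{k+1} = a*x_k + N(0,q), y_k = x_k + N(0,r)."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n)]
    estimates = []
    for y in ys:
        # propagate each particle through the process dynamics
        particles = [a * p + rng.gauss(0.0, math.sqrt(q)) for p in particles]
        # weight by the Gaussian observation likelihood
        w = [math.exp(-0.5 * (y - p) ** 2 / r) for p in particles]
        total = sum(w)
        w = [wi / total for wi in w]
        estimates.append(sum(wi * p for wi, p in zip(w, particles)))
        # multinomial resampling (the step that causes sample dispersion)
        particles = rng.choices(particles, weights=w, k=n)
    return estimates

est = bootstrap_filter([1.0, 0.9, 0.8])
```

The resampling line is the one the paper's gain-weighted innovation scheme is designed to avoid.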
Abstract:
General circulation models (GCMs) are routinely used to simulate future climatic conditions. However, rainfall outputs from GCMs are highly uncertain in preserving temporal correlations, frequencies, and intensity distributions, which limits their direct application in downscaling and hydrological modeling studies. To address these limitations, raw outputs of GCMs or regional climate models are often bias-corrected using past observations. In this paper, a methodology is presented for using a nested bias-correction approach to predict the frequencies and occurrences of severe droughts and wet conditions across India for a 50-year period (2050-2099) centered around 2075. Specifically, monthly time series of rainfall from 17 GCMs are used to draw conclusions for extreme events. An increasing trend in the frequencies of droughts and wet events is observed. The northern part of India and the coastal regions show the maximum increase in the frequency of wet events. Drought events are expected to increase in the west central, peninsular, and central northeast regions of India. (C) 2013 American Society of Civil Engineers.
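The "nested" idea can be sketched as correction applied at more than one time scale: monthly values are first standardised against observed monthly statistics, then each year's total is rescaled toward the observed annual mean total. This two-level toy omits parts of the actual nested bias-correction procedure (e.g. matching lag-1 correlations), and all numbers are illustrative.

```python
def standardise(xs, target_mean, target_std):
    """Rescale xs to have the target mean and standard deviation."""
    m = sum(xs) / len(xs)
    s = (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
    return [target_mean + (x - m) * target_std / s for x in xs]

def nested_correct(monthly_by_year, obs_month_mean, obs_month_std,
                   obs_annual_mean):
    corrected = []
    for year in monthly_by_year:
        months = standardise(year, obs_month_mean, obs_month_std)
        scale = obs_annual_mean / sum(months)   # annual-level nesting
        corrected.append([x * scale for x in months])
    return corrected

years = [[10, 12, 14, 16], [20, 22, 24, 26]]    # 2 toy years, 4 "months"
fixed = nested_correct(years, obs_month_mean=5.0, obs_month_std=1.0,
                       obs_annual_mean=24.0)
```

After correction every year's total matches the observed annual mean, so bias is removed at the annual scale as well as the monthly one.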
Abstract:
There is a strong relation between sparse signal recovery and error control coding. It is known that burst errors are block sparse in nature, so here we attempt to solve the burst-error correction problem using block-sparse signal recovery methods. We construct partial-Fourier-based encoding and decoding matrices using results on difference sets. These constructions offer guaranteed and efficient error correction when used in conjunction with reconstruction algorithms that exploit block sparsity.
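The burst-as-block-sparsity viewpoint can be illustrated directly: a length-b burst is an error vector with a single nonzero block, so a decoder can locate it by testing which block of the measurement matrix explains the syndrome s = H e with zero residual. The brute-force block search and the hand-picked real-valued H below are stand-ins for the paper's partial-Fourier constructions and block-sparse recovery algorithms.

```python
# 3x8 measurement matrix, block (burst) length b = 2, so 4 candidate blocks.
H = [[1.0, 2.0, -1.0, 0.5, 2.0, -0.5, 1.5, 0.0],
     [0.0, 1.0, 2.0, -1.0, 0.5, 2.0, -0.5, 1.5],
     [1.5, 0.0, 1.0, 2.0, -1.0, 0.5, 2.0, -0.5]]
b = 2

e = [0.0] * 8
e[4], e[5] = 1.5, -2.0                  # a burst occupying block 2

s = [sum(Hi[j] * e[j] for j in range(8)) for Hi in H]   # observed syndrome

def block_fit(blk):
    """Least-squares fit of s using only block `blk` (2x2 normal equations);
    returns (squared residual, fitted coefficients)."""
    a = [[Hi[b * blk], Hi[b * blk + 1]] for Hi in H]
    g11 = sum(r[0] * r[0] for r in a)
    g12 = sum(r[0] * r[1] for r in a)
    g22 = sum(r[1] * r[1] for r in a)
    c1 = sum(r[0] * si for r, si in zip(a, s))
    c2 = sum(r[1] * si for r, si in zip(a, s))
    det = g11 * g22 - g12 * g12
    x0 = (g22 * c1 - g12 * c2) / det
    x1 = (g11 * c2 - g12 * c1) / det
    res = sum((r[0] * x0 + r[1] * x1 - si) ** 2 for r, si in zip(a, s))
    return res, (x0, x1)

fits = [block_fit(k) for k in range(4)]
best = min(range(4), key=lambda k: fits[k][0])   # block with zero residual
```

Only the true burst block reproduces the syndrome exactly; the fitted coefficients of that block recover the burst values themselves.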
Abstract:
Propranolol, a beta-adrenergic receptor blocker, is presently considered a potential therapeutic intervention under investigation for its role in the prevention and treatment of osteoporosis. However, no studies have compared the osteoprotective properties of propranolol with well-accepted therapeutic interventions for the treatment of osteoporosis. To address this question, this study was designed to evaluate the bone-protective effects of zoledronic acid, alfacalcidol and propranolol in an animal model of postmenopausal osteoporosis. Five days after ovariectomy, 36 ovariectomized (OVX) rats were divided into 6 equal groups, randomized to treatment with zoledronic acid (100 μg/kg, single intravenous dose), alfacalcidol (0.5 μg/kg, oral gavage daily) or propranolol (0.1 mg/kg, subcutaneously 5 days per week) for 12 weeks. Untreated OVX and sham-OVX rats were used as controls. At the end of the study, the rats were killed under anesthesia. For bone porosity evaluation, whole fourth lumbar vertebrae (LV4) were removed; LV4 were also used to measure bone mechanical properties. Left femurs were used for bone histology. Propranolol produced a significant decrease in bone porosity in comparison with the OVX control. Moreover, propranolol significantly improved bone mechanical properties and bone quality when compared with the OVX control. The osteoprotective effect of propranolol was comparable with those of zoledronic acid and alfacalcidol. Based on this comparative study, the results strongly suggest that propranolol might be a new therapeutic intervention for the management of postmenopausal osteoporosis in humans.