910 results for Correction
Abstract:
Erasure codes are an efficient means of storing data across a network in comparison to data replication, as they tend to reduce the amount of data stored in the network and offer increased resilience in the presence of node failures. These codes perform poorly, however, when repair of a failed node is called for, as they typically require the entire file to be downloaded to repair a single failed node. A new class of erasure codes, termed regenerating codes, was recently introduced that does much better in this respect. However, given the variety of efficient erasure codes available in the literature, there is considerable interest in the construction of coding schemes that would enable traditional erasure codes to be used, while retaining the feature that only a fraction of the data need be downloaded for node repair. In this paper, we present a simple yet powerful framework that does precisely this. Under this framework, the nodes are partitioned into two types and encoded using two codes in a manner that reduces the problem of node repair to that of erasure decoding of the constituent codes. Depending upon the choice of the two codes, the framework can be used to obtain one or more of the following advantages: simultaneous minimization of storage space and repair bandwidth, low complexity of operation, fewer disk reads at helper nodes during repair, and error detection and correction.
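For context, a minimal sketch (not the paper's framework) of why conventional erasure-code repair is bandwidth-hungry: with a toy (n = 3, k = 2) single-parity code, rebuilding one lost fragment requires downloading k surviving fragments, i.e. a whole file's worth of data. All names in the sketch are hypothetical.

```python
# Toy (n = 3, k = 2) single-parity erasure code: two data fragments plus
# one parity fragment p = a XOR b. Tolerates any single node failure.

def encode_parity(a: bytes, b: bytes) -> bytes:
    """Parity fragment of the toy (3, 2) code."""
    return bytes(x ^ y for x, y in zip(a, b))

file = b"abcdefgh"
a, b = file[:4], file[4:]          # split the file into k = 2 fragments
p = encode_parity(a, b)            # one parity fragment

# Storage: 3-way replication stores 24 bytes; the code stores only 12.
assert len(a) + len(b) + len(p) == 12

# Repair of a failed node holding `a`: download the k = 2 surviving
# fragments (b and p) -- one whole file's worth of data.
recovered_a = bytes(x ^ y for x, y in zip(b, p))
assert recovered_a == a
print("repaired fragment:", recovered_a)
```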
Abstract:
Recently, Ebrahimi and Fragouli proposed an algorithm to construct scalar network codes using small fields (and vector network codes of small lengths) satisfying multicast constraints in a given single-source, acyclic network. The contribution of this paper is twofold. Primarily, we extend the scalar network coding algorithm of Ebrahimi and Fragouli (henceforth referred to as the EF algorithm) to block network-error correction. Existing construction algorithms for block network-error correcting codes require a rather large field size, which grows with the size of the network and the number of sinks, and can thereby be prohibitive in large networks. We give an algorithm which, starting from a given network-error correcting code, can obtain another network code using a small field, with the same error correcting capability as the original code. Our secondary contribution is to improve the EF algorithm itself. The major step in the EF algorithm is to find a least-degree irreducible polynomial that is coprime to another large-degree polynomial. We suggest an alternative method to compute this coprime polynomial, which is faster than the brute-force method in the work of Ebrahimi and Fragouli.
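To make the key step concrete, here is a brute-force sketch (the baseline the paper improves on; its faster coprimality method is not reproduced here) of finding a least-degree irreducible polynomial over GF(2) coprime to a given polynomial f. Polynomials are encoded as Python ints, bit i being the coefficient of x^i.

```python
# Brute-force version of the major step in the EF algorithm: find the
# least-degree irreducible polynomial over GF(2) coprime to f.

def gf2_mod(a: int, b: int) -> int:
    """Remainder of polynomial division a mod b over GF(2)."""
    db = b.bit_length()
    while a.bit_length() >= db:
        a ^= b << (a.bit_length() - db)
    return a

def gf2_gcd(a: int, b: int) -> int:
    while b:
        a, b = b, gf2_mod(a, b)
    return a

def is_irreducible(p: int) -> bool:
    """Trial division by every polynomial of degree 1 .. deg(p)//2."""
    deg = p.bit_length() - 1
    for d in range(1, deg // 2 + 1):
        for q in range(1 << d, 1 << (d + 1)):  # all degree-d polynomials
            if gf2_mod(p, q) == 0:
                return False
    return True

def least_irreducible_coprime_to(f: int) -> int:
    deg = 1
    while True:
        for p in range(1 << deg, 1 << (deg + 1)):
            if is_irreducible(p) and gf2_gcd(f, p) == 1:
                return p
        deg += 1

# Example: f = x^3 + x = x(x + 1)^2, so x and x + 1 are ruled out and the
# answer is x^2 + x + 1 (binary 111).
print(bin(least_irreducible_coprime_to(0b1010)))  # -> 0b111
```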
Abstract:
While it is well known that extremely long low-density parity-check (LDPC) codes perform exceptionally well in error-correction applications, short-length codes are preferable in practice. However, short-length LDPC codes suffer from performance degradation owing to graph-based impairments such as short cycles, trapping sets and stopping sets in the bipartite graph of the LDPC matrix. In particular, performance degradation at moderate to high Eb/N0 is caused by oscillations in bit-node a posteriori probabilities induced by short cycles and trapping sets in the bipartite graph. In this study, a computationally efficient algorithm is proposed to improve the performance of short-length LDPC codes at moderate to high Eb/N0. The algorithm makes use of the information generated by the belief propagation (BP) algorithm in previous iterations before a decoding failure occurs. Using this information, a reliability-based estimation is performed on each bit node to supplement the BP algorithm. The proposed algorithm gives an appreciable coding gain over BP decoding for LDPC codes with code rates of 1/2 or less. The coding gains are modest to significant for regular LDPC codes optimised for bipartite-graph conditioning, and huge for unoptimised codes. Hence, the algorithm is useful for relaxing some stringent constraints on the graphical structure of the LDPC code and for developing hardware-friendly designs.
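A hedged sketch of the general idea: when BP fails to converge, use the soft information accumulated over previous iterations, rather than only the final (oscillating) values, to estimate each bit. The paper's exact reliability estimator is not given in the abstract; averaging the per-iteration LLRs is shown purely as an illustration.

```python
# Illustrative stand-in for the reliability-based estimation step: damp
# cycle-induced LLR oscillations by averaging over BP iterations.
import numpy as np

def reliability_estimate(llr_history: np.ndarray) -> np.ndarray:
    """llr_history: (num_iterations, n) per-bit LLRs from a failed BP run.
    Returns hard decisions from the time-averaged reliabilities."""
    avg_llr = llr_history.mean(axis=0)   # damps the oscillation
    return (avg_llr < 0).astype(int)     # LLR < 0  ->  bit = 1

# Toy example: bit 0 oscillates across iterations (a trapping-set
# symptom), bits 1 and 2 are stable. Averaging gives a stable decision.
history = np.array([[+2.0, -1.5, +0.8],
                    [-1.8, -1.6, +0.9],
                    [+2.1, -1.4, +0.7]])
print(reliability_estimate(history))  # -> [0 1 0]
```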
Abstract:
We present an experimental set-up, developed for the first time in India, for the determination of the mixing ratio and carbon isotopic ratio of air-CO2. The set-up includes traps for the collection and extraction of CO2 from air samples using cryogenic procedures, followed by measurement of the CO2 mixing ratio using an MKS Baratron gauge and analysis of isotopic ratios using the dual-inlet peripheral of a high-sensitivity isotope ratio mass spectrometer (IRMS), a MAT 253. The internal reproducibility (precision) for the δ13C measurement, established from repeat analyses of CO2, is ±0.03‰. The set-up is calibrated with international carbonate and air-CO2 standards. An in-house air-CO2 mixture, 'OASIS AIRMIX', is prepared by mixing CO2 from a high-purity cylinder with O2 and N2, and an aliquot of this mixture is routinely analyzed together with the air samples. The external reproducibilities for the measurement of the CO2 mixing ratio and the carbon isotopic ratio are ±7 μmol mol⁻¹ (n = 169) and ±0.05‰ (n = 169), respectively, based on the mean difference between two aliquots of the reference air mixture analyzed during daily operation from November 2009 to December 2011. The correction due to the isobaric interference of N2O on air-CO2 samples is determined separately by analyzing mixtures of CO2 (of known isotopic composition) and N2O in varying proportions; a +0.2‰ correction in the δ13C value is determined for an N2O concentration of 329 ppb. As an application, we present results from an experiment conducted during the solar eclipse of 2010. The isotopic ratio and the mixing ratio of CO2 in the air samples collected during the event differ from those of neighbouring samples, suggesting the role of atmospheric inversion in trapping CO2 emitted from the urban atmosphere during the eclipse.
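For reference, the standard delta notation in which such carbon isotopic ratios are reported (values in parts per thousand, ‰); this background definition is supplied for context and is not taken from the paper itself.

```latex
% Delta notation for the carbon isotopic ratio (reported in permil):
\[
  \delta^{13}\mathrm{C}
  = \left( \frac{R_{\mathrm{sample}}}{R_{\mathrm{standard}}} - 1 \right)
    \times 1000,
  \qquad
  R = \frac{{}^{13}\mathrm{C}}{{}^{12}\mathrm{C}}
\]
```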
Abstract:
Hydrogen-bonded complexes formed between square pyramidal Fe(CO)5 and HX (X = F, Cl, Br), showing X–H···Fe interactions, have been investigated theoretically using density functional theory (DFT) including dispersion correction. The geometry, interaction energy, and large red shift of about 400 cm⁻¹ in the HX stretching frequency confirm X–H···Fe hydrogen bond formation. In the (CO)5Fe···HBr complex, following the significant red shift, the HBr stretching mode is coupled with the carbonyl stretching modes. This clearly affects the correlation between frequency shift and binding energy, which is a hallmark of hydrogen bonds. Atoms in Molecules (AIM) analyses show the presence of a bond critical point between the iron and the hydrogen of HX, and significant mutual penetration. These X–H···Fe hydrogen bonds follow most but not all of the eight criteria proposed by Koch and Popelier (J. Phys. Chem. 1995, 99, 9747) based on their investigations of C–H···O hydrogen bonds. Natural bond orbital (NBO) analysis indicates charge transfer from the organometallic system to the hydrogen bond donor. However, there is no correlation between the extent of charge transfer and the interaction energy, contrary to what is proposed in the recent IUPAC recommendation (Pure Appl. Chem. 2011, 83, 1637). The 'hydrogen bond radius' for iron has been determined to be 1.60 ± 0.02 Å, which, not surprisingly, lies between the covalent (1.27 Å) and van der Waals (2.0 Å) radii of Fe. DFT and AIM studies reveal that Fe in square pyramidal Fe(CO)5 can also form halogen bonds with ClF and ClH as 'halogen bond donors'. Both these complexes show mutual penetration as well, though the Fe···Cl distance is close to the sum of the van der Waals radii of Fe and Cl in (CO)5Fe···ClH, and about 1 Å less in (CO)5Fe···ClF.
Abstract:
The ultimate bearing capacity of strip foundations in the presence of inclined groundwater flow, considering both upward and downward flow directions, has been determined by using lower bound finite-element limit analysis. A numerical solution has been generated for both smooth and rough footings placed on frictional soils. A correction factor (fγ), which needs to be multiplied with the Nγ-term, has been computed to account for groundwater seepage. The variation of fγ has been obtained as a function of the hydraulic gradient (i) for various inclinations of groundwater flow. For a given magnitude of i, there exists a certain critical inclination of the flow for which the value of fγ is minimized. With upward flow, for all flow inclinations, the magnitude of fγ always reduces with an increase in the value of i. An example is also provided to illustrate the application of the results when designing foundations in the presence of groundwater seepage.
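To show where the seepage correction enters, here is the classical bearing-capacity expression with fγ applied to the Nγ-term (standard textbook form; the paper's exact expression may differ in detail).

```latex
% Ultimate bearing capacity with the seepage correction factor f_gamma:
\[
  q_u \;=\; c\,N_c \;+\; q\,N_q \;+\; \tfrac{1}{2}\,\gamma\,B\,N_\gamma\,f_\gamma
\]
```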
Abstract:
The Girsanov linearization method (GLM), proposed earlier in Saha, N., and Roy, D., 2007, "The Girsanov Linearisation Method for Stochastically Driven Nonlinear Oscillators," J. Appl. Mech., 74, pp. 885-897, is reformulated to arrive at a nearly exact, semianalytical, weak and explicit scheme for nonlinear mechanical oscillators under additive stochastic excitations. At the heart of the reformulated linearization is a temporally localized rejection-sampling strategy that, combined with a resampling scheme, enables selecting from and appropriately modifying an ensemble of locally linearized trajectories while weakly applying the Girsanov correction (the Radon-Nikodym derivative) for the linearization errors. The semianalyticity is due to an explicit linearization of the nonlinear drift terms, and it plays a crucial role in keeping the Radon-Nikodym derivative 'nearly bounded' above by the inverse of the linearization time step (meaning that only a subset of linearized trajectories with low, yet finite, probability exceeds this bound). Drift linearization is conveniently accomplished via the first few (lower-order) terms of the associated stochastic (Ito) Taylor expansion, so as to exclude (multiple) stochastic integrals from the numerical treatment. Similarly, the Radon-Nikodym derivative, which is a strictly positive exponential (super-)martingale, is converted to a canonical form and evaluated over each time step without directly computing the stochastic integrals appearing in its argument. Through numerical implementations for a few low-dimensional nonlinear oscillators, the proposed variants of the scheme, referred to here as the Girsanov corrected linearization method (GCLM), are shown to exhibit remarkably higher numerical accuracy over a much larger range of time step sizes than is possible with local drift-linearization schemes on their own.
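For readers unfamiliar with the Girsanov correction mentioned above: the Radon-Nikodym derivative relating the true and linearized measures takes the standard exponential-martingale form below (generic form for a correction drift h; the paper's specific h is not reproduced here).

```latex
% Radon-Nikodym derivative (exponential martingale) for a Girsanov
% change of measure with correction drift h_s:
\[
  \Lambda_T \;=\; \exp\!\left( \int_0^T h_s \, dW_s
      \;-\; \frac{1}{2} \int_0^T h_s^2 \, ds \right)
\]
```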
Abstract:
Imaging thick specimens at a large penetration depth is a challenge in biophysics and materials science. Refractive-index mismatch results in spherical aberration that is responsible for streaking artifacts, while the Poissonian nature of photon emission and scattering introduces noise in the acquired three-dimensional image. To overcome these unwanted artifacts, we introduce a two-fold approach: first, point-spread-function modeling with correction for spherical aberration; and second, a maximum-likelihood reconstruction technique to eliminate noise. Experimental results on fluorescent nano-beads and fluorescently coated yeast cells (encaged in agarose gel) show substantial minimization of artifacts. The noise is substantially suppressed, while the side-lobes (generated by the streaking effect) drop by 48.6% relative to the raw data at a depth of 150 μm. The proposed imaging technique can be integrated into sophisticated fluorescence imaging systems for rendering high resolution beyond the 150 μm mark.
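A hedged sketch of maximum-likelihood deconvolution under Poisson noise, in the spirit of the reconstruction described above: the Richardson-Lucy iteration is the classic ML estimator for this noise model. The Gaussian PSF below is a stand-in, not the paper's aberration-corrected PSF model.

```python
# Richardson-Lucy iteration: the ML estimate for a Poisson imaging model.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iterations=25):
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(image, image.mean())   # flat initial guess
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)  # avoid divide-by-zero
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy usage: blur a point source with a Gaussian PSF, add Poisson noise,
# then restore. The restored peak returns to the source location.
x = np.zeros((33, 33)); x[16, 16] = 200.0
g = np.arange(9) - 4
psf = np.exp(-(g[:, None]**2 + g[None, :]**2) / 4.0); psf /= psf.sum()
noisy = np.random.poisson(fftconvolve(x, psf, mode="same").clip(0)).astype(float)
restored = richardson_lucy(noisy, psf)
print(np.unravel_index(restored.argmax(), restored.shape))  # ~ (16, 16)
```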
Abstract:
An experimental study of a small partial admission axial turbine with low aspect ratio blades has been carried out. Tests were also performed with a full admission stator replacing the partial one on the same rotor, to assess the losses occurring due to partial admission. Further tests were conducted with the stator admission area split into two and three sectors to study the effects of multiple admission sectors. The method of Ainley and Mathieson, with a suitable correction for aspect ratio in the secondary losses as proposed by Kacker and Okapuu, gives a good estimate of the efficiency. Estimates of partial admission losses are made and compared with experimentally observed values. The Suter and Traupel correlations for partial admission losses yielded reasonably accurate estimates of efficiency even for small turbines, though limited to the region of the design u/c_is. Stenning's original concept of expansion losses in a single sector is extended to include multiple sectors of opening. The computed efficiency debit due to each additional sector opened is compared with test values, and the agreement is observed to be good, verifying Stenning's original concept of expansion losses. When the expression developed on this extended concept is modified by a correction factor, the prediction of partial admission efficiencies is nearly as good as that of Suter and Traupel. Further, performance benefits accrue if the turbine is configured with an increased aspect ratio at the expense of reduced partial admission.
Abstract:
An n-length block code C is said to be r-query locally correctable if, for any codeword x ∈ C, one can probabilistically recover any one of the n coordinates of x by querying at most r coordinates of a possibly corrupted version of x. It is known that linear codes whose duals contain 2-designs are locally correctable. In this article, we consider linear codes whose duals contain t-designs for larger t. It is shown here that for such codes, for a given number of queries r, one can in general handle a larger number of corrupted bits under linear decoding. We exhibit, to our knowledge for the first time, a finite-length code whose dual contains 4-designs and which can tolerate a fraction of up to 0.567/r corrupted symbols, as against a maximum of 0.5/r in prior constructions. We also present an upper bound showing that 0.567 is the best possible for this code length and query complexity over this symbol alphabet, thereby establishing the optimality of this code in this respect. A second result in the article is a finite-length bound relating the number of queries r to the fraction of errors that can be tolerated, for a locally correctable code that employs a randomized algorithm in which each instance of the algorithm involves t-error correction.
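A hedged illustration of the standard mechanism behind local correction via low-weight dual codewords (the 2-design setting that the article generalizes): every weight-4 dual codeword through coordinate i yields a 3-query parity vote for x_i. The sketch below demonstrates this on the [7,4] Hamming code, whose dual (the simplex code) consists entirely of weight-4 codewords; it is not the article's 4-design construction.

```python
# 3-query local correction of the [7,4] Hamming code via weight-4 dual
# codewords, with a majority vote across the checks through coordinate i.
import itertools

# Parity-check matrix of the [7,4] Hamming code; its row span (the
# simplex code) consists of weight-4 dual codewords.
H = [(1, 0, 1, 0, 1, 0, 1),
     (0, 1, 1, 0, 0, 1, 1),
     (0, 0, 0, 1, 1, 1, 1)]

def dual_codewords():
    for c in itertools.product([0, 1], repeat=3):
        v = tuple(sum(c[r] * H[r][j] for r in range(3)) % 2 for j in range(7))
        if any(v):
            yield v

def locally_correct(word, i):
    """Majority vote over the 3-query parity checks through coordinate i,
    together with the (possibly corrupted) symbol itself."""
    votes = [word[i]]
    for h in dual_codewords():
        if h[i] == 1:
            others = [j for j in range(7) if h[j] == 1 and j != i]
            votes.append(sum(word[j] for j in others) % 2)
    return max(set(votes), key=votes.count)

codeword = [0, 0, 0, 0, 0, 0, 0]            # zero codeword, for simplicity
corrupted = codeword[:]; corrupted[2] ^= 1  # flip one symbol
print([locally_correct(corrupted, i) for i in range(7)])  # all zeros again
```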
Abstract:
The ultimate bearing capacity of strip foundations subjected to horizontal groundwater flow has been computed by making use of the stress characteristics method, which is well known for its capability to solve quite accurately a variety of stability problems in geotechnical engineering. The numerical solution has been generated for both smooth and rough footings placed on frictional soils. A correction factor (fγ), associated with the Nγ-term and accounting for the existence of groundwater flow, has been introduced. The variation of fγ has been obtained as a function of the hydraulic gradient (i) for different values of the soil friction angle. The magnitude of fγ reduces continuously with an increase in the value of i.
Abstract:
General circulation models (GCMs) use transient climate simulations to predict future climate conditions. Coarse grid resolutions and process uncertainties necessitate the use of downscaling models to simulate precipitation. However, with multiple GCMs now available, selecting an atmospheric variable from a particular model that is representative of the ensemble mean becomes an important consideration for downscaling models. The variable convergence score (VCS) provides a simple yet meaningful approach to this issue, offering a mechanism to evaluate variables against each other with respect to the stability they exhibit in future climate simulations. In this study, the VCS methodology is applied to 10 atmospheric variables of particular interest in downscaling precipitation over India, both nationally and on a regional basis. The nested bias-correction methodology is used to remove systematic biases in the GCM simulations, and a single VCS curve is developed for the entire country. The generated VCS curve is expected to assist in quantifying variable performance across different GCMs, thus reducing the uncertainty in climate impact-assessment studies. The results indicate higher consistency across GCMs for pressure and temperature, and lower consistency for precipitation and related variables. Regional assessments, while broadly consistent with the overall results, indicate low convergence in atmospheric attributes for the northeastern parts of India.
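A hedged sketch of the bias-correction idea referenced above: simple mean-and-variance scaling of a GCM series against observations. The nested bias-correction methodology applies such corrections at nested time scales (e.g. daily, monthly, annual); only the single-scale building block is shown here, with illustrative numbers.

```python
# Single-scale building block of bias correction: map the GCM series
# onto the observed mean and standard deviation.
import numpy as np

def scale_to_obs(gcm: np.ndarray, obs: np.ndarray) -> np.ndarray:
    """Standardize the GCM series, then rescale to the observed moments."""
    z = (gcm - gcm.mean()) / gcm.std()
    return obs.mean() + z * obs.std()

rng = np.random.default_rng(0)
obs = rng.normal(25.0, 2.0, size=360)   # e.g. observed monthly temperature
gcm = rng.normal(23.5, 3.1, size=360)   # biased model output
corrected = scale_to_obs(gcm, obs)
print(round(corrected.mean(), 2), round(corrected.std(), 2))  # ~25.0, ~2.0
```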
Abstract:
Tetracene is an important conjugated molecule for device applications. We have used the diagrammatic valence bond method to obtain the desired states, in a Hilbert space of about 450 million singlets and 902 million triplets. We have also studied donor/acceptor (D/A)-substituted tetracenes with the D and A groups placed symmetrically about the long axis of the molecule. In these cases, by exploiting a new symmetry, which is a combination of C2 symmetry and electron-hole symmetry, we are able to obtain their low-lying states. In the case of substituted tetracene, we find that the optically allowed one-photon excitation gaps reduce with increasing D/A strength, while the lowest singlet-triplet gap is only weakly affected. In all the systems we have studied, the excited singlet state, S1, lies at more than twice the energy of the lowest triplet state, and the second triplet is very close to the S1 state. Thus, donor-acceptor-substituted tetracene could be a good candidate for photovoltaic device applications, as it satisfies the energy criteria for singlet fission. We have also obtained the model-exact second harmonic generation (SHG) coefficients using the correction vector method, and we find that the SHG responses increase with increasing D/A strength.
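For context, the standard energy criteria for singlet fission that the abstract's statements correspond to (exoergic fission from S1, and suppression of triplet-triplet recombination via T2); these conditions are general background, not quoted from the paper.

```latex
% Energy criteria for singlet fission:
\[
  E(S_1) \;\ge\; 2\,E(T_1), \qquad E(T_2) \;>\; 2\,E(T_1)
\]
```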
Abstract:
We consider free fermion and free boson CFTs in two dimensions, deformed by a chemical potential μ for the spin-three current. For the CFT on the infinite spatial line, we calculate the finite-temperature entanglement entropy of a single interval perturbatively to second order in μ in each of the theories. We find that the result in each case is given by the same non-trivial function of temperature and interval length. Remarkably, we further obtain the same formula using a recent Wilson line proposal for the holographic entanglement entropy, in holomorphically factorized form, associated with the spin-three black hole in SL(3,R) × SL(3,R) Chern-Simons theory. Our result suggests that the order-μ² correction to the entanglement entropy may be universal for W-algebra CFTs with a spin-three chemical potential, and constitutes a check of the holographic entanglement entropy proposal for higher-spin theories of gravity in AdS3.