943 results for deferred correction


Relevance: 10.00%

Abstract:

The Australia Telescope Low-brightness Survey (ATLBS) regions have been mosaic imaged at a radio frequency of 1.4 GHz with 6″ angular resolution and 72 µJy beam⁻¹ rms noise. The images (centered at R.A. 00h35m00s, decl. −67°00′00″ and R.A. 00h59m17s, decl. −67°00′00″, J2000 epoch) cover 8.42 deg² of sky and have no artifacts or imaging errors above the image thermal noise. Multi-resolution radio and optical r-band images (made using the 4 m CTIO Blanco telescope) were used to recognize multi-component sources and prepare a source list; the detection threshold was 0.38 mJy in a low-resolution radio image made with a beam FWHM of 50″. Radio source counts in the flux density range 0.4-8.7 mJy are estimated, with corrections applied for noise bias, effective area, and resolution bias. The resolution bias is mitigated using low-resolution radio images, while the effects of source confusion are removed by using high-resolution images to identify blended sources. Below 1 mJy the ATLBS counts are systematically lower than previous estimates. Showing no evidence for an upturn down to 0.4 mJy, they do not require any changes in the radio source population down to the limit of the survey. The work suggests that automated image analysis for counts may depend on the ability of the imaging to reproduce connecting emission with low surface brightness and on the ability of the algorithm to recognize sources, which may require that source-finding algorithms work effectively with multi-resolution and multi-wavelength data. The work underscores the importance of using source lists, as opposed to component lists, and of correcting for the noise bias in order to precisely estimate counts close to the image noise and determine the upturn at sub-mJy flux density.
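As a rough illustration of the counting procedure described above, here is a minimal sketch that bins catalog flux densities and applies per-bin multiplicative corrections for noise bias, effective area, and resolution bias; the correction factors are treated as external inputs (the survey derives them from simulations and its low-resolution images), and the function name and signature are ours, not the paper's:

```python
import numpy as np

def euclidean_counts(flux_mjy, bins_mjy, area_deg2, corr):
    """Differential source counts with multiplicative corrections.

    corr: per-bin combined correction factor (noise bias x effective
    area x resolution bias), as listed in the abstract.
    """
    n, _ = np.histogram(flux_mjy, bins=bins_mjy)
    widths = np.diff(bins_mjy)
    centers = 0.5 * (bins_mjy[1:] + bins_mjy[:-1])
    dn_ds = corr * n / (widths * area_deg2)   # sources per mJy per deg^2
    return centers, dn_ds * centers**2.5      # Euclidean-normalized counts
```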

Relevance: 10.00%

Abstract:

There are many wireless sensor network (WSN) applications that require reliable data transfer between nodes. Several techniques, including link-level retransmission, error correction methods, and hybrid Automatic Repeat reQuest (ARQ), have been introduced into wireless sensor networks to ensure reliability. In this paper, we use an Automatic reSend request (ASQ) technique with regular acknowledgement to design a reliable end-to-end communication protocol, called the Adaptive Reliable Transport (ARTP) protocol, for WSNs. Besides ensuring reliability, the objective of the ARTP protocol is to provide message-stream FIFO at the receiver side instead of the byte-stream FIFO used in the TCP/IP protocol suite. To realize this objective, a new protocol stack has been used in the ARTP protocol. The ARTP protocol saves energy without affecting throughput by sending three different types of acknowledgements, viz. ACK, NACK, and FNACK, with semantics different from those existing in the literature, and by adapting to network conditions. Additionally, the protocol performs flow control based on the receiver's feedback and congestion control by holding back ACK messages. To the best of our knowledge, there has been little or no attempt to build a receiver-controlled, regularly acknowledged, reliable communication protocol. We have carried out extensive simulation studies of our protocol using the Castalia simulator, and the study shows that our protocol performs better than related protocols in wireless/wireline networks in terms of throughput and energy efficiency.
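The abstract only names the three acknowledgement types, so the receiver-side selection rules sketched below are illustrative assumptions rather than ARTP's actual semantics:

```python
from enum import Enum

class Ack(Enum):
    ACK = 1     # in-order delivery up to the expected message
    NACK = 2    # gap detected; request retransmission of one message
    FNACK = 3   # repeated loss; signal the sender to back off and resend

def choose_ack(expected_seq, received_seqs, nack_count, loss_threshold=3):
    """Pick an acknowledgement for the message stream (illustrative only)."""
    if expected_seq in received_seqs:
        return Ack.ACK
    # Escalate to FNACK after repeated NACKs for the same message.
    return Ack.FNACK if nack_count >= loss_threshold else Ack.NACK
```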

Relevance: 10.00%

Abstract:

The possibility of establishing an accurate relative chronology of early solar system events based on the decay of short-lived ²⁶Al to ²⁶Mg (half-life of 0.72 Myr) depends on the level of homogeneity (or heterogeneity) of ²⁶Al and Mg isotopes. However, this level is difficult to constrain precisely because of the very high precision needed for the determination of isotopic ratios, typically ±5 ppm. In this study, we report for the first time a detailed analytical protocol developed for high-precision in situ Mg isotopic measurements (²⁵Mg/²⁴Mg and ²⁶Mg/²⁴Mg ratios, as well as ²⁶Mg excess) by MC-SIMS. As the data reduction process is critical for both the accuracy and the precision of the final isotopic results, factors such as the Faraday cup (FC) background drift and matrix effects on instrumental fractionation have been investigated. Indeed, these instrumental effects on the measured Mg-isotope ratios can be as large as or larger than the variations we are looking for to constrain the initial distribution of ²⁶Al and Mg isotopes in the early solar system. Our results show that they are definitely limiting factors for the precision of Mg isotopic compositions, and that an under- or over-correction of both FC background instabilities and instrumental isotopic fractionation leads to significant bias in δ²⁵Mg, δ²⁶Mg, and Δ²⁶Mg values (for example, olivines not corrected for FC background drifts display Δ²⁶Mg values that can differ by as much as 10 ppm from the properly corrected value). The new data reduction process described here can then be applied to meteoritic samples (components of chondritic meteorites, for instance) to accurately establish their relative chronology of formation.
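For context, a short sketch of the δ/Δ notation used above; the DSM3 reference ratios and the fractionation exponent β = 0.511 are conventional literature values, not numbers taken from this paper:

```python
R25_DSM3 = 0.12663   # 25Mg/24Mg of the DSM3 standard (literature value)
R26_DSM3 = 0.13932   # 26Mg/24Mg of the DSM3 standard (literature value)
BETA = 0.511         # mass-dependent fractionation exponent (conventional)

def mg_deltas(r25, r26):
    """Return (delta25Mg, delta26Mg, Delta26Mg), all in per mil."""
    d25 = (r25 / R25_DSM3 - 1.0) * 1e3
    d26 = (r26 / R26_DSM3 - 1.0) * 1e3
    # Radiogenic 26Mg excess: deviation from the mass-fractionation line.
    D26 = d26 - ((1.0 + d25 / 1e3) ** (1.0 / BETA) - 1.0) * 1e3
    return d25, d26, D26
```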

Relevance: 10.00%

Abstract:

Erasure codes are an efficient means of storing data across a network in comparison to data replication, as they tend to reduce the amount of data stored in the network and offer increased resilience in the presence of node failures. These codes perform poorly, however, when a failed node must be repaired, as they typically require the entire file to be downloaded to repair a single node. A new class of erasure codes, termed regenerating codes, was recently introduced that does much better in this respect. However, given the variety of efficient erasure codes available in the literature, there is considerable interest in the construction of coding schemes that would enable traditional erasure codes to be used while retaining the feature that only a fraction of the data need be downloaded for node repair. In this paper, we present a simple yet powerful framework that does precisely this. Under this framework, the nodes are partitioned into two types and encoded using two codes in a manner that reduces the problem of node repair to that of erasure decoding of the constituent codes. Depending upon the choice of the two codes, the framework can be used to obtain one or more of the following advantages: simultaneous minimization of storage space and repair bandwidth, low complexity of operation, fewer disk reads at helper nodes during repair, and error detection and correction.
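A toy illustration of the central reduction (node repair recast as erasure decoding), using a single-parity code rather than the paper's two-type construction:

```python
import os

k = 2
fragments = [os.urandom(8) for _ in range(k)]            # data nodes
parity = bytes(a ^ b for a, b in zip(*fragments))        # one parity node
nodes = fragments + [parity]

def repair(nodes, failed):
    """Rebuild a failed node by erasure-decoding the survivors."""
    survivors = [n for i, n in enumerate(nodes) if i != failed]
    # In a single-parity code every node is the XOR of all the others.
    rebuilt = bytes(a ^ b for a, b in zip(*survivors))
    assert rebuilt == nodes[failed]
    return rebuilt

repair(nodes, failed=1)   # recover the second data node
```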

Relevance: 10.00%

Abstract:

Recently, Ebrahimi and Fragouli proposed an algorithm to construct scalar network codes using small fields (and vector network codes of small lengths) satisfying multicast constraints in a given single-source, acyclic network. The contribution of this paper is twofold. Primarily, we extend the scalar network coding algorithm of Ebrahimi and Fragouli (henceforth referred to as the EF algorithm) to block network-error correction. Existing construction algorithms for block network-error correcting codes require a rather large field size, which grows with the size of the network and the number of sinks, and can thereby be prohibitive in large networks. We give an algorithm which, starting from a given network-error correcting code, can obtain another network code using a small field, with the same error correcting capability as the original code. Our secondary contribution is to improve the EF algorithm itself. The major step in the EF algorithm is to find a least-degree irreducible polynomial that is coprime to another large-degree polynomial. We suggest an alternate method to compute this coprime polynomial, which is faster than the brute-force method in the work of Ebrahimi and Fragouli.
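For reference, a sketch of the brute-force baseline that the proposed alternate method improves upon: scan GF(2) polynomials in order of increasing degree and test coprimality with a polynomial gcd. Polynomials are encoded as integers (bit i is the coefficient of x^i); a full implementation would also check each candidate for irreducibility, which is omitted here:

```python
def gf2_mod(a, b):
    """Remainder of a(x) divided by b(x), coefficients over GF(2)."""
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

def gf2_gcd(a, b):
    """Euclidean algorithm on GF(2) polynomials."""
    while b:
        a, b = b, gf2_mod(a, b)
    return a

def least_degree_coprime(p):
    """Smallest-degree polynomial g over GF(2) with gcd(g, p) = 1."""
    deg = 1
    while True:
        for g in range(1 << deg, 1 << (deg + 1)):  # all degree-deg candidates
            if gf2_gcd(g, p) == 1:
                return g
        deg += 1
```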

Relevance: 10.00%

Abstract:

While it is well known that extremely long low-density parity-check (LDPC) codes perform exceptionally well for error correction applications, short-length codes are preferable in practical applications. However, short-length LDPC codes suffer from performance degradation owing to graph-based impairments such as short cycles, trapping sets, and stopping sets in the bipartite graph of the LDPC matrix. In particular, performance degradation at moderate to high E_b/N_0 is caused by oscillations in bit-node a posteriori probabilities induced by short cycles and trapping sets in bipartite graphs. In this study, a computationally efficient algorithm is proposed to improve the performance of short-length LDPC codes at moderate to high E_b/N_0. This algorithm makes use of the information generated by the belief propagation (BP) algorithm in previous iterations before a decoding failure occurs. Using this information, a reliability-based estimation is performed on each bit node to supplement the BP algorithm. The proposed algorithm gives an appreciable coding gain compared with BP decoding for LDPC codes of code rate 1/2 or less. The coding gains are modest to significant in the case of regular LDPC codes optimised for bipartite-graph conditioning, whereas the coding gains are huge in the case of unoptimised codes. Hence, this algorithm is useful for relaxing some stringent constraints on the graphical structure of the LDPC code and for developing hardware-friendly designs.
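The abstract does not spell out the reliability-based estimation, so the fallback below, which averages the posterior LLRs of the last few BP iterations to damp oscillations, is our illustrative stand-in; bp_iteration is a hypothetical helper returning per-bit posterior LLRs:

```python
import numpy as np

def decode(bp_iteration, H, llr_ch, max_iter=50, window=8):
    """BP decoding with an LLR-averaging fallback on decoding failure."""
    history = []
    llr = llr_ch.copy()
    for _ in range(max_iter):
        llr = bp_iteration(H, llr_ch, llr)   # one BP pass (hypothetical helper)
        hard = (llr < 0).astype(np.uint8)
        if not (H @ hard % 2).any():         # all parity checks satisfied
            return hard
        history.append(llr)
    # Failure: re-decide bits from posteriors seen in previous iterations,
    # damping the oscillations caused by short cycles and trapping sets.
    avg = np.mean(history[-window:], axis=0)
    return (avg < 0).astype(np.uint8)
```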

Relevance: 10.00%

Abstract:

We present here an experimental set-up developed for the first time in India for the determination of the mixing ratio and carbon isotopic ratio of air CO₂. The set-up includes traps for collection and extraction of CO₂ from air samples using cryogenic procedures, followed by the measurement of the CO₂ mixing ratio using an MKS Baratron gauge and analysis of isotopic ratios using the dual inlet peripheral of a high-sensitivity isotope ratio mass spectrometer (IRMS) MAT 253. The internal reproducibility (precision) for the δ¹³C measurement, established based on repeat analyses of CO₂, is ±0.03‰. The set-up is calibrated with international carbonate and air-CO₂ standards. An in-house air-CO₂ mixture, 'OASIS AIRMIX', is prepared by mixing CO₂ from a high-purity cylinder with O₂ and N₂, and an aliquot of this mixture is routinely analyzed together with the air samples. The external reproducibilities for the measurement of the CO₂ mixing ratio and the carbon isotopic ratio are ±7 µmol mol⁻¹ (n = 169) and ±0.05‰ (n = 169), respectively, based on the mean of the difference between two aliquots of the reference air mixture analyzed during daily operation from November 2009 to December 2011. The correction due to the isobaric interference of N₂O on air-CO₂ samples is determined separately by analyzing mixtures of CO₂ (of known isotopic composition) and N₂O in varying proportions. A +0.2‰ correction in the δ¹³C value for an N₂O concentration of 329 ppb is determined. As an application, we present results from an experiment conducted during the solar eclipse of 2010. The isotopic ratio in CO₂ and the carbon dioxide mixing ratio in the air samples collected during the event are different from neighbouring samples, suggesting the role of atmospheric inversion in trapping the emitted CO₂ from the urban atmosphere during the eclipse.
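Applying the quoted isobaric correction is simple arithmetic; the sketch below assumes (our assumption, not stated in the abstract) that the shift scales linearly with the N₂O mole fraction:

```python
REF_SHIFT_PERMIL = 0.2   # +0.2 per mil delta13C shift, from the abstract
REF_N2O_PPB = 329.0      # N2O concentration at which that shift was found

def n2o_corrected_delta13c(measured_delta13c, n2o_ppb):
    """Add the N2O isobaric correction, scaled linearly (assumption)."""
    return measured_delta13c + REF_SHIFT_PERMIL * (n2o_ppb / REF_N2O_PPB)

print(n2o_corrected_delta13c(-8.50, 329.0))   # -> -8.30
```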

Relevance: 10.00%

Abstract:

Hydrogen-bonded complexes formed between square-pyramidal Fe(CO)₅ and HX (X = F, Cl, Br), showing X–H···Fe interactions, have been investigated theoretically using density functional theory (DFT) including dispersion correction. The geometry, the interaction energy, and a large red shift of about 400 cm⁻¹ in the HX stretching frequency confirm X–H···Fe hydrogen bond formation. In the (CO)₅Fe···HBr complex, following the significant red shift, the HBr stretching mode is coupled with the carbonyl stretching modes. This clearly affects the correlation between frequency shift and binding energy, which is a hallmark of hydrogen bonds. Atoms in Molecules (AIM) theoretical analyses show the presence of a bond critical point between the iron and the hydrogen of HX, and significant mutual penetration. These X–H···Fe hydrogen bonds follow most but not all of the eight criteria proposed by Koch and Popelier (J. Phys. Chem. 1995, 99, 9747) based on their investigations of C–H···O hydrogen bonds. Natural bond orbital (NBO) analysis indicates charge transfer from the organometallic system to the hydrogen bond donor. However, there is no correlation between the extent of charge transfer and the interaction energy, contrary to what is proposed in the recent IUPAC recommendation (Pure Appl. Chem. 2011, 83, 1637). The "hydrogen bond radius" for iron has been determined to be 1.60 ± 0.02 Å, and not surprisingly it lies between the covalent (1.27 Å) and van der Waals (2.0 Å) radii of Fe. DFT and AIM studies reveal that Fe in square-pyramidal Fe(CO)₅ can also form a halogen bond with ClF and ClH as "halogen bond donors". Both these complexes show mutual penetration as well, though the Fe···Cl distance is close to the sum of the van der Waals radii of Fe and Cl in (CO)₅Fe···ClH, and it is about 1 Å less in (CO)₅Fe···ClF.

Relevance: 10.00%

Abstract:

The ultimate bearing capacity of strip foundations in the presence of inclined groundwater flow, considering both upward and downward flow directions, has been determined by using lower-bound finite-element limit analysis. A numerical solution has been generated for both smooth and rough footings placed on frictional soils. A correction factor (fγ), which needs to be multiplied with the Nγ term, has been computed to account for groundwater seepage. The variation of fγ has been obtained as a function of the hydraulic gradient (i) for various inclinations of groundwater flow. For a given magnitude of i, there exists a certain critical inclination of the flow for which the value of fγ is minimized. With upward flow, for all flow inclinations, the magnitude of fγ always reduces with an increase in the value of i. An example has also been provided to illustrate the application of the obtained results when designing foundations in the presence of groundwater seepage.
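To show where fγ enters, the conventional bearing-capacity expression with the seepage factor attached to the Nγ term is given below; the cohesion and surcharge terms follow the standard Terzaghi form and are our assumption about the full expression used:

```latex
% Standard Terzaghi-type expression (assumed), with the seepage
% correction factor f_gamma applied to the N_gamma term only:
q_u = c\,N_c + q\,N_q + \tfrac{1}{2}\,\gamma\,B\,N_\gamma\,f_\gamma(i)
```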

Relevance: 10.00%

Abstract:

The Girsanov linearization method (GLM), proposed earlier in Saha, N., and Roy, D., 2007, "The Girsanov Linearisation Method for Stochastically Driven Nonlinear Oscillators," J. Appl. Mech., 74, pp. 885-897, is reformulated to arrive at a nearly exact, semianalytical, weak, and explicit scheme for nonlinear mechanical oscillators under additive stochastic excitations. At the heart of the reformulated linearization is a temporally localized rejection sampling strategy that, combined with a resampling scheme, enables selecting from and appropriately modifying an ensemble of locally linearized trajectories while weakly applying the Girsanov correction (the Radon-Nikodym derivative) for the linearization errors. The semianalyticity is due to an explicit linearization of the nonlinear drift terms, and it plays a crucial role in keeping the Radon-Nikodym derivative "nearly bounded" above by the inverse of the linearization time step (meaning that only a subset of linearized trajectories with low, yet finite, probability exceeds this bound). Drift linearization is conveniently accomplished via the first few (lower-order) terms in the associated stochastic (Itô) Taylor expansion, so as to exclude (multiple) stochastic integrals from the numerical treatment. Similarly, the Radon-Nikodym derivative, which is a strictly positive, exponential (super-)martingale, is converted to a canonical form and evaluated over each time step without directly computing the stochastic integrals appearing in its argument. Through their numerical implementations for a few low-dimensional nonlinear oscillators, the proposed variants of the scheme, presently referred to as the Girsanov corrected linearization method (GCLM), are shown to exhibit remarkably higher numerical accuracy over a much larger range of time step sizes than is possible with local drift-linearization schemes on their own.
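A bare-bones illustration of the Girsanov correction at the heart of the method, for a scalar SDE dX = f(X) dt + σ dW: propagate the linearized drift and carry the Radon-Nikodym weight that compensates, in the weak sense, for the drift mismatch. This is a plain Euler-Maruyama sketch of the principle, not the paper's rejection-sampling and resampling scheme:

```python
import numpy as np

def girsanov_weighted_step(x, w, f, f_lin, sigma, dt, rng):
    """One weak step under the linearized drift, with Girsanov reweighting."""
    dW = rng.normal(0.0, np.sqrt(dt), size=x.shape)
    x_new = x + f_lin(x) * dt + sigma * dW          # linearized dynamics
    u = (f(x) - f_lin(x)) / sigma                   # drift mismatch
    w_new = w * np.exp(u * dW - 0.5 * u**2 * dt)    # Radon-Nikodym increment
    return x_new, w_new

# Averaging w * phi(x) over an ensemble of such trajectories estimates
# E[phi(X_t)] under the true (nonlinear) dynamics.
```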

Relevance: 10.00%

Abstract:

Imaging thick specimens at a large penetration depth is a challenge in biophysics and material science. Refractive index mismatch results in spherical aberration that is responsible for streaking artifacts, while the Poissonian nature of photon emission and scattering introduces noise in the acquired three-dimensional image. To overcome these unwanted artifacts, we introduce a two-fold approach: first, point-spread function modeling with correction for spherical aberration and, second, a maximum-likelihood reconstruction technique to eliminate noise. Experimental results on fluorescent nano-beads and fluorescently coated yeast cells (encaged in agarose gel) show substantial minimization of artifacts. The noise is substantially suppressed, whereas the side lobes (generated by the streaking effect) drop by 48.6% compared to the raw data at a depth of 150 µm. The proposed imaging technique can be integrated into sophisticated fluorescence imaging techniques for rendering high resolution beyond the 150 µm mark. (C) 2013 AIP Publishing LLC.
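The maximum-likelihood reconstruction named above, in its generic Richardson-Lucy form for Poisson noise, is sketched below; the aberration-corrected PSF would come from the authors' PSF model and is treated here as a given input:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=50, eps=1e-12):
    """Maximum-likelihood (Richardson-Lucy) deconvolution, 2-D sketch."""
    image = np.asarray(image, dtype=float)
    estimate = np.full_like(image, image.mean())
    psf_mirror = psf[::-1, ::-1]                    # adjoint of the blur
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + eps)             # data / model
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```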

Relevance: 10.00%

Abstract:

An experimental study of a small partial admission axial turbine with low-aspect-ratio blades has been carried out. Tests were also performed with a full admission stator replacing the partial one for the same rotor, to assess the losses occurring due to partial admission. Further tests were conducted with the stator admission area split into two and three sectors to study the effects of multiple admission sectors. The method of Ainley and Mathieson, with a suitable correction for aspect ratio in the secondary losses as proposed by Kacker and Okapuu, gives a good estimate of the efficiency. Estimates of partial admission losses are made and compared with experimentally observed values. The Suter and Traupel correlations for partial admission losses yielded reasonably accurate estimates of efficiency even for small turbines, though limited to the region of the design u/c_is. Stenning's original concept of expansion losses in a single sector is extended to include multiple sectors of opening. The computed efficiency debit due to each additional sector opened is compared with test values, and the agreement is observed to be good. This verifies Stenning's original concept of expansion losses. When the expression developed from this extended concept is modified by a correction factor, the prediction of partial admission efficiencies is nearly as good as that of Suter and Traupel. Further, performance benefits accrue if the turbine is configured with an increased aspect ratio at the expense of reduced partial admission.