142 results for correction



Abstract:

The effect of natural convection on the oscillatory flow in an open-ended pipe, driven by a timewise sinusoidally varying pressure at one end and subjected to an ambient-to-cryogenic temperature difference across the ends, is studied numerically. Conjugate effects arising from the interaction of the oscillatory flow with heat conduction in the pipe wall are taken into account by considering a finite-thickness wall with an insulated exterior surface. Two cases are considered: one with natural convection acting downwards and the other with natural convection acting upwards. The full set of axisymmetric compressible flow equations is solved using a pressure-correction algorithm. Parametric studies are conducted for frequencies in the range 5-15 Hz and end-to-end temperature differences of 200 K and 50 K. Results are obtained for the variation of velocity, temperature, Nusselt number, and the phase relationship between mass flow rate and temperature. It is found that the Rayleigh number has a minimal effect on the time-averaged Nusselt number and phase angle; however, it does influence the local variation of velocity and Nusselt number over one cycle. Natural convection and the pressure amplitude influence the energy flow through the gas and the solid. (C) 2011 Elsevier Ltd. All rights reserved.
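The phase relationship quoted above can be illustrated with a simple post-processing step (not the authors' solver; the signals below are synthetic placeholders): estimate the phase angle between mass flow rate and temperature from their Fourier components at the driving frequency.

```python
import numpy as np

def phase_angle_deg(mdot, temp, dt, f_drive):
    """Phase of `temp` relative to `mdot` (degrees) at f_drive; negative means temp lags.

    mdot, temp : sampled mass-flow-rate and temperature signals (hypothetical data)
    dt         : sampling interval in seconds
    f_drive    : driving frequency in Hz (e.g. 5-15 Hz as in the study)
    """
    n = len(mdot)
    freqs = np.fft.rfftfreq(n, dt)
    k = np.argmin(np.abs(freqs - f_drive))        # bin closest to the driving frequency
    phase = np.angle(np.fft.rfft(temp)[k]) - np.angle(np.fft.rfft(mdot)[k])
    return np.degrees((phase + np.pi) % (2 * np.pi) - np.pi)  # wrap to (-180, 180]

# Example with synthetic 10 Hz signals, temperature lagging by 30 degrees
t = np.arange(0, 2.0, 1e-3)
print(phase_angle_deg(np.sin(2*np.pi*10*t), np.sin(2*np.pi*10*t - np.radians(30)), 1e-3, 10.0))
```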


Abstract:

The silicate weathering rate (SWR) and the associated carbon dioxide consumption rate (CCR) in tropical silicate terrain are assessed through a study of the major-ion chemistry of a small west-flowing river of Peninsular India, the Nethravati River. The specific features of the river basin are high mean annual rainfall and temperature, high runoff, and a Precambrian basement composed of granitic gneiss, charnockite, and minor metasediments. Water samples (n = 56) were collected from three locations along the Nethravati River and from two of its tributaries over a period of twelve months. The chemical weathering rate (CWR) for the entire watershed is calculated by applying a rainwater correction using river chloride as a tracer. The CWR in the Nethravati watershed is estimated at 44 t km⁻² y⁻¹, encompassing a SWR of 42 t km⁻² y⁻¹ and a maximum carbonate contribution of 2 t km⁻² y⁻¹. This SWR is among the highest reported for granito-gneissic terrains. The assessed CCR is 2.9 × 10⁵ mol km⁻² y⁻¹. The weathering index (Re), calculated from molecular ratios of dissolved cations and silica in the river, suggests intense silicate weathering leading to kaolinite-gibbsite precipitation in the weathering covers. The intense SWR and CCR could be due to the combination of high runoff and temperature along with the thickness and nature of the weathering cover. The comparison of silicate weathering fluxes with other watersheds reveals that, under similar morpho-climatic settings, basalt weathering would be 2.5 times higher than that of granito-gneissic rocks. (C) 2012 Elsevier B.V. All rights reserved.
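The rainwater correction mentioned above is conventionally done by treating river chloride as entirely atmospheric and scaling the other ions by rainwater X/Cl ratios; a minimal sketch with illustrative ratios and concentrations (not the Nethravati data):

```python
# Cyclic-salt (rainwater) correction using chloride as the atmospheric tracer:
# X_silicate = X_river - Cl_river * (X/Cl)_rain, all in micro-equivalents per litre.
# The numbers below are illustrative placeholders, not the measured values of the paper.

rain_ratio_to_cl = {"Na": 0.85, "Mg": 0.20, "Ca": 0.04, "K": 0.02}  # assumed marine-rain X/Cl ratios

def rain_corrected(river_conc, cl_river):
    """Subtract the atmospheric (rain-derived) contribution from river-water ion concentrations."""
    return {ion: conc - cl_river * rain_ratio_to_cl.get(ion, 0.0)
            for ion, conc in river_conc.items()}

print(rain_corrected({"Na": 180.0, "Mg": 90.0, "Ca": 120.0, "K": 25.0}, cl_river=60.0))
```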


Abstract:

We have investigated the quadratic nonlinearity (beta(HRS)) and the linear and circular depolarization ratios (D and D', respectively) of a series of 1:1 complexes of tropylium tetrafluoroborate as the cation and methyl-substituted benzenes as pi-donors by making polarization-resolved hyper-Rayleigh scattering measurements in solution. The measured D and D' values are much lower than the values expected from a typical sandwich or T-shaped geometry of a complex. In the cation-pi complexes studied here, the D value varies from 1.36 to 1.46 and D' from 1.62 to 1.72, depending on the number of methyl substitutions on the benzene ring. To probe this further, beta, D, and D' were computed using the Zerner intermediate neglect of differential overlap-correction vector self-consistent reaction field technique, including single and double configuration interactions, in the absence and presence of the BF4- anion. In the absence of the anion, the calculated value of D varies from 4.20 to 4.60 and that of D' from 2.45 to 2.72, which disagrees with the experimental values. However, by arranging three cation-pi BF4- complexes in a trigonal symmetry, the computed values are brought into agreement with the experiments. When such an arrangement was not considered, the calculated beta values were lower than the experimental values by more than a factor of two. This unprecedented influence of the otherwise "unimportant" anion in solution on the beta value and depolarization ratios of these cation-pi complexes is highlighted and emphasized in this paper. (C) 2012 American Institute of Physics. http://dx.doi.org/10.1063/1.4716020


Abstract:

Diffuse optical tomography (DOT) is one of the ways to probe highly scattering media such as tissue, using low-energy near-infrared (NIR) light to reconstruct a map of the optical property distribution. The interaction of photons with biological tissue is a non-linear process, and the photon transport through the tissue is modelled using diffusion theory. The inverse problem is often solved through iterative methods based on non-linear optimization for the minimization of a data-model misfit function. The solution of the non-linear problem can be improved by modelling and optimizing the cost functional. The cost functional is f(x) = x^T A x - b^T x + c, which after minimization reduces to the linear system Ax = b. The spatial distribution of the optical parameters can be obtained by solving this equation iteratively for x. As the problem is non-linear, ill-posed and ill-conditioned, there is an error, or correction, term for x at each iteration. A linearization strategy is proposed for the solution of the non-linear, ill-posed inverse problem by a linear combination of the system matrix and the error in the solution. By propagating the error information e (obtained from the previous iteration) into the minimization function f(x), the minimization function can be rewritten as f(x; e) = (x + e)^T A (x + e) - b^T (x + e) + c. The revised cost functional is f(x; e) = f(x) + e^T A e. This self-guided, spatially weighted prior e^T A e (e being the error in estimating x), applied along the principal nodes, facilitates a well-resolved dominant solution over the region of interest. The local minimization reduces the spreading of the inclusion and removes the side lobes, thereby improving the contrast, localization and resolution of the reconstructed image, which has not been possible with conventional linear and regularization algorithms.
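For orientation only, a single inner step of such an iterative minimization is commonly implemented as a regularized linear solve of the Gauss-Newton type; the sketch below is generic (the Jacobian J, residual, and regularization parameter are placeholders, and the authors' specific error-propagation scheme is not reproduced):

```python
import numpy as np

def gauss_newton_step(J, residual, lam):
    """One regularized update dx solving (J^T J + lam*I) dx = J^T residual.

    J        : Jacobian of the forward model at the current estimate (placeholder)
    residual : measured data minus modelled data, y - F(x)
    lam      : Tikhonov regularization parameter (the problem is ill-posed and ill-conditioned)
    """
    A = J.T @ J + lam * np.eye(J.shape[1])   # plays the role of the system matrix A in the quadratic cost above
    b = J.T @ residual                       # plays the role of b
    return np.linalg.solve(A, b)

# Toy usage with random placeholder data
rng = np.random.default_rng(0)
J = rng.standard_normal((50, 20)); residual = rng.standard_normal(50)
dx = gauss_newton_step(J, residual, lam=1e-2)
```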


Abstract:

Artificial viscosity in SPH-based computations of impact dynamics is a numerical artifice that helps stabilize spurious oscillations near shock fronts and requires certain user-defined parameters. An improper choice of these parameters may lead to spurious entropy generation within the discretized system and make it over-dissipative. This is of particular concern in impact mechanics problems, wherein the transient structural response may depend sensitively on the transfer of momentum and kinetic energy due to impact. In order to address this difficulty, an acceleration correction algorithm was proposed in Shaw and Reid ("Heuristic acceleration correction algorithm for use in SPH computations in impact mechanics", Comput. Methods Appl. Mech. Engrg., 198, 3962-3974) and further rationalized in Shaw et al. ("An Optimally Corrected Form of Acceleration Correction Algorithm within SPH-based Simulations of Solid Mechanics", submitted to Comput. Methods Appl. Mech. Engrg.). It was shown that the acceleration correction algorithm removes spurious high-frequency oscillations in the computed response whilst retaining the stabilizing characteristics of the artificial viscosity in the presence of shocks and layers with sharp gradients. In this paper, we aim at gathering further insights into the acceleration correction algorithm by exploring its application to problems related to impact dynamics. The numerical evidence in this work thus establishes that, together with the acceleration correction algorithm, SPH can be used as an accurate and efficient tool in dynamic, inelastic structural mechanics. (C) 2011 Elsevier Ltd. All rights reserved.
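For context, the artificial viscosity in question is usually the standard Monaghan pairwise term with the user-defined parameters alpha and beta; a minimal sketch of that term follows (the acceleration correction algorithm of Shaw and Reid itself is not reproduced here):

```python
import numpy as np

def monaghan_pi_ab(v_ab, r_ab, rho_a, rho_b, c_a, c_b, h, alpha=1.0, beta=2.0, eps=0.01):
    """Standard Monaghan artificial viscosity term Pi_ab between SPH particles a and b.

    v_ab, r_ab  : relative velocity and position vectors (numpy arrays)
    alpha, beta : the user-defined parameters whose improper choice causes over-dissipation
    """
    vr = np.dot(v_ab, r_ab)
    if vr >= 0.0:                       # particles receding: no viscous term
        return 0.0
    mu = h * vr / (np.dot(r_ab, r_ab) + eps * h * h)
    c_bar = 0.5 * (c_a + c_b)           # mean sound speed
    rho_bar = 0.5 * (rho_a + rho_b)     # mean density
    return (-alpha * c_bar * mu + beta * mu * mu) / rho_bar
```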


Abstract:

This paper presents the thermal vibration analysis of orthotropic nanoplates such as graphene, using the two-variable refined plate theory and nonlocal continuum mechanics to capture small-scale effects. The nanoplate is modelled on the basis of the two-variable refined plate theory, and the axial stress caused by thermal effects is also considered. The two-variable refined plate theory accounts for transverse shear effects and a parabolic distribution of the transverse shear strains through the thickness of the plate; hence, shear correction factors are unnecessary. Nonlocal governing equations of motion for the nanoplate are derived from the principle of virtual displacements. A closed-form solution for the thermal vibration frequencies of a simply supported rectangular nanoplate is obtained using Navier's method of solution. Numerical results obtained with the present theory are compared with available solutions in the literature and with molecular dynamics results. The influences of the small-scale coefficient, room or low temperature, high temperature, the half-wave number, and the aspect ratio of the nanoplate on the natural frequencies are considered and discussed in detail. It can be concluded that the present theory, which does not require a shear correction factor, is not only simple but also comparable in accuracy to first-order and higher-order shear deformation theories. The present analysis results can be used for the design of the next generation of nanodevices that make use of the thermal vibration properties of nanoplates. (C) 2012 Elsevier B.V. All rights reserved.
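As a much simpler point of reference (classical, local Kirchhoff plate theory for an isotropic plate under a uniform biaxial in-plane thermal force, not the nonlocal two-variable refined theory of the paper), Navier's method gives closed-form frequencies for a simply supported rectangular plate:

```python
import numpy as np

def navier_frequency(m, n, a, b, D, rho_h, N_T=0.0):
    """Natural frequency (rad/s) of mode (m, n) of a simply supported classical plate.

    D     : flexural rigidity E*h**3 / (12*(1 - nu**2))
    rho_h : mass per unit area (density * thickness)
    N_T   : uniform compressive in-plane force per unit length from thermal stress (assumed biaxial)
    """
    k2 = (m * np.pi / a) ** 2 + (n * np.pi / b) ** 2
    omega2 = (D * k2 ** 2 - N_T * k2) / rho_h   # a compressive thermal force lowers the frequency
    return np.sqrt(omega2)

# Example: fundamental mode of a 10 nm x 10 nm plate-like sheet (all inputs hypothetical)
print(navier_frequency(1, 1, 10e-9, 10e-9, D=1.6e-19, rho_h=7.6e-7))
```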


Abstract:

This paper deals with the role of the higher-order evanescent modes generated at the area discontinuities in the acoustic attenuation characteristics of an elliptical end-chamber muffler with an end-offset inlet and an end-centered outlet. It has been observed that, with an increase in length, the muffler undergoes a transition from being acoustically short to acoustically long. Short end chambers and long end chambers are characterized by transverse plane waves and axial plane waves, respectively, in the low-frequency range. The non-dimensional frequency limit k0(D1/2) or k0R0, as well as the chamber-length to inlet/outlet-pipe-diameter ratio L/d0, up to which the muffler behaves like a short chamber, and the corresponding limit beyond which the muffler is acoustically long, are determined. The limits between which neither the transverse plane-wave model nor the conventional axial plane-wave model gives a satisfactory prediction have also been determined, this region being called the intermediate range. The end-correction expression for this muffler configuration in the acoustically long limit has been obtained using 3-D FEA carried out on commercial software, covering most of the dimension range used in the design exercise. The development of a method of combining the transverse plane-wave model with the axial plane-wave model using the impedance [Z] matrix is another noteworthy contribution of this work.
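For orientation, the conventional axial plane-wave model mentioned above gives a classical closed-form transmission loss for an idealised simple expansion chamber; the sketch below uses that textbook result and does not capture end corrections, the offset inlet, or the transverse-wave regime discussed in the paper:

```python
import numpy as np

def expansion_chamber_tl(f, L, d_chamber, d_pipe, c=343.0):
    """Axial plane-wave transmission loss (dB) of a simple expansion chamber.

    f : frequency array in Hz; L : chamber length (m)
    d_chamber, d_pipe : chamber and inlet/outlet pipe diameters (m); c : speed of sound (m/s)
    """
    h = (d_chamber / d_pipe) ** 2          # area expansion ratio
    k0 = 2.0 * np.pi * np.asarray(f) / c   # wavenumber
    return 10.0 * np.log10(1.0 + 0.25 * (h - 1.0 / h) ** 2 * np.sin(k0 * L) ** 2)

# Example: 250 mm long chamber, 150 mm chamber diameter, 40 mm pipes (illustrative dimensions)
print(expansion_chamber_tl(np.array([125.0, 250.0, 500.0]), 0.25, 0.15, 0.04))
```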


Abstract:

The Australia Telescope Low-brightness Survey (ATLBS) regions have been mosaic imaged at a radio frequency of 1.4 GHz with 6″ angular resolution and 72 μJy beam⁻¹ rms noise. The images (centered at R.A. 00h35m00s, decl. −67°00′00″ and R.A. 00h59m17s, decl. −67°00′00″, J2000 epoch) cover 8.42 deg² of sky area and have no artifacts or imaging errors above the image thermal noise. Multi-resolution radio and optical r-band images (made using the 4 m CTIO Blanco telescope) were used to recognize multi-component sources and prepare a source list; the detection threshold was 0.38 mJy in a low-resolution radio image made with a beam FWHM of 50″. Radio source counts in the flux density range 0.4-8.7 mJy are estimated, with corrections applied for noise bias, effective area, and resolution bias. The resolution bias is mitigated using low-resolution radio images, while the effects of source confusion are removed by using high-resolution images to identify blended sources. Below 1 mJy the ATLBS counts are systematically lower than previous estimates. Showing no evidence for an upturn down to 0.4 mJy, they do not require any changes in the radio source population down to the limit of the survey. The work suggests that automated image analysis for counts may depend on the ability of the imaging to reproduce connecting emission with low surface brightness and on the ability of the algorithm to recognize sources, which may require that source-finding algorithms effectively work with multi-resolution and multi-wavelength data. The work underscores the importance of using source lists, as opposed to component lists, and of correcting for the noise bias in order to precisely estimate counts close to the image noise and determine the upturn at sub-mJy flux density.
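As a schematic illustration of how such counts are assembled (the noise-bias, effective-area, and resolution-bias corrections enter here only as placeholder per-source weights), the Euclidean-normalised differential counts S^2.5 dN/dS can be computed from a source list as follows:

```python
import numpy as np

def euclidean_counts(flux_mjy, area_sr, edges_mjy, weights=None):
    """Euclidean-normalised differential counts S^2.5 dN/dS (Jy^1.5 sr^-1) in flux-density bins.

    flux_mjy : source flux densities in mJy (from the source list, not the component list)
    area_sr  : surveyed sky area in steradians
    weights  : per-source correction factors (noise bias, effective area, resolution bias); 1 if None
    """
    s = np.asarray(flux_mjy) / 1e3                    # mJy -> Jy
    edges = np.asarray(edges_mjy) / 1e3
    w = np.ones_like(s) if weights is None else np.asarray(weights)
    counts, _ = np.histogram(s, bins=edges, weights=w)
    s_mid = np.sqrt(edges[:-1] * edges[1:])           # geometric bin centres
    dnds = counts / (np.diff(edges) * area_sr)        # dN/dS per steradian
    return s_mid, s_mid ** 2.5 * dnds

# Toy usage with fabricated fluxes between 0.4 and 8.7 mJy over 8.42 deg^2
rng = np.random.default_rng(1)
print(euclidean_counts(rng.uniform(0.4, 8.7, 500), area_sr=8.42 * (np.pi / 180) ** 2,
                       edges_mjy=np.array([0.4, 0.8, 1.6, 3.2, 8.7])))
```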


Abstract:

There are many wireless sensor network (WSN) applications that require reliable data transfer between nodes. Several techniques, including link-level retransmission, error correction methods, and hybrid Automatic Repeat reQuest (ARQ), have been introduced into wireless sensor networks to ensure reliability. In this paper, we use an Automatic reSend request (ASQ) technique with regular acknowledgement to design a reliable end-to-end communication protocol, called the Adaptive Reliable Transport (ARTP) protocol, for WSNs. Besides ensuring reliability, the objective of the ARTP protocol is to provide a message-stream FIFO at the receiver side instead of the byte-stream FIFO used in the TCP/IP protocol suite. To realize this objective, a new protocol stack is used in the ARTP protocol. The ARTP protocol saves energy without affecting throughput by sending three different types of acknowledgements, viz. ACK, NACK and FNACK, with semantics different from those existing in the literature, and by adapting to the network conditions. Additionally, the protocol controls flow based on the receiver's feedback and controls congestion by holding ACK messages. To the best of our knowledge, there has been little or no attempt to build a receiver-controlled, regularly acknowledged, reliable communication protocol. We have carried out extensive simulation studies of our protocol using the Castalia simulator, and the study shows that our protocol performs better than related protocols in wireless/wireline networks in terms of throughput and energy efficiency.
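Since the exact ACK/NACK/FNACK semantics are defined in the paper rather than in this abstract, the following is only a hypothetical sketch of the receiver-side idea: buffer out-of-order messages, deliver them in message-stream FIFO order, and acknowledge periodically rather than per packet:

```python
class FifoReceiver:
    """Hypothetical receiver-side buffer illustrating message-stream FIFO delivery with periodic ACKs."""

    def __init__(self, ack_every=8):
        self.expected = 0          # next in-order message sequence number
        self.buffer = {}           # out-of-order messages awaiting delivery
        self.ack_every = ack_every

    def on_message(self, seq, payload, deliver):
        if seq >= self.expected:
            self.buffer[seq] = payload
        while self.expected in self.buffer:                # deliver any in-order run
            deliver(self.buffer.pop(self.expected))
            self.expected += 1
        if self.expected and self.expected % self.ack_every == 0:
            return ("ACK", self.expected)                  # cumulative, sent only periodically
        if self.buffer and min(self.buffer) > self.expected:
            return ("NACK", self.expected)                 # a gap exists: ask for the missing message
        return None

rx = FifoReceiver()
print(rx.on_message(1, "b", deliver=print), rx.on_message(0, "a", deliver=print))
```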


Abstract:

The possibility of establishing an accurate relative chronology of early solar system events based on the decay of short-lived 26Al to 26Mg (half-life of 0.72 Myr) depends on the level of homogeneity (or heterogeneity) of 26Al and Mg isotopes. However, this level is difficult to constrain precisely because of the very high precision needed for the determination of the isotopic ratios, typically of +/- 5 ppm. In this study, we report for the first time a detailed analytical protocol developed for high-precision in situ Mg isotopic measurements (25Mg/24Mg and 26Mg/24Mg ratios, as well as 26Mg excess) by MC-SIMS. As the data reduction process is critical for both the accuracy and the precision of the final isotopic results, factors such as the Faraday cup (FC) background drift and matrix effects on instrumental fractionation have been investigated. Indeed, these instrumental effects on the measured Mg isotope ratios can be as large as or larger than the variations we are looking for to constrain the initial distribution of 26Al and Mg isotopes in the early solar system. Our results show that they definitely are limiting factors for the precision of Mg isotopic compositions, and that an under- or over-correction of both FC background instabilities and instrumental isotopic fractionation leads to a significant bias in δ25Mg, δ26Mg and Δ26Mg values (for example, olivines not corrected for FC background drifts display Δ26Mg values that can differ by as much as 10 ppm from the truly corrected value). The new data reduction process described here can then be applied to meteoritic samples (components of chondritic meteorites, for instance) to accurately establish their relative chronology of formation.
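For reference, the 26Mg excess quoted in ppm is conventionally obtained by correcting the measured 26Mg/24Mg ratio for mass-dependent fractionation using the measured 25Mg/24Mg ratio; a minimal sketch of that step follows (exponential-law correction with an assumed exponent of 0.511 and the usual terrestrial reference ratios; the FC background and matrix-effect corrections of the paper are not included):

```python
def delta(ratio_measured, ratio_reference):
    """Delta value in permil relative to the reference ratio."""
    return (ratio_measured / ratio_reference - 1.0) * 1000.0

def mg26_excess_ppm(r25, r26, r25_ref=0.12663, r26_ref=0.13932, beta=0.511):
    """26Mg excess (ppm) after exponential-law mass-fractionation correction via 25Mg/24Mg.

    r25, r26 : measured 25Mg/24Mg and 26Mg/24Mg ratios (background- and matrix-corrected upstream)
    beta     : exponent of the mass-fractionation law (assumed value)
    """
    d25 = delta(r25, r25_ref)
    d26 = delta(r26, r26_ref)
    d26_expected = ((1.0 + d25 / 1000.0) ** (1.0 / beta) - 1.0) * 1000.0   # purely mass-dependent part
    return (d26 - d26_expected) * 1000.0                                   # permil -> ppm

# Illustrative (fabricated) ratios showing a small radiogenic 26Mg excess
print(mg26_excess_ppm(0.126680, 0.139435))
```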


Abstract:

Erasure codes are an efficient means of storing data across a network in comparison to data replication, as they tend to reduce the amount of data stored in the network and offer increased resilience in the presence of node failures. These codes perform poorly, though, when repair of a failed node is called for, as they typically require the entire file to be downloaded to repair a failed node. A new class of erasure codes, termed regenerating codes, was recently introduced that does much better in this respect. However, given the variety of efficient erasure codes available in the literature, there is considerable interest in the construction of coding schemes that would enable traditional erasure codes to be used while retaining the feature that only a fraction of the data need be downloaded for node repair. In this paper, we present a simple yet powerful framework that does precisely this. Under this framework, the nodes are partitioned into two types and encoded using two codes in a manner that reduces the problem of node repair to that of erasure decoding of the constituent codes. Depending upon the choice of the two codes, the framework can be used to obtain one or more of the following advantages: simultaneous minimization of storage space and repair bandwidth, low complexity of operation, fewer disk reads at helper nodes during repair, and error detection and correction.
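To make the repair-bandwidth point concrete, the sketch below compares the download needed to repair one node under a plain (n, k) MDS erasure code with the minimum-storage regenerating (MSR) point of the storage-bandwidth tradeoff; these are standard figures from the regenerating-codes literature, not the specific framework of this paper:

```python
def repair_download(B, k, d):
    """Bytes downloaded to repair one node: plain MDS vs minimum-storage regenerating (MSR) point.

    B : file size in bytes, k : nodes needed to recover the file, d : helper nodes contacted (MSR only)
    """
    mds = float(B)                           # classical MDS repair downloads the whole file
    msr = d * B / (k * (d - k + 1.0))        # MSR repair bandwidth from the storage-bandwidth tradeoff
    return mds, msr

# Example: 1 GB file, k = 10, d = 14 helpers
print(repair_download(1e9, 10, 14))   # -> (1e9, 2.8e8): MSR downloads about 28% of the file
```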


Abstract:

Recently, Ebrahimi and Fragouli proposed an algorithm to construct scalar network codes using small fields (and vector network codes of small lengths) satisfying multicast constraints in a given single-source, acyclic network. The contribution of this paper is twofold. Primarily, we extend the scalar network coding algorithm of Ebrahimi and Fragouli (henceforth referred to as the EF algorithm) to block network-error correction. Existing construction algorithms for block network-error correcting codes require a rather large field size, which grows with the size of the network and the number of sinks, and can thereby be prohibitive in large networks. We give an algorithm which, starting from a given network-error correcting code, can obtain another network code using a small field, with the same error-correcting capability as the original code. Our secondary contribution is to improve the EF algorithm itself. The major step in the EF algorithm is to find a least-degree irreducible polynomial that is coprime to another, large-degree polynomial. We suggest an alternative method to compute this coprime polynomial, which is faster than the brute-force method in the work of Ebrahimi and Fragouli.
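The brute-force baseline that this step improves upon can be sketched as follows: enumerate binary polynomials by increasing degree, keep the irreducible ones, and return the first that is coprime to the given polynomial. Polynomials over GF(2) are represented as integer bit masks; this is an illustrative sketch, not the authors' faster method:

```python
def gf2_mod(a, b):
    """Remainder of polynomial a modulo b over GF(2); polynomials are integer bit masks."""
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

def gf2_gcd(a, b):
    while b:
        a, b = b, gf2_mod(a, b)
    return a

def is_irreducible(p):
    """Trial division by every polynomial of degree 1 .. deg(p)//2."""
    dp = p.bit_length() - 1
    return dp >= 1 and all(gf2_mod(p, q) != 0 for q in range(2, 1 << (dp // 2 + 1)))

def least_coprime_irreducible(f):
    """Smallest-degree irreducible polynomial over GF(2) coprime to f (brute force)."""
    p = 2                                    # start from x
    while True:
        if is_irreducible(p) and gf2_gcd(f, p) == 1:
            return p
        p += 1

# Example: f(x) = x^4 + x + 1, so the least-degree coprime irreducible polynomial is x -> 0b10
print(bin(least_coprime_irreducible(0b10011)))
```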



Abstract:

While it is well known that extremely long low-density parity-check (LDPC) codes perform exceptionally well in error-correction applications, short-length codes are preferable in practical applications. However, short-length LDPC codes suffer from performance degradation owing to graph-based impairments such as short cycles, trapping sets and stopping sets in the bipartite graph of the LDPC matrix. In particular, performance degradation at moderate to high Eb/N0 is caused by oscillations in the bit-node a posteriori probabilities induced by short cycles and trapping sets in the bipartite graph. In this study, a computationally efficient algorithm is proposed to improve the performance of short-length LDPC codes at moderate to high Eb/N0. The algorithm makes use of the information generated by the belief propagation (BP) algorithm in previous iterations before a decoding failure occurs. Using this information, a reliability-based estimation is performed on each bit node to supplement the BP algorithm. The proposed algorithm gives an appreciable coding gain compared with BP decoding for LDPC codes of rate equal to or less than 1/2. The coding gains are modest to significant for regular LDPC codes optimised for bipartite-graph conditioning, whereas the coding gains are large for unoptimised codes. Hence, this algorithm is useful for relaxing some stringent constraints on the graphical structure of the LDPC code and for developing hardware-friendly designs.
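The precise reliability estimate is specified in the paper; purely as a hypothetical illustration of reusing earlier BP iterations, one could replace the final LLR of bits whose a posteriori sign oscillates with the average LLR over the last few iterations before the failure:

```python
import numpy as np

def stabilise_oscillating_bits(llr_history, window=5):
    """Hypothetical post-processing after a BP decoding failure.

    llr_history : array of shape (iterations, n_bits) of a posteriori LLRs from BP.
    Bits whose LLR sign flips inside the last `window` iterations get their final LLR
    replaced by the window average; stable bits keep their last-iteration LLR.
    """
    recent = np.asarray(llr_history)[-window:]
    signs = np.sign(recent)
    oscillating = np.any(signs != signs[-1], axis=0)          # sign changed within the window
    final = recent[-1].copy()
    final[oscillating] = recent[:, oscillating].mean(axis=0)  # smooth the unreliable estimates
    return final, oscillating

# Toy usage: three bits, the middle one oscillates
history = np.array([[ 2.0, -1.0,  3.0],
                    [ 2.2,  0.8,  3.1],
                    [ 2.1, -0.9,  3.0]])
print(stabilise_oscillating_bits(history, window=3))
```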


Abstract:

We present here an experimental set-up, developed for the first time in India, for the determination of the mixing ratio and carbon isotopic ratio of air-CO2. The set-up includes traps for the collection and extraction of CO2 from air samples using cryogenic procedures, followed by measurement of the CO2 mixing ratio using an MKS Baratron gauge and analysis of the isotopic ratios using the dual-inlet peripheral of a high-sensitivity isotope ratio mass spectrometer (IRMS), MAT 253. The internal reproducibility (precision) for the δ13C measurement, established from repeat analyses of CO2, is +/- 0.03 parts per thousand. The set-up is calibrated with international carbonate and air-CO2 standards. An in-house air-CO2 mixture, 'OASIS AIRMIX', is prepared by mixing CO2 from a high-purity cylinder with O2 and N2, and an aliquot of this mixture is routinely analyzed together with the air samples. The external reproducibilities for the measurement of the CO2 mixing ratio and the carbon isotopic ratio are +/- 7 μmol mol⁻¹ (n = 169) and +/- 0.05 parts per thousand (n = 169), respectively, based on the mean of the difference between two aliquots of the reference air mixture analyzed during daily operation from November 2009 to December 2011. The correction due to the isobaric interference of N2O on air-CO2 samples is determined separately by analyzing mixtures of CO2 (of known isotopic composition) and N2O in varying proportions. A +0.2 parts per thousand correction in the δ13C value for an N2O concentration of 329 ppb is determined. As an application, we present results from an experiment conducted during the solar eclipse of 2010. The isotopic ratio of CO2 and the carbon dioxide mixing ratio in the air samples collected during the event differ from those of neighbouring samples, suggesting the role of atmospheric inversion in trapping the CO2 emitted from the urban atmosphere during the eclipse.
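Assuming the isobaric N2O effect scales linearly with its mixing ratio (an assumption made here for illustration; the paper determined the +0.2 parts per thousand value at 329 ppb experimentally), the correction applied to a measured δ13C value would be:

```python
def n2o_corrected_d13c(d13c_measured, n2o_ppb, correction_at_329ppb=0.2):
    """Apply the N2O isobaric-interference correction to a measured delta13C value (permil).

    Assumes the correction scales linearly with the N2O mixing ratio,
    anchored to the +0.2 permil value determined at 329 ppb in this work.
    """
    return d13c_measured + correction_at_329ppb * (n2o_ppb / 329.0)

print(n2o_corrected_d13c(-8.45, 329.0))   # -> -8.25 (illustrative measured value)
```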