142 results for correction
Abstract:
The uplift resistance of pipelines buried in sand, in the presence of inclined groundwater flow with both upward and downward flow directions, has been determined using lower bound finite element limit analysis in conjunction with nonlinear optimization. A correction factor (f_γ), which is to be multiplied with the uplift factor (F_γ), has been computed to account for groundwater seepage. The variation of f_γ has been obtained as a function of i(γ_w/γ_sub) for different horizontal inclinations (θ) of groundwater flow, where i is the absolute magnitude of the hydraulic gradient along the direction of flow, γ_w is the unit weight of water and γ_sub is the submerged unit weight of the soil mass. For a given magnitude of i, there exists a critical value of θ at which f_γ attains its minimum. An example has also been presented to illustrate the application of the results to the design of pipelines in the presence of groundwater seepage.
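Schematically, the role of the correction factor described in this abstract can be written as below (a sketch restating the stated relationship; the label F_γ,seepage for the corrected uplift factor is ours, not the paper's):

\[
  F_{\gamma,\mathrm{seepage}} = f_{\gamma}\, F_{\gamma},
  \qquad
  f_{\gamma} = f_{\gamma}\!\left(i\,\frac{\gamma_{w}}{\gamma_{\mathrm{sub}}},\ \theta\right)
\]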
Abstract:
We compute the logarithmic correction to black hole entropy about exponentially suppressed saddle points of the Quantum Entropy Function corresponding to Z_N orbifolds of the near horizon geometry of the extremal black hole under study. By carefully accounting for zero mode contributions, we show that the logarithmic contributions for quarter-BPS black holes in N = 4 supergravity and one-eighth BPS black holes in N = 8 supergravity perfectly match the prediction from microstate counting. We also find that the logarithmic contribution for half-BPS black holes in N = 2 supergravity depends non-trivially on the Z_N orbifold. Our analysis draws heavily on the results we had previously obtained for heat kernel coefficients on Z_N orbifolds of spheres and hyperboloids in arXiv:1311.6286, and we also propose a generalization of the Plancherel formula on Z_N orbifolds of hyperboloids to an expression involving the Harish-Chandra character of sl(2, R), a result which is of possible mathematical interest.
Abstract:
Diffusion couple experiments are conducted to study phase evolution in the Co-rich part of the Co-Ni-Ta phase diagram. This helps to examine the available phase diagram and to propose a correction to the stability of the Co2Ta phase based on compositional measurements and X-ray analysis. The growth rate of this phase decreases with an increase in Ni content, and the same is reflected in the estimated integrated interdiffusion coefficients of the components in this phase. The possible reasons for this change are discussed in terms of defects, crystal structure and the driving forces for diffusion. The diffusion rate of Co in the Co2Ta phase at the Co-rich composition is higher because of the larger number of Co-Co bonds present compared to Ta-Ta bonds and the presence of Co antisites accommodating the deviation from stoichiometry. The decrease in the diffusion coefficients on Ni addition indicates that Ni preferentially replaces Co antisites, thereby decreasing the diffusion rate. (C) 2014 Elsevier B.V. All rights reserved.
Abstract:
Accuracy in tree woody growth estimates is important to global carbon budget estimation and climate-change science. Tree growth in permanent sampling plots (PSPs) is commonly estimated by measuring stem diameter changes, but this method is susceptible to bias resulting from water-induced reversible stem shrinkage. In the absence of bias correction, temporal variability in growth is likely to be overestimated and incorrectly attributed to fluctuations in resource availability, especially in forests with high seasonal and inter-annual variability in water. We propose and test a novel approach for estimating and correcting this bias at the community level. In a 50-ha PSP in a seasonally dry tropical forest in southern India, where tape measurements have been taken every four years from 1988 to 2012, we estimated, for nine trees, the bias due to reversible stem shrinkage as the difference between woody growth measured using tree rings and that estimated from tape. We tested whether the bias estimated from these trees could be used as a proxy to correct bias in tape-based growth estimates at the PSP scale. We observed significant shrinkage-related bias in the growth estimates of the nine trees in some censuses. This bias was strongly linearly related to tape-based growth estimates at the level of the PSP and could be used as a proxy. After the bias was corrected, the temporal variance in growth rates of the PSP decreased, while the effect of exceptionally dry or wet periods was retained, indicating that at least part of the temporal variability arose from reversible shrinkage-related bias. We also suggest that the efficacy of the bias correction could be improved by measuring the proxy on trees spanning different size classes and census timings, but not necessarily different species. Our approach allows for reanalysis, and possible reinterpretation, of temporal trends in tree growth, above-ground biomass change, or carbon fluxes in forests, and of their relationships with resource availability in the context of climate change. (C) 2014 Elsevier B.V. All rights reserved.
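A minimal sketch of the kind of proxy-based correction described in this abstract, assuming a simple linear relation between the proxy-derived bias and plot-level tape-based growth (variable names, numbers, and the fitting procedure are illustrative, not the paper's):

import numpy as np

# Per-census growth of the proxy trees: ring-based (assumed bias-free) vs tape-based (synthetic values).
ring_growth = np.array([0.42, 0.55, 0.31, 0.48])   # mean increment over proxy trees, cm/yr
tape_growth = np.array([0.30, 0.60, 0.18, 0.52])   # tape-based increment for the same trees/censuses

# Shrinkage-related bias of the tape method, per census, estimated from the proxy trees.
bias = tape_growth - ring_growth

# The abstract reports that this bias is strongly linearly related to tape-based
# growth at the plot (PSP) level; fit that linear relation.
psp_tape_growth = np.array([0.35, 0.58, 0.22, 0.50])   # plot-level tape-based growth per census
slope, intercept = np.polyfit(psp_tape_growth, bias, deg=1)

def correct_growth(tape_estimate):
    # Remove the predicted shrinkage bias from a plot-level tape-based growth estimate.
    predicted_bias = slope * tape_estimate + intercept
    return tape_estimate - predicted_bias

print(correct_growth(0.40))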
Abstract:
Matroidal networks were introduced by Dougherty et al. and have been well studied in the recent past. It was shown that a network has a scalar linear network coding solution if and only if it is a matroidal network associated with a representable matroid. A particularly interesting feature of this development is the ability to construct (scalar and vector) linearly solvable networks using certain classes of matroids. Furthermore, it was shown through the connection between network coding and matroid theory that linear network coding is not always sufficient for general network coding scenarios. The current work attempts to establish a connection between matroid theory and network-error correcting and detecting codes. In a similar vein to the theory connecting matroids and network coding, we abstract the essential aspects of linear network-error detecting codes to arrive at the definition of a matroidal error detecting network (and similarly, a matroidal error correcting network abstracting from network-error correcting codes). An acyclic network (with arbitrary sink demands) is then shown to possess a scalar linear error detecting (correcting) network code if and only if it is a matroidal error detecting (correcting) network associated with a representable matroid. Therefore, constructing such network-error correcting and detecting codes implies the construction of certain representable matroids that satisfy some special conditions, and vice versa. We then present algorithms that enable the construction of matroidal error detecting and correcting networks with a specified capability of network-error correction. Using these construction algorithms, a large class of hitherto unknown scalar linearly solvable networks with multisource, multicast, and multiple-unicast network-error correcting codes is made available for theoretical use and practical implementation, with parameters such as the number of information symbols, number of sinks, number of coding nodes, and error correcting capability being arbitrary, limited only by the computing power available for executing the algorithms. The complexity of the construction of these networks is shown to be comparable with the complexity of existing algorithms that design multicast scalar linear network-error correcting codes. Finally, we also show that linear network coding is not sufficient for the general network-error correction (detection) problem with arbitrary demands. In particular, for the same number of network errors, we show a network for which there is a nonlinear network-error detecting code satisfying the demands at the sinks, whereas there are no linear network-error detecting codes that do the same.
Abstract:
Purpose: To propose an image reconstruction technique, algebraic reconstruction technique-refraction correction (ART-rc). The proposed method accounts for the refractive index mismatch present at the boundary of the gel dosimeter scanner and also corrects for interior ray refraction. Polymer gel dosimeters with high dose regions have a higher refractive index and optical density than the background medium; these changes in refractive index at high dose result in interior ray bending. Methods: The inclusion of the effects of refraction is an important step in the reconstruction of optical density in gel dosimeters. The proposed ray tracing algorithm models the multiple interior refractions at the inhomogeneities. Jacob's ray tracing algorithm has been modified to calculate the pathlengths of the ray that traverses the higher dose regions. The algorithm computes the length of the ray in each pixel along its path, and these lengths are used as the weight matrix. The algebraic reconstruction technique and pixel based reconstruction algorithms are used to solve the reconstruction problem. The proposed method is tested with numerical phantoms for various noise levels. Experimental dosimetric results are also presented. Results: The results show that the proposed scheme, ART-rc, is able to reconstruct the optical density inside the dosimeter better than filtered backprojection and conventional algebraic reconstruction approaches. The quantitative improvement obtained using ART-rc is evaluated using the gamma index. The refraction errors due to regions of different refractive indices are discussed, and the effects of modeling interior refraction in the dose region are presented. Conclusions: The errors propagated due to multiple refraction effects have been modeled, and the improvements in reconstruction using the proposed model are presented. The refractive index of the dosimeter is mismatched with that of the surrounding medium (for dry air or water scanning). The algorithm reconstructs the dose profiles by estimating the refractive indices of multiple inhomogeneities, with different refractive indices and optical densities, embedded in the dosimeter. This is achieved by tracking the path of the ray that traverses the dosimeter. Extensive simulation studies have been carried out, and the results are found to match the experimental results. (C) 2015 American Association of Physicists in Medicine.
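The reconstruction step itself can be sketched as a standard ART (Kaczmarz-type) update driven by a path-length weight matrix; the refraction-aware ray tracer that produces the weights is not reproduced here, and the function names and relaxation parameter are illustrative assumptions:

import numpy as np

def art_reconstruct(W, p, n_iters=50, relaxation=1.0):
    # W[i, j]: length of ray i inside pixel j (from the refraction-aware ray tracer).
    # p[i]   : measured projection value for ray i.
    x = np.zeros(W.shape[1])                        # optical density per pixel
    for _ in range(n_iters):
        for i in range(W.shape[0]):                 # sweep over rays
            wi = W[i]
            norm = wi @ wi
            if norm == 0.0:
                continue
            residual = p[i] - wi @ x
            x += relaxation * residual / norm * wi  # project onto the ray's hyperplane
    return x

# Toy usage with a random consistent system (illustrative only).
rng = np.random.default_rng(0)
W = rng.random((60, 25))
x_true = rng.random(25)
x_rec = art_reconstruct(W, W @ x_true, n_iters=200)
print(np.max(np.abs(x_rec - x_true)))               # residual reconstruction error

A relaxation parameter below 1.0 is commonly used when the projection data are noisy.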
Abstract:
The explanation of resonance given in IEEE Std C57.149-2012 to define resonance during frequency response analysis (FRA) measurements on transformers implicitly uses the conditions prevailing at resonance in a series R-L-C circuit. This dependence is evident from the two assertions made in the definition, viz., a zero net reactive impedance and a zero value of the phase angle of the frequency response function. These two conditions are satisfied (at resonance) only in a series R-L-C circuit and certainly not in a transformer, as has been assumed in the Standard. This can be proved by considering a ladder-network model. Circuit analysis of this ladder network reveals the origin of the fallacy and proves that, at resonance, the ladder network is neither purely resistive nor is the phase angle (between input voltage and input current) always zero. Also, during FRA measurements, it is often seen that the phase angle does not traverse the conventional cyclic path from +90 degrees to -90 degrees (or vice versa) at all resonant frequencies. This peculiar feature can also be explained using pole-zero maps. Simple derivations, simulations and experimental results on an actual winding are presented. In summary, the authors believe that this study dispels existing misconceptions about the definition of FRA resonance and provides material for its correction in IEEE Std C57.149-2012. (C) 2014 Elsevier B.V. All rights reserved.
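A toy numerical illustration of the point made above, using a two-section R-L / shunt-C ladder rather than the paper's transformer model (component values are arbitrary): at the peaks of the end-to-end frequency response, the phase is far from zero rather than passing through zero as the series R-L-C picture would suggest.

import numpy as np

R, L, C = 50.0, 1e-3, 1e-9                   # per-section values, illustrative only
freq = np.logspace(4.5, 6.5, 200001)         # frequency sweep, Hz
s = 2j * np.pi * freq

z_series = R + s * L                         # series branch of each section
z_shunt = 1.0 / (s * C)                      # shunt branch of each section

zb = z_series + z_shunt                      # far (open-ended) section seen from the middle node
za = z_shunt * zb / (z_shunt + zb)           # first shunt C in parallel with the far section
h = (za / (z_series + za)) * (z_shunt / (z_series + z_shunt))   # V_out / V_in

mag = np.abs(h)
peaks = np.where((mag[1:-1] > mag[:-2]) & (mag[1:-1] > mag[2:]))[0] + 1
for k in peaks:
    print(f"resonance peak at {freq[k]/1e3:7.1f} kHz, "
          f"phase = {np.degrees(np.angle(h[k])):7.1f} deg")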
Abstract:
Using the positivity of relative entropy arising from the Ryu-Takayanagi formula for spherical entangling surfaces, we obtain constraints at the nonlinear level for the gravitational dual. We calculate the Green's function necessary to compute the first order correction to the entangling surface and use this to find the relative entropy for non-constant stress tensors in a derivative expansion. We show that the Einstein value satisfies the positivity condition, while the multidimensional parameter space away from it gets constrained.
Abstract:
A deformable mirror (DM) is an important component of an adaptive optics system. It is known that an on-axis spherical/parabolic optical component, placed at an angle to the incident beam, introduces defocus as well as astigmatism in the image plane. Although the former can be compensated by changing the focal plane position, the latter cannot be removed by mere optical realignment. Since the DM is to be used to compensate a turbulence-induced curvature term in addition to other aberrations, it is necessary to determine the aberrations induced by such an optical element (a curved DM surface) when placed at a non-zero angle of incidence in the optical path. To this end, we estimate, to first order, the aberrations introduced by a DM as a function of the incidence angle and the deformation of the DM surface. We record images using a simple setup in which the incident beam is reflected by a 37-channel micro-machined membrane deformable mirror for various angles of incidence. Astigmatism is observed to be the dominant aberration, as determined by measuring the difference between the tangential and sagittal focal planes. We justify our results on the basis of theoretical simulations and discuss the feasibility of using such a system for adaptive optics, considering a trade-off between wavefront correction and astigmatism due to deformation. (C) 2015 Optical Society of America
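For reference, the standard first-order relations for a spherical mirror of radius of curvature R used at an angle of incidence θ (a textbook result quoted here for context, not a result of the paper) give the tangential and sagittal focal distances and hence the astigmatic separation measured in the experiment:

\[
  f_t = \frac{R}{2}\cos\theta, \qquad
  f_s = \frac{R}{2\cos\theta}, \qquad
  \Delta f = f_s - f_t = \frac{R}{2}\,\sin\theta\,\tan\theta
\]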
Abstract:
In the present paper, based on the principles of gauge/gravity duality, we analytically compute the shear viscosity to entropy density (η/s) ratio corresponding to the superfluid phase in Einstein Gauss-Bonnet gravity. From our analysis we note that the ratio indeed receives a finite temperature correction below a certain critical temperature (T < T_c). This proves the non-universality of the η/s ratio in higher derivative theories of gravity. We also compute the upper bound for the Gauss-Bonnet coupling (λ) corresponding to the symmetry broken phase and note that the upper bound on the coupling does not seem to change as long as we are close to the critical point of the phase diagram. However, the corresponding lower bound on the η/s ratio seems to get modified due to the finite temperature effects.
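For context, the zero-temperature value of this ratio in Einstein Gauss-Bonnet gravity quoted in the literature, which the finite temperature correction discussed above modifies in the superfluid phase, reads (standard result, not taken from this abstract):

\[
  \frac{\eta}{s} = \frac{1}{4\pi}\,\bigl(1 - 4\lambda\bigr)
\]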
Abstract:
A new stabilization scheme, based on a stochastic representation of the discretized field variables, is proposed with a view to reducing or even eliminating unphysical oscillations in mesh-free numerical simulations of systems developing shocks or exhibiting localized bands of extreme deformation in the response. The origin of the stabilization scheme may be traced to nonlinear stochastic filtering and, consistent with a class of such filters, gain-based additive correction terms are applied to the simulated solution of the system, herein obtained through the element-free Galerkin method, in order to impose a set of constraints that help arrest the spurious oscillations. The method is numerically illustrated through its applications to the inviscid Burgers' equation, wherein shocks may develop as a result of intersections of the characteristics, and to a gradient plasticity model whose response is often characterized by a developing shear band as the external load is gradually increased. The potential of the method for stabilized yet accurate numerical simulations of such systems involving extreme gradient variations in the response is thus brought forth. (C) 2014 Elsevier Ltd. All rights reserved.
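A minimal sketch of a gain-based additive correction of the kind described above, written in the spirit of a Kalman-type update (the constraint operator, gain construction, and variable names are illustrative assumptions, not the paper's formulation):

import numpy as np

def gain_based_correction(u, H, target, P, R):
    # u      : current mesh-free (e.g. element-free Galerkin) solution vector
    # H      : linear operator extracting the constrained quantities from u
    # target : values the constraints are meant to take
    # P, R   : assumed covariances of the solution and of the constraint noise
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # gain, K = P H^T S^{-1}
    innovation = target - H @ u            # constraint mismatch
    return u + K @ innovation              # additive, gain-weighted correction

# Toy usage: nudge a spiking nodal value toward a constraint.
u = np.array([1.0, 2.0, 8.0, 2.0, 1.0])
H = np.array([[0.0, 0.0, 1.0, 0.0, 0.0]])
print(gain_based_correction(u, H, target=np.array([3.0]), P=np.eye(5), R=np.array([[0.1]])))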
Abstract:
We investigated the nature of the cohesive energy between graphane sheets via multiple CH···HC interactions, using density functional theory (DFT) including dispersion correction (Grimme's D3 approach) computations of [n]graphane σ dimers (n = 6-73). For comparison, we also evaluated the binding between graphene sheets, which display prototypical π/π interactions. The results were analyzed using the block-localized wave function (BLW) method, which is a variant of ab initio valence bond (VB) theory. BLW interprets the intermolecular interactions in terms of a frozen interaction energy (ΔE_F) composed of electrostatic and Pauli repulsion interactions, polarization (ΔE_pol), charge-transfer interaction (ΔE_CT), and dispersion effects (ΔE_disp). The BLW analysis reveals that the cohesive energy between graphane sheets is dominated by two stabilizing effects, namely intermolecular London dispersion and two-way charge transfer energy due to the σ(CH) → σ*(HC) interactions. The shift of the electron density around the nonpolar covalent C-H bonds involved in the intermolecular interaction decreases the C-H bond lengths uniformly by 0.001 Å. The ΔE_CT term, which accounts for ~15% of the total binding energy, results in the accumulation of electron density in the interface area between the two layers. This accumulated electron density thus acts as an electronic glue for the graphane layers and constitutes an important driving force in the self-association and stability of graphane under ambient conditions. Similarly, the double-faced adhesive tape style of charge transfer interactions was also observed between graphene sheets, in which it accounts for ~18% of the total binding energy. The binding energy between graphane sheets is additive and can be expressed as a sum of CH···HC interactions, or as a function of the number of C-H bonds.
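The decomposition used in this analysis can be summarized as below (the label ΔE_bind for the total binding energy is ours; the four terms are those named in the abstract):

\[
  \Delta E_{\mathrm{bind}} = \Delta E_{F} + \Delta E_{\mathrm{pol}} + \Delta E_{\mathrm{CT}} + \Delta E_{\mathrm{disp}}
\]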
Abstract:
A recent approach for the construction of constant dimension subspace codes, designed for error correction in random networks, is to consider the codes as orbits of suitable subgroups of the general linear group. In particular, a cyclic orbit code is the orbit of a cyclic subgroup. Hence a possible method to construct large cyclic orbit codes with a given minimum subspace distance is to select a subspace such that its orbit under the Singer subgroup satisfies the distance constraint. In this paper we propose a method in which some basic properties of difference sets are employed to select such a subspace, thereby providing a systematic way of constructing cyclic orbit codes with specified parameters. We also present an explicit example of such a construction.
Abstract:
We study the canted magnetic state in Sr2IrO4 using fully relativistic density functional theory (DFT) including an on-site Hubbard U correction. A complete magnetic phase diagram with respect to the tetragonal distortion and the rotation of the IrO6 octahedra is constructed, revealing the presence of two types of canted-to-collinear magnetic transitions: a spin-flop transition with increasing tetragonal distortion, and a complete quenching of the basal weak ferromagnetic moment below a critical octahedral rotation. Moreover, we put forward a scheme to study the anisotropic magnetic couplings by mapping magnetically constrained noncollinear DFT onto a general spin Hamiltonian. This procedure allows for the simultaneous account and direct control of the lattice, spin, and orbital interactions within a fully ab initio scheme. We compute the isotropic exchange, single-site anisotropy and Dzyaloshinskii-Moriya (DM) coupling parameters, and clarify that the canted magnetic state in Sr2IrO4 originates from the structural distortions and the competition between isotropic exchange and DM interactions.
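The general spin Hamiltonian onto which the constrained DFT results are mapped contains the three classes of terms named above; a schematic form (sign and summation conventions are assumptions, the paper's may differ) is:

\[
  H = \sum_{\langle ij \rangle} J_{ij}\, \mathbf{S}_i \cdot \mathbf{S}_j
    + \sum_{\langle ij \rangle} \mathbf{D}_{ij} \cdot \bigl(\mathbf{S}_i \times \mathbf{S}_j\bigr)
    + \sum_{i} \mathbf{S}_i \cdot \mathsf{A}_i\, \mathbf{S}_i
\]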
Abstract:
The set of all subspaces of F_q^n is denoted by P_q(n). The subspace distance d_S(X, Y) = dim(X) + dim(Y) - 2 dim(X ∩ Y) defined on P_q(n) turns it into a natural coding space for error correction in random network coding. A subset of P_q(n) is called a code and the subspaces that belong to the code are called codewords. Motivated by classical coding theory, a linear coding structure can be imposed on a subset of P_q(n). Braun et al. conjectured that the largest cardinality of a linear code that contains F_q^n is 2^n. In this paper, we prove this conjecture and characterize the maximal linear codes that contain F_q^n.
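A small sketch of the subspace distance for q = 2, computed from generator matrices using the identity dim(X ∩ Y) = dim(X) + dim(Y) - dim(X + Y), so that d_S(X, Y) = 2 dim(X + Y) - dim(X) - dim(Y); the function names are illustrative, not from the paper:

import numpy as np

def gf2_rank(M):
    # Rank of a binary matrix over GF(2) by Gaussian elimination.
    M = np.array(M, dtype=np.uint8) % 2
    rank, rows, cols = 0, M.shape[0], M.shape[1]
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]      # move the pivot row up
        for r in range(rows):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]                  # clear the column elsewhere
        rank += 1
    return rank

def subspace_distance(A, B):
    # d_S(X, Y) = 2 dim(X + Y) - dim(X) - dim(Y); rows of A and B span X and Y.
    return 2 * gf2_rank(np.vstack([A, B])) - gf2_rank(A) - gf2_rank(B)

# Two 2-dimensional subspaces of F_2^4 with a 1-dimensional intersection.
X = [[1, 0, 0, 0], [0, 1, 0, 0]]
Y = [[0, 1, 0, 0], [0, 0, 1, 0]]
print(subspace_distance(X, Y))   # prints 2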