910 results for Vignetting Correction


Relevance: 10.00%

Abstract:

The ultimate bearing capacity of strip foundations subjected to horizontal groundwater flow has been computed using the stress characteristics method, which is well known for its capability to solve a variety of stability problems in geotechnical engineering quite accurately. The numerical solution has been generated for both smooth and rough footings placed on frictional soils. A correction factor (f_γ) associated with the N_γ term has been introduced to account for the presence of groundwater flow. The variation of f_γ has been obtained as a function of the hydraulic gradient (i) for different values of the soil friction angle. The magnitude of f_γ decreases continuously with an increase in the value of i.
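
For orientation, a hedged sketch of where such a seepage correction factor would enter a classical Terzaghi-form bearing capacity expression (the exact expression used in the paper is not given in the abstract):

```latex
% Illustrative only: strip-footing bearing capacity with and without seepage;
% the correction factor f_gamma(i) from the abstract multiplies the N_gamma term.
\begin{align*}
  q_u &= c N_c + q N_q + \tfrac{1}{2}\gamma B N_\gamma
        && \text{(no groundwater flow)}\\
  q_u &= c N_c + q N_q + \tfrac{1}{2}\gamma B N_\gamma\, f_\gamma(i)
        && \text{(horizontal flow; } f_\gamma \text{ decreases as } i \text{ increases)}
\end{align*}
```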

Relevance: 10.00%

Abstract:

General circulation models (GCMs) use transient climate simulations to predict future climate conditions. Coarse grid resolutions and process uncertainties necessitate the use of downscaling models to simulate precipitation. However, with multiple GCMs now available, selecting an atmospheric variable from a particular model that is representative of the ensemble mean becomes an important consideration in downscaling. The variable convergence score (VCS) provides a simple yet meaningful approach to this issue, providing a mechanism to evaluate variables against each other with respect to the stability they exhibit in future climate simulations. In this study, the VCS methodology is applied to 10 atmospheric variables of particular interest in downscaling precipitation over India, both nationally and on a regional basis. The nested bias-correction methodology is used to remove the systematic biases in the GCM simulations, and a single VCS curve is developed for the entire country. The generated VCS curve is expected to assist in quantifying variable performance across different GCMs, thus reducing the uncertainty in climate impact-assessment studies. The results indicate higher consistency across GCMs for pressure and temperature, and lower consistency for precipitation and related variables. Regional assessments, while broadly consistent with the overall results, indicate low convergence in atmospheric attributes for the northeastern parts of India.
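
As a rough, hedged illustration of ranking variables by their cross-model stability, the sketch below uses an inverse-coefficient-of-variation proxy; this is an illustrative stand-in, not the published VCS formula, and the variable names and values are hypothetical.

```python
import numpy as np

def convergence_proxy(projections):
    """Illustrative convergence proxy (NOT the published VCS formula).

    projections: dict mapping variable name -> array of shape (n_gcms,)
                 holding each GCM's projected change for that variable
                 (e.g., standardized future-minus-baseline anomalies).
    Returns variables ranked from most to least consistent across GCMs,
    using the inverse coefficient of variation as a stability measure.
    """
    scores = {}
    for var, values in projections.items():
        values = np.asarray(values, dtype=float)
        spread = values.std(ddof=1)
        centre = np.abs(values.mean())
        # Higher score = GCMs agree more closely on this variable.
        scores[var] = centre / spread if spread > 0 else np.inf
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical example: pressure-like variables tend to agree more than precipitation.
example = {
    "mslp":   [0.9, 1.0, 1.1, 0.95, 1.05],
    "tas":    [1.8, 2.1, 2.0, 1.9, 2.2],
    "precip": [0.2, -0.5, 1.4, 0.1, -0.9],
}
print(convergence_proxy(example))
```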

Relevance: 10.00%

Abstract:

Tetracene is an important conjugated molecule for device applications. We have used the diagrammatic valence bond method to obtain the desired states in a Hilbert space of about 450 million singlets and 902 million triplets. We have also studied donor/acceptor (D/A)-substituted tetracenes with D and A groups placed symmetrically about the long axis of the molecule. In these cases, by exploiting a new symmetry, which is a combination of C-2 symmetry and electron-hole symmetry, we are able to obtain their low-lying states. In the case of substituted tetracene, we find that the optically allowed one-photon excitation gaps decrease with increasing D/A strength, while the lowest singlet-triplet gap is only weakly affected. In all the systems we have studied, the excited singlet state, S-1, is at more than twice the energy of the lowest triplet state, and the second triplet is very close to the S-1 state. Thus, donor-acceptor-substituted tetracene could be a good candidate for photovoltaic device applications, as it satisfies the energy criteria for singlet fission. We have also obtained the model-exact second harmonic generation (SHG) coefficients using the correction vector method, and we find that the SHG responses increase with increasing D/A strength.
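
The energy criteria for singlet fission referred to above are the standard ones, stated here for reference (the numerical values are not taken from the paper):

```latex
% Standard energetic criteria for efficient singlet fission:
% fission of S1 into two triplets should be (near) exoergic, and T2 should
% lie high enough to suppress triplet-triplet annihilation into T2.
\begin{align*}
  E(S_1) &\gtrsim 2\,E(T_1),\\
  E(T_2) &> 2\,E(T_1).
\end{align*}
```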

Relevance: 10.00%

Abstract:

We consider free fermion and free boson CFTs in two dimensions, deformed by a chemical potential μ for the spin-three current. For the CFT on the infinite spatial line, we calculate the finite temperature entanglement entropy of a single interval perturbatively to second order in μ in each of the theories. We find that the result in each case is given by the same non-trivial function of temperature and interval length. Remarkably, we further obtain the same formula using a recent Wilson line proposal for the holographic entanglement entropy, in holomorphically factorized form, associated with the spin-three black hole in SL(3,R) × SL(3,R) Chern-Simons theory. Our result suggests that the O(μ²) correction to the entanglement entropy may be universal for W-algebra CFTs with a spin-three chemical potential, and constitutes a check of the holographic entanglement entropy proposal for higher spin theories of gravity in AdS_3.
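
For reference, the undeformed (μ = 0) quantity on which the O(μ²) correction sits is the standard single-interval entanglement entropy at finite temperature for a CFT on the infinite line; the correction itself is not reproduced here.

```latex
% Calabrese-Cardy result for a single interval of length l at inverse
% temperature beta in a CFT of central charge c on the infinite line;
% epsilon is a UV cutoff. The paper computes the O(mu^2) shift of this quantity.
S_0(\ell,\beta) = \frac{c}{3}\,
  \log\!\left[\frac{\beta}{\pi\varepsilon}\,
  \sinh\!\left(\frac{\pi \ell}{\beta}\right)\right]
```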

Relevance: 10.00%

Abstract:

Eleven GCMs (BCCR-BCCM2.0, INGV-ECHAM4, GFDL2.0, GFDL2.1, GISS, IPSL-CM4, MIROC3, MRI-CGCM2, NCAR-PCMI, UKMO-HADCM3 and UKMO-HADGEM1) were evaluated for India (covering 73 grid points of 2.5° × 2.5°) for the climate variable `precipitation rate' using 5 performance indicators. The performance indicators used were the correlation coefficient, normalised root mean square error, absolute normalised mean bias error, average absolute relative error and skill score. We used a nested bias-correction methodology to remove the systematic biases in the GCM simulations. The entropy method was employed to obtain the weights of these 5 indicators. Ranks of the 11 GCMs were obtained through a multicriterion decision-making outranking method, PROMETHEE-2 (Preference Ranking Organisation Method of Enrichment Evaluation). An equal-weight scenario (assigning a weight of 0.2 to each indicator) was also used to rank the GCMs. An effort was also made to rank the GCMs for 4 river basins (Godavari, Krishna, Mahanadi and Cauvery) in peninsular India. The upper Malaprabha catchment in Karnataka, India, was chosen to demonstrate the entropy and PROMETHEE-2 methods. The Spearman rank correlation coefficient was employed to assess the association between the ranking patterns. Our results suggest that the ensemble of GFDL2.0, MIROC3, BCCR-BCCM2.0, UKMO-HADCM3, MPIECHAM4 and UKMO-HADGEM1 is suitable for India. The proposed methodology can be extended to rank GCMs for any selected region.
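
A minimal sketch of the entropy weighting step described above is given below; the indicator matrix is hypothetical, and the normalisation conventions may differ from those used in the paper.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weights for a decision matrix X of shape (n_gcms, n_indicators).

    Indicators are assumed benefit-type and non-negative after preprocessing.
    Each column is normalised to a probability distribution, its Shannon
    entropy is computed, and weights are proportional to (1 - entropy), so
    indicators that discriminate more between GCMs receive larger weights.
    """
    X = np.asarray(X, dtype=float)
    m, n = X.shape
    P = X / X.sum(axis=0, keepdims=True)             # column-wise normalisation
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plogp.sum(axis=0) / np.log(m)               # entropy of each indicator
    d = 1.0 - e                                      # degree of diversification
    return d / d.sum()

# Hypothetical indicator matrix: 4 GCMs x 3 indicators
# (e.g., correlation coefficient, skill score, 1 - normalised RMSE).
X = np.array([
    [0.80, 0.70, 0.60],
    [0.75, 0.72, 0.58],
    [0.60, 0.50, 0.40],
    [0.82, 0.71, 0.62],
])
print(entropy_weights(X).round(3))
```

The resulting weights would then feed the PROMETHEE-2 outranking step; the equal-weight scenario simply replaces them with 0.2 each.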

Relevance: 10.00%

Abstract:

Finite volume methods traditionally employ dimension-by-dimension extension of the one-dimensional reconstruction and averaging procedures to achieve spatial discretization of the governing partial differential equations on a structured Cartesian mesh in multiple dimensions. This simple approach based on tensor product stencils introduces an undesirable grid orientation dependence in the computed solution. The resulting anisotropic errors lead to a disparity in the calculations that is most prominent between directions parallel and diagonal to the grid lines. In this work we develop isotropic finite volume discretization schemes which minimize such grid orientation effects in multidimensional calculations by eliminating the directional bias in the lowest order term of the truncation error. Explicit isotropic expressions that relate the cell face averaged line and surface integrals of a function and its derivatives to the given cell area and volume averages are derived in two and three dimensions, respectively. It is found that a family of isotropic approximations with a free parameter can be derived by combining isotropic schemes based on next-nearest and next-next-nearest neighbors in three dimensions. Use of these isotropic expressions alone in a standard finite volume framework, however, is found to be insufficient for enforcing rotational invariance when the flux vector is nonlinear and/or spatially non-uniform. The rotationally invariant terms which lead to a loss of isotropy in such cases are explicitly identified and recast in a differential form. Various forms of flux correction terms which allow a full recovery of rotational invariance in the lowest order truncation error terms, while preserving the formal order of accuracy and discrete conservation of the original finite volume method, are developed. Numerical tests in two and three dimensions attest to the superior directional attributes of the proposed isotropic finite volume method. Prominent anisotropic errors, such as the spurious asymmetric distortions of a circular reaction-diffusion wave that appear in the conventional finite volume implementation, are effectively suppressed through isotropic finite volume discretization. Furthermore, for a given spatial resolution, a striking improvement in the prediction of the kinetic energy decay rate for a general two-dimensional incompressible flow field is observed with the use of the isotropic finite volume method instead of the conventional discretization. (C) 2014 Elsevier Inc. All rights reserved.
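
As a familiar textbook illustration of removing directional bias in the leading truncation error (not the scheme derived in the paper), compare the standard 5-point Laplacian with the isotropic 9-point stencil on a uniform grid of spacing h:

```latex
% 5-point stencil: leading error (h^2/12)(u_xxxx + u_yyyy), which is anisotropic.
\nabla^2 u \approx \frac{1}{h^2}\left(u_E + u_W + u_N + u_S - 4u_C\right)
% 9-point stencil: leading error (h^2/12)\nabla^4 u, which is rotationally invariant.
\nabla^2 u \approx \frac{1}{6h^2}\Bigl[4\left(u_E + u_W + u_N + u_S\right)
      + u_{NE} + u_{NW} + u_{SE} + u_{SW} - 20\,u_C\Bigr]
```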

Relevance: 10.00%

Abstract:

Simplified equations are derived for granular flow in the `dense' limit, where the volume fraction is close to that for dynamical arrest, and the `shallow' limit, where the stream-wise length for flow development (L) is large compared with the cross-stream height (h). The mass and diameter of the particles are set equal to 1 in the analysis without loss of generality. In the dense limit, the equations are simplified by taking advantage of the power-law divergence of the pair distribution function, χ ∝ (φ_ad − φ)^(−α), and the faster divergence of the derivative ρ(dχ/dρ) ~ (dχ/dφ), where ρ and φ are the density and volume fraction, and φ_ad is the volume fraction for arrested dynamics. When the height h is much larger than the conduction length, the energy equation reduces to an algebraic balance between the rates of production and dissipation of energy, and the stress is proportional to the square of the strain rate (Bagnold law). In the shallow limit, the stress reduces to a simplified Bagnold stress, where all components of the stress are proportional to (∂u_x/∂y)², the square of the cross-stream (y) derivative of the stream-wise (x) velocity. In the simplified equations for dense shallow flows, the inertial terms are neglected in the y momentum equation because they are O(h/L) smaller than the divergence of the stress. The resulting model contains two equations: a mass conservation equation, which reduces to a solenoidal condition on the velocity in the incompressible limit, and a stream-wise momentum equation containing just one parameter B, a combination of the Bagnold coefficients and their derivatives with respect to the volume fraction. The leading-order dense shallow flow equations, as well as the first correction due to density variations, are analysed for two representative flows. The first is the development from a plug flow to a fully developed Bagnold profile for flow down an inclined plane. The analysis shows that the flow development length is (ρ̄h³/B), where ρ̄ is the mean density, and this length is numerically estimated from previous simulation results. The second example is the development of the boundary layer at the base of the flow when a plug flow (with a slip condition at the base) encounters a rough base, in the limit where the momentum boundary layer thickness is small compared with the flow height. Analytical solutions can be found only when the stream-wise velocity far from the surface varies as x^F, where x is the stream-wise distance from the start of the rough base and F is an exponent. The boundary layer thickness increases as (l²x)^(1/3) for all values of F, where the length scale l = √(2B/ρ̄). The analysis reveals important differences between granular flows and the flows of Newtonian fluids. The Reynolds number (the ratio of inertial and viscous terms) turns out to depend only on the layer height and the Bagnold coefficients, and is independent of the flow velocity, because both the inertial terms in the conservation equations and the divergence of the stress depend on the square of the velocity/velocity gradients. The compressibility number (the ratio of the variation in volume fraction to the mean volume fraction) is independent of the flow velocity and layer height, and depends only on the volume fraction and the Bagnold coefficients.
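
Collecting the scalings quoted in the abstract in one place (notation as above; these are restatements, not new results):

```latex
% Simplified Bagnold stress, flow development length, and boundary-layer growth
% for dense shallow granular flows, as summarised in the abstract.
\begin{align*}
  \sigma_{ij} &\propto \left(\frac{\partial u_x}{\partial y}\right)^{2},\\
  L_{\mathrm{dev}} &\sim \frac{\bar{\rho}\, h^{3}}{B},\\
  \delta(x) &\sim \left(l^{2} x\right)^{1/3},
  \qquad l = \sqrt{\frac{2B}{\bar{\rho}}}.
\end{align*}
```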

Relevance: 10.00%

Abstract:

The Onsager model for the secondary flow field in a high-speed rotating cylinder is extended to incorporate the difference in mass of the two species in a binary gas mixture. The base flow is an isothermal solid-body rotation in which there is a balance between the radial pressure gradient and the centrifugal force density for each species. Explicit expressions for the radial variation of the pressure and mass/mole fractions, and from these the radial variation of the viscosity, thermal conductivity and diffusion coefficient, are derived, and these are used in the computation of the secondary flow. For the secondary flow, the mass, momentum and energy equations in axisymmetric coordinates are expanded in an asymptotic series in a parameter ε = Δm/m_av, where Δm is the difference in the molecular masses of the two species, and the average molecular mass is defined as m_av = (ρ_w1 m_1 + ρ_w2 m_2)/ρ_w, where ρ_w1 and ρ_w2 are the mass densities of the two species at the wall and ρ_w = ρ_w1 + ρ_w2. The equation for the master potential and the boundary conditions are derived correct to O(ε²). The leading-order equation for the master potential contains a self-adjoint sixth-order operator in the radial direction, which is different from the generalized Onsager model (Pradhan & Kumaran, J. Fluid Mech., vol. 686, 2011, pp. 109-159), since the species mass difference is included in the computation of the density, viscosity and thermal conductivity in the base state. This is solved, subject to boundary conditions, to obtain the leading approximation for the secondary flow, followed by a solution of the diffusion equation for the leading correction to the species mole fractions. The O(ε) and O(ε²) equations contain inhomogeneous terms that depend on the lower-order solutions, and these are solved in a hierarchical manner to obtain the O(ε) and O(ε²) corrections to the master potential. A similar hierarchical procedure is used for the Carrier-Maslen model for the end-cap secondary flow. The results of the Onsager hierarchy, up to O(ε²), are compared with the results of direct simulation Monte Carlo (DSMC) simulations for a binary hard-sphere gas mixture for secondary flow due to a wall temperature gradient, inflow/outflow of gas along the axis, as well as mass and momentum sources in the flow. There is excellent agreement between the solutions for the secondary flow correct to O(ε²) and the simulations, to within 15%, even at a Reynolds number as low as 100, a length/diameter ratio as low as 2, a low stratification parameter A of 0.707, a secondary flow velocity as high as 0.2 times the maximum base flow velocity, and a ratio 2Δm/(m_1 + m_2) as high as 0.5. Here, the Reynolds number is Re = ρ_w Ω R²/μ, the stratification parameter is A = √(mΩ²R²/(2k_B T)), R and Ω are the cylinder radius and angular velocity, m is the molecular mass, ρ_w is the wall density, μ is the viscosity and T is the temperature. The leading-order solutions do capture the qualitative trends, but are not in quantitative agreement.
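
The dimensionless groups quoted above, written out for readability (notation as in the abstract):

```latex
% Expansion parameter, Reynolds number and stratification parameter
% for the binary-mixture Onsager model, as defined in the abstract.
\begin{align*}
  \epsilon &= \frac{\Delta m}{m_{\mathrm{av}}}, \qquad
  m_{\mathrm{av}} = \frac{\rho_{w1} m_1 + \rho_{w2} m_2}{\rho_w},\\
  Re &= \frac{\rho_w \Omega R^{2}}{\mu}, \qquad
  A = \sqrt{\frac{m \Omega^{2} R^{2}}{2 k_B T}}.
\end{align*}
```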

Relevance: 10.00%

Abstract:

In this paper, based on the basic principles of gauge/gravity duality, we compute the Hall viscosity to entropy ratio in the presence of various higher derivative corrections to the dual gravitational description embedded in an asymptotically AdS_4 spacetime. As the first step of our analysis, considering the back-reaction, we impose higher derivative corrections to the abelian gauge sector of the theory and notice that the ratio indeed gets corrected at leading order in the coupling. Considering the probe limit as a special case, we compute this leading order correction over the fixed background of the charged black brane solution. Finally, we consider higher derivative (R²) corrections to the gravity sector of the theory, where we notice that the above ratio might get corrected at the sixth derivative level.

Relevance: 10.00%

Abstract:

The uplift resistance of pipelines buried in sands in the presence of inclined groundwater flow, considering both upward and downward flow directions, has been determined by using lower bound finite elements limit analysis in conjunction with nonlinear optimization. A correction factor (f_γ), which needs to be multiplied with the uplift factor (F_γ), has been computed to account for groundwater seepage. The variation of f_γ has been obtained as a function of i(γ_w/γ_sub) for different horizontal inclinations (θ) of the groundwater flow, where i is the absolute magnitude of the hydraulic gradient along the direction of flow, γ_w is the unit weight of water and γ_sub is the submerged unit weight of the soil mass. For a given magnitude of i, there exists a certain critical value of θ for which the magnitude of f_γ becomes a minimum. An example is also presented to illustrate the application of the results for designing pipelines in the presence of groundwater seepage.
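
As a hedged illustration only (the paper's definition of F_γ is not reproduced in the abstract), such factors are typically combined multiplicatively; for a pipe of diameter D buried at depth H in soil of submerged unit weight γ_sub this would read, for example:

```latex
% Illustrative form only: uplift resistance per unit length of a buried pipe,
% with the seepage correction factor f_gamma multiplying the uplift factor F_gamma.
P_u \approx \gamma_{\mathrm{sub}}\, H\, D\, F_\gamma\,
      f_\gamma\!\left(i\,\gamma_w/\gamma_{\mathrm{sub}},\ \theta\right)
```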

Relevance: 10.00%

Abstract:

We compute the logarithmic correction to black hole entropy about exponentially suppressed saddle points of the Quantum Entropy Function corresponding to Z_N orbifolds of the near horizon geometry of the extremal black hole under study. By carefully accounting for zero mode contributions, we show that the logarithmic contributions for quarter-BPS black holes in N = 4 supergravity and one-eighth-BPS black holes in N = 8 supergravity perfectly match the prediction from microstate counting. We also find that the logarithmic contribution for half-BPS black holes in N = 2 supergravity depends non-trivially on the Z_N orbifold. Our analysis draws heavily on the results we had previously obtained for heat kernel coefficients on Z_N orbifolds of spheres and hyperboloids in arXiv:1311.6286, and we also propose a generalization of the Plancherel formula to Z_N orbifolds of hyperboloids, expressed in terms of the Harish-Chandra character of sl(2,R), a result which is of possible mathematical interest.

Relevance: 10.00%

Abstract:

Diffusion couple experiments are conducted to study phase evolution in the Co-rich part of the Co-Ni-Ta phase diagram. This helps to examine the available phase diagram and to propose a correction to the stability of the Co2Ta phase based on compositional measurements and X-ray analysis. The growth rate of this phase decreases with an increase in Ni content. The same is reflected in the estimated integrated interdiffusion coefficients of the components in this phase. The possible reasons for this change are discussed in terms of defects, crystal structure and the driving forces for diffusion. The diffusion rate of Co in the Co2Ta phase at the Co-rich composition is higher because of the larger number of Co-Co bonds present compared with Ta-Ta bonds, and because of the presence of Co antisites accommodating the deviation from stoichiometry. The decrease in the diffusion coefficients with Ni addition indicates that Ni preferentially replaces the Co antisites, thereby decreasing the diffusion rate. (C) 2014 Elsevier B.V. All rights reserved.
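
For reference, the integrated interdiffusion coefficient referred to above is conventionally defined (in Wagner's sense; the paper's exact notation may differ) as the interdiffusion coefficient integrated over the narrow homogeneity range of the phase:

```latex
% Integrated interdiffusion coefficient of the Co2Ta phase (denoted beta),
% integrated over its composition range N_B^{beta1} to N_B^{beta2}.
\tilde{D}_{\mathrm{int}}^{\beta}
  = \int_{N_B^{\beta_1}}^{N_B^{\beta_2}} \tilde{D}\, \mathrm{d}N_B
```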

Relevance: 10.00%

Abstract:

Accuracy in tree woody growth estimates is important to global carbon budget estimation and climate-change science. Tree growth in permanent sampling plots (PSPs) is commonly estimated by measuring stem diameter changes, but this method is susceptible to bias resulting from water-induced reversible stem shrinkage. In the absence of bias correction, temporal variability in growth is likely to be overestimated and incorrectly attributed to fluctuations in resource availability, especially in forests with high seasonal and inter-annual variability in water. We propose and test a novel approach for estimating and correcting this bias at the community level. In a 50-ha PSP in a seasonally dry tropical forest in southern India, where tape measurements have been taken every four years from 1988 to 2012, we estimated the bias due to reversible stem shrinkage for nine trees as the difference between woody growth measured using tree rings and that estimated from tape. We tested whether the bias estimated from these trees could be used as a proxy to correct the bias in tape-based growth estimates at the PSP scale. We observed significant shrinkage-related bias in the growth estimates of the nine trees in some censuses. This bias was strongly linearly related to tape-based growth estimates at the level of the PSP, and could be used as a proxy. After the bias was corrected, the temporal variance in growth rates of the PSP decreased, while the effect of exceptionally dry or wet periods was retained, indicating that at least a part of the temporal variability arose from reversible shrinkage-related bias. We also suggest that the efficacy of the bias correction could be improved by measuring the proxy on trees belonging to different size classes and census timings, but not necessarily to different species. Our approach allows for reanalysis, and possible reinterpretation, of temporal trends in tree growth, above-ground biomass change, or carbon fluxes in forests, and of their relationships with resource availability in the context of climate change. (C) 2014 Elsevier B.V. All rights reserved.
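
A minimal sketch of the proxy-based correction idea described above; the function, variable names and numbers are hypothetical, and the paper's exact regression setup is not reproduced here.

```python
import numpy as np

def correct_plot_growth(tape_growth_plot, tape_growth_proxy, ring_growth_proxy):
    """Correct tape-based growth estimates for reversible stem shrinkage.

    tape_growth_plot  : per-census tape-based growth estimates at the plot (PSP) scale
    tape_growth_proxy : per-census tape-based growth of the proxy trees
    ring_growth_proxy : per-census ring-based growth of the same proxy trees

    The shrinkage bias of the proxy trees (tape minus rings) is regressed
    linearly on their tape-based growth, and the fitted relation is used to
    predict and subtract the bias at the plot scale, following the idea in
    the abstract.
    """
    bias = np.asarray(tape_growth_proxy) - np.asarray(ring_growth_proxy)
    slope, intercept = np.polyfit(tape_growth_proxy, bias, deg=1)
    predicted_bias = slope * np.asarray(tape_growth_plot) + intercept
    return np.asarray(tape_growth_plot) - predicted_bias

# Hypothetical per-census values (arbitrary units of woody growth per census).
tape_plot  = [2.1, 1.4, 2.8, 0.9, 2.3, 1.7]
tape_proxy = [2.0, 1.3, 2.9, 1.0, 2.2, 1.6]
ring_proxy = [1.8, 1.5, 2.4, 1.3, 2.0, 1.7]
print(correct_plot_growth(tape_plot, tape_proxy, ring_proxy).round(2))
```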

Relevance: 10.00%

Abstract:

Matroidal networks were introduced by Dougherty et al. and have been well studied in the recent past. It was shown that a network has a scalar linear network coding solution if and only if it is a matroidal network associated with a representable matroid. A particularly interesting feature of this development is the ability to construct (scalar and vector) linearly solvable networks using certain classes of matroids. Furthermore, it was shown through the connection between network coding and matroid theory that linear network coding is not always sufficient for general network coding scenarios. The current work attempts to establish a connection between matroid theory and network-error correcting and detecting codes. In a similar vein to the theory connecting matroids and network coding, we abstract the essential aspects of linear network-error detecting codes to arrive at the definition of a matroidal error detecting network (and similarly, a matroidal error correcting network, abstracted from network-error correcting codes). An acyclic network (with arbitrary sink demands) is then shown to possess a scalar linear error detecting (correcting) network code if and only if it is a matroidal error detecting (correcting) network associated with a representable matroid. Therefore, constructing such network-error correcting and detecting codes implies the construction of certain representable matroids that satisfy some special conditions, and vice versa. We then present algorithms that enable the construction of matroidal error detecting and correcting networks with a specified capability of network-error correction. Using these construction algorithms, a large class of hitherto unknown scalar linearly solvable networks with multisource, multicast, and multiple-unicast network-error correcting codes is made available for theoretical use and practical implementation, with parameters such as the number of information symbols, number of sinks, number of coding nodes, and error correcting capability being arbitrary, limited only by the computing power available for executing the algorithms. The complexity of the construction of these networks is shown to be comparable to the complexity of existing algorithms that design multicast scalar linear network-error correcting codes. Finally, we also show that linear network coding is not sufficient for the general network-error correction (detection) problem with arbitrary demands. In particular, for the same number of network errors, we show a network for which there is a nonlinear network-error detecting code satisfying the demands at the sinks, whereas there are no linear network-error detecting codes that do the same.

Relevance: 10.00%

Abstract:

Purpose: To propose an image reconstruction technique, the algebraic reconstruction technique with refraction correction (ART-rc). The proposed method accounts for the refractive index mismatch present at the boundary of the gel dosimeter scanner and also corrects for interior ray refraction. Polymer gel dosimeters with high dose regions have a higher refractive index and optical density compared to the background medium; these changes in refractive index at high dose result in interior ray bending. Methods: The inclusion of the effects of refraction is an important step in the reconstruction of optical density in gel dosimeters. The proposed ray tracing algorithm models the interior multiple refraction at the inhomogeneities. Jacobs' ray tracing algorithm has been modified to calculate the pathlengths of the ray that traverses through the higher dose regions. The algorithm computes the length of the ray in each pixel along its path, and this is used as the weight matrix. The algebraic reconstruction technique and pixel-based reconstruction algorithms are used for solving the reconstruction problem. The proposed method is tested with numerical phantoms for various noise levels. Experimental dosimetric results are also presented. Results: The results show that the proposed ART-rc scheme is able to reconstruct the optical density inside the dosimeter better than filtered backprojection and conventional algebraic reconstruction approaches. The quantitative improvement using ART-rc is evaluated using the gamma index. The refraction errors due to regions of different refractive indices are discussed. The effects of modeling the interior refraction in the dose region are presented. Conclusions: The errors propagated due to multiple refraction effects have been modeled, and the improvement in reconstruction using the proposed model is presented. The refractive index of the dosimeter has a mismatch with the surrounding medium (for dry air or water scanning). The algorithm reconstructs the dose profiles by estimating the refractive indices of multiple inhomogeneities, having different refractive indices and optical densities, embedded in the dosimeter. This is achieved by tracking the path of the ray that traverses through the dosimeter. Extensive simulation studies have been carried out, and the results are found to match the experimental results. (C) 2015 American Association of Physicists in Medicine.
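
For readers unfamiliar with the underlying solver, a minimal sketch of a plain ART (Kaczmarz) update is given below; in ART-rc the refraction-corrected ray tracing enters only through how the weight matrix (ray pathlengths per pixel) is built, which is not implemented in this sketch.

```python
import numpy as np

def art_reconstruct(A, b, n_iters=20, relax=0.5):
    """Plain algebraic reconstruction technique (Kaczmarz iterations).

    A : (n_rays, n_pixels) weight matrix; A[i, j] is the pathlength of ray i
        in pixel j (in ART-rc this would come from refraction-corrected
        ray tracing, which is not implemented in this sketch).
    b : (n_rays,) measured projection data (e.g., log attenuation per ray).
    Returns the reconstructed pixel values (e.g., optical densities).
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)
    for _ in range(n_iters):
        for i in range(A.shape[0]):
            if row_norms[i] == 0:
                continue
            residual = b[i] - A[i] @ x
            x += relax * residual / row_norms[i] * A[i]
    return x

# Tiny hypothetical example: 3 rays through a 2x2 image (4 pixels).
A = np.array([[1.0, 1.0, 0.0, 0.0],   # ray through the top row
              [0.0, 0.0, 1.0, 1.0],   # ray through the bottom row
              [1.0, 0.0, 0.0, 1.0]])  # diagonal ray
true_x = np.array([0.2, 0.4, 0.1, 0.3])
b = A @ true_x
print(art_reconstruct(A, b, n_iters=200).round(3))
```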