191 results for Numerical Computations

at Indian Institute of Science - Bangalore - India


Relevance: 100.00%

Abstract:

This paper describes the architecture of a multiprocessor system, which we call the Broadcast Cube System (BCS), for solving important computation-intensive problems such as systems of linear algebraic equations and partial differential equations (PDEs), and highlights its features. Further, it presents an analytical performance study of the BCS and describes the main details of the design and implementation of a simulator for the BCS.

Relevance: 70.00%

Abstract:

The design of present-generation uncooled Hg1-xCdxTe infrared photon detectors relies on complex heterostructures with a basic unit cell of type n̄⁺/π/p̄⁺. We present an analysis of a double-barrier n̄⁺/π/p̄⁺ mid-wave infrared (x = 0.3) HgCdTe detector for near-room-temperature operation using numerical computations. The present work proposes an accurate and generalized methodology, in terms of the device design, material properties, and operating temperature, to study the effects of the position dependence of carrier concentration, electrostatic potential, and generation-recombination (g-r) rates on detector performance. Position-dependent profiles of electrostatic potential, carrier concentration, and g-r rates were simulated numerically. The performance of the detector was studied as a function of the doping concentrations of the absorber and contact layers, the widths of both layers, and the minority-carrier lifetime. A responsivity of ~0.38 A W^(-1), a noise current of ~6 × 10^(-14) A Hz^(-1/2), and D* of ~3.1 × 10^10 cm Hz^(1/2) W^(-1) at 0.1 V reverse bias were calculated using optimized values of the doping concentration, absorber width, and carrier lifetime. The suitability of the method is illustrated by demonstrating the feasibility of achieving optimum device performance by carefully selecting the device design and other parameters. (C) 2010 American Institute of Physics. [doi:10.1063/1.3463379]

Relevance: 60.00%

Abstract:

In this paper the kinematics of a curved shock of arbitrary strength is discussed using the theory of generalised functions. This extends Maslov's work, in which isentropic flow was assumed even across the shock. The condition for a nontrivial jump in the flow variables gives the shock manifold equation (SME). An equation for the rate of change of shock strength along the shock rays (defined as the characteristics of the SME) has been obtained. This exact result is then compared with the approximate result of shock dynamics derived by Whitham. The comparison shows that the approximate equations of shock dynamics deviate considerably from the exact equations derived here. In the last section we derive the conservation form of our shock dynamics equations. These conservation forms would be very useful in numerical computations, since they allow difference schemes in which the shock-shock need not be fitted explicitly.
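The abstract does not reproduce the actual shock dynamic equations; as a hedged illustration of why a conservation form matters numerically, consider a generic one-dimensional system of conservation laws and its conservative difference scheme (the symbols U and F here are placeholders, not the paper's shock-strength variables):

```latex
% Generic conservation-law system; U and F are illustrative placeholders,
% not the paper's shock-strength variables.
\partial_t U + \partial_\alpha F(U) = 0
% A conservative finite-volume update,
U_i^{n+1} = U_i^{n}
  - \frac{\Delta t}{\Delta \alpha}\left(F_{i+1/2} - F_{i-1/2}\right),
% captures discontinuities (here, shock-shocks) automatically,
% with no explicit fitting required.
```

This is the general mechanism the abstract alludes to: once the equations are in conservation form, shock-shocks emerge from the scheme itself rather than needing to be tracked.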

Relevance: 60.00%

Abstract:

A finite circular cylindrical shell subjected to a band of uniform pressure on its outer rim was investigated using three-dimensional elasticity theory and the classical shell theories of Timoshenko (or Donnell) and Flügge. A detailed comparison of the resulting stresses and displacements was carried out for shells with ratios of inner to outer shell radius equal to 0.80, 0.85, 0.90 and 0.93, and ratios of outer shell diameter to shell length equal to 0.5, 1 and 2. The ratio of band width to shell length was 0.2, and a Poisson's ratio of 0.3 was used. An Elliott 803 digital computer was used for the numerical computations.

Relevance: 60.00%

Abstract:

The laminar boundary layer over a stationary infinite disk induced by a rotating compressible fluid is considered. The free-stream velocity is taken as tangential and varies as a power of the radius, i.e. v∞ ∝ r^(−n). The effects of an axial magnetic field and suction are also included in the analysis. An implicit finite-difference scheme is employed to solve the governing similarity equations numerically. Solutions are studied for various values of the disk-to-fluid temperature ratio and for values of n between 1 and −1. In the absence of the magnetic field and suction, the velocity profiles exhibit oscillations. It is observed that for a hot disk in the presence of a magnetic field the boundary-layer solutions decay algebraically instead of exponentially. In the absence of the magnetic field and suction, the solution of the similarity equations exists only for a certain range of n.
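The similarity equations themselves are not given in the abstract; the following is a minimal sketch of the implicit finite-difference machinery on a stand-in linear model problem f'' + f' = 0, f(0) = 0, f(L) = 1 (the equation, domain length, and grid size are illustrative assumptions, not the paper's):

```python
import math

# Hedged model problem (not the paper's similarity equations):
#   f''(x) + f'(x) = 0,  f(0) = 0,  f(L) = 1,
# discretized with central differences and solved implicitly
# (all interior unknowns coupled) via the Thomas algorithm.
L_DOM, N = 5.0, 200               # domain length, number of intervals
h = L_DOM / N
n = N - 1                         # interior unknowns f[1..N-1]

a_sub = [1.0 / h**2 - 1.0 / (2 * h)] * (n - 1)   # coefficient of f[i-1]
b_dia = [-2.0 / h**2] * n                         # coefficient of f[i]
c_sup = [1.0 / h**2 + 1.0 / (2 * h)] * (n - 1)   # coefficient of f[i+1]
d_rhs = [0.0] * n
d_rhs[-1] -= (1.0 / h**2 + 1.0 / (2 * h)) * 1.0  # move f(L) = 1 to the RHS

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a/b/c are sub-/main/super-diagonals."""
    m = len(b)
    bp, dp = b[:], d[:]
    for i in range(1, m):
        w = a[i - 1] / bp[i - 1]
        bp[i] -= w * c[i - 1]
        dp[i] -= w * dp[i - 1]
    x = [0.0] * m
    x[-1] = dp[-1] / bp[-1]
    for i in range(m - 2, -1, -1):
        x[i] = (dp[i] - c[i] * x[i + 1]) / bp[i]
    return x

f = [0.0] + thomas(a_sub, b_dia, c_sup, d_rhs) + [1.0]
exact = [(1 - math.exp(-i * h)) / (1 - math.exp(-L_DOM)) for i in range(N + 1)]
max_err = max(abs(p - q) for p, q in zip(f, exact))
```

The same tridiagonal solve would be the inner kernel applied at each iteration of an implicit scheme for the actual nonlinear similarity system.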

Relevance: 60.00%

Abstract:

Task-parallel languages are increasingly popular. Many of them provide expressive mechanisms for intertask synchronization. For example, OpenMP 4.0 will integrate data-driven execution semantics derived from the StarSs research language. Compared to the more restrictive data-parallel and fork-join concurrency models, the advanced features being introduced into task-parallel models enable improved scalability through load balancing, memory latency hiding, mitigation of the pressure on memory bandwidth, and, as a side effect, reduced power consumption. In this article, we develop a systematic approach to compile loop nests into concurrent, dynamically constructed graphs of dependent tasks. We propose a simple and effective heuristic that selects the most profitable parallelization idiom for every dependence type and communication pattern. This heuristic enables the extraction of interband parallelism (cross-barrier parallelism) in a number of numerical computations ranging from linear algebra to structured grids and image processing. The proposed static analysis and code generation alleviate the burden of a full-blown dependence resolver that tracks the readiness of tasks at runtime. We evaluate our approach and algorithms in the PPCG compiler, targeting OpenStream, a representative dataflow task-parallel language with explicit intertask dependences and a lightweight runtime. Experimental results demonstrate the effectiveness of the approach.
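As a hedged sketch of the idea (Python threads standing in for the compiler-generated task graph; the 3-point-stencil dependence pattern and the sizes are assumptions, not taken from the paper): each tile task (t, i) depends only on tiles (t-1, i-1), (t-1, i), (t-1, i+1) of the previous time step, so tiles of different time steps can run concurrently with no barrier between steps.

```python
from concurrent.futures import ThreadPoolExecutor

# Tile (t, i) waits on its 3-point-stencil predecessors from step t-1
# instead of on a barrier, exposing cross-barrier (interband) parallelism.
T, N = 4, 6

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = {}
    for t in range(T):
        for i in range(N):
            deps = [futures[(t - 1, j)] for j in (i - 1, i, i + 1)
                    if t > 0 and 0 <= j < N]
            def work(deps=deps):
                # .result() blocks until every predecessor tile is done
                return 1 + sum(f.result() for f in deps)
            futures[(t, i)] = pool.submit(work)
    result = {k: f.result() for k, f in futures.items()}

# Barrier-style sequential reference for comparison
ref = {}
for t in range(T):
    for i in range(N):
        ref[(t, i)] = 1 + sum(ref[(t - 1, j)] for j in (i - 1, i, i + 1)
                              if t > 0 and 0 <= j < N)
```

Submission order guarantees a task's predecessors are always ahead of it in the pool's FIFO queue, so the blocking `.result()` calls cannot deadlock the pool.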

Relevance: 30.00%

Abstract:

In this work, the mechanics of tubular hydroforming under various types of loading conditions is investigated. The main objective is to contrast the effects of prescribing fluid pressure or volume flow rate, in conjunction with axial displacement, on the stress and strain histories experienced by the tube and the process of bulging. To this end, axisymmetric finite element simulations of free hydroforming (without external die contact) of aluminium alloy tubes are carried out. Hill’s normally anisotropic yield theory along with material properties determined in a previous experimental study [A. Kulkarni, P. Biswas, R. Narasimhan, A. Luo, T. Stoughton, R. Mishra, A.K. Sachdev, An experimental and numerical study of necking initiation in aluminium alloy tubes during hydroforming, Int. J. Mech. Sci. 46 (2004) 1727–1746] are employed in the computations. It is found that while prescribed fluid pressure leads to highly non-proportional strain paths, specified fluid volume flow rate may result in almost proportional ones for the predominant portion of loading. The peak pressure increases with axial compression for the former, while the reverse trend applies under the latter. The implication of these results on failure by localized necking of the tube wall is addressed in a subsequent investigation.

Relevance: 30.00%

Abstract:

The Upwind Least-Squares Finite Difference (LSFD-U) scheme has been successfully applied to inviscid flow computations. In the present work, we extend the procedure to computing viscous flows. Different ways of discretizing the viscous fluxes are analysed for positivity, which determines the robustness of the solution procedure. The discretization found to be the most positive is employed for the viscous flux computation. Numerical results validating the procedure are presented.
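The paper's actual viscous-flux discretizations are not given in the abstract; the toy below (a 1D explicit diffusion stencil, an assumption chosen purely for illustration) shows what a positivity check looks like and why it governs robustness:

```python
# A scheme  u_i^{n+1} = sum_j c_j * u_{i+j}^n  is "positive" when every
# coefficient c_j >= 0, which gives a discrete maximum principle
# (no spurious oscillations).  For explicit central diffusion,
#   u_i^{n+1} = r*u[i-1] + (1 - 2r)*u[i] + r*u[i+1],  r = nu*dt/dx^2,
# positivity requires r <= 1/2.  (Illustrative stand-in, not the
# LSFD-U viscous discretization itself.)

def coefficients(r):
    return [r, 1.0 - 2.0 * r, r]

def is_positive(coeffs):
    return all(c >= 0.0 for c in coeffs)

def step(u, r):
    """One explicit diffusion step; endpoints held fixed (Dirichlet)."""
    n = len(u)
    return [u[i] if i in (0, n - 1) else
            r * u[i - 1] + (1 - 2 * r) * u[i] + r * u[i + 1]
            for i in range(n)]

u0 = [0.0] * 10 + [1.0] + [0.0] * 10   # spike initial data
u_good = step(u0, 0.4)                 # positive scheme: stays nonnegative
u_bad  = step(u0, 0.8)                 # violates positivity: undershoots
```

With r = 0.8 the centre coefficient is negative, and the update drives the spike below zero in a single step, which is exactly the non-robust behaviour a positivity analysis screens out.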

Relevance: 30.00%

Abstract:

This paper presents a numerical simulation of the well-documented, fluid-controlled Kabbal- and Ponmudi-type gneiss-charnockite transformations in southern India using a free-energy minimization method. The computations consider all the major solid phases and important fluid species in the rock - C-O-H and rock - C-O-H-N systems. Appropriate activity-composition relations for the solid solutions and equations of state for the fluids are included in order to evaluate the mineral-fluid equilibria attending the incipient charnockite development in the gneisses. The C-O-H fluid speciation pattern in both the Kabbal- and Ponmudi-type systems indicates that CO2 and H2O make up the bulk of the fluid phase, with CO, CH4, H2 and O2 as minor constituents. In the graphite-buffered Ponmudi system, the abundances of CO, CH4 and H2 are orders of magnitude higher than in the graphite-free Kabbal system. Simulation with C-O-H-N fluids of varying composition demonstrates the complementary role of CO2 and N2 as rather inert diluents of H2O in the fluid phase. The simulation, carried out on available whole-rock data, demonstrates the dependence of the transformation X(H2O) on P, T, and the phase and chemical composition of the precursor gneiss.

Relevance: 30.00%

Abstract:

The questions one should answer in engineering computations - deterministic, probabilistic/randomized, as well as heuristic - are (i) how good the computed results/outputs are and (ii) what the cost is, in terms of the amount of computation and the amount of storage used in obtaining the outputs. Absolutely error-free quantities, like the completely errorless computations occurring in a natural process, can never be captured by any means at our disposal. While the computations in nature and natural processes, including their real input quantities, are exact, the computations we perform on a digital computer, or that are carried out in an embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it; this error, as a matter of hypothesis rather than assumption, is not less than 0.005 per cent. By error we mean relative error-bounds: since the exact error is never known under any circumstances or in any context, the term error is nothing but an error-bound. Further, in engineering computations it is the relative error, or equivalently the relative error-bound (and not the absolute error), that is supremely important in conveying the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created from nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency through human error, through the inherent non-removable error associated with any measuring device, or through assumptions introduced to make the problem solvable or more easily solvable in practice. Thus if we discover any inconsistency, or possibly near-inconsistency, in a mathematical model, it is certainly due to one or more of these three factors.
We do, however, go ahead and solve such inconsistent/near-inconsistent problems and obtain results that can be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It characterizes the quality of the results/outputs by specifying relative error-bounds along with the associated confidence level, and the cost - the amount of computation and of storage - through complexity. It points out the limitations of error-free computation (wherever possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
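A minimal sketch of the relative-error-bound bookkeeping the talk refers to (the measured quantities and the first-order product rule below are illustrative assumptions, not the talk's examples):

```python
# The stated instrument floor: the relative error-bound of any measured
# input is not less than 0.005 per cent.
REL_BOUND = 5e-5

def product_rel_bound(*bounds):
    """First-order relative error-bound of a product/quotient of inputs
    (relative bounds add under multiplication and division)."""
    return sum(bounds)

# Hypothetical example: power P = V * I from two measured quantities.
V, I = 230.0, 5.0
p_rel = product_rel_bound(REL_BOUND, REL_BOUND)   # relative bound on P
p_abs = p_rel * (V * I)                           # absolute bound on P
```

The relative bound (0.01 per cent) is independent of the magnitudes, while the absolute bound scales with them - the talk's point that relative error-bounds are the meaningful measure of output quality.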

Relevance: 30.00%

Abstract:

Critical applications like cyclone tracking and earthquake modeling require simultaneous high-performance simulation and online visualization for timely analysis. Faster simulations and simultaneous visualization enable scientists to provide real-time guidance to decision makers. In this work, we have developed an integrated user-driven and automated steering framework that simultaneously performs numerical simulations and efficient online remote visualization of critical weather applications in resource-constrained environments. It considers application dynamics, like the criticality of the application, and resource dynamics, like the storage space, network bandwidth and available number of processors, to adapt various application and resource parameters such as simulation resolution, simulation rate and the frequency of visualization. We formulate the problem of finding an optimal set of simulation parameters as a linear programming problem. This leads to a 30% higher simulation rate and 25-50% lower storage consumption than a naive greedy approach. The framework also gives the user control over various application parameters like the region of interest and the simulation resolution. We have also devised an adaptive algorithm to reduce the lag between the simulation and visualization times. Through experiments with different network bandwidths, we find that our adaptive algorithm is able to reduce lag as well as visualize the most representative frames.
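A hedged toy stand-in for the framework's parameter selection (the paper formulates it as a linear program; here a small discrete grid is simply enumerated, and every number, budget, and cost model below is invented for illustration):

```python
# Candidate settings: resolution refinement level and visualization stride.
resolutions = [1, 2, 4]          # higher level = finer grid
viz_strides = [1, 2, 5, 10]      # visualize every k-th timestep

STORAGE_BUDGET   = 100.0         # MB/s of frame data we may store
BANDWIDTH_BUDGET = 20.0          # MB/s we may ship to the remote viewer
MIN_RES          = 2             # criticality floor on resolution

def sim_rate(res):               # timesteps per second (toy cost model)
    return 40.0 / res**2

def frame_size(res):             # MB per visualization frame (toy model)
    return 2.0 * res**2

best = None                      # (simulation rate, resolution, stride)
for res in [r for r in resolutions if r >= MIN_RES]:
    for k in viz_strides:
        rate    = sim_rate(res)
        io_mb_s = (rate / k) * frame_size(res)   # frame traffic per second
        if io_mb_s <= STORAGE_BUDGET and io_mb_s <= BANDWIDTH_BUDGET:
            if best is None or rate > best[0]:
                best = (rate, res, k)            # ties keep smaller stride
```

The search maximises simulation rate subject to the storage and bandwidth budgets, with a criticality-driven floor on resolution - the same trade-off the paper's linear program captures, minus the LP machinery.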

Relevance: 30.00%

Abstract:

An operator-splitting finite element method for solving high-dimensional parabolic equations is presented. The stability and error estimates are derived for the proposed numerical scheme. Furthermore, two variants of fully practical operator-splitting finite element algorithms, based on the quadrature points and the nodal points respectively, are presented. Both the quadrature- and the nodal-point-based operator-splitting algorithms are validated using a three-dimensional (3D) test problem. The numerical results obtained with the full 3D computations and the operator-split 2D + 1D computations are found to be in good agreement with the analytical solution. Further, the optimal order of convergence is obtained with both variants of the operator-splitting algorithms. (C) 2012 Elsevier Inc. All rights reserved.
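The paper's method is an operator-splitting finite element scheme; the sketch below uses explicit finite differences instead (an assumption, chosen to stay short) purely to illustrate the dimensional split of one 2D step into successive 1D sweeps:

```python
# Lie splitting of one 2D heat-equation step  u_t = u_xx + u_yy  into a
# 1D sweep in x followed by a 1D sweep in y (finite differences stand in
# for the paper's finite elements).
N, r = 21, 0.2                    # grid points per direction, nu*dt/h^2

def sweep(u, axis, r):
    """Explicit 1D diffusion sweep along one axis (Dirichlet edges)."""
    v = [row[:] for row in u]
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            if axis == 0:
                v[i][j] = u[i][j] + r * (u[i - 1][j] - 2 * u[i][j] + u[i + 1][j])
            else:
                v[i][j] = u[i][j] + r * (u[i][j - 1] - 2 * u[i][j] + u[i][j + 1])
    return v

u = [[0.0] * N for _ in range(N)]
u[N // 2][N // 2] = 1.0           # point source in the middle

for _ in range(10):               # ten split steps: x-sweep then y-sweep
    u = sweep(u, 0, r)
    u = sweep(u, 1, r)

peak = max(max(row) for row in u)
total = sum(sum(row) for row in u)
```

Each sweep only ever couples unknowns along one coordinate line, which is exactly what makes the operator-split 2D + 1D (or 3D as 2D + 1D) computations cheap relative to the full multidimensional solve.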

Relevance: 30.00%

Abstract:

Adaptive Mesh Refinement (AMR) is a method which dynamically varies the spatio-temporal resolution of localized mesh regions in numerical simulations, based on the strength of the solution features. In-situ visualization plays an important role in analyzing the time-evolving characteristics of the domain structures. Continuous visualization of the output data across timesteps enables better study of the underlying domain and of the model used to simulate it. In this paper, we develop strategies for continuous online visualization of time-evolving data for AMR applications executed on GPUs. We reorder the meshes for computation on the GPU based on the user's input about the subdomain to be visualized, which makes the data available for visualization at a faster rate. We then perform asynchronous execution of the visualization steps and fix-up operations on the CPUs while the GPU advances the solution. Through experiments on Tesla S1070 and Fermi C2070 clusters, we found that our strategies yield a 60% improvement in response time and a 16% improvement in the rate of visualization of frames over the existing strategy of performing fix-ups and visualization at the end of the timesteps.
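A hedged miniature of the compute/visualize overlap (Python threads stand in for the GPU and the CPU-side visualizer; the decaying "simulation" and the frame contents are invented):

```python
import threading
import queue

frames = queue.Queue()
rendered = []                     # (step, rendered summary) in arrival order

def visualizer():
    """CPU-side consumer: renders snapshots while the 'GPU' keeps computing."""
    while True:
        item = frames.get()
        if item is None:          # sentinel: simulation finished
            break
        step, data = item
        rendered.append((step, sum(data)))   # stand-in for real rendering

viz = threading.Thread(target=visualizer)
viz.start()

state = [1.0] * 8                 # toy solution vector
for step in range(5):
    frames.put((step, state[:]))  # hand a snapshot to the visualizer...
    state = [x * 0.5 for x in state]  # ...while the "GPU" advances the solution

frames.put(None)
viz.join()
```

The producer never waits for rendering to finish, so visualization overlaps computation instead of serialising after each timestep - the same structure as running fix-ups and visualization on the CPUs while the GPU advances the solution.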

Relevance: 20.00%

Abstract:

Friction has an important influence on metal forming operations, as it contributes to the success or otherwise of the process. In the present investigation, the effect of friction on metal forming was studied by simulating compression tests on cylindrical Al-Mg alloy specimens using the finite element method (FEM). Three kinds of compression tests were considered, in which a constant coefficient of friction was employed at the upper die-work-piece interface while the coefficient of friction at the lower die-work-piece interface was varied between the tests. The simulation results showed that a difference in metal flow occurs near the interfaces owing to the differences in the coefficient of friction. It was concluded that variations in the coefficient of friction between the dies and the work-piece directly affect the stress distribution and the shape of the work-piece, with implications for the microstructure of the material being processed.

Relevance: 20.00%

Abstract:

Lasers are very efficient at heating localized regions and hence find wide application in surface treatment processes. The surface of a material can be selectively modified to give superior wear and corrosion resistance. In laser surface-melting and welding problems, the high temperature gradient prevailing at the free surface induces a surface-tension gradient, which is the dominant driving force for convection (known as thermo-capillary or Marangoni convection). It has been reported that surface-tension driven convection plays a dominant role in determining the melt pool shape.

In most of the earlier work on laser-melting and related problems, the finite difference method (FDM) has been used to solve the Navier-Stokes equations [1]. Since the Reynolds number is quite high in these cases, upwinding has been used. Though upwinding gives physically realistic solutions even on a coarse grid, the results are inaccurate. McLay and Carey have solved the thermo-capillary flow in welding problems by an implicit finite element method [2]. They used the conventional Galerkin finite element method (FEM), which requires that pressure be interpolated one order lower than velocity (mixed interpolation). This restricts the choice of elements to certain higher-order elements that need numerical integration for the evaluation of element matrices. The implicit algorithm yields a system of nonlinear, unsymmetric equations which are not positive definite, so computations would be possible only on large mainframe computers. Sluzalec [3] has modeled the pulsed laser-melting problem by an explicit FEM using the six-node triangular element with mixed interpolation; since he considered only buoyancy-induced flow, the velocity values are small.

In the present work, an equal-order explicit FEM is used to compute the thermo-capillary flow in the laser surface-melting problem. As this method permits equal-order interpolation, there is no restriction on the choice of elements; even linear elements such as the three-node triangle can be used. As the governing equations are solved in a sequential manner, the computer memory requirement is low. The finite element formulation is discussed in this paper along with typical numerical results.