38 results for "other numerical approaches"

at Indian Institute of Science - Bangalore - India


Relevance: 100.00%

Abstract:

The nonlinear singular integral equation of transonic flow is examined, noting that standard numerical techniques are not applicable to solving it. The difficulties in approximating the integral term in this expression were overcome by special methods that mitigate the inaccuracies caused by standard approximations. It is shown how the infinite domain of integration can be reduced to a finite one; numerical results are plotted, demonstrating that the methods proposed here improve both accuracy and computational economy.
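The domain-reduction step mentioned above can be sketched with a generic integrand (not the transonic kernel itself, and the quadrature choice here is only illustrative): the substitution x = t/(1-t) maps the half-line [0, ∞) onto [0, 1), after which an ordinary finite-interval rule applies.

```python
import numpy as np

def integrate_halfline(f, n=200):
    """Approximate the integral of f over [0, inf) by substituting
    x = t/(1-t), dx = dt/(1-t)^2, and applying the composite midpoint
    rule on the finite interval [0, 1)."""
    t = (np.arange(n) + 0.5) / n       # midpoints in (0, 1)
    x = t / (1.0 - t)                  # mapped abscissae on (0, inf)
    w = 1.0 / (n * (1.0 - t) ** 2)     # midpoint weight times Jacobian
    return np.sum(f(x) * w)

# Sanity check against a known value: integral of exp(-x) over [0, inf) is 1.
approx = integrate_halfline(lambda x: np.exp(-x))
```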

Relevance: 80.00%

Abstract:

Hyper-redundant robots are characterized by the presence of a large number of actuated joints, many more than the number required to perform a given task. These robots have been proposed and used for many applications involving obstacle avoidance or, more generally, to provide enhanced dexterity in performing tasks. Making effective use of the extra degrees of freedom, or resolution of redundancy, has been an extensive topic of research, and several methods have been proposed in the literature. In this paper, we compare three known methods and show that an algorithm based on a classical curve called the tractrix leads to a more 'natural' motion of the hyper-redundant robot, with the displacements diminishing from the end-effector to the fixed base. In addition, since the actuators nearer the base 'see' a greater inertia due to the links farther away, smaller motion of the actuators nearer the base results in better motion of the end-effector as compared to the other two approaches. We present simulation and experimental results performed on a prototype eight-link planar hyper-redundant manipulator.
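The diminishing-displacement behaviour described above can be illustrated with a minimal follow-the-leader sketch: each link's tail is dragged toward the updated position of its head while the link length is preserved. This is a simplification for illustration only, not the paper's closed-form tractrix solution, and the chain geometry and head motion are arbitrary.

```python
import numpy as np

def drag_chain(joints, new_head):
    """Drag a planar link chain by its head: each link's tail is pulled
    toward the new position of its head while the link length is kept
    fixed, so displacements diminish from head to base."""
    joints = np.asarray(joints, dtype=float)
    lengths = np.linalg.norm(np.diff(joints, axis=0), axis=1)
    out = joints.copy()
    out[0] = new_head
    for i in range(1, len(out)):
        d = out[i] - out[i - 1]          # old tail relative to updated head
        out[i] = out[i - 1] + lengths[i - 1] * d / np.linalg.norm(d)
    return out

# Straight eight-link chain along x; drag the end-effector sideways by 0.5.
chain = np.column_stack([np.arange(9.0), np.zeros(9)])
moved = drag_chain(chain, new_head=[0.0, 0.5])
disp = np.linalg.norm(moved - chain, axis=1)  # per-joint displacement
```

The per-joint displacement norms decrease from the end-effector toward the base, while all link lengths are preserved.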

Relevance: 80.00%

Abstract:

Even research models of helicopter dynamics often lead to a large number of equations of motion with periodic coefficients; and Floquet theory is a widely used mathematical tool for dynamic analysis. Presently, three approaches are used in generating the equations of motion. These are (1) general-purpose symbolic processors such as REDUCE and MACSYMA, (2) a special-purpose symbolic processor, DEHIM (Dynamic Equations for Helicopter Interpretive Models), and (3) completely numerical approaches. In this paper, comparative aspects of the first two purely algebraic approaches are studied by applying REDUCE and DEHIM to the same set of problems. These problems range from a linear model with one degree of freedom to a mildly non-linear multi-bladed rotor model with several degrees of freedom. Further, computational issues in applying Floquet theory are also studied, which refer to (1) the equilibrium solution for periodic forced response together with the transition matrix for perturbations about that response and (2) a small number of eigenvalues and eigenvectors of the unsymmetric transition matrix. The study showed the following: (1) compared to REDUCE, DEHIM is far more portable and economical, but it is also less user-friendly, particularly during learning phases; (2) the problems of finding the periodic response and eigenvalues are well conditioned.
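The Floquet computations referred to above can be sketched on a one-degree-of-freedom example, a Mathieu oscillator with arbitrarily chosen parameters rather than a rotor model: integrating the columns of the identity matrix over one period yields the transition (monodromy) matrix, whose eigenvalues are the Floquet multipliers.

```python
import numpy as np
from scipy.integrate import solve_ivp

def monodromy(delta=1.5, eps=0.2, period=2.0 * np.pi):
    """Transition matrix over one period for x'' + (delta + eps*cos t) x = 0,
    built by integrating each column of the identity initial condition."""
    def rhs(t, y):
        x, v = y
        return [v, -(delta + eps * np.cos(t)) * x]
    cols = [solve_ivp(rhs, (0.0, period), y0, rtol=1e-10, atol=1e-12).y[:, -1]
            for y0 in ([1.0, 0.0], [0.0, 1.0])]
    return np.column_stack(cols)

Phi = monodromy()
multipliers = np.linalg.eigvals(Phi)  # Floquet multipliers
```

Since the system is trace-free, det Φ = 1 by Liouville's formula, and for these parameter values the multipliers lie on the unit circle, i.e. the periodic response is bounded.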

Relevance: 80.00%

Abstract:

The questions that one should answer in engineering computations - deterministic, probabilistic/randomized, as well as heuristic - are (i) how good the computed results/outputs are and (ii) how much they cost in terms of the amount of computation and the amount of storage used in obtaining them. The absolutely error-free quantities, as well as the completely errorless computations occurring in a natural process, can never be captured by any means at our disposal. While computations in nature/natural processes, including their input quantities, are exact, all the computations that we do using a digital computer, or that are carried out in embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error, as a matter of hypothesis and not as a matter of assumption, is not less than 0.005 per cent. Here by error we imply relative error bounds. The fact that the exact error is never known under any circumstances or in any context implies that the term error denotes nothing but error-bounds. Further, in engineering computations it is the relative error or, equivalently, the relative error-bounds (and not the absolute error) that is supremely important in providing information about the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems arising from nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency due to human error, due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable or more easily solvable in practice. Thus if we discover any inconsistency, or possibly any near-inconsistency, in a mathematical model, it is certainly due to one or more of the three foregoing factors.
We do, however, go ahead and solve such inconsistent/near-inconsistent problems, and we do get results that can be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It characterizes the quality of the results/outputs by specifying relative error-bounds along with the associated confidence level, and the cost, viz., the amount of computation and of storage, through complexity. It points out the limitations of error-free computation (where possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of interval arithmetic. Further, the interdependence among error, confidence, and cost is discussed.
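Two of the talk's points can be made concrete in a toy sketch (the numbers are illustrative): the hypothesised 0.005 per cent relative error bound on any measured input, first-order propagation of relative bounds through a product, and the overestimation of naive interval arithmetic caused by the dependency problem.

```python
# Hypothesised relative error bound on any measured input: 0.005 per cent.
R = 5e-5

# First-order propagation: for y = a*b with relative bound R on each input,
# the relative bound on y is approximately the sum of the input bounds.
rel_bound_product = R + R

# Naive interval arithmetic on f(x) = x - x with x in [1-R, 1+R]:
# interval subtraction treats the two operands as independent, so the
# result has width 4R even though f is identically zero.
lo, hi = 1.0 - R, 1.0 + R
diff_lo, diff_hi = lo - hi, hi - lo
width = diff_hi - diff_lo
```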

Relevance: 80.00%

Abstract:

The goal of optimization in vehicle design is often blurred by the myriad requirements belonging to attributes that may not be closely related. If solutions are sought by optimizing attribute performance-related objectives separately, starting with a common baseline design configuration as in a traditional design environment, it becomes an arduous task to integrate the potentially conflicting solutions into one satisfactory design. It may thus be more desirable to carry out a combined multi-disciplinary design optimization (MDO) with vehicle weight as the objective function and cross-functional attribute performance targets as constraints. For the particular case of vehicle body structure design, the initial design is likely to be arrived at taking into account styling, packaging, and market-driven requirements. The problem with performing a combined cross-functional optimization is the time associated with running CAE algorithms that can provide a single optimal solution for heterogeneous areas such as NVH and crash safety. In the present paper, a practical MDO methodology is suggested that can be applied to weight optimization of automotive body structures by specifying constraints on frequency and crash performance. Because of the reduced number of cases to be analyzed for crash safety in comparison with other MDO approaches, the present methodology can generate a single size-optimized solution without having to take recourse to empirical techniques such as response surface-based prediction of crash performance and the associated successive response-surface updating for convergence. An example of weight optimization of the spaceframe-based BIW of an aluminum-intensive vehicle is given to illustrate the steps involved in the current optimization process.

Relevance: 80.00%

Abstract:

We show that the upper bound for the central magnetic field of a super-Chandrasekhar white dwarf calculated by Nityananda and Konar [Phys. Rev. D 89, 103017 (2014)], and in the concerned comment by the same authors against our work [U. Das and B. Mukhopadhyay, Phys. Rev. D 86, 042001 (2012)], is erroneous. This in turn strengthens the argument in favor of the stability of the recently proposed magnetized super-Chandrasekhar white dwarfs. We also point out several other numerical errors in their work. Overall we conclude that the arguments put forth by Nityananda and Konar are misleading.

Relevance: 40.00%

Abstract:

The static response of thin, wrinkled membranes is studied using both a tension field approximation based on plane stress conditions and a 3D nonlinear elasticity formulation, discretized through 8-noded Cosserat point elements. While the tension field approach only identifies the wrinkled/slack regions and at best a measure of the extent of wrinkliness, the 3D elasticity solution provides, in principle, the deformed shape of a wrinkled/slack membrane. However, since membranes barely resist compression, the discretized and linearized system equations via both approaches are ill-conditioned, and solutions can thus be sensitive to discretization errors as well as other sources of noise/imperfections. We propose a regularized, pseudo-dynamical recursion scheme that provides a sequence of updates which are almost insensitive to the regularizing term as well as to the time step size used for integrating the pseudo-dynamical form. This is borne out through several numerical examples wherein the relative performance of the proposed recursion scheme vis-a-vis a regularized Newton strategy is compared. The pseudo-time marching strategy, when implemented using 3D Cosserat point elements, also provides a computationally cheaper, numerically accurate, and simpler alternative to using geometrically exact shell theories for computing large deformations of membranes in the presence of wrinkles.
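The flavour of such a regularized, pseudo-dynamical recursion can be conveyed on a generic ill-conditioned linear system, here a Hilbert matrix rather than the membrane equations; the backward-Euler update and all parameter values below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def pseudo_dynamic_solve(A, b, eps=1e-8, dt=100.0, steps=2000):
    """Backward-Euler pseudo-time marching toward the solution of the
    regularized system (A + eps*I) x = b: each update solves
    (I/dt + A + eps*I) x_new = x/dt + b, which is stable for any dt."""
    n = len(b)
    M = np.eye(n) / dt + A + eps * np.eye(n)
    x = np.zeros(n)
    for _ in range(steps):
        x = np.linalg.solve(M, x / dt + b)
    return x

# Ill-conditioned test problem: 6x6 Hilbert matrix with a known right-hand side.
n = 6
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
b = A @ np.ones(n)
x = pseudo_dynamic_solve(A, b)
residual = np.linalg.norm(A @ x - b)
```

Despite the matrix's condition number of roughly 1.5e7, the iterates settle to a small-residual solution, and the result changes little when eps or dt is varied within reason.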

Relevance: 30.00%

Abstract:

A strong-coupling expansion for the Green's functions, self-energies, and correlation functions of the Bose-Hubbard model is developed. We illustrate the general formalism, which includes all possible (normal-phase) inhomogeneous effects, such as disorder or a trap potential, as well as effects of thermal excitations. The expansion is then employed to calculate the momentum distribution of the bosons in the Mott phase for an infinite homogeneous periodic system at zero temperature through third order in the hopping. By using scaling theory for the critical behavior at zero momentum and at the critical value of the hopping for the Mott insulator-to-superfluid transition, along with a generalization of the random-phase-approximation-like form for the momentum distribution, we are able to extrapolate the series to infinite order and produce very accurate quantitative results for the momentum distribution in a simple functional form for one, two, and three dimensions. The accuracy is better in higher dimensions and is on the order of a few percent relative error everywhere except close to the critical value of the hopping divided by the on-site repulsion. In addition, we find simple phenomenological expressions for the Mott-phase lobes in two and three dimensions which are much more accurate than the truncated strong-coupling expansions and any other analytic approximation we are aware of. The strong-coupling expansions and scaling-theory results are benchmarked against numerically exact quantum Monte Carlo simulations in two and three dimensions and against density-matrix renormalization-group calculations in one dimension. These analytic expressions will be useful for quick comparison of experimental results to theory and in many cases can bypass the need for expensive numerical simulations.

Relevance: 30.00%

Abstract:

A numerical study on the columnar-to-equiaxed transition (CET) during directional solidification of binary alloys is presented using a macroscopic solidification model. The position of the CET is predicted numerically using a critical cooling rate criterion reported in the literature. The macroscopic solidification model takes into account movement of the solid phase due to buoyancy and the drag effect exerted on the moving solid phase by fluid motion. The model is applied to simulate the solidification process for binary alloys (Sn-Pb) and to estimate solidification parameters such as the position of the liquidus, the velocity of the liquidus isotherm, the temperature gradient ahead of the liquidus, and the cooling rate at the liquidus. Solidification phenomena under two cooling configurations are studied: one without melt convection and the other involving thermosolutal convection. The numerically predicted positions of the CET compare well with those of experiments reported in the literature. Melt convection results in higher cooling rates and higher liquidus isotherm velocities, and promotes the occurrence of the CET in comparison with the nonconvecting case. The movement of the solid phase further aids the process of CET. With a fixed solid phase, the occurrence of the CET based on the same critical cooling rate is delayed and occurs at a greater distance from the chill.

Relevance: 30.00%

Abstract:

The finite-difference form of the basic conservation equations in laminar film boiling has been solved by the false-transient method. By a judicious choice of the coordinate system, the vapour-liquid interface is fitted to the grid system. Central differencing is used for the diffusion terms, upwind differencing for the convection terms, and explicit differencing for the transient terms. Since an explicit method is used, the time step in the false-transient method is constrained by numerical stability. In the present problem the limits on the time step are imposed by conditions in the vapour region, while the rate of convergence of the finite-difference equations depends on conditions in the liquid region. The rate of convergence was accelerated by using the over-relaxation technique in the liquid region. The results obtained compare well with previous work and with experimental data available in the literature.
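The false-transient idea can be sketched on a much simpler problem than film boiling, steady one-dimensional conduction with fixed end temperatures: append a pseudo-time derivative and march explicitly until the solution stops changing, with the step size limited by the usual explicit stability bound r = Δτ/Δx² ≤ 1/2.

```python
import numpy as np

def false_transient_1d(n=21, r=0.4, tol=1e-10, max_steps=100_000):
    """March dT/dtau = d2T/dx2 explicitly to steady state for the
    steady conduction problem T'' = 0 with T(0) = 0 and T(1) = 1.
    r = dtau/dx^2 must stay below 0.5 for numerical stability."""
    T = np.zeros(n)
    T[-1] = 1.0
    for step in range(max_steps):
        T_new = T.copy()
        T_new[1:-1] = T[1:-1] + r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        if np.max(np.abs(T_new - T)) < tol:
            return T_new, step
        T = T_new
    return T, max_steps

T, steps = false_transient_1d()
exact = np.linspace(0.0, 1.0, 21)  # the steady solution is linear in x
```

Over-relaxation, as used in the liquid region of the paper, accelerates exactly this kind of slowly converging pseudo-time march.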

Relevance: 30.00%

Abstract:

Dynamic systems involving convolution integrals with decaying kernels, of which fractionally damped systems form a special case, are non-local in time and hence infinite dimensional. Straightforward numerical solution of such systems up to time t needs O(t^2) computations owing to the repeated evaluation of integrals over intervals that grow like t. Finite-dimensional and local approximations are thus desirable. We present here an approximation method which first rewrites the evolution equation as a coupled infinite-dimensional system with no convolution, and then uses Galerkin approximation with finite elements to obtain linear, finite-dimensional, constant-coefficient approximations for the convolution. This paper is a broad generalization, based on a new insight, of our prior work with fractional order derivatives (Singh & Chatterjee 2006, Nonlinear Dyn. 45, 183-206). In particular, the decaying kernels we can address are now generalized to the Laplace transforms of known functions; of these, the power law kernel of fractional order differentiation is a special case. The approximation can be refined easily. The local nature of the approximation allows numerical solution up to time t with O(t) computations. Examples with several different kernels show excellent performance. A key feature of our approach is that the dynamic system in which the convolution integral appears is itself approximated using another system, as distinct from numerically approximating just the solution for the given initial values; this allows non-standard uses of the approximation, e.g. in stability analyses.
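The idea of trading the history integral for extra local states can be illustrated in the special case of an exponential-sum kernel (the paper's Galerkin construction handles far more general Laplace-transform kernels; the coefficients and forcing below are arbitrary): if K(t) = Σᵢ aᵢ e^(-bᵢ t), then I(t) = ∫₀ᵗ K(t-s) f(s) ds = Σᵢ zᵢ(t) with żᵢ = -bᵢ zᵢ + aᵢ f(t), so marching finitely many ODE states replaces the ever-growing integral.

```python
import numpy as np

a = np.array([1.0, 0.5])   # kernel amplitudes (arbitrary)
b = np.array([2.0, 7.0])   # kernel decay rates (arbitrary)
f = np.cos                 # forcing (arbitrary)

dt, T = 1e-4, 2.0
steps = int(T / dt)

# Local O(t) approach: forward-Euler march of the auxiliary states z_i.
z = np.zeros_like(a)
for k in range(steps):
    z = z + dt * (-b * z + a * f(k * dt))
I_local = z.sum()

# Direct quadrature of the convolution at t = T for comparison
# (re-evaluating this at every output time is what costs O(t^2)).
s = np.arange(steps) * dt
K = (a[:, None] * np.exp(-b[:, None] * (T - s))).sum(axis=0)
I_direct = np.sum(K * f(s)) * dt
```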

Relevance: 30.00%

Abstract:

A method for total risk analysis of embankment dams under earthquake conditions is discussed and applied to selected embankment dams, i.e., Chang, Tapar, Rudramata, and Kaswati, located in the Kachchh region of Gujarat, India, to obtain the seismic hazard rating of each dam site and the risk rating of the structures. Based on the results of the total risk analysis of the dams, coupled non-linear dynamic numerical analyses of the dam sections are performed using the acceleration time history record of the Bhuj (India) earthquake as well as five other major earthquakes recorded worldwide. The objective is to perform the numerical analysis of the dams over a range of amplitudes, frequency contents, and time durations of input motion. The deformations calculated from the numerical analyses are also compared with other approaches available in the literature, viz., the Makdisi and Seed (1978) approach, Jansen's (1990) approach, Swaisgood's (1995) method, Bureau's (1997) method, the Singh et al. (2007) approach, and the Saygili and Rathje (2008) approach, and the results are utilized to foresee the stability of the dams in future earthquake scenarios.
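Several of the simplified displacement methods cited above descend from Newmark's rigid sliding-block idea, which can be sketched in a few lines; the yield accelerations and the synthetic ground motion below are illustrative values, not those of the dams studied.

```python
import numpy as np

def newmark_displacement(acc, dt, a_yield):
    """Newmark rigid sliding-block: the block slides whenever ground
    acceleration exceeds the yield acceleration, with relative
    acceleration (a - a_yield); sliding stops when the relative
    velocity returns to zero. Returns accumulated displacement."""
    v, d = 0.0, 0.0
    for a in acc:
        if v > 0.0 or a > a_yield:
            v = max(0.0, v + (a - a_yield) * dt)
        d += v * dt
    return d

g = 9.81
dt = 0.005
t = np.arange(0.0, 4.0, dt)
acc = 0.4 * g * np.sin(2.0 * np.pi * t) * np.exp(-0.5 * t)  # synthetic pulse, m/s^2

d_sliding = newmark_displacement(acc, dt, a_yield=0.1 * g)  # yield exceeded
d_none = newmark_displacement(acc, dt, a_yield=0.5 * g)     # yield never exceeded
```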

Relevance: 30.00%

Abstract:

The availability of an electrophoretically homogeneous rabbit penicillin carrier receptor protein (CRP) and rabbit antipenicillin antibody afforded an ideal in vitro system for calculating the thermodynamic parameters of the binding of the 14C-benzyl penicillin-CRP conjugate (antigen) to the purified rabbit antipenicillin antibody. The thermodynamic parameters of this antigen-antibody reaction have been studied by a radioactive assay method using Millipore filters. The equilibrium constant (K) of the reaction has been found to be 2.853×10^9 M^-2, and the corresponding free energy (ΔG) at 4°C and 37°C has been calculated to be -12.02 and -13.5 kcal/mole; the enthalpy (ΔH) and entropy (ΔS) have been found to be 361 kcal/mole and +30 eu/mole, respectively. Competitive binding studies of CRP-analogue conjugates with the divalent rabbit antibody have been carried out in the presence of 14C-penicilloyl CRP. It was found that the 7-deoxy penicillin-CRP complex and the 6-amino penicilloyl-CRP conjugate bind to the antibody more strongly than the 14C-penicilloyl CRP does. All the other analogue conjugates are much weaker in interfering with the binding of the penicilloyl CRP to the antibody. The conjugates of methicillin, o-nitrobenzyl penicillin, and ticarcillin with CRP do not materially interfere in the process.
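The quoted free energies are consistent with the standard relation ΔG = -RT ln K applied to the reported equilibrium constant, as a quick check shows (R in kcal/(mol·K)):

```python
import math

R = 1.987e-3   # gas constant, kcal/(mol K)
K = 2.853e9    # reported equilibrium constant

def delta_G(T_kelvin):
    """Standard free energy of binding, Delta G = -R*T*ln(K), in kcal/mole."""
    return -R * T_kelvin * math.log(K)

dG_4C = delta_G(277.15)    # 4 deg C
dG_37C = delta_G(310.15)   # 37 deg C
```

These evaluate to about -12.0 and -13.4 kcal/mole, matching the abstract's -12.02 and -13.5 within rounding.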

Relevance: 30.00%

Abstract:

The design of present-generation uncooled Hg(1-x)Cd(x)Te infrared photon detectors relies on complex heterostructures with a basic unit cell of type n̲+/π/p̲+. We present an analysis of a double-barrier n̲+/π/p̲+ mid-wave infrared (x = 0.3) HgCdTe detector for near-room-temperature operation using numerical computations. The present work proposes an accurate and generalized methodology, in terms of the device design, material properties, and operating temperature, to study the effects of the position dependence of carrier concentration, electrostatic potential, and generation-recombination (g-r) rates on detector performance. Position-dependent profiles of electrostatic potential, carrier concentration, and g-r rates were simulated numerically. The performance of the detector was studied as a function of the doping concentrations of the absorber and contact layers, the widths of both layers, and the minority carrier lifetime. A responsivity of ~0.38 A/W, a noise current of ~6 × 10^-14 A/Hz^(1/2), and D* of ~3.1 × 10^10 cm Hz^(1/2) W^-1 at 0.1 V reverse bias have been calculated using optimized values of doping concentration, absorber width, and carrier lifetime. The suitability of the method has been illustrated by demonstrating the feasibility of achieving optimum device performance by carefully selecting the device design and other parameters.

Relevance: 30.00%

Abstract:

The coherent flame model uses the strain rate to predict the reaction rate per unit flame surface area, together with some procedure that solves for the dynamics of flame surfaces to predict species distributions. The strain-rate formula for the reaction rate is obtained from the analytical solution for a flame in a laminar, plane stagnation point flow. Here, the formula's effectiveness is examined by comparisons with data from a direct numerical simulation (DNS) of a round jet-like flow that undergoes transition to turbulence. Significant differences due to general flow features can be understood qualitatively: model predictions are good in the braids between vortex rings, which are present in the near field of round jets, as the strain rate there is extensional and reaction surfaces are isolated. In several other regions, the strain rate is compressive or flame surfaces are folded close together; there, the predictions are poor, as the local flow no longer resembles the model flow. Quantitative comparisons showed some discrepancies. A modified, consistent application of the strain-rate solution did not show significant changes in the prediction of mean reaction rate distributions.