75 results for diet formulation criteria

Relevance: 20.00%

Publisher:

Abstract:

In this article, a minimum weight design of carbon/epoxy laminates is carried out using genetic algorithms. New failure envelopes, developed by combining two commonly used phenomenological failure criteria, namely Maximum Stress (MS) and Tsai-Wu (TW), are used to obtain the minimum weight of the laminate. These failure envelopes are the most conservative failure envelope (MCFE) and the least conservative failure envelope (LCFE). Uniaxial and biaxial loading conditions are considered, and the optimal laminate weights obtained with the MCFE and the LCFE are compared. The MCFE can be used for the design of critical load-carrying composites, while the LCFE could be used for composite structures where weight reduction matters more than safety, such as unmanned air vehicles.
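As a rough illustration of how two criteria can be combined into the two envelopes, the sketch below treats the MCFE as requiring both criteria to hold and the LCFE as requiring either. The strength allowables and the stress state are hypothetical, and the Tsai-Wu interaction term F12 is set to zero for simplicity:

```python
def max_stress_ok(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    # Maximum Stress: each in-plane stress component within its allowable
    return -Xc <= s1 <= Xt and -Yc <= s2 <= Yt and abs(t12) <= S

def tsai_wu_ok(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    # Tsai-Wu polynomial criterion (interaction term F12 taken as zero)
    F1, F2 = 1/Xt - 1/Xc, 1/Yt - 1/Yc
    F11, F22, F66 = 1/(Xt*Xc), 1/(Yt*Yc), 1/S**2
    return F1*s1 + F2*s2 + F11*s1**2 + F22*s2**2 + F66*t12**2 <= 1.0

def mcfe_ok(*args):
    # Most conservative failure envelope: both criteria must be satisfied
    return max_stress_ok(*args) and tsai_wu_ok(*args)

def lcfe_ok(*args):
    # Least conservative failure envelope: either criterion suffices
    return max_stress_ok(*args) or lcfe_helper(*args)

def lcfe_helper(*args):
    return tsai_wu_ok(*args)

# Hypothetical carbon/epoxy allowables (MPa) and a biaxial stress state
# near the tensile-tensile corner, where the two criteria disagree
allow = (1500.0, 1500.0, 40.0, 246.0, 68.0)
state = (1400.0, 38.0, 0.0)
print(mcfe_ok(*state, *allow), lcfe_ok(*state, *allow))  # -> False True
```

At such corner states the box-shaped MS envelope passes while the Tsai-Wu ellipsoid fails, which is exactly where the MCFE and LCFE designs diverge.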


On a characteristic surface Omega of a hyperbolic system of first-order equations in multi-dimensions (x, t), there exists a compatibility condition in the form of a transport equation along a bicharacteristic on Omega. This result can also be interpreted as a transport equation along rays of the wavefront Omega(t) in x-space associated with Omega. For a system of quasi-linear equations, the ray equations (which have two distinct parts) and the transport equation form a coupled system of underdetermined equations. As an example of this bicharacteristic formulation, we consider two-dimensional unsteady flow of an ideal magnetohydrodynamic gas with a plane aligned magnetic field. For any mode of propagation in this two-dimensional flow, there are three ray equations: two for the spatial coordinates x and y and one for the ray diffraction. Although the calculations are somewhat longer, the final four equations (three ray equations and one transport equation) for the fast magneto-acoustic wave are simple and elegant, and cannot be derived in these simple forms by use of a computer program like REDUCE.
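The generic shape of such a bicharacteristic system can be sketched in standard Hamiltonian ray form. This is only the textbook template, not the paper's specific MHD equations; there the characteristic function depends on the flow variables and the aligned magnetic field, which is what couples the ray and transport equations:

```latex
% Rays of a wavefront with local dispersion relation \omega(\mathbf{x},\mathbf{k}),
% together with a transport equation for the wave amplitude w along the ray
\frac{d\mathbf{x}}{dt} = \nabla_{\mathbf{k}}\,\omega(\mathbf{x},\mathbf{k}),
\qquad
\frac{d\mathbf{k}}{dt} = -\nabla_{\mathbf{x}}\,\omega(\mathbf{x},\mathbf{k}),
\qquad
\frac{dw}{dt} + h(\mathbf{x},\mathbf{k})\,w = 0,
```

where d/dt denotes differentiation along the ray. For a quasi-linear system, omega and the coefficient h depend on the solution itself, so the ray and transport equations do not close by themselves, which is the sense in which the coupled system is underdetermined.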


This paper deals with the direct position kinematics problem of a general 6-6 Stewart platform, a complete solution of which has not been reported in the literature until now; even establishing the number of possible solutions for the general case remained an unsolved problem for a long period. Here, a canonical formulation of the direct position kinematics problem for a general 6-6 Stewart platform is presented. The kinematic equations are expressed as a system of six quadratic and three linear equations in nine unknowns, which has a maximum of 64 solutions. Thus, it is established that the mechanism, in general, can have up to 64 closures. The system is then reduced further to a set of three quartic equations in three unknowns, the solution of which yields the assembly configurations of the general Stewart platform with far less computational effort than earlier models.
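The bound of 64 follows from a standard Bezout-type count: the three linear equations can be used to eliminate three unknowns, leaving six quadratics in six unknowns, and over the complex numbers the number of isolated solutions is at most the product of the degrees:

```latex
N \;\le\; \prod_{i=1}^{6} \deg f_i \;=\; 2^{6} \;=\; 64 .
```

The reduced system of three quartics is consistent with the same count, since 4^3 = 64 as well.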


We review work initiated and inspired by Sudarshan in relativistic dynamics, beam optics, partial coherence theory, Wigner distribution methods, multimode quantum optical squeezing, and geometric phases. The 1963 No Interaction Theorem using Dirac's instant form and particle World Line Conditions is recalled. Later attempts to overcome this result exploiting constrained Hamiltonian theory, reformulation of the World Line Conditions and extending Dirac's formalism, are reviewed. Dirac's front form leads to a formulation of Fourier Optics for the Maxwell field, determining the actions of First Order Systems (corresponding to matrices of Sp(2,R) and Sp(4,R)) on polarization in a consistent manner. These groups also help characterize properties and propagation of partially coherent Gaussian Schell Model beams, leading to invariant quality parameters and the new Twist phase. The higher dimensional groups Sp(2n,R) appear in the theory of Wigner distributions and in quantum optics. Elegant criteria for a Gaussian phase space function to be a Wigner distribution, expressions for multimode uncertainty principles and squeezing are described. In geometric phase theory we highlight the use of invariance properties that lead to a kinematical formulation and the important role of Bargmann invariants. Special features of these phases arising from unitary Lie group representations, and a new formulation based on the idea of Null Phase Curves, are presented.


The enthalpy method is primarily developed for studying phase change in a multicomponent material, characterized by a continuous liquid volume fraction (phi_l) vs temperature (T) relationship. Using the Galerkin finite element method, we obtain solutions to the enthalpy formulation for phase change in 1D slabs of pure material by assuming a superficial phase change region (linear phi_l vs T) around the discontinuity at the melting point. Errors between the computed and analytical solutions are evaluated for the fluxes at, and positions of, the freezing front, for different widths of the superficial phase change region and for spatial discretizations with linear and quadratic basis functions. For Stefan number (St) varying between 0.1 and 10, the method is relatively insensitive to the spatial discretization and the width of the superficial phase change region. Greater sensitivity is observed at St = 0.01, where the variation in the enthalpy is large. In general, the width of the superficial phase change region should span at least 2-3 Gauss quadrature points for the enthalpy to be computed accurately. The method is applied to study conventional melting of slabs of frozen brine and ice. Regardless of the form of the phi_l vs T relationship, the thawing times were found to scale as the square of the slab thickness. The ability of the method to efficiently capture multiple thawing fronts, which may originate at any spatial location within the sample, is illustrated with the microwave thawing of slabs and 2D cylinders. (C) 2002 Elsevier Science Ltd. All rights reserved.
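A minimal sketch of the two ingredients, assuming a pure material with constant specific heat c and latent heat L, and the linear phi_l vs T ramp over a superficial phase change region of half-width dT around the melting point Tm (all values hypothetical, water-like):

```python
def liquid_fraction(T, Tm=0.0, dT=0.5):
    # Linear phi_l vs T ramp smoothing the step at the melting point Tm
    if T <= Tm - dT:
        return 0.0
    if T >= Tm + dT:
        return 1.0
    return (T - (Tm - dT)) / (2.0 * dT)

def enthalpy(T, Tm=0.0, dT=0.5, c=4.2e3, L=3.34e5):
    # Sensible heat plus latent heat released across the superficial region
    return c * T + L * liquid_fraction(T, Tm, dT)

# The enthalpy jump across the region recovers the latent heat L
# plus the sensible contribution c * (2 * dT)
jump = enthalpy(0.5) - enthalpy(-0.5)
```

Narrowing dT sharpens the front but, as the abstract notes, the region must still span a few Gauss quadrature points for the element-level enthalpy integrals to be computed accurately.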


We develop a model of the solar dynamo in which, on the one hand, we follow the Babcock-Leighton approach to include surface processes, such as the production of poloidal field from the decay of active regions, and, on the other hand, we attempt to develop a mean field theory that can be studied in quantitative detail. One of the main challenges in developing such models is to treat the buoyant rise of the toroidal field and the production of poloidal field from it near the surface. A previous paper by Choudhuri, Schüssler, & Dikpati in 1995 did not incorporate buoyancy. We extend this model by two contrasting methods. In one method, we incorporate the generation of the poloidal field near the solar surface by Durney's procedure of double-ring eruption. In the second method, the poloidal field generation is treated by a positive α-effect concentrated near the solar surface coupled with an algorithm for handling buoyancy. The two methods are found to give qualitatively similar results.


Experiments on reverse transition were conducted in two-dimensional accelerated incompressible turbulent boundary layers. Mean velocity profiles, longitudinal velocity fluctuations $\tilde{u}^{\prime}(=(\overline{u^{\prime 2}})^{\frac{1}{2}})$ and the wall shear stress ($\tau_w$) were measured. The mean velocity profiles show that the wall region adjusts itself to laminar conditions earlier than the outer region. During the reverse transition process, increases in the shape parameter ($H$) are accompanied by a decrease in the skin friction coefficient ($C_f$). Profiles of the turbulent intensity ($\overline{u^{\prime 2}}$) exhibit near similarity in the turbulence decay region. The breakdown of the law of the wall is characterized by the parameter \[ \Delta_p (=\nu[dP/dx]/\rho U^{*3}) = - 0.02, \] where $U^*$ is the friction velocity. Downstream of this region, the decay of the $\tilde{u}^{\prime}$ fluctuations occurred when the momentum thickness Reynolds number ($R$) decreased below roughly 400.
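The breakdown parameter is straightforward to evaluate; the sketch below uses hypothetical flow values and flags relaminarization at the quoted threshold of -0.02:

```python
def delta_p(nu, dPdx, rho, u_star):
    # Pressure-gradient parameter: Delta_p = nu (dP/dx) / (rho U*^3)
    return nu * dPdx / (rho * u_star ** 3)

def law_of_wall_breaks_down(nu, dPdx, rho, u_star, threshold=-0.02):
    # A favourable (negative) gradient at or below the threshold
    return delta_p(nu, dPdx, rho, u_star) <= threshold

# Hypothetical values: air-like nu and rho, strong favourable gradient
dp = delta_p(1.5e-5, -2.0, 1.2, 0.1)   # -> -0.025
```

Since the friction velocity enters cubed, a modest drop in wall shear stress drives Delta_p sharply more negative, which is why the parameter is a sensitive marker for the breakdown.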


For the successful performance of a granular filter medium, existing design guidelines, which are based on the particle size distribution (PSD) characteristics of the base soil and the filter medium, require two contradictory conditions to be satisfied, viz., soil retention and permeability. Despite the wide applicability of these guidelines, it is well recognized that (i) they apply only to the particular range of soils tested in the laboratory, (ii) the design procedures do not include performance-based selection criteria, and (iii) there is no means of establishing the sensitivity of the important variables influencing performance. In the present work, analytical solutions are developed to obtain a factor of safety with respect to the soil retention and permeability criteria for a base soil-filter medium system subjected to a soil boiling condition. The proposed analytical solutions take into consideration relevant geotechnical properties such as void ratio, permeability, dry unit weight, effective friction angle, shape and size of soil particles, seepage discharge, and the existing hydraulic gradient. The solution is validated through example applications and experimental results, and it is established that it can be used successfully in the selection as well as the design of granular filters and can be applied to all types of base soils.
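The paper's full solutions are not reproduced in the abstract, but the simplest textbook form of the boiling check conveys the idea of a factor of safety against the existing hydraulic gradient: Terzaghi's critical gradient compared with the exit gradient (the specific gravity Gs, void ratio e, and gradient below are hypothetical):

```python
def critical_gradient(Gs, e):
    # Terzaghi's critical hydraulic gradient for boiling: (Gs - 1) / (1 + e)
    return (Gs - 1.0) / (1.0 + e)

def boiling_factor_of_safety(Gs, e, i_exit):
    # FS > 1 means the exit gradient stays below the critical value
    return critical_gradient(Gs, e) / i_exit

fs = boiling_factor_of_safety(2.65, 0.65, 0.5)   # -> 2.0
```

The paper's analytical solutions extend this kind of check with the additional properties listed above (friction angle, particle shape and size, seepage discharge) for both the retention and permeability criteria.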


In achieving higher instruction-level parallelism, software pipelining increases the register pressure in the loop. The usefulness of the generated schedule may be restricted to cases where the register pressure is less than the number of available registers; otherwise, spill instructions need to be introduced. But scheduling these spill instructions in the compact schedule is a difficult task. The several heuristics proposed for this may generate more spill code than necessary, and scheduling that code may necessitate increasing the initiation interval (II). We model the problem of register allocation with spill code generation and scheduling in software-pipelined loops as a 0-1 integer linear program. The formulation minimizes the increase in II by optimally placing spill code and simultaneously minimizes the amount of spill code produced. To the best of our knowledge, this is the first integrated formulation for register allocation, optimal spill code generation, and scheduling for software-pipelined loops. The proposed formulation performs better than the existing heuristics, preventing an increase in II in 11.11% of the loops and generating 18.48% less spill code on average over loops extracted from the Perfect Club and SPEC benchmarks, with a moderate increase in compilation time.
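The 0-1 formulation itself is not given in the abstract; as a toy stand-in, the exhaustive search below finds a minimum set of values to spill so that register pressure in the kernel never exceeds the available registers (the live ranges and register count are hypothetical, and real spilling would also insert and schedule the store/load instructions):

```python
from itertools import combinations

def max_pressure(live, spilled):
    # live: {virtual register: set of kernel cycles in which it is live}
    cycles = set().union(*live.values())
    return max(sum(1 for v, cs in live.items()
                   if v not in spilled and c in cs)
               for c in cycles)

def min_spill_set(live, num_regs):
    # Exhaustive 0-1 search standing in for the integer linear program:
    # smallest set of values whose spilling brings pressure within budget
    names = sorted(live)
    for k in range(len(names) + 1):
        for subset in combinations(names, k):
            if max_pressure(live, set(subset)) <= num_regs:
                return set(subset)

live = {'a': {0, 1, 2}, 'b': {0, 1}, 'c': {1, 2}, 'd': {1}}
spills = min_spill_set(live, 3)   # peak pressure is 4 at cycle 1
```

The ILP in the paper additionally encodes where the spill loads and stores land in the schedule, which is what lets it trade spill placement against an increase in II.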


In this paper, we present an algebraic method to study and design spatial parallel manipulators that demonstrate isotropy in the force and moment distributions. We use the force and moment transformation matrices separately, and derive conditions for their isotropy individually as well as in combination. The isotropy conditions are derived in closed form in terms of the invariants of the quadratic forms associated with these matrices. The formulation has been applied to a class of Stewart platform manipulators. We obtain multi-parameter families of isotropic manipulators analytically. In addition to computing the isotropic configurations of an existing manipulator, we demonstrate a procedure for designing the manipulator for isotropy at a given configuration.
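As a planar toy analogue (not the paper's closed-form spatial conditions): with H the force transformation matrix whose columns are unit leg-force directions, force isotropy amounts to the quadratic form H H^T being a scalar multiple of the identity, i.e. its invariants satisfy equal diagonal and zero off-diagonal entries. The leg angles below are hypothetical:

```python
import math

def gram(angles):
    # G = H H^T for the 2 x n matrix H whose columns are the planar
    # unit force directions (cos a, sin a)
    c = [math.cos(a) for a in angles]
    s = [math.sin(a) for a in angles]
    g00 = sum(x * x for x in c)
    g11 = sum(x * x for x in s)
    g01 = sum(x * y for x, y in zip(c, s))
    return [[g00, g01], [g01, g11]]

def is_force_isotropic(angles, tol=1e-9):
    # Isotropy <=> G proportional to the identity
    G = gram(angles)
    return abs(G[0][0] - G[1][1]) < tol and abs(G[0][1]) < tol

symmetric = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]  # legs at 120 degrees
skewed = [0.0, 0.3, 0.6]
```

For the symmetric arrangement G = 1.5 I, so any planar force is resisted equally in all directions; the spatial case in the paper treats the 3 x 3 force and moment blocks of the 6 x n transformation matrix in the same spirit.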


This paper presents a novel Second Order Cone Programming (SOCP) formulation for large scale binary classification tasks. Assuming that the class conditional densities are mixture distributions, where each component of the mixture has a spherical covariance, the second order statistics of the components can be estimated efficiently using clustering algorithms like BIRCH. For each cluster, the second order moments are used to derive a second order cone constraint via a Chebyshev-Cantelli inequality. This constraint ensures that any data point in the cluster is classified correctly with a high probability. This leads to a large margin SOCP formulation whose size depends on the number of clusters rather than the number of training data points. Hence, the proposed formulation scales well for large datasets when compared to the state-of-the-art classifiers, Support Vector Machines (SVMs). Experiments on real world and synthetic datasets show that the proposed algorithm outperforms SVM solvers in terms of training time and achieves similar accuracies.