944 results for Jacobian-free approach
Abstract:
In this paper, the free vibration of a non-uniform free-free Euler-Bernoulli beam is studied using an inverse problem approach. It is found that the fourth-order governing differential equation for such beams possesses a fundamental closed-form solution for certain polynomial variations of the mass and stiffness. An infinite number of non-uniform free-free beams exist, with different mass and stiffness variations, but sharing the same fundamental frequency. A detailed study is conducted of linear, quadratic and cubic variations of mass, and of how to pre-select the internal nodes so that closed-form solutions exist for the three cases. A special case is also considered in which external elastic constraints are present at the internal nodes. The derived results are provided as benchmark solutions for the validation of numerical codes for non-uniform free-free beams. (C) 2013 Elsevier Ltd. All rights reserved.
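For reference, the fourth-order equation referred to above is the standard Euler-Bernoulli free-vibration equation with spatially varying flexural stiffness EI(x) and mass per unit length m(x), subject to free-free conditions of zero bending moment and shear at both ends:

$$\frac{d^2}{dx^2}\!\left[EI(x)\,\frac{d^2 W(x)}{dx^2}\right]-\omega^2\,m(x)\,W(x)=0,\qquad EI\,W''=\left(EI\,W''\right)'=0\ \text{at}\ x=0,\,L.$$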
Abstract:
In this paper, the governing equations for the free vibration of a non-homogeneous rotating Timoshenko beam with uniform cross-section are studied using an inverse problem approach, for both cantilever and pinned-free boundary conditions. The bending displacement and the rotation due to bending are assumed to be simple polynomials which satisfy all four boundary conditions. It is found that, for certain polynomial variations of the material mass density, elastic modulus and shear modulus along the length of the beam, the assumed polynomials serve as simple closed-form solutions to the coupled second-order governing differential equations with variable coefficients. It is found that an infinite number of analytical polynomial functions are possible for the material mass density, shear modulus and elastic modulus distributions, which share the same frequency and mode shape for a particular mode. The derived results are intended to serve as benchmark solutions for testing approximate or numerical methods used for the vibration analysis of rotating non-homogeneous Timoshenko beams.
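For orientation, the coupled second-order system mentioned above has the form of the classical Timoshenko equations; the version below is written for the non-rotating case with uniform cross-section and length-varying material properties, and omits the centrifugal terms treated in the paper:

$$\frac{\partial}{\partial x}\!\left[\kappa\,G(x)A\!\left(\frac{\partial w}{\partial x}-\phi\right)\right]=\rho(x)A\,\frac{\partial^2 w}{\partial t^2},\qquad
\frac{\partial}{\partial x}\!\left[E(x)I\,\frac{\partial \phi}{\partial x}\right]+\kappa\,G(x)A\!\left(\frac{\partial w}{\partial x}-\phi\right)=\rho(x)I\,\frac{\partial^2 \phi}{\partial t^2},$$

where w is the bending displacement, φ the rotation due to bending, and κ the shear correction factor.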
Abstract:
In Incompressible Smoothed Particle Hydrodynamics (ISPH), a pressure Poisson equation (PPE) is solved to obtain a divergence-free velocity field. When free surfaces are simulated using this method, a Dirichlet boundary condition for pressure has to be applied at the free surface. In existing ISPH methods this is achieved by identifying free-surface particles using a heuristically chosen threshold of a parameter such as the kernel sum, the density or the divergence of position, and explicitly setting their pressure values. This often leads to clumping of particles near the free surface and spraying off of surface particles during splashes. Moreover, surface pressure gradients in flows where surface tension is important are not captured well using this approach. We propose a more accurate semi-analytical approach to impose Dirichlet boundary conditions on the free surface. We show the efficacy of the proposed algorithm using test cases of the elongation of a droplet and dam break. We perform two-dimensional simulations of water entry and validate the proposed algorithm with experimental results. Further, a three-dimensional simulation of droplet splash is shown to compare well with Volume-of-Fluid simulations. (C) 2014 Elsevier Ltd. All rights reserved.
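As a point of reference, the ISPH projection step typically solves a pressure Poisson equation of the form below, with a Dirichlet condition fixing the pressure to the ambient (often zero) value on the free surface; the contribution of the paper lies in imposing this condition semi-analytically rather than by thresholding surface particles:

$$\nabla\cdot\!\left(\frac{1}{\rho}\,\nabla p^{\,n+1}\right)=\frac{\nabla\cdot\mathbf{u}^{*}}{\Delta t},\qquad p^{\,n+1}=0\ \text{on the free surface},$$

where u* is the intermediate velocity obtained after applying viscous and body forces.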
Abstract:
A new global stochastic search is proposed, guided mainly by derivative-free directional information computable from the sample statistical moments of the design variables within a Monte Carlo setup. The search is aided by imparting to the directional update term additional layers of random perturbation, referred to as 'coalescence' and 'scrambling'. A selection step, constituting yet another avenue for random perturbation, completes the global search. The direction-driven nature of the search is manifest in the local extremization and coalescence components, which are posed as martingale problems that yield gain-like update terms upon discretization. As anticipated, and as numerically demonstrated to a limited extent on the problem of parameter recovery from the chaotic response histories of a couple of nonlinear oscillators, the proposed method appears to offer a more rational, more accurate and faster alternative to most available evolutionary schemes, most prominently particle swarm optimization. (C) 2014 Elsevier B.V. All rights reserved.
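The following is a loose, hypothetical Python sketch of a moment-guided, derivative-free search in a similar spirit: a shared directional update computed from sample moments of a Monte Carlo ensemble, a random perturbation, and a selection step. It is not the authors' martingale-based scheme, and the 'coalescence' and 'scrambling' operators are only crudely imitated here by generic noise and resampling.

```python
# Hypothetical illustration only: moment-guided, derivative-free stochastic search.
import numpy as np

def rosenbrock(x):
    return float(np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2))

def moment_guided_search(cost, dim=2, n_particles=50, n_iter=500,
                         step=0.05, noise=0.05, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-2.0, 2.0, size=(n_particles, dim))   # Monte Carlo ensemble of designs
    best = X[0].copy()
    for _ in range(n_iter):
        f = np.array([cost(x) for x in X])
        order = np.argsort(f)
        best = X[order[0]].copy()
        # Gain-like directional term from sample moments: covariance of designs with cost.
        cov_xf = ((X - X.mean(axis=0)) * (f - f.mean())[:, None]).mean(axis=0)
        direction = cov_xf / (np.linalg.norm(cov_xf) + 1e-12)
        X = X - step * direction                           # move the ensemble against increasing cost
        X = X + noise * rng.standard_normal(X.shape)       # random perturbation ("scrambling"-like)
        # Selection-like step: resample the worst fifth of the ensemble near the current best.
        n_replace = n_particles // 5
        X[order[-n_replace:]] = best + noise * rng.standard_normal((n_replace, dim))
    return best

print(moment_guided_search(rosenbrock))   # prints a candidate with reduced Rosenbrock cost
```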
Abstract:
Friction stir processing (FSP) is emerging as one of the most capable severe plastic deformation (SPD) methods for producing bulk ultra-fine-grained materials with improved properties. Optimizing the process parameters for a defect-free process is one of the challenging aspects of FSP for its commercial use. For a 6 mm thick plate of the commercial aluminium alloy 2024-T3, a bottom-up approach has been attempted to optimize the major independent parameters of the process, such as plunge depth, tool rotation speed and traverse speed. Tensile properties of the optimally friction stir processed sample were correlated with microstructural characterization carried out using a scanning electron microscope (SEM) and electron backscatter diffraction (EBSD). Optimum parameters from the bottom-up approach led to a defect-free FSP with a maximum strength of 93% of the base material strength. Micro-tensile testing of samples taken from the center of the processed zone showed an increased strength of 1.3 times that of the base material. The measured maximum longitudinal residual stress on the processed surface was only 30 MPa, which is attributed to the solid-state nature of FSP. Microstructural observation reveals significant grain refinement with little variation in grain size across the thickness and a large amount of grain boundary precipitation compared to the base metal. The proposed experimental bottom-up approach can be applied as an effective method for optimizing parameters during FSP of aluminium alloys, which is otherwise difficult through analytical methods due to the complex interactions between work-piece, tool and process parameters. Precipitation mechanisms during FSP were responsible for the fine-grained microstructure in the nugget zone that provided better mechanical properties than the base metal. (C) 2014 Elsevier Ltd. All rights reserved.
Abstract:
This paper deals with a new approach to studying nonlinear inviscid flow over arbitrary bottom topography. The problem is formulated as a nonlinear boundary value problem which is reduced to a Dirichlet problem using certain transformations. The Dirichlet problem is solved by applying the Plemelj-Sokhotski formulae, and it is noticed that the solution of the Dirichlet problem depends on the solution of coupled Fredholm integral equations of the second kind. These integral equations are solved numerically using a modified method. The free-surface profile, which is unknown at the outset, is determined. Different kinds of bottom topography are considered to study their influence on the free-surface profile. The effects of the Froude number and the arbitrary bottom topography on the free-surface profile are demonstrated in graphical form for subcritical flow. Further, the nonlinear results are validated against results available in the literature and compared with results obtained using linear theory. (C) 2015 Elsevier Inc. All rights reserved.
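For reference, the Plemelj-Sokhotski formulae used in the reduction give the one-sided boundary values of a Cauchy-type integral over a contour L:

$$\Phi(z)=\frac{1}{2\pi i}\int_L\frac{\varphi(t)}{t-z}\,dt,\qquad
\Phi^{\pm}(t_0)=\pm\frac{1}{2}\,\varphi(t_0)+\frac{1}{2\pi i}\,\mathrm{p.v.}\!\int_L\frac{\varphi(t)}{t-t_0}\,dt.$$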
Abstract:
In Paper I (Hu, 1982), we discussed the influence of fluctuation fields on the force-free field for the case of conventional turbulence and demonstrated the general relationships. In the present paper, by using the approach of local expansion, the equation of the average force-free field is obtained as $(1+b)\,\nabla\times\mathbf{B}_0=(\alpha+a)\,\mathbf{B}_0+a^{(1)}\,\nabla\times\mathbf{B}_0+\mathbf{K}$. The average coefficients a, a^(1), b and K show the influence of the small-scale fluctuation fields on the large-scale configuration of the magnetic field. As the average magnetic field is no longer parallel to the average electric current, the average configurations of force-free fields are more general and complex than the usual ones. From the viewpoint of physics, the energy and momentum of the turbulent structures should have an influence on the equilibrium of the average fields. Several examples are discussed; they show the basic features of the fluctuation fields and their influence on the average configurations of the magnetic fields. Since astrophysical environments are often in a turbulent state, the results of the present paper may be applied to turbulent plasmas where the magnetic field is strong.
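For comparison, the unaveraged force-free condition, in which the electric current is everywhere parallel to the magnetic field, reads

$$\nabla\times\mathbf{B}=\alpha\,\mathbf{B},\qquad \mathbf{J}=\frac{1}{\mu_0}\,\nabla\times\mathbf{B}\parallel\mathbf{B},$$

which the averaged equation above generalizes through the turbulence-induced coefficients a, a^(1), b and K.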
Abstract:
We study international environmental negotiations when agreements between countries cannot be binding. A problem with this kind of negotiation is that countries have incentives to free-ride on such agreements. We develop a notion of equilibrium based on the assumption that countries can create and dissolve agreements in seeking greater welfare. This approach leads to a greater degree of cooperation compared to models based on the internal-external stability approach.
Abstract:
In this thesis we propose a new approach to deduction methods for temporal logic. Our proposal is based on an inductive definition of eventualities that is different from the usual one. On the basis of this non-customary inductive definition of eventualities, we first provide dual systems of tableaux and sequents for Propositional Linear-time Temporal Logic (PLTL). Then, we adapt the deductive approach introduced by means of these dual tableau and sequent systems to the resolution framework and present a clausal temporal resolution method for PLTL. Finally, we make use of this new clausal temporal resolution method to establish logical foundations for declarative temporal logic programming languages. The key issue in deduction systems for temporal logic is dealing with eventualities and the hidden invariants that may prevent their fulfillment. Different ways of addressing this issue can be found in the literature on deduction systems for temporal logic. Traditional tableau systems for temporal logic generate an auxiliary graph in a first pass. Then, in a second pass, unsatisfiable nodes are pruned; in particular, the second pass must check whether the eventualities are fulfilled. The one-pass tableau calculus introduced by S. Schwendimann requires an additional handling of information in order to detect cyclic branches that contain unfulfilled eventualities. Regarding traditional sequent calculi for temporal logic, the issue of eventualities and hidden invariants is tackled by making use of inference rules (mainly invariant-based rules or infinitary rules) that complicate their automation. A remarkable consequence of using either a two-pass approach based on auxiliary graphs or a one-pass approach that requires additional handling of information in the tableau framework, and either invariant-based rules or infinitary rules in the sequent framework, is that temporal logic fails to preserve the classical correspondence between tableaux and sequents. In this thesis, we first provide a one-pass tableau method TTM that, instead of a graph, obtains a cyclic tree to decide whether a set of PLTL-formulas is satisfiable. In TTM, tableaux are classical-like. For unsatisfiable sets of formulas, TTM produces tableaux whose leaves contain a formula and its negation. In the case of satisfiable sets of formulas, TTM builds tableaux where each fully expanded open branch characterizes a collection of models for the set of formulas in the root. The tableau method TTM is complete and yields a decision procedure for PLTL. This tableau method is directly associated with a one-sided sequent calculus called TTC. Since TTM is free from all the structural rules that hinder the mechanization of deduction, e.g. weakening and contraction, the resulting sequent calculus TTC is also free from these kinds of structural rules. In particular, TTC is free of any kind of cut, including invariant-based cut. From the deduction system TTC, we obtain a two-sided sequent calculus GTC that preserves all these good freeness properties and is finitary, sound and complete for PLTL. Therefore, we show that the classical correspondence between tableaux and sequent calculi can be extended to temporal logic. The most fruitful approach in the literature on resolution methods for temporal logic, which started with the seminal paper of M. Fisher, deals with PLTL and requires the generation of invariants for performing resolution on eventualities.
In this thesis, we present a new approach to resolution for PLTL. The main novelty of our approach is that we do not generate invariants for performing resolution on eventualities. Our method is based on the dual methods of tableaux and sequents for PLTL mentioned above. Our resolution method involves translation into a clausal normal form that is a direct extension of classical CNF. We first show that any PLTL-formula can be transformed into this clausal normal form. Then, we present our temporal resolution method, called TRS-resolution, which extends classical propositional resolution. Finally, we prove that TRS-resolution is sound and complete. In fact, it terminates for any input formula, deciding its satisfiability, and hence gives rise to a new decision procedure for PLTL. In the field of temporal logic programming, the declarative proposals that provide a completeness result do not allow eventualities, whereas the proposals that follow the imperative future approach either restrict the use of eventualities or deal with them by calculating an upper bound based on the small model property for PLTL. In the latter, when the length of a derivation reaches the upper bound, the derivation is given up and backtracking is used to try another possible derivation. In this thesis we present a declarative propositional temporal logic programming language, called TeDiLog, that is a combination of the temporal and disjunctive paradigms in logic programming. We establish the logical foundations of our proposal by formally defining operational and logical semantics for TeDiLog and by proving their equivalence. Since TeDiLog is, syntactically, a sublanguage of PLTL, the logical semantics of TeDiLog is supported by PLTL logical consequence. The operational semantics of TeDiLog is based on TRS-resolution. TeDiLog allows both eventualities and always-formulas to occur in clause heads and also in clause bodies. To the best of our knowledge, TeDiLog is the first declarative temporal logic programming language that achieves this high degree of expressiveness. Since the tableau method presented in this thesis is able to detect that the fulfillment of an eventuality is prevented by a hidden invariant without checking for it by means of an extra process, since our finitary sequent calculi do not include invariant-based rules, and since our resolution method dispenses with invariant generation, we say that our deduction methods are invariant-free.
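For context, tableau and resolution methods for PLTL conventionally handle the temporal operators through the customary fixpoint unfoldings shown below; the thesis's alternative inductive definition of eventualities, which makes invariant generation unnecessary, is not reproduced here:

$$\Diamond\varphi\equiv\varphi\vee\circ\Diamond\varphi,\qquad
\Box\varphi\equiv\varphi\wedge\circ\Box\varphi,\qquad
\varphi\,\mathcal{U}\,\psi\equiv\psi\vee\bigl(\varphi\wedge\circ(\varphi\,\mathcal{U}\,\psi)\bigr).$$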
Abstract:
In this thesis we uncover a new relation which links thermodynamics and information theory. We consider time as a channel and the detailed state of a physical system as a message. As the system evolves with time, ever-present noise ensures that the "message" is corrupted. Thermodynamic free energy measures the approach of the system toward equilibrium. Information-theoretic mutual information measures the loss of memory of the initial state. We regard the free energy and the mutual information as operators which map probability distributions over state space to real numbers. In the limit of long times, we show how the free energy operator and the mutual information operator asymptotically attain a very simple relationship to one another. This relationship is founded on the common appearance of entropy in the two operators and on an identity between internal energy and conditional entropy. The use of conditional entropy is what distinguishes our approach from previous efforts to relate thermodynamics and information theory.
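For reference, the two operators discussed have the standard forms below, with S the Gibbs-Shannon entropy; the thesis's asymptotic relationship between them is not reproduced here:

$$F[p]=\langle E\rangle_p-T\,S[p]=F_{\mathrm{eq}}+k_B T\,D_{\mathrm{KL}}\!\left(p\,\Vert\,p_{\mathrm{eq}}\right),\qquad
I(X_0;X_t)=H(X_t)-H(X_t\mid X_0).$$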
Abstract:
Flies are particularly adept at balancing the competing demands of delay tolerance, performance, and robustness during flight, which invites thoughtful examination of their multimodal feedback architecture. This dissertation examines stabilization requirements for inner-loop feedback strategies in the flapping flight of Drosophila, the fruit fly, against the backdrop of the sensorimotor transformations present in the animal. Flies have evolved multiple specializations to reduce sensorimotor latency, but sensory delay during flight is still significant on the timescale of the body dynamics. I explored the effect of sensor delay on flight stability and performance for yaw turns using a dynamically scaled robot equipped with a real-time feedback system that performed active turns in response to measured yaw torque. The results show a fundamental tradeoff between sensor delay and permissible feedback gain, and suggest that fast mechanosensory feedback provides a source of active damping that complements the damping contributed by passive effects. Presented in the context of these findings, a control architecture in which a haltere-mediated inner-loop proportional controller provides damping for slower visually mediated feedback is consistent with tethered-flight measurements, free-flight observations, and engineering design principles. Additionally, I investigated how flies adjust stroke features to regulate and stabilize level forward flight. The results suggest that few changes to hovering kinematics are actually required to meet steady-state lift and thrust requirements at different flight speeds, and that the primary driver of equilibrium velocity is the aerodynamic pitch moment. This finding is consistent with prior hypotheses and observations regarding the relationship between body pitch and flight speed in fruit flies. The results also show that the dynamics may be stabilized with additional pitch damping, but the magnitude of the required damping increases with flight speed. I posit that differences in stroke deviation between the upstroke and downstroke might play a critical role in this stabilization. Fast mechanosensory feedback of the pitch rate could enable active damping, which would inherently exhibit gain scheduling with flight speed if pitch torque is regulated by adjusting stroke deviation. Such a control scheme would provide an elegant solution for flight stabilization across a wide range of flight speeds.
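A generic single-degree-of-freedom illustration of the delay-gain tradeoff described above (not the dissertation's specific model) is delayed proportional feedback on the yaw rate,

$$I\,\ddot{\psi}(t)=-C\,\dot{\psi}(t)-k\,\dot{\psi}(t-\delta)+\tau_{\mathrm{ext}}(t),$$

where the feedback term acts as active damping: for a fixed sensory delay δ only a bounded range of gains k keeps the closed loop stable, and that range shrinks as δ grows.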
Abstract:
A mathematical model is proposed in this thesis for the control mechanism of free fatty acid-glucose metabolism in healthy individuals under resting conditions. The objective is to explain, in a consistent manner, some clinical laboratory observations such as the glucose, insulin and free fatty acid responses to intravenous injection of glucose, insulin, etc. Responses up to only about two hours from the beginning of infusion are considered. The model is an extension of the one for glucose homeostasis proposed by Charette, Kadish and Sridhar (Modeling and Control Aspects of Glucose Homeostasis. Mathematical Biosciences, 1969). It is based upon a systems approach and agrees with current theories of glucose and free fatty acid metabolism. The description is in terms of ordinary differential equations. Validation of the model is based on the clinical laboratory data available at the present time. Finally, procedures are suggested for systematically identifying the parameters associated with the free fatty acid portion of the model.
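As a purely hypothetical illustration of the kind of compartmental ODE description used in the thesis (not its actual model structure or parameter values), a minimal glucose-insulin-free-fatty-acid system can be integrated over the two-hour response window mentioned above:

```python
# Hypothetical three-compartment sketch: glucose G, insulin I, free fatty acids F.
import numpy as np
from scipy.integrate import solve_ivp

def metabolism(t, y):
    G, I, F = y                                   # plasma glucose, insulin, FFA (arbitrary units)
    dG = 1.0 - 0.03 * G - 0.002 * I * G           # hepatic output, clearance, insulin-mediated uptake
    dI = 0.05 * max(G - 90.0, 0.0) - 0.10 * I     # secretion above a glucose threshold, degradation
    dF = 0.40 - 0.05 * F - 0.02 * I * F           # lipolysis, clearance, suppression by insulin
    return [dG, dI, dF]

# Response to an intravenous glucose load over ~two hours (time in minutes).
sol = solve_ivp(metabolism, (0.0, 120.0), [250.0, 0.0, 10.0],
                t_eval=np.linspace(0.0, 120.0, 121))
print(sol.y[:, -1])                                # state at the end of the window
```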
Abstract:
We present a new efficient numerical approach for representing anisotropic physical quantities and/or matrix elements defined on the Fermi surface (FS) of metallic materials. The method introduces a set of numerically calculated generalized orthonormal functions which are the solutions of the Helmholtz equation defined on the FS. Notably, many properties of our proposed basis set are also shared by the FS harmonics introduced by Philip B. Allen (1976 Phys. Rev. B 13 1416), which are constructed as polynomials of the Cartesian components of the electronic velocity. The main motivation of both approaches is the same: to handle anisotropic problems efficiently. However, in our approach the basis set is defined as the eigenfunctions of a differential operator, and several desirable properties are introduced by construction. The method is demonstrated to be very robust in handling problems with any crystal structure or topology of the FS, and the periodicity of reciprocal space is treated as a boundary condition for our Helmholtz equation. We illustrate the method by analysing free-electron-like lithium (Li), sodium (Na), copper (Cu), lead (Pb), tungsten (W) and magnesium diboride (MgB2).
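Schematically, the construction described amounts to solving a Helmholtz-type (Laplace-Beltrami) eigenproblem on the Fermi surface and expanding anisotropic quantities in the resulting orthonormal set; the normalization and sign conventions below are illustrative and may differ from those in the paper:

$$-\nabla^{2}_{\mathrm{FS}}\,\Phi_n(\mathbf{k})=\lambda_n\,\Phi_n(\mathbf{k}),\qquad
\oint_{\mathrm{FS}}\Phi_n(\mathbf{k})\,\Phi_m(\mathbf{k})\,dS_{\mathbf{k}}=\delta_{nm},\qquad
A(\mathbf{k})=\sum_n a_n\,\Phi_n(\mathbf{k}).$$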
Abstract:
Many modern stock assessment methods provide the machinery for determining the status of a stock in relation to certain reference points and for estimating how quickly a stock can be rebuilt. However, these methods typically require catch data, which are not always available. We introduce a model-based framework for estimating reference points, stock status, and recovery times in situations where catch data and other measures of absolute abundance are unavailable. The specific estimator developed is essentially an age-structured production model recast in terms relative to pre-exploitation levels. A Bayesian estimation scheme is adopted to allow the incorporation of pertinent auxiliary information, such as might be obtained from meta-analyses of similar stocks or anecdotal observations. The approach is applied to the population of goliath grouper (Epinephelus itajara) off southern Florida, for which there are three indices of relative abundance but no reliable catch data. The results confirm anecdotal accounts of a marked decline in abundance during the 1980s followed by a substantial increase after the harvest of goliath grouper was banned in 1990. The ban appears to have reduced fishing pressure to between 10% and 50% of the levels observed during the 1980s. Nevertheless, the predicted fishing mortality rate under the ban appears to remain substantial, perhaps owing to illegal harvest and depth-related release mortality. As a result, the base model predicts that there is less than a 40% chance that the spawning biomass will recover to a level that would produce a 50% spawning potential ratio.
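The estimator in the paper is age-structured, but the idea of working relative to pre-exploitation levels when catches are unknown can be illustrated with a surplus-production analogue written in terms of depletion d_t = B_t/B_0 and fitted to relative-abundance indices:

$$d_{t+1}=d_t+r\,d_t\,(1-d_t)-F_t\,d_t,\qquad I_{j,t}=q_j\,d_t\,e^{\varepsilon_{j,t}},\qquad d_0=1,$$

so that reference points and stock status are expressed as fractions of the pre-exploitation biomass B_0 rather than in absolute units.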
Abstract:
The application of automated design optimization to real-world, complex geometry problems is a significant challenge - especially if the topology is not known a priori, as in turbine internal cooling. The long-term goal of our work is an end-to-end integration of the whole CFD process, from solid model through meshing, solving and post-processing, to enable this type of design optimization to become viable & practical. In recent papers we have reported the integration of a Level Set based geometry kernel with an octree-based cut-Cartesian mesh generator, RANS flow solver, post-processing & geometry editing, all within a single piece of software - and all implemented in parallel with commodity PC clusters as the target. The cut cells which characterize the approach are eliminated by exporting a body-conformal mesh guided by the underpinning Level Set. This paper extends this work further with a simple scoping study showing how the basic functionality can be scripted & automated and then used as the basis for automated optimization of a generic gas turbine cooling geometry. Copyright © 2008 by W.N. Dawes.
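A hypothetical Python sketch of the kind of scripted design loop described above is given below (geometry edit, mesh, solve, post-process, update); the function names and the dummy cost are placeholders, not the authors' actual toolchain:

```python
# Hypothetical scripted optimization loop; evaluate() stands in for the real
# Level Set edit -> cut-Cartesian mesh -> RANS solve -> post-process chain.
from dataclasses import dataclass

@dataclass
class Design:
    params: list          # e.g. parameters describing the cooling-passage geometry

def evaluate(design: Design) -> float:
    """Placeholder cost; a real study would return, e.g., pressure loss minus
    a weighted heat-transfer metric computed from the CFD solution."""
    return sum(p * p for p in design.params)

def optimize(initial: Design, n_iter: int = 20, step: float = 0.1) -> Design:
    best, best_cost = initial, evaluate(initial)
    for _ in range(n_iter):
        for i in range(len(best.params)):
            for delta in (-step, step):            # simple coordinate perturbation
                trial = Design(best.params.copy())
                trial.params[i] += delta
                cost = evaluate(trial)
                if cost < best_cost:
                    best, best_cost = trial, cost
    return best

print(optimize(Design([1.0, -0.5, 0.3])).params)
```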