970 results for Classical methods


Relevance: 30.00%

Publisher:

Abstract:

The pioneering work of Runge and Kutta a hundred years ago has ultimately led to suites of sophisticated numerical methods suitable for solving complex systems of deterministic ordinary differential equations. However, in many modelling situations the appropriate representation is a stochastic differential equation, and here numerical methods are much less sophisticated. In this paper a very general class of stochastic Runge-Kutta methods is presented, and classes of explicit methods markedly more efficient than previously extant methods are constructed. In particular, a method of strong order 2 with a deterministic component based on the classical Runge-Kutta method is constructed, and some numerical results are presented to demonstrate the efficacy of this approach.
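For orientation, the baseline such methods improve upon can be sketched in a few lines: the Euler-Maruyama scheme, of strong order 0.5, for a scalar SDE dX = a(X) dt + b(X) dW. This is a generic illustration, not the paper's strong order 2 method; the function names and test problem are hypothetical.

```python
import numpy as np

def euler_maruyama(a, b, x0, T, n_steps, seed=0):
    """Strong order 0.5 baseline for dX = a(X) dt + b(X) dW.
    The stochastic Runge-Kutta methods in the paper target higher
    strong order; this is only the classical point of comparison."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))   # Brownian increment
        x[k + 1] = x[k] + a(x[k]) * dt + b(x[k]) * dW
    return x

# Example: geometric Brownian motion dX = 0.5 X dt + 0.2 X dW
path = euler_maruyama(lambda x: 0.5 * x, lambda x: 0.2 * x, 1.0, 1.0, 1000)
```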

Relevance: 30.00%

Publisher:

Abstract:

Transport processes within heterogeneous media may exhibit non-classical diffusion or dispersion, that is, behaviour not adequately described by the classical theory of Brownian motion and Fick's law. We consider a space-fractional advection-dispersion equation based on a fractional Fick's law. The equation involves the Riemann-Liouville fractional derivative, which arises from assuming that particles may make large jumps. Finite difference methods for solving this equation have been proposed by Meerschaert and Tadjeran. In the variable coefficient case, the product rule is first applied, and then the Riemann-Liouville fractional derivatives are discretised using standard and shifted Grünwald formulas, depending on the fractional order. In this work, we consider a finite volume method that deals directly with the equation in conservative form. Fractionally-shifted Grünwald formulas are used to discretise the fractional derivatives at control volume faces. We compare the two methods for several case studies from the literature, highlighting the convenience of the finite volume approach.
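For orientation, one common form of such an equation (stated here from the general literature; the paper's exact formulation may differ) replaces the Fickian dispersive flux with one based on a Riemann-Liouville fractional derivative of order α − 1:

\[
\frac{\partial u}{\partial t} = -\frac{\partial}{\partial x}\bigl(v(x)\,u\bigr) + \frac{\partial}{\partial x}\!\left(K(x)\,\frac{\partial^{\alpha-1} u}{\partial x^{\alpha-1}}\right), \qquad 1 < \alpha \le 2,
\]

with α = 2 recovering the classical advection-dispersion equation; a finite volume method can work with this conservative form directly.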

Relevance: 30.00%

Publisher:

Abstract:

Transport processes within heterogeneous media may exhibit non-classical diffusion or dispersion which is not adequately described by the classical theory of Brownian motion and Fick's law. We consider a space-fractional advection-dispersion equation based on a fractional Fick's law. Zhang et al. [Water Resources Research, 43(5) (2007)] considered such an equation with variable coefficients, which they discretised using the finite difference method proposed by Meerschaert and Tadjeran [Journal of Computational and Applied Mathematics, 172(1):65-77 (2004)]. For this method the presence of variable coefficients necessitates applying the product rule before discretising the Riemann-Liouville fractional derivatives using standard and shifted Grünwald formulas, depending on the fractional order. As an alternative, we propose using a finite volume method that deals directly with the equation in conservative form. Fractionally-shifted Grünwald formulas are used to discretise the Riemann-Liouville fractional derivatives at control volume faces, eliminating the need for product rule expansions. We compare the two methods for several case studies, highlighting the convenience of the finite volume approach.
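As an illustration of the discretisation ingredient both of these abstracts rely on, below is a minimal sketch of the standard shifted Grünwald approximation to a left Riemann-Liouville derivative of order 1 < α ≤ 2 on a uniform grid; the fractionally-shifted variants used at control volume faces refine this idea. The code is illustrative, not taken from either paper.

```python
import numpy as np

def grunwald_weights(alpha, n):
    """g_k = (-1)^k * C(alpha, k), computed by the standard recurrence."""
    g = np.empty(n + 1)
    g[0] = 1.0
    for k in range(1, n + 1):
        g[k] = g[k - 1] * (1.0 - (alpha + 1.0) / k)
    return g

def rl_derivative_shifted(u, h, alpha):
    """Shifted Grunwald estimate of the left Riemann-Liouville derivative:
    D^alpha u(x_i) ~ h^(-alpha) * sum_k g_k * u[i - k + 1]."""
    n = len(u)
    g = grunwald_weights(alpha, n)
    d = np.zeros(n)
    for i in range(n):
        for k in range(i + 2):        # the +1 shift is needed for stability
            j = i - k + 1
            if 0 <= j < n:
                d[i] += g[k] * u[j]
    return d / h**alpha
```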

Relevance: 30.00%

Publisher:

Abstract:

Optimisation is a fundamental step in the turbine design process, especially in the development of non-classical designs of radial-inflow turbines working with high-density fluids in low-temperature Organic Rankine Cycles (ORCs). The present work discusses the simultaneous optimisation of the thermodynamic cycle and the one-dimensional design of radial-inflow turbines. In particular, the work describes the integration of a 1D meanline preliminary design code adapted to real gases, together with a performance estimation approach for radial-inflow turbines, into an established ORC cycle analysis procedure. The optimisation approach is split into two distinct loops: the inner loop performs the 1D design based on the parameters received from the outer loop, which optimises the thermodynamic cycle. The method uses parameters including brine flow rate, temperature, and working fluid, shifting assumptions such as head and flow coefficients into the optimisation routine. The design and optimisation method is then validated against published benchmark cases. Finally, using the same conditions, the coupled optimisation procedure is extended to the preliminary design of a radial-inflow turbine with R143a as working fluid under realistic geothermal conditions and compared against results from the commercially available software RITAL from Concepts-NREC.
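The two-loop structure described above can be sketched schematically. Everything below is hypothetical scaffolding: the model functions stand in for the cycle analysis and 1D meanline design codes, and all parameters are placeholders.

```python
from scipy.optimize import minimize, minimize_scalar

def turbine_efficiency(head_coeff, flow_coeff, cycle):
    """Placeholder 1D meanline performance model (purely illustrative)."""
    return 0.9 - (head_coeff - 1.0) ** 2 - (flow_coeff - 0.3) ** 2

def cycle_power(cycle, eta):
    """Placeholder thermodynamic cycle model (purely illustrative)."""
    return cycle["brine_flow"] * cycle["dT"] * eta

def inner_loop(cycle):
    """Inner loop: optimise the 1D turbine design for a fixed cycle."""
    res = minimize(lambda x: -turbine_efficiency(x[0], x[1], cycle), x0=[1.0, 0.3])
    return -res.fun   # best achievable turbine efficiency

def outer_loop():
    """Outer loop: optimise the cycle, calling the inner design loop."""
    def objective(dT):
        cycle = {"brine_flow": 100.0, "dT": dT}
        return -cycle_power(cycle, inner_loop(cycle))
    return minimize_scalar(objective, bounds=(5.0, 50.0), method="bounded")
```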

Relevance: 30.00%

Publisher:

Abstract:

Introduction: The receptor for advanced glycation end products (RAGE) is a member of the immunoglobulin superfamily of cell surface receptor molecules. High concentrations of three of its putative proinflammatory ligands, S100A8/A9 complex (calprotectin), S100A8, and S100A12, are found in rheumatoid arthritis (RA) serum and synovial fluid. In contrast, soluble RAGE (sRAGE) may prevent proinflammatory effects by acting as a decoy. This study evaluated the serum levels of S100A9, S100A8, S100A12 and sRAGE in RA patients, to determine their relationship to inflammation and joint and vascular damage. Methods: Serum sRAGE, S100A9, S100A8 and S100A12 levels from 138 patients with established RA and 44 healthy controls were measured by ELISA and compared by unpaired t test. In RA patients, associations with disease activity and severity variables were analyzed by simple and multiple linear regressions. Results: Serum S100A9, S100A8 and S100A12 levels were correlated in RA patients. S100A9 levels were associated with body mass index (BMI), and with serum levels of S100A8 and S100A12. S100A8 levels were associated with serum levels of S100A9, presence of anti-citrullinated peptide antibodies (ACPA), and rheumatoid factor (RF). S100A12 levels were associated with presence of ACPA, history of diabetes, and serum S100A9 levels. sRAGE levels were negatively associated with serum levels of C-reactive protein (CRP) and high-density lipoprotein (HDL), history of vasculitis, and the presence of the RAGE 82Ser polymorphism. Conclusions: sRAGE and S100 proteins were associated not just with RA inflammation and autoantibody production, but also with classical vascular risk factors for end-organ damage. Consistent with its role as a RAGE decoy molecule, sRAGE had the opposite effects to S100 proteins in that S100 proteins were associated with autoantibodies and vascular risk, whereas sRAGE was associated with protection against joint and vascular damage. These data suggest that RAGE activity influences co-development of joint and vascular disease in rheumatoid arthritis patients.

Relevance: 30.00%

Publisher:

Abstract:

This paper presents a novel algebraic formulation of the central problem of screw theory, namely the determination of the principal screws of a given system. Using the algebra of dual numbers, it shows that the principal screws can be determined via the solution of a generalised eigenproblem of two real, symmetric matrices. This approach allows the study of the principal screws of the general two- and three-systems associated with a manipulator of arbitrary geometry in terms of closed-form expressions of its architecture and configuration parameters. We also present novel methods for the determination of the principal screws of the four- and five-systems which do not require the explicit computation of the reciprocal systems. Principal screws of systems of different orders are identified from one uniform criterion, namely that the pitches of the principal screws are the extreme values of the pitch. The classical results of screw theory, namely the equations for the cylindroid and the pitch-hyperboloid associated with the two- and three-systems, respectively, have been derived within the proposed framework. Algebraic conditions have been derived for some of the special screw systems. The formulation is also illustrated with several examples, including two spatial manipulators of serial and parallel architecture, respectively.
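The algebraic core of this approach, a generalised eigenproblem A v = λ B v with A and B real and symmetric, is directly solvable with standard routines; a minimal sketch follows, with arbitrary illustrative matrices rather than an actual screw system.

```python
import numpy as np
from scipy.linalg import eigh

# Generalised eigenproblem A v = lambda B v; in the screw-theoretic setting
# the eigenvalues play the role of the extreme pitches of the principal
# screws. A and B here are arbitrary symmetric examples.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0, 0.2], [0.2, 1.0]])   # symmetric positive definite

pitches, screws = eigh(A, B)   # eigenvalues ascending, eigenvectors as columns
print(pitches)
```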

Relevance: 30.00%

Publisher:

Abstract:

The quality of species distribution models (SDMs) relies to a large degree on the quality of the input data, from bioclimatic indices to environmental and habitat descriptors (Austin, 2002). Recent reviews of SDM techniques have sought to optimize predictive performance (e.g. Elith et al., 2006). In general, SDMs employ one of three approaches to variable selection. The simplest approach relies on the expert to select the variables, as in environmental niche models (Nix, 1986) or a generalized linear model without variable selection (Miller and Franklin, 2002). A second approach explicitly incorporates variable selection into model fitting, which allows examination of particular combinations of variables. Examples include generalized linear or additive models with variable selection (Hastie et al., 2002), or classification trees with complexity- or model-based pruning (Breiman et al., 1984; Zeileis, 2008). A third approach uses model averaging to summarize the overall contribution of a variable, without considering particular combinations. Examples include neural networks, boosted or bagged regression trees, and Maximum Entropy, as compared in Elith et al. (2006). Typically, users of SDMs will either consider a small number of variable sets, via the first approach, or else supply all of the candidate variables (often numbering more than a hundred) to the second or third approaches. Bayesian SDMs exist, with several methods for eliciting and encoding priors on model parameters (see review in Low Choy et al., 2010). However, few methods have been published for informative variable selection; one example is Bayesian trees (O'Leary, 2008). Here we report an elicitation protocol that helps make explicit a priori expert judgements on the quality of candidate variables. This protocol can be flexibly applied to any of the three approaches to variable selection described above, Bayesian or otherwise. We demonstrate how this information can be obtained and then used to guide variable selection in classical or machine learning SDMs, or to define priors within Bayesian SDMs.
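As a concrete stand-in for the second approach (variable selection built into model fitting), the sketch below fits an L1-penalised logistic regression to synthetic presence/absence data; the data, covariates, and penalty strength are all hypothetical and chosen only for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))     # 20 hypothetical candidate covariates
y = (X[:, 0] - 0.5 * X[:, 3] + rng.normal(size=500) > 0).astype(int)

# The L1 penalty zeroes out coefficients, performing variable selection
# as part of model fitting.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
print(np.flatnonzero(model.coef_[0]))   # indices of retained covariates
```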

Relevance: 30.00%

Publisher:

Abstract:

It is well known that an integrable (in the sense of Arnold-Jost) Hamiltonian system gives rise to quasi-periodic motion, with trajectories running on invariant tori. These tori foliate the whole phase space. If we perturb an integrable system, the Kolmogorov-Arnold-Moser (KAM) theorem states that, provided a non-degeneracy condition holds and the perturbation is sufficiently small, most of the invariant tori carrying quasi-periodic motion persist, getting only slightly deformed. Moreover, the measure of the set of persisting invariant tori grows as the size of the perturbation shrinks. In the first part of the thesis we shall use a Renormalization Group (RG) scheme in order to prove the classical KAM result in the case of a non-analytic perturbation (the latter will only be assumed to have continuous derivatives up to a sufficiently large order). We shall proceed by solving a sequence of problems in which the perturbations are analytic approximations of the original one. We will finally show that the approximate solutions converge to a differentiable solution of our original problem. In the second part we will use an RG scheme based on continuous scales, so that instead of solving an iterative equation as in the classical RG KAM approach, we will end up solving a partial differential equation. This will allow us to reduce the complications of treating a sequence of iterative equations to the use of the Banach fixed point theorem in a suitable Banach space.
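For reference, the standard setting is a nearly integrable Hamiltonian in action-angle variables,

\[
H(I, \theta) = h(I) + \varepsilon f(I, \theta), \qquad (I, \theta) \in \mathbb{R}^n \times \mathbb{T}^n,
\]

whose unperturbed tori I = const carry quasi-periodic flow with frequencies \(\omega(I) = \partial h / \partial I\); the KAM theorem asserts that, for sufficiently small \(\varepsilon\), the tori with strongly non-resonant (Diophantine) frequencies persist.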

Relevance: 30.00%

Publisher:

Abstract:

A general analysis of the Hamilton-Jacobi form of dynamics motivated by phase space methods and classical transformation theory is presented. The connection between constants of motion, symmetries, and the Hamilton-Jacobi equation is described.
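For reference, the classical time-dependent Hamilton-Jacobi equation for a generating function S(q, t) reads

\[
\frac{\partial S}{\partial t} + H\!\left(q, \frac{\partial S}{\partial q}, t\right) = 0;
\]

a complete integral S(q, α, t) generates a canonical transformation to new coordinates that are constants of motion, which is the link between symmetries, conserved quantities, and the Hamilton-Jacobi equation that the paper describes.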

Relevance: 30.00%

Publisher:

Abstract:

We review work initiated and inspired by Sudarshan in relativistic dynamics, beam optics, partial coherence theory, Wigner distribution methods, multimode quantum optical squeezing, and geometric phases. The 1963 No Interaction Theorem, using Dirac's instant form and particle World Line Conditions, is recalled. Later attempts to overcome this result, exploiting constrained Hamiltonian theory, reformulating the World Line Conditions, and extending Dirac's formalism, are reviewed. Dirac's front form leads to a formulation of Fourier Optics for the Maxwell field, determining the actions of First Order Systems (corresponding to matrices of Sp(2,R) and Sp(4,R)) on polarization in a consistent manner. These groups also help characterize the properties and propagation of partially coherent Gaussian Schell Model beams, leading to invariant quality parameters and the new Twist phase. The higher dimensional groups Sp(2n,R) appear in the theory of Wigner distributions and in quantum optics. Elegant criteria for a Gaussian phase space function to be a Wigner distribution, and expressions for multimode uncertainty principles and squeezing, are described. In geometric phase theory we highlight the use of invariance properties that lead to a kinematical formulation, and the important role of Bargmann invariants. Special features of these phases arising from unitary Lie group representations, and a new formulation based on the idea of Null Phase Curves, are presented.

Relevance: 30.00%

Publisher:

Abstract:

Hollow nanostructures are used for various applications including catalysis, sensing, and drug delivery. Methods based on the Kirkendall effect have been the most successful for obtaining hollow nanostructures of various multicomponent systems. The classical Kirkendall effect relies on the presence of a faster diffusing species in the core; the resultant imbalance in flux results in the formation of hollow structures. Here, an alternate non-Kirkendall mechanism that is operative in the formation of hollow single crystalline particles of intermetallic PtBi is demonstrated. The synthesis method involves sequential reduction of Pt and Bi salts in ethylene glycol under microwave irradiation. Detailed analysis of the reaction at various stages indicates that the formation of the intermetallic PtBi hollow nanoparticles occurs in steps. The mechanistic details are elucidated using control experiments. The use of microwaves results in a very rapid synthesis of intermetallic PtBi, which exhibits excellent electrocatalytic activity for the formic acid oxidation reaction. The method presented can be extended to various multicomponent systems and is independent of the intrinsic diffusivities of the species involved.

Relevance: 30.00%

Publisher:

Abstract:

Moving mesh methods (also called r-adaptive methods) are space-adaptive strategies used for the numerical simulation of time-dependent partial differential equations. These methods keep the total number of mesh points fixed during the simulation, but redistribute them over time to follow the areas where a higher mesh point density is required. There are very few moving mesh methods designed for solving field-theoretic partial differential equations, and the numerical analysis of the resulting schemes is challenging. In this thesis we present two ways to construct r-adaptive variational and multisymplectic integrators for (1+1)-dimensional Lagrangian field theories. The first method uses a variational discretization of the physical equations, with the mesh equations then coupled in a way typical of the existing r-adaptive schemes. The second method treats the mesh points as pseudo-particles and incorporates their dynamics directly into the variational principle. A user-specified adaptation strategy is then enforced through Lagrange multipliers as a constraint on the dynamics of both the physical field and the mesh points. We discuss the advantages and limitations of our methods. The proposed methods are readily applicable to (weakly) non-degenerate field theories; numerical results for the Sine-Gordon equation are presented.
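One standard adaptation strategy of the kind that can be enforced in such schemes (stated here from the general moving mesh literature, not necessarily the one used in the thesis) is equidistribution of a monitor function \(\rho > 0\): the mesh map \(x(\xi)\) from computational to physical coordinates satisfies

\[
\frac{\partial}{\partial \xi}\!\left( \rho(x)\, \frac{\partial x}{\partial \xi} \right) = 0,
\]

so every computational cell carries an equal share of \(\int \rho \, dx\), and mesh points concentrate where \(\rho\) is large.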

In an attempt to extend our approach to degenerate field theories, in the last part of this thesis we construct higher-order variational integrators for a class of degenerate systems described by Lagrangians that are linear in velocities. We analyze the geometry underlying such systems and develop the appropriate theory for variational integration. Our main observation is that the evolution takes place on the primary constraint and the 'Hamiltonian' equations of motion can be formulated as an index 1 differential-algebraic system. We then proceed to construct variational Runge-Kutta methods and analyze their properties. The general properties of Runge-Kutta methods depend on the 'velocity' part of the Lagrangian. If the 'velocity' part is also linear in the position coordinate, then we show that non-partitioned variational Runge-Kutta methods are equivalent to integration of the corresponding first-order Euler-Lagrange equations, which have the form of a Poisson system with a constant structure matrix, and the classical properties of the Runge-Kutta method are retained. If the 'velocity' part is nonlinear in the position coordinate, we observe a reduction of the order of convergence, which is typical of numerical integration of DAEs. We also apply our methods to several models and present the results of our numerical experiments.
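For orientation, the class of Lagrangians in question can be written (in a standard form, stated here as an assumption about notation) as

\[
L(q, \dot q) = \alpha(q)^{\mathsf T} \dot q - H(q),
\]

whose Euler-Lagrange equations are already first order,

\[
\bigl(\mathrm{D}\alpha(q) - \mathrm{D}\alpha(q)^{\mathsf T}\bigr)\, \dot q = -\nabla H(q);
\]

when \(\alpha\) is linear in q the structure matrix \(\mathrm{D}\alpha - \mathrm{D}\alpha^{\mathsf T}\) is constant, which is the case in which the classical Runge-Kutta properties are retained.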

Relevance: 30.00%

Publisher:

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
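The adaptive loop just described can be sketched generically; the EC2 objective itself is not reproduced here, so `score` below is a placeholder for whichever selection criterion is plugged in, and all names are hypothetical.

```python
import numpy as np

def adaptive_testing(prior, likelihood, n_rounds, score, respond):
    """Skeleton of a BROAD-style adaptive design loop.

    prior      -- prior probabilities over candidate theories, shape (H,)
    likelihood -- likelihood[t, h, r] = P(response r | test t, theory h)
    score      -- criterion ranking candidate tests (EC2 in the thesis;
                  a placeholder here)
    respond    -- callable running test t on the subject, returning r
    """
    posterior = prior.copy()
    n_tests = likelihood.shape[0]
    for _ in range(n_rounds):
        t = max(range(n_tests), key=lambda t: score(t, posterior, likelihood))
        r = respond(t)
        posterior = posterior * likelihood[t, :, r]   # Bayes update
        posterior /= posterior.sum()
    return posterior
```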

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the chosen lotteries. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and since we do not find any signatures of it in our data.
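For reference, the conventional CRRA utility referred to above (parameterisation from the general literature, not necessarily the thesis's) is

\[
u(x) = \frac{x^{1-\rho}}{1-\rho} \quad (\rho \neq 1), \qquad u(x) = \log x \quad (\rho = 1),
\]

with \(\rho\) the coefficient of relative risk aversion.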

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
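For reference, conventional forms of the discount functions being compared (standard parameterisations, which may differ from the thesis's (α, β) notation):

\[
D_{\text{exp}}(t) = \delta^{t}, \qquad
D_{\text{hyp}}(t) = \frac{1}{1 + k t}, \qquad
D_{\text{qh}}(t) = \begin{cases} 1 & t = 0 \\ \beta\,\delta^{t} & t > 0, \end{cases} \qquad
D_{\text{gh}}(t) = (1 + \alpha t)^{-\beta/\alpha}.
\]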

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioural theories in the "wild". We pay particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than its price elasticity explains. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
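For reference, the standard prospect theory value function over gains and losses relative to a reference point (the conventional Kahneman-Tversky form) is

\[
v(x) = \begin{cases} x^{\alpha} & x \ge 0 \\ -\lambda\,(-x)^{\beta} & x < 0, \end{cases}
\]

with \(\lambda > 1\) encoding loss aversion; it is this kink at the reference point that drives the asymmetric demand response to discounts described above.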

In future work, BROAD can be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance: 30.00%

Publisher:

Abstract:

The Hamilton-Jacobi-Bellman (HJB) equation is central to stochastic optimal control (SOC) theory, yielding the optimal solution to general problems with known dynamics and a given cost functional. Under the assumption of quadratic cost on the control input, it is well known that the HJB reduces to a particular partial differential equation (PDE). While powerful, this reduction is not commonly used, as the PDE is of second order, is nonlinear, and examples exist where the problem may not have a solution in a classical sense. Furthermore, each state of the system appears as another dimension of the PDE, giving rise to the curse of dimensionality. Since the number of degrees of freedom required to solve the optimal control problem grows exponentially with dimension, the problem becomes intractable for systems of all but modest dimension.
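Written informally for dynamics \(dx = f(x,u)\,dt + \sigma(x)\,d\omega\) and running cost \(\ell(x,u)\), the HJB equation for the value function V is

\[
-\frac{\partial V}{\partial t} = \min_{u}\left[ \ell(x,u) + f(x,u)^{\mathsf T} \nabla V + \tfrac{1}{2}\,\mathrm{tr}\!\left(\sigma \sigma^{\mathsf T} \nabla^{2} V\right) \right],
\]

which is second order, nonlinear through the minimisation, and posed over the full state space, hence the curse of dimensionality noted above.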

In the last decade researchers have found that under certain, fairly non-restrictive structural assumptions, the HJB may be transformed into a linear PDE, with an interesting analogue in the discretized domain of Markov Decision Processes (MDP). The work presented in this thesis uses the linearity of this particular form of the HJB PDE to push the computational boundaries of stochastic optimal control.
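In the form popularised in the path-integral and linearly-solvable control literature (stated here from that general literature, not from the thesis itself), quadratic control cost plus a noise-cost compatibility assumption permit the log transform \(V = -\lambda \log \Psi\), under which the desirability \(\Psi\) satisfies a linear backward PDE of the schematic form

\[
-\frac{\partial \Psi}{\partial t} = -\frac{q(x)}{\lambda}\,\Psi + f(x)^{\mathsf T} \nabla \Psi + \tfrac{1}{2}\,\mathrm{tr}\!\left(\Sigma\, \nabla^{2} \Psi\right),
\]

with q the state cost and \(\Sigma\) the (assumption-matched) diffusion.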

This is done by combining previously disjoint lines of research in computation. The first of these is the use of Sum of Squares (SOS) techniques for the synthesis of control policies. A candidate polynomial with variable coefficients is proposed as the solution to the stochastic optimal control problem. An SOS relaxation is then applied to the partial differential constraints, leading to a hierarchy of semidefinite relaxations with an improving sub-optimality gap. The resulting approximate solutions are shown to be guaranteed over- and under-approximations of the optimal value function. It is shown that these results extend to arbitrary parabolic and elliptic PDEs, yielding a novel method for Uncertainty Quantification (UQ) of systems governed by partial differential constraints. Domain decomposition techniques are also made available, allowing such problems to be solved via parallelization and low-order polynomials.
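The semidefinite structure underlying the SOS relaxation is the standard one: a polynomial p of degree 2d is a sum of squares if and only if

\[
p(x) = z(x)^{\mathsf T} Q\, z(x), \qquad Q \succeq 0,
\]

where z(x) stacks the monomials up to degree d; imposing this on the (polynomial) HJB residual turns the PDE constraint into a semidefinite program.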

The optimization-based SOS technique is then contrasted with the Separated Representation (SR) approach from the applied mathematics community. The technique allows for systems of equations to be solved through a low-rank decomposition that results in algorithms that scale linearly with dimensionality. Its application in stochastic optimal control allows for previously uncomputable problems to be solved quickly, scaling to such complex systems as the Quadcopter and VTOL aircraft. This technique may be combined with the SOS approach, yielding not only a numerical technique, but also an analytical one that allows for entirely new classes of systems to be studied and for stability properties to be guaranteed.
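The separated representation mentioned approximates a d-dimensional function by a sum of r rank-one terms,

\[
f(x_1, \ldots, x_d) \;\approx\; \sum_{l=1}^{r} \prod_{i=1}^{d} f_i^{\,l}(x_i),
\]

so storage and work grow linearly in d for fixed separation rank r, which is what breaks the exponential scaling.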

The analysis of the linear HJB is completed by a study of its implications in application. It is shown that the HJB and a popular technique in robotics, the use of navigation functions, sit at opposite ends of a spectrum of optimization problems along which tradeoffs in problem complexity may be made. Analytical solutions to the HJB in these settings are available in simplified domains, providing guidance towards optimality for approximation schemes. Finally, the use of HJB equations in temporal multi-task planning problems is investigated. It is demonstrated that such problems are reducible to a sequence of SOC problems linked via boundary conditions. The linearity of the PDE allows us to pre-compute control policy primitives and then compose them, at essentially zero cost, to satisfy a complex temporal logic specification.

Relevance: 30.00%

Publisher:

Abstract:

In this work we chiefly deal with two broad classes of problems in computational materials science: determining the doping mechanism in a semiconductor and developing an extreme-condition equation of state. While solving certain aspects of these questions is well-trodden ground, both require extending the reach of existing methods to fully answer them. Here we choose to build upon the framework of density functional theory (DFT), which provides an efficient means of investigating a system from a quantum-mechanical description.

Zinc phosphide (Zn3P2) could be the basis for cheap and highly efficient solar cells. Its use in this regard is limited by the difficulty of n-type doping the material. In an effort to understand the mechanism behind this, the energetics and electronic structure of intrinsic point defects in zinc phosphide are studied using generalized Kohn-Sham theory, utilizing the Heyd, Scuseria, and Ernzerhof (HSE) hybrid functional for exchange and correlation. A novel 'perturbation extrapolation' scheme is utilized to extend the use of the computationally expensive HSE functional to this large-scale defect system. According to the calculations, the formation energies of charged phosphorus interstitial defects are very low in n-type Zn3P2, and these defects act as 'electron sinks', nullifying the desired doping and lowering the Fermi level back towards the p-type regime. Going forward, this insight provides clues for fabricating useful zinc phosphide based devices. In addition, the methodology developed for this work can be applied to further doping studies in other systems.
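The defect energetics referred to are conventionally obtained from the standard formation energy expression (stated here from the general defect literature, not from the thesis):

\[
E_f\!\left[X^{q}\right] = E_{\text{tot}}\!\left[X^{q}\right] - E_{\text{tot}}[\text{bulk}] - \sum_i n_i \mu_i + q\left(E_F + \varepsilon_{\text{VBM}} + \Delta V\right),
\]

where \(n_i\) atoms of chemical potential \(\mu_i\) are added or removed, \(E_F\) is the Fermi level referenced to the valence band maximum \(\varepsilon_{\text{VBM}}\), and \(\Delta V\) is a potential alignment correction; a low formation energy for a charged interstitial at n-type Fermi levels is exactly the 'electron sink' behaviour described above.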

Accurate determination of high pressure and temperature equations of state is fundamental in a variety of fields. However, it is often very difficult to cover a wide range of temperatures and pressures in a laboratory setting. Here we develop methods to determine a multi-phase equation of state for Ta through computation. The typical means of investigating thermodynamic properties is via 'classical' molecular dynamics, where the atomic motion is calculated from Newtonian mechanics with the electronic effects abstracted away into an interatomic potential function. For our purposes, a 'first principles' approach such as DFT is useful, as a classical potential is typically valid for only a portion of the phase diagram (i.e. whatever part it has been fit to). Furthermore, at extremes of temperature and pressure, quantum effects become critical to accurately capturing an equation of state and are very hard to capture in even complex model potentials. This requires extending the inherently zero-temperature DFT to predict the finite-temperature response of the system. Statistical modelling and thermodynamic integration are used to extend our results over all phases, as well as phase-coexistence regions which are at the limits of typical DFT validity. We deliver the most comprehensive and accurate equation of state that has been produced for Ta. This work also lends insights that can be applied to further equation-of-state work in many other materials.
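The thermodynamic integration step mentioned uses the standard identity for the free energy difference between a reference system \(U_0\) and the target \(U_1\) along a coupling parameter \(\lambda\):

\[
F_{1} = F_{0} + \int_{0}^{1} \left\langle \frac{\partial U(\lambda)}{\partial \lambda} \right\rangle_{\lambda} d\lambda, \qquad U(\lambda) = (1-\lambda)\,U_{0} + \lambda\,U_{1},
\]

where the average is taken over configurations sampled at coupling \(\lambda\); this is what allows the zero-temperature DFT results to be extended across phases and phase-coexistence regions.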