980 results for stochastic approximation algorithm


Relevance: 20.00%

Abstract:

An equivalent algorithm is proposed to simulate thermal effects of magma intrusion in geological systems composed of porous rocks. Based on a physical and mathematical equivalence, the original magma solidification problem, with a moving boundary between the rock and the intruded magma, is transformed into a new problem without the moving boundary but with a physically equivalent heat source. This equivalent heat source is determined in the paper from the analysis of an ideal solidification model. The major advantage of the proposed equivalent algorithm is that a fixed finite element mesh with a variable integration time step can be employed to simulate the thermal effect of intruded magma solidification using the conventional finite element method. The related numerical results demonstrate the correctness and usefulness of the proposed equivalent algorithm for simulating the thermal effect of intruded magma solidification in geological systems. (C) 2003 Elsevier B.V. All rights reserved.
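For orientation only, here is a minimal sketch of the fixed-mesh idea, not the paper's formulation: 1-D explicit heat conduction on a fixed grid in which the latent heat of the solidifying intrusion is injected as an equivalent volumetric heat source instead of tracking a moving front. All parameter values, the geometry and the crude phase update are assumptions.

```python
import numpy as np

# Sketch (assumed values, simplified physics): fixed grid, latent heat of
# the solidifying magma added as an equivalent volumetric heat source.
nx, length = 201, 1000.0          # grid points, domain length [m] (assumed)
dx = length / (nx - 1)
kappa = 1.0e-6                    # thermal diffusivity [m^2/s] (assumed)
rho_c = 2.5e6                     # volumetric heat capacity [J/(m^3 K)] (assumed)
latent = 4.0e8                    # latent heat per unit volume [J/m^3] (assumed)
T_rock, T_magma, T_solidus = 300.0, 1500.0, 1200.0

x = np.linspace(0.0, length, nx)
magma = np.abs(x - length / 2) < 100.0        # intruded region (assumed geometry)
T = np.where(magma, T_magma, T_rock)
solid_frac = np.zeros(nx)                     # solidified fraction of the magma

dt = 0.4 * dx * dx / kappa                    # inside the explicit stability limit
for _ in range(5000):
    lap = np.zeros(nx)
    lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / (dx * dx)
    # crude phase update: magma cells dropping below the solidus finish
    # solidifying this step and release their remaining latent heat
    dfs = np.where(magma & (T < T_solidus), 1.0 - solid_frac, 0.0)
    source = latent * dfs / dt                # equivalent heat source [W/m^3]
    T = T + dt * (kappa * lap + source / rho_c)
    solid_frac += dfs

print(T.max(), solid_frac[magma].mean())
```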

Relevance: 20.00%

Abstract:

A graph clustering algorithm constructs groups of closely related parts and machines separately. After the two groupings are matched for the fewest intercell moves, a refining process runs on the initial cell formation to decrease the number of intercell moves further. A simple modification of this main approach can handle practical constraints, such as the common constraint bounding the maximum number of machines in a cell. Our approach achieves a substantial improvement in computational time and, more importantly, in the number of intercell moves when the computational results are compared with the best-known solutions from the literature. (C) 2009 Elsevier Ltd. All rights reserved.
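The abstract does not reproduce the algorithm itself; as a hypothetical illustration of the quantity being minimized, the sketch below counts intercell moves for a made-up machine-part incidence matrix and cell assignment (all names and data are assumptions, not taken from the paper).

```python
import numpy as np

# Rows: machines, columns: parts; a 1 means the part visits that machine.
incidence = np.array([[1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1]])
machine_cell = np.array([0, 1, 0, 1])   # cell index of each machine (assumed)
part_cell = np.array([0, 1, 0, 1])      # cell index of each part (assumed)

def intercell_moves(incidence, machine_cell, part_cell):
    """Count operations performed on a machine outside the part's cell."""
    mismatch = machine_cell[:, None] != part_cell[None, :]
    return int((incidence * mismatch).sum())

print(intercell_moves(incidence, machine_cell, part_cell))  # 0 for this assignment
```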

Relevance: 20.00%

Abstract:

Most previous investigations of tide-induced watertable fluctuations in coastal aquifers have been based on one-dimensional models that describe the processes in the cross-shore direction alone, assuming negligible along-shore variability. A recent study proposed a two-dimensional approximation for tide-induced watertable fluctuations that takes coastline variations into account. Here, we develop this approximation further in two ways: by extending it to second order and by taking capillary effects into account. Our results demonstrate that both effects can markedly influence watertable fluctuations. In particular, with the first-order approximation the local damping rate of the tidal signal can be subject to sizable errors.
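For context, the classical one-dimensional result that such approximations extend (not the paper's two-dimensional, second-order, capillarity-corrected formula) follows from the linearized Boussinesq equation with a sinusoidal tidal boundary:

```latex
% Classical 1-D linearized background (assumed standard notation):
\[
\frac{\partial h}{\partial t} = \frac{K D}{n_e}\,\frac{\partial^2 h}{\partial x^2},
\qquad h(0,t) = A\cos\omega t
\quad\Longrightarrow\quad
h(x,t) = A\,e^{-kx}\cos(\omega t - kx),
\qquad k = \sqrt{\frac{n_e\,\omega}{2KD}},
\]
```

where $K$ is the hydraulic conductivity, $D$ the mean saturated thickness, $n_e$ the effective porosity, and $k$ the damping rate whose higher-order, two-dimensional counterpart is the subject of the paper.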

Relevance: 20.00%

Abstract:

The solidification of intruded magma in porous rocks has two main consequences: (1) heat release due to solidification at the interface between the rock and the intruded magma, and (2) release of volatile fluids in the region where the intruded magma has solidified. Traditionally, the intruded magma solidification problem is treated in conventional numerical methods as a moving-interface problem (i.e. the solidification interface between the rock and the intruded magma moves) in order to capture these consequences. This paper presents an alternative approach to simulating the thermal and chemical effects of magma intrusion in geological systems composed of porous rocks. In the proposed approach and algorithm, the original magma solidification problem, with a moving boundary between the rock and the intruded magma, is transformed into a new problem without the moving boundary but with the proposed mass source and a physically equivalent heat source. The major advantage of the proposed equivalent algorithm is that a fixed mesh of finite elements with a variable integration time-step can be employed to simulate the consequences and effects of intruded magma solidification using the conventional finite element method. The correctness and usefulness of the proposed equivalent algorithm are demonstrated by a benchmark magma solidification problem. Copyright (c) 2005 John Wiley & Sons, Ltd.
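As a sketch of the general idea only (the abstract does not give the paper's exact source terms, so the notation below is an assumption), the latent heat and volatiles released by a growing solidified fraction can be recast as volumetric sources in fixed-domain transport equations:

```latex
% Generic fixed-domain recasting (assumed notation, not the paper's exact terms):
\[
\rho c_p\,\frac{\partial T}{\partial t}
   = \nabla\!\cdot\!(\lambda\nabla T)
   + \underbrace{\rho L\,\frac{\partial \phi_s}{\partial t}}_{\text{equivalent heat source}},
\qquad
\frac{\partial C}{\partial t}
   = \nabla\!\cdot\!(D_c\nabla C)
   + \underbrace{\gamma\,\frac{\partial \phi_s}{\partial t}}_{\text{equivalent mass source}},
\]
```

where $\phi_s$ is the solidified fraction of the intrusion, $L$ the latent heat, and $\gamma$ the volatile content released per unit of solidified magma; both source terms vanish outside the intruded region, so the same fixed mesh covers rock and magma.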

Relevance: 20.00%

Abstract:

Extended gcd computation is interesting in itself, and it also plays a fundamental role in other calculations. We present a new algorithm for solving the extended gcd problem. This algorithm has a particularly simple description and is practical. It also provides refined bounds on the size of the multipliers obtained.
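The abstract does not reproduce the algorithm; for reference, a standard iterative extended gcd (not necessarily the authors' variant, which additionally controls the size of the multipliers) looks like this:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and a*x + b*y == g.

    Textbook iterative version; the paper's algorithm further bounds
    the size of the multipliers x and y.
    """
    x0, y0, x1, y1 = 1, 0, 0, 1
    while b != 0:
        q, a, b = a // b, b, a % b
        x0, x1 = x1, x0 - q * x1
        y0, y1 = y1, y0 - q * y1
    return a, x0, y0

g, x, y = extended_gcd(240, 46)
assert g == 2 and 240 * x + 46 * y == g
```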

Relevance: 20.00%

Abstract:

Qu-Prolog is an extension of Prolog which performs meta-level computations over object languages, such as predicate calculi and lambda-calculi, that have object-level variables and quantifier or binding symbols creating local scopes for those variables. As in Prolog, the instantiable (meta-level) variables of Qu-Prolog range over object-level terms, and other Qu-Prolog syntax denotes the various components of the object-level syntax, including object-level variables. Further, the meta-level operation of substitution into object-level terms is directly represented by appropriate Qu-Prolog syntax. Again as in Prolog, the driving mechanism in Qu-Prolog computation is a form of unification, but this is substantially more complex than for Prolog because of Qu-Prolog's greater generality, and especially because substitution operations are evaluated during unification. In this paper, the Qu-Prolog unification algorithm is specified, formalised and proved correct. Further, the analysis of the algorithm is carried out in a framework which straightforwardly allows the 'completeness' of the algorithm to be proved: though fully explicit answers to unification problems are not always provided, no information is lost in the unification process.
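The full Qu-Prolog unification algorithm is well beyond an abstract-sized example, but for orientation, the ordinary first-order unification that it generalizes can be sketched as follows (plain Python, occurs check omitted; this does not model object-level variables, quantifiers, or substitution evaluation):

```python
# Sketch of ordinary first-order unification (the special case Qu-Prolog
# generalizes).  Terms are variables (strings starting with an uppercase
# letter) or compound terms written as tuples (functor, arg1, ...).
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst=None):
    """Return an extended substitution, or None if unification fails."""
    subst = dict(subst or {})
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if is_var(t1):
        subst[t1] = t2          # no occurs check, for brevity
        return subst
    if is_var(t2):
        subst[t2] = t1
        return subst
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and len(t1) == len(t2) and t1[0] == t2[0]):
        for a, b in zip(t1[1:], t2[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

print(unify(("f", "X", "b"), ("f", "a", "Y")))  # {'X': 'a', 'Y': 'b'}
```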

Relevance: 20.00%

Abstract:

We identify a test of quantum mechanics versus macroscopic local realism in the form of stochastic electrodynamics. The test uses the steady-state triple quadrature correlations of a parametric oscillator below threshold.

Relevance: 20.00%

Abstract:

Numerical methods related to Krylov subspaces are widely used in large sparse numerical linear algebra. Vectors in these subspaces are manipulated via their representation in orthonormal bases. On serial computers, the Arnoldi method is considered a reliable technique for constructing such bases; however, although it is easily parallelizable, its communication requirements prevent it from scaling as well as expected. In this work we examine alternative methods aimed at overcoming this drawback. Since they retrieve upon completion the same information as Arnoldi's algorithm, they enable us to design a wide family of stable and scalable Krylov approximation methods for various parallel environments. We present timing results obtained from their implementation on two distributed-memory multiprocessor supercomputers: the Intel Paragon and the IBM Scalable POWERparallel SP2. (C) 1997 by John Wiley & Sons, Ltd.
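For background, the serial Arnoldi process that the alternative methods must reproduce builds an orthonormal Krylov basis as below; this is the textbook algorithm (modified Gram-Schmidt variant), not the parallel schemes studied in the paper.

```python
import numpy as np

def arnoldi(A, v0, m):
    """Textbook Arnoldi: orthonormal basis Q of span{v0, A v0, ..., A^(m-1) v0}
    and Hessenberg H satisfying A Q[:, :m] = Q @ H."""
    n = len(v0)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                # happy breakdown
            return Q[:, :j + 1], H[:j + 1, :j + 1]   # then A Q = Q H exactly
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
Q, H = arnoldi(A, rng.standard_normal(50), 10)
print(np.allclose(A @ Q[:, :10], Q @ H))       # Arnoldi relation holds
```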

Relevance: 20.00%

Abstract:

An algorithm for explicit integration of structural dynamics problems with multiple time steps is proposed that averages accelerations to obtain subcycle states at a nodal interface between regions integrated with different time steps. With integer time step ratios, the resulting subcycle updates at the interface sum to give the same effect as a central difference update over a major cycle. The algorithm is shown to have good accuracy and, in linear elastic analysis, stability properties similar to those of constant-velocity subcycling algorithms. The implementation of a generalised form of the algorithm with non-integer time step ratios is presented. (C) 1997 by John Wiley & Sons, Ltd.
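For orientation (not the multi-time-step interface treatment itself), the single-region central difference update that the subcycling scheme composes can be written for one degree of freedom as follows; all parameter values are assumed.

```python
import math

# Plain central-difference (explicit) update for one undamped degree of
# freedom, m*u'' + k*u = 0 -- the base scheme that the subcycling algorithm
# applies with different time steps in different regions.  Values assumed.
m, k = 1.0, 100.0
omega = math.sqrt(k / m)
dt = 0.2 * (2.0 / omega)          # well inside the stability limit dt < 2/omega
u, v = 1.0, 0.0                   # initial displacement and velocity
a = -(k / m) * u
for _ in range(1000):
    u_new = u + dt * v + 0.5 * dt * dt * a   # displacement update
    a_new = -(k / m) * u_new                 # acceleration at the new state
    v = v + 0.5 * dt * (a + a_new)           # velocity from averaged accelerations
    u, a = u_new, a_new
print(u, v)   # remains bounded for dt inside the stability limit
```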

Relevance: 20.00%

Abstract:

The popular Newmark algorithm, used for implicit direct integration of structural dynamics, is extended by means of a nodal partition to permit the use of different timesteps in different regions of a structural model. The algorithm developed has as a special case an explicit-explicit subcycling algorithm previously reported by Belytschko, Yen and Mullen. That algorithm has been shown, in the absence of damping or other energy dissipation, to exhibit instability over narrow timestep ranges that become narrower as the number of degrees of freedom increases, making them unlikely to be encountered in practice. The present algorithm avoids such instabilities in the case of a one-to-two timestep ratio (two subcycles), achieving unconditional stability in an exponential sense for a linear problem. However, with three or more subcycles, the stability of the trapezoidal rule becomes conditional, falling towards that of the central difference method as the number of subcycles increases. Instabilities over narrow timestep ranges, which become narrower as the model size increases, also appear with three or more subcycles. However, by moving the partition between timesteps one row of elements into the region suitable for integration with the larger timestep, these unstable timestep ranges become extremely narrow, even in simple systems with a few degrees of freedom. Accuracy is also improved. Using a version of the Newmark algorithm that dissipates high frequencies minimises or eliminates these narrow bands of instability. Viscous damping is also shown to remove these instabilities, at the expense of having more effect on the low-frequency response.
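For reference, the unpartitioned single-degree-of-freedom Newmark update with gamma = 1/2 and beta = 1/4 (the trapezoidal rule discussed above) looks like this; parameter values are assumed, and the nodal partition and subcycling are not shown.

```python
# Single-DOF Newmark update, gamma = 1/2, beta = 1/4 (trapezoidal rule).
m, c, k = 1.0, 0.0, 100.0        # mass, damping, stiffness (assumed)
gamma, beta = 0.5, 0.25
dt = 0.05
u, v = 1.0, 0.0                  # initial displacement and velocity
a = (-c * v - k * u) / m         # initial acceleration (zero external force)

k_eff = k + gamma / (beta * dt) * c + m / (beta * dt ** 2)   # effective stiffness
for _ in range(2000):
    # effective load from the previous state (zero external force assumed)
    f_eff = (m * (u / (beta * dt ** 2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
             + c * (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                    + dt * (gamma / (2 * beta) - 1) * a))
    u_new = f_eff / k_eff
    a_new = (u_new - u) / (beta * dt ** 2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
    v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
    u, v, a = u_new, v_new, a_new

print(u, v)   # bounded oscillation: the unpartitioned scheme is unconditionally stable
```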

Relevance: 20.00%

Abstract:

This paper considers a stochastic frontier production function which has an additive, heteroscedastic error structure. The model allows for negative or positive marginal production risks of inputs, as originally proposed by Just and Pope (1978). The technical efficiencies of individual firms in the sample are a function of the levels of the input variables in the stochastic frontier, in addition to the technical inefficiency effects. These are two features of the model which are not exhibited by the commonly used stochastic frontiers with multiplicative error structures. An empirical application is presented using cross-sectional data on Ethiopian peasant farmers. The null hypothesis of no technical inefficiencies of production among these farmers is accepted. Further, the flexible risk models do not fit the data on the peasant farmers as well as the traditional stochastic frontier model with multiplicative error structure.
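One common way to write such an additive, heteroscedastic frontier (the notation and the exact placement of the inefficiency term below are assumptions, not taken from the paper) is:

```latex
% Generic Just-Pope-style frontier with an inefficiency term (assumed notation):
\[
y_i \;=\; f(\mathbf{x}_i;\boldsymbol\beta) \;+\; g(\mathbf{x}_i;\boldsymbol\theta)\,v_i \;-\; u_i,
\qquad v_i \sim N(0,\sigma_v^2), \quad u_i \ge 0,
\]
```

where $f(\cdot)$ is the mean production function, $g(\cdot)$ the risk function whose derivatives give the positive or negative marginal production risks of the inputs, $v_i$ the symmetric noise, and $u_i$ the technical-inefficiency effect. With this additive structure, technical efficiency depends on the input levels through $f$ and $g$, unlike the multiplicative frontier $y_i = f(\mathbf{x}_i;\boldsymbol\beta)\exp(v_i - u_i)$.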

Relevance: 20.00%

Abstract:

We propose a simulated-annealing-based genetic algorithm for solving model parameter estimation problems. The algorithm incorporates advantages of both genetic algorithms and simulated annealing. Tests on computer-generated synthetic data that closely resemble the optical constants of a metal were performed to compare the efficiency of plain genetic algorithms against the simulated-annealing-based genetic algorithm. These tests assess the ability of the algorithms to find the global minimum and the accuracy of the values obtained for the model parameters. Finally, the algorithm with the best performance is used to fit the model dielectric function to data for platinum and aluminum. (C) 1997 Optical Society of America.
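The abstract does not spell out the hybrid; a toy illustration of the general idea, a genetic algorithm whose replacement step uses a Metropolis-style, temperature-controlled acceptance, might look like the following, where the objective function and all constants are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(p):
    # Made-up smooth test objective standing in for the model-fitting error.
    return float(np.sum((p - 1.5) ** 2))

def sa_ga(pop_size=40, n_params=3, generations=200, t0=1.0, cooling=0.98):
    """Toy simulated-annealing-flavoured genetic algorithm (illustrative only)."""
    pop = rng.uniform(-5, 5, size=(pop_size, n_params))
    fit = np.array([objective(p) for p in pop])
    temp = t0
    for _ in range(generations):
        for i in range(pop_size):
            a, b = rng.integers(pop_size, size=2)          # tournament selection
            parent = pop[a] if fit[a] < fit[b] else pop[b]
            mate = pop[rng.integers(pop_size)]
            w = rng.random(n_params)                       # blend crossover
            child = w * parent + (1 - w) * mate + rng.normal(0, 0.1, n_params)
            f_child = objective(child)
            delta = f_child - fit[i]
            # Metropolis acceptance against the member it would replace
            if delta < 0 or rng.random() < np.exp(-delta / temp):
                pop[i], fit[i] = child, f_child
        temp *= cooling                                    # annealing schedule
    best = int(np.argmin(fit))
    return pop[best], fit[best]

print(sa_ga())   # should land near p = (1.5, 1.5, 1.5)
```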

Relevance: 20.00%

Abstract:

Background: Although various techniques have been used for breast conservation surgery reconstruction, there are few studies describing a logical approach to reconstruction of these defects. The objectives of this study were to establish a classification system for partial breast defects and to develop a reconstructive algorithm. Methods: The authors reviewed a 7-year experience with 209 immediate breast conservation surgery reconstructions. Mean follow-up was 31 months. Type I defects include tissue resection in smaller breasts (bra size A/B), comprising type IA, which involves minimal defects that do not cause distortion; type IB, which involves moderate defects that cause moderate distortion; and type IC, which involves large defects that cause significant deformities. Type II includes tissue resection in medium-sized breasts with or without ptosis (bra size C), and type III includes tissue resection in large breasts with ptosis (bra size D). Results: Eighteen percent of patients presented type I defects, where a lateral thoracodorsal flap and a latissimus dorsi flap were performed in 68 percent. Forty-five percent presented type II defects, where bilateral mastopexy was performed in 52 percent. Thirty-seven percent of patients presented type III defects, where bilateral reduction mammaplasty was performed in 67 percent. Thirty-five percent of patients presented complications, most of them minor. Conclusions: An algorithm based on breast size in relation to tumor location and extent of resection can be followed to determine the best approach to reconstruction. The authors' results demonstrate that the complications were similar to those in other clinical series. Success depends on patient selection, coordinated planning with the oncologic surgeon, and careful intraoperative management.

Relevance: 20.00%

Abstract:

In this paper we study the possible microscopic origin of heavy-tailed probability density distributions for the price variation of financial instruments. We extend the standard log-normal process to include another random component, as in the so-called stochastic volatility models. We study these models under an assumption, akin to the Born-Oppenheimer approximation, in which the volatility has already relaxed to its equilibrium distribution and acts as a background to the evolution of the price process. In this approximation, we show that all models of stochastic volatility should exhibit a scaling relation in the time lag of zero-drift modified log-returns. We verify that the Dow Jones Industrial Average index indeed follows this scaling. We then focus on two popular stochastic volatility models, the Heston and Hull-White models. In particular, we show that in the Hull-White model the resulting probability distribution of log-returns in this approximation corresponds to the Tsallis (Student-t) distribution, with the Tsallis parameters given in terms of the microscopic stochastic volatility model. Finally, we show that the log-returns for 30 years of Dow Jones index data are well fitted by a Tsallis distribution, and we obtain the relevant parameters. (c) 2007 Elsevier B.V. All rights reserved.
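For reference, the two stochastic volatility models mentioned are usually written as the following pairs of SDEs (standard textbook forms; the paper's precise parameterization may differ):

```latex
% Heston model (mean-reverting variance):
\[
dS_t = \mu S_t\,dt + \sqrt{v_t}\,S_t\,dW^{(1)}_t,
\qquad
dv_t = \kappa(\theta - v_t)\,dt + \xi\sqrt{v_t}\,dW^{(2)}_t,
\]
% Hull-White model (variance follows a geometric Brownian motion):
\[
dS_t = \mu S_t\,dt + \sqrt{v_t}\,S_t\,dW^{(1)}_t,
\qquad
dv_t = \mu_v\,v_t\,dt + \xi_v\,v_t\,dW^{(2)}_t,
\]
```

with $dW^{(1)}_t\,dW^{(2)}_t = \rho\,dt$. The Born-Oppenheimer-like approximation described above treats $v_t$ as already relaxed to its stationary distribution while $S_t$ evolves against that background.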