963 results for stochastic expansion


Relevance: 20.00%

Abstract:

Purpose: The authors aim at developing a pseudo-time, sub-optimal stochastic filtering approach based on a derivative-free variant of the ensemble Kalman filter (EnKF) for solving the inverse problem of diffuse optical tomography (DOT), making use of a shape-based reconstruction strategy that represents a cross section of an inhomogeneous tumor boundary by a general closed curve.

Methods: The optical parameter fields to be recovered are approximated via an expansion in circular harmonics (CH) (Fourier basis functions), and the EnKF is used to recover the coefficients in the expansion from both simulated and experimentally obtained photon fluence data on phantoms with inhomogeneous inclusions. The process and measurement equations in the pseudo-dynamic EnKF (PD-EnKF) presently yield a parsimonious representation of the filter variables, which consist of only the Fourier coefficients and the constant scalar parameter value within the inclusion. Using fictitious, low-intensity Wiener noise processes in suitably constructed "measurement" equations, the filter variables are treated as pseudo-stochastic processes so that their recovery within a stochastic filtering framework is made possible.

Results: In our numerical simulations, we have considered both elliptical inclusions (two inhomogeneities) and those with more complex shapes (such as an annular ring and a dumbbell) in 2-D objects which are cross sections of a cylinder, with the background absorption and reduced scattering coefficients chosen as μ_a^b = 0.01 mm^-1 and μ_s'^b = 1.0 mm^-1, respectively. We also assume μ_a = 0.02 mm^-1 within the inhomogeneity (for the single-inhomogeneity case) and μ_a = 0.02 and 0.03 mm^-1 (for the two-inhomogeneity case). The reconstruction results from the PD-EnKF are shown to be consistently superior to those from a deterministic and explicitly regularized Gauss-Newton algorithm. We have also estimated the unknown μ_a from experimentally gathered fluence data and verified the reconstruction by matching the experimental data with the computed data.

Conclusions: The PD-EnKF, which exhibits little sensitivity to variations in the fictitiously introduced noise processes, is also shown to be accurate and robust in recovering a spatial map of the absorption coefficient from DOT data. With the help of the shape-based representation of the inhomogeneities and an appropriate scaling of the CH expansion coefficients representing the boundary, we have been able to recover inhomogeneities representative of the shapes of malignancies in medical diagnostic imaging. (C) 2012 American Association of Physicists in Medicine. [DOI: 10.1118/1.3679855]
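
A minimal sketch of a derivative-free EnKF analysis step for parameter estimation, of the kind the PD-EnKF builds on, is given below. The forward model, noise level, and the linear toy problem are illustrative assumptions, not the paper's DOT forward solver or its pseudo-dynamic formulation.

```python
import numpy as np

def enkf_update(ensemble, forward_model, y_obs, obs_noise_std, rng):
    """One derivative-free EnKF analysis step for parameter estimation.

    ensemble      : (N, p) array of parameter samples (e.g., CH coefficients)
    forward_model : maps a parameter vector (p,) to predicted measurements (m,)
    y_obs         : (m,) observed data (e.g., photon fluence)
    """
    N, m = ensemble.shape[0], len(y_obs)
    predictions = np.array([forward_model(theta) for theta in ensemble])   # (N, m)

    dTheta = ensemble - ensemble.mean(axis=0)        # parameter anomalies
    dY = predictions - predictions.mean(axis=0)      # predicted-data anomalies

    C_ty = dTheta.T @ dY / (N - 1)                   # cross-covariance (p, m)
    C_yy = dY.T @ dY / (N - 1) + obs_noise_std**2 * np.eye(m)
    K = np.linalg.solve(C_yy, C_ty.T).T              # Kalman gain (p, m)

    # Perturbed observations, one realization per ensemble member
    y_pert = y_obs + obs_noise_std * rng.standard_normal((N, m))
    return ensemble + (y_pert - predictions) @ K.T

# Toy usage with a linear forward model (illustration only)
rng = np.random.default_rng(0)
H = rng.standard_normal((20, 6))
truth = rng.standard_normal(6)
data = H @ truth + 0.01 * rng.standard_normal(20)

ens = rng.standard_normal((200, 6))                  # prior ensemble
ens = enkf_update(ens, lambda t: H @ t, data, obs_noise_std=0.01, rng=rng)
print("posterior-mean error:", np.round(ens.mean(axis=0) - truth, 3))
```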

Relevance: 20.00%

Abstract:

A scheme for stabilizing stochastic approximation iterates by adaptively scaling the step sizes is proposed and analyzed. This scheme leads to the same limiting differential equation as the original scheme and therefore has the same limiting behavior, while avoiding the difficulties associated with projection schemes. The proof technique requires only that the limiting o.d.e. descend a certain Lyapunov function outside an arbitrarily large bounded set. (C) 2012 Elsevier B.V. All rights reserved.
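
A small sketch of the general idea, assuming a simple noisy root-finding problem: the step size is scaled down whenever the iterate strays outside a prescribed ball, instead of projecting the iterate back. The specific scaling rule below is an illustrative stand-in, not the scheme analyzed in the paper.

```python
import numpy as np

def stabilized_sa(h, x0, n_iter=5000, radius=10.0, seed=0):
    """Stochastic approximation x_{n+1} = x_n + a_n * (h(x_n) + noise),
    with the effective step shrunk whenever the iterate leaves a ball of
    the given radius -- a stand-in for adaptive step-size scaling."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for n in range(1, n_iter + 1):
        a_n = 1.0 / n
        scale = min(1.0, radius / max(np.linalg.norm(x), 1e-12))  # shrink step if far out
        noisy_h = h(x) + rng.standard_normal(x.shape)
        x = x + a_n * scale * noisy_h
    return x

# Toy: find the zero of h(x) = -x; the limiting ODE xdot = -x descends V(x) = |x|^2 / 2
print(stabilized_sa(lambda x: -x, x0=np.array([50.0, -30.0])))
```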

Relevance: 20.00%

Abstract:

Laminar forced convection heat transfer in two-dimensional sudden-expansion flow of different nanofluids is studied numerically. The governing equations are solved using the unsteady stream function-vorticity method. The effects of the nanoparticle volume fraction and the nanoparticle type on heat transfer are examined and found to be significant. Local and average Nusselt numbers are reported for various nanoparticle types, volume fractions, and Reynolds numbers at an expansion ratio of 2. The Nusselt number reaches peak values near the reattachment point and approaches an asymptotic value downstream. The bottom-wall eddy and the volume fraction have a significant impact on the average Nusselt number.
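
As a companion to the reported quantities, the sketch below shows one way local and average Nusselt numbers along a heated bottom wall could be extracted from a computed temperature field on a uniform grid. The normalization and the synthetic temperature field are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def nusselt_bottom_wall(T, dy, T_wall, T_in, H):
    """Local and average Nusselt numbers along the bottom (heated) wall.

    T  : (ny, nx) temperature field, row 0 = bottom wall
    dy : wall-normal grid spacing; H : characteristic length (e.g., step height)
    """
    dTdy_wall = (T[1, :] - T[0, :]) / dy             # one-sided wall gradient
    nu_local = -H * dTdy_wall / (T_wall - T_in)      # local Nusselt number
    return nu_local, nu_local.mean()                 # average over a uniform grid

# Toy usage with a synthetic temperature field decaying away from a hot bottom wall
ny, nx = 41, 201
y = np.linspace(0.0, 1.0, ny)
x = np.linspace(0.0, 10.0, nx)
T = 1.0 - np.outer(y, np.exp(-0.2 * x))
nu_loc, nu_avg = nusselt_bottom_wall(T, dy=y[1] - y[0], T_wall=1.0, T_in=0.0, H=1.0)
print(round(float(nu_avg), 3))
```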

Relevance: 20.00%

Abstract:

The solution of a bivariate population balance equation (PBE) for aggregation of particles necessitates covering a large 2-d domain. A correspondingly large number of discretized equations for particle populations on pivots (representative sizes for bins) are solved, although in the end only a relatively small number of pivots are found to participate in the evolution process. In the present work, we initiate the solution of the governing PBE on a small set of pivots that can represent the initial size distribution. New pivots are added to expand the computational domain in the directions in which the evolving size distribution advances. A self-sufficient set of rules is developed to automate the addition of pivots, taken from an underlying X-grid formed by the intersection of lines of constant composition and constant particle mass. In order to test the robustness of the rule-set, simulations carried out with pivot-wise expansion of the X-grid are compared with those obtained using sufficiently large fixed X-grids for a number of composition-independent and composition-dependent aggregation kernels and initial conditions. The two techniques lead to identical predictions, with the former requiring only a fraction of the computational effort. The rule-set automatically reduces aggregation of particles of the same composition to a 1-d problem. A midway change in the direction of domain expansion, effected by the addition of particles of a different mean composition, is captured correctly by the rule-set. The evolving shape of the computational domain carries with it the signature of the aggregation process, which can be insightful under complex and time-dependent aggregation conditions. (c) 2012 Elsevier Ltd. All rights reserved.
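
The sketch below illustrates only the general idea of growing the set of active pivots in the directions the population advances, on a plain rectangular index grid rather than the paper's X-grid; the growth rule shown (activate the forward neighbours of any sufficiently populated pivot) is a stand-in for the paper's rule-set.

```python
import numpy as np

def expand_active_pivots(population, active, tol=1e-8):
    """Add grid neighbours of 'active' pivots whose population exceeds tol.

    population : dict mapping pivot index (i, j) -> number density
    active     : set of currently active pivot indices
    Returns the (possibly enlarged) active set; new pivots start empty.
    """
    new_active = set(active)
    for (i, j) in list(active):
        if population.get((i, j), 0.0) <= tol:
            continue
        for di, dj in ((1, 0), (0, 1), (1, 1)):      # growth directions (larger sizes)
            nb = (i + di, j + dj)
            if nb not in new_active:
                new_active.add(nb)
                population.setdefault(nb, 0.0)
    return new_active

# Toy usage: the domain expands only where pivots carry population
pop = {(0, 0): 1.0, (1, 0): 0.5, (0, 1): 0.5}
active = set(pop)
for _ in range(3):
    active = expand_active_pivots(pop, active)
    # (an aggregation step updating 'pop' on the active pivots would go here)
print(sorted(active))
```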

Relevance: 20.00%

Abstract:

The repeated or closely spaced eigenvalues and corresponding eigenvectors of a matrix are usually very sensitive to a perturbation of the matrix, which makes capturing the behavior of these eigenpairs very difficult. Similar difficulty is encountered in solving the random eigenvalue problem when a matrix with random elements has a set of clustered eigenvalues in its mean. In addition, the methods to solve the random eigenvalue problem often differ in characterizing the problem, which leads to different interpretations of the solution. Thus, the solutions obtained from different methods become mathematically incomparable. These two issues, the difficulty of solving and the non-unique characterization, are addressed here. A different approach is used where instead of tracking a few individual eigenpairs, the corresponding invariant subspace is tracked. The spectral stochastic finite element method is used for analysis, where the polynomial chaos expansion is used to represent the random eigenvalues and eigenvectors. However, the main concept of tracking the invariant subspace remains mostly independent of any such representation. The approach is successfully implemented in response prediction of a system with repeated natural frequencies. It is found that tracking only an invariant subspace could be sufficient to build a modal-based reduced-order model of the system. Copyright (C) 2012 John Wiley & Sons, Ltd.
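
A minimal numerical illustration of why tracking the invariant subspace is better conditioned than tracking individual eigenvectors of a clustered pair: the principal angle between the mean and perturbed two-dimensional subspaces stays small under random symmetric perturbations. A plain Monte Carlo loop is used here instead of the paper's polynomial chaos representation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Mean matrix with a repeated eigenvalue (1, 1) and a well-separated one (3)
A0 = np.diag([1.0, 1.0, 3.0])
_, V0 = np.linalg.eigh(A0)
U0 = V0[:, :2]                      # invariant subspace of the clustered pair

angles = []
for _ in range(2000):
    dA = 0.05 * rng.standard_normal((3, 3))
    A = A0 + 0.5 * (dA + dA.T)      # random symmetric perturbation
    _, V = np.linalg.eigh(A)
    U = V[:, :2]                    # perturbed subspace of the lowest two eigenvalues
    # Largest principal angle between the mean and perturbed subspaces
    s = np.linalg.svd(U0.T @ U, compute_uv=False)
    angles.append(np.arccos(np.clip(s.min(), -1.0, 1.0)))

print("mean largest principal angle (rad):", round(float(np.mean(angles)), 4))
```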

Relevance: 20.00%

Abstract:

We present a model of identical coupled two-state stochastic units, each of which in isolation is governed by a fixed refractory period. The nonlinear coupling between units directly affects the refractory period, which now depends on the global state of the system and can therefore itself become time dependent. At weak coupling the array settles into a quiescent stationary state. Increasing the coupling strength leads to a saddle-node bifurcation, beyond which the quiescent state coexists with a stable limit cycle of nonlinear coherent oscillations. We explicitly determine the critical coupling constant for this transition.
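
A toy simulation of the model's ingredients (two-state units with a refractory clock whose length depends on the global activity) is sketched below. The specific coupling form and parameters are illustrative assumptions, and the toy is not expected to reproduce the reported saddle-node bifurcation.

```python
import numpy as np

def simulate(N=1000, steps=4000, p_fire=0.05, tau0=10, coupling=0.0, seed=0):
    """Two-state (quiescent/active) units with a refractory clock.
    A unit that fires stays refractory for a period that shrinks as the
    global activity grows (illustrative coupling), then may fire again."""
    rng = np.random.default_rng(seed)
    clock = np.zeros(N, dtype=int)          # remaining refractory steps per unit
    activity = []
    for _ in range(steps):
        frac_active = np.mean(clock > 0)
        # Global-state-dependent refractory period (floor at 1 step)
        tau = max(1, int(round(tau0 * (1.0 - coupling * frac_active))))
        ready = clock == 0
        fires = ready & (rng.random(N) < p_fire)
        clock[fires] = tau
        clock[clock > 0] -= 1
        activity.append(frac_active)
    return np.array(activity)

# Compare the global activity statistics at weak and stronger coupling
for c in (0.0, 0.9):
    a = simulate(coupling=c)[2000:]
    print(f"coupling={c}: mean activity={a.mean():.3f}, std={a.std():.3f}")
```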

Relevance: 20.00%

Abstract:

In this paper, we consider the problem of computing numerical solutions for stochastic differential equations (SDEs) of Itô form. A fully explicit method, the split-step forward Milstein (SSFM) method, is constructed for solving SDEs. It is proved that the SSFM method is convergent with strong order γ = 1 in the mean-square sense. The stability analysis shows that the mean-square stability properties of the proposed method improve on those of the Milstein method and of three-stage Milstein methods.
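
One plausible explicit split-step construction of this type (a forward drift sub-step followed by a Milstein diffusion update at the intermediate point) is sketched below and checked informally on geometric Brownian motion, where the exact solution is known; the precise scheme analyzed in the paper may differ in detail.

```python
import numpy as np

def ssfm_step(y, h, dW, f, g, dg):
    """One explicit split-step Milstein-type step (illustrative form):
    drift sub-step first, then a Milstein diffusion update at the
    intermediate point.  dg is the derivative of the diffusion g."""
    y_star = y + h * f(y)                          # explicit drift sub-step
    return (y_star
            + g(y_star) * dW
            + 0.5 * g(y_star) * dg(y_star) * (dW**2 - h))

# Test on geometric Brownian motion dX = mu X dt + sigma X dW (exact solution known)
mu, sigma, T, x0 = 1.5, 0.5, 1.0, 1.0
f, g, dg = (lambda x: mu * x), (lambda x: sigma * x), (lambda x: sigma)

rng = np.random.default_rng(0)
for n_steps in (50, 100, 200):
    h = T / n_steps
    errs = []
    for _ in range(2000):
        dW = np.sqrt(h) * rng.standard_normal(n_steps)
        x = x0
        for k in range(n_steps):
            x = ssfm_step(x, h, dW[k], f, g, dg)
        x_exact = x0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * dW.sum())
        errs.append(abs(x - x_exact))
    print(f"h={h:.4f}  mean strong error={np.mean(errs):.5f}")
```

The mean strong error should roughly halve when the step size is halved, consistent with strong order 1.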

Relevance: 20.00%

Abstract:

We study zero-sum risk-sensitive stochastic differential games on the infinite horizon with discounted and ergodic payoff criteria. Under certain assumptions, we establish the existence of values and saddle-point equilibria. We obtain our results by studying the corresponding Hamilton-Jacobi-Isaacs equations. Finally, we show that the value of the ergodic payoff criterion is a constant multiple of the maximal eigenvalue of the generators of the associated nonlinear semigroups.

Relevance: 20.00%

Abstract:

In this article, we address stochastic differential games of mixed type with both control and stopping times. Under standard assumptions, we show that the value of the game can be characterized as the unique viscosity solution of corresponding Hamilton-Jacobi-Isaacs (HJI) variational inequalities.

Relevance: 20.00%

Abstract:

The use of mutagenic drugs to drive HIV-1 past its error threshold presents a novel intervention strategy, as suggested by the quasispecies theory, that may be less susceptible to failure via viral mutation-induced emergence of drug resistance than current strategies. The error threshold of HIV-1, μ_c, however, is not known. Application of the quasispecies theory to determine μ_c poses significant challenges: whereas the quasispecies theory considers the asexual reproduction of an infinitely large population of haploid individuals, HIV-1 is diploid, undergoes recombination, and is estimated to have a small effective population size in vivo. We performed population genetics-based stochastic simulations of the within-host evolution of HIV-1 and estimated the structure of the HIV-1 quasispecies and μ_c. We found that with small mutation rates, the quasispecies was dominated by genomes with few mutations. Upon increasing the mutation rate, a sharp error catastrophe occurred where the quasispecies became delocalized in sequence space. Using parameter values that quantitatively captured data of viral diversification in HIV-1 patients, we estimated μ_c to be 7 × 10^-5 to 1 × 10^-4 substitutions/site/replication, roughly 2- to 6-fold higher than the natural mutation rate of HIV-1, suggesting that HIV-1 survives close to its error threshold and may be readily susceptible to mutagenic drugs. The latter estimate was weakly dependent on the within-host effective population size of HIV-1. With large population sizes and in the absence of recombination, our simulations converged to the quasispecies theory, bridging the gap between quasispecies theory and population genetics-based approaches to describing HIV-1 evolution. Further, μ_c increased with the recombination rate, rendering HIV-1 less susceptible to error catastrophe and thus elucidating an added benefit of recombination to HIV-1. Our estimate of μ_c may serve as a quantitative guideline for the use of mutagenic drugs against HIV-1.
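
A toy Wright-Fisher-style simulation of the underlying effect is sketched below: the frequency of the mutation-free class collapses as the genomic mutation rate grows. It uses a haploid model with multiplicative fitness and placeholder parameters, and ignores diploidy, recombination, and the within-host HIV-1 parameters fitted in the paper.

```python
import numpy as np

def master_frequency(mu_per_site, L=100, s=0.05, pop=5000, gens=300, seed=0):
    """Fraction of mutation-free ('master') genomes at the end of a
    Wright-Fisher simulation with multiplicative fitness (1 - s)^k,
    where k is the number of deleterious mutations carried."""
    rng = np.random.default_rng(seed)
    k = np.zeros(pop, dtype=int)                     # mutations per individual
    for _ in range(gens):
        w = (1.0 - s) ** k                           # fitness
        parents = rng.choice(pop, size=pop, p=w / w.sum())
        k = k[parents] + rng.poisson(mu_per_site * L, size=pop)  # new mutations
    return np.mean(k == 0)

for mu in (1e-4, 5e-4, 1e-3, 5e-3, 1e-2):
    print(f"mu={mu:.0e} per site: master fraction={master_frequency(mu):.3f}")
```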

Relevance: 20.00%

Abstract:

Unlike zero-sum stochastic games, a difficult problem in general-sum stochastic games is to obtain verifiable conditions for Nash equilibria. We show in this paper that by splitting an associated non-linear optimization problem into several sub-problems, characterization of Nash equilibria in general-sum discounted stochastic games is possible. Using the aforementioned sub-problems, we derive a set of necessary and sufficient verifiable conditions (termed KKT-SP conditions) for a strategy pair to result in a Nash equilibrium. Also, we show that any algorithm which tracks the zero of the gradient of the Lagrangian of every sub-problem provides a Nash strategy pair. (c) 2012 Elsevier Ltd. All rights reserved.
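
The KKT-SP conditions themselves are not reproduced here; as a simpler point of reference, the sketch below verifies a candidate stationary strategy pair in a small general-sum discounted stochastic game by the brute-force route of checking that neither player gains from a unilateral deviation, computed by solving each player's induced MDP.

```python
import numpy as np

def induced_mdp(P, R, pi_other, player):
    """Average out the opponent's strategy to get this player's MDP.
    P: (S, A1, A2, S) transitions; R: (S, A1, A2) rewards for 'player';
    pi_other: stationary strategy of the opponent."""
    if player == 0:   # opponent acts along axis 2
        return np.einsum('sabt,sb->sat', P, pi_other), np.einsum('sab,sb->sa', R, pi_other)
    return np.einsum('sabt,sa->sbt', P, pi_other), np.einsum('sab,sa->sb', R, pi_other)

def policy_value(P_i, R_i, pi, beta):
    """Policy evaluation: v = R_pi + beta * P_pi v."""
    R_pi = np.einsum('sa,sa->s', R_i, pi)
    P_pi = np.einsum('sat,sa->st', P_i, pi)
    return np.linalg.solve(np.eye(len(R_pi)) - beta * P_pi, R_pi)

def best_response_value(P_i, R_i, beta, iters=2000):
    v = np.zeros(R_i.shape[0])
    for _ in range(iters):   # value iteration on the induced MDP
        v = np.max(R_i + beta * np.einsum('sat,t->sa', P_i, v), axis=1)
    return v

def is_nash(P, R1, R2, pi1, pi2, beta, tol=1e-6):
    ok = True
    for player, (R, pi_self, pi_other) in enumerate([(R1, pi1, pi2), (R2, pi2, pi1)]):
        P_i, R_i = induced_mdp(P, R, pi_other, player)
        v = policy_value(P_i, R_i, pi_self, beta)
        ok &= np.all(best_response_value(P_i, R_i, beta) - v <= tol)
    return bool(ok)

# Tiny random game; a uniform strategy pair is usually *not* a Nash equilibrium
rng = np.random.default_rng(0)
S, A1, A2, beta = 3, 2, 2, 0.9
P = rng.random((S, A1, A2, S)); P /= P.sum(axis=-1, keepdims=True)
R1, R2 = rng.random((S, A1, A2)), rng.random((S, A1, A2))
print(is_nash(P, R1, R2, np.full((S, A1), 0.5), np.full((S, A2), 0.5), beta))
```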

Relevance: 20.00%

Abstract:

We examine the large-order behavior of a recently proposed renormalization-group-improved expansion of the Adler function in perturbative QCD, which sums in an analytically closed form the leading logarithms accessible from renormalization-group invariance. The expansion is first written as an effective series in powers of the one-loop coupling, and its leading singularities in the Borel plane are shown to be identical to those of the standard "contour-improved" expansion. Applying the technique of conformal mappings for the analytic continuation in the Borel plane, we define a class of improved expansions which implement both renormalization-group invariance and the knowledge about the large-order behavior of the series. Detailed numerical studies of specific models for the Adler function indicate that the new expansions have remarkable convergence properties up to high orders. Using these expansions for the determination of the strong coupling from the hadronic width of the tau lepton we obtain, with a conservative estimate of the uncertainty due to the nonperturbative corrections, α_s(M_τ²) = 0.3189 (+0.0173 / -0.0151), which translates to α_s(M_Z²) = 0.1184 (+0.0021 / -0.0018). DOI: 10.1103/PhysRevD.87.014008
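
For reference, the conformal mapping commonly used in such analyses to map the cut Borel plane of the Adler function onto the unit disk, with the nearest branch points taken at u = -1 (leading ultraviolet renormalon) and u = 2 (leading infrared renormalon), reads as follows; whether the paper uses exactly this variant is an assumption here.

```latex
% Conformal mapping of the cut Borel u-plane onto the unit disk |w| < 1,
% assuming the nearest branch points sit at u = -1 (UV) and u = 2 (IR):
\[
  w(u) \;=\; \frac{\sqrt{1+u} - \sqrt{1-u/2}}{\sqrt{1+u} + \sqrt{1-u/2}} .
\]
% The Borel transform is then re-expanded in powers of w(u); the new series
% converges in the whole cut u-plane rather than only inside the circle |u| < 1.
```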

Relevance: 20.00%

Abstract:

The q-Gaussian distribution results from maximizing certain generalizations of Shannon entropy under some constraints. The importance of q-Gaussian distributions stems from the fact that they exhibit power-law behavior, and also generalize Gaussian distributions. In this paper, we propose a Smoothed Functional (SF) scheme for gradient estimation using q-Gaussian distribution, and also propose an algorithm for optimization based on the above scheme. Convergence results of the algorithm are presented. Performance of the proposed algorithm is shown by simulation results on a queuing model.
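
A simplified sketch of the idea is given below: perturbations are drawn from a q-Gaussian (realized, for 1 < q < 3, as a rescaled Student-t with nu = (3 - q)/(q - 1) degrees of freedom) and a two-sided perturbation-difference gradient estimate drives a descent loop. The weighting in the paper's q-Gaussian SF estimator differs in detail, so this illustrates the structure only.

```python
import numpy as np

def q_gaussian_samples(q, size, rng):
    """Unit-scale q-Gaussian perturbations for 1 < q < 3, using the fact that
    a q-Gaussian is (up to scale) a Student-t with nu = (3 - q)/(q - 1) d.o.f."""
    nu = (3.0 - q) / (q - 1.0)
    return rng.standard_t(nu, size=size)

def sf_gradient(f, x, q=1.2, beta=0.1, n_samples=200, rng=None):
    """Two-sided smoothed-functional gradient estimate with q-Gaussian
    perturbations (simplified weighting; illustrative only)."""
    if rng is None:
        rng = np.random.default_rng()
    g = np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        eta = q_gaussian_samples(q, x.shape, rng)
        g += eta * (f(x + beta * eta) - f(x - beta * eta)) / (2.0 * beta)
    return g / n_samples

# Toy usage: descend f(x) = |x - 1|^2 with the SF gradient estimate
rng = np.random.default_rng(0)
f = lambda x: float(np.sum((x - 1.0) ** 2))
x = np.zeros(3)
for _ in range(200):
    x -= 0.05 * sf_gradient(f, x, rng=rng)
print(np.round(x, 2))
```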

Relevance: 20.00%

Abstract:

Laminar two-dimensional sudden-expansion flow of different nanofluids is studied numerically. The governing equations are solved using the stream function-vorticity method. The effects of the nanoparticle volume fraction and the nanoparticle type on the flow behaviour are examined and found to be significant. The flow response to Reynolds number in the presence of nanoparticles is also examined; the presence of nanoparticles decreases the Reynolds number at which the flow bifurcates. The size and the reattachment length of the bottom-wall recirculation increase with increasing volume fraction and particle density. The effect of volume fraction and nanoparticle density on the friction factor is reported. The bottom-wall recirculation responds strongly to variations in volume fraction and particle type, whereas only a weak response is observed for the top-wall recirculation.
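
A small sketch of how the bottom-wall reattachment length could be extracted from a computed velocity field follows: the reattachment point is taken as the downstream location where the wall shear (the wall-normal gradient of the streamwise velocity at the wall) changes sign from negative to positive. The velocity field below is synthetic, used only to exercise the function.

```python
import numpy as np

def reattachment_length(u, y, x, x_step=0.0):
    """Reattachment point on the bottom wall: first sign change (- to +)
    of du/dy at the wall downstream of the step located at x_step.

    u : (ny, nx) streamwise velocity, row 0 = bottom wall
    """
    dudy_wall = (u[1, :] - u[0, :]) / (y[1] - y[0])   # one-sided wall gradient
    downstream = x > x_step
    xs, tau = x[downstream], dudy_wall[downstream]
    for k in range(len(tau) - 1):
        if tau[k] < 0.0 <= tau[k + 1]:
            # Linear interpolation of the zero crossing
            x_r = xs[k] - tau[k] * (xs[k + 1] - xs[k]) / (tau[k + 1] - tau[k])
            return x_r - x_step
    return np.nan

# Synthetic field: near-wall velocity changes sign at x = 5 (recirculation up to there)
x = np.linspace(0.0, 15.0, 301)
y = np.linspace(0.0, 1.0, 41)
u = np.outer(y, np.tanh(x - 5.0))      # u < 0 near the wall for x < 5
print(round(float(reattachment_length(u, y, x)), 2))
```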

Relevance: 20.00%

Abstract:

Service systems are labor intensive, and their workload tends to vary greatly with time. Adapting the staffing levels to the workloads in such systems is nontrivial due to the large number of parameters and operational variations, but crucial for business objectives such as minimal labor inventory. One of the central challenges is to optimize the staffing while maintaining system steady state and compliance with aggregate SLA constraints. We formulate this problem as a parametrized constrained Markov process and propose a novel stochastic optimization algorithm for solving it. Our algorithm is a multi-timescale stochastic approximation scheme that incorporates an SPSA-based algorithm for 'primal descent' and couples it with a 'dual ascent' scheme for the Lagrange multipliers. We validate this optimization scheme on five real-life service systems and compare it with a state-of-the-art optimization toolkit, OptQuest. Being two orders of magnitude faster than OptQuest, our scheme is particularly suitable for adaptive labor staffing. We also observe that it guarantees convergence and finds better solutions than OptQuest in many cases.
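
A minimal sketch of the two-timescale structure is given below: SPSA gradient estimates drive the primal (staffing-parameter) update on the faster timescale, while dual ascent on a Lagrange multiplier for an aggregate constraint runs on the slower timescale. The cost and constraint functions are placeholders, not the paper's service-system simulator or SLA constraints.

```python
import numpy as np

def spsa_gradient(L, theta, delta, rng):
    """Two-measurement SPSA estimate of grad_theta L using a random
    +/-1 (Bernoulli) simultaneous perturbation."""
    d = rng.choice([-1.0, 1.0], size=theta.shape)
    return (L(theta + delta * d) - L(theta - delta * d)) / (2.0 * delta) * (1.0 / d)

def constrained_spsa(cost, constraint, theta0, iters=3000, seed=0):
    """Primal descent on the Lagrangian via SPSA (faster timescale), coupled
    with dual ascent on the multiplier for 'constraint(theta) <= 0' (slower)."""
    rng = np.random.default_rng(seed)
    theta, lam = np.asarray(theta0, dtype=float), 0.0
    for n in range(1, iters + 1):
        a_n, b_n, delta_n = 1.0 / n, 1.0 / n**0.6, 0.1 / n**0.101
        L = lambda t: cost(t) + lam * constraint(t)                   # Lagrangian
        theta = theta - b_n * spsa_gradient(L, theta, delta_n, rng)   # primal step
        lam = max(0.0, lam + a_n * constraint(theta))                 # dual step
    return theta, lam

# Placeholder problem: minimize |theta|^2 subject to theta_1 + theta_2 >= 1
cost = lambda t: float(np.sum(t ** 2))
constraint = lambda t: 1.0 - float(t[0] + t[1])          # <= 0 when satisfied
theta, lam = constrained_spsa(cost, constraint, theta0=np.zeros(2))
print(np.round(theta, 2), round(lam, 2))
```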