934 results for Stochastic Monotony


Relevance: 20.00%

Publisher:

Abstract:

The q-Gaussian distribution results from maximizing certain generalizations of Shannon entropy under some constraints. q-Gaussian distributions are important because they exhibit power-law behavior and generalize Gaussian distributions. In this paper, we propose a Smoothed Functional (SF) scheme for gradient estimation using the q-Gaussian distribution, and we propose an optimization algorithm based on this scheme. Convergence results for the algorithm are presented, and its performance is demonstrated through simulation on a queueing model.
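As a concrete illustration, here is a minimal sketch of a two-sided smoothed-functional gradient estimator. For simplicity it uses standard Gaussian perturbations (the q -> 1 limit); the paper's scheme would instead draw the perturbations from a q-Gaussian distribution.

```python
import numpy as np

def sf_gradient(f, x, beta=0.1, n_samples=200, rng=None):
    """Two-sided smoothed-functional (SF) gradient estimate of f at x.

    Perturbations here are standard Gaussian (the q -> 1 limit); the
    paper's scheme draws them from a q-Gaussian distribution instead.
    """
    rng = np.random.default_rng(rng)
    g = np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        z = rng.standard_normal(x.shape)
        g += z * (f(x + beta * z) - f(x - beta * z)) / (2.0 * beta)
    return g / n_samples

# Usage: simple gradient descent on a smooth test function.
f = lambda x: np.sum((x - 1.0) ** 2)
x = np.zeros(3)
rng = np.random.default_rng(0)
for _ in range(300):
    x -= 0.05 * sf_gradient(f, x, rng=rng)
# x is now close to the minimizer (1, 1, 1)
```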

Abstract:

Service systems are labor intensive, and their workload tends to vary greatly with time. Adapting staffing levels to the workload in such systems is nontrivial, owing to the large number of parameters and operational variations, but it is crucial for business objectives such as minimal labor inventory. A central challenge is to optimize the staffing while maintaining system steady state and compliance with aggregate SLA constraints. We formulate this problem as a parametrized constrained Markov process and propose a novel stochastic optimization algorithm for solving it. Our algorithm is a multi-timescale stochastic approximation scheme that incorporates an SPSA-based algorithm for 'primal descent' and couples it with a 'dual ascent' scheme for the Lagrange multipliers. We validate this optimization scheme on five real-life service systems and compare it with the state-of-the-art optimization toolkit OptQuest. Being two orders of magnitude faster than OptQuest, our scheme is particularly suitable for adaptive labor staffing. Moreover, it guarantees convergence and, in many cases, finds better solutions than OptQuest.
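A toy sketch of the primal-dual idea: SPSA-based descent on a fast timescale coupled with Lagrange-multiplier ascent on a slow timescale. The objective, constraint, and step-size choices below are illustrative stand-ins, not the paper's service-system model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative noisy objective and constraint (hypothetical stand-ins for a
# simulated staffing cost and an aggregate SLA constraint g(theta) <= 0).
def cost(theta):
    return np.sum(theta ** 2) + 0.01 * rng.standard_normal()

def constraint(theta):  # g(theta) = 1 - sum(theta); we require g <= 0
    return 1.0 - np.sum(theta) + 0.01 * rng.standard_normal()

theta, lam = np.zeros(2), 0.0
for k in range(1, 5001):
    a, c, b = 0.1 / k ** 0.602, 0.1 / k ** 0.101, 1.0 / k  # step sizes
    delta = rng.choice([-1.0, 1.0], size=theta.shape)      # SPSA perturbation
    L = lambda th: cost(th) + lam * constraint(th)         # Lagrangian
    ghat = delta * (L(theta + c * delta) - L(theta - c * delta)) / (2.0 * c)
    theta = theta - a * ghat                               # fast: primal descent
    lam = max(0.0, lam + b * constraint(theta))            # slow: dual ascent
# Expected fixed point for this toy problem: theta ~ (0.5, 0.5), lam ~ 1
```

The key design choice, as in the paper, is that the primal step sizes decay more slowly than the dual ones, so the primal iterate effectively sees a quasi-static multiplier.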

Abstract:

We revisit the issue of considering stochasticity of Grassmannian coordinates in N = 1 superspace, which was analyzed previously by Kobakhidze et al. In this stochastic supersymmetry (SUSY) framework, the soft SUSY breaking terms of the minimal supersymmetric Standard Model (MSSM), such as the bilinear Higgs mixing, the trilinear coupling, and the gaugino mass parameters, are all proportional to a single mass parameter xi, a measure of the supersymmetry breaking arising from the stochasticity. While a nonvanishing trilinear coupling at the high scale is a natural outcome of the framework, and a favorable ingredient for obtaining the lighter Higgs boson mass m(h) at 125 GeV, the model produces tachyonic sleptons, or staus that turn out to be too light. The previous analyses took Lambda, the scale at which the input parameters are given, to be larger than the gauge coupling unification scale M-G in order to generate acceptable scalar masses radiatively at the electroweak scale; still, this was inadequate for obtaining m(h) at 125 GeV. We find that a Higgs boson at 125 GeV is readily achievable, provided we accommodate a nonvanishing scalar-mass soft SUSY breaking term, similar to what is done in minimal anomaly mediated SUSY breaking (AMSB), in contrast to a pure AMSB setup. The model can thus easily accommodate the Higgs data, LHC limits on squark masses, WMAP data on the dark matter relic density, flavor physics constraints, and XENON100 data. In contrast to the previous analyses, we take Lambda = M-G, thereby avoiding any ambiguities of post-grand-unified-theory physics. The idea of stochastic superspace can easily be generalized to various scenarios beyond the MSSM. DOI: 10.1103/PhysRevD.87.035022

Abstract:

The impact of global warming on daily rainfall is examined using atmospheric variables from five General Circulation Models (GCMs) and a stochastic downscaling model. Daily rainfall at eleven rain gauges over the Malaprabha catchment of India and National Centers for Environmental Prediction (NCEP) reanalysis data at grid points over the catchment for the continuous period 1971-2000 (current climate) are used to calibrate the downscaling model. The downscaled rainfall simulations obtained using GCM atmospheric variables corresponding to the IPCC-SRES (Intergovernmental Panel on Climate Change - Special Report on Emissions Scenarios) A2 emission scenario for the same period are used to validate the results. Following this, future downscaled rainfall projections are constructed and examined for two 20-year time slices, viz. 2055 (i.e. 2046-2065) and 2090 (i.e. 2081-2100). The model shows reasonable skill in simulating the rainfall over the study region for the current climate. The downscaled rainfall projections indicate no significant changes in the rainfall regime of this catchment in the future. More specifically, decreases of 2% by 2055 and 5% by 2090 in monsoon (JJAS) rainfall relative to the current climate (1971-2000) are noticed under global warming conditions. Pre-monsoon (JFMAM) rainfall is projected to increase by 2% in 2055 and 6% in 2090, and post-monsoon (OND) rainfall by 2% in 2055 and 12% in 2090, over the region. On an annual basis, slight decreases of 1% and 2% are noted for 2055 and 2090, respectively.

Abstract:

Many studies investigating the effect of human social connectivity structures (networks) and human behavioral adaptations on the spread of infectious diseases have assumed either a static connectivity structure or a network which adapts itself in response to the epidemic (adaptive networks). However, human social connections are inherently dynamic or time varying. Furthermore, the spread of many infectious diseases occurs on a time scale comparable to that of the evolving network structure. Here we aim to quantify the effect of human behavioral adaptations on the spread of asymptomatic infectious diseases on time-varying networks. We perform a full stochastic analysis using a continuous-time Markov chain approach to calculate the outbreak probability, mean epidemic duration, epidemic reemergence probability, etc. Additionally, we use mean-field theory to calculate epidemic thresholds. Theoretical predictions are verified using extensive simulations. Our studies have uncovered the existence of an "adaptive threshold": when the ratio of the susceptibility (or infectivity) rate to the recovery rate is below the threshold value, adaptive behavior can prevent the epidemic; above the threshold, no amount of behavioral adaptation can prevent it. Our analyses suggest that the interaction patterns of the infected population play a major role in sustaining the epidemic. Our results have implications for epidemic containment policies, as awareness campaigns and human behavioral responses can be effective only if the interaction levels of the infected populace are kept in check.
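For flavor, here is a minimal continuous-time Markov chain (Gillespie-type) sketch of SIS dynamics in which susceptible nodes deactivate links to infected neighbours at a rate omega. This is a generic stand-in for the behavioural adaptations discussed above, not the paper's exact model; the graph, rates, and initial condition are invented.

```python
import random

def adaptive_sis(n=200, m=400, beta=0.3, gamma=0.2, omega=0.5, t_max=50.0, seed=0):
    """Gillespie simulation of SIS dynamics on a random network in which
    susceptible nodes deactivate edges to infected neighbours at rate
    omega (a simple proxy for behavioural adaptation)."""
    rng = random.Random(seed)
    edges = set()
    while len(edges) < m:                      # simple random graph on n nodes
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v:
            edges.add((min(u, v), max(u, v)))
    infected = set(rng.sample(range(n), 10))
    t = 0.0
    while t < t_max and infected:
        si = [e for e in edges if (e[0] in infected) != (e[1] in infected)]
        rates = [beta * len(si), gamma * len(infected), omega * len(si)]
        total = sum(rates)
        t += rng.expovariate(total)
        r = rng.random() * total
        if r < rates[0]:                       # infection along an S-I edge
            u, v = rng.choice(si)
            infected.add(u if v in infected else v)
        elif r < rates[0] + rates[1]:          # recovery
            infected.discard(rng.choice(sorted(infected)))
        else:                                  # adaptation: drop an S-I edge
            edges.discard(rng.choice(si))
    return t, len(infected)

t_end, n_inf = adaptive_sis()
```

Raising omega relative to beta mimics moving below the adaptive threshold: link deactivation starves the infection of S-I contacts.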

Abstract:

The random eigenvalue problem arises in frequency and mode shape determination for a linear system with uncertainties in structural properties. Among several methods of characterizing this random eigenvalue problem, one computationally fast method that gives good accuracy is a weak formulation using polynomial chaos expansion (PCE). In this method, the eigenvalues and eigenvectors are expanded in PCE, and the residual is minimized by a Galerkin projection. The goals of the current work are (i) to implement this PCE-characterized random eigenvalue problem in dynamic response calculations under random loading and (ii) to explore the computational advantages and challenges. In the proposed method, the response quantities are also expressed in PCE, followed by a Galerkin projection. A numerical comparison with a perturbation method and Monte Carlo simulation shows that when the loading has a random amplitude but deterministic frequency content, the proposed method gives more accurate results than a first-order perturbation method, and accuracy comparable to that of Monte Carlo simulation at lower computational cost. However, as the frequency content of the loading becomes random, or for general random process loadings, the method loses its accuracy and computational efficiency. Issues in implementation, limitations, and further challenges are also addressed.
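A minimal non-intrusive sketch of the PCE idea for a random eigenvalue: project the lowest eigenvalue of K(xi) = K0 + xi*K1, xi ~ N(0,1), onto probabilists' Hermite polynomials via Gauss-Hermite quadrature. The matrices are invented for illustration (K1 is a scaled identity, so the eigenvalue is exactly linear in xi and the expansion is easy to check); the paper's weak Galerkin formulation couples eigenvalues and eigenvectors rather than sampling them.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# K(xi) = K0 + xi * K1 with xi ~ N(0, 1); K1 is a scaled identity, so the
# lowest eigenvalue is exactly lambda_min(K0) + 0.3*xi.
K0 = np.array([[4.0, -1.0], [-1.0, 3.0]])
K1 = 0.3 * np.eye(2)

order = 4
nodes, weights = hermegauss(20)
weights = weights / np.sqrt(2.0 * np.pi)     # normalise to the N(0,1) density

coeffs = np.zeros(order + 1)
for xi, w in zip(nodes, weights):
    lam = np.linalg.eigvalsh(K0 + xi * K1)[0]  # lowest eigenvalue sample
    for j in range(order + 1):
        basis = np.zeros(j + 1)
        basis[j] = 1.0
        # c_j = E[lambda * He_j] / E[He_j^2], with E[He_j^2] = j!
        coeffs[j] += w * lam * hermeval(xi, basis) / math.factorial(j)

mean = coeffs[0]
var = sum(math.factorial(j) * coeffs[j] ** 2 for j in range(1, order + 1))
```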

Abstract:

Biological nanopores provide optimal dimensions and an optimal environment to study the early aggregation kinetics of charged polyaromatic molecules in the nano-confined regime. It is expected that probing the early stages of nucleation will enable us to design strategies for supramolecular assembly and biocrystallization processes. Specifically, we have studied the translocation dynamics of coronene- and perylene-based salts through the alpha-hemolysin (alpha-HL) protein nanopore. The characteristic blocking events in the time-series signal are a function of concentration and bias voltage. We argue that the different blocking events arise from different aggregation processes, as captured by all-atom molecular dynamics (MD) simulations. These confinement-induced aggregations of polyaromatic chromophores during the different stages of translocation are correlated with the spatial symmetry and charge distribution of the molecules.

Abstract:

We consider the problem of developing privacy-preserving machine learning algorithms in a distributed multiparty setting. Here different parties own different parts of a data set, and the goal is to learn a classifier from the entire data set without any party revealing any information about the individual data points it owns. Pathak et al. [7] recently proposed a solution to this problem in which each party learns a local classifier from its own data, and a third party then aggregates these classifiers in a privacy-preserving manner using a cryptographic scheme. The generalization performance of their algorithm is sensitive to the number of parties and the relative fractions of data owned by the different parties. In this paper, we describe a new differentially private algorithm for the multiparty setting that uses a stochastic gradient descent based procedure to directly optimize the overall multiparty objective rather than combining classifiers learned from optimizing local objectives. The algorithm achieves a slightly weaker form of differential privacy than that of [7], but provides improved generalization guarantees that do not depend on the number of parties or the relative sizes of the individual data sets. Experimental results corroborate our theoretical findings.
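For orientation, here is a generic sketch of differentially private SGD — per-example gradient clipping plus Gaussian noise — on a toy linear classifier. This is the standard DP-SGD recipe shown for illustration, not the paper's multiparty algorithm, and the noise scale below is not privacy-calibrated.

```python
import numpy as np

def dp_sgd(X, y, epochs=20, lr=0.1, clip=1.0, sigma=0.5, seed=0):
    """SGD on a hinge loss with per-example gradient clipping and Gaussian
    noise (illustrative DP-SGD; the noise calibration is ad hoc)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            # hinge-loss subgradient for example i
            g = -y[i] * X[i] if y[i] * (X[i] @ w) < 1.0 else np.zeros_like(w)
            g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))        # clip
            g += sigma * clip * rng.standard_normal(w.shape) / len(X)  # noise
            w -= lr * g
    return w

# Toy linearly separable data
rng = np.random.default_rng(1)
y = rng.choice([-1.0, 1.0], size=200)
X = rng.standard_normal((200, 2))
X[:, 0] += 2.0 * y                 # class means at +-2 on the first feature
w = dp_sgd(X, y)
acc = np.mean(np.sign(X @ w) == y)
```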

Abstract:

Stochastic modelling is a useful way of simulating complex hard-rock aquifers, as hydrological properties (permeability, porosity etc.) can be described using random variables with known statistics. However, very few studies have assessed the influence of topological uncertainty (i.e. the variability of thickness of the conductive zones in the aquifer), probably because it is not easy to retrieve accurate statistics of the aquifer geometry, especially in a hard-rock context. In this paper, we assessed the potential of using geophysical surveys to describe the geometry of a hard-rock aquifer in a stochastic modelling framework. The study site was a small experimental watershed in South India, where the aquifer consisted of a clayey to loamy-sandy zone (regolith) underlain by a conductive fissured rock layer (protolith) and the unweathered gneiss (bedrock) at the bottom. The spatial variability of the thickness of the regolith and fissured layers was estimated from electrical resistivity tomography (ERT) profiles performed along a few cross sections in the watershed. For stochastic analysis using Monte Carlo simulation, the generated random layer thickness was made conditional on the available geophysical data. In order to simulate steady-state flow in the irregular domain with variable geometry, we used an isoparametric finite element method to discretize the flow equation over an unstructured grid with irregular hexahedral elements. The results indicated that the spatial variability of the layer thickness had a significant effect in reducing the simulated effective steady seepage flux, and that using the conditional simulations reduced the uncertainty of the simulated seepage flux. In conclusion, combining information on the aquifer geometry obtained from geophysical surveys with stochastic modelling is a promising methodology to improve the simulation of groundwater flow in complex hard-rock aquifers. (C) 2013 Elsevier B.V. All rights reserved.
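A minimal sketch of the conditioning step: simulating layer thickness along a transect as a 1-D Gaussian field constrained to honour point observations. The exponential covariance, observation locations, and values are invented stand-ins for the ERT-derived statistics.

```python
import numpy as np

def cond_sim(xs, x_obs, z_obs, mean=10.0, sill=4.0, corr_len=50.0,
             n_real=100, seed=0):
    """Conditional Gaussian simulation of layer thickness along a transect,
    honouring the observed values at x_obs (exponential covariance assumed
    purely for illustration)."""
    rng = np.random.default_rng(seed)
    cov = lambda a, b: sill * np.exp(-np.abs(a[:, None] - b[None, :]) / corr_len)
    A = cov(xs, x_obs) @ np.linalg.inv(cov(x_obs, x_obs) + 1e-8 * np.eye(len(x_obs)))
    cond_mean = mean + A @ (z_obs - mean)          # kriging mean
    cond_cov = cov(xs, xs) - A @ cov(xs, x_obs).T  # conditional covariance
    L = np.linalg.cholesky(cond_cov + 1e-6 * np.eye(len(xs)))
    return cond_mean + (L @ rng.standard_normal((len(xs), n_real))).T

xs = np.linspace(0.0, 200.0, 41)
x_obs = np.array([0.0, 100.0, 200.0])   # hypothetical ERT sounding locations
z_obs = np.array([8.0, 12.0, 9.0])      # hypothetical thicknesses (m)
reals = cond_sim(xs, x_obs, z_obs)      # realizations pass through the data
```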

Abstract:

Gene expression in living systems is inherently stochastic and tends to produce varying numbers of proteins over repeated cycles of transcription and translation. In this paper, an expression is derived for the steady-state protein number distribution starting from a two-stage kinetic model of the gene expression process involving p proteins and r mRNAs. The derivation is based on an exact path integral evaluation of the joint distribution, P(p, r, t), of p and r at time t, which can be expressed in terms of the coupled Langevin equations for p and r that represent the two-stage model in continuum form. The steady-state distribution of p alone, P(p), is obtained from P(p, r, t) (a bivariate Gaussian) by integrating out the r degrees of freedom and taking the limit t -> infinity. P(p) is found to be proportional to the product of a Gaussian and a complementary error function. It provides a generally satisfactory fit to simulation data on the same two-stage process when the translational efficiency (a measure of the intrinsic noise level in the system) is relatively low; it is less successful as a model of the data when the translational efficiency, and hence the noise level, is high.
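The kind of simulation data referred to above can be generated with a standard Gillespie simulation of the two-stage model (mRNA birth/death, translation per mRNA, protein decay); the rate constants below are illustrative, not the paper's.

```python
import random

def two_stage_ssa(k_m=1.0, g_m=0.1, k_p=0.5, g_p=0.01, t_max=2000.0, seed=0):
    """Gillespie simulation of the two-stage gene expression model:
    transcription, mRNA decay, translation, protein decay."""
    rng = random.Random(seed)
    m, p, t = 0, 0, 0.0
    while t < t_max:
        rates = [k_m, g_m * m, k_p * m, g_p * p]
        total = sum(rates)
        t += rng.expovariate(total)
        r = rng.random() * total
        if r < rates[0]:
            m += 1                                 # transcription
        elif r < rates[0] + rates[1]:
            m -= 1                                 # mRNA decay
        elif r < rates[0] + rates[1] + rates[2]:
            p += 1                                 # translation
        else:
            p -= 1                                 # protein decay
    return m, p

m, p = two_stage_ssa()
# Steady-state means for these rates: <m> = k_m/g_m = 10,
# <p> = k_m*k_p/(g_m*g_p) = 500; a single endpoint sample fluctuates
# around these values.
```

The ratio k_p/g_m plays the role of the translational efficiency (burst size) discussed above.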

Abstract:

Infinite-horizon discounted-cost and ergodic-cost risk-sensitive zero-sum stochastic games for controlled Markov chains with countably many states are analyzed. Upper and lower values for these games are established. The existence of the value and of saddle-point equilibria in the class of Markov strategies is proved for the discounted-cost game. The existence of the value and of saddle-point equilibria in the class of stationary strategies is proved under a uniform ergodicity condition for the ergodic-cost game. The value of the ergodic-cost game turns out to be the product of the inverse of the risk-sensitivity factor and the logarithm of the common Perron-Frobenius eigenvalue of the associated controlled nonlinear kernels. (C) 2013 Elsevier B.V. All rights reserved.
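To make the eigenvalue characterization concrete: once a pair of stationary strategies is fixed, the controlled kernel reduces to an ordinary Markov chain, and the ergodic risk-sensitive cost is (1/theta) log lambda, where lambda is the Perron-Frobenius eigenvalue of the twisted kernel exp(theta*c(x)) P(x, y). A minimal numerical sketch with invented data:

```python
import numpy as np

theta = 0.5                               # risk-sensitivity factor
P = np.array([[0.7, 0.3], [0.4, 0.6]])    # transition matrix (illustrative)
c = np.array([1.0, 2.0])                  # per-stage cost in each state

Q = np.exp(theta * c)[:, None] * P        # twisted (multiplicative) kernel
v = np.ones(2)
for _ in range(200):                      # power iteration
    v = Q @ v
    v /= np.linalg.norm(v)
lam = (Q @ v)[0] / v[0]                   # dominant (Perron) eigenvalue
value = np.log(lam) / theta               # risk-sensitive ergodic cost
# value lies between min(c) = 1 and max(c) = 2
```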

Abstract:

We propose a novel form of nonlinear stochastic filtering based on an iterative evaluation of a Kalman-like gain matrix computed within a Monte Carlo scheme, as suggested by the form of the parent equation of nonlinear filtering (the Kushner-Stratonovich equation); the method retains the simplicity of implementation of an ensemble Kalman filter (EnKF). The numerical results, presently obtained via EnKF-like simulations with or without a reduced-rank unscented transformation, clearly indicate remarkably superior filter convergence and accuracy vis-a-vis most available filtering schemes, and ready applicability of the methods to higher-dimensional dynamic system identification problems of engineering interest. (C) 2013 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
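For context, here is the standard stochastic-EnKF analysis step that the proposed filter builds on (the paper iterates a Kalman-like gain beyond this single update); the toy state, observation operator, and noise levels are invented.

```python
import numpy as np

def enkf_update(ens, y, H, R, rng):
    """One stochastic-EnKF analysis step. ens: (n_ens, n_state) forecast
    ensemble; y: observation vector; H: observation operator; R: obs-noise
    covariance."""
    n_ens = ens.shape[0]
    X = ens - ens.mean(axis=0)                 # state anomalies
    Y = X @ H.T                                # observation-space anomalies
    Pyy = Y.T @ Y / (n_ens - 1) + R
    Pxy = X.T @ Y / (n_ens - 1)
    K = Pxy @ np.linalg.inv(Pyy)               # Kalman-like gain
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R, size=n_ens)
    return ens + (y_pert - ens @ H.T) @ K.T

rng = np.random.default_rng(0)
truth = np.array([1.0, -1.0])
H = np.array([[1.0, 0.0]])                     # observe the first component
R = np.array([[0.01]])
ens = rng.multivariate_normal(truth, np.eye(2), size=500) + 0.5  # biased prior
y = H @ truth + rng.multivariate_normal(np.zeros(1), R)
analysis = enkf_update(ens, y, H, R, rng)
# The analysis mean of the observed component is pulled close to y
```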

Abstract:

We consider a discrete-time partially observable zero-sum stochastic game with the average payoff criterion. We study the game via an equivalent completely observable game. We show that the game has a value and present a pair of optimal strategies for the two players.

Abstract:

We develop iterative diffraction tomography algorithms, similar to the distorted Born algorithms, for inverting scattered intensity data. Within the Born approximation, the unknown scattered field is expressed as a multiplicative perturbation to the incident field. With this, the forward equation becomes stable, which helps us compute nearly oscillation-free solutions that have an immediate bearing on the accuracy of the Jacobian computed for use in a deterministic Gauss-Newton (GN) reconstruction. However, since the data are inherently noisy and the sensitivity of the measurements to the refractive index away from the detectors is poor, we report a derivative-free evolutionary stochastic scheme that provides strictly additive updates to bridge the measurement-prediction misfit and arrive at the refractive index distribution from intensity transport data. The superiority of the stochastic algorithm over the GN scheme in similar settings is demonstrated by reconstructing the refractive index profile from simulated and experimentally acquired intensity data. (C) 2014 Optical Society of America
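For reference, the deterministic GN baseline mentioned above iterates normal-equation steps on a nonlinear least-squares misfit ||d - F(x)||^2. A generic sketch on a toy forward model (not the diffraction-tomography operator):

```python
import numpy as np

def gauss_newton(F, J, x0, d, n_iter=20):
    """Generic Gauss-Newton iteration for minimizing ||d - F(x)||^2."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = d - F(x)                             # residual (misfit)
        Jx = J(x)
        dx = np.linalg.solve(Jx.T @ Jx, Jx.T @ r)  # normal equations
        x = x + dx
    return x

# Toy forward model F(x) = [x0*exp(x1), x0 + x1, x0*x1] with its Jacobian
F = lambda x: np.array([x[0] * np.exp(x[1]), x[0] + x[1], x[0] * x[1]])
J = lambda x: np.array([[np.exp(x[1]), x[0] * np.exp(x[1])],
                        [1.0, 1.0],
                        [x[1], x[0]]])
d = F(np.array([2.0, 0.5]))                      # synthetic noise-free data
x = gauss_newton(F, J, np.array([1.0, 0.1]), d)  # recovers (2.0, 0.5)
```

Unlike this scheme, the paper's evolutionary alternative needs no Jacobian, which is the point when the sensitivity is poor.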

Abstract:

Infinite arrays of coupled two-state stochastic oscillators exhibit well-defined steady states. We study the fluctuations that occur when the number N of oscillators in the array is finite. We choose a particular form of global coupling that in the infinite array leads to a pitchfork bifurcation from a monostable to a bistable steady state, the latter with two equally probable stationary states; the control parameter for this bifurcation is the coupling strength. In finite arrays these states become metastable: the fluctuations lead to distributions around the most probable states, with one maximum in the monostable regime and two maxima in the bistable regime. In the latter regime, the fluctuations lead to transitions between the two peak regions of the distribution. We also find that the fluctuations break the symmetry in the bimodal regime; that is, one metastable state becomes more probable than the other, increasingly so with increasing array size. To arrive at these results, we start from microscopic dynamical evolution equations from which we derive a Langevin equation that exhibits an interesting multiplicative noise structure. We also present a master equation description of the dynamics. Both of these equations lead to the same Fokker-Planck equation, the master equation via a 1/N expansion and the Langevin equation via standard methods of Ito calculus for multiplicative noise. From the Fokker-Planck equation we obtain an effective potential that reflects the transition from the monomodal to the bimodal distribution as a function of a control parameter. We present a variety of numerical and analytic results that illustrate the strong effects of the fluctuations. We also show that the limits N -> infinity and t -> infinity (where t is time) do not commute; in fact, the two orders of implementation lead to drastically different results.
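A minimal stochastic simulation of the finite-N setting: N two-state units whose flip rates depend on the global fraction f of units in state 1. The Curie-Weiss-like rate choice below is a generic illustration, not necessarily the paper's coupling, but it shows the same phenomenology: above the bifurcation the walker settles near one of two metastable fractions.

```python
import math
import random

def coupled_units(n=100, beta=2.0, t_max=200.0, seed=0):
    """N globally coupled two-state units: each unit flips at a rate that
    depends on the fraction f in state 1 (a generic Curie-Weiss-like
    choice). For beta above the bifurcation the stationary distribution
    of f is bimodal."""
    rng = random.Random(seed)
    k = n // 2                              # units in state 1
    t, traj = 0.0, []
    while t < t_max:
        f = k / n
        up = (n - k) * math.exp(beta * (2.0 * f - 1.0))    # 0 -> 1 flips
        down = k * math.exp(-beta * (2.0 * f - 1.0))       # 1 -> 0 flips
        t += rng.expovariate(up + down)
        if rng.random() * (up + down) < up:
            k += 1
        else:
            k -= 1
        traj.append(f)
    return traj

traj = coupled_units()
# Starting from the unstable symmetric point f = 1/2, the trajectory
# quickly falls into one of the two metastable wells and stays there.
```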