175 results for Stochastic adding machine


Relevance: 20.00%

Abstract:

Unlike zero-sum stochastic games, a difficult problem in general-sum stochastic games is to obtain verifiable conditions for Nash equilibria. We show in this paper that by splitting an associated non-linear optimization problem into several sub-problems, characterization of Nash equilibria in general-sum discounted stochastic games is possible. Using these sub-problems, we derive a set of necessary and sufficient verifiable conditions (termed KKT-SP conditions) for a strategy pair to result in a Nash equilibrium. We also show that any algorithm which tracks the zero of the gradient of the Lagrangian of every sub-problem yields a Nash strategy pair.
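
Each sub-problem is a standard constrained program, so the verifiable conditions take the familiar Karush-Kuhn-Tucker form; the following is a generic sketch in illustrative notation, not the paper's exact KKT-SP system:

```latex
% Generic KKT conditions for one sub-problem
%   min_x f(x)  subject to  g_i(x) <= 0,  i = 1, ..., m
% with Lagrangian L(x, \lambda) = f(x) + \sum_i \lambda_i g_i(x).
\begin{align*}
  \nabla_x L(x^*,\lambda^*) &= \nabla f(x^*) + \sum_{i=1}^{m} \lambda_i^* \nabla g_i(x^*) = 0
      && \text{(stationarity)} \\
  g_i(x^*) \le 0, \quad \lambda_i^* &\ge 0
      && \text{(feasibility)} \\
  \lambda_i^* \, g_i(x^*) &= 0
      && \text{(complementary slackness)}
\end{align*}
```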

Relevance: 20.00%

Abstract:

The q-Gaussian distribution results from maximizing certain generalizations of Shannon entropy under some constraints. The importance of q-Gaussian distributions stems from the fact that they exhibit power-law behavior and also generalize Gaussian distributions. In this paper, we propose a Smoothed Functional (SF) scheme for gradient estimation using the q-Gaussian distribution, and also propose an optimization algorithm based on this scheme. Convergence results for the algorithm are presented. The performance of the proposed algorithm is demonstrated by simulation results on a queueing model.
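
A minimal sketch of a smoothed-functional gradient estimator is shown below, using standard Gaussian perturbations (the q -> 1 limit); the paper's scheme replaces these with q-Gaussian samples and adjusts the scaling accordingly:

```python
import numpy as np

def sf_gradient_estimate(f, x, beta=0.1, n_samples=100, rng=None):
    """One-sided smoothed-functional (SF) gradient estimate of f at x.

    Gaussian perturbations are used here (the q -> 1 limit of the
    q-Gaussian); the q-Gaussian scheme draws `z` from a q-Gaussian
    instead and rescales the estimate accordingly.
    """
    rng = np.random.default_rng() if rng is None else rng
    grad = np.zeros_like(x)
    fx = f(x)
    for _ in range(n_samples):
        z = rng.standard_normal(x.shape)            # smoothing perturbation
        grad += (z / beta) * (f(x + beta * z) - fx)
    return grad / n_samples

# Illustrative use: stochastic gradient descent on a noisy quadratic.
if __name__ == "__main__":
    f = lambda x: np.sum((x - 1.0) ** 2) + 0.01 * np.random.randn()
    x = np.zeros(3)
    for _ in range(200):
        x -= 0.05 * sf_gradient_estimate(f, x, beta=0.1, n_samples=20)
    print(x)   # should approach [1, 1, 1]
```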

Relevance: 20.00%

Abstract:

Service systems are labor intensive, and their workload tends to vary greatly with time. Adapting staffing levels to the workload in such systems is nontrivial due to the large number of parameters and operational variations, but it is crucial for business objectives such as minimal labor inventory. One of the central challenges is to optimize staffing while maintaining system steady state and compliance with aggregate SLA constraints. We formulate this problem as a parametrized constrained Markov process and propose a novel stochastic optimization algorithm for solving it. Our algorithm is a multi-timescale stochastic approximation scheme that incorporates an SPSA-based algorithm for 'primal descent' and couples it with a 'dual ascent' scheme for the Lagrange multipliers. We validate this optimization scheme on five real-life service systems and compare it with a state-of-the-art optimization toolkit, OptQuest. Being two orders of magnitude faster than OptQuest, our scheme is particularly suitable for adaptive labor staffing. Moreover, we observe that it guarantees convergence and finds better solutions than OptQuest in many cases.
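
A minimal sketch of such a primal-dual scheme is given below; the cost and constraint functions, step-size schedules, and timescale separation are illustrative placeholders for the simulation-based estimates used in the paper:

```python
import numpy as np

def spsa_gradient(L, theta, lam, c=0.1, rng=None):
    """One SPSA estimate of the gradient of the Lagrangian w.r.t. theta."""
    rng = np.random.default_rng() if rng is None else rng
    delta = rng.choice([-1.0, 1.0], size=theta.shape)     # Rademacher perturbation
    diff = L(theta + c * delta, lam) - L(theta - c * delta, lam)
    return diff / (2.0 * c * delta)

def primal_dual_spsa(cost, constraint, theta0, n_iters=2000, rng=None):
    """Sketch of SPSA-based 'primal descent' coupled with 'dual ascent'.

    `cost(theta)` and `constraint(theta)` stand in for (noisy) simulation
    estimates of the labor cost and an SLA constraint g(theta) <= 0.
    """
    rng = np.random.default_rng() if rng is None else rng
    theta, lam = np.asarray(theta0, dtype=float), 0.0
    L = lambda th, lm: cost(th) + lm * constraint(th)       # Lagrangian
    for k in range(1, n_iters + 1):
        a_k = 1.0 / (k ** 0.6)                              # faster primal timescale
        b_k = 1.0 / k                                       # slower dual timescale
        theta -= a_k * spsa_gradient(L, theta, lam, rng=rng)    # primal descent
        lam = max(0.0, lam + b_k * constraint(theta))            # dual ascent
    return theta, lam
```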

Relevance: 20.00%

Abstract:

In this work we present a statistical approach inspired by stylometry (the measurement of authorial style) to study the characteristics of machine translators. Our approach quantifies the style of a translator in terms of properties derived from the distribution of stopwords in its output, a standard approach in modern stylometry. Our study enables us to match translated texts to the source machine translator that generated them. In addition, the stylometric closeness of human-generated text to text generated by machine translators provides a handle for assessing the quality of machine translators.
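
As an illustration of the stopword-distribution idea, the sketch below builds per-translator stopword-frequency profiles and attributes a new text to the nearest profile; the tiny stopword list and the nearest-centroid rule are stand-ins, not the paper's actual feature set or classifier:

```python
from collections import Counter
import numpy as np

# A tiny illustrative stopword list; any standard list could be substituted.
STOPWORDS = ["the", "of", "and", "to", "in", "a", "is", "that", "for", "it"]

def stopword_profile(text):
    """Relative frequency of each stopword in `text` (the stylometric features)."""
    tokens = text.lower().split()
    counts = Counter(t for t in tokens if t in STOPWORDS)
    total = max(1, len(tokens))
    return np.array([counts[w] / total for w in STOPWORDS])

def attribute_translator(sample, corpora):
    """Assign `sample` to the translator whose mean profile is closest (Euclidean).

    `corpora` maps a translator name to a list of texts it produced.
    """
    centroids = {name: np.mean([stopword_profile(t) for t in texts], axis=0)
                 for name, texts in corpora.items()}
    x = stopword_profile(sample)
    return min(centroids, key=lambda name: np.linalg.norm(x - centroids[name]))
```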

Relevance: 20.00%

Abstract:

We revisit the issue of considering stochasticity of Grassmannian coordinates in N = 1 superspace, which was analyzed previously by Kobakhidze et al. In this stochastic supersymmetry (SUSY) framework, the soft SUSY-breaking terms of the minimal supersymmetric Standard Model (MSSM), such as the bilinear Higgs mixing, the trilinear coupling, and the gaugino mass parameters, are all proportional to a single mass parameter xi, a measure of supersymmetry breaking arising out of stochasticity. While a nonvanishing trilinear coupling at the high scale is a natural outcome of the framework, and a favorable ingredient for obtaining the lighter Higgs boson mass m_h at 125 GeV, the model produces tachyonic sleptons, or staus that turn out to be too light. The previous analyses took Lambda, the scale at which input parameters are given, to be larger than the gauge coupling unification scale M_G in order to generate acceptable scalar masses radiatively at the electroweak scale; still, this was inadequate for obtaining m_h at 125 GeV. We find that a 125 GeV Higgs is readily achievable, provided we are ready to accommodate a nonvanishing scalar-mass soft SUSY-breaking term, similar to what is done in minimal anomaly-mediated SUSY breaking (AMSB), in contrast to a pure AMSB setup. Thus, the model can easily accommodate the Higgs data, LHC limits on squark masses, WMAP data for the dark matter relic density, flavor physics constraints, and XENON100 data. In contrast to the previous analyses, we take Lambda = M_G, thus avoiding any ambiguities of post-grand-unified-theory physics. The idea of stochastic superspace can easily be generalized to various scenarios beyond the MSSM. DOI: 10.1103/PhysRevD.87.035022

Relevance: 20.00%

Abstract:

This paper illustrates the application of a new technique based on support vector clustering (SVC) for the direct identification of coherent synchronous generators in large interconnected multi-machine power systems. The clustering is based on coherency measures obtained from the time-domain responses of the generators following system disturbances. The proposed clustering algorithm could be integrated into a wide-area measurement system, enabling fast identification of coherent clusters of generators for the construction of dynamic equivalent models. The proposed method is demonstrated on a practical 15-generator, 72-bus system, an equivalent of the Indian southern grid, to show the effectiveness of this clustering approach. The effects of short-circuit fault locations on coherency are also investigated.
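
A rough sketch of the coherency-grouping step is given below; the pairwise coherency measure and the grouping threshold are illustrative choices, and plain hierarchical clustering is used as a stand-in for the support vector clustering step described in the abstract:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def coherency_matrix(delta):
    """Pairwise coherency measure from rotor-angle swing curves.

    `delta` has shape (n_generators, n_time_samples). Coherency is taken
    here as the maximum absolute angle deviation between a pair of
    generators over the simulated window (one common choice; the paper's
    exact measure may differ).
    """
    n = delta.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            C[i, j] = np.max(np.abs(delta[i] - delta[j]))
    return C

def coherent_groups(delta, threshold_deg=10.0):
    """Group generators whose mutual angle deviation stays below a threshold."""
    C = coherency_matrix(delta)
    condensed = C[np.triu_indices_from(C, k=1)]   # condensed distance vector
    Z = linkage(condensed, method="complete")
    return fcluster(Z, t=threshold_deg, criterion="distance")
```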

Relevance: 20.00%

Abstract:

This paper presents a fast and accurate relaying technique for a long 765 kV UHV transmission line based on a support vector machine (SVM). For a long EHV/UHV transmission line with large distributed capacitance, a traditional distance relay that uses a lumped-parameter model of the transmission line can malfunction. At a sampling frequency of 1 kHz, one-quarter cycle of instantaneous current and voltage values of all phases at the relaying end is fed to the SVM. The SVM detects the fault type accurately using 3 milliseconds of post-fault data, reducing the fault clearing time and thereby improving system stability and power transfer capability. The performance of the relaying scheme has been checked with a typical 765 kV Indian transmission system, simulated using an Electromagnetic Transients Program (EMTP) developed by the authors in which a distributed-parameter line model is used. More than 15,000 different short-circuit fault cases are simulated by varying fault location, fault impedance, fault incidence angle, and fault type to train the SVM for high-speed, accurate relaying. Simulation studies have shown that the proposed relay provides fast and accurate protection irrespective of fault location, fault impedance, incidence time of the fault, and fault type. The proposed scheme can also be used to augment existing relaying, particularly for Zone-2 and Zone-3 protection.
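
A minimal sketch of the classification stage is shown below using scikit-learn; the window size follows from the quoted 1 kHz sampling rate and a quarter cycle at 50 Hz (5 samples per signal), while the kernel and hyperparameters are illustrative rather than those used in the paper:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# At 1 kHz sampling and 50 Hz nominal frequency, a quarter cycle is 5 samples
# per signal; six signals (3 phase currents + 3 phase voltages) give 30 features.
SAMPLES_PER_QUARTER_CYCLE = 5
N_SIGNALS = 6

def make_feature_vector(window):
    """Flatten a (N_SIGNALS, SAMPLES_PER_QUARTER_CYCLE) post-fault window."""
    return np.asarray(window).reshape(-1)

def train_fault_classifier(windows, fault_types):
    """Fit an SVM mapping quarter-cycle windows to fault-type labels.

    `windows` is an iterable of post-fault sample windows and `fault_types`
    the corresponding labels (e.g. 'AG', 'BC', 'ABCG', ...).
    """
    X = np.array([make_feature_vector(w) for w in windows])
    y = np.array(fault_types)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    clf.fit(X, y)
    return clf
```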

Relevance: 20.00%

Abstract:

The impact of global warming on daily rainfall is examined using atmospheric variables from five General Circulation Models (GCMs) and a stochastic downscaling model. Daily rainfall at eleven rain gauges over the Malaprabha catchment of India and National Centers for Environmental Prediction (NCEP) reanalysis data at grid points over the catchment for the continuous period 1971-2000 (current climate) are used to calibrate the downscaling model. The downscaled rainfall simulations obtained using GCM atmospheric variables corresponding to the IPCC-SRES (Intergovernmental Panel on Climate Change - Special Report on Emission Scenarios) A2 emission scenario for the same period are used to validate the results. Following this, future downscaled rainfall projections are constructed and examined for two 20-year time slices, viz. 2055 (i.e. 2046-2065) and 2090 (i.e. 2081-2100). The model results show reasonable skill in simulating the rainfall over the study region for the current climate. The downscaled rainfall projections indicate no significant changes in the rainfall regime in this catchment in the future. More specifically, a 2% decrease by 2055 and a 5% decrease by 2090 in monsoon (JJAS) rainfall compared to the current climate (1971-2000) are noticed under global warming conditions. Also, pre-monsoon (JFMAM) and post-monsoon (OND) rainfall are projected to increase, respectively, by 2% in 2055 and 6% in 2090, and by 2% in 2055 and 12% in 2090, over the region. On an annual basis, slight decreases of 1% and 2% are noted for 2055 and 2090, respectively.

Relevance: 20.00%

Abstract:

Clustering is one of the most popular methods for data exploration. Clustering partitions a data set into groups based on some measure, such as a distance measure, with each partition carrying its own significant information. A number of algorithms have been explored for this purpose; one such algorithm is Particle Swarm Optimization (PSO), a population-based heuristic search technique derived from swarm intelligence. In this paper we present an improved version of Particle Swarm Optimization in which each feature of the data set is assigned significance by adding random weights, which also minimizes any distortions in the data set. The performance of the proposed algorithm is evaluated using benchmark datasets from the Machine Learning Repository. The experimental results show that the proposed methodology performs significantly better than previously reported approaches.
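
A compact sketch of PSO-based clustering with random feature weights is given below; the particle encoding, inertia, and acceleration coefficients are generic textbook choices and stand in for the paper's exact formulation:

```python
import numpy as np

def weighted_pso_clustering(X, k, n_particles=20, n_iters=100, seed=0):
    """Minimal PSO-based clustering sketch with random feature weights.

    Each particle encodes k candidate centroids; fitness is the total
    feature-weighted distance of points to their nearest centroid. The
    random weights stand in for the per-feature significance described
    in the abstract (the paper's details differ from this sketch).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w_feat = rng.uniform(0.5, 1.5, size=d)                  # random feature weights

    def fitness(centroids):
        diff = X[:, None, :] - centroids[None, :, :]        # (n, k, d)
        dist = np.sqrt(((diff ** 2) * w_feat).sum(axis=2))
        return dist.min(axis=1).sum()

    # Initialise particles at randomly chosen data points.
    pos = X[rng.integers(0, n, size=(n_particles, k))]      # (P, k, d)
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
    g = pbest[np.argmin(pbest_f)].copy()

    for _ in range(n_iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (g - pos)
        pos = pos + vel
        f = np.array([fitness(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        g = pbest[np.argmin(pbest_f)].copy()

    labels = np.argmin(
        np.sqrt((((X[:, None, :] - g[None, :, :]) ** 2) * w_feat).sum(axis=2)), axis=1)
    return g, labels
```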

Relevance: 20.00%

Abstract:

Many studies investigating the effect of human social connectivity structures (networks) and human behavioral adaptations on the spread of infectious diseases have assumed either a static connectivity structure or a network which adapts itself in response to the epidemic (adaptive networks). However, human social connections are inherently dynamic, or time varying. Furthermore, the spread of many infectious diseases occurs on a time scale comparable to that of the evolving network structure. Here we aim to quantify the effect of human behavioral adaptations on the spread of asymptomatic infectious diseases on time-varying networks. We perform a full stochastic analysis using a continuous-time Markov chain approach to calculate the outbreak probability, mean epidemic duration, epidemic reemergence probability, and related quantities. Additionally, we use mean-field theory to calculate epidemic thresholds. Theoretical predictions are verified using extensive simulations. Our studies have uncovered the existence of an "adaptive threshold": when the ratio of the susceptibility (or infectivity) rate to the recovery rate is below the threshold value, adaptive behavior can prevent the epidemic; if it is above the threshold, no amount of behavioral adaptation can prevent the epidemic. Our analyses suggest that the interaction patterns of the infected population play a major role in sustaining the epidemic. Our results have implications for epidemic containment policies, as awareness campaigns and human behavioral responses can be effective only if the interaction levels of the infected populace are kept in check.
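
To make the flavor of such stochastic simulations concrete, here is a toy continuous-time SIS simulation in which infected individuals reduce their contact rate; it only illustrates the kind of model involved and is not the paper's CTMC formulation or its time-varying network:

```python
import numpy as np

def adaptive_sis_simulation(n=200, beta=0.3, gamma=0.1, contact_rate=5.0,
                            adapt=0.2, t_max=100.0, seed=0):
    """Toy continuous-time SIS simulation with behavioural adaptation.

    Contact partners are drawn at random at each event (a crude stand-in
    for a time-varying network); infected nodes initiate contacts at a
    fraction `adapt` of the normal rate, mimicking behavioural adaptation.
    """
    rng = np.random.default_rng(seed)
    infected = np.zeros(n, dtype=bool)
    infected[rng.choice(n, size=5, replace=False)] = True
    t = 0.0
    while t < t_max and infected.any():
        n_inf = infected.sum()
        rate_contact = contact_rate * (adapt * n_inf + (n - n_inf))
        rate_recover = gamma * n_inf
        total = rate_contact + rate_recover
        t += rng.exponential(1.0 / total)                 # time to next event
        if rng.random() < rate_recover / total:
            # recovery: a uniformly chosen infected node becomes susceptible
            infected[rng.choice(np.flatnonzero(infected))] = False
        else:
            # contact: initiator chosen proportional to its contact rate,
            # partner uniformly at random; transmission with probability beta
            w = np.where(infected, adapt, 1.0)
            i = rng.choice(n, p=w / w.sum())
            j = rng.choice([x for x in range(n) if x != i])
            if infected[i] != infected[j] and rng.random() < beta:
                infected[i] = infected[j] = True
    return t, infected.sum()
```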

Relevance: 20.00%

Abstract:

The random eigenvalue problem arises in frequency and mode-shape determination for a linear system with uncertainties in structural properties. Among several methods of characterizing this random eigenvalue problem, one computationally fast method that gives good accuracy is a weak formulation using polynomial chaos expansion (PCE). In this method, the eigenvalues and eigenvectors are expanded in PCE, and the residual is minimized by a Galerkin projection. The goals of the current work are (i) to implement this PCE-characterized random eigenvalue problem in dynamic response calculations under random loading and (ii) to explore the computational advantages and challenges. In the proposed method, the response quantities are also expressed in PCE, followed by a Galerkin projection. A numerical comparison with a perturbation method and Monte Carlo simulation shows that when the loading has a random amplitude but deterministic frequency content, the proposed method gives more accurate results than a first-order perturbation method and accuracy comparable to Monte Carlo simulation at a lower computational cost. However, as the frequency content of the loading becomes random, or for general random process loadings, the method loses its accuracy and computational efficiency. Issues in implementation, limitations, and further challenges are also addressed.
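
A schematic of the weak PCE formulation, in notation that is illustrative rather than the paper's, is:

```latex
% Random eigenpair expanded in polynomial chaos basis functions \Psi_k(\xi):
\lambda(\xi) \approx \sum_{k=0}^{P} \lambda_k \,\Psi_k(\xi), \qquad
\boldsymbol{\phi}(\xi) \approx \sum_{k=0}^{P} \boldsymbol{\phi}_k \,\Psi_k(\xi).

% Residual of the eigenproblem with random stiffness K(\xi) and mass M(\xi):
\mathbf{r}(\xi) = \bigl[K(\xi) - \lambda(\xi)\, M(\xi)\bigr]\,\boldsymbol{\phi}(\xi).

% Galerkin projection: the residual is made orthogonal to each basis function,
\mathbb{E}\!\left[\mathbf{r}(\xi)\,\Psi_j(\xi)\right] = \mathbf{0},
\qquad j = 0, \dots, P,
% yielding a coupled nonlinear system for the coefficients \{\lambda_k, \boldsymbol{\phi}_k\}.
```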

Relevance: 20.00%

Abstract:

Biological nanopores provide optimum dimensions and an optimal environment to study the early aggregation kinetics of charged polyaromatic molecules in the nano-confined regime. It is expected that probing the early stages of nucleation will enable us to design strategies for supramolecular assembly and biocrystallization processes. Specifically, we have studied the translocation dynamics of coronene- and perylene-based salts through the alpha-hemolysin (alpha-HL) protein nanopore. The characteristic blocking events in the time-series signal are a function of concentration and bias voltage. We argue that the different blocking events arise from different aggregation processes, as captured by all-atom molecular dynamics (MD) simulations. This confinement-induced aggregation of polyaromatic chromophores during the different stages of translocation is correlated with the spatial symmetry and charge distribution of the molecules.

Relevance: 20.00%

Abstract:

This study investigates the application of support vector clustering (SVC) for the direct identification of coherent synchronous generators in large interconnected multi-machine power systems. The clustering is based on a coherency measure, which indicates the degree of coherency between any pair of generators. The proposed SVC algorithm processes the coherency measure matrix, formulated using the generator rotor measurements, to cluster the coherent generators. The proposed approach is demonstrated on the IEEE 10-generator, 39-bus system and an equivalent 35-generator, 246-bus system of the practical Indian southern grid. The effects of the number of data samples and of fault location on the accuracy of the proposed approach are also examined. An extended comparison with other clustering techniques is included to show the effectiveness of the proposed approach in grouping the data into coherent groups of generators. The effectiveness of the coherent clusters obtained with the proposed approach is compared in terms of a set of clustering validity indicators and a statistical assessment based on the coherency degree of a generator pair.

Relevance: 20.00%

Abstract:

Stochastic modelling is a useful way of simulating complex hard-rock aquifers, as hydrological properties (permeability, porosity, etc.) can be described using random variables with known statistics. However, very few studies have assessed the influence of topological uncertainty (i.e. the variability of the thickness of conductive zones in the aquifer), probably because it is not easy to retrieve accurate statistics of the aquifer geometry, especially in a hard-rock context. In this paper, we assessed the potential of using geophysical surveys to describe the geometry of a hard-rock aquifer in a stochastic modelling framework. The study site was a small experimental watershed in South India, where the aquifer consisted of a clayey to loamy-sandy zone (regolith) underlain by a conductive fissured rock layer (protolith), with the unweathered gneiss (bedrock) at the bottom. The spatial variability of the thickness of the regolith and fissured layers was estimated from electrical resistivity tomography (ERT) profiles performed along a few cross sections in the watershed. For the stochastic analysis using Monte Carlo simulation, the generated random layer thickness was made conditional on the available data from the geophysics. In order to simulate steady-state flow in the irregular domain with variable geometry, we used an isoparametric finite element method to discretize the flow equation over an unstructured grid with irregular hexahedral elements. The results indicated that the spatial variability of the layer thickness had a significant effect in reducing the simulated effective steady seepage flux, and that using the conditional simulations reduced the uncertainty of the simulated seepage flux. In conclusion, combining information on the aquifer geometry obtained from geophysical surveys with stochastic modelling is a promising methodology to improve the simulation of groundwater flow in complex hard-rock aquifers.
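
One way to generate layer-thickness fields that honour point observations (such as ERT-derived thicknesses) is conditional Gaussian simulation; a minimal one-dimensional sketch, with an assumed exponential covariance and illustrative parameters rather than values fitted to the study site, is:

```python
import numpy as np

def conditional_thickness_simulations(x_obs, z_obs, x_grid, n_sims=100,
                                      sill=4.0, length=150.0, nugget=0.1, seed=0):
    """Conditional Gaussian simulations of layer thickness along a transect.

    Thickness is modelled as a Gaussian random field with an exponential
    covariance; the simulations honour the thickness values `z_obs`
    observed (e.g. from ERT profiles) at positions `x_obs`.
    """
    rng = np.random.default_rng(seed)
    solve = np.linalg.solve

    def cov(a, b):
        d = np.abs(a[:, None] - b[None, :])
        return sill * np.exp(-d / length)

    K_oo = cov(x_obs, x_obs) + nugget * np.eye(len(x_obs))
    K_go = cov(x_grid, x_obs)
    K_gg = cov(x_grid, x_grid)

    m = z_obs.mean()                                       # simple-kriging prior mean
    mean_post = m + K_go @ solve(K_oo, z_obs - m)          # kriging mean on the grid
    cov_post = K_gg - K_go @ solve(K_oo, K_go.T)           # posterior covariance
    L = np.linalg.cholesky(cov_post + 1e-8 * np.eye(len(x_grid)))

    sims = mean_post + (L @ rng.standard_normal((len(x_grid), n_sims))).T
    return np.clip(sims, 0.0, None)                        # thickness cannot be negative
```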

Relevance: 20.00%

Abstract:

Gene expression in living systems is inherently stochastic and tends to produce varying numbers of proteins over repeated cycles of transcription and translation. In this paper, an expression is derived for the steady-state protein number distribution starting from a two-stage kinetic model of the gene expression process involving p proteins and r mRNAs. The derivation is based on an exact path-integral evaluation of the joint distribution P(p, r, t) of p and r at time t, which can be expressed in terms of the coupled Langevin equations for p and r that represent the two-stage model in continuum form. The steady-state distribution of p alone, P(p), is obtained from P(p, r, t) (a bivariate Gaussian) by integrating out the r degrees of freedom and taking the limit t -> infinity. P(p) is found to be proportional to the product of a Gaussian and a complementary error function. It provides a generally satisfactory fit to simulation data on the same two-stage process when the translational efficiency (a measure of intrinsic noise levels in the system) is relatively low; it is less successful as a model of the data when the translational efficiency, and hence the noise level, is high.
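
For reference, the standard two-stage model in continuum (Langevin) form, written in schematic notation that may differ from the paper's, reads:

```latex
% Coupled Langevin equations for mRNA number r and protein number p
% (rate symbols and noise terms are schematic):
\begin{align*}
  \frac{\mathrm{d}r}{\mathrm{d}t} &= k_r - \gamma_r\, r + \eta_r(t)
      && \text{(mRNA synthesis, degradation, noise)} \\
  \frac{\mathrm{d}p}{\mathrm{d}t} &= k_p\, r - \gamma_p\, p + \eta_p(t)
      && \text{(translation, protein degradation, noise)}
\end{align*}
% Integrating the bivariate Gaussian P(p, r, t) over r and letting
% t -> infinity gives, as stated in the abstract, a steady-state marginal of the form
P(p) \;\propto\; e^{-(p-\mu)^2 / 2\sigma^2}\;
      \operatorname{erfc}\!\bigl(\alpha(p)\bigr),
% where \mu, \sigma and the argument \alpha(p) are fixed by the kinetic parameters.
```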