985 results for STOCHASTIC MODELING


Relevance:

30.00%

Publisher:

Abstract:

Smoothed functional (SF) schemes for gradient estimation are known to be efficient in stochastic optimization algorithms, especially when the objective is to improve the performance of a stochastic system. However, the performance of these methods depends on several parameters, such as the choice of a suitable smoothing kernel. Different kernels have been studied in the literature, including the Gaussian, Cauchy, and uniform distributions, among others. This article studies a new class of kernels based on the q-Gaussian distribution, which has gained popularity in statistical physics over the last decade. Though the importance of this family of distributions is attributed to its ability to generalize the Gaussian distribution, we observe that this class encompasses almost all existing smoothing kernels. This motivates us to study SF schemes for gradient estimation using the q-Gaussian distribution. Using the derived gradient estimates, we propose two-timescale algorithms for optimization of a stochastic objective function in a constrained setting with a projected gradient search approach. We prove the convergence of our algorithms to the set of stationary points of an associated ODE. We also demonstrate their performance numerically through simulations on a queuing model.
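
As a concrete illustration of the scheme this abstract describes, here is a minimal sketch of a two-sided SF gradient estimator using a Gaussian smoothing kernel (the q -> 1 member of the q-Gaussian family); the function and parameter names are ours, not the article's:

import numpy as np

def sf_gradient(f, x, beta=0.1, n_samples=500, seed=0):
    """Two-sided smoothed functional gradient estimate: perturb x with
    kernel samples eta and average eta times a finite difference of f."""
    rng = np.random.default_rng(seed)
    g = np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        eta = rng.standard_normal(x.shape)          # Gaussian kernel sample
        g += eta * (f(x + beta * eta) - f(x - beta * eta)) / (2.0 * beta)
    return g / n_samples

# sanity check on a quadratic: the estimate approaches the true gradient
f = lambda z: float(np.sum(z ** 2))
print(sf_gradient(f, np.array([1.0, -2.0])))        # ~ [2, -4]

In the article's two-timescale setting, an estimate like this would be computed on the faster timescale while the projected gradient update of the decision variable runs on the slower one.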

Relevance:

30.00%

Publisher:

Abstract:

The problem of estimating the time-variant reliability of actively controlled structural dynamical systems under stochastic excitations is considered. Monte Carlo simulations, reinforced with Girsanov transformation-based sampling variance reduction, are used to tackle the problem. In this approach, the external excitations are biased by an additional artificial control force. The two control forces have conflicting objectives: one is designed to reduce structural responses, while the other promotes limit-state violations (but reduces sampling variance). The control for variance reduction is fashioned after design-point oscillations based on a first-order reliability method. It is shown that, for structures that are amenable to laboratory testing, the reliability can be estimated experimentally with reduced testing times by devising a procedure based on the ideas of the Girsanov transformation. Illustrative examples include studies on a building frame with a magnetorheological damper-based isolation system subject to nonstationary random earthquake excitations. © 2014 American Society of Civil Engineers.
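
To make the variance-reduction idea concrete, here is a minimal sketch on a single-degree-of-freedom linear oscillator: the excitation is biased by an artificial resonant drift and each sample is reweighted by the Girsanov likelihood ratio. The harmonic control is a crude stand-in for the paper's design-point-based controller, and all parameters are illustrative:

import numpy as np

def first_passage_prob(u0=2.5, n_paths=4000, T=4.0, dt=1e-3, omega=2*np.pi,
                       zeta=0.05, sigma=1.0, xcrit=0.6, seed=0):
    """P(max |x(t)| > xcrit on [0, T]) for a white-noise-driven oscillator,
    estimated under a biased measure via Girsanov reweighting."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_paths); v = np.zeros(n_paths)
    logw = np.zeros(n_paths)                  # log dP/dQ along each path
    failed = np.zeros(n_paths, dtype=bool)
    for k in range(int(T / dt)):
        u = u0 * np.sin(omega * k * dt)       # oscillatory control toward failure
        live = ~failed
        dW = rng.standard_normal(live.sum()) * np.sqrt(dt)
        drift = -2*zeta*omega*v[live] - omega**2*x[live] + sigma*u
        v[live] += drift * dt + sigma * dW
        x[live] += v[live] * dt
        logw[live] += -u*dW - 0.5*u*u*dt      # Girsanov correction
        failed |= np.abs(x) > xcrit
    return np.mean(np.where(failed, np.exp(logw), 0.0))

print(first_passage_prob())

Rare failures that a direct simulation would almost never visit occur frequently under the biased measure, and the exponential weight restores an unbiased probability estimate.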

Relevance:

30.00%

Publisher:

Abstract:

The computational architecture that enables the flexible coupling between otherwise independent eye and hand effector systems is not understood. By using a drift diffusion framework, in which variability of the reaction time (RT) distribution scales with mean RT, we tested the ability of a common stochastic accumulator to explain eye-hand coordination. Using a combination of behavior, computational modeling and electromyography, we show how a single stochastic accumulator to threshold, followed by noisy effector-dependent delays, explains eye-hand RT distributions and their correlation, while an alternate independent, interactive eye and hand accumulator model does not. Interestingly, the common accumulator model did not explain the RT distributions of the same subjects when they made eye and hand movements in isolation. Taken together, these data suggest that a dedicated circuit underlies coordinated eye-hand planning.
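
A minimal simulation of the common-accumulator architecture (our toy parameters, not the fitted model from the paper) shows how a shared decision stage plus independent effector delays produces correlated eye and hand reaction times:

import numpy as np

def common_accumulator_rts(n_trials=2000, drift=0.2, noise=1.0,
                           threshold=30.0, seed=1):
    """One diffusion-to-bound decision time per trial, shared by both
    effectors, followed by independent effector-specific delays (ms)."""
    rng = np.random.default_rng(seed)
    eye, hand = np.empty(n_trials), np.empty(n_trials)
    for i in range(n_trials):
        evidence, t = 0.0, 0
        while evidence < threshold:             # shared stochastic accumulation
            evidence += drift + noise * rng.standard_normal()
            t += 1
        eye[i] = t + rng.normal(90.0, 10.0)     # ocular efferent delay
        hand[i] = t + rng.normal(140.0, 20.0)   # manual efferent delay
    return eye, hand

eye, hand = common_accumulator_rts()
print(np.corrcoef(eye, hand)[0, 1])  # strong RT correlation from the shared stage

An independent-accumulators variant would draw two separate decision times per trial, destroying most of this correlation; that contrast is the basis of the model comparison described above.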

Relevance:

30.00%

Publisher:

Abstract:

The problem of determining the system reliability of randomly vibrating structures arises in many application areas of engineering. In this paper we discuss approaches based on Monte Carlo simulations and laboratory testing to tackle problems of time-variant system reliability estimation. The strategy we adopt is based on applying Girsanov's transformation to the governing stochastic differential equations, which enables estimation of the probability of failure with a significantly smaller number of samples than is needed in a direct simulation study. Notably, we show that the ideas from Girsanov transformation-based Monte Carlo simulations can be extended to laboratory testing, so that the system reliability of engineering structures can be assessed with a reduced number of samples and hence reduced testing times. Illustrative examples include computational studies on a 10-degree-of-freedom nonlinear system model and laboratory/computational investigations on the road load response of an automotive system tested on a four-post test rig. © 2015 Elsevier Ltd. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

For damaging response, the force-displacement relationship of a structure is highly nonlinear and history-dependent. For satisfactory analysis of such behavior, it is important to be able to characterize and to model the phenomenon of hysteresis accurately. A number of models have been proposed for response studies of hysteretic structures, some of which are examined in detail in this thesis. There are two popular classes of models used in the analysis of curvilinear hysteretic systems. The first is of the distributed element or assemblage type, which models the physical behavior of the system by using well-known building blocks. The second class of models is of the differential equation type, which is based on the introduction of an extra variable to describe the history dependence of the system.
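
The Bouc-Wen model is perhaps the best-known member of the second class; a minimal sketch of its auxiliary-variable formulation (standard textbook form, not necessarily the exact variant examined in the thesis) is:

import numpy as np

def bouc_wen_z(xdot, dt, A=1.0, beta=0.5, gamma=0.5, n=1.0):
    """Integrate the extra hysteretic state z(t) of the Bouc-Wen model
    for a given displacement-rate history xdot, using explicit Euler:
    dz/dt = A*xdot - beta*|xdot|*|z|**(n-1)*z - gamma*xdot*|z|**n."""
    z = np.zeros(len(xdot) + 1)
    for k, v in enumerate(xdot):
        z[k+1] = z[k] + (A*v - beta*abs(v)*abs(z[k])**(n-1)*z[k]
                         - gamma*v*abs(z[k])**n) * dt
    return z[1:]

# quasi-static sinusoidal cycling traces out a hysteresis loop in (x, z)
t = np.arange(0.0, 20.0, 1e-3)
x = np.sin(t)
z = bouc_wen_z(np.gradient(x, t), 1e-3)

The single extra state z carries the entire loading history, which is exactly the mathematical convenience, and the potential source of misleading behavior, that the thesis scrutinizes.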

Owing to their mathematical simplicity, the latter models have been used extensively for various applications in structural dynamics, most notably in the estimation of the response statistics of hysteretic systems subjected to stochastic excitation. But the fundamental characteristics of these models are still not clearly understood. A response analysis of systems modeled with both the Distributed Element model and the differential equation model, subjected to a variety of quasi-static and dynamic loading conditions, leads to the following conclusion: caution must be exercised when employing models of the second class in structural response studies, as they can produce misleading results.

Masing's hypothesis, originally proposed for steady-state loading, can be extended to general transient loading as well, leading to considerable simplification in the analysis of the Distributed Element models. A simple, nonparametric identification technique is also outlined, by means of which an optimal model representation involving one additional state variable is determined for hysteretic systems.

Relevance:

30.00%

Publisher:

Abstract:

Jet noise reduction is an important goal within both commercial and military aviation. Although large-scale numerical simulations are now able to simultaneously compute turbulent jets and their radiated sound, low-cost, physically motivated models are needed to guide noise-reduction efforts. A particularly promising modeling approach centers on certain large-scale coherent structures, called wavepackets, that are observed in jets and their radiated sound. The typical approach to modeling wavepackets is to approximate them as linear modal solutions of the Euler or Navier-Stokes equations linearized about the long-time mean of the turbulent flow field. The near-field wavepackets obtained from these models show compelling agreement with those educed from experimental and simulation data for both subsonic and supersonic jets, but the acoustic radiation is severely under-predicted in the subsonic case. This thesis contributes to two aspects of these models. First, two new solution methods are developed that can be used to efficiently compute wavepackets and their acoustic radiation, reducing the computational cost of the model by more than an order of magnitude. The new techniques are spatial integration methods and constitute a well-posed, convergent alternative to the frequently used parabolized stability equations. Using concepts related to well-posed boundary conditions, the methods are formulated for general hyperbolic equations and thus have potential applications in many fields of physics and engineering. Second, the nonlinear and stochastic forcing of wavepackets is investigated with the goal of identifying and characterizing the missing dynamics responsible for the under-prediction of acoustic radiation by linear wavepacket models for subsonic jets. Specifically, we use ensembles of large-eddy-simulation flow and force data along with two data decomposition techniques to educe the actual nonlinear forcing experienced by wavepackets in a Mach 0.9 turbulent jet. Modes with high energy are extracted using proper orthogonal decomposition, while high-gain modes are identified using a novel technique called empirical resolvent-mode decomposition. In contrast to the flow and acoustic fields, the forcing field is characterized by a lack of energetic coherent structures. Furthermore, the structures that do exist are largely uncorrelated with the acoustic field. Instead, the forces that most efficiently excite an acoustic response appear to take the form of random turbulent fluctuations, implying that direct feedback from nonlinear interactions amongst wavepackets is not an essential noise source mechanism. This suggests that the essential ingredients of sound generation in high Reynolds number jets are contained within the linearized Navier-Stokes operator rather than in the nonlinear forcing terms, a conclusion that has important implications for jet noise modeling.
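
Of the two data decomposition techniques mentioned, proper orthogonal decomposition is standard and admits a very short sketch (the snapshot layout and names are ours; the empirical resolvent-mode decomposition is the thesis's novel contribution and is not reproduced here):

import numpy as np

def pod_modes(snapshots, n_modes=10):
    """Snapshot POD via SVD. `snapshots` is (n_points, n_snapshots), one
    flow-field snapshot per column; returns the leading spatial modes and
    the energy captured by each."""
    X = snapshots - snapshots.mean(axis=1, keepdims=True)  # remove mean flow
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :n_modes], s[:n_modes]**2 / X.shape[1]

High-energy forcing modes extracted this way can then be compared against high-gain resolvent modes, which is essentially the comparison the thesis uses to characterize the nonlinear forcing.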

Relevance:

30.00%

Publisher:

Abstract:

Density modeling is notoriously difficult for high dimensional data. One approach to the problem is to search for a lower dimensional manifold which captures the main characteristics of the data. Recently, the Gaussian Process Latent Variable Model (GPLVM) has successfully been used to find low dimensional manifolds in a variety of complex data. The GPLVM consists of a set of points in a low dimensional latent space, and a stochastic map to the observed space. We show how it can be interpreted as a density model in the observed space. However, the GPLVM is not trained as a density model and therefore yields bad density estimates. We propose a new training strategy and obtain improved generalisation performance and better density estimates in comparative evaluations on several benchmark data sets. © 2010 Springer-Verlag.
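
For reference, the quantity the GPLVM maximizes during training is the log marginal likelihood of the observations given the latent points; a minimal sketch with an RBF kernel (our notation and hyperparameters):

import numpy as np

def gplvm_log_likelihood(X, Y, lengthscale=1.0, variance=1.0, noise=0.1):
    """log p(Y | X) for the GPLVM: Y is (N, D) observed data, X is (N, q)
    latent points; one shared GP prior per output dimension."""
    N, D = Y.shape
    d2 = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
    K = variance * np.exp(-0.5 * d2 / lengthscale**2) + noise * np.eye(N)
    _, logdet = np.linalg.slogdet(K)
    return (-0.5 * D * N * np.log(2.0 * np.pi) - 0.5 * D * logdet
            - 0.5 * np.sum(Y * np.linalg.solve(K, Y)))  # -0.5 tr(K^-1 Y Y^T)

The paper's point is that maximizing this objective places the latent points well for reconstruction but not for density estimation, which is what motivates its alternative training strategy.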

Relevance:

30.00%

Publisher:

Abstract:

The transition of the mammalian cell from quiescence to proliferation is a highly variable process. Over the last four decades, two lines of apparently contradictory, phenomenological models have been proposed to account for such temporal variability. These include various forms of the transition probability (TP) model and the growth control (GC) model, which lack mechanistic details. The GC model was further proposed as an alternative explanation for the concept of the restriction point, which we recently demonstrated to be controlled by a bistable Rb-E2F switch. Here, through a combination of modeling and experiments, we show that these different lines of models in essence reflect different aspects of stochastic dynamics in cell cycle entry. In particular, we show that the variable activation of E2F can be described by stochastic activation of the bistable Rb-E2F switch, which in turn may account for the temporal variability in cell cycle entry. Moreover, we show that the temporal dynamics of E2F activation can be recast into the frameworks of both the TP model and the GC model via parameter mapping. This mapping suggests that the two lines of phenomenological models can be reconciled through the stochastic dynamics of the Rb-E2F switch. It also suggests a potential utility of the TP or GC models in defining concise, quantitative phenotypes of cell physiology. This may have implications for classifying cell types or states.
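
A toy Langevin version of the stochastic bistable-switch idea (illustrative rates, not the paper's fitted Rb-E2F model) shows how identical cells can have widely variable activation times:

import numpy as np

def switch_activation_times(n_cells=100, dt=0.01, t_max=200.0,
                            sigma=0.08, seed=2):
    """Each 'cell' starts near the low (quiescent) stable state of
    dx = (basal + Hill self-activation - decay) dt + noise dW; record
    the first time x crosses past the unstable point toward the high state."""
    rng = np.random.default_rng(seed)
    times = np.full(n_cells, np.inf)
    for i in range(n_cells):
        x, t = 0.07, 0.0
        while t < t_max:
            drift = 0.05 + x**2 / (0.25 + x**2) - x
            x = max(x + drift*dt + sigma*np.sqrt(dt)*rng.standard_normal(), 0.0)
            t += dt
            if x > 0.5:            # committed to the high (activated) state
                times[i] = t
                break
    return times

print(np.percentile(switch_activation_times(), [10, 50, 90]))

Noise-driven escape over the barrier yields a broad, roughly exponential tail of activation times, the kind of behavior the TP model postulates, while the barrier height shifts systematically with growth-related parameters such as the basal rate, as the GC model postulates.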

Relevance:

30.00%

Publisher:

Abstract:

The computational detection of regulatory elements in DNA is a difficult but important problem impacting our progress in understanding the complex nature of eukaryotic gene regulation. Attempts to utilize cross-species conservation for this task have been hampered both by evolutionary changes of functional sites and poor performance of general-purpose alignment programs when applied to non-coding sequence. We describe a new and flexible framework for modeling binding site evolution in multiple related genomes, based on phylogenetic pair hidden Markov models which explicitly model the gain and loss of binding sites along a phylogeny. We demonstrate the value of this framework for both the alignment of regulatory regions and the inference of precise binding-site locations within those regions. As the underlying formalism is a stochastic, generative model, it can also be used to simulate the evolution of regulatory elements. Our implementation is scalable in terms of numbers of species and sequence lengths and can produce alignments and binding-site predictions with accuracy rivaling or exceeding current systems that specialize in only alignment or only binding-site prediction. We demonstrate the validity and power of various model components on extensive simulations of realistic sequence data and apply a specific model to study Drosophila enhancers in as many as ten related genomes and in the presence of gain and loss of binding sites. Different models and modeling assumptions can be easily specified, thus providing an invaluable tool for the exploration of biological hypotheses that can drive improvements in our understanding of the mechanisms and evolution of gene regulation.
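
The core evolutionary ingredient of such a framework is a two-state (site absent/present) continuous-time Markov chain running along each branch; a minimal sketch of its transition probabilities (the gain and loss rates are illustrative):

import numpy as np
from scipy.linalg import expm

def gain_loss_probs(t, gain=0.2, loss=0.5):
    """P[i, j] = Pr(state j at the end of a branch of length t | state i),
    for states 0 = binding site absent, 1 = binding site present."""
    Q = np.array([[-gain,  gain],
                  [ loss, -loss]])
    return expm(Q * t)

print(gain_loss_probs(1.0))

In the full model, branch probabilities like these are combined with a pair HMM over sequences, so that binding-site gain and loss are inferred jointly with the alignment itself.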

Relevance:

30.00%

Publisher:

Abstract:

This paper develops a model of short-range ballistic missile defense and uses it to study the performance of Israel’s Iron Dome system. The deterministic base model allows for inaccurate missiles, unsuccessful interceptions, and civil defense. Model enhancements consider the trade-offs in attacking the interception system, the difficulties faced by militants in assembling large salvos, and the effects of imperfect missile classification by the defender. A stochastic model is also developed. Analysis shows that system performance can be highly sensitive to the missile salvo size, and that systems with higher interception rates are more “fragile” when overloaded. The model is calibrated using publicly available data about Iron Dome’s use during Operation Pillar of Defense in November 2012. If the systems performed as claimed, they saved Israel an estimated 1778 casualties and $80 million in property damage, and thereby made preemptive strikes on Gaza about 8 times less valuable to Israel. Gaza militants could have inflicted far more damage by grouping their rockets into large salvos, but this may have been difficult given Israel’s suppression efforts. Counter-battery fire by the militants is unlikely to be worthwhile unless they can obtain much more accurate missiles.
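
The overload sensitivity can be seen in a deliberately simplified deterministic sketch (our toy parameters; the paper's model also treats missile inaccuracy, classification errors, and civil defense):

def expected_hits(salvo, capacity=20, p_intercept=0.9, p_hit=0.1):
    """Toy salvo model: up to `capacity` incoming missiles are engaged,
    each engagement succeeds with probability p_intercept; every leaker
    strikes a populated area with probability p_hit."""
    engaged = min(salvo, capacity)
    leakers = salvo - engaged * p_intercept
    return leakers * p_hit

for salvo in (5, 20, 80):
    print(salvo, expected_hits(salvo))   # damage jumps sharply past capacity

In this toy, a system with p_intercept = 0.9 lets through 2 of 20 missiles but 62 of 80, a 31-fold jump, whereas a weaker system with p_intercept = 0.5 degrades only 7-fold over the same range; this is the “fragility” of high-performing systems noted above.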

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we introduce a new approach for volatility modeling in discrete and continuous time. We follow the stochastic volatility literature by assuming that the variance is a function of a state variable. However, instead of assuming that the loading function is ad hoc (e.g., exponential or affine), we assume that it is a linear combination of the eigenfunctions of the conditional expectation (resp. infinitesimal generator) operator associated with the state variable in discrete (resp. continuous) time. Special examples are the popular log-normal and square-root models, where the eigenfunctions are the Hermite and Laguerre polynomials, respectively. The eigenfunction approach has at least six advantages: i) it is general, since any square-integrable function may be written as a linear combination of the eigenfunctions; ii) the orthogonality of the eigenfunctions leads to the traditional interpretations of linear principal components analysis; iii) the implied dynamics of the variance and squared return processes are ARMA and, hence, simple for forecasting and inference purposes; iv) more importantly, this generates fat tails for the variance and return processes; v) in contrast to popular models, the variance of the variance is a flexible function of the variance; vi) these models are closed under temporal aggregation.
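
A sketch of why the construction delivers property iii) (the notation here is generic, not copied from the paper): if the eigenfunctions satisfy

\[
\mathbb{E}\left[\varphi_i(x_{t+1}) \mid x_t\right] = \lambda_i\,\varphi_i(x_t)
\quad\text{and}\quad
\sigma_t^2 = \sum_i a_i\,\varphi_i(x_t),
\]

then

\[
\mathbb{E}\left[\sigma_{t+1}^2 \mid x_t\right] = \sum_i a_i\,\lambda_i\,\varphi_i(x_t),
\]

so each eigenfunction behaves as an AR(1) in conditional mean and any finite linear combination gives the variance process ARMA dynamics, with the Hermite (log-normal) and Laguerre (square-root) cases recovered by particular choices of the coefficients a_i.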

Relevance:

30.00%

Publisher:

Abstract:

The GARCH and stochastic volatility paradigms are often brought into conflict as two competing views of the appropriate conditional variance concept: conditional variance given past values of the same series, or conditional variance given a larger past information set (possibly including unobservable state variables). The main thesis of this paper is that, since the econometrician generally has no idea about the structural level of disaggregation, a well-written volatility model should be specified in such a way that one is always allowed to reduce the information set without invalidating the model. In this respect, the debate between observable past information (in the GARCH spirit) and unobservable conditioning information (in the state-space spirit) is irrelevant. In this paper, we stress a square-root autoregressive stochastic volatility (SR-SARV) model which remains true to the GARCH paradigm of ARMA dynamics for squared innovations but weakens the GARCH structure in order to obtain the required robustness properties with respect to various kinds of aggregation. It is shown that the lack of robustness of the usual GARCH setting is due to two very restrictive assumptions: perfect linear correlation between squared innovations and the conditional variance on the one hand, and a linear relationship between the conditional variance of the future conditional variance and the squared conditional variance on the other. By relaxing these assumptions through a state-space setting, we obtain aggregation results without renouncing the conditional variance concept (and related leverage effects), as is the case for the recently suggested weak GARCH model, which obtains aggregation results by replacing conditional expectations with linear projections on symmetric past innovations. Moreover, unlike the weak GARCH literature, we are able to define multivariate models, including higher-order dynamics and risk premiums (in the spirit of GARCH(p,p) and GARCH-in-mean), and to derive conditional moment restrictions well suited for statistical inference. Finally, we are able to characterize the exact relationships between our SR-SARV models (including higher-order dynamics, leverage effects and in-mean effects), usual GARCH models, and continuous-time stochastic volatility models, so that previous results about aggregation of weak GARCH and continuous-time GARCH modeling can be recovered in our framework.
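
A minimal statement of the SR-SARV structure (our notation, an interpretive sketch rather than the paper's exact formulation):

\[
\varepsilon_{t+1} = \sqrt{f_t}\,u_{t+1},
\qquad
f_{t+1} = \omega + \gamma\,f_t + \nu_{t+1},
\]

where f_t is the conditional variance given an information set J_t that may strictly contain the observed past, and u_{t+1} and \nu_{t+1} are martingale differences with respect to J_t. The AR(1) dynamics of f_t survive both a reduction of the information set and temporal aggregation, while the usual GARCH(1,1) corresponds to the degenerate case in which \nu_{t+1} is perfectly linearly correlated with the squared innovation.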

Relevance:

30.00%

Publisher:

Abstract:

Heterotachy, the variation of substitution rates over time and across sites, has been shown to be a frequent phenomenon in real data. Failure to model heterotachy can potentially cause phylogenetic artifacts. Several models currently address heterotachy: the mixture of branch lengths model (MLB) as well as various forms of the covarion model. In this project, our goal is to find a model that efficiently accounts for the heterotachous signals present in the data and thereby improves phylogenetic inference. To this end, two studies were carried out. In the first, we compare the MLB model with the covarion and homogeneous models using the AIC and BIC criteria as well as cross-validation. From our results, we conclude that the MLB model is not necessary for sites whose branch lengths differ across the entire tree, because in real data the heterotachous signals that interfere with phylogenetic inference are generally concentrated in a limited region of the tree. In the second study, we relax the assumption that the covarion model is homogeneous across sites and develop a mixture model based on a Dirichlet process. To evaluate different heterogeneous models, we define several posterior predictive discrepancy tests to study various aspects of molecular evolution from stochastic mappings. These tests show that the covarion mixture model combined with a gamma distribution adequately captures substitution-rate variation both within sites and across sites. Our research provides a detailed description of heterotachy in real data and suggests directions for future heterotachous models. The posterior predictive discrepancy tests provide diagnostic tools for evaluating models in detail. Moreover, our two studies reveal the non-specificity of heterogeneous models and, consequently, the presence of interactions between different heterogeneous models. They strongly suggest that real data contain distinct heterogeneous features that should be accounted for simultaneously in phylogenetic analyses.
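
The covarion building block underlying the second study can be written down compactly: substitutions occur only while a site is switched "on", and sites toggle on and off at given rates (a Tuffley-Steel-style construction; the switch rates and the Jukes-Cantor base matrix below are illustrative, and the thesis's Dirichlet-process mixture over such processes is not sketched here):

import numpy as np

def covarion_generator(R, s_on=0.5, s_off=0.5):
    """Covarion rate matrix over states (character, on) then (character, off):
    R acts within the 'on' block; s_off and s_on are the off/on switch rates."""
    k = R.shape[0]
    I = np.eye(k)
    return np.vstack([np.hstack([R - s_off * I, s_off * I]),
                      np.hstack([s_on * I,     -s_on * I])])

R = 0.25 * (np.ones((4, 4)) - 4.0 * np.eye(4))   # Jukes-Cantor nucleotide rates
Q = covarion_generator(R)                        # 8x8 generator, rows sum to 0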