837 results for Robust stochastic approximation
Abstract:
Different axioms underlie efficient market theory and Keynes's liquidity preference theory. Efficient market theory assumes the ergodic axiom; consequently, today's decision makers can calculate with actuarial precision the future value of all possible outcomes resulting from today's decisions. Since in an efficient market world decision makers "know" their intertemporal budget constraints, they never default on a loan, i.e., systemic defaults, insolvencies, and bankruptcies are impossible. Keynes's liquidity preference theory rejects the ergodic axiom: the future is ontologically uncertain. Accordingly, systemic defaults and insolvencies can occur but can never be predicted in advance.
Abstract:
Since its discovery, chaos has been an interesting and challenging topic of research, and many great minds have spent their careers trying to give rules to it. Nowadays, thanks to the research of the last century and the advent of computers, it is possible to predict chaotic natural phenomena for a limited amount of time. The aim of this study is to present a recently discovered method for parameter estimation of chaotic dynamical system models via the correlation integral likelihood, to give some hints for a more optimized use of it, and to suggest a possible application to industry. The main part of our study concerned two chaotic attractors whose general behaviour differs, in order to capture possible differences in the results. In the various simulations that we performed, the initial conditions were varied quite exhaustively. The results show that, under certain conditions, this method works very well in all cases. In particular, it emerged that the most important aspect is to be very careful when creating the training set and the empirical likelihood, since a lack of information in this part of the procedure leads to low-quality results.
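A minimal sketch of how a correlation-integral likelihood can drive parameter estimation, using Lorenz-63 as a stand-in system; the radii, run lengths, training-set size, and all function names are illustrative assumptions, not the thesis's actual configuration:

```python
# Hedged sketch: correlation-integral likelihood for chaotic-model parameter
# estimation. All tuning values here are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.spatial.distance import pdist

def lorenz(t, u, sigma, rho, beta):
    x, y, z = u
    return [sigma * (y - x), x * (rho - y) - z, x * y - beta * z]

def trajectory(params, u0, n=2000, dt=0.05):
    t = np.arange(n) * dt
    return solve_ivp(lorenz, (0, t[-1]), u0, t_eval=t, args=params).y.T

def correlation_sums(orbit, radii):
    d = pdist(orbit)  # pairwise distances between points on the orbit
    return np.array([np.mean(d < r) for r in radii])

radii = np.logspace(-1, 1.5, 10)
true = (10.0, 28.0, 8.0 / 3.0)
rng = np.random.default_rng(0)

# Training set: correlation-sum feature vectors from repeated runs at the true
# parameters with perturbed initial conditions -> empirical Gaussian likelihood.
feats = np.array([
    correlation_sums(trajectory(true, [1, 1, 20] + rng.normal(0, 1, 3)), radii)
    for _ in range(50)
])
mu = feats.mean(axis=0)
icov = np.linalg.inv(np.cov(feats.T) + 1e-8 * np.eye(len(radii)))

def neg_log_likelihood(params):
    y = correlation_sums(trajectory(params, [1.0, 1.0, 20.0]), radii)
    return 0.5 * (y - mu) @ icov @ (y - mu)

# Candidate parameters closer to the truth should score lower.
print(neg_log_likelihood(true), neg_log_likelihood((10.0, 32.0, 8.0 / 3.0)))
```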
Stochastic particle models: mean reversion and Burgers dynamics. An application to commodity markets
Abstract:
The aim of this study is to propose a stochastic model for commodity markets linked with the Burgers equation from fluid dynamics. We construct a stochastic particle method for commodity markets, in which particles represent market participants. A discontinuity is included in the model through an interaction kernel equal to the Heaviside function, and its link with the Burgers equation is given. The Burgers equation and the connection of this model with stochastic differential equations are also studied. Further, based on the law of large numbers, we prove the convergence, for large N, of a system of stochastic differential equations describing the evolution of the prices of N traders to a deterministic partial differential equation of Burgers type. Numerical experiments highlight the success of the new proposal in modeling some commodity markets, and this is confirmed by the ability of the model to reproduce price spikes when their effects occur over a sufficiently long period of time.
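A minimal sketch, under illustrative parameter choices, of the kind of particle system described: each of N particles drifts according to the fraction of particles below it (the Heaviside interaction kernel), so that as N grows the empirical law approaches a Burgers-type PDE:

```python
# Hedged Euler-Maruyama sketch of an interacting particle system with a
# Heaviside interaction kernel; N, T, dt, sigma, and the initial law are
# illustrative, not the paper's calibration.
import numpy as np

rng = np.random.default_rng(1)
N, T, dt, sigma = 500, 1.0, 1e-3, 0.3
x = rng.normal(0.0, 1.0, N)          # initial "prices" of N traders

for _ in range(int(T / dt)):
    # drift_i = (1/N) * sum_j H(x_i - x_j): fraction of particles below x_i
    drift = (x[:, None] > x[None, :]).mean(axis=1)
    x = x + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=N)

# The empirical distribution of x approximates a (viscous) Burgers profile.
print(np.quantile(x, [0.1, 0.5, 0.9]))
```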
Abstract:
We examined three different diffusion Monte Carlo (DMC) algorithms to study their precision and accuracy in predicting properties of isolated atoms: the H atom ground state, the Be atom ground state, and the H atom first excited state. All three algorithms (basic DMC, minimal stochastic reconfiguration DMC, and pure DMC, each with future-walking) were successfully implemented for ground-state energy and simple moment calculations, with satisfactory results. Pure diffusion Monte Carlo with the future-walking algorithm proved to be the simplest approach with the least variance. Polarizabilities for the Be atom ground state and the H atom first excited state are not satisfactorily estimated by the infinitesimal-differentiation approach. Likewise, an approach using the finite-field approximation with an unperturbed wavefunction for the latter system also fails. However, accurate estimates of the α-polarizabilities are obtained by using wavefunctions derived from time-independent perturbation theory. This suggests that the flaw in our approach to polarizability estimation for these difficult cases rests with our having assumed the trial function is unaffected by infinitesimal perturbations in the Hamiltonian.
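As a concrete reference point, here is a hedged sketch of basic DMC for the simplest of the three systems, the H atom ground state (atomic units, exact E0 = -0.5 Ha); it omits importance sampling and the future-walking machinery, and the time step, clipping, and population-control gain are illustrative choices:

```python
# Hedged sketch of basic diffusion Monte Carlo for the hydrogen atom.
import numpy as np

rng = np.random.default_rng(2)
dt, n_target, n_steps = 0.01, 2000, 3000
walkers = rng.normal(0.0, 1.0, (n_target, 3))  # electron positions
e_ref, energies = -0.5, []

for step in range(n_steps):
    # Diffusion: free Gaussian move for each walker.
    walkers = walkers + np.sqrt(dt) * rng.normal(size=walkers.shape)
    v = -1.0 / np.linalg.norm(walkers, axis=1)  # Coulomb potential -1/r
    # Branching: replicate/kill with weight exp(-(V - E_ref) dt), clipped to
    # tame the -1/r singularity that importance sampling would normally handle.
    w = np.clip(np.exp(-(v - e_ref) * dt), 0.0, 2.0)
    copies = (w + rng.uniform(size=w.size)).astype(int)
    walkers = np.repeat(walkers, copies, axis=0)
    # Population control: nudge E_ref to keep the walker count near n_target.
    e_ref -= 0.05 * np.log(len(walkers) / n_target)
    if step > 500:
        energies.append(e_ref)

print("estimated E0:", np.mean(energies))  # rough, time-step-biased, near -0.5 Ha
```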
Abstract:
Expressions for the anharmonic Helmholtz free energy contributions up to o(f), valid for all temperatures, have been obtained using perturbation theory for a crystal in which every atom is on a site of inversion symmetry. Numerical calculations have been carried out in the high-temperature limit and in the non-leading-term approximation for a monatomic face-centred cubic crystal with nearest-neighbour central-force interactions. The numbers obtained were seen to vary by as much as 47% from those obtained in the leading-term approximation, indicating that the latter approximation is not in general very good. The convergence to o(f) of the perturbation series in the high-temperature limit appears satisfactory.
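As a hedged, generic illustration of the structure involved (not the thesis's actual expressions), the free energy splits into the standard harmonic part plus anharmonic perturbation corrections in the anharmonicity parameter f:

```latex
% Generic sketch, illustrative only:
F(T) = F_{\mathrm{harm}}(T) + f\,F_{1}(T) + o(f),
\qquad
F_{\mathrm{harm}}(T) = k_{B}T \sum_{\mathbf{k},j}
  \ln\!\left[ 2\sinh\!\frac{\hbar\omega_{j}(\mathbf{k})}{2k_{B}T} \right].
```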
Abstract:
Our objective is to develop a diffusion Monte Carlo (DMC) algorithm to estimate the exact expectation values ⟨Ψ₀|Â|Ψ₀⟩ of multiplicative operators, such as polarizabilities and high-order hyperpolarizabilities, for isolated atoms and molecules. The existing forward-walking pure diffusion Monte Carlo (FW-PDMC) algorithm which attempts this has a serious bias. On the other hand, the DMC algorithm with minimal stochastic reconfiguration provides unbiased estimates of the energies, but the expectation values ⟨Ψ₀|Â|Ψ⟩ are contaminated by Ψ, a user-specified approximate wave function, when Â does not commute with the Hamiltonian. We modified the latter algorithm to obtain the exact expectation values for these operators while eliminating the bias. To compare the efficiency of FW-PDMC and the modified DMC algorithm, we calculated simple properties of the H atom, such as various functions of the coordinates and polarizabilities. Using three non-exact wave functions, one of moderate quality and the others very crude, the results are in each case within statistical error of the exact values.
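In standard QMC notation (a sketch of the distinction the abstract relies on, not the thesis's derivation), with Ψ_T the user-specified trial function and Ψ₀ the exact ground state:

```latex
% Mixed vs. pure estimators in DMC (standard notation, illustrative):
\langle A \rangle_{\mathrm{mixed}}
  = \frac{\langle \Psi_0 | \hat{A} | \Psi_T \rangle}{\langle \Psi_0 | \Psi_T \rangle},
\qquad
\langle A \rangle_{\mathrm{pure}}
  = \frac{\langle \Psi_0 | \hat{A} | \Psi_0 \rangle}{\langle \Psi_0 | \Psi_0 \rangle}.
% The two agree only when [\hat{A}, \hat{H}] = 0; forward walking removes the
% \Psi_T contamination by reweighting each configuration by its number of
% descendants, which estimates the ratio \Psi_0 / \Psi_T.
```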
Abstract:
A feature-based fitness function is applied in a genetic programming system to synthesize stochastic gene regulatory network models whose behaviour is defined by a time course of protein expression levels. Typically, when targeting time series data, the fitness function is based on a sum of errors involving the values of the fluctuating signal. While this approach is successful in many instances, its performance can deteriorate in the presence of noise. This thesis explores a fitness measure determined from a set of statistical features characterizing the time series' sequence of values, rather than the actual values themselves. Through a series of experiments involving symbolic regression with added noise and gene regulatory network models based on the stochastic π-calculus, it is shown to successfully target oscillating and non-oscillating signals. This practical and versatile fitness function offers an alternative approach, worthy of consideration for use in algorithms that evaluate noisy or stochastic behaviour.
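A minimal sketch of a feature-based fitness function in the spirit described above: candidate and target series are compared through summary statistics rather than point-wise errors. The particular features and scaling are illustrative choices, not the thesis's:

```python
# Hedged sketch: feature-based fitness for noisy time series.
import numpy as np

def features(y):
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)
    # Lag-1 autocorrelation and dominant FFT frequency capture oscillation.
    ac1 = np.corrcoef(y[:-1], y[1:])[0, 1]
    freq = np.argmax(np.abs(np.fft.rfft(y - y.mean())[1:])) + 1
    return np.array([y.mean(), y.std(), dy.std(), ac1, freq])

def fitness(candidate, target):
    # Lower is better: scaled distance between feature vectors.
    f_c, f_t = features(candidate), features(target)
    scale = np.abs(f_t) + 1e-9
    return float(np.sum(((f_c - f_t) / scale) ** 2))

t = np.linspace(0, 10, 500)
target = np.sin(2 * np.pi * t)
noisy_good = target + np.random.default_rng(3).normal(0, 0.2, t.size)
# The noisy-but-correct candidate should beat a flat signal despite its noise.
print(fitness(noisy_good, target), fitness(np.zeros_like(t), target))
```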
Abstract:
The initial timing of face-specific effects in event-related potentials (ERPs) is a point of contention in face processing research. Although effects during the time of the N170 are robust in the literature, inconsistent effects during the time of the P100 challenge the interpretation of the N170 as the initial face-specific ERP effect. The early P100 effects are often attributed to low-level differences between face stimuli and a host of other image categories. Research using sophisticated controls for low-level stimulus characteristics (Rousselet, Husk, Bennett, & Sekuler, 2008) reports robust face effects starting at around 130 ms following stimulus onset. The present study examines the independent components (ICs) of the P100 and N170 complex in the context of a minimally controlled low-level stimulus set and a clear P100 effect for faces versus houses at the scalp. Results indicate that four ICs account for the ERPs to faces and houses in the first 200 ms following stimulus onset. The IC that accounts for the majority of the scalp N170 (icN1a) begins dissociating stimulus conditions at approximately 130 ms, closely replicating the scalp results of Rousselet et al. (2008). The scalp effects at the time of the P100 are accounted for by two constituent ICs (icP1a and icP1b). The IC that projects the greatest voltage at the scalp during the P100 (icP1a) shows a face-minus-house effect over the period of the P100 that is less robust than the N170 effect of icN1a when measured as the average of single-subject differential activation robustness. The second constituent process of the P100 (icP1b), although projecting a smaller voltage to the scalp than icP1a, shows a more robust effect for the face-minus-house contrast starting prior to 100 ms following stimulus onset. Further, the effect expressed by icP1b takes the form of a larger negative projection to medial occipital sites for houses over faces, partially canceling the larger projection of icP1a and thereby enhancing the face positivity at this time. These findings have three main implications for ERP research on face processing. First, the ICs that constitute the face-minus-house P100 effect are independent from the ICs that constitute the N170 effect, suggesting that the P100 effect and the N170 effect are anatomically independent. Second, the timing of the N170 effect can be recovered from scalp ERPs that have spatio-temporally overlapping effects possibly associated with low-level stimulus characteristics. This unmixing of the EEG signals may reduce the need for highly constrained stimulus sets, a constraint that is not always desirable for a topic so tightly coupled to ecological validity. Third, by unmixing the constituent processes of the EEG signals, new analysis strategies become available. In particular, exploring the relationship between cortical processes over the period of the P100 and N170 ERP complex (and beyond) may provide previously inaccessible answers to questions such as: is the face effect a special relationship between low-level and high-level processes along the visual stream?
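A hedged sketch of the analysis logic (not the study's recordings or pipeline): unmix multi-channel epochs with ICA, then measure condition differences per component. The synthetic data, mixing, and component count below are illustrative stand-ins:

```python
# Hedged sketch: ICA decomposition of epoched EEG-like data, then per-IC
# condition contrasts, mirroring the four-IC account described above.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
n_epochs, n_chan, n_times = 200, 32, 150
labels = np.repeat([0, 1], n_epochs // 2)          # 0 = faces, 1 = houses

# Stand-in EEG: noise plus one oscillatory source expressed more for "faces".
times = np.linspace(0.0, 0.3, n_times)
src = np.sin(2 * np.pi * 10 * times)
mix = rng.normal(size=n_chan)
X = 0.5 * rng.normal(size=(n_epochs, n_chan, n_times))
X[labels == 0] += 1.5 * mix[None, :, None] * src[None, None, :]
X[labels == 1] += 0.5 * mix[None, :, None] * src[None, None, :]

# Fit ICA on concatenated epochs (samples x channels); keep four components.
ica = FastICA(n_components=4, whiten="unit-variance", random_state=0)
sources = ica.fit_transform(X.transpose(0, 2, 1).reshape(-1, n_chan))
sources = sources.reshape(n_epochs, n_times, 4)

# Condition difference of each IC's epoch-averaged time course.
for c in range(4):
    diff = sources[labels == 0, :, c].mean(0) - sources[labels == 1, :, c].mean(0)
    print(f"IC{c}: peak |faces - houses| difference = {np.abs(diff).max():.3f}")
```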
Abstract:
Accelerated life testing (ALT) is widely used to obtain reliability information about a product within a limited time frame. The Cox proportional hazards (PH) model is often utilized for reliability prediction. My master's thesis research focuses on designing accelerated life testing experiments for reliability estimation. We consider multiple step-stress ALT plans with censoring. The optimal stress levels and the times of changing the stress levels are investigated. We discuss the optimal designs under three optimality criteria: D-, A-, and Q-optimal designs. We note that the classical designs are optimal only if the assumed model is correct. Because predictions from ALT experimental data, obtained under stress levels higher than the normal condition, involve extrapolation, the assumed model cannot be tested. Therefore, to guard against possible imprecision in the assumed PH model, a method of constructing robust designs is also explored.
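A hedged simulation sketch of how candidate step-stress plans can be compared: for each stress-change time tau, simulate censored experiments under an assumed exponential model with a log-linear stress effect, fit by maximum likelihood, and compare the generalized variance of the estimates as a simulation stand-in for D-optimality. The model and all settings are illustrative, not the thesis's design:

```python
# Hedged sketch: comparing step-stress ALT plans by simulation.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
s1, s2, t_cens, n, beta = 1.0, 2.0, 5.0, 100, (-2.0, 1.0)

def simulate(tau):
    lam1 = np.exp(beta[0] + beta[1] * s1)
    lam2 = np.exp(beta[0] + beta[1] * s2)
    t = rng.exponential(1 / lam1, n)
    late = t > tau                        # survivors switch to high stress
    t[late] = tau + rng.exponential(1 / lam2, late.sum())
    obs = np.minimum(t, t_cens)
    return obs, (t <= t_cens), late

def negloglik(b, tau, obs, failed, late):
    lam = np.exp(b[0] + b[1] * np.where(late, s2, s1))
    # Cumulative exposure: tau at low stress, remainder at high stress.
    expo = np.where(late,
                    tau * np.exp(b[0] + b[1] * s1) + (obs - tau) * lam,
                    obs * lam)
    return -(np.sum(np.log(lam[failed])) - np.sum(expo))

for tau in (1.0, 2.0, 3.0):
    est = [minimize(negloglik, beta, args=(tau, *simulate(tau)),
                    method="Nelder-Mead").x for _ in range(200)]
    print(f"tau={tau}: det(cov) = {np.linalg.det(np.cov(np.array(est).T)):.2e}")
```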
Abstract:
Port Dalhousie and Thorold Railway estimate of work done to date with an approximation of probable damage sustained by suspending the track, Aug. 22, 1854.
Asymmetry Risk, State Variables and Stochastic Discount Factor Specification in Asset Pricing Models
Abstract:
Latent variable models in finance originate both from asset pricing theory and from time series analysis. These two strands of literature appeal to two different concepts of latent structures, both of which are useful for reducing the dimension of a statistical model specified for a multivariate time series of asset prices. In CAPM or APT beta pricing models, the dimension reduction is cross-sectional in nature, while in time-series state-space models, dimension is reduced longitudinally by assuming conditional independence between consecutive returns given a small number of state variables. In this paper, we use the concept of the Stochastic Discount Factor (SDF), or pricing kernel, as a unifying principle to integrate these two concepts of latent variables. Beta pricing relations amount to characterizing the factors as a basis of a vector space for the SDF. The coefficients of the SDF with respect to the factors are specified as deterministic functions of state variables which summarize their dynamics. In beta pricing models, it is often said that only factorial risk is compensated, since the remaining idiosyncratic risk is diversifiable. Implicitly, this argument can be interpreted as a conditional cross-sectional factor structure, that is, conditional independence between the contemporaneous returns of a large number of assets given a small number of factors, as in standard Factor Analysis. We provide this unifying analysis in the context of conditional equilibrium beta pricing as well as asset pricing with stochastic volatility, stochastic interest rates, and other state variables. We address the general issue of econometric specifications of dynamic asset pricing models, which covers the modern literature on conditionally heteroskedastic factor models as well as equilibrium-based asset pricing models with an intertemporal specification of preferences and market fundamentals. We interpret various instantaneous causality relationships between state variables and market fundamentals as leverage effects and discuss their central role relative to the validity of standard CAPM-like stock pricing and preference-free option pricing.
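A compact sketch, in standard notation rather than the paper's exact specification, of how the SDF unifies the two latent structures: the pricing kernel prices all returns, and writing it on a factor basis with state-dependent coefficients yields a conditional beta pricing relation:

```latex
% Standard SDF and beta pricing relations (illustrative sketch):
E\!\left[ m_{t+1} R_{i,t+1} \mid \mathcal{F}_t \right] = 1,
\qquad
m_{t+1} = \sum_{j=1}^{K} \lambda_j(Y_t)\, F_{j,t+1}
\;\Longrightarrow\;
E_t\!\left[ R_{i,t+1} \right] - r_{f,t}
  = \sum_{j=1}^{K} \beta_{ij,t}\, \pi_{j,t},
```

where the β are conditional regression coefficients of returns on the factors, the π are factor risk premia, and Y_t collects the state variables driving the SDF coefficients.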
Abstract:
At present, conditions ensuring the validity of bootstrap methods for the sample mean of (possibly heterogeneous) near epoch dependent (NED) functions of mixing processes are unknown. Here we establish the validity of the bootstrap in this context, extending the applicability of bootstrap methods to a class of processes broadly relevant for applications in economics and finance. Our results apply to two block bootstrap methods: the moving blocks bootstrap of Künsch (1989) and Liu and Singh (1992), and the stationary bootstrap of Politis and Romano (1994). In particular, the consistency of the bootstrap variance estimator for the sample mean is shown to be robust against heteroskedasticity and dependence of unknown form. The first-order asymptotic validity of the bootstrap approximation to the actual distribution of the sample mean is also established in this heterogeneous NED context.
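A minimal sketch of the moving blocks bootstrap variance estimator for the sample mean, in the spirit of Künsch (1989) and Liu and Singh (1992); the AR(1) test series with time-varying noise scale and the block length are illustrative choices:

```python
# Hedged sketch: moving blocks bootstrap for the sample mean of a dependent,
# heteroskedastic series.
import numpy as np

rng = np.random.default_rng(6)
n, ell, B = 500, 12, 2000                    # sample size, block length, reps

# Dependent test series: AR(1) with a slowly varying noise scale.
e = rng.normal(size=n) * (1 + 0.5 * np.sin(np.linspace(0, 6, n)))
x = np.empty(n); x[0] = e[0]
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + e[t]

blocks = np.lib.stride_tricks.sliding_window_view(x, ell)  # all length-ell blocks
k = int(np.ceil(n / ell))                                  # blocks per resample
means = np.empty(B)
for b in range(B):
    idx = rng.integers(0, blocks.shape[0], k)
    means[b] = np.concatenate(blocks[idx])[:n].mean()      # resampled series mean

print("bootstrap variance of sqrt(n) * sample mean:", n * means.var())
```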
Abstract:
This paper presents several exact results on the second moments of sample autocorrelations, for Gaussian or non-Gaussian series. We first give general formulas for the mean, the variance, and the covariances of sample autocorrelations in the case where the variables of the series are exchangeable. From these we derive bounds for the variances and covariances of the sample autocorrelations. These bounds are used to obtain exact limits on the critical points when testing the randomness of a time series, without any assumption on the form of the underlying distribution. We give exact and explicit formulas for the variances and covariances of the autocorrelations in the case where the series is Gaussian white noise. We show that these results remain valid when the distribution of the series is spherically symmetric. We present simulation results which clearly indicate that the distribution of the sample autocorrelations is much better approximated by standardizing them with the exact mean and variance and using the asymptotic N(0,1) law than by employing the approximate second moments currently in use. We also study the exact variances and covariances of autocorrelations based on the ranks of the observations.
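A hedged simulation sketch of the paper's central comparison: standardizing a sample autocorrelation with its exact (here, simulated) mean and variance versus the usual N(0, 1/n) approximation. The exact-mean formula E(r_k) = -(n-k)/(n(n-1)) under exchangeability is the standard result; the sample size and lag are illustrative:

```python
# Hedged sketch: exact-moment vs. naive standardization of r_k.
import numpy as np

rng = np.random.default_rng(7)
n, k, reps = 20, 1, 100_000

def r_k(x, k):
    x = x - x.mean()
    return np.sum(x[:-k] * x[k:]) / np.sum(x * x)

r = np.array([r_k(rng.normal(size=n), k) for _ in range(reps)])

# Exact mean under exchangeability vs. its Monte Carlo counterpart.
print("exact mean:", -(n - k) / (n * (n - 1)), "simulated:", r.mean())

# Rejection rates at the nominal 5% level under each standardization.
z_exact = (r - r.mean()) / r.std()
z_naive = r * np.sqrt(n)
print("P(|z| > 1.96), exact moments:", np.mean(np.abs(z_exact) > 1.96))
print("P(|z| > 1.96), naive 1/n    :", np.mean(np.abs(z_naive) > 1.96))
```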
Abstract:
This paper considers various asymptotic approximations in the near-integrated first-order autoregressive model with a non-zero initial condition. We first extend the work of Knight and Satchell (1993), who considered the random walk case with a zero initial condition, to derive the expansion of the relevant joint moment generating function in this more general framework. We also consider, as alternative approximations, the stochastic expansion of Phillips (1987c) and the continuous-time approximation of Perron (1991). We assess whether these alternative methods provide an adequate approximation to the finite-sample distribution of the least-squares estimator in a first-order autoregressive model. The results show that, when the initial condition is non-zero, Perron's (1991) continuous-time approximation performs very well, while the others offer improvements only when the initial condition is zero.
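A hedged simulation sketch of the comparison at issue: the finite-sample distribution of the normalized least-squares statistic versus a continuous-time (Ornstein-Uhlenbeck) approximation that carries the initial condition, in the spirit of the approximations discussed; the localizing constant, sample size, and initial value are illustrative:

```python
# Hedged sketch: finite-sample AR(1) statistic vs. an OU-based approximation.
import numpy as np

rng = np.random.default_rng(8)
T, c, y0 = 50, -5.0, 2.0
rho = 1 + c / T                                   # near-integrated root

def ls_stat():
    y = np.empty(T + 1); y[0] = y0
    for t in range(1, T + 1):
        y[t] = rho * y[t - 1] + rng.normal()
    # T * (rho_hat - rho) from the least-squares regression of y_t on y_{t-1}.
    return T * np.sum(y[:-1] * (y[1:] - rho * y[:-1])) / np.sum(y[:-1] ** 2)

def ou_stat(m=1000):
    # Euler discretization of int J dW / int J^2 dr with dJ = c J dr + dW and
    # J(0) = y0 / sqrt(T) carrying the initial condition into the limit.
    dt = 1.0 / m
    dw = rng.normal(0.0, np.sqrt(dt), m)
    j = np.empty(m + 1); j[0] = y0 / np.sqrt(T)
    for i in range(m):
        j[i + 1] = j[i] + c * j[i] * dt + dw[i]
    return np.sum(j[:-1] * dw) / (np.sum(j[:-1] ** 2) * dt)

fs = np.array([ls_stat() for _ in range(10_000)])
ct = np.array([ou_stat() for _ in range(2_000)])
print("finite-sample quantiles:", np.quantile(fs, [0.05, 0.5, 0.95]))
print("OU-approximation quantiles:", np.quantile(ct, [0.05, 0.5, 0.95]))
```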