86 results for Quasi-stationary Distributions
Abstract:
We use the deformed sine-Gordon models recently presented by Bazeia et al. [1] to take the first steps towards defining the concept of quasi-integrability. We consider one such definition and use it to calculate an infinite number of quasi-conserved quantities through a modification of the usual techniques of integrable field theories. Performing an expansion around the sine-Gordon theory, we are able to evaluate the charges and the anomalies of their conservation laws as a perturbative power series in a small parameter which describes the "closeness" to the integrable sine-Gordon model. We show that in the case of two-soliton scattering the charges, up to first order in perturbation, are conserved asymptotically, i.e. their values are the same in the distant past and future, when the solitons are well separated. We indicate that this property may or may not hold at higher orders, depending on the behavior of the two-soliton solution under a special parity transformation. For closely bound systems, such as breather-like field configurations, the situation is however more complex: the anomalies may have a different structure, implying that the concept of quasi-integrability does not apply in the same way as in the scattering of solitons. We back up our results with data from many numerical simulations, which also demonstrate the existence of long-lived breather-like and wobble-like states in these models.
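The flavor of such numerical simulations can be conveyed in a few lines. The sketch below evolves a sine-Gordon kink with a leapfrog scheme under a toy deformation of the potential (the parameter `eps` is our illustrative stand-in for a small deformation parameter, not the actual models of Bazeia et al.) and returns the topological charge, which should stay at 1 for a kink:

```python
import numpy as np

def kink_topological_charge(eps=0.0, L=40.0, nx=801, dt=0.02, steps=1000):
    """Leapfrog integration of phi_tt = phi_xx - (1 + eps) * sin(phi),
    a toy deformation of sine-Gordon (eps = 0 is the integrable model).
    Returns the topological charge (phi(+inf) - phi(-inf)) / (2 pi)."""
    x = np.linspace(-L / 2, L / 2, nx)
    dx = x[1] - x[0]
    phi = 4.0 * np.arctan(np.exp(x))      # static sine-Gordon kink
    phi_prev = phi.copy()                 # zero initial velocity
    for _ in range(steps):
        lap = np.zeros_like(phi)          # fixed (essentially static) ends
        lap[1:-1] = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dx**2
        force = lap - (1.0 + eps) * np.sin(phi)
        phi_prev, phi = phi, 2.0 * phi - phi_prev + dt**2 * force
    return (phi[-1] - phi[0]) / (2.0 * np.pi)
```

The charge is topological, so it is preserved to high accuracy even when `eps` deforms the model away from integrability; the quasi-conserved charges discussed in the abstract are a far subtler matter.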
Abstract:
In this paper we present our preliminary results, which suggest that some field theory models are "almost" integrable; i.e. they possess a large number of "almost" conserved quantities. First we demonstrate this, in some detail, on a class of models which generalise the sine-Gordon model in (1+1) dimensions. Then, we point out that many field configurations of these models look like those of integrable systems, and others are very close to being integrable. Finally, we attempt to quantify these claims by looking, both analytically and numerically, at some long-lived field configurations which resemble breathers.
Abstract:
The reconstruction of Extensive Air Showers (EAS) observed by particle detectors at ground level is based on the characteristics of observables like the lateral particle density and the arrival times. The lateral densities, inferred for different EAS components from detector data, are usually parameterised by applying various lateral distribution functions (LDFs). The LDFs are used in turn for evaluating quantities like the total number of particles or the density at particular radial distances. Typical expressions for LDFs assume azimuthal symmetry of the density around the shower axis. Deviations of the lateral particle density from this assumption, which arise for various reasons, are smoothed out in the case of compact arrays like KASCADE, but not in the case of arrays like Grande, which only sample a smaller part of the azimuthal variation. KASCADE-Grande, an extension of the former KASCADE experiment, is a multi-component EAS experiment located at the Karlsruhe Institute of Technology (Campus North), Germany. The lateral distributions of charged particles are deduced from the basic information provided by the Grande scintillators - the energy deposits - first in the observation plane, then in the intrinsic shower plane. In all steps azimuthal dependences should be taken into account. As the energy deposit in the scintillators depends on the angles of incidence of the particles, azimuthal dependences are already involved in the first step: the conversion from the energy deposits to the charged particle density. This is done by using the Lateral Energy Correction Function (LECF), which evaluates the mean energy deposited by a charged particle taking into account the contribution of other particles (e.g. photons) to the energy deposit.
By using a very fast procedure for evaluating the energy deposited by various particles, we prepared realistic LECFs depending on the angle of incidence of the shower and on the radial and azimuthal coordinates of the detector location. Mapping the lateral density from the observation plane onto the intrinsic shower plane does not remove the azimuthal dependences arising from geometric and attenuation effects, in particular for inclined showers. Realistic procedures for applying correction factors are developed. Specific examples are given of the bias introduced by neglecting the azimuthal asymmetries in the conversion from the energy deposit in the Grande detectors to the lateral density of charged particles in the intrinsic shower plane. (C) 2011 Elsevier B.V. All rights reserved.
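For orientation, the mapping from the observation plane to the intrinsic shower plane is, at its geometric core, a projection onto the plane perpendicular to the shower axis. The sketch below shows only that bare geometry (our illustration, not the KASCADE-Grande reconstruction code; the LECF and attenuation corrections discussed above are omitted):

```python
import numpy as np

def shower_plane_distance(x, y, theta, phi):
    """Distance of a ground detector at (x, y, 0) from the shower axis,
    measured in the intrinsic shower plane (perpendicular to the axis).
    theta is the zenith angle and phi the azimuth of the shower, in
    radians."""
    n = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])              # unit vector along the axis
    r = np.array([x, y, 0.0])
    r_perp = r - np.dot(r, n) * n              # strip the component along the axis
    return float(np.linalg.norm(r_perp))
```

For a vertical shower (theta = 0) this reduces to the ordinary ground distance; for inclined showers, detectors downstream of the core appear foreshortened, which is one source of the azimuthal asymmetries the abstract describes.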
Abstract:
In this paper we propose a scheme for quasi-perfect state transfer in a network of dissipative harmonic oscillators. We consider ideal sender and receiver oscillators connected by a chain of nonideal transmitter oscillators coupled through nearest-neighbour resonances. From the algebraic properties of the dynamical quantities describing the evolution of the network state, we derive a criterion fixing the coupling strengths between all the oscillators, apart from their natural frequencies, that enables perfect state transfer in the particular case of ideal transmitter oscillators. This criterion provides an easily manipulated formula for perfect state transfer in the special case where the network nonidealities are disregarded. We also extend the criterion to dissipative networks, where the fidelity of the transferred state decreases due to the loss mechanisms. To circumvent almost completely the adverse effect of decoherence, we propose a protocol to achieve quasi-perfect state transfer in nonideal networks. By adjusting the common frequency of the sender and receiver oscillators to be out of resonance with that of the transmitters, we demonstrate that the sender's state tunnels to the receiver oscillator by virtually exciting the nonideal transmitter chain. This virtual process makes the decay rate associated with the transmitter line negligible, at the expense of delaying the state transfer. Apart from our analytical results, numerical computations are presented to illustrate our protocol.
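For context, perfect state transfer in an ideal, loss-free chain can be demonstrated with the well-known coupling profile g_n = sqrt(n(N-n)), which gives complete transfer at t = pi/2. This is a standard benchmark, not the resonance criterion derived in the paper, and it ignores dissipation entirely:

```python
import numpy as np

def transfer_amplitude(N, t):
    """Modulus of the amplitude moved from the first to the last site of
    an N-site chain whose nearest-neighbour couplings follow the
    sqrt(n(N-n)) profile.  The Hamiltonian is real symmetric, so the
    propagator is computed by eigendecomposition."""
    H = np.zeros((N, N))
    for n in range(1, N):
        H[n - 1, n] = H[n, n - 1] = np.sqrt(n * (N - n))
    evals, evecs = np.linalg.eigh(H)
    U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.T
    return abs(U[N - 1, 0])
```

With this profile the amplitude at t = pi/2 is exactly 1 for any chain length; any nonideality (loss, disorder) degrades it, which is the regime the paper's protocol addresses.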
Abstract:
When modeling real-world decision-theoretic planning problems in the Markov Decision Process (MDP) framework, it is often impossible to obtain a completely accurate estimate of transition probabilities. For example, natural uncertainty arises in the transition specification due to elicitation of MDP transition models from an expert or estimation from data, or from non-stationary transition distributions arising from insufficient state knowledge. In the interest of obtaining the most robust policy under transition uncertainty, the Markov Decision Process with Imprecise Transition Probabilities (MDP-IP) has been introduced to model such scenarios. Unfortunately, while various solution algorithms exist for MDP-IPs, they often require external calls to optimization routines and thus can be extremely time-consuming in practice. To address this deficiency, we introduce the factored MDP-IP and propose efficient dynamic programming methods to exploit its structure. Noting that the key computational bottleneck in the solution of factored MDP-IPs is the need to repeatedly solve nonlinear constrained optimization problems, we show how to target approximation techniques to drastically reduce the computational overhead of the nonlinear solver while producing bounded, approximately optimal solutions. Our results show up to two orders of magnitude speedup in comparison to traditional "flat" dynamic programming approaches, and up to an order of magnitude speedup over the extension of factored MDP approximate value iteration techniques to MDP-IPs, while producing the lowest error of any approximation algorithm evaluated. (C) 2011 Elsevier B.V. All rights reserved.
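The inner optimization the abstract refers to is easy to illustrate in the simplest (flat, interval) case: nature chooses the transition probabilities within interval bounds so as to minimize the expected value, which for intervals reduces to a greedy linear program. This is a sketch of the bottleneck's structure, not the paper's factored algorithm:

```python
import numpy as np

def worst_case_expectation(values, lo, hi):
    """Minimal expected value of `values` over probability vectors p with
    lo[i] <= p[i] <= hi[i] and sum(p) = 1 (assumed feasible): start from
    the lower bounds and pour the leftover mass onto the lowest-value
    successor states first."""
    p = np.array(lo, dtype=float)
    slack = 1.0 - p.sum()
    for i in np.argsort(values):       # cheapest states absorb mass first
        add = min(hi[i] - lo[i], slack)
        p[i] += add
        slack -= add
    return float(np.dot(p, values))
```

A robust Bellman backup then wraps this inner minimization inside the usual maximization over actions; the paper's contribution is making such backups tractable when the MDP-IP is represented in factored form.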
Abstract:
In this note we discuss the convergence of Newton's method for minimization. We present examples in which the Newton iterates satisfy the Wolfe conditions and the Hessian is positive definite at each step, and yet the iterates converge to a non-stationary point. These examples answer a question posed by Fletcher in his 1987 book Practical Methods of Optimization.
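For reference, the update under discussion is the plain Newton step for minimization (the counterexamples of the note add a Wolfe line search on top of this update; they are not reproduced here):

```python
import numpy as np

def newton_step(grad, hess, x):
    """One undamped Newton step for minimization:
    x+ = x - H(x)^{-1} g(x), solved as a linear system rather than by
    forming the inverse."""
    return x - np.linalg.solve(hess(x), grad(x))
```

On a strictly convex quadratic this step lands on the minimizer in a single iteration; the note's point is that even with positive definite Hessians and Wolfe-condition step acceptance, the iterates need not approach a stationary point in general.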
Abstract:
Augmented Lagrangian methods for large-scale optimization usually require efficient algorithms for minimization with box constraints. On the other hand, active-set box-constraint methods employ unconstrained optimization algorithms for minimization inside the faces of the box. Several approaches may be employed for computing internal search directions in the large-scale case. In this paper a minimal-memory quasi-Newton approach with secant preconditioners is proposed, taking into account the structure of Augmented Lagrangians that come from the popular Powell-Hestenes-Rockafellar scheme. A combined algorithm, which uses the quasi-Newton formula or a truncated-Newton procedure depending on the presence of active constraints in the penalty-Lagrangian function, is also suggested. Numerical experiments using the CUTE collection are presented.
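The Powell-Hestenes-Rockafellar function named above has a standard closed form; the sketch below writes it out for inequality constraints g(x) <= 0 as an illustration of the scheme's structure, not of the paper's quasi-Newton machinery:

```python
import numpy as np

def phr_augmented_lagrangian(f, g, x, lam, rho):
    """Powell-Hestenes-Rockafellar augmented Lagrangian for
    minimize f(x) subject to g(x) <= 0 (componentwise):
        L(x) = f(x) + (rho/2) sum max(0, g(x) + lam/rho)^2 - |lam|^2/(2 rho),
    with multiplier estimates lam >= 0 and penalty parameter rho > 0."""
    gx = np.asarray(g(x), dtype=float)
    shifted = np.maximum(0.0, gx + lam / rho)
    return float(f(x) + 0.5 * rho * np.sum(shifted**2)
                 - np.sum(lam**2) / (2.0 * rho))
```

When a constraint is far from active and its multiplier is zero, the penalty term vanishes and the function reduces to f, which is why the subproblems inherit the smooth structure the quasi-Newton preconditioners exploit.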
Abstract:
In this paper, a novel statistical test is introduced to compare two locally stationary time series. The proposed approach is a Wald test based on time-varying autoregressive modeling and function projections onto suitable spaces. The covariance structure of the innovations may also be time-varying. In order to obtain function estimators for the time-varying autoregressive parameters, we consider function expansions in spline and wavelet bases. Simulation studies provide evidence that the proposed test performs well. We also assess its usefulness when applied to a financial time series.
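The quadratic-form core shared by all Wald tests can be sketched generically; the paper's version replaces the coefficient vector by projections of time-varying autoregressive curves, which this sketch omits:

```python
import numpy as np

def wald_statistic(beta_hat, cov, R, r):
    """Generic Wald statistic W = (R b - r)' [R V R']^{-1} (R b - r),
    asymptotically chi-squared with rank(R) degrees of freedom under the
    null hypothesis R beta = r, where V is the estimated covariance of
    the estimator b."""
    R = np.asarray(R, dtype=float)
    diff = R @ np.asarray(beta_hat, dtype=float) - np.asarray(r, dtype=float)
    middle = R @ np.asarray(cov, dtype=float) @ R.T
    return float(diff @ np.linalg.solve(middle, diff))
```

In the scalar case this is simply the squared t-ratio, e.g. an estimate of 2 with variance 1 tested against 0 yields W = 4.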
Abstract:
This paper considers the issue of modeling fractional data observed on [0,1), (0,1] or [0,1]. Mixed continuous-discrete distributions are proposed. The beta distribution is used to describe the continuous component of the model since its density can have quite different shapes depending on the values of the two parameters that index the distribution. Properties of the proposed distributions are examined. Also, estimation based on maximum likelihood and conditional moments is discussed. Finally, practical applications that employ real data are presented.
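A minimal instance of such a mixed continuous-discrete model is a beta density inflated with point masses at the endpoints; the parameter names below (p0, p1, a, b) are ours, chosen for illustration:

```python
import math

def zoib_logpdf(y, p0, p1, a, b):
    """Log-density of a mixed model on [0, 1]: point masses p0 at 0 and
    p1 at 1, and a Beta(a, b) density on (0, 1) carrying the remaining
    mass 1 - p0 - p1."""
    if y == 0.0:
        return math.log(p0)
    if y == 1.0:
        return math.log(p1)
    # log of the beta function B(a, b) via log-gamma
    log_beta = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return (math.log(1.0 - p0 - p1)
            + (a - 1.0) * math.log(y) + (b - 1.0) * math.log(1.0 - y)
            - log_beta)
```

Dropping p1 (or p0) gives the variants supported on [0, 1) and (0, 1]; maximum likelihood conveniently factorizes into the discrete mixing weights and the continuous beta component.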
Abstract:
In this article, we study some results related to a specific class of distributions, called the skew-curved-symmetric family, which depends on a parameter controlling skewness and kurtosis simultaneously. Special elements of this family that we study include symmetric as well as well-known asymmetric distributions. General results are given for the score function and the observed information matrix. It is shown that the observed information matrix is singular in some special cases. We illustrate the flexibility of this class of distributions with an application to a real dataset on characteristics of Australian athletes.
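As a point of reference, the prototypical skew-symmetric construction is Azzalini's skew-normal density 2 phi(x) Phi(alpha x); the sketch below shows this simplest relative of the family, not the skew-curved-symmetric family itself:

```python
import math

def skew_normal_pdf(x, alpha):
    """Azzalini skew-normal density 2 * phi(x) * Phi(alpha * x), where
    phi and Phi are the standard normal density and CDF; alpha controls
    the asymmetry, and alpha = 0 recovers the standard normal."""
    phi = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(alpha * x / math.sqrt(2.0)))
    return 2.0 * phi * Phi
```

The singular-information phenomenon mentioned in the abstract is a known hazard of such families: for the skew-normal, the expected information is singular at alpha = 0, which complicates standard likelihood asymptotics near symmetry.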
Abstract:
In this paper a new approach is considered for studying the triangular distribution, using the theoretical development behind skew distributions. The new triangular distribution is obtained by a reparametrization of the usual triangular distribution. The main probabilistic properties of the distribution are studied, including moments, asymmetry and kurtosis coefficients, and a stochastic representation, which provides a simple and efficient method for generating random variables. Moment estimation is also implemented. Finally, a simulation study is conducted to illustrate the behavior of the proposed estimation approach.
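For the usual triangular distribution, the simple route to random generation is the inverse CDF applied to a uniform draw; the sketch below covers the usual (not the reparametrized) family:

```python
import math

def triangular_inv_cdf(u, a, c, b):
    """Inverse CDF of the usual Triangular(a, c, b) distribution with
    minimum a, mode c and maximum b: feeding it a Uniform(0, 1) draw u
    yields a triangular variate."""
    fc = (c - a) / (b - a)          # CDF value at the mode
    if u < fc:
        return a + math.sqrt(u * (b - a) * (c - a))
    return b - math.sqrt((1.0 - u) * (b - a) * (b - c))
```

Stochastic representations like the one the abstract mentions typically replace this piecewise inversion by an explicit transformation of simpler variates, which is even cheaper to sample.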
Abstract:
In this work we construct the stationary measure of the N-species totally asymmetric simple exclusion process in a matrix product formulation. We make the connection between the matrix product formulation and the queueing theory picture of Ferrari and Martin. In particular, in the standard representation, the matrices act on the space of queue lengths. For N > 2 the matrices in fact become tensor products of elements of quadratic algebras. This enables us to give a purely algebraic proof of the stationary measure, which we present for N = 3.
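A baby check of stationarity, far simpler than the multispecies matrix product construction: for a single species on a ring, the stationary measure is uniform over configurations, which can be verified exactly for a small system by solving pi Q = 0 for the full generator:

```python
import itertools
import numpy as np

def tasep_ring_stationary(L, k):
    """Exact stationary distribution of the single-species TASEP on a
    ring of L sites with k particles: each particle hops one site to the
    right at rate 1 when the target site is empty.  Returns the
    normalized left null vector of the generator, indexed by the
    occupation configurations."""
    configs = [c for c in itertools.product([0, 1], repeat=L) if sum(c) == k]
    index = {c: i for i, c in enumerate(configs)}
    Q = np.zeros((len(configs), len(configs)))
    for c in configs:
        i = index[c]
        for site in range(L):
            nxt = (site + 1) % L
            if c[site] == 1 and c[nxt] == 0:     # allowed right hop
                d = list(c)
                d[site], d[nxt] = 0, 1
                Q[i, index[tuple(d)]] += 1.0
                Q[i, i] -= 1.0
    evals, evecs = np.linalg.eig(Q.T)            # left null vector of Q
    pi = np.abs(evecs[:, np.argmin(np.abs(evals))].real)
    return pi / pi.sum()
```

With several species the stationary measure is no longer uniform, and the matrix product / multiline queue machinery of Ferrari and Martin becomes necessary.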
Abstract:
The Birnbaum-Saunders (BS) model is a positively skewed statistical distribution that has received great attention in recent decades. A generalized version of this model was derived based on symmetrical distributions on the real line, named the generalized BS (GBS) distribution. The R package named gbs was developed to analyze data from GBS models. This package contains probabilistic and reliability indicators and random number generators for GBS distributions. Parameter estimates for censored and uncensored data can also be obtained by means of likelihood methods from the gbs package. Goodness-of-fit and diagnostic methods were also implemented in this package in order to check the suitability of the GBS models. In this article, the capabilities and features of the gbs package are illustrated using simulated and real data sets. Shape and reliability analyses for GBS models are presented. A simulation study for evaluating the quality and sensitivity of the estimation method developed in the package is provided and discussed. (C) 2008 Elsevier B.V. All rights reserved.
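The classical route to BS random numbers is a closed-form transformation of a standard normal draw (in the GBS case, of a draw from another symmetric distribution); whether the gbs generators use exactly this route is an assumption on our part, but the transformation itself is textbook:

```python
import math

def bs_from_normal(z, alpha, beta):
    """Map a standard normal draw z to a Birnbaum-Saunders(alpha, beta)
    variate via the classical transformation
        T = beta * (alpha*z/2 + sqrt((alpha*z/2)^2 + 1))^2,
    where alpha is the shape and beta the scale (and median)."""
    w = alpha * z / 2.0
    return beta * (w + math.sqrt(w * w + 1.0))**2
```

Since z = 0 maps to T = beta, the scale parameter is simultaneously the median, one of the properties that makes the BS family convenient in reliability work.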
Abstract:
The Grubbs measurement model is frequently used to compare several measuring devices. It is common to assume that the random terms have a normal distribution. However, such an assumption makes the inference vulnerable to outlying observations, whereas scale mixtures of normal distributions have been an interesting alternative for producing robust estimates while keeping the elegance and simplicity of maximum likelihood theory. The aim of this paper is to develop an EM-type algorithm for parameter estimation, and to use the local influence method to assess the robustness of these parameter estimates under some usual perturbation schemes. In order to identify outliers and to criticize the model building, we use the local influence procedure in a study to compare the precision of several thermocouples. (C) 2008 Elsevier B.V. All rights reserved.
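The flavor of such an EM-type algorithm can be conveyed with the simplest scale-mixture case: a one-dimensional Student-t location/scale fit, where the E-step downweights outlying points. This is our illustration of the mechanism, not the Grubbs-model algorithm of the paper:

```python
import numpy as np

def student_t_location_em(x, nu=4.0, iters=100):
    """EM estimate of location and scale for a Student-t sample with
    known degrees of freedom nu (the t is a scale mixture of normals).
    E-step: weight w_i = (nu + 1) / (nu + d_i^2) with standardized
    squared residual d_i^2.  M-step: weighted mean and variance."""
    x = np.asarray(x, dtype=float)
    mu, s2 = x.mean(), x.var() + 1e-12
    for _ in range(iters):
        d2 = (x - mu)**2 / s2
        w = (nu + 1.0) / (nu + d2)        # outliers get small weights
        mu = np.sum(w * x) / np.sum(w)
        s2 = np.sum(w * (x - mu)**2) / len(x)
    return mu, s2
```

Because the weights shrink for large residuals, a single gross outlier barely moves the location estimate, which is precisely the robustness the abstract attributes to scale mixtures of normals.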
Abstract:
Consider a continuous-time Markov process with transition rates matrix Q on the state space Lambda ∪ {0}. In the associated Fleming-Viot process, N particles evolve independently in Lambda with transition rates matrix Q until one of them attempts to jump to state 0. At this moment the particle jumps to the position of one of the other particles, chosen uniformly at random. When Lambda is finite, we show that the empirical distribution of the particles at a fixed time converges, as N -> infinity, to the distribution of a single particle at the same time conditioned on not touching {0}. Furthermore, the empirical profile of the unique invariant measure for the Fleming-Viot process with N particles converges, as N -> infinity, to the unique quasi-stationary distribution of the one-particle motion. A key element of the approach is to show that the two-particle correlations are of order 1/N.
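Both objects in the last statement are easy to exhibit on a toy chain. The sketch below computes the quasi-stationary distribution exactly and runs a Gillespie simulation of the Fleming-Viot particle system for a two-state example of our own making (Lambda = {1, 2}, rates 1->2, 1->0 and 2->1 all equal to 1); the time-averaged occupation approaches the QSD as N grows:

```python
import numpy as np

def qsd(Q_sub):
    """Quasi-stationary distribution of a killed Markov process: the
    normalized left eigenvector of the generator restricted to the
    non-absorbing states, for the eigenvalue with largest real part."""
    evals, evecs = np.linalg.eig(Q_sub.T)
    v = np.abs(evecs[:, np.argmax(evals.real)].real)
    return v / v.sum()

def fleming_viot_mean_occupation(N=100, events=40000, seed=1):
    """Gillespie simulation of the Fleming-Viot system for the toy
    chain above.  A killed particle instantly copies the state of a
    uniformly chosen survivor.  Returns the time-averaged fraction of
    particles in state 1."""
    rng = np.random.default_rng(seed)
    n1 = N // 2                       # particles currently in state 1
    t_tot = t_weighted = 0.0
    for _ in range(events):
        # aggregate rates: some particle does 1->2, 1->0 (killing), 2->1
        rates = np.array([n1, n1, N - n1], dtype=float)
        total = rates.sum()
        dt = rng.exponential(1.0 / total)
        t_weighted += n1 * dt
        t_tot += dt
        ev = rng.choice(3, p=rates / total)
        if ev == 0:
            n1 -= 1                                  # a particle hops 1 -> 2
        elif ev == 1 and rng.random() >= (n1 - 1) / (N - 1):
            n1 -= 1          # killed particle lands on a state-2 survivor
        elif ev == 2:
            n1 += 1                                  # a particle hops 2 -> 1
    return t_weighted / (t_tot * N)
```

For this chain the sub-generator is [[-2, 1], [1, -1]], whose QSD puts mass (3 - sqrt(5))/2 ≈ 0.382 on state 1; the simulated empirical profile matches this up to the O(1/N) correlations the abstract quantifies.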