973 results for Approximations


Relevance:

10.00%

Publisher:

Abstract:

Fractional dynamics is a growing topic in theoretical and experimental scientific research. A classical problem is the initialization required by fractional operators. While the problem is clear from the mathematical point of view, it constitutes a challenge in applied sciences. This paper addresses the problem of initialization and its effect upon dynamical system simulation when adopting numerical approximations. The results are compatible with system dynamics and clarify the formulation of adequate values for the initial conditions in numerical simulations.
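The initialization effect is easy to exhibit numerically. Below is a minimal sketch, not the paper's method: it assumes a Grünwald-Letnikov discretization of the fractional derivative, and the function names and the pre-history record are invented for illustration.

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov binomial weights w_k = (-1)^k C(alpha, k),
    computed by the standard recursion."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def gl_derivative(f, alpha, h, history=None):
    """Approximate the alpha-order derivative of the sampled signal f
    (step h).  `history` holds samples of f *before* t=0: ignoring it
    (history=None) is the usual source of initialization error."""
    g = f if history is None else np.concatenate([history, f])
    w = gl_weights(alpha, len(g))
    offset = 0 if history is None else len(history)
    out = np.empty(len(f))
    for n in range(len(f)):
        m = n + offset
        out[n] = np.dot(w[: m + 1], g[m::-1]) / h**alpha
    return out

# Example: the half-derivative of f(t) = t computed with and without a
# pre-t=0 record differs visibly near t = 0.
h = 0.01
t = np.arange(0, 1, h)
pre = np.arange(-1, 0, h)              # hypothetical pre-initialization record
d_no_hist = gl_derivative(t, 0.5, h)
d_hist = gl_derivative(t, 0.5, h, history=pre)
```

Ignoring `history` amounts to assuming the signal was identically zero before t = 0, which is exactly the initialization choice whose effect on the simulated dynamics the paper examines.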

Relevance:

10.00%

Publisher:

Abstract:

This paper studies the dynamics of the Rayleigh piston using the modeling tools of Fractional Calculus. Several numerical experiments examine the effect of distinct parameter values. The time responses are transformed into the Fourier domain and approximated by means of power-law approximations. The description reveals characteristics typical of fractional Brownian phenomena.
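A power-law approximation of a Fourier amplitude spectrum can be obtained by least squares on log-log axes. The sketch below is illustrative only (the test signal and names are not from the paper); an amplitude spectrum falling off as a power of frequency is the signature associated with fractional Brownian behaviour.

```python
import numpy as np

def powerlaw_fit(x, dt):
    """Fit |X(f)| ~ c * f**(-alpha) to the amplitude spectrum of a
    time response by least squares on log-log axes."""
    X = np.fft.rfft(x - np.mean(x))
    f = np.fft.rfftfreq(len(x), d=dt)[1:]      # drop the DC bin
    amp = np.abs(X)[1:]
    slope, logc = np.polyfit(np.log(f), np.log(amp), 1)
    return -slope, np.exp(logc)                # alpha, c

# Example: an ordinary Brownian walk has an amplitude spectrum ~ f**-1,
# so the fitted exponent comes out near 1.
rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(4096))
alpha, c = powerlaw_fit(x, dt=1.0)
```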

Relevance:

10.00%

Publisher:

Abstract:

This article studies the intercultural trajectory of a Portuguese female aristocrat of the eighteenth to nineteenth centuries. Her trajectory of intercultural transition from a Portuguese provincial lady into an independent owner of a sugar mill in tropical Bahia is documented through family letters, which provide a polyphonic representation of a movement of personal, family, and social transculturation over almost two decades. Maria Bárbara began her journey between cultures as a simple spectator-reader, progressively becoming a commentator-actor-protagonist-author in society, in politics, and in history. These letters function as a translation that is sometimes consecutive, other times simultaneous, of the events lived and witnessed. This concept of intercultural translation is based on the theories of Boaventura de Sousa Santos (2006, 2008), who argues that cultural differences imply that any comparison has to be made using procedures of proportion and correspondence which, taken as a whole, constitute the work of translation itself. These procedures construct approximations of the known to the unknown, of the strange to the familiar, of the ‘other’ to the ‘self’, categories which are always unstable. Likewise, this essay explores the unstable contexts of its object of study, with the purpose of understanding different rationalities and worldviews.

Relevance:

10.00%

Publisher:

Abstract:

The capability of molecular mechanics for modeling the wide distribution of bond angles and bond lengths characteristic of coordination complexes was investigated. This was the preliminary step for future modeling of solvent extraction. Several tin-phosphine oxide complexes were selected as the test group for the desired range of geometry they exhibited, as well as the ligands they contained, which were of interest in connection with solvation. A variety of adjustments were made to Allinger's MM2 force-field in order to improve its performance in the treatment of these systems. A set of unique force constants was introduced for those terms representing the metal-ligand bond lengths, bond angles, and torsion angles. These were significantly smaller than those traditionally used with organic compounds. The Morse potential energy function was incorporated for the M-X bond lengths and the cosine harmonic potential energy function was invoked for the MOP bond angle. These functions were found to accommodate the wide distribution of observed values better than the traditional harmonic approximations. Crystal packing influences on the MOP angle were explored through the inclusion of the isolated molecule within a shell containing the nearest neighbors during energy minimization experiments. This was found to further improve the fit of the MOP angle.
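For reference, the two non-harmonic functional forms named above have standard expressions. The sketch below is a generic illustration with invented parameter values, not the fitted constants of this work.

```python
import numpy as np

D, a, r0 = 50.0, 1.8, 2.0                    # illustrative well depth, width, M-X length
k_theta, theta0 = 30.0, np.radians(150.0)    # illustrative MOP angle parameters

def morse(r):
    """Morse bond-stretch energy: much flatter than a harmonic term far
    from r0, so a wide spread of M-X bond lengths costs little energy."""
    return D * (1.0 - np.exp(-a * (r - r0))) ** 2

def cosine_harmonic(theta):
    """Cosine-harmonic bend: periodic and soft near linearity, which
    suits the broad distribution of observed MOP angles."""
    return 0.5 * k_theta * (np.cos(theta) - np.cos(theta0)) ** 2
```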

Relevance:

10.00%

Publisher:

Abstract:

The anharmonic, multi-phonon (MP), and Debye-Waller (DW) contributions to the phonon-limited resistivity (ρ) of metals derived by Shukla and Muller (1979) by the double-time temperature-dependent Green function method have been numerically evaluated for Na and K in the high-temperature limit. The anharmonic contributions arise from the cubic and quartic shifts of phonons (CS, QS), the phonon width (W), and the interference term (I). The QS, MP, and DW contributions to ρ are also derived by the matrix element method, and the results are in agreement with those of Shukla and Muller (1979). In the high-temperature limit, the contributions to ρ from each of the above-mentioned terms are of the type BT². For the numerical calculations, suitable expressions are derived for the anharmonic contributions to ρ in terms of the third- and fourth-rank tensors obtained by the Ewald procedure. The numerical calculation of the contributions to ρ from the DW and MP terms and the QS has been done exactly, and from the CS, W, and I terms only approximately, in the partial and total Einstein approximations (PEA, TEA), using a first-principles approach (Shukla and Taylor (1976)). The results obtained indicate that there is a strong pairwise cancellation between the DW and MP terms, the QS and CS, and the W and I terms. The sum total of these contributions to ρ for Na and K amounts to 4 to 11% and 2 to 7%, respectively, in the PEA, while in the TEA they amount to 3 to 7% and 1 to 4%, respectively, over the temperature range considered.
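The quoted high-temperature behaviour can be summarized in one line: each term contributes a coefficient multiplying T², so the net anharmonic resistivity is

```latex
\rho_{\mathrm{anh}}(T)
  = \left(B_{\mathrm{DW}} + B_{\mathrm{MP}} + B_{\mathrm{QS}}
        + B_{\mathrm{CS}} + B_{\mathrm{W}} + B_{\mathrm{I}}\right) T^{2},
  \qquad T \gg \Theta_{D},
```

and the reported pairwise cancellations (DW against MP, QS against CS, W against I) keep the bracketed sum small, which is why the total amounts to only a few percent of ρ.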

Relevance:

10.00%

Publisher:

Abstract:

The atomic mean square displacement (MSD) and the phonon dispersion curves (PDCs) of a number of face-centred cubic (fcc) and body-centred cubic (bcc) materials have been calculated from the quasiharmonic (QH) theory, the lowest-order (λ²) perturbation theory (PT), and a recently proposed Green's function (GF) method by Shukla and Hübschle. The latter method includes certain anharmonic effects to all orders of anharmonicity. In order to determine the effect of the range of the interatomic interaction upon the anharmonic contributions to the MSD, we have carried out our calculations for a Lennard-Jones (L-J) solid in the nearest-neighbour (NN) and next-nearest-neighbour (NNN) approximations. These results can be presented in dimensionless units, but if the NN and NNN results are to be compared with each other they must be converted to those of a real solid. When this is done for Xe, the QH MSDs for the NN and NNN approximations are found to differ from each other by about 2%. For the λ² and GF results this difference amounts to 8% and 7%, respectively. For the NN case we have also compared our PT results, which have been calculated exactly, with PT results calculated using a frequency-shift approximation. We conclude that this frequency-shift approximation is a poor one. We have calculated the MSD of five alkali metals, five bcc transition metals, and seven fcc transition metals. The model potentials we have used include the Morse, modified Morse, and Rydberg potentials. In general, the results obtained from the GF method are in the best agreement with experiment. However, this improvement is mostly qualitative, and the values of the MSD calculated from the GF method are not in much better agreement with the experimental data than those calculated from the QH theory. We have also calculated the PDCs of Na and Cu, using the four-parameter modified Morse potential. In the case of Na, our results for the PDCs are in poor agreement with experiment. In the case of Cu, the agreement between theory and experiment is much better, and in addition the PDCs calculated from the GF method are in better agreement with experiment than those obtained from the QH theory.
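For orientation, the quasiharmonic baseline against which the λ² and GF results are compared is the standard expression for a monatomic crystal,

```latex
\langle u^{2}\rangle_{\mathrm{QH}}
  = \frac{\hbar}{2NM}\sum_{\mathbf{q},j}\frac{1}{\omega_{\mathbf{q}j}}
    \coth\!\left(\frac{\hbar\omega_{\mathbf{q}j}}{2k_{B}T}\right),
```

where M is the atomic mass, N the number of unit cells, and ω_qj the quasiharmonic frequencies; the anharmonic methods differ in how these frequencies are shifted and broadened.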

Relevance:

10.00%

Publisher:

Abstract:

The anharmonic contributions of order λ⁶ to the Helmholtz free energy for a crystal in which every atom is on a site of inversion symmetry have been evaluated. The corresponding diagrams in the various orders of perturbation theory are presented. The expressions given are valid at high temperatures. Numerical calculations of the diagrams which contribute to the free energy have been worked out for a nearest-neighbour central-force model of a face-centred cubic lattice in the high-temperature limit, in both the leading-term and the Ludwig approximations. The accuracy of the Ludwig approximation in evaluating the Brillouin-zone sums has been investigated. The expansion of all diagrams in the high-temperature limit has been carried out. The contribution to the specific heat involves a linear as well as a cubic term. We have applied Lennard-Jones, Morse, and exponential-6 types of potentials. A comparison between the contribution to the free energy of order λ⁶ and that of order λ⁴ has been made.
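The three model potentials referred to have standard forms, stated here for reference; the parameter symbols are the conventional ones, not values fitted in this work:

```latex
V_{\mathrm{LJ}}(r) = 4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12}
                   - \left(\frac{\sigma}{r}\right)^{6}\right], \qquad
V_{\mathrm{Morse}}(r) = D\left[e^{-2a(r-r_{0})} - 2\,e^{-a(r-r_{0})}\right],
\qquad
V_{\mathrm{exp6}}(r) = \varepsilon\left[\frac{6}{\alpha-6}\,e^{\alpha(1-r/r_{m})}
                     - \frac{\alpha}{\alpha-6}\left(\frac{r_{m}}{r}\right)^{6}\right].
```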

Relevance:

10.00%

Publisher:

Abstract:

The prediction of a protein's conformation helps in understanding its exhibited functions, allows for modeling, and enables the possible synthesis of the studied protein. Our research focuses on a sub-problem of protein folding known as side-chain packing, whose computational complexity has been proven to be NP-hard. The motivation behind our study is to offer the scientific community a means of obtaining conformation approximations for small to large proteins faster than currently available methods. As the size of proteins increases, current techniques become unusable due to the exponential nature of the problem. We investigated the capability of a hybrid genetic algorithm / simulated annealing technique to predict the low-energy conformational states of proteins of various sizes and to generate statistical distributions of the studied proteins' molecular ensembles for pKa predictions. Our algorithm produced results within acceptable error margins of experimental values and offered considerable speed-up, depending on the protein and on the rotameric-state resolution used.
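A hybrid of this kind can be sketched compactly. The following is an illustrative toy, not the thesis algorithm: it assumes a precomputed energy function over discrete rotamer assignments, and all names and parameter values are invented.

```python
import math, random

def hybrid_ga_sa(n_res, n_rot, energy, pop=40, gens=200, t0=5.0, cool=0.98):
    """Evolve rotamer assignments (one rotamer index per residue) with a
    GA whose offspring acceptance follows a simulated-annealing rule."""
    rand = lambda: [random.randrange(n_rot) for _ in range(n_res)]
    population = [rand() for _ in range(pop)]
    temp = t0
    for _ in range(gens):
        population.sort(key=energy)
        elite = population[: pop // 2]
        children = []
        while len(children) < pop - len(elite):
            p1, p2 = random.sample(elite, 2)
            cut = random.randrange(1, n_res)
            child = p1[:cut] + p2[cut:]            # one-point crossover
            i = random.randrange(n_res)
            child[i] = random.randrange(n_rot)     # point mutation
            # SA-style acceptance against the better parent
            d = energy(child) - min(energy(p1), energy(p2))
            if d <= 0 or random.random() < math.exp(-d / temp):
                children.append(child)
            else:
                children.append(list(min(p1, p2, key=energy)))
        population = elite + children
        temp *= cool
    return min(population, key=energy)
```

The annealing temperature lets poor offspring survive the early generations, which helps the GA escape the local minima that make side-chain packing hard.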

Relevance:

10.00%

Publisher:

Abstract:

Complex networks can arise naturally and spontaneously from all things that act as part of a larger system. From the patterns of socialization between people to the way biological systems organize themselves, complex networks are ubiquitous, but they remain poorly understood. A number of human-designed algorithms have been proposed to describe the organizational behaviour of real-world networks, and breakthroughs in genetics, medicine, epidemiology, neuroscience, telecommunications, and the social sciences have recently resulted. These algorithms, called graph models, represent significant human effort: deriving accurate graph models is non-trivial, time-intensive, and challenging, and may only yield useful results for very specific phenomena. An automated approach can greatly reduce the human effort required and, if effective, provide a valuable tool for understanding the large decentralized systems of interrelated things around us. To the best of the author's knowledge, this thesis proposes the first method for the automatic inference of graph models for complex networks with varied properties, with and without community structure; it is also the first application of genetic programming to the automatic inference of graph models. The system and methodology were tested against benchmark data and shown to be capable of reproducing close approximations to well-known human-designed algorithms. Furthermore, when used to infer a model for real biological data, the resulting model was more representative than models currently used in the literature.
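The inference loop can be illustrated with a deliberately small sketch. It is not the thesis system: a real GP would evolve the attachment rule as an expression tree, whereas here two hand-written rules are merely scored against a target network, and all names are invented.

```python
import random
from collections import Counter

def grow(rule, n):
    """Grow a graph by adding nodes one at a time; `rule(deg)` scores each
    existing node and the new node attaches proportionally to the scores."""
    deg = [1, 1]                                # start from a single edge
    for _ in range(2, n):
        w = [max(rule(d), 1e-9) for d in deg]
        tot = random.random() * sum(w)          # roulette-wheel selection
        for i, wi in enumerate(w):
            tot -= wi
            if tot <= 0:
                break
        deg[i] += 1
        deg.append(1)
    return deg

def fitness(rule, target_deg, n):
    """Distance between the degree histograms of the generated and
    target networks (smaller is better)."""
    a, b = Counter(grow(rule, n)), Counter(target_deg)
    return sum(abs(a[k] - b[k]) for k in set(a) | set(b))

# Compare two candidate rules against a preferential-attachment target;
# the matching rule typically scores a much lower distance.
target = grow(lambda d: d, 500)                 # Barabasi-Albert-like target
print(fitness(lambda d: d, target, 500))        # preferential attachment
print(fitness(lambda d: 1.0, target, 500))      # uniform attachment
```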

Relevance:

10.00%

Publisher:

Abstract:

A wide range of tests for heteroskedasticity has been proposed in the econometric and statistics literature. Although a few exact homoskedasticity tests are available, the commonly employed procedures are quite generally based on asymptotic approximations which may not provide good size control in finite samples. There have been a number of recent studies that seek to improve the reliability of common heteroskedasticity tests using Edgeworth, Bartlett, jackknife and bootstrap methods, yet the latter remain approximate. In this paper, we describe a solution to the problem of controlling the size of homoskedasticity tests in linear regression contexts. We study procedures based on the standard test statistics [e.g., the Goldfeld-Quandt, Glejser, Bartlett, Cochran, Hartley, Breusch-Pagan-Godfrey, White and Szroeter criteria] as well as tests for autoregressive conditional heteroskedasticity (ARCH-type models). We also suggest several extensions of the existing procedures (sup-type and combined test statistics) to allow for unknown breakpoints in the error variance. We exploit the technique of Monte Carlo tests to obtain provably exact p-values for both the standard and the newly suggested tests. We show that the MC test procedure conveniently solves intractable null distribution problems, in particular those raised by the sup-type and combined test statistics, as well as (when relevant) unidentified-nuisance-parameter problems under the null hypothesis. The method proposed works in exactly the same way with both Gaussian and non-Gaussian disturbance distributions [such as heavy-tailed or stable distributions]. The performance of the procedures is examined by simulation. The Monte Carlo experiments conducted focus on: (1) ARCH, GARCH, and ARCH-in-mean alternatives; (2) the case where the variance increases monotonically with (i) one exogenous variable or (ii) the mean of the dependent variable; (3) grouped heteroskedasticity; (4) breaks in variance at unknown points. We find that the proposed tests achieve perfect size control and have good power.
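The core of the Monte Carlo test technique fits in a few lines. The sketch below is a generic illustration under an assumed pivotal statistic, with invented names; it is not the paper's code.

```python
import numpy as np

def mc_pvalue(stat_obs, simulate_stat, n_rep=99, rng=None):
    """Monte Carlo test (Dwass/Barnard): for a statistic that is pivotal
    and continuous under the null, this p-value is provably exact for
    any finite n_rep (discrete statistics need randomized tie-breaking)."""
    rng = rng or np.random.default_rng()
    sims = np.array([simulate_stat(rng) for _ in range(n_rep)])
    return (np.sum(sims >= stat_obs) + 1.0) / (n_rep + 1.0)

# Example: a Goldfeld-Quandt-style variance-ratio statistic; under the
# homoskedastic Gaussian null its distribution is free of nuisance
# parameters, so it can be simulated directly.
n1 = n2 = 20
def gq(rng):
    e1, e2 = rng.standard_normal(n1), rng.standard_normal(n2)
    return e2.var(ddof=1) / e1.var(ddof=1)

obs = 2.3                                  # illustrative observed ratio
p = mc_pvalue(obs, gq, n_rep=99)
```

Because the p-value is referred to the exact finite-sample distribution of the simulated statistics, size control holds for any number of replications, which is what permits exactness where asymptotic approximations fail.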

Relevance:

10.00%

Publisher:

Abstract:

We provide a theoretical framework to explain the empirical finding that the estimated betas are sensitive to the sampling interval even when using continuously compounded returns. We suppose that stock prices have both permanent and transitory components. The permanent component is a standard geometric Brownian motion while the transitory component is a stationary Ornstein-Uhlenbeck process. The discrete time representation of the beta depends on the sampling interval and two components labelled "permanent and transitory betas". We show that if no transitory component is present in stock prices, then no sampling interval effect occurs. However, the presence of a transitory component implies that the beta is an increasing (decreasing) function of the sampling interval for more (less) risky assets. In our framework, assets are labelled risky if their "permanent beta" is greater than their "transitory beta" and vice versa for less risky assets. Simulations show that our theoretical results provide good approximations for the means and standard deviations of estimated betas in small samples. Our results can be perceived as indirect evidence for the presence of a transitory component in stock prices, as proposed by Fama and French (1988) and Poterba and Summers (1988).
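The mechanism can be checked by direct simulation. The sketch below is illustrative, with invented parameter values: the market carries both components, the asset has a permanent beta of 1.5 and no exposure to the market's transitory component, and the estimated beta rises with the sampling interval, as the theory predicts for riskier assets.

```python
import numpy as np

rng = np.random.default_rng(1)

def ou(n, kappa, sigma):
    """Discretized Ornstein-Uhlenbeck (AR(1)) path."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = (1 - kappa) * x[t - 1] + rng.normal(0, sigma)
    return x

n = 200_000
w = np.cumsum(rng.normal(0, 0.01, n))   # permanent component (log of a GBM)
mkt = w + ou(n, 0.05, 0.02)             # market: permanent + transitory
asset = 1.5 * w + ou(n, 0.05, 0.02)     # asset with permanent beta 1.5

def beta(a, m, k):
    """Beta estimated from k-period continuously compounded returns."""
    ra, rm = np.diff(a[::k]), np.diff(m[::k])
    return np.cov(ra, rm)[0, 1] / rm.var(ddof=1)

for k in (1, 5, 20, 60):
    print(k, round(beta(asset, mkt, k), 3))   # rises toward 1.5 with k
```

Lengthening the interval lets the permanent component's variance dominate the bounded transitory variance in the market return, pulling the estimate toward the permanent beta.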

Relevance:

10.00%

Publisher:

Abstract:

This note investigates the adequacy of the finite-sample approximation provided by the Functional Central Limit Theorem (FCLT) when the errors are allowed to be dependent. We compare the distribution of the scaled partial sums of some data with the distribution of the Wiener process to which it converges. Our setup is purposely very simple in that it considers data generated from an ARMA(1,1) process. Yet this is sufficient to bring out interesting conclusions about the particular elements which cause the approximation to be inadequate even in quite large samples.
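The comparison can be reproduced with a few lines of simulation. The sketch below is illustrative rather than the note's actual design: it draws ARMA(1,1) errors, forms the scaled partial-sum process, and checks the variance of its endpoint against the Wiener value of 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def scaled_partial_sums(n, phi=0.5, theta=0.3):
    """Scaled partial-sum process of ARMA(1,1) errors; by the FCLT it
    should be close to a Wiener process for large n."""
    e = rng.standard_normal(n + 1)
    u = np.empty(n)
    u[0] = e[1] + theta * e[0]
    for t in range(1, n):
        u[t] = phi * u[t - 1] + e[t + 1] + theta * e[t]
    lrv = (1 + theta) ** 2 / (1 - phi) ** 2    # long-run variance, sigma = 1
    return np.cumsum(u) / np.sqrt(n * lrv)

# With phi near one the approximation degrades: the endpoint variance
# stays away from the Wiener value of 1 even at n = 1000.
ends = np.array([scaled_partial_sums(1000, phi=0.9)[-1] for _ in range(2000)])
print(ends.var())
```

Strong autoregressive dependence inflates the long-run variance used in the scaling and slows convergence, which is the kind of element the note identifies as degrading the approximation.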

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we propose exact inference procedures for asset pricing models that can be formulated in the framework of a multivariate linear regression (CAPM), allowing for stable error distributions. The normality assumption on the distribution of stock returns is usually rejected in empirical studies, due to excess kurtosis and asymmetry. To model such data, we propose a comprehensive statistical approach which allows for alternative, possibly asymmetric, heavy-tailed distributions without the use of large-sample approximations. The methods suggested are based on Monte Carlo test techniques. Goodness-of-fit tests are formally incorporated to ensure that the error distributions considered are empirically sustainable, and from these, exact confidence sets for the unknown tail and asymmetry parameters of the stable error distribution are derived. Tests for the efficiency of the market portfolio (zero intercepts) which explicitly allow for the presence of (unknown) nuisance parameters in the stable error distribution are also derived. The methods proposed are applied to monthly returns on 12 portfolios of the New York Stock Exchange over the period 1926-1995 (5-year subperiods). We find that stable, possibly skewed, distributions provide a statistically significant improvement in goodness-of-fit and lead to fewer rejections of the efficiency hypothesis.
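As a small illustration of the error family involved (not the paper's procedure; the parameter values are invented), stable variates with tail index alpha below 2 can be drawn with scipy and exhibit the heavy tails and skewness that Gaussian errors cannot match:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# alpha < 2 gives heavy tails; beta != 0 gives asymmetry.
u = stats.levy_stable.rvs(alpha=1.7, beta=-0.3, size=10_000, random_state=rng)

# Sample excess kurtosis is far above the Gaussian value of 0; a Monte
# Carlo test would simulate the model's test statistic under candidate
# (alpha, beta) values and collect those not rejected into a confidence set.
print(stats.kurtosis(u))
```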

Relevance:

10.00%

Publisher:

Abstract:

This article illustrates the applicability of resampling methods to multiple (simultaneous) testing problems in various econometric settings. Joint hypotheses are a routine consequence of economic theory, so controlling the rejection probability of combinations of tests is a problem frequently encountered in econometric and statistical contexts. It is well known that ignoring the joint character of multiple hypotheses can cause the level of the overall procedure to exceed the nominal level considerably. While most multiple-inference methods are conservative in the presence of non-independent statistics, the tests we propose aim to control the significance level exactly. To do so, we consider combined test criteria originally proposed for independent statistics. Applying the Monte Carlo test method, we show how these test-combination methods can be applied to such cases without resorting to asymptotic approximations. After reviewing earlier results on this topic, we show how such a methodology can be used to construct normality tests based on several moments for the errors of linear regression models. For this problem, we propose a finite-sample-valid generalization of the asymptotic test proposed by Kiefer and Salmon (1983), as well as combined tests following the methods of Tippett and of Pearson-Fisher. We observe empirically that the test procedures corrected by the Monte Carlo test method do not suffer from the bias (under-rejection) problem often reported in this literature, notably against platykurtic distributions, and yield appreciable power gains relative to the usual combined methods.
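The combination step can be sketched directly. The function below is an illustration under invented names, not the paper's implementation: it sizes the Tippett (minimum p-value) and Pearson-Fisher combinations exactly by the Monte Carlo test method, even when the individual statistics are dependent.

```python
import numpy as np

def combined_mc_test(pvals_obs, simulate_pvals, n_rep=99, rng=None):
    """Exact-level Tippett (min-p) and Fisher (-2 * sum log p) combined
    tests; `pvals_obs` is the array of observed individual p-values and
    `simulate_pvals` draws the joint p-value vector under the null."""
    rng = rng or np.random.default_rng()
    tip_obs = pvals_obs.min()
    fis_obs = -2.0 * np.log(pvals_obs).sum()
    tip_ge = fis_ge = 0
    for _ in range(n_rep):
        p = simulate_pvals(rng)            # dependence is preserved here
        tip_ge += p.min() <= tip_obs       # a small min-p is extreme
        fis_ge += -2.0 * np.log(p).sum() >= fis_obs
    return {"tippett": (tip_ge + 1) / (n_rep + 1),
            "fisher": (fis_ge + 1) / (n_rep + 1)}
```

For the normality application, `simulate_pvals` would compute the several moment-based statistics on residuals drawn under Gaussian errors, so their joint dependence is reproduced exactly rather than assumed away.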

Relevance:

10.00%

Publisher:

Abstract:

Statistical tests in vector autoregressive (VAR) models are typically based on large-sample approximations, involving the use of asymptotic distributions or bootstrap techniques. After documenting that such methods can be very misleading even with fairly large samples, especially when the number of lags or the number of equations is not small, we propose a general simulation-based technique that allows one to control completely the level of tests in parametric VAR models. In particular, we show that maximized Monte Carlo tests [Dufour (2002)] can provide provably exact tests for such models, whether they are stationary or integrated. Applications to order selection and causality testing are considered as special cases. The technique developed is applied to quarterly and monthly VAR models of the U.S. economy, comprising income, money, interest rates and prices, over the period 1965-1996.
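The maximized Monte Carlo idea can be stated in a few lines. The sketch below is generic and uses invented names; it is not the paper's implementation.

```python
import numpy as np

def mmc_pvalue(stat_obs, simulate_stat, nuisance_grid, n_rep=99, rng=None):
    """Maximized Monte Carlo test [Dufour (2002)]: report the largest MC
    p-value over a grid of nuisance-parameter values (for a VAR, e.g.,
    candidate coefficient matrices), so the test never over-rejects."""
    rng = rng or np.random.default_rng()
    p_max = 0.0
    for theta in nuisance_grid:
        sims = np.array([simulate_stat(theta, rng) for _ in range(n_rep)])
        p = (np.sum(sims >= stat_obs) + 1.0) / (n_rep + 1.0)
        p_max = max(p_max, p)
    return p_max
```

Because the p-value is maximized over the nuisance space, rejecting when it falls below the nominal level is valid whatever the true nuisance value, which is what delivers exact level control for both stationary and integrated VARs.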