991 results for Prediction theory


Relevance:

30.00%

Publisher:

Abstract:

One of the most fundamental and widely accepted ideas in finance is that investors are compensated through higher returns for taking on non-diversifiable risk. Hence the quantification, modeling and prediction of risk have been, and still are, among the most prolific research areas in financial economics. It was recognized early on that there are predictable patterns in the variance of speculative prices. Later research has shown that there may also be systematic variation in the skewness and kurtosis of financial returns. Lacking in the literature so far is an out-of-sample forecast evaluation of the potential benefits of these new, more complicated models with time-varying higher moments. Such an evaluation is the topic of this dissertation. Essay 1 investigates the forecast performance of the GARCH(1,1) model when estimated with nine different error distributions on Standard and Poor's 500 index futures returns. By utilizing the theory of realized variance to construct an appropriate ex post measure of variance from intra-day data, it is shown that allowing for a leptokurtic error distribution leads to significant improvements in variance forecasts compared to using the normal distribution. This result holds for daily, weekly, and monthly forecast horizons. It is also found that allowing for skewness and time variation in the higher moments of the distribution does not further improve forecasts. In Essay 2, using 20 years of daily Standard and Poor's 500 index returns, it is found that density forecasts are much improved by allowing for constant excess kurtosis but not by allowing for skewness. Allowing the kurtosis and skewness to be time varying does not further improve the density forecasts but, on the contrary, makes them slightly worse. In Essay 3 a new model incorporating conditional variance, skewness and kurtosis based on the Normal Inverse Gaussian (NIG) distribution is proposed. The new model and two previously used NIG models are evaluated by their Value at Risk (VaR) forecasts on a long series of daily Standard and Poor's 500 returns. The results show that only the new model produces satisfactory VaR forecasts for both 1% and 5% VaR. Taken together, the results of the thesis show that kurtosis does not appear to exhibit predictable time variation, whereas some predictability is found in the skewness. However, the dynamic properties of the skewness are not completely captured by any of the models.
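To make the modeling framework of Essay 1 concrete, the following is a minimal sketch of a GARCH(1,1) conditional-variance recursion and a one-step-ahead variance forecast in Python; the parameter values and the heavy-tailed synthetic return series are illustrative assumptions, not estimates or data from the thesis.

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """GARCH(1,1) recursion: sigma2[t] = omega + alpha*r[t-1]**2 + beta*sigma2[t-1]."""
    sigma2 = np.empty_like(returns)
    sigma2[0] = returns.var()                      # initialize at the sample variance
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# Illustrative parameters and fat-tailed (leptokurtic) synthetic "returns"
rng = np.random.default_rng(0)
r = 0.01 * rng.standard_t(df=5, size=1000)
sigma2 = garch11_variance(r, omega=1e-6, alpha=0.08, beta=0.90)

# One-step-ahead variance forecast, to be judged against a realized-variance proxy
forecast_next = 1e-6 + 0.08 * r[-1] ** 2 + 0.90 * sigma2[-1]
print(forecast_next)
```

In an out-of-sample evaluation of the kind described above, forecasts like `forecast_next` would be compared day by day against realized variance computed from intra-day returns.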

Relevance:

30.00%

Publisher:

Abstract:

A microscopic theory of equilibrium solvation and solvation dynamics of a classical, polar, solute molecule in a dipolar solvent is presented. Density functional theory is used to explicitly calculate the polarization structure around a solvated ion. The calculated solvent polarization structure differs from the continuum model prediction in several respects. The value of the polarization at the surface of the ion is less than the continuum value. The solvent polarization also exhibits small oscillations in space near the ion. We show that, under certain approximations, our linear equilibrium theory reduces to the nonlocal electrostatic theory, with the dielectric function ε(k) of the liquid now wave vector (k) dependent. It is further shown that the nonlocal electrostatic estimate of solvation energy, with a microscopic ε(k), is close to the estimate of linearized equilibrium theories of polar liquids. The study of solvation dynamics is based on a generalized Smoluchowski equation with a mean-field force term to take into account the effects of intermolecular interactions. This study incorporates the local distortion of the solvent structure near the ion and also the effects of the translational modes of the solvent molecules. The latter contribution, if significant, can considerably accelerate the relaxation of solvent polarization and can even give rise to a long time decay that agrees with the continuum model prediction. The significance of these results is discussed.

Relevance:

30.00%

Publisher:

Abstract:

A molecular theory of dielectric relaxation in a dense binary dipolar liquid is presented. The theory takes into account the effects of intra- and interspecies intermolecular interactions. It is shown that the relaxation is, in general, nonexponential. In certain limits, we recover the biexponential form traditionally used to analyze the experimental data of dielectric relaxation in a binary mixture. However, the relaxation times are widely different from the prediction of the noninteracting rotational diffusion model of Debye for a binary system. Detailed numerical evaluation of the frequency-dependent dielectric function ε(ω) is carried out by using the known analytic solution of the mean spherical approximation (MSA) model for the two-particle direct correlation function for a polar mixture. A microscopic expression for the wave vector (k) and frequency (ω) dependent dielectric function, ε(k,ω), of a binary mixture is also presented. The theoretical predictions for ε(ω) (= ε(k = 0, ω)) have been compared with the available experimental results. In particular, the present theory offers a molecular explanation of the phenomenon of fusing of the two relaxation channels of the neat liquids, observed by Schallamach many years ago.
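For orientation, the biexponential form referred to above corresponds in the frequency domain to a sum of two Debye terms, ε(ω) = ε_∞ + Δε₁/(1 + iωτ₁) + Δε₂/(1 + iωτ₂); the short sketch below evaluates this standard textbook form with made-up parameters (it is not the microscopic MSA-based calculation of the paper).

```python
import numpy as np

def two_debye_epsilon(omega, eps_inf, d_eps1, tau1, d_eps2, tau2):
    """Two-Debye (biexponential) dielectric function of a binary polar mixture."""
    return (eps_inf
            + d_eps1 / (1.0 + 1j * omega * tau1)
            + d_eps2 / (1.0 + 1j * omega * tau2))

# Illustrative parameters only (not fitted values from the paper)
omega = 2 * np.pi * np.logspace(8, 12, 400)          # angular frequency, rad/s
eps = two_debye_epsilon(omega, eps_inf=2.0,
                        d_eps1=20.0, tau1=50e-12,     # slow relaxation channel
                        d_eps2=5.0,  tau2=5e-12)      # fast relaxation channel
loss_peak_hz = omega[np.argmax(-eps.imag)] / (2 * np.pi)
print(f"dielectric loss peak near {loss_peak_hz:.2e} Hz")
```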

Relevance:

30.00%

Publisher:

Abstract:

Representation and quantification of uncertainty in climate change impact studies are difficult tasks. Several sources of uncertainty arise in studies of hydrologic impacts of climate change, such as those due to the choice of general circulation models (GCMs), scenarios and downscaling methods. Recently, much work has focused on uncertainty quantification and modeling in regional climate change impacts. In this paper, an uncertainty modeling framework is evaluated which uses a generalized uncertainty measure to combine GCM, scenario and downscaling uncertainties. The Dempster-Shafer (D-S) evidence theory is used for representing and combining uncertainty from various sources. A significant advantage of the D-S framework over the traditional probabilistic approach is that it allows for the allocation of a probability mass to sets or intervals, and can hence handle both aleatory (stochastic) uncertainty and epistemic (subjective) uncertainty. This paper shows how the D-S theory can be used to represent beliefs in hypotheses such as hydrologic drought or wet conditions, describe uncertainty and ignorance in the system, and give a quantitative measurement of belief and plausibility in results. The D-S approach has been used in this work for information synthesis using various evidence combination rules with different approaches to modeling conflict. A case study is presented for hydrologic drought prediction using downscaled streamflow in the Mahanadi River at Hirakud in Orissa, India. Projections of the n most likely monsoon streamflow sequences are obtained from a conditional random field (CRF) downscaling model, using an ensemble of three GCMs for three scenarios, and are converted to monsoon standardized streamflow index (SSFI-4) series. This range is used to specify the basic probability assignment (bpa) for a Dempster-Shafer structure, which represents the uncertainty associated with each of the SSFI-4 classifications. These uncertainties are then combined across GCMs and scenarios using various evidence combination rules given by the D-S theory. A Bayesian approach is also presented for this case study, which models the uncertainty in projected frequencies of SSFI-4 classifications by deriving a posterior distribution for the frequency of each classification, using an ensemble of GCMs and scenarios. Results from the D-S and Bayesian approaches are compared, and the relative merits of each approach are discussed. Both approaches show an increasing probability of extreme, severe and moderate droughts and a decreasing probability of normal and wet conditions in Orissa as a result of climate change. (C) 2010 Elsevier Ltd. All rights reserved.
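As a concrete illustration of the evidence-combination step, the sketch below applies Dempster's rule to two basic probability assignments defined over drought/normal/wet hypotheses; the masses are invented for illustration and are not the GCM- and scenario-derived bpas of the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: combine two bpas (dict: frozenset -> mass), renormalizing
    by 1 - K, where K is the mass assigned to conflicting (empty) intersections."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {s: w / (1.0 - conflict) for s, w in combined.items()}, conflict

D, N, W = "drought", "normal", "wet"      # frame of discernment (illustrative)
m_a = {frozenset({D}): 0.5, frozenset({D, N}): 0.3, frozenset({D, N, W}): 0.2}
m_b = {frozenset({D}): 0.4, frozenset({N, W}): 0.4, frozenset({D, N, W}): 0.2}

m_ab, K = dempster_combine(m_a, m_b)
belief_drought = sum(w for s, w in m_ab.items() if s <= frozenset({D}))
print(f"conflict K = {K:.2f}, Bel(drought) = {belief_drought:.2f}")
```

Alternative combination rules in the D-S literature differ mainly in how this conflict mass K is redistributed, which is the "conflict modeling" distinction referred to above.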

Relevance:

30.00%

Publisher:

Abstract:

Recently three different experimental studies on ultrafast solvation dynamics in monohydroxy straight-chain alcohols (C1-C4) have been carried out, with the aim of quantifying the time constant (and the amplitude) of the ultrafast component. The results reported, however, differ considerably between experiments. In order to understand the reason for these differences, we have carried out a detailed theoretical study to investigate the time dependent progress of solvation of both an ionic and a dipolar solute probe in these alcohols. For methanol, the agreement between the theoretical predictions and the experimental results [Bingemann and Ernsting, J. Chem. Phys. 1995, 102, 2691 and Horng et al., J. Phys. Chem. 1995, 99, 17311] is excellent. For ethanol, propanol, and butanol, we find no ultrafast component with a time constant of 70 fs or so. For these three liquids, the theoretical results are in almost complete agreement with the experimental results of Horng et al. For ethanol and propanol, the theoretical prediction for ionic solvation is not significantly different from that for dipolar solvation. Thus, the theory suggests that the experiments of Bingemann and Ernsting and those of Horng et al. studied essentially the polar solvation dynamics. The theoretical studies also suggest that the experimental investigations of Joo et al., which report a much faster and larger ultrafast component in the same series of solvents (J. Chem. Phys. 1996, 104, 6089), might have been more sensitive to the nonpolar part of solvation dynamics than the polar part. In addition, a discussion of the validity of the present theoretical approach is presented. In this theory the ultrafast component arises from the almost frictionless inertial motion of individual solvent molecules in the force field of their neighbors.
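The quantity compared across these experiments is typically the normalized solvation response S(t) = (ν(t) − ν(∞)) / (ν(0) − ν(∞)) built from the time-dependent fluorescence peak frequency; the sketch below computes S(t) and the weight of the sub-200 fs portion from synthetic data, purely to show how the amplitude of an ultrafast component is quantified (the numbers are not from the experiments cited).

```python
import numpy as np

def solvation_response(nu_t):
    """Normalized solvation response S(t) = (nu(t) - nu(inf)) / (nu(0) - nu(inf))."""
    return (nu_t - nu_t[-1]) / (nu_t[0] - nu_t[-1])

# Synthetic Stokes-shift trace: a ~70 fs Gaussian-like ultrafast part plus a
# slower exponential tail (illustrative, not measured alcohol data)
t = np.linspace(0.0, 5.0, 500)                                          # ps
nu = 20000.0 + 800.0 * (0.6 * np.exp(-(t / 0.07) ** 2) + 0.4 * np.exp(-t / 1.5))

S = solvation_response(nu)
fast_fraction = 1.0 - np.interp(0.2, t, S)      # fraction relaxed within 0.2 ps
print(f"fraction of the Stokes shift relaxed within 0.2 ps: {fast_fraction:.2f}")
```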

Relevance:

30.00%

Publisher:

Abstract:

This work focuses on the formulation of an asymptotically correct theory for symmetric composite honeycomb sandwich plate structures. In these panels, transverse stresses strongly influence design. Conventional 2-D finite elements cannot predict the thickness-wise distributions of transverse shear or normal stresses and 3-D displacements. Unfortunately, the use of the more accurate three-dimensional finite elements is computationally prohibitive. The development of the present theory is based on the Variational Asymptotic Method (VAM). Its unique features are the identification and utilization of additional small parameters associated with the anisotropy and non-homogeneity of composite sandwich plate structures. These parameters are the small ratios of the thickness of the facial layers to that of the core, and of the 3-D stiffness coefficients of the core to those of the face sheets. Finally, anisotropy in the core and face sheets is addressed by the small parameters within the 3-D stiffness matrices. Numerical results are illustrated for several sample problems. The 3-D responses recovered using the VAM-based model are obtained in a much more computationally efficient manner than, and are in agreement with, those of available 3-D elasticity solutions and 3-D FE solutions of MSC NASTRAN. (c) 2012 Elsevier Ltd. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

Several time dependent fluorescence Stokes shift (TDFSS) experiments have reported a slow power law decay in the hydration dynamics of a DNA molecule. Such a power law has been observed neither in computer simulations nor in some other TDFSS experiments. Here we observe that a slow decay may originate from the collective ion contribution (since in experiments the DNA is immersed in a buffer solution), from groove-bound water, and from the DNA dynamics itself. In this work we first express the solvation time correlation function in terms of dynamic structure factors of the solution. We use mode coupling theory to calculate analytically the time dependence of the collective ionic contribution. A power law decay is seen to originate from an interplay between the long-range probe-ion direct correlation function and the ion-ion dynamic structure factor. Although the power law decay is reminiscent of the Debye-Falkenhagen effect, solvation dynamics is dominated by ion atmosphere relaxation times at longer length scales (small wave numbers) than in electrolyte friction. We further discuss why this power law may not originate from the water motions that have been computed by molecular dynamics simulations. Finally, we propose several experiments to check the predictions of the present theoretical work.

Relevance:

30.00%

Publisher:

Abstract:

We present a framework for obtaining reliable solid-state charge and optical excitations and spectra from optimally tuned range-separated hybrid density functional theory. The approach, which is fully couched within the formal framework of generalized Kohn-Sham theory, allows for the accurate prediction of exciton binding energies. We demonstrate our approach through first principles calculations of one- and two-particle excitations in pentacene, a molecular semiconducting crystal, where our work is in excellent agreement with experiments and prior computations. We further show that with one adjustable parameter, set to produce the known band gap, this method accurately predicts band structures and optical spectra of silicon and lithium fluoride, prototypical covalent and ionic solids. Our findings indicate that for a broad range of extended bulk systems, this method may provide a computationally inexpensive alternative to many-body perturbation theory, opening the door to studies of materials of increasing size and complexity.

Relevance:

30.00%

Publisher:

Abstract:

Electronically nonadiabatic decomposition pathways of guanidinium triazolate are explored theoretically. Nonadiabatically coupled potential energy surfaces are explored at the complete active space self-consistent field (CASSCF) level of theory. For better estimation of energies, complete active space second order perturbation theories (CASPT2 and CASMP2) are also employed. Density functional theory (DFT) with the B3LYP functional and the MP2 level of theory are used to explore subsequent ground state decomposition pathways. Among all possible stable decomposition products (such as N2, NH3, HNC, HCN, NH2CN and CH3NC), only NH3 (with NH2CN) and N2 are predicted to be the energetically most accessible initial decomposition products. Furthermore, different conical intersections between the S1 and S0 surfaces, which are computed at the CASSCF(14,10)/6-31G(d) level of theory, are found to play an essential role in the excited state deactivation process of guanidinium triazolate. This is the first report on the electronically nonadiabatic decomposition mechanisms of the isolated guanidinium triazolate salt. (C) 2015 Elsevier B.V. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

In the present research, discrete dislocation theory is used to analyze size effect phenomena in MEMS devices undergoing micro-bending loads. Results consistent with experimental observations in the literature are obtained. To check the effectiveness of discrete dislocation theory in predicting the size effect, both the basic version of the theory and the updated one are adopted simultaneously. The normalized stress-strain relations of the material are obtained for different plate thicknesses and different obstacle densities. The predictions are compared with experimental results.

Relevance:

30.00%

Publisher:

Abstract:

The LURR theory is a new approach to earthquake prediction which has achieved good results within mainland China and in regions of America, Japan and Australia. However, expanding the prediction region entails refining the longitude-latitude grid and lengthening the time period covered. This requires far more computation, and the volume of data reaches the order of gigabytes, which is very difficult to handle on a single CPU. In this paper, a new method is introduced to solve this problem. Adopting domain decomposition and parallelizing with MPI, we developed a new parallel tempo-spatial scanning program.
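The parallelization strategy can be sketched with mpi4py: the longitude-latitude grid is split across ranks, each rank scans its own rows of cells, and the root process reassembles the result. The per-cell function below is a stand-in placeholder; the actual LURR computation is not reproduced here, and the region and grid spacing are made up.

```python
import numpy as np
from mpi4py import MPI

def scan_cell(lon, lat):
    """Placeholder for the per-cell tempo-spatial LURR computation."""
    return np.hypot(lon, lat)

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Illustrative region and grid spacing; each rank takes a contiguous block of rows.
lons = np.arange(100.0, 110.0, 0.1)
lats = np.arange(20.0, 30.0, 0.1)
my_rows = np.array_split(np.arange(len(lats)), size)[rank]

local = [(i, [scan_cell(lon, lats[i]) for lon in lons]) for i in my_rows]

# Root gathers the partial scans and reassembles them in row order.
gathered = comm.gather(local, root=0)
if rank == 0:
    result = np.empty((len(lats), len(lons)))
    for part in gathered:
        for i, row in part:
            result[i] = row
    print("scan complete:", result.shape)
```

Run, for example, with `mpirun -n 4 python scan.py` (the file name is arbitrary); each rank only holds the rows of the scan assigned to it until the final gather.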

Relevance:

30.00%

Publisher:

Abstract:

Signal processing techniques play important roles in the design of digital communication systems. These include information manipulation, transmitter signal processing, channel estimation, channel equalization and receiver signal processing. By interacting with communication theory and system implementation technologies, signal processing specialists develop efficient schemes for various communication problems by exploiting mathematical tools such as analysis, probability theory, matrix theory, and optimization theory. In recent years, researchers realized that multiple-input multiple-output (MIMO) channel models are applicable to a wide range of different physical communication channels. Using elegant matrix-vector notation, many MIMO transceiver (including precoder and equalizer) design problems can be solved with matrix and optimization theory. Furthermore, researchers have shown that majorization theory and matrix decompositions, such as the singular value decomposition (SVD), the geometric mean decomposition (GMD) and the generalized triangular decomposition (GTD), provide unified frameworks for solving many point-to-point MIMO transceiver design problems.
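As a minimal illustration of how such a decomposition turns a MIMO design problem into scalar subproblems, the sketch below uses the ordinary SVD (the simplest member of the family listed above, not the GGMD/GTD developed in this thesis) to diagonalize a random flat MIMO channel into parallel subchannels.

```python
import numpy as np

rng = np.random.default_rng(1)
nt = nr = 4
# Random flat MIMO channel with i.i.d. complex Gaussian entries (illustrative)
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)

# SVD transceiver: precode with V, equalize with U^H, leaving a diagonal
# channel whose per-subchannel gains are the singular values.
U, s, Vh = np.linalg.svd(H)
x = rng.standard_normal(nt) + 1j * rng.standard_normal(nt)   # data symbols
y = U.conj().T @ (H @ (Vh.conj().T @ x))                     # precode, transmit, equalize

print(np.allclose(y, s * x))   # True: four parallel scalar subchannels with gains s
```

GMD- and GTD-type decompositions replace the diagonal factor with a triangular one, which is what enables decision feedback equalization and uniform per-subchannel performance.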

In this thesis, we consider the transceiver design problems for linear time invariant (LTI) flat MIMO channels, linear time-varying narrowband MIMO channels, flat MIMO broadcast channels, and doubly selective scalar channels. Additionally, the channel estimation problem is also considered. The main contributions of this dissertation are the development of new matrix decompositions, and the use of matrix decompositions and majorization theory in practical transmit-receive scheme designs for transceiver optimization problems. Elegant solutions are obtained, novel transceiver structures are developed, ingenious algorithms are proposed, and performance analyses are derived.

The first part of the thesis focuses on transceiver design with LTI flat MIMO channels. We propose a novel matrix decomposition which decomposes a complex matrix as a product of several sets of semi-unitary matrices and upper triangular matrices in an iterative manner. The complexity of the new decomposition, generalized geometric mean decomposition (GGMD), is always less than or equal to that of the geometric mean decomposition (GMD). The optimal GGMD parameters which yield the minimal complexity are derived. Based on the channel state information (CSI) at both the transmitter (CSIT) and receiver (CSIR), GGMD is used to design a butterfly structured decision feedback equalizer (DFE) MIMO transceiver which achieves the minimum average mean square error (MSE) under the total transmit power constraint. A novel iterative receiving detection algorithm for the specific receiver is also proposed. For the application to cyclic prefix (CP) systems, in which the SVD of the equivalent channel matrix can be easily computed, the proposed GGMD transceiver has a K/log_2(K)-fold complexity advantage over the GMD transceiver, where K is the number of data symbols per data block and is a power of 2. The performance analysis shows that the GGMD DFE transceiver can convert a MIMO channel into a set of parallel subchannels with the same bias and signal to interference plus noise ratios (SINRs). Hence, the average bit error rate (BER) is automatically minimized without the need for bit allocation. Moreover, the proposed transceiver can achieve the channel capacity simply by applying independent scalar Gaussian codes of the same rate at the subchannels.

In the second part of the thesis, we focus on MIMO transceiver design for slowly time-varying MIMO channels with a zero-forcing or MMSE criterion. Even though the GGMD/GMD DFE transceivers work for slowly time-varying MIMO channels by exploiting the instantaneous CSI at both ends, their performance is by no means optimal since the temporal diversity of the time-varying channels is not exploited. Based on the GTD, we develop the space-time GTD (ST-GTD) for the decomposition of linear time-varying flat MIMO channels. Under the assumption that CSIT, CSIR and channel prediction are available, by using the proposed ST-GTD, we develop space-time geometric mean decomposition (ST-GMD) DFE transceivers under the zero-forcing or MMSE criterion. Under perfect channel prediction, the new system minimizes both the average MSE at the detector in each space-time (ST) block (which consists of several coherence blocks), and the average per-ST-block BER in the moderately high SNR region. Moreover, the ST-GMD DFE transceiver designed under an MMSE criterion maximizes the Gaussian mutual information over the equivalent channel seen by each ST-block. In general, the newly proposed transceivers perform better than the GGMD-based systems since the super-imposed temporal precoder is able to exploit the temporal diversity of time-varying channels. For practical applications, a novel ST-GTD based system which does not require channel prediction but shares the same asymptotic BER performance with the ST-GMD DFE transceiver is also proposed.

The third part of the thesis considers two quality of service (QoS) transceiver design problems for flat MIMO broadcast channels. The first is the power minimization problem (min-power) with a total bitrate constraint and per-stream BER constraints. The second is the rate maximization problem (max-rate) with a total transmit power constraint and per-stream BER constraints. Exploiting a particular class of joint triangularization (JT), we are able to jointly optimize the bit allocation and the broadcast DFE transceiver for the min-power and max-rate problems. The resulting optimal designs are called the minimum power JT broadcast DFE transceiver (MPJT) and the maximum rate JT broadcast DFE transceiver (MRJT), respectively. In addition to the optimal designs, two suboptimal designs based on QR decomposition are proposed. They are realizable for an arbitrary number of users.

Finally, we investigate the design of a discrete Fourier transform (DFT) modulated filterbank transceiver (DFT-FBT) with LTV scalar channels. For both the case of known LTV channels and that of unknown wide sense stationary uncorrelated scattering (WSSUS) statistical channels, we show how to optimize the transmitting and receiving prototypes of a DFT-FBT such that the SINR at the receiver is maximized. Also, a novel pilot-aided subspace channel estimation algorithm is proposed for orthogonal frequency division multiplexing (OFDM) systems with quasi-stationary multi-path Rayleigh fading channels. Using the concept of a difference co-array, the new technique can construct M^2 co-pilots from M physical pilot tones with alternating pilot placement. Subspace methods, such as MUSIC and ESPRIT, can be used to estimate the multipath delays, and the number of identifiable paths is up to O(M^2), theoretically. With the delay information, an MMSE estimator for the frequency response is derived. It is shown through simulations that the proposed method outperforms the conventional subspace channel estimator when the number of multipaths is greater than or equal to the number of physical pilots minus one.
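The co-pilot construction rests on the difference co-array of the pilot positions: M physical pilots generate on the order of M^2 distinct lag differences. The toy sketch below computes such a co-array; the pilot pattern is made up and is not the alternating placement proposed in the thesis.

```python
import numpy as np

def difference_coarray(pilot_tones):
    """All pairwise differences of pilot tone indices (the available 'co-pilot' lags)."""
    p = np.asarray(pilot_tones)
    return np.unique(p[:, None] - p[None, :])

pilots = np.array([0, 1, 4, 9, 15, 22])          # illustrative pilot tone indices
lags = difference_coarray(pilots)
print(f"{len(pilots)} physical pilots -> {len(lags)} distinct co-array lags")
```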

Relevance:

30.00%

Publisher:

Abstract:

Time, risk, and attention are all integral to economic decision making. The aim of this work is to understand those key components of decision making using a variety of approaches: providing axiomatic characterizations to investigate time discounting, generating measures of visual attention to infer consumers' intentions, and examining data from unique field settings.

Chapter 2, co-authored with Federico Echenique and Kota Saito, presents the first revealed-preference characterizations of the exponentially discounted utility model and its generalizations. My characterizations provide non-parametric revealed-preference tests. I apply the tests to data from a recent experiment, and find that the axiomatization delivers new insights into a dataset that had been analyzed by traditional parametric methods.

Chapter 3, co-authored with Min Jeong Kang and Colin Camerer, investigates whether "pre-choice" measures of visual attention improve the prediction of consumers' purchase intentions. We measure participants' visual attention using eyetracking or mousetracking while they make hypothetical as well as real purchase decisions. I find that different patterns of visual attention are associated with hypothetical and real decisions. I then demonstrate that including information on visual attention improves prediction of purchase decisions when attention is measured with mousetracking.

Chapter 4 investigates individuals' attitudes towards risk in a high-stakes environment using data from a TV game show, Jeopardy!. I first quantify players' subjective beliefs about answering questions correctly. Using those beliefs in estimation, I find that the representative player is risk averse. I then find that trailing players tend to wager more than "folk" strategies that are known among the community of contestants and fans, and this tendency is related to their confidence. I also find gender differences: male players take more risk than female players, and even more so when they are competing against two other male players.

Chapter 5, co-authored with Colin Camerer, investigates the dynamics of the favorite-longshot bias (FLB) using data on horse race betting from an online exchange that allows bettors to trade "in-play." I find that probabilistic forecasts implied by market prices before the start of the races are well calibrated, but the degree of FLB increases significantly as the events approach the end.

Relevance:

30.00%

Publisher:

Abstract:

The pressure oscillation within combustion chambers of aeroengines and industrial gas turbines is a major technical challenge to the development of high-performance and low-emission propulsion systems. In this paper, an approach integrating computational fluid dynamics and one-dimensional linear stability analysis is developed to predict the modes of oscillation in a combustor and their frequencies and growth rates. Linear acoustic theory was used to describe the acoustic waves propagating upstream and downstream of the combustion zone, which enables the computational fluid dynamics calculation to be efficiently concentrated on the combustion zone. A combustion oscillation was found to occur with its predicted frequency in agreement with experimental measurements. Furthermore, results from the computational fluid dynamics calculation provide the flame transfer function to describe unsteady heat release rate. Departures from ideal one-dimensional flows are described by shape factors. Combined with this information, low-order models can work out the possible oscillation modes and their initial growth rates. The approach developed here can be used in more general situations for the analysis of combustion oscillations. Copyright © 2012 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

A major research program was carried out to analyze the mechanism of FRP debonding from concrete beams using a global-energy-balance approach (GEBA). The key findings are that the fracture process zone is small so there is no R-curve to consider, failure is dominated by Mode I behavior, and the theory agrees well with tests. The analyses developed in the study provide an essential tool that will enable fracture mechanics to be used to determine the load at which FRP plates debond from concrete beams. This obviates the need for finite element (FE) analyses in situations where reliable details of the interface geometry and crack-tip stress fields are not attainable for an accurate analysis. This paper presents an overview of the GEBA analyses, which are described in detail elsewhere, and explains the slightly unconventional assumptions made in the analyses, such as the revised moment-curvature model, the location of an effective centroid, the separate consideration of the FRP and the RC beam for the purposes of the analysis, the use of Mode I fracture energies and the absence of an R-curve in the fracture mechanics analysis.