984 results for parallel modeling
Abstract:
We develop an alternate characterization of the statistical distribution of the inter-cell interference power observed in the uplink of CDMA systems. We show that the lognormal distribution better matches the cumulative distribution and complementary cumulative distribution functions of the uplink interference than the conventionally assumed Gaussian distribution and variants based on it. This is in spite of the fact that many users together contribute to uplink interference, with the number of users and their locations both being random. Our observations hold even in the presence of power control and cell selection, which have hitherto been used to justify the Gaussian distribution approximation. The parameters of the lognormal are obtained by matching moments, for which detailed analytical expressions that incorporate wireless propagation, cellular layout, power control, and cell selection parameters are developed. The moment-matched lognormal model, while not perfect, is an order of magnitude better in modeling the interference power distribution.
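As a concrete illustration of the moment-matching step described above, the following sketch (in Python) inverts the standard lognormal moment formulas. The interference mean and variance used here are hypothetical placeholders standing in for the paper's analytical expressions.

import numpy as np

def lognormal_from_moments(mean, var):
    """Invert E[I] = exp(mu + s2/2) and Var[I] = (exp(s2) - 1) * exp(2*mu + s2)
    to obtain the lognormal parameters matching the given moments."""
    sigma2 = np.log(1.0 + var / mean**2)
    mu = np.log(mean) - 0.5 * sigma2
    return mu, sigma2

# Hypothetical interference moments (linear power units); in the paper these
# follow from propagation, cellular layout, power control, and cell selection.
mu, sigma2 = lognormal_from_moments(mean=2.5e-13, var=1.1e-26)
print(mu, np.sqrt(sigma2))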
Abstract:
A detailed mechanics-based model is developed to analyze the problem of structural instability in slender aerospace vehicles. Coupling among the rigid-body modes, the longitudinal vibrational modes, and the transverse vibrational modes due to an asymmetric lifting-body cross-section is considered. The model also incorporates the effects of aerodynamic pressure and the propulsive thrust of the vehicle. The model is one-dimensional, and it can be employed to idealize slender vehicles with complex shapes. The condition under which a flexible body with internal stress waves behaves like a perfectly rigid body is derived. Two methods are developed for finite element discretization of the system: (1) a time-frequency Fourier spectral finite element method and (2) an h-p finite element method. Numerical results using the above methods are presented in Part II of this paper.
Abstract:
One of the most fundamental and widely accepted ideas in finance is that investors are compensated through higher returns for taking on non-diversifiable risk. Hence the quantification, modeling, and prediction of risk have been, and still are, one of the most prolific research areas in financial economics. It was recognized early on that there are predictable patterns in the variance of speculative prices. Later research has shown that there may also be systematic variation in the skewness and kurtosis of financial returns. Lacking in the literature so far is an out-of-sample forecast evaluation of the potential benefits of these new, more complicated models with time-varying higher moments. Such an evaluation is the topic of this dissertation. Essay 1 investigates the forecast performance of the GARCH(1,1) model when estimated with nine different error distributions on Standard and Poor's 500 index futures returns. By utilizing the theory of realized variance to construct an appropriate ex post measure of variance from intra-day data, it is shown that allowing for a leptokurtic error distribution leads to significant improvements in variance forecasts compared to using the normal distribution. This result holds for daily, weekly, and monthly forecast horizons. It is also found that allowing for skewness and time variation in the higher moments of the distribution does not further improve forecasts. In Essay 2, using 20 years of daily Standard and Poor's 500 index returns, it is found that density forecasts are much improved by allowing for constant excess kurtosis but not by allowing for skewness. Allowing the kurtosis and skewness to be time-varying does not further improve the density forecasts but, on the contrary, makes them slightly worse. In Essay 3, a new model incorporating conditional variance, skewness, and kurtosis based on the Normal Inverse Gaussian (NIG) distribution is proposed. The new model and two previously used NIG models are evaluated by their Value at Risk (VaR) forecasts on a long series of daily Standard and Poor's 500 returns. The results show that only the new model produces satisfactory VaR forecasts at both the 1% and 5% levels. Taken together, the results of the thesis show that kurtosis appears not to exhibit predictable time variation, whereas some predictability is found in the skewness. However, the dynamic properties of the skewness are not completely captured by any of the models.
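To make the variance-forecasting setup of Essay 1 concrete, here is a minimal sketch of the GARCH(1,1) recursion; the choice of error distribution only changes the likelihood used to estimate the parameters, which are hypothetical values below rather than estimates from the thesis.

import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Filter the GARCH(1,1) conditional variance:
    sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]."""
    sigma2 = np.empty(len(returns) + 1)
    sigma2[0] = returns.var()  # a common initialization choice
    for t in range(1, len(returns) + 1):
        sigma2[t] = omega + alpha * returns[t - 1]**2 + beta * sigma2[t - 1]
    return sigma2  # sigma2[-1] is the one-step-ahead variance forecast

# Simulated leptokurtic returns and hypothetical parameters, for illustration.
r = 0.01 * np.random.default_rng(0).standard_t(df=5, size=1000)
print(garch11_variance(r, omega=1e-6, alpha=0.08, beta=0.90)[-1])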
Abstract:
Modeling and forecasting of implied volatility (IV) is important to both practitioners and academics, especially in trading, pricing, hedging, and risk management activities, all of which require accurate volatility estimates. This has become challenging since the 1987 stock market crash, as implied volatilities (IVs) recovered from stock index options present two patterns: the volatility smirk (skew) and the volatility term structure, which, examined together, form a rich implied volatility surface (IVS). This implies that the assumptions behind the Black-Scholes (1973) model do not hold empirically, as asset prices are influenced by many underlying risk factors. This thesis, consisting of four essays, models and forecasts implied volatility in the presence of options markets' empirical regularities. The first essay models the dynamics of the IVS, extending the Dumas, Fleming and Whaley (DFW) (1998) framework: using moneyness in the implied forward price and OTM put-call options on the FTSE100 index, nonlinear optimization is used to estimate different models and thereby produce rich, smooth IVSs. Here, the constant-volatility model fails to explain the variations in the rich IVS. Next, it is found that three factors can explain about 69-88% of the variance in the IVS. Of this, on average, 56% is explained by the level factor, 15% by the term-structure factor, and a further 7% by the jump-fear factor. The second essay proposes a quantile regression model for the contemporaneous asymmetric return-volatility relationship, generalizing the Hibbert et al. (2008) model. The results show a strong negative asymmetric return-volatility relationship at various quantiles of the IV distributions, which increases monotonically when moving from the median quantile to the uppermost quantile (i.e., 95%); OLS therefore underestimates this relationship at upper quantiles. Additionally, the asymmetric relationship is more pronounced with the smirk (skew) adjusted volatility index measure than with the old volatility index measure. The volatility indices are ranked in terms of asymmetric volatility as follows: VIX, VSTOXX, VDAX, and VXN. The third essay examines the information content of the new-VDAX volatility index for forecasting daily Value-at-Risk (VaR) estimates and compares its VaR forecasts with those of Filtered Historical Simulation and RiskMetrics. All daily VaR models are then backtested from 1992 to 2009 using unconditional, independence, conditional coverage, and quadratic-score tests. It is found that the VDAX subsumes almost all the information required for daily VaR forecasts for a portfolio of the DAX30 index; implied-VaR models outperform all other VaR models. The fourth essay models the risk factors driving swaption IVs. It is found that three factors can explain 94-97% of the variation in each of the EUR, USD, and GBP swaption IVs. There are significant linkages across factors, and bi-directional causality is at work between the factors implied by EUR and USD swaption IVs. Furthermore, the factors implied by EUR and USD IVs respond to each other's shocks; surprisingly, however, GBP does not affect them. Second, the string market model calibration results show that it can efficiently reproduce (or forecast) the volatility surface for each of the swaption markets.
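The quantile-regression approach of the second essay can be sketched as follows. The specification below, which splits returns into positive and negative components so the downside coefficient can differ, is an assumed illustrative form estimated with statsmodels' QuantReg on simulated data, not the exact Hibbert et al. (2008) specification.

import numpy as np
import statsmodels.api as sm

# Simulated daily returns and same-day IV-index changes, with a stronger
# downside (asymmetric) relationship built into the data generation.
rng = np.random.default_rng(1)
ret = rng.normal(0.0, 0.01, 2000)
d_iv = -2.0 * np.maximum(ret, 0.0) - 5.0 * np.minimum(ret, 0.0) \
       + rng.normal(0.0, 0.01, 2000)

X = sm.add_constant(np.column_stack([np.maximum(ret, 0.0),
                                     np.minimum(ret, 0.0)]))
for q in (0.50, 0.75, 0.95):
    res = sm.QuantReg(d_iv, X).fit(q=q)
    print(q, res.params)  # const, upside slope, downside slope per quantile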
Abstract:
In this thesis we deal with the concept of risk. The objective is to bring together, and draw conclusions from, some normative information regarding quantitative portfolio management and risk assessment. The first essay concentrates on return dependency. We propose an algorithm for classifying markets into rising and falling. Given the algorithm, we derive a statistic, the Trend Switch Probability, for detecting long-term return dependency in the first moment. The empirical results suggest that the Trend Switch Probability is robust over various volatility specifications. Serial dependency behaves differently in bear and bull markets, however: it is strongly positive in rising markets, whereas in bear markets returns are closer to a random walk. Realized volatility, a technique for estimating volatility from high-frequency data, is investigated in essays two and three. In the second essay we find, when measuring realized variance on a set of German stocks, that the second-moment dependency structure is highly unstable and changes randomly. The results also suggest that volatility is at times non-stationary. In the third essay we examine the impact of market microstructure on the error between estimated realized volatility and the volatility of the underlying process. With simulation-based techniques we show that autocorrelation in returns leads to biased variance estimates and that lower sampling frequency and non-constant volatility increase the error variation between the estimated variance and the variance of the underlying process. From these essays we conclude that volatility is not easily estimated, even from high-frequency data; it is not well behaved in terms of either stability or dependency over time. Based on these observations, we recommend the use of simple, transparent methods that are likely to be more robust over differing volatility regimes than models with a complex parameter universe. In analyzing long-term return dependency in the first moment, we find that the Trend Switch Probability is a robust estimator. This is an interesting area for further research, with important implications for active asset allocation.
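For reference, the realized volatility technique examined in essays two and three amounts to summing squared intraday log returns; a minimal sketch, assuming one trading day's high-frequency price series as input:

import numpy as np

def realized_variance(prices):
    """Realized variance: sum of squared intraday log returns.
    Microstructure effects (e.g., autocorrelated returns) bias this
    estimate, which is the error studied in the third essay."""
    r = np.diff(np.log(np.asarray(prices, dtype=float)))
    return np.sum(r**2)

print(realized_variance([100.0, 100.2, 99.9, 100.1, 100.4]))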
Abstract:
Financial time series tend to behave in a manner that is not well described by a normal distribution. Asymmetries and nonlinearities are usually present, and these characteristics need to be taken into account, which makes forecasting future return and risk rather complicated. The existing models for predicting risk help to a certain degree, but the complexity of financial time series data limits them. The essays in this dissertation support the introduction of nonlinearities and asymmetries for the purpose of better models and forecasts of both mean and variance. Linear and nonlinear models are consequently introduced. The advantage of nonlinear models is that they can account for asymmetries. Asymmetric patterns usually mean that large negative returns appear more often than positive returns of the same magnitude, which goes hand in hand with the fact that negative returns are associated with higher risk than positive returns of the same magnitude. These models matter because of their ability to produce the best possible estimates and predictions of future returns and risk.
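The abstract does not name a specific model, but one standard example of the asymmetric variance models referred to here is the GJR-GARCH recursion, sketched below: negative shocks carry an extra term, so downside moves raise next-period variance more than upside moves of the same size.

import numpy as np

def gjr_garch11_variance(returns, omega, alpha, gamma, beta):
    """GJR-GARCH(1,1): sigma2[t] = omega + (alpha + gamma * 1[r<0]) * r**2
    + beta * sigma2[t-1]; parameters here are purely hypothetical."""
    sigma2 = np.empty(len(returns) + 1)
    sigma2[0] = returns.var()  # a common initialization choice
    for t in range(1, len(returns) + 1):
        r = returns[t - 1]
        sigma2[t] = omega + (alpha + gamma * (r < 0.0)) * r**2 \
                    + beta * sigma2[t - 1]
    return sigma2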
Abstract:
This paper presents a study of kinematic and force singularities in parallel manipulators and closed-loop mechanisms and their relationship to the accessibility and controllability of such manipulators and mechanisms. Parallel manipulators and closed-loop mechanisms are classified according to their degrees of freedom, the number of output Cartesian variables used to describe their motion, and the number of actuated joint inputs. The singularities in the workspace are obtained by considering the force transformation matrix, which maps the forces and torques in joint space to output forces and torques in Cartesian space. The regions in the workspace which violate the small-time local controllability (STLC) and small-time local accessibility (STLA) conditions are obtained by deriving the equations of motion in terms of Cartesian variables and by using techniques from Lie algebra. We show that for fully actuated manipulators, when the number of actuated joint inputs is equal to the number of output Cartesian variables and the force transformation matrix loses rank, the parallel manipulator does not meet the STLC requirement. For the case where the number of joint inputs is less than the number of output Cartesian variables, if the constraint forces and torques (represented by the Lagrange multipliers) become infinite, the force transformation matrix loses rank. Finally, we show that the singular and non-STLC regions in the workspace of a parallel manipulator or closed-loop mechanism can be reduced by adding redundant joint actuators and links. The results are illustrated with the help of numerical examples in which we plot the singular and non-STLC/non-STLA regions of parallel manipulators and closed-loop mechanisms belonging to the above-mentioned classes.
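A force singularity of the kind analyzed above can be detected numerically as a rank drop of the force transformation matrix; the sketch below uses a hypothetical 2x2 matrix at a singular configuration rather than one of the paper's worked examples.

import numpy as np

def loses_rank(force_matrix, tol=1e-9):
    """True when the force transformation matrix (mapping joint-space
    forces/torques to Cartesian forces/torques) drops rank, which marks
    a force singularity and, per the paper, loss of STLC in the fully
    actuated case."""
    m = np.asarray(force_matrix, dtype=float)
    return np.linalg.matrix_rank(m, tol=tol) < min(m.shape)

# Hypothetical configuration where the two columns become parallel.
J_F = np.array([[np.cos(0.3), np.cos(0.3)],
                [np.sin(0.3), np.sin(0.3)]])
print(loses_rank(J_F))  # True: the matrix has rank 1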
Abstract:
The objective of this paper is to investigate and model the characteristics of the prevailing volatility smiles and surfaces on the DAX- and ESX-index options markets. Continuing the trend of Implied Volatility Functions, the Standardized Log-Moneyness model is introduced and fitted to historical data. The model replaces the constant volatility parameter of the Black & Scholes pricing model with a matrix of volatilities with respect to moneyness and maturity and is tested out-of-sample. Considering the dynamics, the results support the hypotheses put forward in this study, implying that the smile increases in magnitude as maturity and ATM volatility decrease, and that a change in the underlying asset correlates negatively, and a change in time to maturity positively, with implied ATM volatility. Further, the Standardized Log-Moneyness model improves pricing accuracy compared to previous Implied Volatility Function models, although the parameters of the models must be re-estimated continuously for the models to fully capture the changing dynamics of the volatility smiles.
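The paper's exact standardization is not reproduced here, but a common form of a standardized log-moneyness coordinate is ln(K/F) scaled by ATM volatility and the square root of time to maturity; the sketch below fits a quadratic smile in that assumed coordinate using purely illustrative numbers.

import numpy as np

def standardized_log_moneyness(strike, forward, atm_vol, maturity):
    """An assumed standardization (not taken from the paper): log-moneyness
    in units of ATM volatility over the remaining maturity."""
    return np.log(strike / forward) / (atm_vol * np.sqrt(maturity))

strikes = np.array([90.0, 95.0, 100.0, 105.0, 110.0])
ivs = np.array([0.260, 0.230, 0.200, 0.190, 0.185])  # illustrative smile
x = standardized_log_moneyness(strikes, forward=100.0,
                               atm_vol=0.20, maturity=0.25)
curvature, skew_slope, level = np.polyfit(x, ivs, 2)
print(level, skew_slope, curvature)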
Abstract:
The present work concerns the static scheduling of jobs on parallel identical batch processors with incompatible job families to minimize the total weighted tardiness. This scheduling problem arises in burn-in operations and wafer fabrication in semiconductor manufacturing. We decompose the problem into two stages, batch formation and batch scheduling, as in the literature. An Ant Colony Optimization (ACO) based algorithm, called ATC-BACO, is developed, in which ACO is used to solve the batch scheduling problem. Our computational experiments show that the proposed ATC-BACO algorithm outperforms the best available traditional dispatching rule, the ATC-BATC rule.
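For context, the ATC component shared by the ATC-BATC rule and the proposed ATC-BACO algorithm is the Apparent Tardiness Cost priority index, sketched below with hypothetical job data; the batch formation stage and the ant colony search itself are not shown.

import numpy as np

def atc_priority(w, p, d, t, k, p_bar):
    """Apparent Tardiness Cost index:
    I_j(t) = (w_j / p_j) * exp(-max(d_j - p_j - t, 0) / (k * p_bar)).
    w: tardiness weights, p: processing times, d: due dates,
    t: current time, k: look-ahead parameter, p_bar: mean processing time."""
    slack = np.maximum(d - p - t, 0.0)
    return (w / p) * np.exp(-slack / (k * p_bar))

# Hypothetical jobs; the highest-priority job would be scheduled next.
w = np.array([1.0, 2.0, 1.5]); p = np.array([4.0, 5.0, 3.0])
d = np.array([10.0, 12.0, 8.0])
idx = atc_priority(w, p, d, t=0.0, k=2.0, p_bar=p.mean())
print(idx.argmax())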
Abstract:
The notion of optimization is inherent in protein design. A long linear chain of twenty types of amino acid residues is known to fold to a 3-D conformation that minimizes the combined inter-residue energy interactions. There are two distinct protein design problems, viz. predicting the folded structure from a given sequence of amino acid monomers (the folding problem) and determining a sequence for a given folded structure (the inverse folding problem). These two problems are closely analogous to engineering structural analysis and structural optimization problems, respectively. In the folding problem, a protein chain with a given sequence folds to a conformation, called a native state, which has a unique global minimum energy value when compared to all other unfolded conformations. This involves a search in the conformation space. It is somewhat akin to the principle of minimum potential energy that determines the deformed static equilibrium configuration of an elastic structure of given topology, shape, and size that is subjected to certain boundary conditions. In the inverse folding problem, one has to design a sequence with some objectives (having a specific feature of the folded structure, docking with another protein, etc.) and constraints (the sequence being fixed in some portion, a particular composition of amino acid types, etc.) while obtaining a sequence that would fold to the desired conformation satisfying the criteria of folding. This requires a search in the sequence space. It is similar to structural optimization in the design-variable space, wherein a certain feature of the structural response is optimized subject to some constraints while satisfying the governing static or dynamic equilibrium equations. Based on this similarity, in this work we apply topology optimization methods to protein design, discuss modeling issues, and present some initial results.
Abstract:
Single-tract guanine residues can associate to form stable parallel quadruplex structures in the presence of certain cations. Nanosecond-scale molecular dynamics simulations have been performed on fully solvated fibre models of parallel d(G₇) quadruplex structures with Na+ or K+ ions coordinated in the cavity formed by the O6 atoms of the guanine bases. The AMBER 4.1 force field and the Particle Mesh Ewald technique for electrostatic interactions have been used in all simulations. These quadruplex structures are stable during the simulation, with the middle four base tetrads showing root mean square deviation values between 0.5 and 0.8 Angstrom from the initial structure as well as from the high-resolution crystal structure. Even in the absence of any coordinated ion in the initial structure, the G-quadruplex structure remains intact throughout the simulation. During the 1.1 ns MD simulation, one Na+ counter-ion from the solvent as well as several water molecules enter the central cavity to occupy the empty coordination sites within the parallel quadruplex and help stabilize the structure. The hydrogen bonding pattern depends on the nature of the coordinated ion, with the G-tetrad undergoing local structural variation to accommodate cations of different sizes. In the absence of any coordinated ion, strong mutual repulsion forces the O6 atoms within a G-tetrad farther apart from each other, which leads to a considerably different hydrogen bonding scheme within the G-tetrads and a very favourable interaction energy between the guanine bases constituting a G-tetrad. However, a coordinated ion between G-tetrads provides extra stacking energy for the G-tetrads and makes the quadruplex structure more rigid. Na+ ions within the quadruplex cavity are more mobile than coordinated K+ ions. A number of hydrogen-bonded water molecules are observed within the grooves of all quadruplex structures.
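The root mean square deviation figures quoted above come from the standard RMSD metric; a minimal sketch, assuming the two coordinate sets are already superimposed (production MD analyses normally least-squares fit the frames first):

import numpy as np

def rmsd(coords_a, coords_b):
    """RMSD between two (N, 3) coordinate arrays in the same frame."""
    diff = np.asarray(coords_a) - np.asarray(coords_b)
    return np.sqrt((diff**2).sum(axis=1).mean())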
Abstract:
The physical design of a VLSI circuit involves circuit partitioning as a subtask. Typically, it is necessary to partition a large electrical circuit into several smaller circuits such that the total cross-wiring is minimized. This problem is a variant of the more general graph partitioning problem, and it is known that there does not exist a polynomial-time algorithm to obtain an optimal partition. The heuristic procedure proposed by Kernighan and Lin [1,2] requires O(n² log₂ n) time to obtain a near-optimal two-way partition of a circuit with n modules. In the VLSI context, due to the large problem sizes involved, this computational requirement is unacceptably high. This paper is concerned with the hardware acceleration of the Kernighan-Lin procedure on an SIMD architecture. The proposed parallel partitioning algorithm requires O(n) processors and has a time complexity of O(n log₂ n). In the proposed scheme, the reduced array architecture is employed with due consideration towards cost-effectiveness and VLSI realizability of the architecture. The authors are not aware of any earlier attempts to parallelize a circuit partitioning algorithm in general or the Kernighan-Lin algorithm in particular. The use of the reduced array architecture is novel and opens up the possibility of using this computing structure for several other applications in electronic design automation.
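The quantity at the heart of the Kernighan-Lin procedure, and hence of its parallelization, is the gain from swapping a pair of modules between the two blocks; a minimal sketch follows, assuming a symmetric connectivity matrix with zero diagonal. Evaluating many such gains concurrently is the kind of work the paper maps onto the O(n)-processor reduced array architecture.

import numpy as np

def kl_swap_gain(C, part, a, b):
    """Kernighan-Lin gain of swapping modules a and b between blocks.
    C: symmetric connectivity matrix with zero diagonal; part[i] in {0, 1}.
    D_i = external cost - internal cost; gain = D_a + D_b - 2 * C[a, b]."""
    ext_a = part != part[a]
    ext_b = part != part[b]
    D_a = C[a][ext_a].sum() - C[a][~ext_a].sum()
    D_b = C[b][ext_b].sum() - C[b][~ext_b].sum()
    return D_a + D_b - 2.0 * C[a, b]

# Tiny example: blocks {0, 1} vs {2, 3}; swapping 0 and 2 lowers the cut
# cost from 3 to 2, so the computed gain is 1.0.
C = np.array([[0, 1, 0, 3],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [3, 0, 1, 0]], dtype=float)
part = np.array([0, 0, 1, 1])
print(kl_swap_gain(C, part, a=0, b=2))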
Abstract:
The copper(II) complex [Cu(salgly)(bpy)]·4H₂O (1), where salgly is a tridentate glycinatosalicylaldimine Schiff base ligand, is prepared and structurally characterized. The complex is found to be catalytically active in the oxidation of ascorbic acid by dioxygen, and the process is also effective in the presence of benzylamine, giving benzaldehyde as a product, thus modeling the activity of the Cu-B site of dopamine beta-hydroxylase.