454 results for common stochastic component

in Queensland University of Technology - ePrints Archive


Relevance: 80.00%

Abstract:

Forecasts of volatility and correlation are important inputs into many practical financial problems. Broadly speaking, there are two ways of generating forecasts of these variables. First, time-series models apply a statistical weighting scheme to historical measurements of the variable of interest. Alternatively, forecasts can be extracted from the market-traded value of option contracts. An efficient options market should produce superior forecasts, as it draws on a larger information set: not only historical information but also the equilibrium expectations of options market participants. While much research has been conducted into the relative merits of these approaches, this thesis extends the literature through three empirical studies. First, it is demonstrated that adjusting implied volatility for the volatility risk premium yields statistically significant benefits for univariate volatility forecasting. Second, high-frequency option-implied measures are shown to lead to superior forecasts of the stochastic component of intraday volatility, which in turn lead to superior forecasts of total intraday volatility. Finally, realised and option-implied measures of equicorrelation are shown to dominate measures based on daily returns.
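To illustrate the first of the two forecasting routes, the "statistical weighting scheme" applied to historical data, here is a minimal sketch of a RiskMetrics-style EWMA variance forecast. The return series and the decay parameter are made up for illustration; the thesis itself does not specify this particular scheme.

```python
import math

def ewma_volatility(returns, lam=0.94):
    """RiskMetrics-style EWMA variance forecast: an example of a
    statistical weighting scheme applied to historical returns, where
    recent squared returns receive geometrically larger weights."""
    var = returns[0] ** 2  # seed the recursion with the first squared return
    for r in returns[1:]:
        var = lam * var + (1 - lam) * r ** 2
    return math.sqrt(var)

# toy daily returns (hypothetical)
rets = [0.01, -0.02, 0.015, -0.005, 0.0]
print(round(ewma_volatility(rets), 6))
```

An option-implied forecast, by contrast, would read volatility directly from traded option prices rather than from this recursion.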

Relevance: 30.00%

Abstract:

Reliable budget/cost estimates for road maintenance and rehabilitation are subject to uncertainty and variability in road asset condition and road-user characteristics. The CRC CI research project 2003-029-C ‘Maintenance Cost Prediction for Road’ developed a method for assessing variation and reliability in budget/cost estimates for road maintenance and rehabilitation, based on probability-based reliability theory and statistical methods. The next stage of the project is to apply this method to predicting maintenance/rehabilitation budgets of large networks for strategic investment, and the first task is to assess the variability of road data. This report presents initial results of that analysis. A case study for dry non-reactive soil demonstrates the approach to analysing the variability of road data for large road networks. To assess variability, large road networks were divided into categories with common characteristics according to soil and climatic conditions, pavement condition, pavement type, surface type and annual average daily traffic. The probability distributions, means and standard deviations of asset condition and annual average daily traffic were quantified for each category. The probability distributions and statistical information obtained in this analysis will be used at a later stage to assess variation and reliability in budget/cost estimates. Conventionally, the mean values of asset data in each category are used as input values for investment analysis, so the variability of asset data within each category is not taken into account. The analysis demonstrates a practical method that takes the variability of road data into account when analysing large road networks for maintenance/rehabilitation investment.
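The first step described, quantifying means and standard deviations of asset condition per category, can be sketched as follows. The category keys and roughness-style readings below are invented for illustration; the report's actual categories and data are not reproduced here.

```python
import statistics
from collections import defaultdict

# hypothetical condition readings keyed by (soil, surface type) category
records = [
    ("dry_nonreactive", "sealed", 2.1),
    ("dry_nonreactive", "sealed", 2.4),
    ("dry_nonreactive", "unsealed", 4.8),
    ("dry_nonreactive", "unsealed", 5.2),
]

# group readings into categories with common characteristics
groups = defaultdict(list)
for soil, surface, condition in records:
    groups[(soil, surface)].append(condition)

# per-category mean and standard deviation, the inputs a reliability
# analysis would use instead of a single network-wide mean
for key, values in sorted(groups.items()):
    mean = statistics.mean(values)
    sd = statistics.stdev(values) if len(values) > 1 else 0.0
    print(key, round(mean, 2), round(sd, 2))
```

In practice each category would also be assigned a fitted probability distribution, not just these two moments.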

Relevance: 30.00%

Abstract:

Financial processes may possess long memory and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debate. These difficulties present challenges for the problems of memory detection and of modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges. The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA) that can systematically eliminate trends of different orders. This method is based on the identification of scaling of the q-th-order moments and is a generalisation of the standard detrended fluctuation analysis (DFA), which uses only the second moment; that is, q = 2. We also consider the rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with the MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX), while long memory is found in the time series of two exchange rates, namely the French franc and the Deutsche Mark. Electricity price series of the five states of Australia are also found to possess long memory; for these series, heavy tails are also pronounced in their probability densities. The second part of the thesis develops models to represent short-memory and long-memory financial processes as detected in Part I.
These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for models of this type is performed via least squares, and the models are applied to the stock prices in the AMEX, which were established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of a Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. Equations of this type are used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA. The third part of the thesis provides an application of the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX). The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of data sets and then provide cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market.
The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia. Comparisons with the results obtained from the R/S analysis, the periodogram method and MF-DFA are provided. The results from fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on the second-order moment, seem to underestimate the long-memory dynamics of the process. This highlights the need for, and usefulness of, fractal methods in modelling non-Gaussian financial processes with long memory.
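The standard DFA described above (the q = 2 special case of MF-DFA) can be sketched briefly: integrate the centred series into a profile, detrend it in windows of size s, and regress log F(s) on log s to read off the scaling exponent. The code below is a minimal illustration on synthetic white noise, not the thesis's implementation; the scales and series length are arbitrary choices.

```python
import numpy as np

def dfa_exponent(x, scales=(16, 32, 64, 128)):
    """Standard DFA (the q = 2 case of MF-DFA): detrend the integrated
    series in non-overlapping windows of size s with a linear fit, then
    regress log F(s) on log s; the slope estimates the scaling exponent."""
    y = np.cumsum(x - np.mean(x))  # profile (integrated, centred series)
    flucts = []
    for s in scales:
        n_win = len(y) // s
        msq = []
        for i in range(n_win):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            coeffs = np.polyfit(t, seg, 1)        # first-order detrending
            resid = seg - np.polyval(coeffs, t)
            msq.append(np.mean(resid ** 2))
        flucts.append(np.sqrt(np.mean(msq)))      # fluctuation function F(s)
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

rng = np.random.default_rng(0)
h = dfa_exponent(rng.standard_normal(4000))
print(round(h, 2))  # close to 0.5 for uncorrelated noise
```

An exponent near 0.5 indicates no memory; values well above 0.5 would suggest long-range dependence, as found for the exchange rate and electricity price series.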

Relevance: 30.00%

Abstract:

This paper analyzes the common factor structure of US, German, and Japanese Government bond returns. Unlike previous studies, we formally take into account the presence of country-specific factors when estimating common factors. We show that the classical approach of running a principal component analysis on a multi-country dataset of bond returns captures both local and common influences and therefore tends to pick too many factors. We conclude that US bond returns share only one common factor with German and Japanese bond returns. This single common factor is associated most notably with changes in the level of domestic term structures. We show that accounting for country-specific factors improves the performance of domestic and international hedging strategies.
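The central point, that a principal component analysis run naively on a pooled multi-country dataset picks up country-specific factors as extra components, can be reproduced with synthetic data. Everything below (three countries, two maturities each, unit factor variances) is an invented toy setup, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 2000
common = rng.standard_normal(T)                        # one global factor
country = [rng.standard_normal(T) for _ in range(3)]   # one local factor each

# 3 countries x 2 maturities: every series loads on the global factor
# plus its own country's local factor, plus idiosyncratic noise
series = []
for c in range(3):
    for _ in range(2):
        series.append(common + country[c] + 0.3 * rng.standard_normal(T))
X = np.column_stack(series)

eigvals = np.sort(np.linalg.eigvalsh(np.cov(X, rowvar=False)))[::-1]
n_large = int(np.sum(eigvals > 0.5))
print(n_large)  # several large components, although only one factor is global
```

Even though only one factor is shared across countries, the pooled PCA retains several large components, because the country-specific factors generate their own within-country comovement.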

Relevance: 30.00%

Abstract:

This paper presents a robust stochastic model for the incorporation of natural features within data fusion algorithms. The representation combines Isomap, a non-linear manifold learning algorithm, with Expectation Maximization, a statistical learning scheme. The representation is computed offline and results in a non-linear, non-Gaussian likelihood model relating visual observations such as color and texture to the underlying visual states. The likelihood model can be used online to instantiate likelihoods corresponding to observed visual features in real-time. The likelihoods are expressed as a Gaussian Mixture Model so as to permit convenient integration within existing nonlinear filtering algorithms. The resulting compactness of the representation is especially suitable to decentralized sensor networks. Real visual data consisting of natural imagery acquired from an Unmanned Aerial Vehicle is used to demonstrate the versatility of the feature representation.
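The online step described, instantiating a likelihood for an observed feature from an offline-learned Gaussian Mixture Model, reduces to evaluating the mixture density at the observation. The sketch below uses a one-dimensional feature and made-up mixture parameters purely for illustration; the paper's models are fit with Isomap and EM over richer colour and texture features.

```python
import numpy as np

def gmm_likelihood(x, weights, means, variances):
    """Evaluate a 1-D Gaussian mixture density at feature value x.
    In the paper's setting the mixture is learned offline (Isomap + EM)
    and queried online; these parameters are hypothetical."""
    comps = (weights
             * np.exp(-0.5 * (x - means) ** 2 / variances)
             / np.sqrt(2 * np.pi * variances))
    return comps.sum()

# hypothetical 2-component mixture over a normalised colour feature
w = np.array([0.6, 0.4])
mu = np.array([0.2, 0.7])
var = np.array([0.01, 0.04])
lik = gmm_likelihood(0.25, w, mu, var)
print(round(lik, 4))
```

Because the representation stays in mixture form, such likelihoods plug directly into nonlinear filters and remain cheap to transmit in a decentralized sensor network.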

Relevance: 30.00%

Abstract:

Focusing on the conditions that an optimization problem must satisfy, so-called convergence conditions are proposed, and a stochastic optimization algorithm, named the DSZ algorithm, is then presented to deal with both unconstrained and constrained optimization. The principle is discussed in a theoretical model of the DSZ algorithm, from which a practical model is derived. The efficiency of the practical model is demonstrated by comparison with similar algorithms, such as enhanced simulated annealing (ESA), Monte Carlo simulated annealing (MCS), Sniffer Global Optimization (SGO), Directed Tabu Search (DTS), and the genetic algorithm (GA), on a set of well-known unconstrained and constrained optimization test cases. Particular attention is also given to strategies for optimizing high-dimensional unconstrained problems with the DSZ algorithm.
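The abstract does not specify the DSZ update rule, so as a stand-in here is a plain simulated annealing loop, representative of the comparison algorithms (ESA, MCS) in the same stochastic-optimization family. The objective, bounds, step size and cooling schedule are all illustrative choices.

```python
import math
import random

def simulated_annealing(f, x0, lo, hi, iters=20000, t0=1.0, seed=42):
    """Plain simulated annealing on a box-constrained 1-D problem.
    This illustrates the family of stochastic optimizers DSZ is compared
    against; it is not the DSZ algorithm itself."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(1, iters + 1):
        t = t0 / k                                   # simple cooling schedule
        cand = min(hi, max(lo, x + rng.gauss(0, 0.5)))  # clipped Gaussian step
        fc = f(cand)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

x_best, f_best = simulated_annealing(lambda x: (x - 2.0) ** 2,
                                     x0=-4.0, lo=-5.0, hi=5.0)
print(round(x_best, 2), round(f_best, 4))
```

The box clipping in the candidate step is the simplest way to honour the constraints; constrained test cases in the paper would need a proper penalty or repair scheme.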

Relevance: 30.00%

Abstract:

The technique of femoral cement-in-cement revision is well established, but there are no previous series reporting its use on the acetabular side at the time of revision total hip arthroplasty. We describe the surgical technique and report the outcome of 60 consecutive cement-in-cement revisions of the acetabular component at a mean follow-up of 8.5 years (range 5-12 years). All had a radiologically and clinically well-fixed acetabular cement mantle at the time of revision. 29 patients died. No case was lost to follow-up. The 2 most common indications for acetabular revision were recurrent dislocation (77%) and to complement a femoral revision (20%). There were 2 cases of aseptic cup loosening (3.3%) requiring re-revision. No other hip was clinically or radiologically loose (96.7%) at latest follow-up. One case was re-revised for infection, 4 for recurrent dislocation and 1 for disarticulation of a constrained component. At 5 years, the Kaplan-Meier survival rate was 100% for aseptic loosening and 92.2% (95% CI: 84.8-99.6%) with revision for all causes as the endpoint. These results support the use of the cement-in-cement revision technique in appropriate cases on the acetabular side. Theoretical advantages include preservation of bone stock, reduced operating time, reduced risk of complications and durable fixation.

Relevance: 30.00%

Abstract:

The pioneering work of Runge and Kutta a hundred years ago has ultimately led to suites of sophisticated numerical methods suitable for solving complex systems of deterministic ordinary differential equations. However, in many modelling situations the appropriate representation is a stochastic differential equation, and here numerical methods are much less sophisticated. In this paper a very general class of stochastic Runge-Kutta methods is presented, and classes of explicit methods are constructed that are much more efficient than previously known methods. In particular, a method of strong order 2 with a deterministic component based on the classical Runge-Kutta method is constructed, and some numerical results are presented to demonstrate the efficacy of this approach.
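For context, the simplest member of this family is the Euler-Maruyama scheme, which has strong order only 0.5; the order-2 methods constructed in the paper refine this idea with additional stages. The sketch below applies Euler-Maruyama to geometric Brownian motion (which has a known exact solution) and checks, on a shared Brownian path, that refining the step reduces the strong error. Parameters and step sizes are illustrative.

```python
import numpy as np

def euler_maruyama(mu, sigma, x0, dt, dW):
    """Euler-Maruyama for dX = mu*X dt + sigma*X dW: the simplest
    stochastic one-step scheme, strong order 0.5."""
    x = x0
    for inc in dW:
        x += mu * x * dt + sigma * x * inc
    return x

rng = np.random.default_rng(3)
mu, sigma, x0, T = 0.05, 0.2, 1.0, 1.0
n_fine, n_coarse, n_paths = 256, 16, 200
errs = {n_fine: [], n_coarse: []}
for _ in range(n_paths):
    dW = rng.normal(0.0, np.sqrt(T / n_fine), n_fine)
    # exact GBM endpoint driven by the same Brownian motion
    exact = x0 * np.exp((mu - 0.5 * sigma ** 2) * T + sigma * dW.sum())
    for m in (n_fine, n_coarse):
        # coarse increments are sums of fine ones, so both grids share one path
        inc = dW.reshape(m, n_fine // m).sum(axis=1)
        errs[m].append(abs(euler_maruyama(mu, sigma, x0, T / m, inc) - exact))
mean_err = {m: float(np.mean(e)) for m, e in errs.items()}
print(mean_err[n_fine] < mean_err[n_coarse])  # expect True: finer step, smaller error
```

A strong order-2 method would shrink this error roughly in proportion to dt squared, rather than the square root of dt achieved here.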

Relevance: 30.00%

Abstract:

Migraine is a prevalent neurovascular disease with a significant genetic component. Linkage studies have so far identified migraine susceptibility loci on chromosomes 1, 4, 6, 11, 14, 19 and X. We performed a genome-wide scan of 92 Australian pedigrees phenotyped for migraine with and without aura and for a more heritable form of “severe” migraine. Multipoint non-parametric linkage analysis revealed suggestive linkage for the severe migraine phenotype on chromosome 18p11 (LOD*=2.32, P=0.0006) and chromosome 3q (LOD*=2.28, P=0.0006). Excess allele sharing was also observed at multiple different chromosomal regions, some of which overlap with, or are directly adjacent to, previously implicated migraine susceptibility regions. We have provided evidence for two loci involved in severe migraine susceptibility and conclude that dissection of the “migraine” phenotype may be helpful for identifying susceptibility genes that influence the more heritable clinical (symptom) profiles in affected pedigrees. We also conclude that the genetic aetiology of the common (International Headache Society) forms of the disease probably comprises a number of low- to moderate-effect susceptibility genes, perhaps acting synergistically, an effect not easily detected by traditional single-locus linkage analyses of large samples of affected pedigrees.

Relevance: 30.00%

Abstract:

Migraine is a common complex disorder that shows strong familial aggregation. There is a general increased prevalence of migraine in females compared with males, with recent studies indicating that migraine affects 18% of females compared with 6% of males. This preponderance of females among migraine sufferers coupled with evidence of an increased risk of migraine in first degree relatives of male probands but not in relatives of female probands suggests the possibility of an X-linked dominant gene. We report here the localization of a typical migraine susceptibility locus to the X chromosome. Of three large multigenerational migraine pedigrees two families showed significant excess allele sharing to Xq markers (P = 0.031 and P = 0.012). Overall analysis of data from all three pedigrees gave significant evidence in support of linkage and heterogeneity (HLOD = 3.1). These findings provide conclusive evidence that familial typical migraine is a heterogeneous disorder. We suggest that the localization of a migraine susceptibility locus to the X chromosome could in part explain the increased risk of migraine in relatives of male probands and may be involved in the increased female prevalence of this disorder.

Relevance: 30.00%

Abstract:

A spatial process observed over a lattice or a set of irregular regions is usually modeled using a conditionally autoregressive (CAR) model. The neighborhoods within a CAR model are generally formed deterministically using the inter-distances or boundaries between the regions. An extension of the CAR model is proposed in this article in which the selection of the neighborhood depends on unknown parameter(s). This extension is called a Stochastic Neighborhood CAR (SNCAR) model. The resulting model is flexible enough to accurately estimate covariance structures for data generated from a variety of spatial covariance models. Specific examples are illustrated using data generated from some common spatial covariance functions, as well as real data concerning radioactive contamination of the soil in Switzerland after the Chernobyl accident.
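To make the idea concrete, here is a sketch of a CAR precision matrix whose neighborhood set is determined by a distance threshold. Treating that threshold as a free parameter to be estimated is the gist of the SNCAR extension; in this toy code it is simply an input, and the 2x2 grid of sites is invented.

```python
import numpy as np

def car_precision(coords, rho, radius):
    """Precision matrix of a proper CAR model where site j is a neighbor
    of site i iff their distance is <= radius. In the SNCAR extension the
    radius would be an unknown parameter; here it is fixed by hand."""
    n = len(coords)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and np.linalg.norm(coords[i] - coords[j]) <= radius:
                W[i, j] = 1.0
    D = np.diag(W.sum(axis=1))          # neighbor counts on the diagonal
    return D - rho * W                  # positive definite for |rho| < 1

coords = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Q1 = car_precision(coords, rho=0.5, radius=1.0)   # rook neighbors only
Q2 = car_precision(coords, rho=0.5, radius=1.5)   # also includes diagonals
print(int(Q1[0, 3] != 0), int(Q2[0, 3] != 0))
```

Changing the threshold changes which entries of the precision matrix are nonzero, i.e. the conditional independence structure, which is exactly what the stochastic neighborhood lets the data decide.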

Relevance: 30.00%

Abstract:

Understanding the aetiology of patterns of variation within and covariation across brain regions is key to advancing our understanding of the functional, anatomical and developmental networks of the brain. Here we applied multivariate twin modelling and principal component analysis (PCA) to investigate the genetic architecture of the size of seven subcortical regions (caudate nucleus, thalamus, putamen, pallidum, hippocampus, amygdala and nucleus accumbens) in a genetically informative sample of adolescents and young adults (N=1038; mean age=21.6±3.2 years; including 148 monozygotic and 202 dizygotic twin pairs) from the Queensland Twin IMaging (QTIM) study. Our multivariate twin modelling identified a common genetic factor that accounts for all the heritability of intracranial volume (0.88) and a substantial proportion of the heritability of all subcortical structures, particularly those of the thalamus (0.71 out of 0.88), pallidum (0.52 out of 0.75) and putamen (0.43 out of 0.89). In addition, we also found substantial region-specific genetic contributions to the heritability of the hippocampus (0.39 out of 0.79), caudate nucleus (0.46 out of 0.78), amygdala (0.25 out of 0.45) and nucleus accumbens (0.28 out of 0.52). This provides further insight into the extent and organization of subcortical genetic architecture, which includes developmental and general growth pathways, as well as the functional specialization and maturation trajectories that influence each subcortical region.

Relevance: 30.00%

Abstract:

Since we still know very little about stem cells in their natural environment, it is useful to explore their dynamics through modelling and simulation, as well as experimentally. Most models of stem cell systems are based on deterministic differential equations that ignore the natural heterogeneity of stem cell populations. This is not appropriate at the level of individual cells and niches, where randomness is more likely to affect dynamics. In this paper, we introduce a fast stochastic method for simulating, over time, a metapopulation of stem cell niche lineages, that is, many sub-populations that together form a heterogeneous metapopulation. By selecting the common limiting timestep, our method ensures that the entire metapopulation is simulated synchronously. This is important, as it allows us to introduce interactions between separate niche lineages, which would otherwise be impossible. We extend our method to enable the coupling of many lineages into niche groups, where differentiated cells are pooled within each niche group. Using this method, we explore the dynamics of the haematopoietic system from a demand control system perspective. We find that coupling together niche lineages allows the organism to regulate blood cell numbers as closely as possible to the homeostatic optimum. Furthermore, coupled lineages respond better than uncoupled ones to random perturbations, here the loss of some myeloid cells. This could imply that it is advantageous for an organism to connect its niche lineages into groups. Our results suggest that a potentially fruitful empirical direction will be to understand how stem cell descendants communicate with the niche and how cancer may arise from a failure of such communication.
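The "common limiting timestep" idea can be sketched as a tau-leaping simulation of several birth-death lineages that all advance by the single most restrictive step, so the sub-populations stay synchronised. The rates below (linear birth, crowding-dependent death) are invented for illustration and are much simpler than the paper's haematopoietic model.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate(pops, t_end, b=1.0, d_coeff=0.01):
    """Tau-leap several birth-death lineages with one shared timestep,
    chosen as the most restrictive (common limiting) step across all
    lineages so the whole metapopulation advances synchronously.
    A toy version of the scheme; rates are hypothetical."""
    pops = np.array(pops, dtype=float)
    t = 0.0
    while t < t_end:
        birth = b * pops                       # linear birth rate
        death = d_coeff * pops ** 2            # crowding-dependent death
        total = birth + death
        dt = 0.1 / max(total.max(), 1e-9)      # common limiting timestep
        pops += rng.poisson(birth * dt) - rng.poisson(death * dt)
        pops = np.maximum(pops, 0.0)
        t += dt
    return pops

final = simulate([20, 60, 150], t_end=10.0)
print(final)  # each lineage drifts toward the b/d_coeff = 100 equilibrium
```

Because every lineage shares the same dt, cross-lineage interactions (the coupling into niche groups studied in the paper) could be added inside the loop without any re-synchronisation step.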

Relevance: 30.00%

Abstract:

Use of socket prostheses

Currently, for individuals with limb loss, the conventional method of attaching a prosthetic limb relies on a socket that fits over the residual limb. However, there are a number of issues concerning the use of a socket (e.g., blisters, irritation, and discomfort) that result in dissatisfaction with socket prostheses and ultimately lead to a significant decrease in quality of life.

Bone-anchored prostheses

Alternatively, the concept of attaching artificial limbs directly to the skeletal system has been developed (bone-anchored prostheses), as it alleviates many of the issues surrounding the conventional socket interface. Bone-anchored prostheses rely on two critical components: the implant, and the percutaneous abutment or adapter that forms the connection to the external prosthetic system (Figure 1). To date, an implant that screws into the long bone of the residual limb has been the most common intervention. More recently, however, press-fit implants have been introduced and their use is increasing. Several other devices are currently at various stages of development, particularly in Europe and the United States.

Benefits of bone-anchored prostheses

Several key studies have demonstrated that bone-anchored prostheses have major clinical benefits compared with socket prostheses (e.g., quality of life, prosthetic use, body image, hip range of motion, sitting comfort, ease of donning and doffing, osseoperception (proprioception), and walking ability) and acceptable safety, in terms of implant stability and infection. Additionally, this method of attachment allows amputees to participate in a wide range of daily activities for a substantially longer duration. Overall, the system has demonstrated a significant enhancement to quality of life.

Challenges of direct skeletal attachment

However, because of the direct skeletal attachment, serious injury and damage can occur through excessive loading events such as a fall (e.g., component damage, peri-prosthetic fracture, hip dislocation, and femoral head fracture). These incidents are costly (e.g., replacement of components) and may require further surgical intervention. Currently, these risks limit the acceptance of bone-anchored technology and the substantial improvement to quality of life that this treatment offers. An in-depth investigation into these risks highlighted a clear need to re-design and improve the componentry in the system (Figure 2), to improve overall safety during excessive loading events.

Aim and purposes

The ultimate aim of this doctoral research is to improve the loading safety of bone-anchored prostheses and to reduce the incidence of injury and damage through the design of load-restricting components, enabling individuals fitted with the system to partake in everyday activities with increased security and self-assurance. The safety component will be designed to release or ‘fail’ external to the limb, in a way that protects the internal bone-implant interface, thus removing the need for restorative surgery and avoiding potential damage to the bone. This requires detailed knowledge of the loads typically experienced by the limb and an understanding of potential overload situations that might occur. Hence, a comprehensive review of the literature on the loading of bone-anchored prostheses will be conducted as part of this project, with the potential for additional experimental studies of the loads during normal activities to fill gaps in the literature. This information will be pivotal in determining the specifications for the properties of the safety component and the bone-implant system. The project will follow the Stanford Biodesign process for the development of the safety component.