21 results for jumps

in Queensland University of Technology - ePrints Archive


Relevance:

20.00%

Publisher:

Abstract:

Recent literature has focused on realized volatility models to predict financial risk. This paper studies the benefit of explicitly modeling jumps in this class of models for value at risk (VaR) prediction. Several popular realized volatility models are compared in terms of their VaR forecasting performance through a Monte Carlo study and an analysis based on empirical data for eight Chinese stocks. The results suggest that careful modeling of jumps in realized volatility models can substantially improve VaR prediction, especially for emerging markets, where jumps play a stronger role than in developed markets.
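As background for the class of models compared here, the following is a minimal sketch of the realized-variance/bipower-variation jump decomposition and a naive Gaussian VaR; the intraday returns and the one-day-ahead forecast rule are illustrative assumptions, not the paper's models.

```python
import numpy as np

# Sketch of the realized-variance / bipower-variation jump decomposition that
# realized volatility models build on, plus a naive one-day-ahead Gaussian VaR.
# The intraday returns and the forecast rule are illustrative assumptions.

rng = np.random.default_rng(0)
n_intraday = 78                                   # e.g. 5-minute returns in a trading day
r = rng.normal(0.0, 0.001, n_intraday)
r[40] += 0.02                                     # inject one price jump

rv = np.sum(r ** 2)                                              # realized variance
mu1 = np.sqrt(2.0 / np.pi)
bv = np.sum(np.abs(r[1:]) * np.abs(r[:-1])) / mu1 ** 2           # bipower variation (jump-robust)
jump = max(rv - bv, 0.0)                                         # jump contribution to variance

sigma_forecast = np.sqrt(rv)                      # naive forecast: tomorrow's vol = today's realized vol
var_99 = 2.326 * sigma_forecast                   # 99% one-day Gaussian VaR, as a return magnitude
print(f"RV={rv:.6f}  BV={bv:.6f}  jump share={jump / rv:.2%}  99% VaR={var_99:.2%}")
```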

Relevance:

20.00%

Publisher:

Abstract:

This paper examines the impact of allowing for stochastic volatility and jumps (SVJ) in a structural model on corporate credit risk prediction. The results from a simulation study verify the better performance of the SVJ model compared with the commonly used Merton model, and three sources are provided to explain the superiority. The empirical analysis on two real samples further confirms the importance of recognizing stochastic volatility and jumps by showing that the SVJ model reduces the bias in spread prediction relative to the Merton model and better explains the time variation in actual CDS spreads. The improvements are particularly apparent for small firms or when the market is turbulent, as during the recent financial crisis.
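For context, the benchmark Merton (1974) spread that the SVJ model is compared against can be computed as in the sketch below; the firm parameters are illustrative placeholders, not values from the paper.

```python
import numpy as np
from scipy.stats import norm

# Baseline Merton (1974) structural-model credit spread, the benchmark the SVJ
# model is compared against.  The firm parameters are illustrative placeholders.
V, D = 120.0, 100.0            # asset value and face value of debt
sigma, r, T = 0.25, 0.03, 5.0  # asset volatility, risk-free rate, debt maturity (years)

d1 = (np.log(V / D) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)

equity = V * norm.cdf(d1) - D * np.exp(-r * T) * norm.cdf(d2)   # equity = call on firm assets
debt = V - equity                                               # value of the risky debt
y = -np.log(debt / D) / T                                       # promised yield on the debt
spread_bp = (y - r) * 1e4
print(f"risk-neutral default probability: {norm.cdf(-d2):.3f}, credit spread: {spread_bp:.1f} bp")
```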

Relevance:

10.00%

Publisher:

Abstract:

Design as seen from the designer's perspective is a series of amazing imaginative jumps or creative leaps. But design as seen by the design historian is a smooth progression or evolution of ideas, such that they seem self-evident and inevitable after the event. Yet the next step is anything but obvious for the artist/creator/inventor/designer stuck at that point just before the creative leap. They know where they have come from and have a general sense of where they are going, but often do not have a precise target or goal. This is why it is misleading to talk of design as a problem-solving activity - it is better defined as a problem-finding activity. This has been very frustrating for those trying to assist the design process with computer-based, problem-solving techniques. By the time the problem has been defined, it has been solved. Indeed the solution is often the very definition of the problem. Design must be creative - or it is mere imitation. But since this crucial creative leap seems inevitable after the event, the question must arise: can we find some way of searching the space ahead? Of course there are serious problems of knowing what we are looking for and of the vastness of the search space. It may be better to discard altogether the term "searching" in the context of the design process: conceptual analogies such as search, search spaces and fitness landscapes aim to elucidate the design process. However, the vastness of the multidimensional spaces involved makes these analogies misguided, and they thereby further confound the issue. The term search becomes a misnomer, since it has connotations that imply it is possible to find what you are looking for. In such vast spaces the term search must be discarded. Thus, any attempt at searching for the highest peak in the fitness landscape as an optimal solution is also meaningless. Furthermore, even the very existence of a fitness landscape is fallacious. Although alternatives in the same region of the vast space can be compared to one another, distant alternatives will stem from radically different roots and will therefore not be comparable in any straightforward manner (Janssen 2000). Nevertheless we still have this tantalizing possibility: if a creative idea seems inevitable after the event, might the process somehow be reversed? This may be as improbable as attempting to reverse time. A more helpful analogy is from nature, where it is generally assumed that the process of evolution is not long-term goal-directed or teleological. Dennett points out a common misunderstanding of Darwinism: the idea that evolution by natural selection is a procedure for producing human beings. Evolution can have produced humankind by an algorithmic process without its being true that evolution is an algorithm for producing us. If we were to wind the tape of life back and run this algorithm again, the likelihood of "us" being created again is infinitesimally small (Gould 1989; Dennett 1995). Nevertheless, Mother Nature has proved a remarkably successful, resourceful, and imaginative inventor, generating a constant flow of incredible new design ideas to fire our imagination. Hence the current interest in the potential of the evolutionary paradigm in design. These evolutionary methods are frequently based on techniques such as the application of evolutionary algorithms, which are usually thought of as search algorithms.
It is necessary to abandon such connections with searching and see the evolutionary algorithm as a direct analogy with the evolutionary processes of nature. The process of natural selection can generate a wealth of alternative experiments, and the better ones survive. There is no one solution, there is no optimal solution, but there is continuous experiment. Nature is profligate with her prototyping and ruthless in her elimination of less successful experiments. Most importantly, nature has all the time in the world. As designers we cannot afford such profligate prototyping and ruthless experiment, nor can we operate on the time scale of the natural design process. Instead we can use the computer to compress space and time and to perform virtual prototyping and evaluation before committing ourselves to actual prototypes. This is the hypothesis underlying the evolutionary paradigm in design (1992, 1995).
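Purely as an illustrative companion to this idea, the sketch below runs a deliberately minimal evolutionary loop: generate many design variants, evaluate them with a cheap stand-in "virtual prototype", keep the better ones, and keep experimenting. The three-parameter "design" and its scoring function are invented for the example and are not drawn from the text.

```python
import random

# Minimal evolutionary loop in the spirit of the passage above: profligate
# generation of variants, virtual evaluation, ruthless elimination.
# The design representation and the evaluation are placeholder assumptions.

random.seed(0)

def evaluate(design):
    # stand-in "virtual prototype" evaluation (cheap to run, unlike a real prototype)
    span, depth, mass = design
    return span * depth - 0.5 * mass

population = [[random.uniform(0.0, 10.0) for _ in range(3)] for _ in range(30)]

for generation in range(50):
    # every survivor spawns several mutated variants
    offspring = [[min(10.0, max(0.0, p + random.gauss(0.0, 0.5))) for p in parent]
                 for parent in population for _ in range(3)]
    candidates = population + offspring
    candidates.sort(key=evaluate, reverse=True)   # keep the better experiments
    population = candidates[:30]

print("a few surviving design variants:",
      [[round(p, 2) for p in d] for d in population[:3]])
```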

Relevance:

10.00%

Publisher:

Abstract:

Financial processes may possess long memory and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debate. These difficulties present challenges for memory detection and for modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges. The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA) that can systematically eliminate trends of different orders. This method is based on the identification of scaling of the q-th-order moments and is a generalisation of the standard detrended fluctuation analysis (DFA), which uses only the second moment; that is, q = 2. We also consider the rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with the MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX), while long memory is found in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory. For these electricity price series, heavy tails are also pronounced in their probability densities. The second part of the thesis develops models to represent short-memory and long-memory financial processes as detected in Part I. These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which were established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of a Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA. The third part of the thesis provides an application of the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX).
The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in two- and three-dimensional spaces of data sets and then provide cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market. The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia. Comparisons with the results obtained from the R/S analysis, the periodogram method and MF-DFA are provided. The results from fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on the second-order moment, seem to underestimate the long-memory dynamics of the process. This highlights the need for and usefulness of fractal methods in modelling non-Gaussian financial processes with long memory.
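For readers unfamiliar with the detection method, the following is a compact sketch of the MF-DFA fluctuation function described above (profile, segmentation, polynomial detrending, q-th-order averaging); the test series and scale choices are illustrative, and q = 2 recovers standard DFA.

```python
import numpy as np

def mfdfa(x, scales, q_list, order=1):
    """MF-DFA fluctuation function F_q(s); the slope of log F_q(s) vs log s
    gives the generalised Hurst exponent h(q), with q = 2 recovering DFA."""
    profile = np.cumsum(np.asarray(x, dtype=float) - np.mean(x))   # integrated profile
    Fq = np.zeros((len(q_list), len(scales)))
    for j, s in enumerate(scales):
        n_seg = len(profile) // s
        f2 = np.empty(n_seg)
        for v in range(n_seg):                                     # split into segments
            seg = profile[v * s:(v + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, order)                       # local polynomial detrend
            f2[v] = np.mean((seg - np.polyval(coef, t)) ** 2)
        for i, q in enumerate(q_list):                             # q-th order average
            if q == 0:
                Fq[i, j] = np.exp(0.5 * np.mean(np.log(f2)))
            else:
                Fq[i, j] = np.mean(f2 ** (q / 2.0)) ** (1.0 / q)
    return Fq

# usage: for uncorrelated noise the q = 2 exponent should be close to 0.5
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
scales = np.array([16, 32, 64, 128, 256])
Fq = mfdfa(x, scales, q_list=[2])
h2 = np.polyfit(np.log(scales), np.log(Fq[0]), 1)[0]
print(f"estimated h(2) = {h2:.2f}")
```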

Relevance:

10.00%

Publisher:

Abstract:

A major focus of research in nanotechnology is the development of novel, high-throughput techniques for fabrication of arbitrarily shaped surface nanostructures of sub-100 nm to atomic scale. A related pursuit is the development of simple and efficient means for parallel manipulation and redistribution of adsorbed atoms, molecules and nanoparticles on surfaces – adparticle manipulation. These techniques will be used for the manufacture of nanoscale surface-supported functional devices in nanotechnologies such as quantum computing, molecular electronics and lab-on-a-chip, as well as for modifying surfaces to obtain novel optical, electronic, chemical, or mechanical properties. A favourable approach to formation of surface nanostructures is self-assembly. In self-assembly, nanostructures are grown by aggregation of individual adparticles that diffuse by thermally activated processes on the surface. The passive nature of this process means it is generally not suited to formation of arbitrarily shaped structures. The self-assembly of nanostructures at arbitrary positions has been demonstrated, though these have typically required a pre-patterning treatment of the surface using sophisticated techniques such as electron beam lithography. On the other hand, a parallel adparticle manipulation technique would be suited to directing the self-assembly process to occur at arbitrary positions, without the need for pre-patterning the surface. There is at present a lack of techniques for parallel manipulation and redistribution of adparticles to arbitrary positions on the surface. This is an issue that needs to be addressed, since these techniques can play an important role in nanotechnology. In this thesis, we propose such a technique – thermal tweezers. In thermal tweezers, adparticles are redistributed by localised heating of the surface. This locally enhances surface diffusion of adparticles so that they rapidly diffuse away from the heated regions. Using this technique, the redistribution of adparticles to form a desired pattern is achieved by heating the surface at specific regions. In this project, we have focussed on the holographic implementation of this approach, where the surface is heated by holographic patterns of interfering pulsed laser beams. This implementation is suitable for the formation of arbitrarily shaped structures; the only condition is that the shape can be produced by holographic means. In the simplest case, the laser pulses are linearly polarised and intersect to form an interference pattern that is a modulation of intensity along a single direction. Strong optical absorption at the intensity maxima of the interference pattern results in an approximately sinusoidal variation of the surface temperature along one direction. The main aim of this research project is to investigate the feasibility of the holographic implementation of thermal tweezers as an adparticle manipulation technique. Firstly, we investigate theoretically the surface diffusion of adparticles in the presence of sinusoidal modulation of the surface temperature. Very strong redistribution of adparticles is predicted when there is strong interaction between the adparticle and the surface and the amplitude of the temperature modulation is ~100 K. We have proposed a thin metallic film deposited on a glass substrate and heated by interfering laser beams (optical wavelengths) as a means of generating a very large amplitude of surface temperature modulation.
Indeed, we predict theoretically by numerical solution of the thermal conduction equation that the amplitude of the temperature modulation on the metallic film can be much greater than 100 K when heated by nanosecond pulses with an energy of ~1 mJ. The formation of surface nanostructures of less than 100 nm in width is predicted at optical wavelengths in this implementation of thermal tweezers. Furthermore, we propose a simple extension to this technique where a spatial phase shift of the temperature modulation effectively doubles or triples the resolution. At the same time, increased resolution is predicted by reducing the wavelength of the laser pulses. In addition, we present two distinctly different, computationally efficient numerical approaches for theoretical investigation of surface diffusion of interacting adparticles – the Monte Carlo Interaction Method (MCIM) and the random potential well method (RPWM). Using each of these approaches we have investigated thermal tweezers for redistribution of both strongly and weakly interacting adparticles. We have predicted that strong interactions between adparticles can increase the effectiveness of thermal tweezers, by demonstrating practically complete adparticle redistribution into the low-temperature regions of the surface. This is promising from the point of view of thermal tweezers applied to directed self-assembly of nanostructures. Finally, we present a new and more efficient numerical approach to theoretical investigation of thermal tweezers for non-interacting adparticles. In this approach, the local diffusion coefficient is determined from solution of the Fokker-Planck equation. The diffusion equation is then solved numerically using the finite volume method (FVM) to directly obtain the probability density of adparticle position. We compare predictions of this approach to those of the Ermak algorithm solution of the Langevin equation, and relatively good agreement is shown at intermediate and high friction. In the low-friction regime, we predict and investigate the phenomenon of ‘optimal’ friction and describe its occurrence due to very long jumps of adparticles as they diffuse from the hot regions of the surface. Future research directions, both theoretical and experimental, are also discussed.
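To make the mechanism concrete, here is a deliberately simple 1-D lattice-hopping sketch of the thermal-tweezers idea, with Arrhenius hop rates set by a sinusoidal surface temperature so that adparticles accumulate in the cold regions; the barrier, temperatures and lattice size are illustrative assumptions and are not the thesis's parameters or methods (MCIM, RPWM or FVM).

```python
import numpy as np

# 1-D hopping sketch of thermal tweezers: adparticles hop on a lattice with
# Arrhenius rates set by a sinusoidal surface temperature, so they pile up
# in the cold regions.  All parameter values are illustrative assumptions.
kB = 8.617e-5                     # Boltzmann constant, eV/K
Ea = 0.5                          # assumed diffusion barrier, eV
T0, dT = 400.0, 100.0             # mean temperature and modulation amplitude, K
L = 200                           # lattice sites (one modulation period)
n_particles, n_steps = 2000, 5000

rng = np.random.default_rng(1)
x = rng.integers(0, L, n_particles)                    # initial adparticle positions
T = T0 + dT * np.sin(2.0 * np.pi * np.arange(L) / L)   # surface temperature profile
p_hop = np.exp(-Ea / (kB * T))                         # relative Arrhenius hop probability
p_hop /= p_hop.max()                                   # hottest site attempts a hop every step

for _ in range(n_steps):
    hop = rng.random(n_particles) < p_hop[x]           # hops are more likely where it is hot
    step = rng.choice((-1, 1), n_particles)            # unbiased hop direction
    x = (x + hop * step) % L                           # periodic boundary

density, _ = np.histogram(x, bins=20, range=(0, L))
print("occupancy per bin (bins covering cold regions should dominate):")
print(density)
```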

Relevance:

10.00%

Publisher:

Abstract:

Much research has investigated the differences between option-implied volatilities and econometric model-based forecasts. Implied volatility is a market-determined forecast, in contrast to model-based forecasts that employ some degree of smoothing of past volatility to generate forecasts. Implied volatility has the potential to reflect information that a model-based forecast could not. This paper considers two issues relating to the informational content of the S&P 500 VIX implied volatility index: first, whether it subsumes information on how historical jump activity contributed to price volatility; and second, whether the VIX reflects any incremental information pertaining to future jump activity relative to model-based forecasts. It is found that the VIX index both subsumes information relating to past jump contributions to total volatility and reflects incremental information pertaining to future jump activity. This issue has not been examined previously and expands our understanding of how option markets form their volatility forecasts.
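As a stylised illustration of the second question, an encompassing-style regression of a future jump measure on a model-based forecast and the implied-volatility index can be run as below; all three series are simulated placeholders rather than the paper's data or exact specification.

```python
import numpy as np

# Stylised encompassing regression: does the implied-volatility index carry
# incremental information about future jump activity beyond a model-based
# forecast?  All three series are simulated placeholders, not the paper's data.

rng = np.random.default_rng(3)
T = 1000
model_forecast = rng.gamma(2.0, 0.01, T)                       # stand-in model-based jump forecast
vix = model_forecast + rng.normal(0.0, 0.005, T)               # stand-in implied-volatility index
future_jump = 0.3 * model_forecast + 0.5 * vix + rng.normal(0.0, 0.005, T)

X = np.column_stack([np.ones(T), model_forecast, vix])
beta, *_ = np.linalg.lstsq(X, future_jump, rcond=None)
resid = future_jump - X @ beta
se = np.sqrt(np.diag(np.linalg.inv(X.T @ X)) * resid.var(ddof=X.shape[1]))
# a significant coefficient on the VIX term indicates incremental information
print("coefficient on VIX:", round(beta[2], 3), " t-statistic:", round(beta[2] / se[2], 2))
```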

Relevance:

10.00%

Publisher:

Abstract:

The Queensland Court of Appeal recently heard a case that raised the defence of volenti non fit injuria. By a majority of 2:1 the court held in Leyden v Caboolture Shire Council [2007] QCA 134 (20 April 2007) that the defence of volenti was established and defeated the action in negligence for damages for personal injury. The facts of the case were quite simple. The plaintiff was 15 years old when he was injured at Bluebell Park, which was controlled and managed by the Caboolture Shire Council (the defendant). The park had a BMX track – built and maintained by the defendant. At trial it was held that although the defendant owed a duty of care to entrants, a duty was not owed to the plaintiff. The judge found that the plaintiff was different to other entrants who used facilities provided by a council in a public park. The plaintiff was not relying upon the defendant to provide a BMX track with jumps that were reasonably safe, as the evidence was that the track was regularly altered by third parties and the plaintiff knew that. Therefore it was reasoned that the plaintiff was relying upon the ability of the third parties who modified the jump and his own ability to use it, not the ability of the defendant to provide a reasonably safe track (at [10]). The trial judge also held that if a duty was owed, the defence of volenti applied so as to defeat the claim for damages. This was based upon the evidence that the plaintiff knew of the modification of the jump by third parties and knew of the risk. It was held that the plaintiff ‘had the appropriate subjective appreciation of the risk’ (at [11]).

Relevance:

10.00%

Publisher:

Abstract:

The Queensland Department of Main Roads uses Weigh-in-Motion (WiM) devices to covertly monitor (at highway speed) axle mass, axle configurations and speed of heavy vehicles on the road network. Such data is critical for the planning and design of the road network. Some of the data appears excessively variable. The current work considers the nature, magnitude and possible causes of WiM data variability. Over fifty possible causes of variation in WiM data have been identified in the literature. Data exploration has highlighted five basic types of variability, specifically:
• cycling, both diurnal and annual;
• consistent but unreasonable data;
• data jumps;
• variations between data from opposite sides of the one road; and
• non-systematic variations.
This work is part of wider research into procedures to eliminate or mitigate the influence of WiM data variability.
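Purely as an illustration of screening for one of these variability types (sudden data jumps, i.e. level shifts), the sketch below compares rolling medians of a daily summary statistic before and after each day; the series, window and threshold are invented placeholders, not part of the reported work.

```python
import numpy as np

# Toy screen for sudden "data jumps" (level shifts) in a daily WiM summary
# statistic, by comparing rolling medians before and after each day.
# The series, window and threshold are invented placeholders.

rng = np.random.default_rng(4)
daily_mean_axle_mass = np.concatenate([rng.normal(6.0, 0.15, 120),
                                       rng.normal(6.8, 0.15, 120)])  # tonnes, step change at day 120

window, threshold = 14, 0.4          # days, tonnes
flagged_days = []
for d in range(window, len(daily_mean_axle_mass) - window):
    before = np.median(daily_mean_axle_mass[d - window:d])
    after = np.median(daily_mean_axle_mass[d:d + window])
    if abs(after - before) > threshold:
        flagged_days.append(d)

print("possible data jumps flagged around days:", flagged_days[:5], "...")
```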

Relevance:

10.00%

Publisher:

Abstract:

The aetiology behind overuse injuries such as stress fractures is complex and multi-factorial. In sporting events where the loading is likely to be uneven (e.g. hurdling and jumps), research has suggested that the frequency of stress fractures seems to favour the athlete's dominant limb. The tendency for an individual to have a preferred limb for voluntary motor acts makes limb selection a possible factor behind the development of unilateral overuse injuries, particularly when the limb is repeatedly used during high-loading activities. The event of sprint hurdling is well suited to the study of loading asymmetry, as the hurdling technique is repetitive and the limb movement asymmetrical. Of relevance to this study is the high incidence of navicular stress fractures (NSF) in hurdlers, with suggestions that there is a tendency for the fracture to develop in the trail-leg foot, although this is not fully accepted. The ground reaction force (GRF) with each foot contact is influenced by the hurdle action, with research finding step-to-step loading variations. However, it is unknown whether this loading asymmetry extends to individual forefoot joints, thereby influencing stress fracture development. The first part of the study involved a series of investigations using a commercially available matrix-style in-shoe sensor system (F-Scan™, Tekscan Inc.). The suitability of insole sensor systems and custom-made discrete sensors for use in hurdling-related training activities was assessed, and the methodology used to analyse foot loading with each technology was investigated. The insole and discrete sensor systems tested proved to be unsuitable for use during full-pace hurdling. Instead, a running barrier task designed to replicate the four repetitive foot contacts present during hurdling was assessed. This involved the clearance of a series of 6 barriers (low training hurdles), placed in a straight line, using 4 strides between each. The second part of the study involved the analysis of inter-limb and within-foot loading asymmetries using stance duration as well as vertical GRF under the hallux (T1), the first metatarsal head (M1) and the central forefoot peak pressure site (M2), during walking, running, and running with barrier clearances. The contribution to loading asymmetry that each of the four repetitive foot contacts made during a series of barrier clearances was also assessed. Inter-limb asymmetry in forefoot loading occurred at discrete forefoot sites in a non-uniform manner across the three gait conditions. When the individual barrier foot contacts were compared, stance duration was asymmetrical and the proportion of total forefoot load at M2 was asymmetrical. There were no significant differences between the proportion of forefoot load at M1 and that at M2 for any of the steps involved in the barrier clearance. A case study testing experimental (discrete) sensors during full-pace sprinting and hurdling found that, during both gait conditions, the trail limb experienced the greater vertical GRF at M1 and M2. During full-pace hurdling, increased stance duration and vertical loading were characteristic of the trail-limb hurdle foot contacts. Commercially available in-shoe systems are not suitable for on-field assessment of full-pace hurdling. For the use of discrete sensor technology to become commonplace in the field, more robust sensors need to be developed.
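A common way to express the inter-limb comparisons described above is a symmetry index computed from paired lead-limb and trail-limb values; the sketch below uses a standard percentage formulation with made-up peak-force numbers at the three forefoot sites (T1, M1, M2), not the study's data.

```python
# Symmetry-index calculation of the kind used to express inter-limb loading
# asymmetry; the peak vertical GRF values for the lead and trail limbs are
# made-up placeholders, not the study's data.

def symmetry_index(lead, trail):
    """Percentage symmetry index: 0 = symmetric, positive = trail limb loaded more."""
    return 100.0 * (trail - lead) / (0.5 * (trail + lead))

peak_force = {                                  # (lead limb, trail limb) vertical GRF in newtons
    "T1 (hallux)":            (310.0, 335.0),
    "M1 (1st metatarsal)":    (420.0, 465.0),
    "M2 (central forefoot)":  (505.0, 520.0),
}
for site, (lead, trail) in peak_force.items():
    print(f"{site:24s} SI = {symmetry_index(lead, trail):5.1f} %")
```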

Relevance:

10.00%

Publisher:

Abstract:

Objectives: The current study investigated the change in neuromuscular contractile properties following competitive rugby league matches and the relationship with physical match demands. Design: Eleven trained, male rugby league players participated in 2–3 amateur, competitive matches (n = 30). Methods: Prior to, immediately (within 15 min) and 2 h post-match, players performed repeated counter-movement jumps (CMJ) followed by isometric tests on the right knee extensors for maximal voluntary contraction (MVC), voluntary activation (VA) and evoked twitch contractile properties of peak twitch force (Pt), rate of torque development (RTD), contraction duration (CD) and relaxation rate (RR). During each match, players wore 1 Hz Global Positioning System (GPS) devices to record the distance and speeds of matches. Further, matches were filmed and underwent notational analysis for the number of total-body collisions. Results: Total, high-intensity and very-high-intensity distances covered and mean speed were 5585 ± 1078 m, 661 ± 265 m, 216 ± 121 m and 75 ± 14 m·min⁻¹, respectively. MVC was significantly reduced immediately and 2 h post-match by 8 ± 11 and 12 ± 13% from pre-match (p < 0.05). Moreover, twitch contractile properties indicated a suppression of Pt, RTD and RR immediately post-match (p < 0.05). However, VA was not significantly altered from pre-match (90 ± 9%) to immediately post-match (89 ± 9%) or 2 h post-match (89 ± 8%) (p > 0.05). Correlation analyses indicated that total playing time (r = −0.50) and mean speed (r = −0.40) were moderately associated with the change in post-match MVC, while mean speed (r = 0.35) was moderately associated with VA. Conclusions: The present study highlights that the physical demands of competitive amateur rugby league result in disruption of peripheral contractile function, and that post-match voluntary torque suppression may be associated with match playing time and mean speed.
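For reference, voluntary activation from twitch interpolation and the percentage change in MVC reported above are simple ratio calculations; the sketch below applies the standard formulas to made-up torque values, not the study's data.

```python
# Standard twitch-interpolation calculation for voluntary activation (VA) and
# the percentage change in MVC; the torque values are illustrative, not the
# study's data.

def voluntary_activation(superimposed_twitch, potentiated_twitch):
    """VA (%) = (1 - superimposed twitch / resting potentiated twitch) * 100."""
    return (1.0 - superimposed_twitch / potentiated_twitch) * 100.0

mvc_pre, mvc_post = 250.0, 230.0                       # maximal voluntary contraction, N·m
print(f"MVC change: {100.0 * (mvc_post - mvc_pre) / mvc_pre:.1f} %")
print(f"VA pre-match:  {voluntary_activation(5.0, 50.0):.1f} %")
print(f"VA post-match: {voluntary_activation(5.5, 48.0):.1f} %")
```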

Relevance:

10.00%

Publisher:

Abstract:

Transport processes within heterogeneous media may exhibit non-classical diffusion or dispersion; that is, not adequately described by the classical theory of Brownian motion and Fick's law. We consider a space fractional advection-dispersion equation based on a fractional Fick's law. The equation involves the Riemann-Liouville fractional derivative which arises from assuming that particles may make large jumps. Finite difference methods for solving this equation have been proposed by Meerschaert and Tadjeran. In the variable coefficient case, the product rule is first applied, and then the Riemann-Liouville fractional derivatives are discretised using standard and shifted Grunwald formulas, depending on the fractional order. In this work, we consider a finite volume method that deals directly with the equation in conservative form. Fractionally-shifted Grunwald formulas are used to discretise the fractional derivatives at control volume faces. We compare the two methods for several case studies from the literature, highlighting the convenience of the finite volume approach.
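To illustrate the discretisation that both methods build on, the sketch below computes the Grunwald weights by the standard recursion and applies the shifted Grunwald formula to approximate a Riemann-Liouville derivative of order 1 < α < 2; the test function, grid and fractional order are illustrative choices, not the paper's case studies.

```python
import numpy as np
from math import gamma

def grunwald_weights(alpha, n):
    """g_k = (-1)^k * binom(alpha, k), via the standard recursion."""
    g = np.empty(n + 1)
    g[0] = 1.0
    for k in range(1, n + 1):
        g[k] = g[k - 1] * (k - 1.0 - alpha) / k
    return g

def rl_derivative_shifted(u, h, alpha):
    """Shifted Grunwald formula: D^alpha u(x_i) ~ h^(-alpha) * sum_k g_k u_{i-k+1}.
    The last node would need a ghost value u_n, so it is left less accurate here."""
    n = len(u)
    g = grunwald_weights(alpha, n)
    d = np.zeros(n)
    for i in range(n):
        for k in range(i + 2):              # shift of one node to the right
            idx = i - k + 1
            if 0 <= idx < n:
                d[i] += g[k] * u[idx]
    return d / h ** alpha

alpha = 1.8
x = np.linspace(0.0, 1.0, 201)
h = x[1] - x[0]
u = x ** 2                                  # exact RL derivative: 2 x^(2-alpha) / Gamma(3-alpha)
exact = 2.0 * x ** (2.0 - alpha) / gamma(3.0 - alpha)
approx = rl_derivative_shifted(u, h, alpha)
err = np.max(np.abs(approx[2:-1] - exact[2:-1]))   # skip nodes adjacent to the boundaries
print(f"max interior error for alpha = {alpha}: {err:.4f}")
```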

Relevance:

10.00%

Publisher:

Abstract:

The price formation of financial assets is a complex process. It extends beyond the standard economic paradigm of supply and demand to the understanding of the dynamic behavior of price variability, the price impact of information, and the implications of the trading behavior of market participants for prices. In this thesis, I study aggregate market and individual asset volatility, liquidity dimensions, and causes of mispricing for US equities over a recent sample period. How are volatility forecasts modeled, what determines intradaily jumps and changes in intradaily volatility, and what drives the premium of traded equity indexes? Are they induced, for example, by the information content of lagged volatility and return parameters, or by macroeconomic news, changes in liquidity and volatility? Besides satisfying our intellectual curiosity, answers to these questions are of direct importance to investors developing trading strategies, policy makers evaluating macroeconomic policies and arbitrageurs exploiting mispricing in exchange-traded funds. Results show that the leverage effect and lagged absolute returns improve forecasts of the continuous components of daily realized volatility as well as jumps. Implied volatility does not subsume the information content of lagged returns in forecasting realized volatility and its components. The reported results are linked to the heterogeneous market hypothesis and demonstrate the validity of extending the hypothesis to returns. Depth shocks, signed order flow, the number of trades, and resiliency are the most important determinants of intradaily volatility. In contrast, spread shocks and resiliency are predictive of signed intradaily jumps. There are fewer macroeconomic news announcement surprises that cause extreme price movements or jumps than surprises that elevate intradaily volatility. Finally, the premium of exchange-traded funds is significantly associated with momentum in net asset value and a number of liquidity parameters, including the spread, traded volume, and illiquidity. The mispricing of industry exchange-traded funds suggests that limits to arbitrage are driven by potential illiquidity.
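The volatility-forecasting part of these results rests on HAR-type regressions augmented with jump and leverage terms; the sketch below sets up such a regression on simulated daily series (continuous variance, jumps, returns) purely to show the structure, not the thesis's data or exact specification.

```python
import numpy as np

# HAR-type regression with jump and leverage terms on simulated daily series;
# the data-generating choices below are placeholders for illustration only.

rng = np.random.default_rng(5)
T = 1500
C = rng.gamma(2.0, 1e-4, T)                             # continuous realized-variance component
J = (rng.random(T) < 0.1) * rng.gamma(1.0, 5e-5, T)     # occasional jump contributions
r = rng.normal(0.0, np.sqrt(C + J))                     # daily returns with that variance

def rolling_mean(x, w):
    out = np.full_like(x, np.nan)
    for t in range(w - 1, len(x)):
        out[t] = x[t - w + 1:t + 1].mean()
    return out

Cw, Cm = rolling_mean(C, 5), rolling_mean(C, 22)        # weekly and monthly averages
lev = np.minimum(r, 0.0)                                # leverage term: signed negative return
y = np.log(C + J)                                       # log realized variance (forecast target)

rows = slice(22, T - 1)                                 # predictors at day t ...
X = np.column_stack([np.ones(T)[rows], np.log(C[rows]), np.log(Cw[rows]),
                     np.log(Cm[rows]), J[rows], lev[rows]])
beta, *_ = np.linalg.lstsq(X, y[23:], rcond=None)       # ... explain variance at day t+1
print("HAR-CJ + leverage coefficients:", np.round(beta, 3))
```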

Relevance:

10.00%

Publisher:

Abstract:

Utilising quantitative and qualitative research methods, the thesis explored how movement patterns were coordinated under different conditions in elite athletes. Results revealed each elite athlete's ability to use multiple, varied information sources to guide successful task performance, highlighting the specific role of surrounding objects in the performance environment in perceptually guiding behaviour. Combining elite coaching knowledge with empirical research improved understanding of the role of vision in regulating interceptive behaviours and enhanced the representative design of training environments. The main findings have been applied to the training design of the Athletics Australia National Jumps Centre at the Queensland Academy of Sport in preparation for the World Indoor Championships, World Championships, and Olympic Games for Australian long and triple jumpers.

Relevance:

10.00%

Publisher:

Abstract:

We discuss algorithms for combining sequential prediction strategies, a task which can be viewed as a natural generalisation of the concept of universal coding. We describe a graphical language based on Hidden Markov Models for defining prediction strategies, and we provide both existing and new models as examples. The models include efficient, parameterless models for switching between the input strategies over time, among them a model for the case where switches tend to occur in clusters, and finally a new model for the scenario where the prediction strategies have a known relationship and jumps are typically between strongly related ones. This last model is relevant for coding time series data where parameter drift is expected. As theoretical contributions we introduce an interpolation construction that is useful in the development and analysis of new algorithms, and we establish a new, sophisticated lemma for analysing the individual-sequence regret of parameterised models.
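A well-known baseline instance of switching between input strategies is the fixed-share mixture (Herbster and Warmuth); the sketch below combines two constant Bernoulli forecasters on toy data whose bias switches halfway through. It is offered only as a generic illustration of the setting, not as one of the paper's HMM constructions.

```python
import numpy as np

# Fixed-share style mixture over two prediction strategies, as a generic
# illustration of switching between input strategies; the outcomes and the
# two constant forecasters are toy assumptions.

rng = np.random.default_rng(2)
T = 400
# Bernoulli outcomes whose bias switches halfway through the sequence
y = np.concatenate([rng.random(T // 2) < 0.8,
                    rng.random(T // 2) < 0.2]).astype(float)

experts = np.array([0.8, 0.2])        # two constant probability forecasters for P(y_t = 1)
alpha = 0.02                          # assumed switching rate
w = np.array([0.5, 0.5])              # prior weights over the strategies
log_loss = 0.0

for t in range(T):
    p = float(w @ experts)            # mixture forecast for P(y_t = 1)
    log_loss -= np.log(p if y[t] == 1.0 else 1.0 - p)
    likelihood = experts if y[t] == 1.0 else 1.0 - experts
    w = w * likelihood                # Bayes update against the observed outcome
    w /= w.sum()
    w = (1 - alpha) * w + alpha * (w.sum() - w) / (len(w) - 1)   # share step allows switches

print(f"fixed-share mixture log loss per round: {log_loss / T:.3f}")
```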