194 results for Nuisance parameter


Relevance:

10.00%

Publisher:

Abstract:

Financial processes may possess long memory and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still a subject of debate. These difficulties pose challenges for memory detection and for modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges.

The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA), which can systematically eliminate trends of different orders. This method is based on the identification of the scaling of the q-th-order moments and is a generalisation of standard detrended fluctuation analysis (DFA), which uses only the second moment, that is, q = 2. We also consider rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with those of MF-DFA. An interesting finding is that short memory is detected for stock prices on the American Stock Exchange (AMEX), while long memory is present in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory. For these electricity price series, heavy tails are also pronounced in their probability densities.

The second part of the thesis develops models to represent the short-memory and long-memory financial processes detected in Part I. These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the stock prices on the AMEX, which were established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of a Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA.

The third part of the thesis provides an application of the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX). The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of data sets and then provide cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market.

The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and simulate sample paths of the equation. The method is then applied to analyse daily spot prices for the five states of Australia. Comparisons with the results obtained from R/S analysis, the periodogram method and MF-DFA are provided. The results from fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on second-order moments, seem to underestimate the long-memory dynamics of the process. This highlights the need for, and usefulness of, fractal methods in modelling non-Gaussian financial processes with long memory.
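
As a rough illustration of the scaling analysis described above, the following Python sketch computes the q-th-order fluctuation function of MF-DFA and estimates the scaling exponent from a log-log fit. The function name, the choice of scales and the first-order polynomial detrending are assumptions made for the sketch, not the thesis's actual implementation.

```python
import numpy as np

def mfdfa_exponent(x, scales, q=2, order=1):
    """Slope of log F_q(s) against log s estimates the generalised Hurst
    exponent h(q); q = 2 recovers standard DFA, other non-zero q give the
    multifractal generalisation."""
    y = np.cumsum(x - np.mean(x))                     # profile of the series
    F = []
    for s in scales:
        n_seg = len(y) // s
        msq = np.empty(n_seg)
        t = np.arange(s)
        for v in range(n_seg):
            seg = y[v * s:(v + 1) * s]
            trend = np.polyval(np.polyfit(t, seg, order), t)
            msq[v] = np.mean((seg - trend) ** 2)      # F^2(v, s)
        F.append(np.mean(msq ** (q / 2)) ** (1 / q))  # q-th order fluctuation
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope                                      # h(2) > 0.5 suggests long memory

# e.g. h = mfdfa_exponent(np.diff(np.log(prices)), scales=[16, 32, 64, 128, 256])
```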

Relevance:

10.00%

Publisher:

Abstract:

This thesis details a methodology to estimate urban stormwater quality based on a set of easy-to-measure physico-chemical parameters. These parameters can be used as surrogate parameters to estimate other key water quality parameters. The key pollutants considered in this study are nitrogen compounds, phosphorus compounds and solids. The use of surrogate parameter relationships to evaluate urban stormwater quality will reduce the cost of monitoring and give scientists added capability to generate large amounts of data for more rigorous analysis of key urban stormwater quality processes, namely pollutant build-up and wash-off. This in turn will assist in the development of more stringent stormwater quality mitigation strategies.

The research methodology was based on a series of field investigations, laboratory testing and data analysis. Field investigations were conducted to collect pollutant build-up and wash-off samples from residential roads and roof surfaces. Past research has identified that these impervious surfaces are the primary pollutant sources for urban stormwater runoff. A specially designed vacuum system and rainfall simulator were used in the collection of pollutant build-up and wash-off samples. The collected samples were tested for a range of physico-chemical parameters. Data analysis was conducted using both univariate and multivariate techniques.

Analysis of build-up samples showed that pollutant loads accumulated on road surfaces are higher than the pollutant loads on roof surfaces. Furthermore, it was found that the fraction of solids smaller than 150 µm is the most polluted particle size fraction in solids build-up on both road and roof surfaces. The analysis of wash-off data confirmed that the simulated wash-off process adopted for this research agrees well with the general understanding of the wash-off process on urban impervious surfaces. The observed pollutant concentrations in wash-off from road surfaces were different to those in wash-off from roof surfaces. Therefore, firstly, the identification of surrogate parameters was undertaken separately for road and roof surfaces. Secondly, a common set of surrogate parameter relationships was identified for both surfaces together to evaluate urban stormwater quality.

Surrogate parameters were identified for nitrogen, phosphorus and solids separately. Electrical conductivity (EC), total organic carbon (TOC), dissolved organic carbon (DOC), total suspended solids (TSS), total dissolved solids (TDS), total solids (TS) and turbidity (TTU) were selected as the relatively easy-to-measure parameters. Consequently, surrogate parameters for nitrogen and phosphorus were identified from this set for both road and roof surfaces. Additionally, surrogate parameters for TSS, TDS and TS, which are key indicators of solids, were obtained from EC and TTU, which can be direct field measurements. The regression relationships developed between the surrogate parameters and the key parameters of interest took a similar form for road and roof surfaces, namely simple linear regression equations. The identified relationships for road surfaces were DTN-TDS:DOC, TP-TS:TOC, TSS-TTU, TDS-EC and TS-TTU:EC. The identified relationships for roof surfaces were DTN-TDS and TS-TTU:EC. Some of the relationships developed had a higher confidence interval whilst others had a relatively low confidence interval. The relationships obtained for DTN-TDS, DTN-DOC, TP-TS and TS-EC for road surfaces demonstrated good near-site portability potential. Currently, best management practices focus on providing treatment measures for stormwater runoff at catchment outlets, where road and roof runoff are not separated. In this context, it is important to find a common set of surrogate parameter relationships for road and roof surfaces to evaluate urban stormwater quality. Consequently, the DTN-TDS, TS-EC and TS-TTU relationships were identified as the common relationships capable of providing measurements of DTN and TS irrespective of the surface type.
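
As a minimal sketch of what such a surrogate relationship looks like in practice, the TS-EC relationship can be fitted as a simple linear regression. The numbers below are synthetic stand-ins, not the study's measurements.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in data; in the study these would be paired wash-off
# measurements of EC (uS/cm) and TS (mg/L) from road and roof plots.
rng = np.random.default_rng(1)
ec = rng.uniform(50.0, 400.0, 40)
ts = 0.7 * ec + rng.normal(0.0, 15.0, ec.size)

fit = stats.linregress(ec, ts)        # TS-EC surrogate relationship
print(f"TS = {fit.slope:.3f} * EC + {fit.intercept:.1f}  (r^2 = {fit.rvalue**2:.3f})")
```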

Relevance:

10.00%

Publisher:

Abstract:

An element spacing of less than half a wavelength introduces strong mutual coupling between the ports of compact antenna arrays. The strong coupling causes significant system performance degradation. A decoupling network may compensate for the mutual coupling. Alternatively, port decoupling can be achieved using a modal feed network. In response to an input signal at one of the input ports, this feed network excites the antenna elements in accordance with one of the eigenvectors of the array scattering parameter matrix. In this paper, a novel 4-element monopole array is described. The feed network of the array is implemented as a planar ring-type circuit in stripline with four coupled line sections. The new configuration offers a significant reduction in size, resulting in a very compact array.
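
The modal idea can be illustrated numerically: for a symmetric array, the eigenvectors of the port scattering matrix define excitations whose reflections stay within the same mode, so each eigenvector gives a decoupled input. The 4x4 matrix below uses invented coupling values for the sketch, not measurements of the array in the paper.

```python
import numpy as np

# Hypothetical symmetric 4-port scattering matrix: s11 self-reflection,
# s12 adjacent-element coupling, s13 diagonal coupling (illustrative values).
s11, s12, s13 = 0.20, 0.35, 0.15
S = np.array([[s11, s12, s13, s12],
              [s12, s11, s12, s13],
              [s13, s12, s11, s12],
              [s12, s13, s12, s11]])

w, V = np.linalg.eigh(S)   # real symmetric: eigenvalues = modal reflections
print(np.round(V, 3))      # each column is one decoupled excitation vector
```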

Relevance:

10.00%

Publisher:

Abstract:

This paper investigates the problem of appropriate load sharing in an autonomous microgrid. High-gain angle droop control ensures proper load sharing, especially under weak system conditions, but has a negative impact on overall stability. Frequency domain modeling, eigenvalue analysis and time domain simulations are used to demonstrate this conflict. A supplementary loop is proposed around the conventional droop control of each DG converter to stabilize the system while using high angle droop gains. The control loops are based on local power measurement and modulation of the d-axis voltage reference of each converter. The coordinated design of the supplementary control loops for all DGs is formulated as a parameter optimization problem and solved using an evolutionary technique. The supplementary droop control loop is shown to stabilize the system for a range of operating conditions while ensuring satisfactory load sharing.
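
A minimal sketch of the gain-selection idea, assuming a toy second-order small-signal model rather than the paper's full microgrid model: choose the supplementary gain that minimises the largest real part of the closed-loop eigenvalues using an evolutionary optimizer.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical small-signal model; the paper's problem would use the full
# microgrid state matrix assembled from all DG units and their loops.
A0 = np.array([[0.0, 1.0],
               [-4.0, -0.4]])        # poorly damped open-loop dynamics
B = np.array([[0.0], [1.0]])
C = np.array([[0.0, 1.0]])           # feedback of a locally measured signal

def worst_mode(k):
    """Largest real part of the closed-loop eigenvalues for gain k."""
    A = A0 - k[0] * (B @ C)
    return np.max(np.linalg.eigvals(A).real)

res = differential_evolution(worst_mode, bounds=[(0.0, 50.0)], seed=0)
print("gain:", res.x, " max Re(eig):", res.fun)
```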

Relevance:

10.00%

Publisher:

Abstract:

This paper firstly presents an extended ambiguity resolution model that deals with an ill-posed problem and constraints among the estimated parameters. In the extended model, a regularization criterion is used instead of traditional least squares in order to better estimate the float ambiguities. The existing models can be derived from this general model. Secondly, the paper examines the existing ambiguity searching methods from four aspects: exclusion of nuisance integer candidates based on the available integer constraints; integer rounding; integer bootstrapping; and integer least squares estimation. Finally, the paper systematically addresses the similarities and differences between the generalized TCAR and decorrelation methods from both theoretical and practical aspects.
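
Of the searching methods listed, integer rounding and integer bootstrapping are easy to make concrete. The sketch below is an assumed standalone implementation, not the paper's code: bootstrapping sequentially rounds each float ambiguity after conditioning it on the integers already fixed, using the unit lower-triangular factor of the covariance matrix, while plain rounding is simply np.round(a_hat).

```python
import numpy as np

def integer_bootstrap(a_hat, Q):
    """Sequential rounding with conditioning (integer bootstrapping).
    Uses Q = L D L^T with L unit lower triangular, obtained here from the
    Cholesky factor of the float-ambiguity covariance Q."""
    G = np.linalg.cholesky(Q)          # Q = G G^T
    L = G / np.diag(G)                 # unit lower triangular
    z = np.zeros(len(a_hat))
    resid = np.zeros(len(a_hat))       # residuals of conditional estimates
    for i in range(len(a_hat)):
        a_cond = a_hat[i] - L[i, :i] @ resid[:i]   # condition on fixed ones
        z[i] = np.round(a_cond)
        resid[i] = a_cond - z[i]
    return z
```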

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a proposed qualitative framework for discussing the heterogeneous burning of metallic materials, through the parameters and factors that influence the melting rate of the solid metallic fuel (either in a standard test or in service). During burning, the melting rate is related to the burning rate and is therefore an important parameter for describing and understanding the burning process, especially since the melting rate is commonly recorded during standard flammability testing of metallic materials and is incorporated into many relative flammability ranking schemes. However, whilst the factors that influence melting rate (such as oxygen pressure or specimen diameter) have been well characterized, there is a need for an improved understanding of how these parameters interact as part of the overall melting and burning of the system. Proposed here is the ‘Melting Rate Triangle’, which aims to provide this focus through a conceptual framework for understanding how the melting rate (of solid fuel) is determined and regulated during heterogeneous burning. In the paper, the proposed conceptual model is shown to be both (a) consistent with known trends and previously observed results, and (b) capable of being expanded to incorporate new data. Also shown are examples of how the Melting Rate Triangle can improve the interpretation of flammability test results. Slusser and Miller previously published an ‘Extended Fire Triangle’ as a useful conceptual model of ignition and the factors affecting ignition, providing industry with a framework for discussion. In this paper it is shown that the ‘Melting Rate Triangle’ provides a similar qualitative framework for burning, leading to an improved understanding of the factors affecting fire propagation and extinguishment.

Relevance:

10.00%

Publisher:

Abstract:

In this thesis we are interested in financial risk and the instrument we want to use is Value-at-Risk (VaR). VaR is the maximum loss over a given period of time at a given confidence level. Many definitions of VaR exist and some will be introduced throughout this thesis. There are two main ways to measure risk and VaR: through volatility and through percentiles. Large volatility in financial returns implies a greater probability of large losses, but also a larger probability of large profits. Percentiles describe tail behaviour. The estimation of VaR is a complex task. It is important to know the main characteristics of financial data in order to choose the best model. The existing literature is very wide, at times controversial, but helpful in drawing a picture of the problem. It is commonly recognised that financial data are characterised by heavy tails, time-varying volatility, asymmetric response to bad and good news, and skewness. Ignoring any of these features can lead to underestimating VaR, with a possible ultimate consequence being the default of the protagonist (firm, bank or investor). In recent years, skewness has attracted special attention. An open problem is the detection and modelling of time-varying skewness: is skewness constant, or is there significant variability which in turn can affect the estimation of VaR? This thesis aims to answer this question and to open the way to a new approach to modelling time-varying volatility (conditional variance) and skewness simultaneously. The new tools are modifications of the Generalised Lambda Distributions (GLDs). These are four-parameter distributions which allow the first four moments to be modelled nearly independently; in particular we are interested in what we will call para-moments, i.e., mean, variance, skewness and kurtosis. The GLDs are used in two different ways. Firstly, semi-parametrically, we consider a moving window to estimate the parameters and calculate the percentiles of the GLDs. Secondly, parametrically, we attempt to extend the GLDs to include time-varying dependence in the parameters. We use local linear regression to estimate the conditional mean and conditional variance semi-parametrically. The method is not efficient enough to capture all the dependence structure in the three indices (ASX 200, S&P 500 and FT 30); however, it provides an idea of the DGP underlying the process and helps in choosing a good technique to model the data. We find that the GLDs suggest that moments up to the fourth order do not always exist; their existence appears to vary over time. This is a very important finding, considering that past papers (see for example Bali et al., 2008; Hashmi and Tay, 2007; Lanne and Pentti, 2007) modelled time-varying skewness while implicitly assuming the existence of the third moment. The GLDs also suggest that the mean, variance, skewness and, in general, the conditional distribution vary over time, as already suggested by the existing literature. The GLDs give good results in estimating VaR on three real indices, the ASX 200, S&P 500 and FT 30, with results very similar to those provided by historical simulation.
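
To make the moving-window percentile idea concrete, here is a minimal historical-simulation sketch; the window length and confidence level are illustrative, and the thesis's semi-parametric variant would fit the four GLD parameters in each window and read the percentile off the fitted quantile function instead.

```python
import numpy as np

def rolling_var(returns, window=250, level=0.01):
    """Historical-simulation VaR: the empirical 'level' percentile of the
    last 'window' returns, reported as a positive loss figure."""
    returns = np.asarray(returns)
    out = np.full(returns.size, np.nan)
    for t in range(window, returns.size):
        out[t] = -np.quantile(returns[t - window:t], level)
    return out

# e.g. var99 = rolling_var(np.diff(np.log(index_levels)), window=250, level=0.01)
```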

Relevance:

10.00%

Publisher:

Abstract:

In this work, we investigate an alternative bootstrap approach based on a result of Ramsey [F.L. Ramsey, Characterization of the partial autocorrelation function, Ann. Statist. 2 (1974), pp. 1296-1301] and on the Durbin-Levinson algorithm to obtain a surrogate series from linear Gaussian processes with long-range dependence. We compare this bootstrap method with other existing procedures in an extensive Monte Carlo experiment by estimating, parametrically and semi-parametrically, the memory parameter d. We consider Gaussian and non-Gaussian processes to demonstrate the robustness of the method to deviations from normality. The approach is also useful for estimating confidence intervals for the memory parameter d, improving the coverage level of the interval.
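
A minimal sketch of the Durbin-Levinson machinery the method relies on: given autocovariances (here those of fractional Gaussian noise, a standard long-memory example with d = H - 1/2), the recursion yields one-step prediction coefficients and innovation variances, from which an exact Gaussian surrogate series can be simulated. Function names and the fGn choice are assumptions for the sketch, not the paper's code.

```python
import numpy as np

def fgn_acov(n, H):
    """Autocovariances gamma(0..n) of unit-variance fractional Gaussian
    noise with Hurst parameter H (long memory for H > 1/2)."""
    k = np.arange(n + 1, dtype=float)
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))

def durbin_levinson_surrogate(gamma, rng):
    """Exact Gaussian simulation from autocovariances via the Durbin-Levinson
    recursion: each value is its best linear predictor plus an independent
    innovation with the recursion's conditional variance."""
    n = len(gamma) - 1
    x, phi = np.zeros(n), np.zeros(n)
    v = gamma[0]                                        # innovation variance v_0
    x[0] = np.sqrt(v) * rng.standard_normal()
    for k in range(1, n):
        phi_kk = (gamma[k] - phi[:k-1] @ gamma[k-1:0:-1]) / v
        phi[:k-1] = phi[:k-1] - phi_kk * phi[:k-1][::-1]  # update phi_{k,1..k-1}
        phi[k-1] = phi_kk
        v *= 1.0 - phi_kk ** 2                          # v_k
        x[k] = phi[:k] @ x[k-1::-1] + np.sqrt(v) * rng.standard_normal()
    return x

# e.g. durbin_levinson_surrogate(fgn_acov(500, H=0.8), np.random.default_rng(0))
```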

Relevance:

10.00%

Publisher:

Abstract:

Dynamic and controlled rate thermal analysis (CRTA) has been used to characterise alunites of formula [M(Al)3(SO4)2(OH)6], where M is one of the cations K+, Na+ or NH4+. Thermal decomposition occurs in a series of steps: (a) dehydration, (b) well-defined dehydroxylation and (c) desulphation. CRTA offers better resolution and a more detailed interpretation of the water formation processes by approaching equilibrium conditions of decomposition through the elimination of slow heat transfer to the sample as a controlling parameter of the decomposition process. Constant-rate decomposition processes of water formation reveal the subtle nature of dehydration and dehydroxylation.

Relevance:

10.00%

Publisher:

Abstract:

Image-based visual servo (IBVS) is a simple, efficient and robust technique for vision-based control. Although technically a local method, in practice it demonstrates almost global convergence. However, IBVS performs very poorly in cases that involve large rotations about the optical axis. It is well known that re-parameterizing the problem using polar, instead of Cartesian, coordinates of the feature points overcomes this limitation. First, simulation and experimental results are presented to show the complementarity of these two parameterizations. We then describe a new hybrid visual servo strategy based on combining polar and Cartesian image Jacobians.
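
A sketch of the two parameterizations for a single point feature, using the classical interaction matrix and the chain rule for the polar change of variables; the paper's exact rule for combining the two Jacobians is not reproduced here.

```python
import numpy as np

def L_point(x, y, Z):
    """Classical interaction matrix of a point feature in normalised image
    coordinates (x, y) at depth Z (rows: xdot, ydot versus camera twist)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x ** 2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y ** 2, -x * y, -x]])

def L_polar(x, y, Z):
    """Polar re-parameterisation via the chain rule: with r = |(x, y)| and
    theta = atan2(y, x), (rdot, thetadot) = J @ (xdot, ydot)."""
    r = np.hypot(x, y)
    J = np.array([[x / r, y / r],
                  [-y / r ** 2, x / r ** 2]])
    return J @ L_point(x, y, Z)

# A hybrid scheme can stack rows from both parameterisations and compute the
# camera velocity from the pseudo-inverse of the stacked Jacobian.
```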

Relevance:

10.00%

Publisher:

Abstract:

To navigate successfully in a novel environment a robot needs to be able to Simultaneously Localize And Map (SLAM) its surroundings. The most successful solutions to this problem so far have involved probabilistic algorithms, but there has been much promising work involving systems based on the workings of the part of the rodent brain known as the hippocampus. In this paper we present a biologically plausible system called RatSLAM that uses competitive attractor networks to carry out SLAM in a probabilistic manner. The system can effectively perform parameter self-calibration and SLAM in one dimension. Tests in two-dimensional environments revealed the inability of the RatSLAM system to maintain multiple pose hypotheses in the face of ambiguous visual input. These results support recent rat experimentation suggesting that current competitive attractor models are not a complete solution to the hippocampal modelling problem.
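
As a rough illustration of the competitive attractor mechanism (with invented constants, not RatSLAM's actual pose-cell parameters), local excitation through a Gaussian kernel plus uniform global inhibition lets one of two competing activity packets win, which is exactly the behaviour that makes maintaining multiple pose hypotheses hard.

```python
import numpy as np

N = 100
act = np.zeros(N)
act[30], act[70] = 1.0, 0.8                            # two competing hypotheses
kernel = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)  # excitatory weights

for _ in range(50):
    excite = np.convolve(act, kernel, mode="same")     # local excitation
    act = np.maximum(act + excite - excite.mean(), 0.0)  # global inhibition
    act /= act.sum()                                   # normalise total activity

print(int(np.argmax(act)))                             # the stronger packet wins
```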

Relevance:

10.00%

Publisher:

Abstract:

RatSLAM is a vision-based SLAM system based on extended models of the rodent hippocampus. RatSLAM creates environment representations that can be processed by the experience mapping algorithm to produce maps suitable for goal recall. The experience mapping algorithm also allows RatSLAM to map environments many times larger than could be achieved with a one-to-one correspondence between the map and the environment, by reusing the RatSLAM maps to represent multiple sections of the environment. This paper describes experiments investigating the effects of the environment-representation size ratio and visual ambiguity on mapping and goal navigation performance. The experiments demonstrate that system performance is only weakly dependent on either parameter in isolation, but strongly dependent on their joint values.

Relevance:

10.00%

Publisher:

Abstract:

The Simultaneous Localisation And Mapping (SLAM) problem is one of the major challenges in mobile robotics. Probabilistic techniques using high-end range-finding devices are well established in the field, but recent work has investigated vision-only approaches. We present an alternative to the leading existing techniques, which extracts approximate rotational and translational velocity information from a vehicle-mounted consumer camera, without tracking landmarks. When coupled with an existing SLAM system, the vision module is able to map a 45-metre-long indoor loop and a 1.6 km long outdoor road loop, without any parameter or system adjustment between tests. The work serves as a promising pilot study into ground-based vision-only SLAM, with minimal geometric interpretation of the environment.

Relevance:

10.00%

Publisher:

Abstract:

When asymptotic series methods are applied in order to solve problems that arise in applied mathematics in the limit that some parameter becomes small, they are unable to demonstrate behaviour that occurs on a scale that is exponentially small compared to the algebraic terms of the asymptotic series. There are many examples of physical systems where behaviour on this scale has important effects and, as such, a range of techniques known as exponential asymptotic techniques were developed that may be used to examine behaviour on this exponentially small scale. Many problems in applied mathematics may be represented by behaviour within the complex plane, which may subsequently be examined using asymptotic methods. These problems frequently demonstrate behaviour known as Stokes phenomenon, which involves rapid switches of behaviour on an exponentially small scale in the neighbourhood of some curve known as a Stokes line. Exponential asymptotic techniques have been applied in order to obtain an expression for this exponentially small switching behaviour in the solutions to ordinary and partial differential equations. The problem of potential flow over a submerged obstacle has been previously considered in this manner by Chapman & Vanden-Broeck (2006). By representing the problem in the complex plane and applying an exponential asymptotic technique, they were able to detect the switching, and subsequent behaviour, of exponentially small waves on the free surface of the flow in the limit of small Froude number, specifically considering the case of flow over a step with one Stokes line present in the complex plane. We consider an extension of this work to flow configurations with multiple Stokes lines, such as flow over an inclined step, or flow over a bump or trench. The resultant expressions are analysed and demonstrate interesting implications, such as the presence of exponentially sub-subdominant intermediate waves and the possibility of trapped surface waves for flow over a bump or trench. We then consider the effect of multiple Stokes lines in higher-order equations, particularly investigating the behaviour of higher-order Stokes lines in the solutions to partial differential equations. These higher-order Stokes lines switch off the ordinary Stokes lines themselves, adding a layer of complexity to the overall Stokes structure of the solution. Specifically, we consider the different approaches taken by Howls et al. (2004) and Chapman & Mortimer (2005) in applying exponential asymptotic techniques to determine the higher-order Stokes phenomenon behaviour in the solution to a particular partial differential equation.
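
Schematically, and in standard notation rather than that of any particular paper cited above, the switching takes the following form: the solution carries an exponentially small term whose prefactor, the Stokes multiplier, switches across a Stokes line (where the singulant χ is real and positive), with Berry's error-function smoothing.

```latex
y(z;\epsilon) \sim \sum_{n=0}^{N} a_n(z)\,\epsilon^{n}
   + \mathcal{S}(z)\,F(z)\,e^{-\chi(z)/\epsilon},
\qquad
\mathcal{S}(z) \approx \frac{1}{2}\left[1 + \operatorname{erf}\!\left(
   \frac{\operatorname{Im}\chi(z)}{\sqrt{2\,\epsilon\,\operatorname{Re}\chi(z)}}\right)\right].
```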

Relevance:

10.00%

Publisher:

Abstract:

A common scenario in many pairing-based cryptographic protocols is that one argument in the pairing is fixed as a long-term secret key or a constant parameter in the system. In these situations, the runtime of Miller's algorithm can be significantly reduced by storing precomputed values that depend on the fixed argument, prior to the input or existence of the second argument. In light of recent developments in pairing computation, we show that the computation of the Miller loop can be sped up by up to 37% if precomputation is employed, with our method being up to 19.5% faster than previous precomputation techniques.
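
A toy sketch of the split that makes fixed-argument precomputation possible, under heavy simplifying assumptions: affine coordinates over a prime field, vertical-line denominators dropped as in denominator elimination for even embedding degree, final exponentiation ignored, and edge cases (T = ±P, the point at infinity) unhandled. It is not the paper's method, only the structural idea that the line functions depend solely on the fixed point P.

```python
def inv(x, p):
    """Modular inverse in F_p via Fermat's little theorem."""
    return pow(x, p - 2, p)

def precompute_lines(P, n, a, p):
    """Walk the Miller loop once for the fixed argument P on
    y^2 = x^3 + a*x + b (mod p), storing each line's slope and base point."""
    lines = []
    T = P
    for bit in bin(n)[3:]:                 # MSB consumed by T = P
        lam = (3 * T[0] ** 2 + a) * inv(2 * T[1], p) % p   # tangent slope
        lines.append(("dbl", lam, T))
        x3 = (lam ** 2 - 2 * T[0]) % p
        T = (x3, (lam * (T[0] - x3) - T[1]) % p)
        if bit == "1":
            lam = (T[1] - P[1]) * inv(T[0] - P[0], p) % p  # chord slope
            lines.append(("add", lam, T))
            x3 = (lam ** 2 - T[0] - P[0]) % p
            T = (x3, (lam * (T[0] - x3) - T[1]) % p)
    return lines

def miller_eval(lines, Q, p):
    """Cheap per-Q phase: evaluate the stored lines at Q and accumulate f."""
    f = 1
    for kind, lam, (xt, yt) in lines:
        if kind == "dbl":
            f = f * f % p                  # squaring only on doubling steps
        f = f * (Q[1] - yt - lam * (Q[0] - xt)) % p
    return f
```

The point is structural: precompute_lines runs once for the fixed P, while miller_eval, which costs only a few field multiplications per stored line, runs for each second argument Q.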