77 results for Hydrological forecasting.
in Queensland University of Technology - ePrints Archive
Rainfall, Mosquito Density and the Transmission of Ross River Virus: A Time-Series Forecasting Model
Comparison of Regime Switching, Probit and Logit Models in Dating and Forecasting US Business Cycles
Abstract:
The Queensland Department of Public Works (DPW) holds a significant interest in the Brisbane Central Business District (CBD), controlling approximately 20 percent of the office space within its confines. This comprises a total of 333,903 square metres of space, of which 170,111 square metres is owned and 163,792 square metres is leased from the private sector. The department's nominal ownership extends to several enduring, landmark buildings as well as several modern office towers. The portfolio includes the oldest building in the CBD, the former Commissariat Stores building, and one of the newest, a 15,000 square metre office tower under construction at 33 Charlotte Street.
Abstract:
In this thesis we are interested in financial risk, and the instrument we want to use is Value-at-Risk (VaR). VaR is the maximum loss over a given period of time at a given confidence level. Many definitions of VaR exist, and some will be introduced throughout this thesis. There are two main ways to measure risk and VaR: through volatility and through percentiles. Large volatility in financial returns implies a greater probability of large losses, but also a larger probability of large profits. Percentiles describe tail behaviour. The estimation of VaR is a complex task. It is important to know the main characteristics of financial data in order to choose the best model. The existing literature is very wide, at times controversial, but helpful in drawing a picture of the problem. It is commonly recognised that financial data are characterised by heavy tails, time-varying volatility, asymmetric response to bad and good news, and skewness. Ignoring any of these features can lead to underestimating VaR, with a possible ultimate consequence being the default of the protagonist (firm, bank or investor). In recent years, skewness has attracted special attention. An open problem is the detection and modelling of time-varying skewness. Is skewness constant, or is there significant variability which in turn can affect the estimation of VaR? This thesis aims to answer this question and to open the way to a new approach for modelling time-varying volatility (conditional variance) and skewness simultaneously. The new tools are modifications of the Generalised Lambda Distributions (GLDs). These are four-parameter distributions which allow the first four moments to be modelled nearly independently: in particular, we are interested in what we will call para-moments, i.e., mean, variance, skewness and kurtosis. The GLDs will be used in two different ways. Firstly, semi-parametrically, we consider a moving window to estimate the parameters and calculate the percentiles of the GLDs.
Secondly, parametrically, we attempt to extend the GLDs to include time-varying dependence in the parameters. We use local linear regression to estimate the conditional mean and conditional variance semi-parametrically. The method is not efficient enough to capture all the dependence structure in the three indices (ASX 200, S&P 500 and FT 30); however, it provides an idea of the data generating process (DGP) underlying the series and helps in choosing a good technique to model the data. We find that the GLDs suggest that moments up to the fourth order do not always exist; their existence appears to vary over time. This is a very important finding, considering that past papers (see, for example, Bali et al., 2008; Hashmi and Tay, 2007; Lanne and Saikkonen, 2007) modelled time-varying skewness while implicitly assuming the existence of the third moment. The GLDs nevertheless suggest that the mean, variance, skewness and, in general, the conditional distribution vary over time, as already suggested by the existing literature. The GLDs give good results in estimating VaR on three real indices, the ASX 200, S&P 500 and FT 30, with results very similar to those provided by historical simulation.
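The moving-window percentile idea behind both the semi-parametric GLD estimator and the historical-simulation benchmark can be sketched as follows. This is a minimal illustration of historical-simulation VaR, not the thesis's GLD-based estimator; the window length and confidence level are conventional placeholder values.

```python
import numpy as np

def rolling_var(returns, window=250, confidence=0.99):
    """Moving-window VaR: at each step, VaR is the empirical lower-tail
    percentile of the previous `window` returns, reported as a positive
    loss figure. A sketch of the historical-simulation benchmark."""
    alpha = 100.0 * (1.0 - confidence)   # e.g. the 1st percentile for 99% VaR
    var = np.full(len(returns), np.nan)  # undefined until a full window exists
    for t in range(window, len(returns)):
        var[t] = -np.percentile(returns[t - window:t], alpha)
    return var
```

A GLD-based variant would replace the empirical percentile with the quantile of a Generalised Lambda Distribution fitted to the same window, letting all four para-moments vary as the window moves.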
Abstract:
At least two important transportation planning activities rely on planning-level crash prediction models. One is motivated by the Transportation Equity Act for the 21st Century, which requires departments of transportation and metropolitan planning organizations to consider safety explicitly in the transportation planning process. The second could arise from a need for state agencies to establish incentive programs to reduce injuries and save lives. Both applications require a forecast of safety for a future period. Planning-level crash prediction models for the Tucson, Arizona, metropolitan region are presented to demonstrate the feasibility of such models. Data were separated into fatal, injury, and property-damage crashes. To accommodate overdispersion in the data, negative binomial regression models were applied. To accommodate the simultaneity of fatality and injury crash outcomes, the models were estimated simultaneously. All models produce crash forecasts at the traffic analysis zone level. Statistically significant (p-values < 0.05) and theoretically meaningful variables for the fatal crash model included population density, persons 17 years old or younger as a percentage of the total population, and intersection density. Significant variables for the injury and property-damage crash models were population density, number of employees, intersection density, percentage of miles of principal arterials, percentage of miles of minor arterials, and percentage of miles of urban collectors. Among several conclusions, it is suggested that planning-level safety models are feasible and may play a role in future planning activities. However, caution must be exercised with such models.
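Why negative binomial regression is the natural choice here can be shown with a short simulation: crash counts behave like a Poisson process whose rate varies across zones, and this Poisson-gamma mixture makes the variance exceed the mean (overdispersion), which a plain Poisson model cannot represent. All numbers below are synthetic, not the Tucson data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000        # synthetic zone-periods
mu = 2.0           # mean crash count per zone
dispersion = 1.0   # gamma shape; smaller values mean more overdispersion

# Poisson-gamma mixture: each zone's rate is scaled by unobserved,
# mean-one gamma heterogeneity, yielding negative binomial counts.
heterogeneity = rng.gamma(dispersion, 1.0 / dispersion, n)
counts = rng.poisson(mu * heterogeneity)

# Negative binomial variance is mu + mu**2 / dispersion = 2 + 4 = 6,
# against a mean of 2 -- clear overdispersion a Poisson model would miss.
sample_mean, sample_var = counts.mean(), counts.var()
```

In practice the zone-level means would be modelled as an exponential function of covariates such as population density and intersection density, with the dispersion parameter estimated from the data.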
Abstract:
The driving task requires sustained attention over prolonged periods and can be performed in highly predictable or repetitive environments. Such conditions can induce hypovigilance and impair responses to critical events. Identifying such impairment in monotonous conditions has been a major subject of research, but no research to date has attempted to predict it in real time. This pilot study aims to show that performance decrements due to monotonous tasks can be predicted through mathematical modelling that takes sensation-seeking levels into account. A short vigilance task sensitive to brief lapses in vigilance, the Sustained Attention to Response Task, is used to assess participants' performance. The prediction framework developed on this task could be extended to a monotonous driving task. A Hidden Markov Model (HMM) is proposed to predict participants' lapses in alertness. The driver's vigilance evolution is modelled as a hidden state and is correlated with a surrogate measure: the participant's reaction times. This experiment shows that the monotony of the task can lead to an important decline in performance in less than five minutes. This impairment can be predicted four minutes in advance with 86% accuracy using HMMs. The experiment showed that mathematical models such as HMMs can efficiently predict hypovigilance through surrogate measures. The presented model could lead to the development of an in-vehicle device that detects driver hypovigilance in advance and warns the driver accordingly, offering the potential to enhance road safety and prevent road crashes.
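The core of this approach, inferring a hidden alert/hypovigilant state from observed reaction times, can be sketched with the HMM forward algorithm. All probabilities below are illustrative placeholders, not the study's fitted parameters, and reaction time is crudely discretised into fast/slow.

```python
import numpy as np

# Two hidden states: 0 = alert, 1 = hypovigilant.
# Observations: 0 = fast reaction time, 1 = slow reaction time.
trans = np.array([[0.95, 0.05],   # P(next state | current state)
                  [0.10, 0.90]])
emit = np.array([[0.8, 0.2],      # P(fast/slow RT | alert)
                 [0.3, 0.7]])     # P(fast/slow RT | hypovigilant)
init = np.array([0.9, 0.1])       # prior: drivers start out alert

def filter_posterior(obs):
    """Forward algorithm: P(hidden state | observations so far)."""
    belief = init * emit[:, obs[0]]
    belief /= belief.sum()
    for o in obs[1:]:
        belief = (trans.T @ belief) * emit[:, o]  # predict, then update
        belief /= belief.sum()
    return belief

# A run of slow reaction times shifts belief toward hypovigilance,
# which is what an in-vehicle warning device would act on.
posterior = filter_posterior([1, 1, 1, 1])
```

Predicting a lapse minutes ahead, as in the study, would additionally propagate this belief forward through the transition matrix without new observations.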
Abstract:
The Upper Roper River is one of Australia's unique tropical rivers, largely untouched by development. The Upper Roper River catchment comprises the sub-catchments of the Waterhouse River and Roper Creek, the two tributaries of the Roper River. There is a complex geological setting with different aquifer types. In this seasonal system, close interaction between surface water and groundwater contributes to both streamflow and sustaining ecosystems. The interaction is highly variable between seasons. A conceptual hydrogeological model was developed to investigate the different hydrological processes and geochemical parameters, and to determine the baseline characteristics of the water resources of this pristine catchment. In the catchment, long-term average rainfall is around 850 mm and is summer dominant, which significantly influences the total hydrological system. The difference between seasons is pronounced, with high rainfall of up to 600 mm/month in the wet season and negligible rainfall in the dry season. Canopy interception significantly reduces the amount of effective rainfall because of the native vegetation cover in the pristine catchment. Evaporation exceeds rainfall for the majority of the year. Due to elevated evaporation and high temperatures in the tropics, at least 600 mm of annual rainfall is required to generate potential recharge. Analysis of trends in 120 years of rainfall data helped define "wet" and "dry" periods: a decreasing trend corresponds to dry periods, and an increasing trend to wet periods. The period from 1900 to 1970 was considered Dry period 1, when some years had no effective rainfall and, where there was, rainfall intensity was around 300 mm. The period 1970–1985 was identified as Wet period 1, when positive effective rainfall occurred in almost every year and its intensity reached up to 700 mm. The period 1985–1995 was Dry period 2, with similar characteristics to Dry period 1.
Finally, the last decade was Wet period 2, with effective rainfall intensity of up to 800 mm. This variability in rainfall over decades increased or decreased recharge and discharge, improving or reducing surface water and groundwater quantity and quality across the different wet and dry periods. Stream discharge follows the rainfall pattern. In the wet season, the aquifer is replenished, groundwater levels and groundwater discharge are high, and surface runoff is the dominant component of streamflow. The Waterhouse River contributes two thirds and Roper Creek one third of the Roper River flow. As the dry season progresses, surface runoff depletes and groundwater becomes the main component of streamflow. Flow in the Waterhouse River is negligible and the Roper Creek dries up, but the Roper River maintains its flow throughout the year. This is due to groundwater and spring discharge from the highly permeable Tindall Limestone and tufa aquifers. Rainfall seasonality and the lithology of both the catchment and the aquifers are shown to influence water chemistry. In the wet season, dilution of water bodies by rainwater is the main process. In the dry season, when groundwater provides baseflow to the streams, their chemical composition reflects the lithology of the aquifers, in particular the karstic areas. Water chemistry distinguishes four types of aquifer material, described as alluvium, sandstone, limestone and tufa. Surface water in the headwaters of the Waterhouse River, the Roper Creek and their tributaries is fresh, and reflects the alluvium and sandstone aquifers. At and downstream of the confluence of the Roper River, river water chemistry indicates the influence of rainfall dilution in the wet season and the signature of the Tindall Limestone and tufa aquifers in the dry season. Rainbow Spring on the Waterhouse River and Bitter Spring on the Little Roper River (known as Roper Creek at the headwaters) discharge from the Tindall Limestone.
Botanic Walk Spring and Fig Tree Spring discharge into the Roper River from tufa. The source of water was defined based on the chemical composition of the springs, surface water and groundwater. The mechanisms controlling surface water chemistry were examined to determine whether precipitation, evaporation or rock weathering dominates the water's chemical composition. Simple water balance models for the catchment have been developed. The important aspects to be considered in water resource planning for this total system are the naturally high salinity in the region, especially in the downstream sections, and how unpredictable climate variation may affect the natural seasonal variability of water volumes and surface-subsurface interaction.
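A simple water balance of the kind the abstract mentions can be sketched in a few lines. This is a toy annual formulation under the abstract's assumptions only: effective rainfall is what survives canopy interception, and potential recharge is generated once evaporative losses are also met (the abstract quotes roughly 600 mm of annual rainfall as the threshold); the split between interception and evaporation below is hypothetical.

```python
def annual_potential_recharge(rainfall_mm, interception_mm, evaporation_mm):
    """Toy annual water balance: rainfall minus canopy interception gives
    effective rainfall; whatever exceeds evaporative losses becomes
    potential recharge. All terms are annual depths in millimetres."""
    effective = max(0.0, rainfall_mm - interception_mm)
    return max(0.0, effective - evaporation_mm)

# With ~600 mm of combined losses, a dry year generates no recharge
# while the catchment's ~850 mm average year generates some.
wet_year = annual_potential_recharge(850.0, 100.0, 500.0)
dry_year = annual_potential_recharge(550.0, 100.0, 500.0)
```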
Abstract:
Purpose – The purpose of this paper is to jointly assess the impact of regulatory reform for corporate fundraising in Australia (CLERP Act 1999) and the relaxation of ASX admission rules in 1999 on the accuracy of management earnings forecasts in initial public offer (IPO) prospectuses. The relaxation of ASX listing rules permitted a new category of new economy firms (commitments test entities (CTEs)) to list without a prior history of profitability, while the CLERP Act (introduced in 2000) was accompanied by tighter disclosure obligations and stronger enforcement action by the corporate regulator (ASIC). Design/methodology/approach – All IPO earnings forecasts in prospectuses lodged between 1998 and 2003 are examined to assess the pre- and post-CLERP Act impact. Based on active ASIC enforcement action in the post-reform period, IPO firms are hypothesised to provide more accurate forecasts, particularly CTE firms, which are less likely to have a reasonable basis for forecasting. Research models are developed to empirically test the impact of the reforms on CTE and non-CTE IPO firms. Findings – The new regulatory environment has had a positive impact on management forecasting behaviour. In the post-CLERP Act period, the accuracy of prospectus forecasts and their revisions significantly improved and, as expected, the results are primarily driven by CTE firms. However, the majority of prospectus forecasts continue to be materially inaccurate. Originality/value – The results highlight the need to control for both the changing nature of listed firms and the level of enforcement action when examining responses to regulatory changes to corporate fundraising activities.
Abstract:
Forecasts generated by time series models traditionally place greater weight on more recent observations. This paper develops an alternative semi-parametric method for forecasting that does not rely on this convention and applies it to the problem of forecasting asset return volatility. In this approach, a forecast is a weighted average of historical volatility, with the greatest weight given to periods that exhibit similar market conditions to the time at which the forecast is being formed. Weighting is determined by comparing short-term trends in volatility across time (as a measure of market conditions) by means of a multivariate kernel scheme. It is found that the semi-parametric method produces forecasts that are significantly more accurate than a number of competing approaches at both short and long forecast horizons.
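The kernel-weighting idea described above can be sketched as follows: instead of down-weighting older observations, each historical period is weighted by how closely its short-term volatility trend resembles current conditions, via a Gaussian product kernel. This is a minimal illustration of the weighting scheme, not the paper's exact specification; the trend length and bandwidth are placeholder choices.

```python
import numpy as np

def kernel_volatility_forecast(vol, trend_len=5, bandwidth=0.5):
    """Forecast next-period volatility as a weighted average of past
    volatilities, giving the greatest weight to periods whose preceding
    short-term trend resembles the current one (Gaussian kernel)."""
    current = vol[-trend_len:]                 # today's market conditions
    weights, candidates = [], []
    for t in range(trend_len, len(vol) - 1):
        past = vol[t - trend_len:t]            # trend just before period t
        dist2 = np.sum(((past - current) / bandwidth) ** 2)
        weights.append(np.exp(-0.5 * dist2))   # similar trend => high weight
        candidates.append(vol[t])              # volatility that followed it
    weights = np.asarray(weights)
    return np.dot(weights, candidates) / weights.sum()
```

With a constant volatility series, every period matches current conditions equally and the forecast simply reproduces that constant; in noisy data, the kernel concentrates weight on historical episodes resembling the present regime.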
Abstract:
Forecasts of volatility and correlation are important inputs into many practical financial problems. Broadly speaking, there are two ways of generating forecasts of these variables. Firstly, time-series models apply a statistical weighting scheme to historical measurements of the variable of interest. The alternative methodology extracts forecasts from the market-traded value of option contracts. An efficient options market should be able to produce superior forecasts, as it utilises a larger information set comprising not only historical information but also the market equilibrium expectation of options market participants. While much research has been conducted into the relative merits of these approaches, this thesis extends the literature along several lines through three empirical studies. Firstly, it is demonstrated that there are statistically significant benefits to adjusting implied volatility for the volatility risk premium for the purposes of univariate volatility forecasting. Secondly, high-frequency option-implied measures are shown to lead to superior forecasts of the stochastic component of intraday volatility, and these in turn lead to superior forecasts of total intraday volatility. Finally, realised and option-implied measures of equicorrelation are shown to dominate measures based on daily returns.