82 results for Non Linear Time Series

in Queensland University of Technology - ePrints Archive


Relevance:

100.00%

Publisher:

Abstract:

Background: Birth weight and length show seasonal fluctuations. Previous analyses of birth weight by latitude identified seemingly contradictory results, showing both 6- and 12-monthly periodicities in weight. The aims of this paper are twofold: (a) to explore seasonal patterns in the large Danish Medical Birth Register, and (b) to explore models based on seasonal exposures and a non-linear exposure-risk relationship. Methods: Birth weights and lengths for over 1.5 million Danish singleton live births were examined for seasonality. We modelled seasonal patterns based on linear, U- and J-shaped exposure-risk relationships. We then added an extra layer of complexity by modelling weighted population-based exposure patterns. Results: The Danish data showed clear seasonal fluctuations in both birth weight and birth length. A bimodal model best fits the data; however, the amplitude of the 6- and 12-month peaks changed over time. In the modelling exercises, U- and J-shaped exposure-risk relationships generate time series with both 6- and 12-month periodicities. Changing the weightings of the population exposure risks results in unexpected properties. A J-shaped exposure-risk relationship with a diminishing population exposure over time fitted the observed seasonal pattern in the Danish birth weight data. Conclusion: In keeping with many other studies, Danish birth anthropometric data show complex and shifting seasonal patterns. We speculate that annual periodicities combined with non-linear exposure-risk models may underlie these findings. Understanding the nature of seasonal fluctuations can help generate candidate exposures.
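A minimal numerical sketch of the reasoning in this abstract (all values are illustrative, not taken from the Danish data): pushing a purely annual exposure cycle through a J-shaped exposure-risk function produces an outcome series whose spectrum has power at both the 12-month and the 6-month periods.

```python
import numpy as np

# Illustrative sketch (not the authors' model): an exposure that cycles
# once per year, pushed through a J-shaped exposure-risk function, yields
# an outcome series with spectral power at both the 12- and 6-month periods.
months = np.arange(240)                        # 20 years of monthly data
exposure = np.sin(2 * np.pi * months / 12)     # purely annual cycle

def j_shaped_risk(e, a=1.0, b=0.6):
    # Quadratic-plus-linear form: an asymmetric "J" around e = 0 (hypothetical).
    return a * e**2 + b * e

risk = j_shaped_risk(exposure)

spectrum = np.abs(np.fft.rfft(risk - risk.mean()))
freqs = np.fft.rfftfreq(months.size, d=1.0)    # cycles per month
for period in (12, 6):
    idx = np.argmin(np.abs(freqs - 1 / period))
    print(f"power at {period}-month period: {spectrum[idx]:.1f}")
# Both periods carry non-trivial power, mirroring the 6- and 12-month
# peaks discussed in the abstract.
```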

Relevance:

100.00%

Publisher:

Abstract:

Objectives: To investigate the effect of hot and cold temperatures on ambulance attendances. Design: An ecological time series study. Setting and participants: The study was conducted in Brisbane, Australia. We collected information on 783 935 daily ambulance attendances, along with data on associated meteorological variables and air pollutants, for the period 2000–2007. Outcome measures: The total number of ambulance attendances was examined, along with those related to cardiovascular, respiratory and other non-traumatic conditions. Generalised additive models were used to assess the relationship between daily mean temperature and the number of ambulance attendances. Results: There were statistically significant relationships between mean temperature and ambulance attendances for all categories. Acute heat effects were found, with a 1.17% (95% CI: 0.86%, 1.48%) increase in total attendances per 1 °C increase above the threshold (0–1 days lag). Cold effects were delayed and longer lasting, with a 1.30% (0.87%, 1.73%) increase in total attendances per 1 °C decrease below the threshold (2–15 days lag). Harvesting was observed following the initial acute periods of heat effects, but not for cold effects. Conclusions: This study shows that both hot and cold temperatures led to increases in ambulance attendances for different medical conditions. Our findings support the notion that ambulance attendance records are a valid and timely source of data for use in the development of local weather/health early warning systems.
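A hedged sketch of the kind of model described here, using Python's statsmodels rather than whatever software the study used; the data frame, column names and spline degrees of freedom are hypothetical stand-ins for the Brisbane dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical daily data: counts of ambulance attendances plus mean
# temperature, humidity and a day-of-week factor (illustrative names,
# not the study's actual variables).
rng = np.random.default_rng(0)
n = 2922                                        # roughly 8 years of days
df = pd.DataFrame({
    "attendances": rng.poisson(270, size=n),
    "temp": rng.normal(21, 5, size=n),
    "humidity": rng.uniform(40, 90, size=n),
    "dow": rng.integers(0, 7, size=n),
})

# Poisson regression with a B-spline basis for temperature: a simple
# stand-in for the generalised additive models used in the study.
model = smf.glm(
    "attendances ~ bs(temp, df=5) + humidity + C(dow)",
    data=df,
    family=sm.families.Poisson(),
).fit()
print(model.summary())
```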

Relevance:

100.00%

Publisher:

Abstract:

Background: Extreme temperatures are associated with cardiovascular disease (CVD) deaths. Previous studies have investigated the relative CVD mortality risk of temperature, but this risk is heavily influenced by deaths in frail elderly persons. To better estimate the burden of extreme temperatures, we estimated their effects on years of life lost due to CVD. Methods and Results: The data were daily observations on weather and CVD mortality for Brisbane, Australia, between 1996 and 2004. We estimated the association between daily mean temperature and years of life lost due to CVD, after adjusting for trend, season, day of the week, and humidity. To examine the non-linear and delayed effects of temperature, a distributed lag non-linear model was used. The model's residuals were examined to investigate whether there were any added effects of cold spells and heat waves. The exposure-response curve between temperature and years of life lost was U-shaped, with the lowest years of life lost at 24 °C. The curve rose more sharply at extremes of heat than of cold. The effect of cold peaked two days after exposure, whereas the greatest effect of heat occurred on the day of exposure. There were significant added effects of heat waves on years of life lost. Conclusions: Increased years of life lost due to CVD are associated with both cold and hot temperatures. Research on specific interventions is needed to reduce temperature-related years of life lost from CVD deaths.
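The distributed lag non-linear model rests on a cross-basis: the lagged exposure history is expanded in a basis over temperature and a basis over lag, and the tensor-product columns enter the regression. The sketch below illustrates that construction with a polynomial temperature basis and lag strata; it is a simplified stand-in, not the implementation used in the study, and the temperature series is simulated.

```python
import numpy as np

# Sketch of the cross-basis construction behind distributed lag
# non-linear models (illustrative only).
rng = np.random.default_rng(1)
temp = rng.normal(21, 5, size=1000)            # daily mean temperature
max_lag = 21

# Lag matrix: column l holds the temperature l days before each day.
lagged = np.column_stack(
    [np.roll(temp, l) for l in range(max_lag + 1)]
)[max_lag:]                                    # drop days without a full history

# Basis over the temperature dimension (a simple polynomial here) and
# over the lag dimension (lag strata here); both are hypothetical choices.
def temp_basis(x):
    return np.column_stack([x, x**2])

lag_strata = [(0, 1), (2, 7), (8, 21)]

# Tensor product: one column per (temperature-basis function, lag-stratum) pair.
cols = []
for lo, hi in lag_strata:
    block = lagged[:, lo:hi + 1]
    basis = temp_basis(block.ravel()).reshape(block.shape[0], hi - lo + 1, 2)
    for j in range(2):
        cols.append(basis[:, :, j].sum(axis=1))   # indicator lag basis: sum within stratum
crossbasis = np.column_stack(cols)                # would enter a Poisson GLM with trend/season
print(crossbasis.shape)                           # (979, 6)
```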

Relevance:

100.00%

Publisher:

Abstract:

Background: Hot and cold temperatures have been associated with childhood asthma. However, the relationship between daily temperature variation and childhood asthma is not well understood. This study aimed to examine the relationship between diurnal temperature range (DTR) and childhood asthma. Methods: A Poisson generalized linear model combined with a distributed lag non-linear model was used to examine the relationship between DTR and emergency department admissions for childhood asthma in Brisbane, from 1 January 2003 to 31 December 2009. Results: There was a statistically significant relationship between DTR and childhood asthma. The DTR effect on childhood asthma increased above a DTR of 10 °C. The effect of DTR on childhood asthma was greatest for lags of 0–9 days, with a 31% (95% confidence interval: 11%–58%) increase in emergency department admissions per 5 °C increment of DTR. Male children and children aged 5–9 years appeared to be more vulnerable to the DTR effect than others. Conclusions: A large DTR may trigger childhood asthma. Future measures to control and prevent childhood asthma should take temperature variability into account. More protective measures should be taken after a day with a DTR above 10 °C.
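As a worked reading of the headline estimate (assuming, as the abstract implies, an approximately log-linear effect above the 10 °C threshold), the 31% increase per 5 °C corresponds to a per-degree log relative risk of ln(1.31)/5:

```python
import numpy as np

# Back-of-the-envelope reading of the reported effect (illustrative):
# a 31% rise in admissions per 5 degC of DTR above the 10 degC threshold
# implies a log-linear slope beta = ln(1.31) / 5 per degC.
beta = np.log(1.31) / 5
print(f"beta per degree C: {beta:.4f}")          # ~0.0540
for delta in (2, 5, 8):
    rr = np.exp(beta * delta)
    print(f"relative risk for +{delta} degC of DTR: {rr:.2f}")
```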

Relevance:

100.00%

Publisher:

Abstract:

Background: The association between temperature and mortality has been examined mainly in North America and Europe. Less evidence is available from developing countries, especially Thailand. In this study, we examined the relationship between temperature and mortality in Chiang Mai city, Thailand, during 1999–2008. Methods: A time series model was used to examine the effects of temperature on cause-specific mortality (non-external, cardiopulmonary, cardiovascular, and respiratory) and age-specific non-external mortality (≤64, 65–74, 75–84, and ≥85 years), while controlling for relative humidity, air pollution, day of the week, season and long-term trend. We used a distributed lag non-linear model to examine the delayed effects of temperature on mortality up to 21 days. Results: We found non-linear effects of temperature on all mortality types and age groups. Both hot and cold temperatures resulted in immediate increases in all mortality types and age groups. Generally, the hot effects on all mortality types and age groups were short-term, while the cold effects lasted longer. The relative risk of non-external mortality associated with cold temperature (19.35 °C, the 1st percentile of temperature) relative to 24.7 °C (the 25th percentile) was 1.29 (95% confidence interval (CI): 1.16, 1.44) for lags 0–21. The relative risk of non-external mortality associated with high temperature (31.7 °C, the 99th percentile) relative to 28 °C (the 75th percentile) was 1.11 (95% CI: 1.00, 1.24) for lags 0–21. Conclusion: This study indicates that exposure to both hot and cold temperatures was related to increased mortality. Both cold and hot effects occurred immediately, but cold effects lasted longer than hot effects. This study provides useful data for policy makers to better prepare local responses to manage the impact of hot and cold temperatures on population health.
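A small sketch of how such contrasts are assembled (the temperature series and per-lag log relative risks below are placeholders, not the study's estimates): percentile temperatures are read off the daily series, and the cumulative relative risk over lags 0–21 is the exponential of the summed lag-specific log relative risks.

```python
import numpy as np

rng = np.random.default_rng(2)
daily_temp = rng.normal(26, 3, size=3650)        # hypothetical daily temperature series

# Reference and contrast temperatures, as in the abstract's percentiles.
p1, p25, p75, p99 = np.percentile(daily_temp, [1, 25, 75, 99])
print(f"1st={p1:.1f}  25th={p25:.1f}  75th={p75:.1f}  99th={p99:.1f}")

# The cumulative relative risk over lags 0-21 is the product of lag-specific
# relative risks, i.e. the exponential of the summed lag-specific log-RRs.
lag_log_rr = np.full(22, 0.012)                  # placeholder per-lag log-RR values
cumulative_rr = np.exp(lag_log_rr.sum())
print(f"cumulative RR over lags 0-21: {cumulative_rr:.2f}")
```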

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we consider a non-linear fractional reaction–subdiffusion process (NFR-SubDP) in which $f(u, x, t)$ is a linear function of $u$, the function $g(u, x, t)$ satisfies the Lipschitz condition, and ${}_0D_t^{1-\gamma}$ is the Riemann–Liouville time fractional partial derivative of order $1-\gamma$. We propose a new, computationally efficient numerical technique to simulate the process. Firstly, the NFR-SubDP is decoupled, which is equivalent to solving a non-linear fractional reaction–subdiffusion equation (NFR-SubDE). Secondly, we propose an implicit numerical method to approximate the NFR-SubDE. Thirdly, the stability and convergence of the method are discussed using a new energy method. Finally, some numerical examples are presented to show the application of the present technique. This method and the supporting theoretical results can also be applied to fractional integro-differential equations.
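The operator ${}_0D_t^{1-\gamma}$ can be approximated numerically with Grünwald–Letnikov weights, which is a basic ingredient behind schemes of this kind. The sketch below is not the paper's implicit method; it only demonstrates the weight recursion and checks the approximation against the known Riemann–Liouville derivative of $f(t) = t$.

```python
import numpy as np
from math import gamma

# Grunwald-Letnikov approximation of the Riemann-Liouville fractional
# derivative of order alpha (here alpha = 1 - gamma_), the operator in the
# subdiffusion term. A building block only, not the paper's implicit scheme.
def gl_fractional_derivative(f_vals, h, alpha):
    n = len(f_vals)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):                        # recursive GL weights
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    out = np.empty(n)
    for i in range(n):
        out[i] = np.dot(w[: i + 1], f_vals[i::-1]) / h**alpha
    return out

gamma_ = 0.6
alpha = 1.0 - gamma_
h = 1e-3
t = np.arange(1, 2001) * h
approx = gl_fractional_derivative(t, h, alpha)   # D^alpha of f(t) = t
exact = t**(1.0 - alpha) / gamma(2.0 - alpha)    # known closed form
print(f"max abs error: {np.max(np.abs(approx - exact)):.2e}")
```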

Relevance:

100.00%

Publisher:

Abstract:

Financial processes may possess long memory, and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debate. These difficulties present challenges for memory detection and for modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges.

The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA) that can systematically eliminate trends of different orders. This method is based on the identification of scaling of the q-th-order moments and is a generalisation of the standard detrended fluctuation analysis (DFA), which uses only the second moment, that is, q = 2. We also consider the rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with those of MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX), while long memory is found in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of five states of Australia are also found to possess long memory. For these electricity price series, heavy tails are also pronounced in their probability densities.

The second part of the thesis develops models to represent the short-memory and long-memory financial processes detected in Part I. These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which were established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of the Riemann–Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss–Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA.

The third part of the thesis applies the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX). The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of data sets and then use cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market.

The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia. Comparisons with the results obtained from the R/S analysis, the periodogram method and MF-DFA are provided. The results from the fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on the second-order moment, seem to underestimate the long-memory dynamics of the process. This highlights the need for and usefulness of fractal methods in modelling non-Gaussian financial processes with long memory.
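A minimal sketch of the MF-DFA procedure referred to throughout this abstract (window sizes, q values and the test series are illustrative): build the profile, detrend it in non-overlapping windows with a low-order polynomial, form the q-th order fluctuation function, and estimate the generalised Hurst exponent h(q) from the log-log slope.

```python
import numpy as np

def mfdfa(x, scales, q_values, poly_order=1):
    """Minimal MF-DFA: returns h(q) estimated from log-log slopes."""
    profile = np.cumsum(x - np.mean(x))          # step 1: profile
    hq = []
    for q in q_values:
        fluctuations = []
        for s in scales:
            n_seg = len(profile) // s
            f2 = []
            for v in range(n_seg):               # steps 2-3: detrend each window
                seg = profile[v * s:(v + 1) * s]
                t = np.arange(s)
                trend = np.polyval(np.polyfit(t, seg, poly_order), t)
                f2.append(np.mean((seg - trend) ** 2))
            f2 = np.asarray(f2)
            if q == 0:                            # step 4: q-th order average
                fq = np.exp(0.5 * np.mean(np.log(f2)))
            else:
                fq = np.mean(f2 ** (q / 2.0)) ** (1.0 / q)
            fluctuations.append(fq)
        # step 5: h(q) is the slope of log F_q(s) against log s
        slope = np.polyfit(np.log(scales), np.log(fluctuations), 1)[0]
        hq.append(slope)
    return np.asarray(hq)

rng = np.random.default_rng(3)
returns = rng.standard_normal(10000)             # stand-in for log returns
scales = np.array([16, 32, 64, 128, 256, 512])
q_values = [-4, -2, 2, 4]
print(dict(zip(q_values, np.round(mfdfa(returns, scales, q_values), 3))))
# For i.i.d. noise h(q) stays close to 0.5 at all q; h(2) > 0.5 would point
# to long memory, as discussed for the exchange-rate and electricity series.
```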

Relevance:

100.00%

Publisher:

Abstract:

The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains, and sudden cardiac death continues to be a presenting feature for some people subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia-related complications. Stress electrocardiography/exercise testing is predictive of 10-year risk of CVD events, and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data.

Several pre-processing and predictive data mining methods are applied to these data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers, calculation of moving averages, and data summarisation and data abstraction. Feature selection methods of both wrapper and filter types are applied to derived physiological time-series variable sets alone and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population, and subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads). Because of the unbalanced class distribution in the data, majority-class under-sampling and the Kappa statistic, together with the misclassification rate and the area under the ROC curve (AUC), are used for evaluation of models generated using different prediction algorithms.

The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be the most consistently effective, although Consistency-derived subsets tended to slightly increase accuracy at the cost of markedly increased complexity. The use of misclassification rate (MR) for model performance evaluation is influenced by class distribution. This could be eliminated by consideration of the AUC or Kappa statistic, as well as by evaluation of subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, with the lowest value being for data from which both outliers and noise were removed (MR 10.69). For the raw time-series dataset, MR is 12.34. Feature selection reduces MR to 9.8–10.16, with the time-segmented summary data (dataset F) having an MR of 9.8 and the raw time-series summary data (dataset A) 9.92. However, for all time-series-only datasets, the complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-alone datasets, but models derived from these subsets are of one leaf only. MR values are consistent with class distribution in the subset folds evaluated in the n-fold cross-validation method.

For models based on Cfs-selected time-series-derived and risk factor (RF) variables, the MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) at 8.85 and dataset RF_F (time-segmented time-series variables and RF) at 9.09. The models based on counts of outliers and counts of data points outside the normal range (dataset RF_E), and on variables derived from time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (dataset RF_G), perform the least well, with MR of 10.25 and 10.36 respectively. For coronary vascular disease prediction, nearest neighbour (NNge) and the support vector machine based method, SMO, have the highest MR, of 10.1 and 10.28, while logistic regression (LR) and the decision tree (DT) method, J48, have MR of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The predictive accuracy gained by adding risk factor variables to time-series-based models is significant. The addition of time-series-derived variables to models based on risk factor variables alone is associated with a trend towards improved performance.

Data mining of feature-reduced anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules which are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables compared with risk factors alone is similar to recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables were both used as model input. In the absence of risk factor input, the use of time-series variables after outlier removal, and of time-series variables based on physiological values being outside the accepted normal range, is associated with some improvement in model performance.
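The thesis works with Weka-style tools (CfsSubsetEval, J48, SMO, NNge); the sketch below is a loose scikit-learn analogue, on synthetic data, showing the same workflow of majority-class under-sampling, filter-type feature selection, and evaluation of a decision tree and logistic regression by misclassification rate, Kappa and AUC.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hypothetical stand-in for the anaesthesia time-series summary + risk
# factor dataset: an imbalanced binary outcome (CVD vs no CVD).
X, y = make_classification(n_samples=5000, n_features=40, n_informative=8,
                           weights=[0.9, 0.1], random_state=0)

# Under-sample the majority class, analogous to the majority-class
# under-sampling used in the thesis.
maj, mino = np.where(y == 0)[0], np.where(y == 1)[0]
rng = np.random.default_rng(0)
keep = np.concatenate([rng.choice(maj, size=len(mino), replace=False), mino])
Xb, yb = X[keep], y[keep]

X_tr, X_te, y_tr, y_te = train_test_split(Xb, yb, test_size=0.3,
                                          stratify=yb, random_state=0)

# Filter-type feature selection (a rough analogue of Cfs subset evaluation).
selector = SelectKBest(mutual_info_classif, k=10).fit(X_tr, y_tr)
X_tr_s, X_te_s = selector.transform(X_tr), selector.transform(X_te)

for name, clf in [("decision tree", DecisionTreeClassifier(max_depth=4, random_state=0)),
                  ("logistic regression", LogisticRegression(max_iter=1000))]:
    clf.fit(X_tr_s, y_tr)
    pred = clf.predict(X_te_s)
    prob = clf.predict_proba(X_te_s)[:, 1]
    mr = 100 * np.mean(pred != y_te)             # misclassification rate (%)
    print(f"{name}: MR={mr:.2f}%  Kappa={cohen_kappa_score(y_te, pred):.2f}  "
          f"AUC={roc_auc_score(y_te, prob):.2f}")
```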

Relevance:

100.00%

Publisher:

Abstract:

Estimating and predicting the degradation processes of engineering assets is crucial for reducing costs and ensuring the productivity of enterprises. Assisted by modern condition monitoring (CM) technologies, most asset degradation processes can be revealed by various degradation indicators extracted from CM data. Maintenance strategies developed using these degradation indicators (i.e. condition-based maintenance) are more cost-effective, because unnecessary maintenance activities are avoided when an asset is still in a good health state. A practical difficulty in condition-based maintenance (CBM) is that degradation indicators extracted from CM data can, in most situations, only partially reveal asset health states. Underestimating this uncertainty in the relationships between degradation indicators and health states can cause excessive false alarms or failures without pre-alarms. The state space model provides an efficient approach to describing a degradation process using indicators that only partially reveal health states. However, existing state space models of asset degradation processes largely depend on assumptions such as discrete time, discrete states, linearity, and Gaussianity. The discrete-time assumption requires that failures and inspections only happen at fixed intervals. The discrete-state assumption entails discretising continuous degradation indicators, which requires expert knowledge and often introduces additional errors. The linear and Gaussian assumptions are not consistent with the nonlinear and irreversible degradation processes seen in most engineering assets.

This research proposes a Gamma-based state space model, free of the discrete-time, discrete-state, linear and Gaussian assumptions, to model partially observable degradation processes. Monte Carlo-based algorithms are developed to estimate model parameters and asset remaining useful lives. In addition, this research develops a continuous-state partially observable semi-Markov decision process (POSMDP) to model a degradation process that follows the Gamma-based state space model under various maintenance strategies. Optimal maintenance strategies are obtained by solving the POSMDP. Simulation studies in MATLAB are performed, and case studies using data from an accelerated life test of a gearbox and from the liquefied natural gas industry are also conducted. The results show that the proposed Monte Carlo-based EM algorithm can estimate model parameters accurately. The results also show that the proposed Gamma-based state space model fits the monotonically increasing degradation data from the accelerated life test of the gearbox better than linear and Gaussian state space models. Furthermore, both the simulation studies and the case studies show that the prediction algorithm based on the Gamma-based state space model can identify the mean value and confidence interval of asset remaining useful lives accurately. In addition, the simulation study shows that the proposed maintenance strategy optimisation method based on the POSMDP is more flexible than one that assumes a predetermined strategy structure and uses renewal theory. Moreover, the simulation study also shows that the proposed maintenance optimisation method can obtain more cost-effective strategies than a recently published maintenance strategy optimisation method, by optimising the next maintenance activity and the waiting time until the next maintenance activity simultaneously.
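A generic illustration (not the thesis's Gamma-based state space model or its Monte Carlo EM algorithm, and in Python rather than MATLAB): a latent degradation path grows by Gamma-distributed increments, is observed through a noisy condition-monitoring indicator, and is tracked with a bootstrap particle filter whose particles are then projected forward to a failure threshold for a remaining-useful-life estimate.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical monotone degradation: Gamma increments for the hidden state,
# multiplicative lognormal noise for the condition-monitoring indicator.
T, shape, scale, obs_sd = 100, 0.5, 0.2, 0.05
x_true = np.cumsum(rng.gamma(shape, scale, size=T))       # hidden degradation
y_obs = x_true * np.exp(rng.normal(0, obs_sd, size=T))    # observed indicator

# Bootstrap particle filter: propagate with Gamma increments, weight by the
# observation density (up to a constant), resample.
N = 2000
particles = np.zeros(N)
filtered_mean = np.empty(T)
for t in range(T):
    particles = particles + rng.gamma(shape, scale, size=N)
    log_w = -0.5 * ((np.log(y_obs[t]) - np.log(particles)) / obs_sd) ** 2
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    filtered_mean[t] = np.sum(w * particles)
    particles = particles[rng.choice(N, size=N, p=w)]      # resample

print(f"mean abs filtering error: {np.mean(np.abs(filtered_mean - x_true)):.3f}")

# Remaining useful life: simulate each particle forward until it crosses an
# (illustrative) failure threshold, then summarise the hitting times.
threshold = x_true[-1] * 1.5
horizons = np.zeros(N, dtype=int)
for i in range(N):
    level, h = particles[i], 0
    while level < threshold and h < 1000:
        level += rng.gamma(shape, scale)
        h += 1
    horizons[i] = h
print(f"RUL estimate: mean={horizons.mean():.1f} steps, "
      f"90% interval=({np.percentile(horizons, 5):.0f}, {np.percentile(horizons, 95):.0f})")
```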

Relevance:

100.00%

Publisher:

Abstract:

In this paper we present a new simulation methodology in order to obtain exact or approximate Bayesian inference for models for low-valued count time series data that have computationally demanding likelihood functions. The algorithm fits within the framework of particle Markov chain Monte Carlo (PMCMC) methods. The particle filter requires only model simulations and, in this regard, our approach has connections with approximate Bayesian computation (ABC). However, an advantage of using the PMCMC approach in this setting is that simulated data can be matched with data observed one-at-a-time, rather than attempting to match on the full dataset simultaneously or on a low-dimensional non-sufficient summary statistic, which is common practice in ABC. For low-valued count time series data we find that it is often computationally feasible to match simulated data with observed data exactly. Our particle filter maintains $N$ particles by repeating the simulation until $N+1$ exact matches are obtained. Our algorithm creates an unbiased estimate of the likelihood, resulting in exact posterior inferences when included in an MCMC algorithm. In cases where exact matching is computationally prohibitive, a tolerance is introduced as per ABC. A novel aspect of our approach is that we introduce auxiliary variables into our particle filter so that partially observed and/or non-Markovian models can be accommodated. We demonstrate that Bayesian model choice problems can be easily handled in this framework.
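An illustrative toy version of the exact-matching idea (the model, parameters and tuning below are hypothetical, not from the paper): for a simulable count process, each one-step probability is estimated by simulating until $N+1$ exact matches are found, giving the unbiased estimate $N/(\text{draws}-1)$, and these estimates drive a pseudo-marginal Metropolis–Hastings sampler.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy INAR(1)-style count process: y_t = Binomial(y_{t-1}, alpha) + Poisson(lam).
# We pretend its transition probability can only be simulated.
def simulate_step(prev, alpha, lam, size):
    return rng.binomial(prev, alpha, size=size) + rng.poisson(lam, size=size)

def transition_prob_estimate(prev, target, alpha, lam, N=30, batch=2000, max_draws=200000):
    """Simulate until N+1 exact matches of the observed count; N/(draws-1) is unbiased."""
    matches, draws = 0, 0
    while draws < max_draws:
        hits = np.cumsum(simulate_step(prev, alpha, lam, batch) == target)
        if matches + hits[-1] >= N + 1:
            idx = np.searchsorted(hits, N + 1 - matches)   # in-batch index of the final match
            return N / (draws + idx)                       # = N / (total draws - 1)
        matches += hits[-1]
        draws += batch
    return 0.0                                             # pragmatic cutoff

def log_lik_estimate(y, alpha, lam):
    total = 0.0
    for t in range(1, len(y)):
        p_hat = transition_prob_estimate(y[t - 1], y[t], alpha, lam)
        if p_hat == 0.0:
            return -np.inf
        total += np.log(p_hat)
    return total

# Synthetic low-valued count data from known parameters.
true_alpha, true_lam, T = 0.5, 2.0, 50
y = np.empty(T, dtype=int)
y[0] = 4
for t in range(1, T):
    y[t] = simulate_step(y[t - 1], true_alpha, true_lam, 1)[0]

# Pseudo-marginal Metropolis-Hastings: the unbiased likelihood estimate lets the
# chain target the exact posterior (flat priors on the supported ranges).
alpha, lam = 0.3, 1.0
ll = log_lik_estimate(y, alpha, lam)
samples = []
for _ in range(400):
    alpha_p = alpha + rng.normal(0, 0.05)
    lam_p = lam + rng.normal(0, 0.2)
    if 0 < alpha_p < 1 and lam_p > 0:
        ll_p = log_lik_estimate(y, alpha_p, lam_p)
        if np.log(rng.uniform()) < ll_p - ll:
            alpha, lam, ll = alpha_p, lam_p, ll_p
    samples.append((alpha, lam))
print("posterior means (alpha, lam):", np.mean(samples[100:], axis=0))
```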