41 results for Time-series analysis

in Deakin Research Online - Australia


Relevance: 100.00%

Abstract:

The evolution of domestic air travel service in Japan is a product of many factors, including airline responses to the changing aviation market, government interventions in the form of regulatory and deregulatory policies, infrastructure investments, and changes in market structure. This paper presents an empirical investigation of the changing quality of passenger airline service and its implications for the domestic aviation market in Japan, using a qualitative review and a time series analysis of the domestic airline markets from 1986 to 2003. The results show that, if deregulation is to meet its ultimate aim of increasing air passengers' welfare, measures are needed to correct service imbalances and to create innovative airport demand-capacity management.

Relevance: 100.00%

Abstract:

This research proposes a number of new methods for biomedical time series classification and clustering based on a novel Bag-of-Words (BoW) representation. The objective, automatic biomedical time series clustering and classification technologies developed in this work are expected to benefit a wide range of applications, such as biomedical data management, archiving and retrieval, and disease diagnosis and prognosis.
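A minimal sketch of how such a BoW representation might be constructed is shown below, with fixed-length sliding-window segments as "words" and a k-means codebook; the window size, stride, and vocabulary size are illustrative assumptions rather than the thesis's actual settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_segments(series, width=32, stride=16):
    """Slide a fixed-width window over a 1-D series; each segment is a 'word'."""
    s = np.asarray(series, dtype=float)
    return np.array([s[i:i + width] for i in range(0, len(s) - width + 1, stride)])

def bow_histograms(series_list, n_words=50, width=32, stride=16):
    """Quantise all segments with a k-means codebook, then describe each
    series as a normalised histogram of codeword counts."""
    segments = [extract_segments(s, width, stride) for s in series_list]
    codebook = KMeans(n_clusters=n_words, random_state=0).fit(np.vstack(segments))
    hists = []
    for seg in segments:
        counts = np.bincount(codebook.predict(seg), minlength=n_words).astype(float)
        hists.append(counts / counts.sum())
    return np.array(hists)   # one BoW vector per input series
```

Each series then becomes a fixed-length histogram, so any standard classifier or clustering algorithm can be applied downstream.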

Relevance: 100.00%

Abstract:

Objective: To determine the impact of tobacco control policies and mass media campaigns on smoking prevalence in Australian adults.
Methods: Data for calculating the average monthly prevalence of smoking between January 2001 and June 2011 were obtained via structured interviews of randomly sampled adults aged 18 years or older from Australia’s five largest capital cities (monthly mean number of adults interviewed: 2375). The influence on smoking prevalence was estimated for increased tobacco taxes; strengthened smoke-free laws; increased monthly population exposure to televised tobacco control mass media campaigns and pharmaceutical company advertising for nicotine replacement therapy (NRT), measured in gross rating points; monthly sales of NRT, bupropion and varenicline; and the introduction of graphic health warnings on cigarette packs. Autoregressive integrated moving average (ARIMA) models were used to examine the influence of these interventions on smoking prevalence.
Findings: The mean smoking prevalence for the study period was 19.9% (standard deviation: 2.0%), with a drop from 23.6% (in January 2001) to 17.3% (in June 2011). The best-fitting model showed that stronger smoke-free laws, tobacco price increases and greater exposure to mass media campaigns independently explained 76% of the decrease in smoking prevalence from February 2002 to June 2011.
Conclusion: Increased tobacco taxation, more comprehensive smoke-free laws and increased investment in mass media campaigns played a substantial role in reducing smoking prevalence among Australian adults between 2001 and 2011.
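The modelling approach described in the Methods can be sketched as a regression with ARIMA errors; the snippet below uses statsmodels, with a hypothetical data file, hypothetical column names, and an illustrative model order rather than the study's actual specification.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical monthly data: prevalence plus intervention covariates.
df = pd.read_csv("monthly_prevalence.csv", parse_dates=["month"], index_col="month")
exog = df[["cig_price", "smoke_free_law", "campaign_grps", "nrt_sales"]]

model = sm.tsa.statespace.SARIMAX(df["smoking_prevalence"], exog=exog,
                                  order=(1, 1, 1))   # ARIMA(p, d, q), illustrative
result = model.fit(disp=False)
print(result.summary())   # coefficient signs/sizes estimate each intervention's effect
```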

Relevance: 100.00%

Abstract:

Biomedical time series clustering, which automatically groups a collection of time series according to their internal similarity, is important for medical record management and inspection tasks such as bio-signal archiving and retrieval. In this paper, a novel framework is proposed that automatically groups a set of unlabelled multichannel biomedical time series according to their internal structural similarity. Specifically, we treat a multichannel biomedical time series as a document and extract local segments from the time series as words. We extend a topic model, the Hierarchical probabilistic Latent Semantic Analysis (H-pLSA) originally developed for visual motion analysis, to cluster a set of unlabelled multichannel time series. The H-pLSA models each channel of the multichannel time series using a local pLSA in the first layer. The topics learned in the local pLSAs are then fed to a global pLSA in the second layer to discover the categories of multichannel time series. Experiments on a dataset extracted from multichannel electrocardiography (ECG) signals demonstrate that the proposed method performs better than previous state-of-the-art approaches and is relatively robust to variations of parameters such as the length of local segments and the dictionary size. Although the experimental evaluation used multichannel ECG signals in a biometric scenario, the proposed algorithm is a general framework for clustering multichannel biomedical time series according to their structural similarity, with many applications in biomedical time series management.
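As a rough illustration of the two-layer idea only (not the authors' implementation), the sketch below substitutes scikit-learn's LatentDirichletAllocation for pLSA, a closely related topic model; the function name, shapes, rescaling step and hyperparameters are all assumptions.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

def hierarchical_topic_cluster(channel_counts, n_local_topics=8, n_categories=4):
    """channel_counts: one (n_series, n_words) BoW count matrix per channel."""
    # Layer 1: a local topic model per channel, giving each series a
    # topic-mixture vector for that channel.
    local_mix = []
    for counts in channel_counts:
        lda = LatentDirichletAllocation(n_components=n_local_topics, random_state=0)
        local_mix.append(lda.fit_transform(counts))      # (n_series, n_local_topics)
    # Layer 2: a global topic model over the concatenated channel mixtures.
    stacked = np.hstack(local_mix)
    pseudo_counts = np.rint(stacked * 100)               # LDA expects count-like input
    global_lda = LatentDirichletAllocation(n_components=n_categories, random_state=0)
    global_mix = global_lda.fit_transform(pseudo_counts)
    return global_mix.argmax(axis=1)                     # one cluster label per series
```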

Relevance: 100.00%

Abstract:

In this paper, we present a novel anomaly detection framework for multiple heterogeneous yet correlated time series, such as medical surveillance data. Within this framework, we propose an anomaly detection algorithm based on trend and correlation analysis. Moreover, to efficiently process huge amounts of observed time series, a new clustering-based compression method is proposed. Experimental results indicate that our framework is more effective and efficient than its peers.
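The trend-and-correlation viewpoint can be caricatured as follows, under the assumption that an anomalous window is one where a series decouples from the group trend; this is a simplified illustration, not the paper's algorithm, and the window and threshold values are arbitrary.

```python
import numpy as np

def correlation_anomalies(series_matrix, window=30, threshold=0.3):
    """series_matrix: (n_series, n_steps). Flags (series_index, window_start)
    pairs where a series' correlation with the group mean falls below threshold."""
    series_matrix = np.asarray(series_matrix, dtype=float)
    n, t = series_matrix.shape
    group_mean = series_matrix.mean(axis=0)          # crude group trend
    flags = []
    for start in range(0, t - window + 1, window):
        ref = group_mean[start:start + window]
        for i in range(n):
            r = np.corrcoef(series_matrix[i, start:start + window], ref)[0, 1]
            if r < threshold:                        # decoupled from the trend
                flags.append((i, start))
    return flags
```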

Relevance: 100.00%

Abstract:

The ability to quantify change in marine benthic habitats must be considered a key goal of marine habitat mapping activities. Changes in the distribution of distinct suites of benthic biological species may occur as a result of natural or human-induced processes, and these processes may operate at a range of temporal and spatial scales. It is important to understand natural small-scale inter-annual patterns of change in order to separate these signals from potential patterns of longer-term change. Work to describe these processes of change from an acoustic remote sensing standpoint has thus far been limited, due to the relatively recent availability of full-coverage swath acoustic datasets and the cost pressures associated with multiple surveys of the same area. This paper describes the use of landscape transition analysis as a means to differentiate seemingly random patterns of habitat change from systematic signals of habitat transition at a shallow (10–50 m depth) 18 km² study area on the temperate Australian continental shelf between 2006 and 2007. Supervised classifications for each year were accomplished using independently collected high-resolution (3 m cell-size) multibeam echosounder (MBES) and video-derived reference data. Of the four representative biotic classes considered, signals of directional systematic change were observed between a shallow kelp-dominated class, a deep sessile-invertebrate-dominated class and a mixed class of kelp and sessile invertebrates. These signals of change are interpreted as inter-annual variation in the density and depth-related extent of canopy-forming kelp species at the site, a phenomenon reported in smaller-scale temporal studies of the same species. The methods applied in this study provide a detailed analysis of the various components of the traditional change detection cross-tabulation matrix, allowing identification of the strongest signals of systematic habitat transition across broad geographical regions. Identifying clear patterns of habitat change is an important first step in linking these patterns to the processes that drive them.
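The cross-tabulation matrix mentioned above can be sketched directly: observed class-to-class transitions between the two yearly maps are compared with the transitions expected under a random-gain model, and large positive residuals indicate systematic transitions. The sketch below is an illustration in the spirit of the method, not the exact published procedure.

```python
import numpy as np

def transition_analysis(map_t1, map_t2, n_classes):
    """map_t1, map_t2: integer class rasters of identical shape (years 1 and 2)."""
    obs = np.zeros((n_classes, n_classes))
    for a, b in zip(map_t1.ravel(), map_t2.ravel()):
        obs[int(a), int(b)] += 1                 # observed cross-tabulation
    total = obs.sum()
    t1_totals = obs.sum(axis=1)                  # class areas at time 1
    gains = obs.sum(axis=0) - np.diag(obs)       # cells gained by each class
    # Random-gain expectation: class j's gain is drawn from the other classes
    # in proportion to their time-1 abundance.
    expected = np.zeros_like(obs)
    for j in range(n_classes):
        denom = total - t1_totals[j]
        if denom > 0:
            for i in range(n_classes):
                if i != j:
                    expected[i, j] = gains[j] * t1_totals[i] / denom
    systematic = (obs - np.diag(np.diag(obs))) - expected
    return obs, expected, systematic             # large positive residual => systematic
```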

Relevance: 100.00%

Abstract:

Analysis of holistic multiple time series systems is a practical and crucial topic. In this paper, we study a new problem: how the data underneath a multiple time series system is produced, that is, how to model the rules by which the time series data are generated and evolve (denoted here as semantics). We assume that there exists a set of latent states that form the basis of the system and drive its operation: data generation and evolution. The problem therefore poses several challenges: (1) how to detect the latent states; (2) how to learn the rules based on those states; and (3) what the semantics can be used for. Hence, a novel correlation field-based semantics learning method is proposed. In this method, we first detect the latent state assignment by comprehensively considering various characteristics of multiple time series, including tick-by-tick data, temporal ordering, and the relationships among the series. The semantics are then learned via Bayesian Markov characteristics. The learned semantics can be applied in various applications, such as prediction or anomaly detection, for further analysis. We therefore propose two algorithms based on the semantics knowledge, applied to next-n-step prediction and anomaly detection respectively. Experiments on real-world data sets demonstrate the efficiency of the proposed method.
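As a simplified stand-in for the latent-state detection step (the correlation field-based method itself is more elaborate), the sketch below uses a Gaussian hidden Markov model from hmmlearn to assign latent states to a multiple-time-series system and recover its state-transition rules; the function name and state count are illustrative.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def learn_semantics(series_matrix, n_states=4):
    """series_matrix: (n_series, n_steps) of correlated series.
    Returns the latent state at each step and the state-transition matrix
    (a stand-in for the 'evolving rules')."""
    X = np.asarray(series_matrix).T          # one multivariate observation per step
    hmm = GaussianHMM(n_components=n_states, covariance_type="full",
                      n_iter=100, random_state=0).fit(X)
    return hmm.predict(X), hmm.transmat_
```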

Relevance: 100.00%

Abstract:

Heart rate complexity analysis is a powerful non-invasive means of diagnosing several cardiac ailments. Non-linear tools of complexity measurement are indispensable for capturing the complete non-linear behaviour of physiological signals. The most popular non-linear tools for measuring signal complexity are entropy measures such as Approximate Entropy (ApEn) and Sample Entropy (SampEn). However, these methods can become unreliable and inaccurate, particularly for short data lengths. Recently, a novel complexity measure called Distribution Entropy (DistEn) was introduced, which showed reliable performance in capturing the complexity of both short-term synthetic and short-term physiological data. This study aims to: i) examine the competence of DistEn in discriminating arrhythmia from normal sinus rhythm (NSR) subjects using RR interval time series data; ii) explore the consistency of DistEn with data length N; and iii) compare the performance of DistEn with ApEn and SampEn. Sixty-six RR interval time series belonging to two groups of cardiac conditions, 'Arrhythmia' and 'NSR', were used for the analysis. The data length N was varied from 50 to 1000 beats, with embedding dimension m = 2 for all entropy measurements. The maximum ROC areas obtained using ApEn, SampEn and DistEn were 0.83, 0.86 and 0.94, for data lengths of 1000, 1000 and 500 beats respectively. The results show that DistEn exhibits consistently high performance as a classification feature in comparison with ApEn and SampEn. DistEn therefore shows promise as a biomarker for detecting arrhythmia from short RR interval recordings.
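DistEn is straightforward to sketch from its published definition: embed the series into m-dimensional vectors, take all pairwise Chebyshev distances, bin them into a histogram, and compute the normalised Shannon entropy. In the sketch below the bin count M = 512 is a common choice rather than a requirement.

```python
import numpy as np

def dist_en(rr, m=2, M=512):
    """Distribution Entropy of a series (e.g. RR intervals), normalised to [0, 1]."""
    rr = np.asarray(rr, dtype=float)
    n = len(rr) - m + 1
    emb = np.array([rr[i:i + m] for i in range(n)])          # m-dim embedding vectors
    # Chebyshev distance between every pair of distinct vectors.
    d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
    d = d[~np.eye(n, dtype=bool)]                            # drop self-distances
    p, _ = np.histogram(d, bins=M)                           # empirical distance density
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p)) / np.log2(M)
```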

Relevance: 100.00%

Abstract:

In this paper, we study the challenging problem of mining the data generating rules and state transforming rules (i.e., semantics) underneath multiple correlated time series streams. A novel Correlation field-based Semantics Learning Framework (CfSLF) is proposed to learn these semantics. In the framework, we use a Hidden Markov Random Field (HMRF) method to model the relationship between latent states and observations in multiple correlated time series, thereby learning the data generating rules. The state transforming rules are learned from the corresponding latent state sequences based on Markov chain characteristics. The reusable semantics learned by CfSLF can be fed into various analysis tools, such as prediction or anomaly detection. Moreover, we present two algorithms based on the semantics, applicable to next-n-step prediction and anomaly detection. Experiments on real-world data sets demonstrate the efficiency and effectiveness of the proposed method.
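The state-transforming-rules component can be sketched as a Markov transition matrix estimated by counting over a latent state sequence (for example, one produced by the HMRF stage), which then supports next-n-step prediction; the helper names below are illustrative.

```python
import numpy as np

def transition_rules(states, n_states):
    """Estimate a Markov transition matrix by counting consecutive state pairs."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

def predict_next_n(last_state, trans, n=3):
    """Greedy next-n-step prediction: follow the most probable transition."""
    path, current = [], last_state
    for _ in range(n):
        current = int(trans[current].argmax())
        path.append(current)
    return path
```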

Relevance: 100.00%

Abstract:

The tree index structure is a traditional method for searching for similar data in large datasets. It is based on the presupposition that most sub-trees can be pruned during the search, reducing the number of page accesses. However, time series datasets generally have very high dimensionality, and because of the so-called dimensionality curse, pruning effectiveness degrades as dimensionality grows. Consequently, the tree index structure is not a suitable method for time series datasets. In this paper, we propose a two-phase (filtering and refinement) method for searching time series datasets. In the filtering step, a quantized time series is used to construct a compact file, which is scanned to filter out irrelevant series. A small set of candidates is passed to the second step for refinement. In this step, we introduce an effective index compression method named grid-based datawise dimensionality reduction (DRR), which attempts to preserve the characteristics of the time series. An experimental comparison with existing techniques demonstrates the utility of our approach.
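The filtering-and-refinement idea is sketched schematically below, with a simple piecewise-aggregate quantisation standing in for the grid-based DRR index; the essential property is that the coarse distance lower-bounds the true Euclidean distance, so phase-one pruning never discards a true match.

```python
import numpy as np

def quantize(series, n_segments=8):
    """Piecewise-aggregate quantisation: the mean of each equal-width segment."""
    chunks = np.array_split(np.asarray(series, dtype=float), n_segments)
    return np.array([c.mean() for c in chunks])

def two_phase_search(dataset, query, radius, n_segments=8):
    query = np.asarray(query, dtype=float)
    q_small = quantize(query, n_segments)
    seg_len = len(query) / n_segments
    results = []
    for series in dataset:
        # Phase 1 (filtering): the PAA distance lower-bounds the Euclidean
        # distance, so anything farther than `radius` here is safely pruned.
        lb = np.sqrt(seg_len * np.sum((quantize(series, n_segments) - q_small) ** 2))
        if lb > radius:
            continue
        # Phase 2 (refinement): exact distance on the surviving candidate.
        if np.linalg.norm(np.asarray(series, dtype=float) - query) <= radius:
            results.append(series)
    return results
```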

Relevance: 100.00%

Abstract:

Efficiency measurement is at the heart of most management accounting functions. Data envelopment analysis (DEA) is a linear programming technique used to measure the relative efficiency of organisational units, referred to in the DEA literature as decision making units (DMUs). Universities are complex organisations involving multiple inputs and outputs (Abbott & Doucouliagos, 2008), and there is no agreement on identifying and measuring the inputs and outputs of higher education institutes (Avkiran, 2001). Hence, accurate efficiency measurement in such complex institutions requires rigorous research.
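For readers unfamiliar with DEA, the input-oriented CCR model can be sketched as one small linear programme per DMU (here, per faculty); the snippet below uses scipy, and the matrices it expects are illustrative placeholders, not the study's data.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_scores(X, Y):
    """X: (n_inputs, n_dmus) input matrix; Y: (n_outputs, n_dmus) output matrix.
    Returns the input-oriented CCR efficiency score (0 < score <= 1) per DMU."""
    n_dmus = X.shape[1]
    scores = []
    for k in range(n_dmus):
        # Decision variables: [theta, lambda_1 .. lambda_n]; minimise theta.
        c = np.r_[1.0, np.zeros(n_dmus)]
        # Input constraints: sum_j lambda_j * x_ij <= theta * x_ik
        A_in = np.hstack([-X[:, [k]], X])
        # Output constraints: sum_j lambda_j * y_rj >= y_rk
        A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(X.shape[0]), -Y[:, k]],
                      bounds=[(0, None)] * (1 + n_dmus),
                      method="highs")
        scores.append(res.x[0])
    return scores

# Illustrative data: 2 inputs and 2 outputs for 4 hypothetical faculties.
X = np.array([[20.0, 30.0, 40.0, 25.0],      # e.g. staff costs
              [5.0,  8.0,  9.0,  6.0]])      # e.g. operating costs
Y = np.array([[60.0, 70.0, 95.0, 50.0],      # e.g. teaching load
              [10.0, 14.0, 20.0,  9.0]])     # e.g. research output
print(dea_ccr_scores(X, Y))
```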

Prior DEA studies have applied the technique at the university level (Avkiran, 2001; Abbott & Doucouliagos, 2003; Abbott & Doucouliagos, 2008) or at the department/school level (Beasley, 1990; Sinuany-Stern, Mehrez & Barboy, 1994). The organisational unit that has control over, and hence responsibility for, inputs and outputs is the most appropriate decision making unit (DMU) for DEA to provide useful managerial information. In the current study, DEA has been applied at the faculty level for two reasons. First, in the case university, as in most other universities, inputs and outputs are more accurately identified with faculties than with departments/schools. Second, efficiency results at the university level are highly aggregated and do not provide detailed managerial information.

Prior DEA time series studies have used input and output cost and income data without adjusting for changes in the time value of money. This study examines the effects of adjusting financial data for changes in dollar values that occur without proportional changes in the quantity of inputs and outputs. The study is carried out mainly from a management accounting perspective and focuses on the use of DEA efficiency information for managerial decision purposes. It is not intended to contribute to the theoretical development of the linear programming model; it takes the view that one does not need to be a mechanic to be a good car driver.

The results suggest that adjusting financial input and output data in time series analysis changes efficiency values, rankings, reference sets, and projection amounts. The findings also suggest that the case university could have saved close to $10 million per year if all faculties had operated efficiently. However, it is also recognised that quantitative performance measures have their own limitations and should be used cautiously.