21 results for Series Summation Method

in Deakin Research Online - Australia


Relevance: 100.00%

Abstract:

Wind power prediction refers to an approximation of the probable production of wind turbines in the near future. We present a time series ensemble framework to predict wind power. Time series wind data is transformed using a number of complementary methods. Wind power is predicted on each transformed feature space. Predictions are aggregated using a neural network at a second stage. The proposed framework is validated on wind data obtained from ten different locations across Australia. Experimental results demonstrate that the ensemble predictor performs better than the base predictors.
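
The two-stage structure described above can be sketched as follows; the base predictors and the plain-average combiner are illustrative stand-ins (the paper's second stage is a neural network, and its feature-space transformations are more sophisticated):

```python
def moving_average_forecast(series, window=3):
    """Base predictor 1: next value = mean of the last `window` points."""
    return sum(series[-window:]) / window

def persistence_forecast(series):
    """Base predictor 2: next value = last observed value."""
    return series[-1]

def ensemble_forecast(series, predictors):
    """Stage 2: aggregate the base predictions (here, a plain average)."""
    predictions = [p(series) for p in predictors]
    return sum(predictions) / len(predictions)

wind_power = [2.0, 2.5, 3.0, 3.5, 4.0]  # toy wind-power readings
forecast = ensemble_forecast(wind_power, [moving_average_forecast, persistence_forecast])
```

Here the moving-average predictor gives 3.5 and the persistence predictor 4.0, so the combined forecast is 3.75; in the paper each base predictor instead operates on a different transformation of the series.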

Relevance: 90.00%

Abstract:

The convergence of house prices has attracted much attention from researchers. Previous research mainly utilised time-series regression methods to investigate house price convergence, which may ignore the heterogeneity of houses across cities. This research developed a panel regression method by which the heterogeneity of house prices can be captured. Seemingly unrelated regression estimators were also adapted to deal with the contemporaneous correlations across cities. An investigation of the convergence of house prices in the Australian capital cities was carried out using the developed panel regression method. Results suggested that house prices converge in Sydney, Adelaide and Hobart but diverge in Darwin.
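
To see why pooling that ignores city heterogeneity can mislead, compare a pooled OLS slope with a fixed-effects "within" estimator on a toy panel (the numbers are invented, and the paper's seemingly-unrelated-regression machinery is considerably richer than this sketch):

```python
def ols_slope(xs, ys):
    """Slope of a simple least-squares fit."""
    xbar = sum(xs) / len(xs)
    ybar = sum(ys) / len(ys)
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    return num / sum((x - xbar) ** 2 for x in xs)

def within_slope(panel):
    """Fixed-effects estimator: demean x and y within each city, then pool."""
    xd, yd = [], []
    for xs, ys in panel.values():
        xbar = sum(xs) / len(xs)
        ybar = sum(ys) / len(ys)
        xd.extend(x - xbar for x in xs)
        yd.extend(y - ybar for y in ys)
    return ols_slope(xd, yd)

# Both cities share a true slope of 2 but have different intercepts.
panel = {"CityA": ([1, 2, 3], [3, 5, 7]),    # y = 2x + 1
         "CityB": ([4, 5, 6], [-2, 0, 2])}   # y = 2x - 10
pooled = ols_slope([1, 2, 3, 4, 5, 6], [3, 5, 7, -2, 0, 2])
within = within_slope(panel)
```

Pooled OLS here even gets the sign wrong (about −0.83), while the within estimator recovers the common slope of 2 by absorbing each city's intercept.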

Relevance: 40.00%

Abstract:

The tree index structure is a traditional method for searching for similar data in large datasets. It is based on the presupposition that most sub-trees are pruned during the search, so that the number of page accesses is reduced. However, time-series datasets generally have very high dimensionality, and because of the so-called curse of dimensionality, pruning effectiveness degrades as dimensionality grows. Consequently, the tree index structure is not a suitable method for time-series datasets. In this paper, we propose a two-phase (filtering and refinement) method for searching time-series datasets. In the filtering step, a quantized representation of each time series is used to construct a compact file, which is scanned to filter out irrelevant data. The small set of surviving candidates is passed to the second step for refinement. In this step, we introduce an effective index compression method named grid-based datawise dimensionality reduction (DRR), which attempts to preserve the characteristics of the time series. An experimental comparison with existing techniques demonstrates the utility of our approach.
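
The filter-and-refine idea can be sketched with a crude grid quantizer standing in for the paper's grid-based DRR; the slack term makes the filter conservative, so no true match is dropped:

```python
import math

def quantize(series, step=1.0):
    """Compact key: each value snapped to a coarse grid."""
    return tuple(round(v / step) for v in series)

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def search(dataset, query, threshold, step=1.0):
    q_key = quantize(query, step)
    # Rounding moves each value by at most step/2, so the quantized
    # distance can differ from the true one by at most step * sqrt(d).
    slack = step * math.sqrt(len(query))
    # Phase 1 (filter): cheap scan over the compact quantized keys.
    candidates = [s for s in dataset
                  if step * euclidean(quantize(s, step), q_key) <= threshold + slack]
    # Phase 2 (refine): exact distances only for the surviving candidates.
    return [s for s in candidates if euclidean(s, query) <= threshold]

data = [(1.0, 1.0, 1.0), (5.0, 5.0, 5.0), (1.1, 0.9, 1.0)]
matches = search(data, (1.0, 1.0, 1.0), threshold=0.5)
```

The far-away series (5, 5, 5) is eliminated in the cheap filtering scan and never reaches the exact-distance refinement step.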

Relevance: 30.00%

Abstract:

Recently, DTW (dynamic time warping) has been recognized as the most robust distance function for measuring the similarity between two time series, and this fact has spawned a flurry of research on the topic. Most indexing methods proposed for DTW are based on the R-tree structure. Because of high dimensionality and loose lower bounds for the time warping distance, the pruning power of these tree structures is quite weak, resulting in inefficient search. In this paper, we propose a dimensionality reduction method motivated by observations about the inherent character of each time series. A very compact index file is constructed; by scanning it, we obtain a very small candidate set, so that the number of page accesses is dramatically reduced. We demonstrate the effectiveness of our approach on real and synthetic datasets.
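
For reference, the DTW distance itself is computed with the standard dynamic-programming recurrence (this is the textbook algorithm, not the paper's indexing method):

```python
def dtw(a, b):
    """DTW distance between two sequences, O(len(a) * len(b))."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = cost of the best warping path aligning a[:i] with b[:j].
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # step in a only
                                 D[i][j - 1],      # step in b only
                                 D[i - 1][j - 1])  # step in both
    return D[n][m]
```

Unlike Euclidean distance, DTW tolerates local stretching: `dtw([1, 2, 3], [1, 2, 2, 3])` is 0 because the repeated 2 is absorbed by the warping path, even though the sequences have different lengths.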

Relevance: 30.00%

Abstract:

Purpose. NaCl has proven to be an effective bitterness inhibitor, but the reason remains unclear. The purpose of this study was to examine the influence of a variety of cations and anions on the bitterness of selected oral pharmaceuticals and bitter taste stimuli: pseudoephedrine, ranitidine, acetaminophen, quinine, and urea.
Method. Human psychophysical taste evaluation using a whole mouth exposure procedure was used.
Results. The cations (all associated with the acetate anion) inhibited bitterness when mixed with pharmaceutical solutions to varying degrees. The sodium cation significantly (P < 0.003) inhibited bitterness of the pharmaceuticals more than the other cations. The anions (all associated with the sodium cation) also inhibited bitterness to varying degrees. With the exception of salicylate, the glutamate and adenosine monophosphate anions significantly (P < 0.001) inhibited bitterness of the pharmaceuticals more than the other anions. Also, there were several specific inhibitory interactions between ammonium, sodium and salicylate and certain pharmaceuticals.
Conclusions. We conclude that sodium was the most successful cation and glutamate and AMP were the most successful anions at inhibiting bitterness. Structure forming and breaking properties of ions, as predicted by the Hofmeister series, and other physical-chemical ion properties failed to significantly predict bitterness inhibition.

Relevance: 30.00%

Abstract:

Drawing as a means of recording is a very common practice in junior primary science lessons, largely due to the ready availability of the necessary materials. Also, most young children have some degree of drawing skill and enjoy drawing activities. Since 1956 the science curriculum implemented in primary classrooms in Victoria has changed from one based largely on nature study (biological) to one that includes physical and technological aspects. Further, there have been changes in the teaching methodologies advocated for use in science lessons. A modified Interactive Teaching Approach was used for the studies, with drawing as the main means by which the children recorded information. The topic of 'shells' was used to collect data about the children's enjoyment of the activity and satisfaction with their achievement. This study was replicated using the topic 'rocks'; again data were collected concerning satisfaction and enjoyment. During a series of lessons on 'snails', data were collected concerning the achievement of 'process' and 'objective' purposes that teachers might have in mind when setting a drawing activity. In addition to providing data about purposes, the study raised questions about the techniques the children had used in their drawings. Accordingly, data concerning the children's use of graphic techniques were collected during a series of lessons on 'oils'. The data collected and analysed in the various studies highlighted the value of drawing in junior primary school science lessons and validated strategies developed by the author to help teachers and children use drawing effectively in science activities.

Relevance: 30.00%

Abstract:

This paper continues the prior research undertaken by Warren and Leitch (2009), in which a series of initial research findings were presented. Those findings identified that in Australia, Supply Chain Management (SCM) systems were the weak link of Australian critical infrastructure. This paper focuses on the security and risk issues associated with SCM systems and puts forward a new SCM Security Risk Management method, continuing the research presented at the European Conference on Information Warfare in 2009. The paper proposes a new security risk analysis model that deals with the complexity of protecting SCM critical infrastructure systems and introduces a new approach that organisations can apply to protect their SCM systems. It describes the importance of SCM systems from a critical infrastructure protection perspective, discusses their role in supporting centres of population, and gives examples of the impact of failure. Finally, the paper proposes a new SCM security risk analysis method that addresses both SCM-specific security issues and those associated with information security, and discusses a risk framework that can be used to protect against the associated high- and low-level security risks.

Relevance: 30.00%

Abstract:

Feature aggregation is a critical technique in content-based image retrieval (CBIR) that combines multiple feature distances to obtain image dissimilarity. Conventional parallel feature aggregation (PFA) schemes fail to effectively filter out irrelevant images using individual visual features before ranking the images in a collection. Series feature aggregation (SFA) is a new scheme that aims to address this problem. This paper investigates three important properties of SFA that are significant for system design: the irrelevance of feature order, the convertibility of SFA and PFA, and the superior performance of SFA. Furthermore, based on a Gaussian kernel density estimator, the authors propose a new method to estimate the visual threshold, the key parameter of SFA. Experiments conducted with the IAPR TC-12 benchmark image collection (ImageCLEF2006), which contains over 20,000 photographic images and defined queries, show that SFA can outperform conventional PFA schemes.
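
The series-versus-parallel distinction can be sketched as follows; the feature names, distances, and single threshold are invented for illustration:

```python
def series_aggregate(images, feature_dists, thresholds):
    """SFA-style sketch: every feature except the last filters the
    collection by its visual threshold; the last feature ranks survivors."""
    survivors = list(images)
    for dists, t in zip(feature_dists[:-1], thresholds):
        survivors = [img for img in survivors if dists[img] <= t]
    final = feature_dists[-1]
    return sorted(survivors, key=lambda img: final[img])

images = ["a", "b", "c", "d"]
color_dist = {"a": 0.1, "b": 0.9, "c": 0.2, "d": 0.8}    # filtering feature
texture_dist = {"a": 0.3, "b": 0.1, "c": 0.2, "d": 0.4}  # ranking feature
ranked = series_aggregate(images, [color_dist, texture_dist], thresholds=[0.5])
```

Images b and d are filtered out by the colour stage before the texture distance is ever consulted; a parallel scheme would instead combine both distances for all four images.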


Relevance: 30.00%

Abstract:

Optical inspection techniques are widely used in industry because they are non-destructive. Since defect patterns are rooted in the manufacturing processes of the semiconductor industry, efficient and effective defect detection and pattern recognition algorithms are in great demand to trace defects back to their causes; modifying the manufacturing processes can then eliminate the defects and improve yield. Defect patterns such as rings, semicircles, scratches, and clusters are the most common in the semiconductor industry. Conventional methods cannot identify scale-variant, shift-variant, or rotation-variant defect patterns that in fact stem from the same failure cause. To address these problems, a new approach is proposed in this paper to detect these defect patterns in noisy images. First, a novel scheme is developed to simulate datasets of these four patterns for classifier training and testing. Second, for real optical images, a series of image processing operations is applied in the detection stage of our method. In the identification stage, defects are resized and then identified by the trained support vector machine. An Adaptive Resonance Theory network (ART-1) is also implemented for comparison. Classification results on both simulated data and real noisy raw data show the effectiveness of our method.

Relevance: 30.00%

Abstract:

In anticipation of helping students mature from passive to more active learners while engaging with the issues and concepts surrounding computer security, a student-generated Multiple Choice Question (MCQ) learning strategy was designed and deployed as a replacement for an assessment task previously based on students providing solutions to a series of short-answer questions. To determine whether there was any educational value in students generating their own MCQs, participants were required to design MCQs. Prior to undertaking this assessment activity each participant completed a pre-test of 45 MCQs based on the topics of the assessment; following the activity, participants completed a post-test consisting of the same MCQs. The pre- and post-test results, as well as the post-test and assessment activity results, were tested for statistical significance. The results indicated that having students generate their own MCQs as a method of assessment did not have a negative effect on the learning experience. By providing students with a framework, based on the literature, to support their engagement with the learning material, we believe the creation of well-structured MCQs resulted in a more advanced understanding of the relationships between the concepts of the learning material than plainly answering a series of short-answer questions from a textbook. Further study is required to determine to what degree this learning strategy encouraged a deeper approach to learning.

Relevance: 30.00%

Abstract:

Polypyrrole is a material with immensely useful properties suitable for a wide range of electrochemical applications, but its development has been hindered by cumbersome manufacturing processes. Here we show that a simple modification to the standard electrochemical polymerization method produces polypyrrole films of equivalently high conductivity and superior mechanical properties in one-tenth of the polymerization time. Preparing the film as a series of electrodeposited layers with thorough solvent washing between layering was found to produce excellent quality films even when layer deposition was accelerated by high current. The washing step between the sequentially polymerized layers altered the deposition mechanism, eliminating the typical dendritic growth and generating nonporous deposits. Solvent washing was shown to reduce the concentration of oligomeric species in the near-electrode region and hinder the three-dimensional growth mechanism that occurs by deposition of secondary particles from solution. As artificial muscles, the high density sequentially polymerized films produced the highest mechanical work output yet reported for polypyrrole actuators.

Relevance: 30.00%

Abstract:

Biomedical time series clustering, which automatically groups a collection of time series according to their internal similarity, is important for medical record management and inspection, such as bio-signal archiving and retrieval. In this paper, a novel framework that automatically groups a set of unlabelled multichannel biomedical time series according to their internal structural similarity is proposed. Specifically, we treat a multichannel biomedical time series as a document and extract local segments from the time series as words. We extend a topic model, the Hierarchical probabilistic Latent Semantic Analysis (H-pLSA), originally developed for visual motion analysis, to cluster a set of unlabelled multichannel time series. The H-pLSA models each channel of the multichannel time series using a local pLSA in the first layer. The topics learned in the local pLSA are then fed to a global pLSA in the second layer to discover the categories of multichannel time series. Experiments on a dataset extracted from multichannel Electrocardiography (ECG) signals demonstrate that the proposed method performs better than previous state-of-the-art approaches and is relatively robust to variations of parameters, including the length of local segments and the dictionary size. Although the experimental evaluation used multichannel ECG signals in a biometric scenario, the proposed algorithm is a universal framework for clustering multichannel biomedical time series according to their structural similarity, which has many applications in biomedical time series management.
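
The "time series as document" step can be illustrated with a toy word extractor; the window size and the up/down alphabet are illustrative choices, not the paper's representation:

```python
def segments_to_words(channel, window=3, step=1):
    """Slide a window over one channel and encode each local segment as a
    discrete 'word': one letter per point, above (u) or not above (d) the
    segment mean."""
    words = []
    for start in range(0, len(channel) - window + 1, step):
        seg = channel[start:start + window]
        mean = sum(seg) / window
        words.append("".join("u" if v > mean else "d" for v in seg))
    return words

words = segments_to_words([1, 2, 3, 2, 1])
```

The resulting words form the vocabulary over which the first-layer local pLSA can learn per-channel topics.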

Relevance: 30.00%

Abstract:

Analysis based on a holistic multiple-time-series system is a practical and crucial topic. In this paper, we study a new problem: how data are produced by a multiple-time-series system, that is, how to model the rules by which time series data are generated and evolve (denoted here as semantics). We assume there exists a set of latent states that form the basis of the system and drive its data generation and evolution. The problem therefore poses several challenges: (1) how to detect the latent states; (2) how to learn the rules based on those states; (3) what the semantics can be used for. Hence, a novel correlation-field-based semantics learning method is proposed. In this method, we first detect the latent state assignment by comprehensively considering several characteristics of multiple time series, including tick-by-tick data, temporal ordering, and the relationships among the series. The semantics are then learnt using Bayesian and Markov properties. The learned semantics can be applied in various applications, such as prediction or anomaly detection, for further analysis. We therefore propose two algorithms based on the semantics knowledge, which make next-n-step predictions and detect anomalies, respectively. Experiments on real-world datasets were conducted to show the effectiveness of the proposed method.
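
A drastically simplified sketch of the prediction idea, with observed discrete states standing in for the paper's latent states and plain transition counts standing in for the learned semantics:

```python
from collections import Counter, defaultdict

def learn_transitions(states):
    """Count first-order Markov transitions between successive states."""
    trans = defaultdict(Counter)
    for cur, nxt in zip(states, states[1:]):
        trans[cur][nxt] += 1
    return trans

def predict_next(trans, current, steps=1):
    """Next-n-step prediction: repeatedly follow the most likely transition."""
    state = current
    for _ in range(steps):
        if state not in trans:
            return None  # state never observed; no rule learned for it
        state = trans[state].most_common(1)[0][0]
    return state

history = ["lo", "hi", "lo", "hi", "lo", "hi"]
trans = learn_transitions(history)
```

An anomaly detector could use the same table in reverse: flag a transition whose learned frequency is near zero.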