31 results for computational complexity

in Deakin Research Online - Australia


Relevance:

70.00%

Abstract:

Long term evolution (LTE) is designed for high data rates, higher spectral efficiency, and lower latency, as well as high-capacity voice support. LTE uses the single carrier frequency division multiple access (SC-FDMA) scheme for uplink transmission and orthogonal frequency division multiple access (OFDMA) for the downlink. Among the most important challenges for a terminal implementation are channel estimation (CE) and equalization. In this paper, a minimum mean square error (MMSE) based channel estimator is proposed for OFDMA systems that avoids the ill-conditioned least squares (LS) problem with lower computational complexity. This channel estimation technique uses knowledge of channel properties to estimate the unknown channel transfer function at non-pilot subcarriers.
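
For illustration, a minimal NumPy sketch of the classical MMSE interpolation step such an estimator builds on: LS estimates at the pilot subcarriers are smoothed through the channel's frequency correlation. The exponential correlation model, pilot spacing, and noise level are assumptions for the toy example, not the paper's parameters.

```python
import numpy as np

def mmse_channel_estimate(h_ls_pilots, R_hp, R_pp, noise_var):
    """Classical MMSE interpolation of a channel transfer function.

    h_ls_pilots : LS estimates at the pilot subcarriers
    R_hp        : cross-correlation between all subcarriers and the pilots
    R_pp        : autocorrelation among the pilot subcarriers
    noise_var   : noise variance on the LS pilot estimates
    """
    # h_mmse = R_hp (R_pp + sigma^2 I)^-1 h_ls  (the standard MMSE smoother)
    A = R_pp + noise_var * np.eye(R_pp.shape[0])
    return R_hp @ np.linalg.solve(A, h_ls_pilots)

# Toy setup: exponentially decaying frequency correlation (an assumption).
n_sub, pilots = 64, np.arange(0, 64, 8)
idx = np.arange(n_sub)
R = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 16.0)   # channel correlation
h_true = np.linalg.cholesky(R + 1e-9 * np.eye(n_sub)) @ (
    np.random.randn(n_sub) + 1j * np.random.randn(n_sub)) / np.sqrt(2)
h_ls = h_true[pilots] + 0.1 * (np.random.randn(len(pilots))
                               + 1j * np.random.randn(len(pilots)))
h_hat = mmse_channel_estimate(h_ls, R[:, pilots], R[np.ix_(pilots, pilots)], 0.01)
print("MSE over all subcarriers:", np.mean(np.abs(h_hat - h_true) ** 2))
```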

Relevance:

60.00%

Abstract:

At first blush, user modeling appears to be a prime candidate for the straightforward application of standard machine learning techniques. Observations of the user's behavior can provide training examples that a machine learning system can use to form a model designed to predict future actions. However, user modeling poses a number of challenges that have hindered the application of machine learning, including the need for large data sets; the need for labeled data; concept drift; and computational complexity. This paper examines each of these issues and reviews approaches to resolving them.

Relevance:

60.00%

Abstract:

Today’s state-of-the-art ammunition Doppler radars use the Fourier spectrogram for the joint time-frequency analysis of ammunition Doppler signals. In this paper, we implement the joint time-frequency analysis of ammunition Doppler signals based on the theory of wavelet packets. The wavelet-based approach is demonstrated on Doppler signals for projectile velocity measurement and projectile in-bore velocity measurement, and on a modulated Doppler signal for projectile spin rate measurement. With its good resolution in both time and frequency, and its reasonable computational complexity compared to the Fourier spectrogram, the wavelet-based representation is a good alternative for the joint time-frequency analysis of ammunition Doppler signals.
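
As a rough illustration of the wavelet-packet tiling (not the paper's implementation), the sketch below decomposes a synthetic decaying-frequency chirp, standing in for a Doppler return from a decelerating projectile, with PyWavelets. The wavelet choice (db8), tree depth, and chirp profile are all assumptions.

```python
import numpy as np
import pywt  # PyWavelets

# Synthetic Doppler-like chirp: frequency decays as the projectile slows.
fs = 8000.0
t = np.arange(0, 0.5, 1.0 / fs)
f_inst = 2000.0 * np.exp(-3.0 * t)                 # assumed decay profile
signal = np.sin(2 * np.pi * np.cumsum(f_inst) / fs)

# Full wavelet-packet tree to a fixed depth; each leaf covers an equal
# frequency band, giving a spectrogram-like time-frequency tiling.
wp = pywt.WaveletPacket(signal, wavelet='db8', mode='symmetric', maxlevel=6)
leaves = wp.get_level(6, order='freq')             # ordered low to high frequency
tf_map = np.abs(np.stack([leaf.data for leaf in leaves]))
print(tf_map.shape)  # (sub-bands, time positions)
```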

Relevance:

60.00%

Abstract:

Discovering a precise causal structure that accurately reflects the given data is one of the most essential tasks in data mining and machine learning. One of the successful causal discovery approaches is the information-theoretic approach using the Minimum Message Length (MML) principle [19]. This paper presents improvements to the MML discovery algorithm along with further experimental results. We introduce a new encoding scheme for measuring the cost of describing the causal structure, and apply Stirling's approximation to further reduce the computational complexity so that the algorithm runs more efficiently. The experimental results for the current version of the discovery system show that: (1) it is capable of discovering everything discovered by the previous system; (2) it can discover more complicated causal models with a large number of variables; and (3) it runs more efficiently than the previous version in terms of time complexity.
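
Message-length costs of this kind typically involve factorials of counts, which is where a Stirling-style simplification pays off. A minimal sketch of the approximation and its error follows (the paper's actual encoding scheme is not reproduced here).

```python
import math

def log_factorial_stirling(n: int) -> float:
    """Stirling's approximation to ln(n!), useful when MML structure
    costs involve factorials of large counts."""
    if n < 2:
        return 0.0
    return n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)

for n in (10, 100, 1000):
    exact = math.lgamma(n + 1)           # ln(n!) computed exactly
    approx = log_factorial_stirling(n)
    print(n, exact, approx, exact - approx)   # error shrinks as n grows
```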

Relevance:

60.00%

Abstract:

This paper proposes an optimal strategy for extracting probabilistic rules from databases. Two inductive learning-based statistical measures, accuracy and coverage, and their rough set-based definitions are introduced. The simplicity of a rule, emphasized in this paper, has previously been ignored in the discovery of probabilistic rules. To avoid the high computational complexity of the rough-set approach, some rough-set terminology, rather than the approach itself, is applied to represent the probabilistic rules. A genetic algorithm is exploited to find the optimal probabilistic rules, namely those with the highest accuracy and coverage and the shortest length. Some heuristic genetic operators are also utilized to make the global search and the evolution of rules more efficient. Experimental results reveal that the method runs more efficiently and generates probabilistic classification rules of the same integrity when compared with traditional classification methods.
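
A minimal sketch of the two measures as they are usually defined (the genetic algorithm machinery and the rough-set encoding are omitted); the toy table and rule are invented for illustration.

```python
def rule_accuracy_coverage(records, condition, decision):
    """Accuracy and coverage of a rule 'if condition then decision'.

    accuracy = |covered and correct| / |covered|
    coverage = |covered and correct| / |records in the decision class|
    """
    covered = [r for r in records if condition(r)]
    correct = [r for r in covered if decision(r)]
    in_class = [r for r in records if decision(r)]
    accuracy = len(correct) / len(covered) if covered else 0.0
    coverage = len(correct) / len(in_class) if in_class else 0.0
    return accuracy, coverage

# Toy table; rule: "if outlook == sunny then play == no"
data = [{"outlook": "sunny", "play": "no"},
        {"outlook": "sunny", "play": "yes"},
        {"outlook": "rain",  "play": "no"}]
print(rule_accuracy_coverage(data,
                             lambda r: r["outlook"] == "sunny",
                             lambda r: r["play"] == "no"))  # (0.5, 0.5)
```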

Relevance:

60.00%

Abstract:

This paper presents an examination report on the performance of the improved MML-based causal model discovery algorithm. We first describe our improvement to the causal discovery algorithm, which introduces a new encoding scheme for measuring the cost of describing the causal structure; Stirling's approximation is also applied to further reduce the computational complexity so that the algorithm runs more efficiently. This is followed by a detailed examination of the performance of our improved discovery algorithm. The experimental results for the current version of the discovery system show that: (1) it is capable of discovering everything discovered by the previous system; (2) it can discover more complicated causal networks with a large number of variables; and (3) it runs more efficiently than the previous version in terms of time complexity.

Relevance:

60.00%

Abstract:

Biological sequence assembly is an essential step in sequencing the genomes of organisms. Sequence assembly is very computing intensive, especially at large scale, and parallel computing is an effective way to reduce the computing time and to support the assembly of large numbers of biological fragments. The Euler sequence assembly algorithm is an innovative algorithm proposed recently; its advantages are that its computational complexity is polynomial and that it provides a better solution to the notorious "repeat" problem. This paper introduces the parallelization of the Euler sequence assembly algorithm. All the genome fragments generated by whole genome shotgun (WGS) sequencing are assembled as a whole, rather than being divided into groups, which may introduce errors due to inaccurate group partitioning. The implemented system can run on supercomputers, networks of workstations, or even networks of PCs. The experimental results demonstrate the performance of our system.
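
A toy sketch of the de Bruijn graph and Eulerian-walk idea underlying Euler-style assembly, greatly simplified and sequential (the paper's contribution, the parallelization, is not shown): nodes are (k-1)-mers, each k-mer occurrence contributes one edge, and a walk that uses every edge once is a candidate reconstruction.

```python
from collections import defaultdict

def de_bruijn_graph(reads, k):
    """Build the de Bruijn graph used by Euler-style assemblers."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])   # edge (k-1)-mer -> (k-1)-mer
    return graph

def eulerian_walk(graph, start):
    """Hierholzer's algorithm: traverse every edge exactly once."""
    graph = {node: list(nbrs) for node, nbrs in graph.items()}
    stack, walk = [start], []
    while stack:
        node = stack[-1]
        if graph.get(node):
            stack.append(graph[node].pop())
        else:
            walk.append(stack.pop())
    return walk[::-1]

reads = ["ACGTAC", "GTACGT"]                  # toy overlapping fragments
walk = eulerian_walk(de_bruijn_graph(reads, k=3), "AC")
print(walk[0] + "".join(node[-1] for node in walk[1:]))  # reconstructed sequence
```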

Relevance:

60.00%

Abstract:

Minor component analysis (MCA) is an important statistical tool for signal processing and data analysis. Neural networks can be used to extract minor components from input data online. Compared with traditional algebraic approaches, a neural network method has lower computational complexity. The stability of neural network learning algorithms is crucial to practical applications. In this paper, we propose a stable MCA neural network learning algorithm, which has more satisfactory numerical stability than some existing MCA algorithms. The dynamical behaviors of the proposed algorithm are analyzed via the deterministic discrete time (DDT) method, and conditions are obtained to guarantee convergence. Simulations are carried out to illustrate the theoretical results.
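
The paper's specific learning rule is not given in the abstract. As a generic stand-in, the sketch below extracts a minor component by gradient descent on the Rayleigh quotient with re-normalization, which converges to the eigenvector of the smallest eigenvalue on this toy data; it is not the proposed algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic zero-mean data whose covariance has a known smallest eigenvector.
n, dim = 5000, 4
X = rng.standard_normal((n, dim)) * np.array([3.0, 2.0, 1.5, 0.2])
C = X.T @ X / n                              # sample covariance

w = rng.standard_normal(dim)
w /= np.linalg.norm(w)
eta = 0.05
for _ in range(2000):
    w -= eta * (C @ w - (w @ C @ w) * w)     # descend the Rayleigh quotient
    w /= np.linalg.norm(w)                   # keep unit norm for stability

eigvals, eigvecs = np.linalg.eigh(C)         # eigenvalues in ascending order
print("cosine with true minor component:", abs(w @ eigvecs[:, 0]))  # ~1.0
```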

Relevance:

60.00%

Abstract:

A number of clustering techniques for categorical data exist to group objects having similar characteristics. Some are able to handle uncertainty in the clustering process, while others have stability issues. However, the performance of these techniques is an issue due to low accuracy and high computational complexity. This paper proposes a new technique, called maximum dependency attributes (MDA), for selecting the clustering attribute. The proposed approach is based on rough set theory, taking into account the dependency of attributes in the database. We analyze and compare the performance of the MDA technique with the bi-clustering, total roughness (TR), and min-min roughness (MMR) techniques on four test cases. The results establish the better performance of the proposed approach.
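
A minimal sketch of the rough-set dependency degree that MDA-style attribute selection builds on; the tiny categorical table is illustrative only.

```python
from collections import defaultdict

def partition(records, attr):
    """Equivalence classes induced by one attribute (indiscernibility)."""
    blocks = defaultdict(set)
    for i, r in enumerate(records):
        blocks[r[attr]].add(i)
    return list(blocks.values())

def dependency(records, a, b):
    """Dependency degree k(a -> b) = |POS_a(b)| / |U|: the share of
    objects whose a-class fits entirely inside some b-class."""
    pos = 0
    b_blocks = partition(records, b)
    for block in partition(records, a):
        if any(block <= bb for bb in b_blocks):
            pos += len(block)
    return pos / len(records)

# Toy categorical table; MDA-style selection would favour the attribute
# on which the other attributes depend the most.
data = [{"colour": "red",  "size": "big"},
        {"colour": "red",  "size": "big"},
        {"colour": "blue", "size": "big"},
        {"colour": "blue", "size": "small"}]
print(dependency(data, "size", "colour"),   # 0.25
      dependency(data, "colour", "size"))   # 0.5
```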

Relevance:

60.00%

Abstract:

As one of the primary substances in a living organism, protein defines the character of each cell by interacting with the cellular environment to promote the cell’s growth and function [1]. Previous studies in proteomics indicate that the functions of different proteins can be assigned based upon protein structures [2,3]. Knowledge of protein structures gives us an overview of protein fold space and is helpful for understanding the evolutionary principles behind structure. By observing the architectures and topologies of protein families, biological processes can be investigated more directly, with much higher resolution and finer detail. For this reason, the analysis of proteins, their structures, and their interactions with other materials is emerging as an important problem in bioinformatics. However, the determination of protein structures is experimentally expensive and time-consuming, which at present leaves scientists largely dependent on sequence, rather than on more general structure, to infer the function of a protein. For this reason, data mining technology has been introduced into this area to provide more efficient data processing and knowledge discovery approaches.

Unlike many data mining applications, which lack available data, the protein structure determination problem and the study of protein interactions can draw on a vast amount of biologically relevant information on proteins and their interactions, such as the Protein Data Bank (PDB) [4], the Structural Classification of Proteins (SCOP) database [5], the CATH database [6], UniProt [7], and others. The difficulty of predicting protein structures, especially their 3D structures, and the interactions between proteins, as shown in Figure 6.1, lies in the computational complexity of the data. Although a large number of approaches have been developed to determine protein structures, such as ab initio modelling [8], homology modelling [9], and threading [10], more efficient and reliable methods are still greatly needed.

In this chapter, we introduce a state-of-the-art data mining technique, graph mining, which excels at defining and discovering interesting structural patterns in graph data sets, and we take advantage of its expressive power to study protein structures, including protein structure prediction and comparison, and protein-protein interaction (PPI). Current graph pattern mining methods are described, and typical algorithms are presented, together with their applications in protein structure analysis.
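
As a concrete example of treating a protein structure as a graph, the sketch below builds the common contact-map representation with networkx: residues become labelled nodes, and an edge joins residues whose C-alpha atoms lie within a distance cutoff (8 Å is a common convention). The coordinates here are toy values, not parsed from a real PDB entry.

```python
import numpy as np
import networkx as nx

def contact_graph(ca_coords, residue_labels, cutoff=8.0):
    """One node per residue (labelled by amino-acid type); one edge per
    pair of C-alpha atoms within `cutoff` angstroms."""
    g = nx.Graph()
    for i, label in enumerate(residue_labels):
        g.add_node(i, residue=label)
    dists = np.linalg.norm(ca_coords[:, None] - ca_coords[None, :], axis=-1)
    for i in range(len(ca_coords)):
        for j in range(i + 1, len(ca_coords)):
            if dists[i, j] <= cutoff:
                g.add_edge(i, j, distance=float(dists[i, j]))
    return g

# Toy coordinates standing in for C-alpha positions from a PDB file.
coords = np.array([[0.0, 0, 0], [3.8, 0, 0], [7.6, 0, 0], [7.6, 9.0, 0]])
g = contact_graph(coords, ["MET", "ALA", "GLY", "LYS"])
print(g.number_of_nodes(), g.number_of_edges())  # 4 nodes, 3 contacts
```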

The rest of the chapter is organized as follows: Section 6.2 gives a brief introduction to the fundamentals of proteins, the publicly accessible protein data resources, and the current research status of protein analysis; Section 6.3 focuses on one of the state-of-the-art data mining methods, graph mining; Section 6.4 then surveys existing work from the recent decade on protein structure analysis using advanced graph mining methods; finally, Section 6.5 concludes and outlines potential further work.

Relevance:

60.00%

Abstract:

In this paper, we present a method for recognising an agent's behaviour in dynamic, noisy, uncertain domains, and across multiple levels of abstraction. We term this problem on-line plan recognition under uncertainty and view it generally as probabilistic inference on the stochastic process representing the execution of the agent's plan. Our contributions in this paper are twofold. In terms of probabilistic inference, we introduce the Abstract Hidden Markov Model (AHMM), a novel type of stochastic process, provide its dynamic Bayesian network (DBN) structure, and analyse the properties of this network. We then describe an application of the Rao-Blackwellised particle filter to the AHMM, which allows us to construct an efficient, hybrid inference method for this model. In terms of plan recognition, we propose a novel plan recognition framework based on the AHMM as the plan execution model. The Rao-Blackwellised hybrid inference for the AHMM can take advantage of the independence properties inherent in a model of plan execution, leading to an algorithm for on-line probabilistic plan recognition that scales well with the number of levels in the plan hierarchy. This illustrates that while stochastic models for plan execution can be complex, they exhibit special structures which, if exploited, can lead to efficient plan recognition algorithms. We demonstrate the usefulness of the AHMM framework via a behaviour recognition system in a complex spatial environment using distributed video surveillance data.
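
For intuition about the inference machinery, here is a minimal bootstrap particle filter on a toy linear-Gaussian model. It is the flat baseline that Rao-Blackwellisation improves on by integrating out the analytically tractable part of the state; the AHMM itself is not reproduced here, and the model parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter(observations, n_particles=500):
    """Bootstrap particle filter for x_t = 0.9 x_{t-1} + v, y_t = x_t + w."""
    particles = rng.standard_normal(n_particles)
    estimates = []
    for y in observations:
        particles = 0.9 * particles + rng.standard_normal(n_particles)  # propose
        weights = np.exp(-0.5 * (y - particles) ** 2)                    # likelihood
        weights /= weights.sum()
        estimates.append(weights @ particles)                            # posterior mean
        idx = rng.choice(n_particles, size=n_particles, p=weights)       # resample
        particles = particles[idx]
    return estimates

# Simulate the model, then track the hidden state from noisy observations.
x, xs, ys = 0.0, [], []
for _ in range(50):
    x = 0.9 * x + rng.standard_normal()
    xs.append(x)
    ys.append(x + rng.standard_normal())
est = particle_filter(ys)
print("tracking RMSE:", np.sqrt(np.mean((np.array(est) - np.array(xs)) ** 2)))
```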

Relevance:

60.00%

Abstract:

While the largest common subgraph (LCSG) between a query and a database of models can provide an elegant and intuitive measure of similarity for many applications, it is computationally expensive to compute. Recently developed algorithms for subgraph isomorphism detection take advantage of prior knowledge of a database of models to improve the speed of on-line matching. This paper presents a new algorithm, based on similar principles, to solve the largest common subgraph problem. The new algorithm significantly reduces the computational complexity of detecting the LCSG between a known database of models and a query given on-line.
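
For scale, an off-the-shelf exact LCSG computation on toy graphs using networkx's ISMAGS implementation; this is the no-preprocessing baseline that database-aware algorithms of the kind described above aim to beat. The two small graphs are invented for illustration.

```python
import networkx as nx
from networkx.algorithms.isomorphism import ISMAGS

# Query: a triangle with a pendant node; model: a square with one diagonal.
query = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3)])
model = nx.Graph([("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("a", "c")])

# Exhaustive search for the largest common induced subgraph.
ismags = ISMAGS(model, query)
mappings = list(ismags.largest_common_subgraph())
print(len(mappings[0]), "nodes in a largest common subgraph:", mappings[0])
```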

Relevance:

60.00%

Abstract:

Hybrid electric vehicles are powered by an electric system and an internal combustion engine, and their components need to be coordinated in an optimal manner to deliver the desired performance. This paper presents an approach, based on a direct method, for optimal power management in hybrid electric vehicles with inequality constraints. The approach reduces the optimal control problem to a set of algebraic equations by approximating the state variable, which is the energy of the electric storage, and the control variable, which is the power of fuel consumption. This approximation uses orthogonal functions with unknown coefficients, and the inequality constraints are converted to equality constraints. The advantage of the developed method is that its computational complexity is lower than that of dynamic and non-linear programming approaches. Moreover, dynamic and non-linear programming require the problem to be discretized, resulting in a loss of optimization accuracy; the proposed method does not require discretization and therefore produces more accurate results. An example is solved to demonstrate the accuracy of the proposed approach, and the results obtained with Haar wavelets, and with Chebyshev and Legendre polynomials, are presented and discussed.
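
A sketch of the orthogonal-function idea using Chebyshev polynomials from NumPy: a trajectory becomes a short coefficient vector, differentiation becomes an algebraic operation on the coefficients, and constraints are enforced at collocation points. The energy profile, horizon, and power limit are invented for illustration; the paper couples this machinery with the actual HEV dynamics and cost.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Represent the storage energy E(t) by a short Chebyshev series, so that
# dE/dt (the electric power flow) is an exact operation on the coefficients.
t_nodes = C.chebpts1(16)                              # collocation points on [-1, 1]
E_samples = 1.0 - 0.3 * t_nodes - 0.1 * t_nodes ** 2  # assumed energy profile
coeffs = C.chebfit(t_nodes, E_samples, deg=5)         # E(t) ~ sum c_k T_k(t)
power_coeffs = C.chebder(coeffs)                      # dE/dt, still a Chebyshev series

# A path constraint such as |dE/dt| <= P_max, imposed at the collocation
# points, becomes a set of algebraic inequalities in the coefficients.
P_max = 1.0
print(np.all(np.abs(C.chebval(t_nodes, power_coeffs)) <= P_max))
```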

Relevance:

60.00%

Abstract:

Recently, a simple yet powerful branch-and-bound method called Efficient Subwindow Search (ESS) was developed to speed up sliding window search in object detection. A major drawback of ESS is that its computational complexity varies widely, from O(n²) to O(n⁴) for n × n matrices. Our experimental experience shows that ESS's performance is strongly tied to the optimal confidence level, which indicates the probability of the object's presence. In particular, when the object is not in the image, the optimal subwindow scores low and ESS may take a large number of iterations to converge to the optimal solution, and so performs very slowly. Addressing this problem, we present two significantly faster methods based on the linear-time Kadane's algorithm for 1D maximum subarray search. The first is a novel, computationally superior branch-and-bound method whose worst-case complexity is reduced to O(n³). Experiments on the PASCAL VOC 2006 data set demonstrate that this method is significantly and consistently faster (approximately 30 times faster on average) than the original ESS. Our second algorithm is an approximate algorithm based on alternating search, whose computational complexity is typically O(n²). Experiments show that, on average, it is 30 times faster again than our first algorithm, or 900 times faster than ESS. It is thus well-suited for real-time object detection.
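
The O(n³) reduction rests on the classical maximum-sum rectangle trick: fix a pair of rows, collapse the strip between them into column sums, and run 1D Kadane on the result. A sketch of that reduction follows (the branch-and-bound bounds and the alternating-search variant are not shown).

```python
import numpy as np

def kadane(arr):
    """Linear-time maximum-sum contiguous subarray (Kadane's algorithm)."""
    best = cur = arr[0]
    for x in arr[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best

def max_subwindow_score(score_matrix):
    """O(n^3) maximum-sum rectangle: for each pair of rows, collapse the
    strip into column sums and run 1D Kadane on them."""
    n_rows, _ = score_matrix.shape
    best = -np.inf
    for top in range(n_rows):
        col_sums = np.zeros(score_matrix.shape[1])
        for bottom in range(top, n_rows):
            col_sums += score_matrix[bottom]   # extend the strip one row down
            best = max(best, kadane(col_sums))
    return best

# Per-pixel scores (positive where object evidence is strong).
scores = np.array([[-1.0, 2.0, -1.0],
                   [ 3.0, 4.0, -5.0],
                   [-2.0, 1.0,  2.0]])
print(max_subwindow_score(scores))  # 8.0 (rows 0-1, columns 0-1)
```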

Relevance:

60.00%

Abstract:

This paper presents an efficient evaluation algorithm for cross-validating the two-stage approach to KFD classifiers. The proposed algorithm is of the same complexity level as the existing indirect efficient cross-validation methods, but it is more reliable since it is direct and constitutes exact cross-validation for the KFD classifier formulation. Simulations demonstrate that the proposed algorithm is almost as fast as the existing fast indirect evaluation algorithm, and that the two-stage cross-validation selects better models on most of the thirteen benchmark data sets.
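
The efficient cross-validation itself is the paper's contribution and is not reconstructed here. For context, a minimal sketch of plain single-stage, regularized KFD training, whose naive leave-one-out evaluation would retrain once per held-out point, which is exactly the cost that efficient evaluation algorithms avoid; the RBF kernel, its width, and the regularizer are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None] - B[None, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kfd_fit(X, y, gamma=0.5, mu=1e-3):
    """Minimal kernel Fisher discriminant: alpha = (N + mu*I)^-1 (m2 - m1)
    in the span of the training points, with mu the usual regularizer."""
    K = rbf_kernel(X, X, gamma)
    n = len(y)
    N = np.zeros((n, n))
    means = []
    for c in (0, 1):
        Kc = K[:, y == c]
        ell = Kc.shape[1]
        means.append(Kc.mean(axis=1))                       # class mean in feature space
        N += Kc @ (np.eye(ell) - np.ones((ell, ell)) / ell) @ Kc.T  # within-class scatter
    alpha = np.linalg.solve(N + mu * np.eye(n), means[1] - means[0])
    thresh = 0.5 * (alpha @ means[0] + alpha @ means[1])    # midpoint threshold
    return alpha, thresh

# Two Gaussian blobs as toy training data.
X = np.vstack([rng.normal(-1, 1, (30, 2)), rng.normal(1, 1, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
alpha, thresh = kfd_fit(X, y)
scores = rbf_kernel(X, X) @ alpha
print("training accuracy:", np.mean((scores > thresh).astype(int) == y))
```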