11 results for time complexity

at University of Queensland eSpace - Australia


Relevance:

60.00%

Publisher:

Abstract:

Machine learning techniques have been recognized as powerful tools for learning from data. One of the most popular learning techniques, the Back-Propagation (BP) Artificial Neural Network, can be used as a computer model to predict peptides binding to the Human Leukocyte Antigens (HLA). The major advantage of computational screening is that it reduces the number of wet-lab experiments that need to be performed, significantly reducing cost and time. A recently developed method, the Extreme Learning Machine (ELM), which has properties superior to BP, has been investigated to accomplish such tasks. In our work, we found that the ELM is as good as, if not better than, BP in terms of time complexity, accuracy deviation across experiments, and, most importantly, prevention of over-fitting in the prediction of peptide binding to HLA.
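
The ELM referred to above trains in a single pass: the hidden-layer weights are drawn at random and only the output weights are fitted by a least-squares solve. Below is a minimal sketch of that idea, assuming a fixed-length numeric encoding of each peptide and a binary binder/non-binder label; the hidden-layer size, activation, and decision threshold are illustrative choices, not taken from the study.

import numpy as np

class ELM:
    """Single-hidden-layer network with random input weights (sketch)."""

    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_features = X.shape[1]
        # Input weights and biases are random and never trained.
        self.W = self.rng.normal(size=(n_features, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)            # hidden-layer activations
        # Output weights come from one pseudoinverse solve, which is why
        # ELM training is far cheaper than iterative back-propagation.
        self.beta = np.linalg.pinv(H) @ y
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta > 0.5).astype(int)    # 1 = predicted binder

Because fitting reduces to a single linear solve, training time grows roughly linearly with the number of peptides, which is the usual source of ELM's time-complexity advantage over back-propagation.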

Relevance:

60.00%

Publisher:

Abstract:

With rapid advances in video processing technologies and ever faster increases in network bandwidth, the popularity of video content publishing and sharing has made similarity search an indispensable operation for retrieving videos of user interest. Video similarity is usually measured by the percentage of similar frames shared by two video sequences, and each frame is typically represented as a high-dimensional feature vector. Unfortunately, the high complexity of video content has posed the following major challenges for fast retrieval: (a) effective and compact video representations, (b) efficient similarity measurements, and (c) efficient indexing on the compact representations. In this paper, we propose a number of methods to achieve fast similarity search for very large video databases. First, each video sequence is summarized into a small number of clusters, each of which contains similar frames and is represented by a novel compact model called Video Triplet (ViTri). ViTri models a cluster as a tightly bounded hypersphere described by its position, radius, and density. The ViTri similarity is measured by the volume of intersection between two hyperspheres multiplied by the minimal density, i.e., the estimated number of similar frames shared by two clusters. The total number of similar frames is then estimated to derive the overall similarity between two video sequences. Hence the time complexity of the video similarity measure can be greatly reduced. To further reduce the number of similarity computations on ViTris, we introduce a new one-dimensional transformation technique which rotates and shifts the original axis system using PCA in such a way that the original inter-distances between high-dimensional vectors are maximally retained after mapping. An efficient B+-tree is then built on the transformed one-dimensional values of the ViTris' positions. Such a transformation enables the B+-tree to achieve its optimal performance by quickly filtering out a large portion of non-similar ViTris. Our extensive experiments on real, large video datasets demonstrate the effectiveness of our proposals, which significantly outperform existing methods.
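
The pruning step described above can be illustrated with a small sketch. It assumes each ViTri is reduced to a (position, radius, density) triple and substitutes a sorted array with binary search for the B+-tree; the PCA axis, the window rule, and all names are illustrative rather than the paper's exact construction.

import numpy as np

def pca_axis(positions):
    # First principal component of the ViTri position vectors.
    centered = positions - positions.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]

def build_index(positions):
    # Project every ViTri position onto one axis and keep a sorted view;
    # the sorted array stands in for the B+-tree of the paper.
    axis = pca_axis(positions)
    keys = positions @ axis
    order = np.argsort(keys)
    return axis, keys[order], order

def candidate_vitris(query_pos, query_radius, index, max_radius):
    # Projection onto a unit vector never increases distance, so any ViTri
    # whose 1-D key lies farther than (query_radius + max_radius) from the
    # query's key cannot intersect the query hypersphere and is skipped.
    axis, sorted_keys, order = index
    q = float(query_pos @ axis)
    window = query_radius + max_radius
    lo = np.searchsorted(sorted_keys, q - window, side="left")
    hi = np.searchsorted(sorted_keys, q + window, side="right")
    return order[lo:hi]     # only these need the full similarity computation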

Relevance:

30.00%

Publisher:

Abstract:

We give a simple proof of a formula for the minimal time required to simulate a two-qubit unitary operation using a fixed two-qubit Hamiltonian together with fast local unitaries. We also note that a related lower bound holds for arbitrary n-qubit gates.

Relevance:

30.00%

Publisher:

Abstract:

Although the n-back task has been widely applied in neuroimaging investigations of working memory (WM), the role of practice effects on behavioural performance of this task has not yet been investigated. The current study aimed to investigate the effects of task complexity and familiarity on the n-back task. Seventy-seven participants (39 male, 38 female) completed a visuospatial n-back task four times, twice in each of two testing sessions separated by a week. Participants were required to remember either the first, second, or third (n-back) most recent letter position in a continuous sequence and to indicate whether the current item matched or did not match the remembered position. A control task with no working memory requirements required participants to match a predetermined stimulus position. In both testing sessions, reaction time (RT) and error rate increased with increasing WM load. An exponential slope for RTs in the first session indicated dual-task interference at the 3-back level. However, a linear slope in the second session indicated a reduction of dual-task interference. Attenuation of interference in the second session suggested a reduction in the executive demands of the task with practice. This suggests that practice effects occur within the n-back task and need to be controlled for in future neuroimaging research using the task.

Relevance:

30.00%

Publisher:

Abstract:

The study reported in this article is part of a large-scale study investigating syntactic complexity in second language (L2) oral data in commonly taught foreign languages (English, German, Japanese, and Spanish; Ortega, Iwashita, Rabie, & Norris, in preparation). In this article, preliminary findings of the analysis of the Japanese data are reported. Syntactic complexity, which refers to syntactic maturity or the use of a range of forms with varying degrees of sophistication (Ortega, 2003), has long been of interest to researchers in L2 writing. In L2 speaking, researchers have examined syntactic complexity in learner speech in the context of pedagogic intervention (e.g., task type, planning time) and the validation of rating scales. In these studies complexity is examined using measures commonly employed in L2 writing studies. It is assumed that these measures are valid and reliable, but few studies explain what syntactic complexity measures actually assess. The language studied is predominantly English, and little is known about whether the findings of such studies can be applied to languages that are typologically different from English. This study examines how syntactic complexity measures relate to oral proficiency in Japanese as a foreign language. An in-depth analysis of speech samples from 33 learners of Japanese is presented. The results of the analysis are compared across proficiency levels and cross-referenced with three other proficiency measures used in the study. As in past studies, the length of T-units and the number of clauses per T-unit are found to be the best predictors of learner proficiency; these measures also had a significant linear relation with independent oral proficiency measures. These results are discussed in light of the notion of syntactic complexity and the interfaces between second language acquisition and language testing.

Relevance:

30.00%

Publisher:

Abstract:

Theoretical analyses of air traffic complexity were carried out using the Method for the Analysis of Relational Complexity. Twenty-two air traffic controllers examined static air traffic displays and were required to detect and resolve conflicts. Objective measures of performance included conflict detection time and accuracy. Subjective perceptions of mental workload were assessed by a complexity-sorting task and subjective ratings of the difficulty of different aspects of the task. A metric quantifying the complexity of pair-wise relations among aircraft was able to account for a substantial portion of the variance in the perceived complexity and difficulty of conflict detection problems, as well as reaction time. Other variables that influenced performance included the mean minimum separation between aircraft pairs and the amount of time that aircraft spent in conflict.

Relevance:

30.00%

Publisher:

Abstract:

Collaborative filtering is regarded as one of the most promising recommendation algorithms. Item-based approaches to collaborative filtering identify the similarity between two items by comparing users' ratings on them. In these approaches, ratings produced at different times are weighted equally; that is, changes in user purchase interest are not taken into consideration. For example, an item that was rated recently by a user should have a bigger impact on the prediction of future user behaviour than an item that was rated a long time ago. In this paper, we present a novel algorithm to compute time weights for different items in a manner that assigns decreasing weight to older data. Moreover, users' purchase habits vary, and even the same user has quite different attitudes towards different items. Our proposed algorithm therefore uses clustering to discriminate between different kinds of items. Within each item cluster, we trace each user's change in purchase interest and introduce a personalized decay factor according to the user's own purchase behaviour. Empirical studies have shown that our new algorithm substantially improves the precision of item-based collaborative filtering without introducing higher-order computational complexity.
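
A minimal sketch of the time-weighting idea follows. It assumes an exponential decay of a rating's influence with its age and a personalized decay rate per user and item cluster; the decay form, data layout, and names are illustrative rather than the paper's exact algorithm.

import numpy as np

def time_weight(age_days, decay_rate):
    # Older ratings receive exponentially smaller weights.
    return np.exp(-decay_rate * age_days)

def predict_rating(user, target_item, ratings, item_sim, decay_rate_of):
    """Time-weighted item-based prediction of user's rating for target_item.

    ratings[user] maps item -> (rating, age_days); item_sim[(i, j)] is the
    precomputed item-item similarity; decay_rate_of(user, item) returns the
    personalized decay factor for that item's cluster.
    """
    num, den = 0.0, 0.0
    for item, (r, age) in ratings[user].items():
        s = item_sim.get((target_item, item), 0.0)
        if s <= 0.0:
            continue
        w = s * time_weight(age, decay_rate_of(user, item))
        num += w * r
        den += w
    return num / den if den > 0 else None

The prediction remains the standard item-based weighted average; only the weights change, which is why the base method's computational complexity is preserved.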

Relevance:

30.00%

Publisher:

Abstract:

MICE (meetings, incentives, conventions, and exhibitions) has generated substantial foreign exchange revenue for economies worldwide. In Thailand, MICE tourists are recognized as ‘quality’ visitors, mainly because of their high-spending potential. However, Thailand’s MICE sector has been affected by a number of crises since September 11, 2001. Consequently, professionals in the MICE sector must be prepared to deal with complex crisis phenomena that might occur in the future. While a number of studies have examined the complexity of crises in the tourism context, there has been little focus on such issues in the MICE sector. As chaos theory provides a particularly good model for crisis situations, the aim of this paper is to propose a chaos theory-based approach to understanding the complex and chaotic system of the MICE sector in times of crisis.

Relevance:

30.00%

Publisher:

Abstract:

We propose a method for the timing analysis of concurrent real-time programs with hard deadlines. We divide the analysis into a machine-independent and a machine-dependent task. The latter takes into account the execution times of the program on a particular machine. Therefore, our goal is to make the machine-dependent phase of the analysis as simple as possible. We succeed in the sense that the machine-dependent phase remains the same as in the analysis of sequential programs. We shift the complexity introduced by concurrency completely to the machine-independent phase.
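
The separation can be pictured with a small sketch: the machine-independent phase determines how often each part of the program can execute in the worst case, and the machine-dependent phase only supplies the cost of each part on one target machine. The block names, counts, and cycle costs below are purely illustrative, not the paper's formalism.

# Machine-independent result: worst-case execution counts per basic block,
# e.g. derived from loop bounds and program structure alone.
block_counts = {"init": 1, "loop_body": 100, "cleanup": 1}

# Machine-dependent input: cost of each block (in cycles) on one particular
# processor.  Only this table has to change when the target machine changes.
block_cost_cycles = {"init": 40, "loop_body": 25, "cleanup": 15}

def worst_case_cycles(counts, costs):
    # Combining the two phases yields a bound on the execution time.
    return sum(counts[b] * costs[b] for b in counts)

print(worst_case_cycles(block_counts, block_cost_cycles))  # 2555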

Relevance:

30.00%

Publisher:

Abstract:

This paper derives the performance union bound of space-time trellis codes in an orthogonal frequency division multiplexing system (STTC-OFDM) over quasi-static frequency-selective fading channels, based on the distance spectrum technique. The distance spectrum is the enumeration of the codeword difference measures and their multiplicities, obtained by exhaustively searching through all possible error event paths. The exhaustive search approach can be used for low-memory-order STTCs with small frame sizes. However, with moderate memory order and frame size, the computational cost of exhaustive search increases exponentially and may become impractical for high-memory-order STTCs. This calls for advanced computational techniques such as Genetic Algorithms (GAs). In this paper, a GA with a sharing function method is used to locate the multiple solutions of the distance spectrum for high-memory-order STTCs. Simulations evaluate the performance union bound and compare the complexity of the non-GA-aided and GA-aided distance spectrum techniques. They show that the union bound gives a close performance measure at high signal-to-noise ratio (SNR). They also show that the GA-based distance spectrum technique with the sharing function requires much less computational time than the exhaustive search approach while maintaining satisfactory accuracy.
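
The sharing function used with the GA here is the standard fitness-sharing mechanism: each individual's fitness is divided by a niche count so that the population spreads over several optima instead of converging on one, which is what allows several terms of the distance spectrum to be located in a single run. The sketch below shows that mechanism in isolation, assuming a generic distance function and raw fitness values; the encoding of error event paths and all parameters are placeholders, not the paper's settings.

import random

def sharing_value(d, sigma_share, alpha=1.0):
    # Triangular sharing function: 1 at distance 0, falling to 0 at sigma_share.
    return 1.0 - (d / sigma_share) ** alpha if d < sigma_share else 0.0

def shared_fitness(population, raw_fitness, distance, sigma_share):
    # Divide each raw fitness by its niche count so crowded niches are penalized.
    shared = []
    for i, ind_i in enumerate(population):
        niche_count = sum(sharing_value(distance(ind_i, ind_j), sigma_share)
                          for ind_j in population)
        shared.append(raw_fitness[i] / niche_count)   # niche_count >= 1 (self term)
    return shared

def select_parents(population, shared_fit, k=2):
    # Fitness-proportional selection on the shared fitness values.
    return random.choices(population, weights=shared_fit, k=k)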