63 results for Adaptive Information Dispersal Algorithm
at QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
A new search-space-updating technique for genetic algorithms is proposed for continuous optimisation problems. Rather than gradually reducing the search space during the evolution process at a fixed reduction rate set ‘a priori’, the upper and lower boundaries for each variable in the objective function are dynamically adjusted based on its distribution statistics. To test its effectiveness, the technique is applied to a number of benchmark optimisation problems and compared with three other techniques, namely the genetic algorithms with parameter space size adjustment (GAPSSA) technique [A.B. Djurišic, Elite genetic algorithms with adaptive mutations for solving continuous optimization problems – application to modeling of the optical constants of solids, Optics Communications 151 (1998) 147–159], the successive zooming genetic algorithm (SZGA) [Y. Kwon, S. Kwon, S. Jin, J. Kim, Convergence enhanced genetic algorithm with successive zooming method for solving continuous optimization problems, Computers and Structures 81 (2003) 1715–1725] and a simple GA. The tests show that for well-posed problems, existing search-space-updating techniques perform well in terms of convergence speed and solution precision; however, for some ill-posed problems these techniques are statistically inferior to a simple GA. All the tests show that the proposed new technique is statistically superior to its counterparts.
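The abstract does not spell out the boundary-update rule, so the Python sketch below is only a guess at the general idea: each variable's bounds are re-centred on the current population's mean and scaled by its standard deviation, clipped to the original feasible region. The function name and the factor k are invented for illustration.

import numpy as np

def update_bounds(population, lower, upper, k=3.0):
    """Illustrative search-space update: shrink each variable's bounds
    around the population's distribution statistics (mean +/- k * std),
    clipped to the original feasible region. The exact rule used in the
    paper is not given in the abstract; this is only a sketch."""
    mean = population.mean(axis=0)
    std = population.std(axis=0)
    new_lower = np.maximum(lower, mean - k * std)
    new_upper = np.minimum(upper, mean + k * std)
    return new_lower, new_upper

# Toy usage: a population of 50 candidate solutions in 3 variables.
rng = np.random.default_rng(0)
pop = rng.uniform(-10, 10, size=(50, 3))
lo, hi = update_bounds(pop, np.full(3, -10.0), np.full(3, 10.0))
print(lo, hi)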
Abstract:
In this paper we present an Orientation Free Adaptive Step Detection (OFASD) algorithm for deployment in a smart phone for the purposes of physical activity monitoring. The OFASD algorithm detects individual steps and measures a user’s step count using the smart phone’s in-built accelerometer. The algorithm considers both the variance of an individual’s walking pattern and the orientation of the smart phone. Experimental validation of the algorithm involved the collection of data from 10 participants using five phones (worn at five different body positions) whilst walking on a treadmill at a controlled speed for periods of 5 minutes. Results indicated that, for steps detected by the OFASD algorithm, there were no significant differences between where the phones were placed on the body (p > 0.05). The mean step detection accuracies ranged from 93.4% to 96.4%. Compared to measurements acquired using existing dedicated commercial devices, the results demonstrated that using a smart phone for monitoring physical activity is promising, as it adds value to an accepted everyday accessory whilst requiring minimal interaction from the user. The algorithm can serve as the underlying component of a smart phone application designed to promote self-management of chronic disease where activity measurement is a significant factor, as it provides a practical solution with minimal requirements for user intervention and fewer constraints than current solutions.
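The abstract does not give the OFASD detection rule itself; as a rough, hypothetical illustration of orientation-free, variance-aware step detection, the sketch below thresholds the acceleration magnitude (which is independent of phone orientation) with a per-window adaptive threshold. All parameter values are invented.

import numpy as np

def count_steps(ax, ay, az, fs=50, win=2.0, k=0.6):
    """Orientation-free step counting sketch. Using the acceleration
    magnitude removes dependence on phone orientation; the detection
    threshold adapts to each window's statistics, loosely mirroring the
    variance-aware idea described for OFASD (exact method not given in
    the abstract)."""
    mag = np.sqrt(ax**2 + ay**2 + az**2)
    n = int(win * fs)  # samples per analysis window
    steps = 0
    for start in range(0, len(mag) - n, n):
        w = mag[start:start + n]
        thresh = w.mean() + k * w.std()  # adaptive per-window threshold
        above = w > thresh
        # Count rising edges through the threshold as candidate steps.
        steps += int(np.sum(above[1:] & ~above[:-1]))
    return steps

# Synthetic check: 10 s of a ~2 steps/s vertical acceleration pattern.
fs = 50
t = np.arange(0, 10, 1 / fs)
az = 9.8 + 2.0 * np.sin(2 * np.pi * 2.0 * t)
print(count_steps(np.zeros_like(t), np.zeros_like(t), az, fs=fs))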
Abstract:
In an adaptive equaliser, the time lag is an important parameter that significantly influences the performance. Only with the optimum time lag, i.e. the one corresponding to the best minimum-mean-square-error (MMSE) performance, can the available resources be put to best use. Many designs, however, choose the time lag based either on prior assumptions about the channel or simply on average experience. The relation between the MMSE performance and the time lag is investigated using a new interpretation of the MMSE equaliser, and a novel adaptive time-lag algorithm based on gradient search is then proposed. The proposed algorithm converges to the optimum time lag in the mean, as verified by the numerical simulations provided.
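The abstract does not state the gradient update itself. Since the lag is discrete, a minimal stand-in is to evaluate the Wiener MMSE at neighbouring lags for a toy channel and step downhill; the channel, equaliser length and hill-climbing loop below are all assumptions for illustration, not the paper's algorithm.

import numpy as np

def mmse_for_lag(h, snr_db, n_taps, lag):
    """MMSE of a length-n_taps linear equaliser for channel h and
    decision delay `lag` (Wiener solution, unit-power white symbols)."""
    sigma2 = 10 ** (-snr_db / 10)
    L = n_taps + len(h) - 1
    # Convolution matrix: stacked received window = H @ symbols + noise.
    H = np.zeros((n_taps, L))
    for i in range(n_taps):
        H[i, i:i + len(h)] = h
    R = H @ H.T + sigma2 * np.eye(n_taps)  # received autocorrelation
    p = H[:, lag]                          # cross-correlation for this lag
    w = np.linalg.solve(R, p)
    return 1.0 - p @ w                     # resulting MMSE

# Hypothetical discrete hill-climb on the lag.
h = np.array([0.4, 1.0, 0.6])
lag = 0
for _ in range(20):
    cands = [max(0, lag - 1), lag, min(6, lag + 1)]
    lag = min(cands, key=lambda d: mmse_for_lag(h, 20, 5, d))
print("optimum lag:", lag)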
Abstract:
We present a new algorithm for exactly solving decision-making problems represented as an influence diagram. We do not require the usual assumptions of no forgetting and regularity, which allows us to solve problems with limited information. The algorithm, which implements a sophisticated variable elimination procedure, is empirically shown to outperform a state-of-the-art algorithm on randomly generated problems of up to 150 variables and 10^64 strategies.
Abstract:
We present a new algorithm for exactly solving decision-making problems represented as influence diagrams. We do not require the usual assumptions of no forgetting and regularity; this allows us to solve problems with simultaneous decisions and limited information. The algorithm is empirically shown to outperform a state-of-the-art algorithm on randomly generated problems of up to 150 variables and 10^64 solutions. We show that these problems are NP-hard even if the underlying graph structure of the problem has low treewidth and the variables take on a bounded number of states, and that they admit no provably good approximation if variables can take on an arbitrary number of states.
Abstract:
We present a new algorithm for exactly solving decision-making problems represented as influence diagrams. We do not require the usual assumptions of no forgetting and regularity; this allows us to solve problems with simultaneous decisions and limited information. The algorithm is empirically shown to outperform a state-of-the-art algorithm on randomly generated problems of up to 150 variables and 10^64 solutions. We show that the problem is NP-hard even if the underlying graph structure of the problem has small treewidth and the variables take on a bounded number of states, but that a fully polynomial-time approximation scheme exists for these cases. Moreover, we show that the bound on the number of states is a necessary condition for any efficient approximation scheme.
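None of these abstracts detail the variable-elimination procedure itself. As a minimal, purely illustrative example of the underlying task, the Python sketch below evaluates every strategy of a two-node influence diagram by brute force; the weather/umbrella model and all numbers are invented, and this is not the paper's algorithm.

# Toy influence diagram: chance variable W (weather), decision D
# (take umbrella?), utility U(W, D). The decision is made without
# observing W, i.e. with limited information. Brute-force strategy
# enumeration, shown only to illustrate the problem being solved.
P_W = {"sun": 0.7, "rain": 0.3}
U = {("sun", "no"): 10, ("sun", "yes"): 7,
     ("rain", "no"): 0, ("rain", "yes"): 6}

def expected_utility(d):
    return sum(P_W[w] * U[(w, d)] for w in P_W)

best = max(["yes", "no"], key=expected_utility)
print(best, expected_utility(best))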
Abstract:
The need to merge multiple sources of uncertain information is an important issue in many application areas, especially when there is potential for contradictions between sources. Possibility theory offers a flexible framework to represent, and reason with, uncertain information, and there is a range of merging operators, such as the conjunctive and disjunctive operators, for combining information. However, with the proposals to date, the context of the information to be merged is largely ignored during the process of selecting which merging operators to use. To address this shortcoming, in this paper, we propose an adaptive merging algorithm which selects largely partially maximal consistent subsets (LPMCSs) of sources, that can be merged through relaxation of the conjunctive operator, by assessing the coherence of the information in each subset. In this way, a fusion process can integrate both conjunctive and disjunctive operators in a more flexible manner and thereby be more context dependent. A comparison with related merging methods shows how our algorithm can produce a more consensual result.
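For background, the two operator families the algorithm chooses between are the standard conjunctive (minimum) and disjunctive (maximum) merges of possibility distributions, sketched below in Python; the adaptive LPMCS selection itself is not reproduced, and the example distributions are invented.

import numpy as np

# Standard possibility-theory merging over a common domain of worlds.
def conjunctive_merge(pi1, pi2):
    merged = np.minimum(pi1, pi2)
    h = merged.max()  # consistency degree of the two sources
    return merged / h if h > 0 else merged  # normalise when consistent

def disjunctive_merge(pi1, pi2):
    return np.maximum(pi1, pi2)

pi_a = np.array([1.0, 0.8, 0.2, 0.0])  # possibility degrees per world
pi_b = np.array([0.3, 1.0, 0.9, 0.1])
print(conjunctive_merge(pi_a, pi_b), disjunctive_merge(pi_a, pi_b))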
Abstract:
The coefficients of an echo canceller with a near-end section and a far-end section are usually updated with the same updating scheme, such as the LMS algorithm. A novel scheme is proposed for echo cancellation that is based on the minimisation of two different cost functions, i.e. one for the near-end section and a different one for the far-end section. The approach considered leads to a substantial improvement in performance over the LMS algorithm when it is applied to both sections of the echo canceller. The convergence properties of the algorithm are derived. The proposed scheme is also shown to be robust to noise variations. Simulation results confirm the superior performance of the new algorithm.
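For reference, the LMS baseline that both sections are conventionally updated with can be sketched as follows; the two section-specific cost functions of the proposed scheme are not given in the abstract, so only the standard LMS update is shown, and the signal model in the usage lines is synthetic.

import numpy as np

def lms(x, d, n_taps=32, mu=0.01):
    """Standard LMS adaptive filter, the scheme the proposed two-cost-
    function method is compared against. x: far-end reference signal,
    d: microphone signal containing the echo; returns the error signal,
    i.e. the echo-cancelled output."""
    w = np.zeros(n_taps)
    e = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]  # most recent samples first
        y = w @ u                  # echo estimate
        e[n] = d[n] - y            # cancellation error
        w += mu * e[n] * u         # LMS coefficient update
    return e

# Synthetic usage: cancel an echo produced by a short toy echo path.
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
echo = np.convolve(x, [0.5, 0.3, -0.1])[:2000]
e = lms(x, echo)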
Abstract:
We propose a frequency domain adaptive algorithm for wave separation in wind instruments. Forward and backward travelling waves are obtained from the signals acquired by two microphones placed along the tube, while the separation filter is adapted from the information given by a third microphone. Working in the frequency domain has a series of advantages, among which are the ease of design of the propagation filter and its differentiation with respect to its parameters.

Although the adaptive algorithm was developed as a first step for the estimation of playing parameters in wind instruments, it can also be used, without any modifications, for other applications such as in-air direction of arrival (DOA) estimation. Preliminary results on these applications will also be presented.
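The separation rests on the classic two-microphone plane-wave decomposition, which in the frequency domain can be sketched as below; the adaptive filter driven by the third microphone is the paper's contribution and is not reproduced here, and the usage values are synthetic.

import numpy as np

def separate_waves(P1, P2, k, d):
    """Classic frequency-domain two-microphone decomposition. P1, P2:
    pressure spectra at x = 0 and x = d; k: wavenumber array (omega / c).
    Bins where k*d is a multiple of pi are ill-conditioned, a known
    limitation of the two-microphone method."""
    den = np.exp(1j * k * d) - np.exp(-1j * k * d)  # = 2j sin(k d)
    A = (P1 * np.exp(1j * k * d) - P2) / den        # forward wave at x = 0
    B = (P2 - P1 * np.exp(-1j * k * d)) / den       # backward wave at x = 0
    return A, B

# Check on a pure forward wave: should recover A = 1, B = 0.
freqs = np.fft.rfftfreq(1024, 1 / 8000.0)[1:]  # skip DC, where den = 0
k = 2 * np.pi * freqs / 343.0                  # wavenumber, c = 343 m/s
P1 = np.ones_like(k, dtype=complex)
P2 = np.exp(-1j * k * 0.05)                    # forward wave over d = 0.05 m
A, B = separate_waves(P1, P2, k, 0.05)
print(np.allclose(A, 1.0), np.allclose(B, 0.0))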
Abstract:
Adaptive Multiple-Input Multiple-Output (MIMO) systems achieve a much higher information rate than conventional fixed schemes due to their ability to adapt their configurations according to the wireless communications environment. However, current adaptive MIMO detection schemes exhibit either low performance (and hence low spectral efficiency) or huge computational complexity. In particular, whilst deterministic Sphere Decoder (SD) detection schemes are well established for static MIMO systems, exhibiting deterministic parallel structure, low computational complexity and quasi-ML detection performance, there are no corresponding adaptive schemes. This paper solves this problem, describing a hybrid tree-based adaptive modulation detection scheme. Fixed Complexity Sphere Decoding (FSD) and Real-Valued FSD (RFSD) are modified and combined into a hybrid scheme exploited at low and medium SNR to provide the highest possible information rate with quasi-ML Bit Error Rate (BER) performance, while Reduced Complexity RFSD, B-Chase and Decision Feedback (DFE) schemes are exploited in the high SNR regions. This algorithm provides the facility to balance detection complexity against BER performance with a compatible information rate in dynamic, adaptive MIMO communications environments.
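For background, the tree search that FSD and RFSD restructure into a fixed, parallelisable form is the classic depth-first sphere decoder, sketched below for a small real-valued system; this is illustrative only and is not the hybrid scheme proposed in the paper.

import numpy as np

def sphere_decode(y, H, alphabet, radius=np.inf):
    """Minimal depth-first sphere decoder for y = H x + n with entries
    of x drawn from a finite alphabet. QR decomposition turns the cost
    ||y - H x||^2 into a tree search over the levels of R."""
    Q, R = np.linalg.qr(H)
    z = Q.T @ y
    n = H.shape[1]
    best, best_d = None, radius

    def search(level, x, partial):
        nonlocal best, best_d
        if partial >= best_d:
            return                 # prune branches outside the sphere
        if level < 0:
            best, best_d = x.copy(), partial
            return
        for s in alphabet:         # expand children at this tree level
            x[level] = s
            r = z[level] - R[level, level:] @ x[level:]
            search(level - 1, x, partial + r * r)

    search(n - 1, np.zeros(n), 0.0)
    return best

y = np.array([0.9, -1.2, 2.1])
H = np.random.default_rng(1).normal(size=(3, 3))
print(sphere_decode(y, H, alphabet=(-1.0, 1.0)))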
Abstract:
In this paper, we propose a novel visual tracking framework based on a decision-theoretic online learning algorithm, namely NormalHedge. To make NormalHedge more robust against noise, we propose an adaptive NormalHedge algorithm, which exploits the historical information of each expert to perform more accurate prediction than the standard NormalHedge. Technically, we use a set of weighted experts to predict the state of the target to be tracked over time. The weight of each expert is learned online by pushing the cumulative regret of the learner towards that of the expert. Our simulation experiments demonstrate the effectiveness of the proposed adaptive NormalHedge compared to the standard NormalHedge method. Furthermore, the experimental results on several challenging video sequences show that the proposed tracking method outperforms several state-of-the-art methods.
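The framework builds on the standard NormalHedge learner of Chaudhuri, Freund and Hsu (2009). As background, the sketch below computes one round of NormalHedge expert weights from cumulative regrets; the tracker's expert model and the adaptive, history-based modification are not reproduced, and the bisection settings are arbitrary choices.

import numpy as np

def normalhedge_weights(regret):
    """One round of NormalHedge weighting from cumulative regrets:
    find c > 0 with mean(exp([R]_+^2 / (2c))) = e, then weight each
    expert by ([R]_+ / c) * exp([R]_+^2 / (2c))."""
    rp = np.maximum(regret, 0.0)
    if rp.max() == 0.0:
        return np.full(len(regret), 1.0 / len(regret))  # no leader: uniform
    phi = lambda c: np.mean(np.exp(rp**2 / (2.0 * c)))
    lo = rp.max()**2 / 1400.0  # keeps the exponent finite at the bracket
    hi = 1.0
    while phi(hi) > np.e:      # grow the bracket until phi(hi) <= e
        hi *= 2.0
    for _ in range(100):       # bisection: phi is decreasing in c
        c = 0.5 * (lo + hi)
        lo, hi = (c, hi) if phi(c) > np.e else (lo, c)
    w = (rp / c) * np.exp(rp**2 / (2.0 * c))
    return w / w.sum()

print(normalhedge_weights(np.array([2.0, 0.5, -1.0, 0.0])))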