176 results for Chen-Burer algorithm


Relevance: 20.00%

Publisher:

Abstract:

The authors appreciate the discusser's interest in the original paper and the valuable discussion, which provides the opportunity to clarify and reiterate a few points made in the original paper. The comments and questions raised by the discusser are addressed in the following sections.

Relevance: 20.00%

Publisher:

Abstract:

Gait period estimation is an important step in the gait recognition framework. In this paper, we propose a new gait cycle detection method based on the angles of the extreme points of both legs. To further improve the estimate of the gait period, the proposed algorithm divides the gait sequence into sections before identifying the maximum values. The proposed algorithm is scale invariant and less dependent on the silhouette shape. Its performance was evaluated on the OU-ISIR speed-variation gait database. The experimental results show that the proposed method achieved 90.2% gait recognition accuracy and outperforms previous methods reported in the literature, with the second-best achieving only 67.65% accuracy.
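
The core of the method can be pictured with a short sketch, assuming a per-frame signal of the angle at the extreme points of the two legs has already been extracted from the silhouettes; the function name, number of sections and peak-detection parameters below are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.signal import find_peaks

def estimate_gait_period(leg_angle, n_sections=4, min_distance=10):
    """Estimate the gait period (in frames) from a per-frame leg-angle signal.

    The sequence is split into sections, the dominant peak is located in
    each section, and the mean spacing between successive peaks is taken
    as the gait period estimate.
    """
    leg_angle = np.asarray(leg_angle, dtype=float)
    bounds = np.linspace(0, len(leg_angle), n_sections + 1, dtype=int)
    peak_frames = []
    for start, stop in zip(bounds[:-1], bounds[1:]):
        section = leg_angle[start:stop]
        peaks, _ = find_peaks(section, distance=min_distance)
        if peaks.size:
            # keep the largest peak found in this section
            peak_frames.append(start + peaks[np.argmax(section[peaks])])
    if len(peak_frames) < 2:
        raise ValueError("not enough peaks to estimate a period")
    return float(np.mean(np.diff(sorted(peak_frames))))
```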

Relevance: 20.00%

Publisher:

Abstract:

This paper considers the problem of identifying a high-dimensional nonlinear non-parametric system when only a limited data set is available. Algorithms are proposed for this purpose that exploit the relationship between the input variables and the output, as well as the inter-dependence among the input variables, so that the importance of each input variable can be established. A key component of these algorithms is the non-parametric two-stage input selection algorithm.
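
As a rough illustration of the two-stage idea, the sketch below ranks inputs by a nonparametric dependence measure on the output and then discounts candidates that are largely explained by inputs already selected. The mutual-information estimator and the greedy relevance-minus-redundancy score are stand-ins, not the estimators used in the paper.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def two_stage_input_selection(X, y, n_select=5):
    """Greedy input selection: stage 1 ranks inputs by their dependence on y;
    stage 2 penalises candidates that are strongly dependent on inputs
    already chosen, so redundant variables are skipped."""
    n_inputs = X.shape[1]
    relevance = mutual_info_regression(X, y)       # stage 1: input-output dependence
    selected = [int(np.argmax(relevance))]
    while len(selected) < min(n_select, n_inputs):
        best, best_score = None, -np.inf
        for j in range(n_inputs):
            if j in selected:
                continue
            # stage 2: inter-dependence with already-selected inputs
            redundancy = np.mean(
                [mutual_info_regression(X[:, [k]], X[:, j])[0] for k in selected]
            )
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected
```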

Relevance: 20.00%

Publisher:

Abstract:

An efficient and robust case sorting algorithm based on the Extended Equal Area Criterion (EEAC) is proposed in this paper for power system transient stability assessment (TSA). The time-varying degree of an equivalent image system is deduced by comparing the analysis results of Static EEAC (SEEAC), which neglects all time-varying factors, with those of Dynamic EEAC (DEEAC), which partially considers them. Case sorting rules based on transient stability severity are then set by combining the time-varying degree with fault information. A case sorting algorithm is designed with "OR" logic among multiple rules, by which each case is assigned to one of five categories: stable, suspected stable, marginal, suspected unstable and unstable. The performance of this algorithm is verified on 1652 contingency cases from 9 real Chinese provincial power systems under various operating conditions. Desirable classification accuracy is achieved for all contingency cases at very little extra computational cost, and only 9.81% of the cases require further detailed calculation under rigorous on-line TSA conditions.
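
The rule-combination step can be pictured with a small sketch. The input features, thresholds and rule forms below are placeholders invented for illustration; only the "OR" logic over multiple rules and the five output categories follow the abstract.

```python
from dataclasses import dataclass

CATEGORIES = ("stable", "suspected stable", "marginal", "suspected unstable", "unstable")

@dataclass
class CaseFeatures:
    # Illustrative inputs only: a time-varying degree obtained by comparing
    # SEEAC and DEEAC results, and a stability margin for the case.
    time_varying_degree: float
    stability_margin: float

def classify_case(c: CaseFeatures) -> str:
    """Assign a case to one of five categories using 'OR' logic: the most
    severe category whose rules fire (any single rule is enough) wins.
    All thresholds are placeholders, not values from the paper."""
    if c.stability_margin < -0.5 or c.time_varying_degree > 0.9:
        return "unstable"
    if c.stability_margin < -0.1 or c.time_varying_degree > 0.7:
        return "suspected unstable"
    if abs(c.stability_margin) <= 0.1 or c.time_varying_degree > 0.5:
        return "marginal"
    if c.stability_margin < 0.5 or c.time_varying_degree > 0.3:
        return "suspected stable"
    return "stable"
```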

Relevance: 20.00%

Publisher:

Abstract:

This research presents a fast algorithm for projected support vector machines (PSVM): a basis vector set (BVS) is selected for the kernel-induced feature space, and the training points are projected onto the subspace spanned by the selected BVS. A standard linear support vector machine (SVM) is then trained in this subspace on the projected points. Since the dimension of the subspace is determined by the size of the selected basis vector set, the size of the resulting SVM expansion can be specified. A two-stage algorithm is derived which selects and refines the basis vector set to achieve a locally optimal model. The model expansion coefficients and bias are updated recursively as the basis set and support vector set grow and shrink. The condition for a point to lie outside the span of the current basis vector set, and hence be selected as a new basis vector, is derived and embedded in the recursive procedure; this guarantees the linear independence of the resulting basis set. The proposed algorithm is tested and compared with an existing sparse primal SVM (SpSVM) and a standard SVM (LibSVM) on seven public benchmark classification problems. The new algorithm is designed for human activity recognition on smart devices and embedded sensors, where limited memory and processing resources must be exploited to the full and where more robust and accurate classification means a more satisfied user. Experimental results demonstrate the effectiveness and efficiency of the proposed algorithm. This work builds upon a previously published algorithm created specifically for activity recognition within mobile applications for the EU Haptimap project [1]. The algorithms detailed in this paper are more memory- and resource-efficient, making them suitable for larger data sets and more easily trained SVMs.
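
A minimal sketch of the projection idea, assuming an RBF kernel and a basis vector set that has already been chosen (here simply a random subset, standing in for the paper's two-stage selection and refinement): training points are mapped into the subspace spanned by the basis vectors, and a standard linear SVM is then fitted on the projected points.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import LinearSVC

def project_onto_basis(X, basis, gamma=0.5):
    """Map points into the subspace spanned by the basis vector set.

    Uses the kernel trick: coordinates are K(X, B) @ K(B, B)^(-1/2), so
    inner products in the subspace equal the projected kernel values.
    """
    K_bb = rbf_kernel(basis, basis, gamma=gamma)
    # symmetric inverse square root of K_bb (small regularisation for stability)
    w, V = np.linalg.eigh(K_bb + 1e-10 * np.eye(len(basis)))
    K_bb_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    return rbf_kernel(X, basis, gamma=gamma) @ K_bb_inv_sqrt

# Illustrative usage on synthetic data: pick a small basis, project, fit a linear SVM.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = (X[:, 0] * X[:, 1] > 0).astype(int)
basis = X[rng.choice(len(X), size=20, replace=False)]
clf = LinearSVC(max_iter=10000).fit(project_onto_basis(X, basis), y)
```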

Relevance: 20.00%

Publisher:

Abstract:

Background: Clostridium difficile (C. difficile) is a leading cause of infectious diarrhoea in hospitals. Sending faecal samples for testing expedites diagnosis and appropriate treatment. Clinical suspicion of C. difficile based on patient history, signs and symptoms is the basis for sampling. Sending faecal samples from patients with diarrhoea ‘just in case’ the patient has C. difficile may be an indication of poor clinical management.

Aim: To evaluate the effectiveness of an intervention by an Infection Prevention and Control Team (IPCT) in reducing inappropriate faecal samples sent for C. difficile testing.

Method: An audit of the number of faecal samples sent before and after a decision-making algorithm was introduced. Samples received in the laboratory were counted retrospectively for 12-week periods before and after the algorithm's introduction.

Findings: There was a statistically significant reduction in the mean number of faecal samples sent after the algorithm was introduced. Results were compared with a similar intervention carried out in 2009, in which the same message was delivered by memorandum; that memorandum had no effect on the overall number of weekly samples sent.

Conclusion: The algorithm-based intervention reduced the number of faecal samples sent for C. difficile testing and thus contributed to more effective use of the laboratory service.

Relevance: 20.00%

Publisher:

Abstract:

The Richardson–Lucy algorithm is one of the most important algorithms in image deconvolution. However, a drawback is its slow convergence. A significant acceleration was obtained using the technique proposed by Biggs and Andrews (BA), which is implemented in the deconvlucy function of the MATLAB Image Processing Toolbox. The BA method was developed heuristically, with no proof of convergence. In this paper, we introduce the heavy-ball (H-B) method for Poisson data optimization and extend it to a scaled H-B method, which includes the BA method as a special case. The method has a proven convergence rate of O(1/k^2), where k is the number of iterations. We demonstrate the superior convergence performance, by a speedup factor of five, of the scaled H-B method on both synthetic and real 3D images.
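
For orientation, the sketch below shows a standard Richardson–Lucy iteration with a simple heavy-ball style extrapolation applied before each multiplicative update. The fixed momentum weight beta is an illustrative simplification; the BA and scaled H-B methods discussed above adapt the acceleration rather than using a constant weight.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy_momentum(image, psf, n_iter=50, beta=0.5):
    """Richardson-Lucy deconvolution of a 2-D image with heavy-ball style
    extrapolation x_k + beta*(x_k - x_{k-1}) before each multiplicative
    update. beta is a fixed, illustrative momentum weight."""
    psf_mirror = psf[::-1, ::-1]
    x_prev = x = np.full_like(image, image.mean(), dtype=float)
    for _ in range(n_iter):
        # extrapolated (predicted) point, kept strictly positive
        v = np.clip(x + beta * (x - x_prev), 1e-12, None)
        blurred = fftconvolve(v, psf, mode="same")
        ratio = image / np.clip(blurred, 1e-12, None)
        # multiplicative Richardson-Lucy update from the extrapolated point
        x_prev, x = x, v * fftconvolve(ratio, psf_mirror, mode="same")
    return x
```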

Relevance: 20.00%

Publisher:

Abstract:

This paper presents a tensegrity-based co-operative control algorithm for an aircraft formation. The 6-degrees-of-freedom model of the well-known Aerosonde unmanned aerial vehicle (UAV) is integrated with the model of the tensegrity structure, and a decentralised control scheme is proposed. The strategy is shown to be scalable to 2n UAVs and able to maintain a firm geometry whilst allowing flexible shape transformations. Simulation results demonstrate the effectiveness and stability of the proposed tensegrity-based formation control algorithm in 3D.

Relevance: 20.00%

Publisher:

Abstract:

In this paper, we present a hybrid mixed cost-function adaptive initialization algorithm for the time-domain equalizer (TEQ) in a discrete multitone (DMT)-based asymmetric digital subscriber loop. Using our approach, a higher convergence rate than that of the commonly used least-mean square (LMS) algorithm is obtained, whilst attaining bit rates close to the optimum maximum shortening SNR and the upper-bound SNR. Moreover, our proposed method outperforms the minimum mean-squared error design for a range of TEQ filter lengths.
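
As a point of reference, the least-mean square baseline mentioned above can be written in a few lines: TEQ taps are driven toward a reference sequence with a stochastic-gradient update. The reference signal, tap count and step size are illustrative, and the hybrid mixed cost-function initialisation of the paper is not reproduced here.

```python
import numpy as np

def lms_teq(rx, ref, n_taps=16, mu=1e-3):
    """Plain LMS adaptation of time-domain equaliser (TEQ) taps: the TEQ
    output w @ rx[n-n_taps:n] is driven toward a reference sequence ref[n]
    (e.g. a delayed training signal of the same length as rx)."""
    w = np.zeros(n_taps)
    for n in range(n_taps, len(rx)):
        x = rx[n - n_taps:n][::-1]   # most recent samples first
        e = ref[n] - w @ x           # instantaneous error
        w += mu * e * x              # stochastic-gradient (LMS) update
    return w
```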

Relevance: 20.00%

Publisher:

Abstract:

A comprehensive continuum damage mechanics model [1] had been developed to capture the detailed behaviour of a composite structure under a crushing load. This paper explores some of the difficulties encountered in the implementation of this model and their mitigation. The use of reduced-integration elements and a strain-softening model both negatively affect the accuracy and stability of the simulation. Damage localisation effects demanded an accurate measure of characteristic length, so a robust algorithm for determining the characteristic length was implemented. Testing showed that this algorithm produced marked improvements over the default characteristic length provided by Abaqus. Zero-energy (hourglass) modes in reduced-integration elements led to reduced resistance to bending. This was compounded by the strain-softening model, which led to the formation of elements with little resistance to deformation that could invert if left unchecked. Benchmark testing showed that deleting excessively distorted elements and controlling the mesh with the built-in distortion/hourglass controls alleviates these issues. These techniques contributed significantly to the viability and usability of the damage model.
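
For illustration only, one common, simple definition of characteristic length for a solid element is the cube root of its volume computed from the nodal coordinates; the sketch below uses a 4-node tetrahedron and does not reproduce the robust algorithm described in the paper.

```python
import numpy as np

def characteristic_length_tet(nodes):
    """Characteristic length of a 4-node tetrahedral element, taken as the
    cube root of its volume (a common, simple definition used only to
    illustrate the concept).

    nodes: (4, 3) array of nodal coordinates.
    """
    p0, p1, p2, p3 = np.asarray(nodes, dtype=float)
    # tetrahedron volume from the scalar triple product of the edge vectors
    volume = abs(np.linalg.det(np.column_stack((p1 - p0, p2 - p0, p3 - p0)))) / 6.0
    return volume ** (1.0 / 3.0)
```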