27 results for Delay Vector Variance Method (DVV)

em QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast


Relevance: 100.00%

Abstract:

OBJECTIVES:

To describe a modified manual cataract extraction technique, sutureless large-incision manual cataract extraction (SLIMCE), and to report its clinical outcomes.

METHODS:

Case notes of 50 consecutive patients with cataract surgery performed using the SLIMCE technique were retrospectively reviewed. Clinical outcomes 3 months after surgery were analyzed, including postoperative uncorrected visual acuity, best-corrected visual acuity, intraoperative and postoperative complications, endothelial cell loss, and surgically induced astigmatism using the vector analysis method.

RESULTS:

At the 3-month follow-up, all 50 patients had postoperative best-corrected visual acuity of at least 20/60, and 37 patients (74%) had visual acuity of at least 20/30. Uncorrected visual acuity was at least 20/60 in 28 patients (56%) and was between 20/80 and 20/200 in 22 patients (44%). No significant intraoperative complications were encountered, and sutureless wounds were achieved in all but 2 patients. At the 3-month follow-up, endothelial cell loss was 3.9%, and the mean surgically induced astigmatism was 0.69 diopter.

CONCLUSIONS:

SLIMCE is a safe and effective manual cataract extraction technique with low rates of surgically induced astigmatism and endothelial cell loss. In view of its low cost, SLIMCE may have a potential role in reducing cataract blindness in developing countries.

Relevance: 40.00%

Abstract:

Massively parallel networks of highly efficient, high-performance Single Instruction Multiple Data (SIMD) processors have been shown to enable FPGA-based implementation of real-time signal processing applications with performance and cost comparable to dedicated hardware architectures. This is achieved by exploiting simple datapath units with deep processing pipelines. However, these architectures are highly susceptible to pipeline bubbles resulting from data and control hazards; the only way to mitigate these is manual interleaving of application tasks on each datapath, since no suitable automated interleaving approach exists. In this paper we describe a new automated integrated mapping/scheduling approach to map algorithm tasks to processors and a new low-complexity list scheduling technique to generate the interleaved schedules. When applied to a spatial Fixed-Complexity Sphere Decoding (FSD) detector for next-generation Multiple-Input Multiple-Output (MIMO) systems, the resulting schedules achieve real-time performance for IEEE 802.11n systems on a network of 16-way SIMD processors on FPGA, enable a better performance/complexity balance than current approaches, and produce results comparable to handcrafted implementations.
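The interleaving idea can be illustrated with a toy list scheduler. This is a sketch only: the task graph, durations, and greedy earliest-free-processor rule below are our own illustration, not the paper's mapping/scheduling algorithm.

```python
# Toy list scheduler: interleaving two independent task chains (A and B)
# lets one chain's tasks fill the pipeline gaps left by the other's
# dependencies. Task names and durations are hypothetical.
import heapq

def list_schedule(tasks, deps, num_procs):
    """Greedy list scheduler. tasks: {name: duration}; deps: {name: set of
    predecessors}. Returns {name: (processor, start_time)}."""
    indeg = {t: len(deps.get(t, ())) for t in tasks}
    succs = {t: [] for t in tasks}
    for t, preds in deps.items():
        for p in preds:
            succs[p].append(t)
    ready = [t for t in tasks if indeg[t] == 0]
    proc_free = [(0, p) for p in range(num_procs)]   # (free_time, proc id)
    heapq.heapify(proc_free)
    finish, sched = {}, {}
    while ready:
        t = ready.pop(0)
        free_time, p = heapq.heappop(proc_free)      # earliest-free processor
        start = max([free_time] + [finish[d] for d in deps.get(t, ())])
        sched[t] = (p, start)
        finish[t] = start + tasks[t]
        heapq.heappush(proc_free, (finish[t], p))
        for s in succs[t]:                           # release successors
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return sched

# Two independent 3-task chains of duration 2 each, on 2 processors.
tasks = {f"{i}{k}": 2 for i in "AB" for k in "123"}
deps = {f"{i}{k}": {f"{i}{int(k) - 1}"} for i in "AB" for k in "23"}
print(list_schedule(tasks, deps, num_procs=2))
```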

Relevance: 30.00%

Abstract:

Support vector machine (SVM) is a powerful technique for data classification. Despite its sound theoretical foundations and high classification accuracy, standard SVM is not suitable for the classification of large data sets, because its training complexity depends heavily on the size of the data set. This paper presents a novel SVM classification approach for large data sets based on minimum enclosing ball clustering. After the training data are partitioned by the proposed clustering method, the cluster centers are used for a first-stage SVM classification. The clusters whose centers are support vectors, together with the clusters containing more than one class, are then used for a second-stage SVM classification; most of the data are discarded at this stage. Several experimental results show that the proposed approach matches the classification accuracy of classic SVM while training significantly faster than several other SVM classifiers.
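The two-stage idea can be sketched as follows, using k-means as a stand-in for the paper's minimum enclosing ball clustering; the cluster count, parameters, and synthetic data are our own assumptions.

```python
# Two-stage SVM on clustered data: train on cluster centers first, then
# retrain only on clusters near the decision boundary.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic two-class data: two Gaussian clouds.
X = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(3, 1, (500, 2))])
y = np.array([0] * 500 + [1] * 500)

# Partition the training data (k-means stands in for MEB clustering here).
km = KMeans(n_clusters=40, n_init=10, random_state=0).fit(X)
labels = km.labels_
# Each center takes the majority class of its cluster.
center_y = np.array([np.bincount(y[labels == c]).argmax() for c in range(40)])

# Stage 1: train on the 40 cluster centers only.
svm1 = SVC(kernel="rbf").fit(km.cluster_centers_, center_y)

# Stage 2: keep clusters whose center is a support vector, or whose members
# span more than one class, and retrain on their raw points.
keep = set(svm1.support_)
mixed = {c for c in range(40) if len(np.unique(y[labels == c])) > 1}
mask = np.isin(labels, list(keep | mixed))
svm2 = SVC(kernel="rbf").fit(X[mask], y[mask])
print(f"trained on {mask.sum()} of {len(X)} points,",
      f"accuracy {svm2.score(X, y):.3f}")
```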

Relevance: 30.00%

Abstract:

Image segmentation plays an important role in the analysis of retinal images, as extraction of the optic disk provides important cues for the accurate diagnosis of various retinopathic diseases. In recent years, gradient vector flow (GVF) based algorithms have been used to successfully segment a variety of medical imagery. However, due to the compromise between internal and external energy forces within the resulting partial differential equations, these methods can produce less accurate segmentation results in certain cases. In this paper, we propose a new mean shift-based GVF segmentation algorithm that drives the internal/external energies towards the correct direction. The proposed method incorporates a mean shift operation within the standard GVF cost function to arrive at a more accurate segmentation. Experimental results on a large dataset of retinal images demonstrate that the presented method optimally detects the border of the optic disc.

Relevance: 30.00%

Abstract:

The traditional Time Division Multiple Access (TDMA) protocol provides deterministic, periodic, collision-free data transmissions. However, TDMA lacks flexibility and exhibits low efficiency in dynamic environments such as wireless LANs. Contention-based MAC protocols such as the IEEE 802.11 DCF, on the other hand, adapt to network dynamics but are generally inefficient in heavily loaded or large networks. To combine the advantages of both types of protocol, a D-CVDMA protocol is proposed. It is based on the k-round elimination contention (k-EC) scheme, which provides fast contention resolution for wireless LANs. D-CVDMA uses a contention mechanism to achieve TDMA-like collision-free data transmissions without reserving time slots for forthcoming transmissions. These features make D-CVDMA robust and adaptive to network dynamics such as nodes leaving and joining and changes in packet size and arrival rate, which in turn make it suitable for the delivery of hybrid traffic including multimedia and data content. Analyses and simulations demonstrate that D-CVDMA outperforms the IEEE 802.11 DCF and k-EC in terms of network throughput, delay, jitter, and fairness.
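The elimination-round idea behind k-EC can be sketched as a small simulation. This is a hypothetical illustration: the burst-length distribution, slot count, and function names below are ours, not taken from the paper.

```python
# k-round elimination contention sketch: in each round every surviving
# contender transmits a burst of random length, and only those with the
# longest burst survive to the next round.
import random

def k_ec(contenders, k, max_slots=8, rng=random.Random(1)):
    survivors = list(contenders)
    for _ in range(k):
        bursts = {n: rng.randint(1, max_slots) for n in survivors}
        longest = max(bursts.values())
        survivors = [n for n in survivors if bursts[n] == longest]
        if len(survivors) == 1:          # unique winner found early
            break
    return survivors

# With 20 contenders, a few rounds usually isolate a single winner, which
# can then transmit collision-free -- the TDMA-like behaviour the contention
# mechanism provides.
wins = sum(len(k_ec(range(20), k=3)) == 1 for _ in range(1000))
print(f"unique winner in {wins / 10:.1f}% of trials")
```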

Relevance: 30.00%

Abstract:

This paper presents a predictive current control (PCC) strategy for doubly-fed induction generators (DFIGs). The method predicts the DFIG's rotor current variations in the synchronous reference frame fixed to the stator flux within a fixed sampling period. The prediction is then used to directly calculate the rotor voltage required to eliminate the current errors at the end of the following sampling period. Space vector modulation is used to generate the required switching pulses within the fixed sampling period. The impact of sampling delay on the accuracy of the sampled rotor current is analyzed, and detailed compensation methods are proposed to improve the current control accuracy and system stability. Experimental results for a 1.5 kW DFIG system illustrate the effectiveness and robustness of the proposed control strategy during rotor current steps and rotating speed variation. Tests during negative-sequence current injection further demonstrate the excellent dynamic performance of the proposed PCC method.
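The core calculation in predictive current control, solving the plant model for the voltage that cancels the predicted current error one sample ahead, can be sketched on a plain R-L load. The paper works on a DFIG rotor circuit in the stator-flux frame with delay compensation; the simple plant, parameter values, and one-step horizon here are our own simplifications.

```python
# One-step deadbeat current control on a forward-Euler R-L plant.
R, L, Ts = 0.5, 5e-3, 1e-4   # resistance [ohm], inductance [H], sample period [s]

def plant(i, v):
    """Next-sample current of the discretized R-L plant."""
    return i + Ts / L * (v - R * i)

def deadbeat_voltage(i, i_ref):
    """Voltage that drives the predicted current error to zero at the next sample:
    solve i_ref = i + Ts/L * (v - R*i) for v."""
    return R * i + L * (i_ref - i) / Ts

i, i_ref = 0.0, 10.0
trace = []
for _ in range(3):
    i = plant(i, deadbeat_voltage(i, i_ref))
    trace.append(i)
print(trace)   # the current settles on the 10 A reference after one sample
```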

Relevance: 30.00%

Abstract:

The global increase in the penetration of renewable energy is pushing electrical power systems into uncharted territory, especially in terms of transient and dynamic stability. In particular, the growing penetration of wind generation in European power networks is, at times, displacing a significant capacity of conventional synchronous generation with fixed-speed induction generation and, now more commonly, doubly-fed induction generators. The impact of such changes in the generation mix requires careful monitoring to assess the impact on transient and dynamic stability. This study presents a measurement-based method for the early detection of power system oscillations, with consideration of mode damping, in order to raise alarms and develop strategies to actively improve power system dynamic stability and security. A method is developed based on wavelet-based support vector data description (SVDD) to detect oscillation modes in wind farm output power which may excite dynamic instabilities in the wider system. The wavelet transform is used as a filter to identify oscillations in frequency bands, whereas the SVDD method is used to extract dominant features from different scales and generate an assessment boundary according to the extracted features. Poorly damped oscillations of large magnitude, or those that are resonant, can be flagged to the system operator to reduce the risk of system instability. The proposed method is exemplified using measured data from a chosen wind farm site.

Relevance: 30.00%

Abstract:

This paper presents a feature selection method for data classification, which combines a model-based variable selection technique and a fast two-stage subset selection algorithm. The relationship between a specified (and complete) set of candidate features and the class label is modelled using a non-linear full regression model which is linear-in-the-parameters. The performance of a sub-model, measured by the sum of squared errors (SSE), is used to score the informativeness of the subset of features involved in the sub-model. The two-stage subset selection algorithm converges to a solution sub-model whose SSE is locally minimized. The features involved in the solution sub-model are selected as inputs to support vector machines (SVMs) for classification. The memory requirement of this algorithm is independent of the number of training patterns, a property that makes the method suitable for applications executed on mobile devices where physical RAM is very limited. An application was developed for activity recognition, implementing the proposed feature selection algorithm and an SVM training procedure. Experiments are carried out with the application running on a PDA for human activity recognition using accelerometer data. A comparison with an information gain based feature selection method demonstrates the effectiveness and efficiency of the proposed algorithm.
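The SSE-scored subset search can be sketched with a plain forward pass. This is illustrative only: the paper's algorithm is two-stage (with a backward refinement) and uses a non-linear linear-in-the-parameters model, whereas this sketch uses a plain linear model and synthetic data of our own.

```python
# Forward feature selection scoring candidate subsets by the SSE of a
# least-squares fit.
import numpy as np

def sse(X, y):
    """Sum of squared errors of the least-squares fit y ~ X."""
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ theta
    return float(r @ r)

def forward_select(X, y, k):
    """Greedily add the feature giving the largest SSE reduction."""
    chosen, remaining = [], list(range(X.shape[1]))
    while len(chosen) < k:
        best = min(remaining, key=lambda j: sse(X[:, chosen + [j]], y))
        chosen.append(best)
        remaining.remove(best)
    return chosen

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 3 * X[:, 2] - 2 * X[:, 7] + 0.1 * rng.normal(size=200)  # 2 informative features
print(sorted(forward_select(X, y, k=2)))  # the informative pair is recovered
```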

Relevance: 30.00%

Abstract:

Nonlinear principal component analysis (PCA) based on neural networks has drawn significant attention as a monitoring tool for complex nonlinear processes, but there remains a difficulty with determining the optimal network topology. This paper exploits the advantages of the Fast Recursive Algorithm, where the number of nodes, the location of centres, and the weights between the hidden layer and the output layer can be identified simultaneously for the radial basis function (RBF) networks. The topology problem for the nonlinear PCA based on neural networks can thus be solved. Another problem with nonlinear PCA is that the derived nonlinear scores may not be statistically independent or follow a simple parametric distribution. This hinders its applications in process monitoring since the simplicity of applying predetermined probability distribution functions is lost. This paper proposes the use of a support vector data description and shows that transforming the nonlinear principal components into a feature space allows a simple statistical inference. Results from both simulated and industrial data confirm the efficacy of the proposed method for solving nonlinear principal component problems, compared with linear PCA and kernel PCA.

Relevance: 30.00%

Abstract:

The effects of four process factors (pH, emulsifier (gelatin) concentration, mixing, and batch) on the % w/w entrapment of propranolol hydrochloride in ethylcellulose microcapsules prepared by the solvent evaporation process were examined using a factorial design. In this design, the minimum % w/w entrapments of propranolol hydrochloride (0.71-0.91% w/w) were observed whenever the external aqueous phase contained 1.5% w/v gelatin at pH 6.0, whereas maximum entrapments (8.9-9.1% w/w) occurred whenever the external aqueous phase was composed of 0.5% w/v gelatin at pH 9.0. The theoretical maximum loading was 50% w/w. Statistical evaluation of the results by analysis of variance showed that emulsifier (gelatin) concentration and pH, but not mixing and batch, significantly affected entrapment. An interaction between pH and gelatin concentration was observed in the factorial design, which was attributed to the greater effect of gelatin concentration on % w/w entrapment at pH 9.0 than at pH 6.0. Maximum theoretical entrapment was achieved by increasing the pH of the external phase to 12.0. Marked increases in drug entrapment were observed whenever the pH of the external phase exceeded the pKa of propranolol hydrochloride. It was concluded that pH, and hence ionisation, was the greatest determinant of the entrapment of propranolol hydrochloride in microcapsules prepared by the solvent evaporation process.

Relevance: 30.00%

Abstract:

The R-matrix incorporating time (RMT) method is a recently developed method for solving the time-dependent Schrödinger equation for multielectron atomic systems exposed to intense short-pulse laser light. We have employed the RMT method to investigate the time delay in the photoemission of an electron liberated from a 2p orbital in a neon atom with respect to one released from a 2s orbital following absorption of an attosecond xuv pulse. Time delays for xuv pulses in the range 76-105 eV are presented. For an xuv pulse at the experimentally relevant energy of 105.2 eV, we calculate the time delay to be 10.2±1.3 attoseconds (as), somewhat larger than estimated by other theoretical calculations, but still a factor of 2 smaller than experiment. We repeated the calculation for a photon energy of 89.8 eV with a larger basis set capable of modeling correlated-electron dynamics within the neon atom and the residual Ne+ ion. A time delay of 14.5±1.5 as was observed, compared to a 16.7±1.5 as result using a single-configuration representation of the residual Ne+ ion.

Relevance: 30.00%

Abstract:

As a promising method for pattern recognition and function estimation, least squares support vector machines (LS-SVM) express training as the solution of a linear system instead of the quadratic programming problem required by conventional support vector machines (SVM). In this paper, by using the information provided by the equality constraint, we transform the minimization problem with a single equality constraint in LS-SVM into an unconstrained minimization problem, and then propose reduced formulations for LS-SVM. Through this transformation, the number of applications of the conjugate gradient (CG) method, the most time-consuming step in obtaining the numerical solution, is reduced from two, as proposed by Suykens et al. (1999), to one. A comparison of the computational speed of our method with the CG method of Suykens et al. and with the first-order and second-order SMO methods on several benchmark data sets shows a reduction in training time of up to 44%. (C) 2011 Elsevier B.V. All rights reserved.
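That LS-SVM training amounts to one linear system can be seen in a small sketch. This uses a direct dense solve on toy data; the paper's contribution concerns the iterative CG solution of this same kind of system and its reduced formulations, which are not reproduced here.

```python
# LS-SVM classifier training as a single linear solve:
#   [ 0   y^T          ] [b]     [0]
#   [ y   Omega + I/C  ] [alpha] [1]
# with Omega_ij = y_i y_j K(x_i, x_j).
import numpy as np

def rbf(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lssvm_train(X, y, C=10.0):
    n = len(y)
    Omega = np.outer(y, y) * rbf(X, X)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / C
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                 # bias b, multipliers alpha

def lssvm_predict(X, y, alpha, b, Xnew):
    return np.sign(rbf(Xnew, X) @ (alpha * y) + b)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.array([-1.0] * 50 + [1.0] * 50)
b, alpha = lssvm_train(X, y)
acc = (lssvm_predict(X, y, alpha, b, X) == y).mean()
print(f"training accuracy: {acc:.2f}")
```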

Relevance: 30.00%

Abstract:

Support vector machines (SVMs), though accurate, are not preferred in applications requiring high classification speed or deployment on systems with limited computational resources, due to the large number of support vectors involved in the model. To overcome this problem we have devised a primal SVM method with the following properties: (1) it solves for the SVM representation without the need to invoke the representer theorem; (2) forward and backward selections are combined to approach the final globally optimal solution; and (3) a criterion is introduced for the identification of support vectors, leading to a much reduced support vector set. In addition to introducing this method, the paper analyzes the complexity of the algorithm and presents test results on three public benchmark problems and a human activity recognition application. These applications demonstrate the effectiveness and efficiency of the proposed algorithm.
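Training an SVM in the primal, rather than via the dual QP, can be sketched as subgradient descent on the regularized hinge loss. This is our illustration of primal training in general; the paper's forward/backward selection and support-vector reduction are not reproduced here.

```python
# Primal linear SVM: subgradient descent on  lam/2 * ||w||^2 + mean hinge loss.
import numpy as np

def primal_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        viol = y * (X @ w + b) < 1                    # margin violators
        gw = lam * w - (y[viol, None] * X[viol]).sum(0) / len(X)
        gb = -y[viol].sum() / len(X)
        w -= lr * gw
        b -= lr * gb
    return w, b

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 0.4, (60, 2)), rng.normal(1, 0.4, (60, 2))])
y = np.array([-1.0] * 60 + [1.0] * 60)
w, b = primal_svm(X, y)
acc = (np.sign(X @ w + b) == y).mean()
print(f"training accuracy: {acc:.2f}")
```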



Relevance: 30.00%

Abstract:

A new method is proposed which reduces the size of the memory needed to implement multirate vector quantizers. Investigations have shown that the performance of the coders implemented using this approach is comparable to that obtained from standard systems. The proposed method can therefore be used to reduce the hardware required to implement real-time speech coders.
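One common way to cut multirate codebook memory, shown here purely as an illustration since the abstract does not detail the paper's scheme, is to serve every rate from prefixes of a single shared codebook instead of storing one codebook per rate.

```python
# Multirate VQ from one shared table: rate r uses the first 2**r entries.
import numpy as np

def quantize(x, codebook, bits):
    """Quantize vectors x against the first 2**bits entries of the codebook."""
    cb = codebook[: 2 ** bits]
    idx = ((x[:, None, :] - cb[None, :, :]) ** 2).sum(-1).argmin(1)
    return idx, cb[idx]

rng = np.random.default_rng(0)
codebook = rng.normal(size=(64, 4))       # single 6-bit master codebook
x = rng.normal(size=(100, 4))
for bits in (4, 5, 6):                    # three rates, one shared table
    _, xq = quantize(x, codebook, bits)
    print(f"{bits} bits: mse {((x - xq) ** 2).mean():.3f}")
```

Because each lower rate's codebook is a subset of the next, the per-vector error is non-increasing in the rate, while only the largest codebook is ever stored.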