966 results for Fast algorithm
Abstract:
Due to its wide applicability, semi-supervised learning is an attractive method for using unlabeled data in classification. In this work, we present a semi-supervised support vector classifier that is designed using a quasi-Newton method for nonsmooth convex functions. The proposed algorithm is suitable for dealing with a very large number of examples and features. Numerical experiments on various benchmark datasets showed that the proposed algorithm is fast and gives improved generalization performance over existing methods. Further, a non-linear semi-supervised SVM has been proposed based on a multiple label switching scheme. This non-linear semi-supervised SVM converges faster and improves generalization performance on several benchmark datasets. (C) 2010 Elsevier Ltd. All rights reserved.
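A minimal illustration of the general idea of reusing unlabeled data, via plain self-training with scikit-learn's LinearSVC; this is a simplified stand-in, not the authors' quasi-Newton solver or their label-switching scheme:

# Self-training sketch for semi-supervised classification (illustration only;
# the paper's method uses a nonsmooth quasi-Newton solver and label switching).
import numpy as np
from sklearn.svm import LinearSVC

def self_training_svm(X_lab, y_lab, X_unl, n_rounds=5, C=1.0):
    clf = LinearSVC(C=C)
    clf.fit(X_lab, y_lab)                       # train on labeled data only
    for _ in range(n_rounds):
        y_pseudo = clf.predict(X_unl)           # pseudo-label the unlabeled pool
        X_all = np.vstack([X_lab, X_unl])
        y_all = np.concatenate([y_lab, y_pseudo])
        clf = LinearSVC(C=C).fit(X_all, y_all)  # re-fit on labeled + pseudo-labeled data
    return clf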
Abstract:
In many wireless applications, it is highly desirable to have a fast mechanism to resolve or select the packet from the user with the highest priority. Furthermore, individual priorities are often known only locally at the users. In this paper we introduce an extremely fast, local-information-based multiple access algorithm that selects the best node in 1.8 to 2.1 slots, which is much lower than the 2.43-slot average achieved by the best algorithm known to date. The algorithm, which we call Variable Power Multiple Access Selection (VP-MAS), uses the local channel state information from the accessing nodes to the receiver, and maps the priorities into the receive power. It is inherently distributed and scales well with the number of users. We show that mapping onto a discrete set of receive power levels is optimal, and provide a complete characterization for it. The power levels are chosen to exploit packet capture that inherently occurs in a wireless physical layer. The VP-MAS algorithm adjusts the expected number of users that contend in each step and their respective transmission powers, depending on whether previous transmission attempts resulted in capture, idle channel, or collision.
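A toy Monte Carlo sketch of the capture mechanism such schemes exploit (the power levels and capture threshold below are hypothetical, not the optimized VP-MAS mapping):

# Toy capture-model simulation for priority-to-power mapping (illustrative only;
# the discrete power levels and 6 dB capture threshold are assumed values).
import random

def one_slot(priorities, levels=(1.0, 4.0, 16.0), capture_db=6.0):
    """Each user picks a receive-power level from its priority tercile.
    Returns the index of the captured user, or None (idle or collision)."""
    if not priorities:
        return None                       # idle slot: nobody contends
    powers = []
    for p in priorities:                  # higher priority -> higher power level
        idx = min(int(p * len(levels)), len(levels) - 1)   # p assumed in [0, 1)
        powers.append(levels[idx])
    threshold = 10 ** (capture_db / 10.0)
    best = max(range(len(powers)), key=lambda i: powers[i])
    interference = sum(powers) - powers[best]
    if interference == 0 or powers[best] / interference >= threshold:
        return best                       # packet captured despite contention
    return None                           # collision, no capture

# Example: three users with uniform random priorities in one slot
print(one_slot([random.random() for _ in range(3)]))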
Abstract:
With the emergence of large-volume and high-speed streaming data, recent techniques for stream mining of CFIs (closed frequent itemsets) become inefficient. When concept drift occurs at a slow rate in high-speed data streams, the rate of change of information across different sliding windows will be negligible. So the user will not miss changes in information if we slide the window by multiple transactions at a time. Therefore, we propose a novel approach for mining CFIs cumulatively by making the sliding width ≥ 1 over high-speed data streams. However, it is nontrivial to mine CFIs cumulatively over a stream, because such growth may lead to the generation of an exponential number of candidates for closure checking. In this study, we develop an efficient algorithm, stream-close, for mining CFIs over a stream by exploring some interesting properties. Our performance study reveals that stream-close achieves good scalability and has promising results.
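The notion of a closed frequent itemset used above can be made concrete with a tiny brute-force miner over a single sliding window (illustration only; the stream-close algorithm mines these cumulatively and far more efficiently):

# Brute-force closed frequent itemset mining over one small window of transactions.
# An itemset is closed if no proper superset has the same support.
from itertools import combinations
from collections import Counter

def closed_frequent_itemsets(window, min_support):
    support = Counter()
    for transaction in window:                       # enumerate every subset (toy sizes only)
        items = sorted(set(transaction))
        for r in range(1, len(items) + 1):
            for s in combinations(items, r):
                support[s] += 1
    frequent = {s: c for s, c in support.items() if c >= min_support}
    closed = []
    for s, c in frequent.items():                    # keep s only if no superset ties its support
        if not any(set(s) < set(t) and frequent[t] == c for t in frequent):
            closed.append((s, c))
    return closed

window = [("a", "b", "c"), ("a", "b"), ("a", "c"), ("a", "b", "c")]
print(closed_frequent_itemsets(window, min_support=2))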
Abstract:
Given an unweighted undirected or directed graph with n vertices, m edges and edge connectivity c, we present a new deterministic algorithm for edge splitting. Our algorithm splits off any specified subset S of vertices satisfying standard conditions (even degree for the undirected case and in-degree ≥ out-degree for the directed case) while maintaining connectivity c for vertices outside S in Õ(m + nc²) time for an undirected graph and Õ(mc) time for a directed graph. This improves the current best deterministic time bounds due to Gabow [8], who splits off a single vertex in Õ(nc² + m) time for an undirected graph and Õ(mc) time for a directed graph. Further, for appropriate ranges of n, c, |S| it improves the current best randomized bounds due to Benczúr and Karger [2], who split off a single vertex in an undirected graph in Õ(n²) Monte Carlo time. We give two applications of our edge splitting algorithms. Our first application is a sub-quadratic (in n) algorithm to construct Edmonds' arborescences. A classical result of Edmonds [5] shows that an unweighted directed graph with c edge-disjoint paths from any particular vertex r to every other vertex has exactly c edge-disjoint arborescences rooted at r. For a c edge-connected unweighted undirected graph, the same theorem holds on the digraph obtained by replacing each undirected edge by two directed edges, one in each direction. The current fastest construction of these arborescences by Gabow [7] takes Õ(n²c²) time. Our algorithm takes Õ(nc³ + m) time for the undirected case and Õ(nc⁴ + mc) time for the directed case. The second application of our splitting algorithm is a new Steiner edge connectivity algorithm for undirected graphs which matches the best known bound of Õ(nc² + m) time due to Bhalgat et al. [3]. Finally, our algorithm can also be viewed as an alternative proof for existential edge splitting theorems due to Lovász [9] and Mader [11].
Abstract:
A geometric and nonparametric procedure for testing whether two finite sets of points are linearly separable is proposed. The Linear Separability Test is equivalent to a test that determines whether a strictly positive point h > 0 exists in the range of a matrix A (related to the points in the two finite sets). The algorithm proposed in the paper iteratively checks whether a strictly positive point exists in a subspace by projecting a strictly positive vector with equal coordinates (p) onto the subspace. At the end of each iteration, the subspace is reduced to a lower-dimensional subspace. The test is completed within r ≤ min(n, d + 1) steps, for both linearly separable and non-separable problems (r is the rank of A, n is the number of points and d is the dimension of the space containing the points). The worst-case time complexity of the algorithm is O(nr³) and the space complexity is O(nd). A small review of some of the prominent algorithms and their time complexities is included. The worst-case computational complexity of our algorithm is lower than the worst-case computational complexity of the Simplex, Perceptron, Support Vector Machine and Convex Hull algorithms, if d
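For comparison with the projection-based test described above, linear separability can also be checked with a small linear feasibility program; the sketch below uses this standard alternative (not the paper's algorithm) and assumes scipy is available:

# Linear separability check via a linear feasibility program: a separating
# hyperplane w.x + b exists iff y_i*(w.x_i + b) >= 1 is feasible for all i.
import numpy as np
from scipy.optimize import linprog

def linearly_separable(X, y):
    """X: (n, d) points; y: labels in {-1, +1}."""
    n, d = X.shape
    c = np.zeros(d + 1)                                      # zero objective: feasibility only
    A_ub = -y[:, None] * np.hstack([X, np.ones((n, 1))])     # -y*(w.x + b) <= -1
    b_ub = -np.ones(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 1), method="highs")
    return res.status == 0                                   # status 0 means a feasible solution exists

# Example: two separable clusters in 2-D
X = np.array([[0.0, 0.0], [0.0, 1.0], [3.0, 3.0], [3.0, 4.0]])
y = np.array([-1, -1, 1, 1])
print(linearly_separable(X, y))   # True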
Abstract:
A torque control scheme, based on a direct torque control (DTC) algorithm using a 12-sided polygonal voltage space vector, is proposed for variable speed control of an open-end induction motor drive. The conventional DTC scheme uses the stator flux vector for sector identification and then the switching vector to control stator flux and torque. However, the proposed DTC scheme selects switching vectors based on the sector information of the estimated fundamental stator voltage vector and its relative position with respect to the stator flux vector. The fundamental stator voltage estimation is based on the steady-state model of the induction motor, and the synchronous frequency of operation is derived from the computed stator flux using a low-pass filter technique. The proposed DTC scheme utilizes the exact positions of the fundamental stator voltage vector and the stator flux vector to select the optimal switching vector for fast control of torque with small variation of stator flux within the hysteresis band. The present DTC scheme allows full load torque control with fast transient response down to very low speeds of operation, with reduced switching frequency variation. Extensive experimental results are presented to show the fast torque control for speeds of operation from zero to rated speed.
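One concrete ingredient of such schemes, identifying which of the twelve 30° sectors contains a voltage space vector, can be sketched as follows (sector numbering and boundaries here are illustrative assumptions, not the controller's exact convention):

# Sector identification for a 12-sided polygonal voltage space vector
# (illustrative; assumes 30-degree sectors numbered 1..12 starting at 0 rad).
import math

def sector_12(v_alpha, v_beta):
    """Return the 30-degree sector (1..12) containing the voltage vector."""
    angle = math.atan2(v_beta, v_alpha) % (2 * math.pi)   # angle in [0, 2*pi)
    return int(angle // (math.pi / 6)) + 1                # pi/6 rad = 30 degrees

print(sector_12(1.0, 0.2))   # vector slightly above the alpha axis -> sector 1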
Abstract:
Summary form only given. A scheme for code compression that has a fast decompression algorithm, which can be implemented using simple hardware, is proposed. The effectiveness of the scheme is evaluated on the TMS320C62x architecture, including the overheads of a line address table (LAT), and compression rates ranging from 70% to 80% are obtained. Two schemes for decompression are proposed. The basic idea underlying the scheme is a simple clustering algorithm that partially maps a block of instructions into a set of clusters. The clustering algorithm is a greedy algorithm based on the frequency of occurrence of various instructions.
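A minimal sketch of a greedy, frequency-driven dictionary step in the spirit described above (the dictionary size and encoding format are hypothetical choices, and this is not the TMS320C62x-specific scheme):

# Greedy frequency-based selection of instruction words for a compression dictionary.
from collections import Counter

def build_dictionary(instruction_words, dict_size=256):
    """Pick the most frequent instruction words and index them."""
    freq = Counter(instruction_words)
    dictionary = [word for word, _ in freq.most_common(dict_size)]
    return dictionary, {word: i for i, word in enumerate(dictionary)}

def encode_block(block, index):
    """Encode each word as a short dictionary index when possible, else keep it raw."""
    return [("idx", index[w]) if w in index else ("raw", w) for w in block]

words = [0x1A2B, 0x1A2B, 0x0003, 0x1A2B, 0xFFFF]
dictionary, index = build_dictionary(words, dict_size=2)
print(encode_block(words, index))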
Abstract:
This letter presents a microprocessor-based algorithm for calculating symmetrical components from the distorted transient voltage and current signals in a power system. The fundamental frequency components of the 3-phase signals are first extracted using an algorithm based on Haar functions, and then the symmetrical-component transformation is applied to obtain the sequence components. The algorithm presented is computationally efficient and fast, and is well suited for application in microprocessor-based protection schemes for synchronous and induction machines.
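The symmetrical-component step referred to above is the standard Fortescue transformation; a short numeric sketch, assuming the three fundamental-frequency phasors have already been extracted as complex numbers:

# Standard symmetrical-component (Fortescue) transformation of 3-phase phasors.
import numpy as np

a = np.exp(2j * np.pi / 3)                      # 120-degree rotation operator
A_inv = (1 / 3) * np.array([[1, 1,    1],
                            [1, a,    a**2],
                            [1, a**2, a]])      # maps [Va, Vb, Vc] -> [V0, V1, V2]

def sequence_components(va, vb, vc):
    """Return (zero, positive, negative) sequence phasors."""
    return A_inv @ np.array([va, vb, vc])

# Example: a balanced set has only a positive-sequence component.
v0, v1, v2 = sequence_components(1.0, a**2, a)  # Vb lags Va by 120 degrees
print(abs(v0), abs(v1), abs(v2))                # ~0, ~1, ~0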
Abstract:
This paper presents an artificial feed-forward neural network (FFNN) approach for the assessment of power system voltage stability. A novel approach based on the input-output relation between real and reactive power, as well as voltage vectors for generators and load buses, is used to train the neural net (NN). The input features of the feed-forward network are generated from offline training data with various simulated loading conditions using a conventional voltage stability algorithm based on the L-index. The neural network is trained with the L-index output as the target vector for each of the system loads. Two separately trained NNs, corresponding to normal loading and contingency, are investigated on a 367-node practical power system network. The performance of the trained artificial neural network (ANN) is also investigated on the system under various voltage stability assessment conditions. Compared to the computationally intensive benchmark conventional software, near-accurate results for the value of the L-index, and thus the voltage profile, were obtained. The proposed algorithm is fast, robust and accurate, and can be used online for predicting the L-indices of all the power system buses. The proposed ANN approach is also shown to be effective and computationally feasible in voltage stability assessment, as well as for potential enhancements within an overall energy management system for determining local and global stability indices.
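A minimal sketch of this kind of feed-forward regression, mapping bus power and voltage features to an L-index target; the feature layout and scikit-learn's MLPRegressor are stand-ins, and the data below are random placeholders rather than a real 367-node system:

# Feed-forward NN regression of a voltage-stability index from P, Q, V features
# (illustrative; the training data are synthetic placeholders, not a real grid).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 6))           # e.g. [P_gen, Q_gen, P_load, Q_load, |V|, angle]
y = rng.uniform(size=500)                # stand-in for the L-index target in [0, 1]

model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
model.fit(X, y)                          # offline training phase
print(model.predict(X[:3]))              # fast online prediction of L-indices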
Abstract:
Precision inspection of manufactured components having multiple complex surfaces and variable tolerance definitions is an involved, complex and time-consuming function. In routine practice, a jig is used to present the part in a known reference frame to carry out the inspection process. Jigs involve both time and cost in their development, manufacture and use. This paper describes 'as is where is inspection' (AIWIN), a new automated inspection technique that accelerates the inspection process by carrying out a fast registration procedure and establishing a quick correspondence between the part to be inspected and its CAD geometry. The main challenge in doing away with a jig is that the inspection reference frame could be far removed from the CAD frame. Traditional techniques based on the iterative closest point (ICP) or Newton methods either require a large number of iterations for convergence or fail in such a situation. A two-step coarse registration process is proposed to provide a good initial guess for a modified ICP algorithm developed earlier (Ravishankar et al., Int J Adv Manuf Technol 46(1-4):227-236, 2010). The first step uses a calibrated sphere for local hard registration and fixing the translation error. This transformation locates the centre of the sphere in the CAD frame. In the second step, the inverse transformation (involving pure rotation about multiple axes) required to align the inspection points measured on the manufactured part with the CAD point dataset of the model is determined and enforced. This completes the coarse registration, enabling fast convergence of the modified ICP algorithm. The new technique has been implemented on complex freeform machined components and the inspection results clearly show that the process is precise and reliable with rapid convergence. © 2011 Springer-Verlag London Limited.
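The first coarse-registration step relies on locating the calibrated sphere's centre from measured surface points; a generic algebraic least-squares sphere fit (not necessarily the authors' exact implementation) is sketched below:

# Algebraic least-squares sphere fit used to recover a calibration sphere's centre.
# Model: x^2 + y^2 + z^2 = 2*cx*x + 2*cy*y + 2*cz*z + d, with r^2 = d + |c|^2.
import numpy as np

def fit_sphere(points):
    """points: (n, 3) array of measured surface points. Returns (centre, radius)."""
    P = np.asarray(points, dtype=float)
    A = np.hstack([2 * P, np.ones((len(P), 1))])    # unknowns: cx, cy, cz, d
    f = (P ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, f, rcond=None)
    centre, d = sol[:3], sol[3]
    return centre, np.sqrt(d + centre @ centre)

# Example: noisy samples from a unit sphere centred at (1, 2, 3)
rng = np.random.default_rng(1)
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = np.array([1.0, 2.0, 3.0]) + dirs + 0.001 * rng.normal(size=(200, 3))
print(fit_sphere(pts))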
Abstract:
Decoding of linear space-time block codes (STBCs) with sphere decoding (SD) is well known. A fast version of the SD known as fast sphere decoding (FSD) has recently been studied by Biglieri, Hong and Viterbo. Viewing a linear STBC as a vector space spanned by its defining weight matrices over the real number field, we define a quadratic form (QF), called the Hurwitz-Radon QF (HRQF), on this vector space and give a QF interpretation of the FSD complexity of a linear STBC. It is shown that the FSD complexity is a function only of the weight matrices defining the code and their ordering, and not of the channel realization (even though the equivalent channel when SD is used depends on the channel realization) or the number of receive antennas. It is also shown that the FSD complexity is completely captured in a single matrix obtained from the HRQF. Moreover, for a given set of weight matrices, an algorithm to obtain a best ordering of them leading to the least FSD complexity is presented. The well-known classes of low-FSD-complexity codes (multi-group decodable codes, fast decodable codes and fast group decodable codes) are presented in the framework of the HRQF.
Abstract:
In this paper we present a hardware-software hybrid technique for modular multiplication over large binary fields. The technique applies the Karatsuba-Ofman algorithm for polynomial multiplication together with a novel technique for reduction. The proposed reduction technique is based on the popular repeated multiplication technique and Barrett reduction. We propose a new design of a parallel polynomial multiplier that serves as a hardware accelerator for large field multiplications. We show that the proposed reduction technique, accelerated using the modified polynomial multiplier, achieves significantly higher performance compared to a purely software technique and other hybrid techniques. We also show that the hybrid accelerated approach to modular field multiplication is significantly faster than the Montgomery-algorithm-based integrated multiplication approach.
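As a point of reference for the software baseline mentioned above, a compact Karatsuba-Ofman multiplier for GF(2)[x] polynomials (stored as Python integer bitmasks) is sketched below; the 64-bit cut-off is a hypothetical tuning choice, and reduction modulo the field polynomial is omitted:

# Karatsuba-Ofman multiplication of GF(2)[x] polynomials stored as integer bitmasks
# (software sketch only; modular reduction and hardware acceleration are omitted).
def clmul_schoolbook(a, b):
    """Carry-less (XOR) schoolbook multiplication."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    return result

def karatsuba_gf2(a, b, cutoff=64):
    n = max(a.bit_length(), b.bit_length())
    if n <= cutoff:
        return clmul_schoolbook(a, b)
    half = n // 2
    mask = (1 << half) - 1
    a0, a1 = a & mask, a >> half                  # a = a1*x^half + a0
    b0, b1 = b & mask, b >> half
    low = karatsuba_gf2(a0, b0, cutoff)
    high = karatsuba_gf2(a1, b1, cutoff)
    mid = karatsuba_gf2(a0 ^ a1, b0 ^ b1, cutoff) ^ low ^ high   # cross term via XOR (char. 2)
    return (high << (2 * half)) ^ (mid << half) ^ low

# Quick self-check against the schoolbook method
import random
x, y = random.getrandbits(300), random.getrandbits(300)
assert karatsuba_gf2(x, y) == clmul_schoolbook(x, y)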
Abstract:
The problem of finding a satisfying assignment that minimizes the number of variables set to 1 is NP-complete even for a satisfiable 2-SAT formula. We call this problem MIN ONES 2-SAT. It generalizes the well-studied problem of finding the smallest vertex cover of a graph, which can be modeled using a 2-SAT formula with no negative literals. The natural parameterized version of the problem asks for a satisfying assignment of weight at most k. In this paper, we present a polynomial-time reduction from MIN ONES 2-SAT to VERTEX COVER without increasing the parameter and ensuring that the number of vertices in the reduced instance equals the number of variables of the input formula. Consequently, we conclude that this problem also has a simple 2-approximation algorithm and a (2k − c·log k)-variable kernel, subsuming (or, in the case of kernels, improving) the results known earlier. Further, the problem admits algorithms for the parameterized and optimization versions whose runtimes will always match the runtimes of the best-known algorithms for the corresponding versions of VERTEX COVER. Finally, we show that the optimum value of the LP relaxation of MIN ONES 2-SAT and that of the corresponding VERTEX COVER are the same. This implies that the (recent) results on VERTEX COVER parameterized above the optimum value of the LP relaxation of VERTEX COVER carry over to MIN ONES 2-SAT parameterized above the optimum of the LP relaxation of MIN ONES 2-SAT. (C) 2013 Elsevier B.V. All rights reserved.
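To make the vertex-cover connection above concrete, the sketch below encodes a graph's edges as positive 2-SAT clauses and applies the classical maximal-matching 2-approximation for vertex cover, the kind of simple 2-approximation the reduction yields (an illustration of the correspondence, not the paper's kernelization):

# Vertex cover as monotone MIN ONES 2-SAT, with the classical maximal-matching
# 2-approximation (illustrates the connection; not the paper's kernelization).
def edges_to_2sat(edges):
    """Each edge (u, v) becomes the positive clause (x_u OR x_v);
    MIN ONES asks for a satisfying assignment with fewest variables set to 1."""
    return [(u, v) for u, v in edges]

def matching_2_approx(edges):
    """Greedy maximal matching; its endpoints form a vertex cover of size
    at most twice the optimum, hence a 2-approximate MIN ONES solution."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))          # take both endpoints of an uncovered edge
    return cover

edges = [(1, 2), (2, 3), (3, 4), (4, 1), (2, 4)]
clauses = edges_to_2sat(edges)
cover = matching_2_approx(edges)
assert all(u in cover or v in cover for u, v in clauses)   # assignment satisfies all clauses
print(sorted(cover))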
Abstract:
Decoding of linear space-time block codes (STBCs) with sphere decoding (SD) is well known. A fast version of the SD known as fast sphere decoding (FSD) was introduced by Biglieri, Hong and Viterbo. Viewing a linear STBC as a vector space spanned by its defining weight matrices over the real number field, we define a quadratic form (QF), called the Hurwitz-Radon QF (HRQF), on this vector space and give a QF interpretation of the FSD complexity of a linear STBC. It is shown that the FSD complexity is a function only of the weight matrices defining the code and their ordering, and not of the channel realization (even though the equivalent channel when SD is used depends on the channel realization) or the number of receive antennas. It is also shown that the FSD complexity is completely captured in a single matrix obtained from the HRQF. Moreover, for a given set of weight matrices, an algorithm to obtain an optimal ordering of them leading to the least FSD complexity is presented. The well-known classes of low-FSD-complexity codes (multi-group decodable codes, fast decodable codes and fast group decodable codes) are presented in the framework of the HRQF.
Abstract:
A nearly constant switching frequency current hysteresis controller for a 2-level inverter fed induction motor drive is proposed in this paper. The salient features of this controller are fast current dynamics, inherent protection against overloads and reduced switching frequency variation. The large variation of switching frequency seen in the conventional hysteresis controller is avoided by defining a current-error boundary which is obtained from the current-error trajectory of the standard space vector PWM. The current-error boundary is computed at every sampling interval based on the induction machine parameters and the estimated fundamental stator voltage. The stator currents are monitored continuously, and when the current error exceeds the boundary, the voltage space vector is switched to reduce the current error. The proposed boundary computation algorithm is applicable in the linear and over-modulation regions and is simple to implement in any standard digital signal processor. Detailed experimental verification is done using a 7.5 kW induction motor, and the results are given to show the performance of the drive at various operating conditions and to validate the proposed advantages.
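For contrast, the conventional fixed-band hysteresis comparator that the proposed adaptive current-error boundary replaces can be sketched in a few lines (per-phase, with illustrative values only):

# Conventional fixed-band hysteresis current comparator (the baseline whose large
# switching-frequency variation the adaptive-boundary controller avoids).
def hysteresis_output(i_ref, i_meas, band, prev_level):
    """Return the inverter leg level (+1 or -1) for one phase."""
    error = i_ref - i_meas
    if error > band:
        return +1          # current too low: apply positive voltage to raise it
    if error < -band:
        return -1          # current too high: apply negative voltage to lower it
    return prev_level      # inside the band: keep the previous switching state

# Example: one sampling instant for one phase
print(hysteresis_output(i_ref=5.0, i_meas=4.7, band=0.2, prev_level=-1))   # -> +1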