323 results for quadratic polynomial
Abstract:
This paper presents a new application of two-dimensional Principal Component Analysis (2DPCA) to the problem of online character recognition in Tamil script. A novel set of features employing polynomial fits and quartiles, in combination with conventional features, is derived for each sample point of the Tamil character obtained after smoothing and resampling. These are stacked to form a matrix, from which a covariance matrix is constructed. A subset of the eigenvectors of the covariance matrix is employed to obtain the features in the reduced subspace. Each character is modeled as a separate subspace, and a modified form of the Mahalanobis distance is derived to classify a given test character. Results indicate that the 2DPCA scheme improves recognition accuracy by approximately 3% over the conventional PCA technique.
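As an illustration of the dimensionality-reduction step this abstract describes, here is a minimal 2DPCA sketch in NumPy. It assumes each character sample has already been encoded as a fixed-size m x n feature matrix; the matrix shape, the per-point features and the classifier are outside this sketch.

```python
import numpy as np

def twod_pca(samples, d):
    """Project m x n sample matrices onto the top-d eigenvectors of the
    image covariance matrix (the core 2DPCA step)."""
    A = np.stack(samples)                 # (N, m, n)
    centered = A - A.mean(axis=0)         # subtract the mean matrix
    # n x n image covariance: average of (A_i - mean)^T (A_i - mean)
    G = np.einsum('imn,imk->nk', centered, centered) / len(samples)
    _, eigvecs = np.linalg.eigh(G)        # eigenvalues in ascending order
    X = eigvecs[:, -d:]                   # top-d eigenvectors (n x d)
    return [a @ X for a in samples]       # reduced m x d feature matrices
```

Classification would then model each character class as its own subspace and score test matrices with a (modified) Mahalanobis distance, as the abstract states.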
Abstract:
This paper introduces a scheme for classification of online handwritten characters based on polynomial regression of the sampled points of the sub-strokes in a character. The segmentation is based on the velocity profile of the written character, which must first be smoothed. We propose a novel scheme for smoothing the velocity profile curve and identifying the critical points at which to segment the character. We also propose another segmentation method based on human visual perception. We then extract two sets of features for recognition of handwritten characters. Each sub-stroke is a simple curve, a part of the character, and is represented by the distance of each point from the first point; this forms the first feature vector for each character. The second feature vector consists of the coefficients of B-splines fitted to the control knots obtained from the segmentation algorithm. The feature vectors are fed to an SVM classifier, which achieves an accuracy of 68% using the polynomial regression technique and 74% using the spline fitting method.
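A rough sketch of the two feature sets described here, under the assumption that a sub-stroke arrives as a plain (x, y) point sequence; the smoothing parameter s passed to SciPy's splprep is a placeholder, not the paper's setting.

```python
import numpy as np
from scipy.interpolate import splprep

def distance_features(points):
    """Feature set 1: Euclidean distance of every point from the first point."""
    p = np.asarray(points, dtype=float)
    return np.linalg.norm(p - p[0], axis=1)

def bspline_features(points, s=0.5):
    """Feature set 2: coefficients of a cubic B-spline fitted to the sub-stroke."""
    p = np.asarray(points, dtype=float)
    (t, c, k), _ = splprep([p[:, 0], p[:, 1]], s=s)
    return np.concatenate(c)              # x and y coefficients as one vector

stroke = [(0, 0), (1, 0.5), (2, 0.8), (3, 0.9), (4, 1.0), (5, 1.2), (6, 1.5)]
print(distance_features(stroke))
print(bspline_features(stroke))
```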
Abstract:
We consider a framework in which several service providers offer downlink wireless data access service in a certain area. Each provider serves its end-users through opportunistic secondary access of licensed spectrum, and must pay the primary license holders usage-based and membership-based charges for such secondary spectrum access. In these circumstances, if providers pool their resources and allow end-users to be served by any of the cooperating providers, the total user satisfaction as well as the aggregate revenue earned by providers may increase. We use coalitional game theory to investigate such cooperation among providers, and show that the optimal cooperation schemes can be obtained as solutions of convex optimizations. We next show that under the usage-based charging scheme, if all providers cooperate, there always exists an operating point that maximizes the aggregate revenue of providers while giving each provider a share of the revenue such that no subset of providers has an incentive to leave the coalition. Furthermore, such an operating point can be computed in polynomial time. Finally, we show that when the charging scheme involves membership-based charges, the above result holds in important special cases.
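The "no subset has an incentive to leave" condition is the core of the coalitional game. As a generic illustration (the value function below is a toy dictionary, not the paper's revenue model), a brute-force core check looks like this:

```python
from itertools import combinations

def in_core(v, split):
    """Check that a revenue split lies in the core of the game with value v."""
    providers = sorted(split)
    # Efficiency: the grand coalition's value is fully distributed.
    if abs(sum(split.values()) - v[frozenset(providers)]) > 1e-9:
        return False
    # Stability: no proper coalition S earns more on its own than its shares.
    for r in range(1, len(providers)):
        for S in combinations(providers, r):
            if sum(split[p] for p in S) < v[frozenset(S)] - 1e-9:
                return False
    return True

v = {frozenset('A'): 4, frozenset('B'): 3, frozenset('AB'): 10}
print(in_core(v, {'A': 6, 'B': 4}))   # True: neither provider gains by leaving
```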
Abstract:
We investigate the feasibility of developing comprehensive gate delay and slew models that incorporate output load, input edge slew, supply voltage, temperature, global process variations and local process variations all in the same model. We find that standard polynomial models cannot handle such a large heterogeneous set of input variables. We instead use neural networks, which are well known for their ability to approximate any arbitrary continuous function. Our initial experiments with a small subset of standard cell gates of an industrial 65 nm library show promising results, with error in mean less than 1%, error in standard deviation less than 3% and maximum error less than 11% compared to SPICE, for models covering 0.9-1.1 V of supply, -40 °C to 125 °C of temperature, load, slew, and global and local process parameters. Enhancing conventional libraries to be voltage and temperature scalable with similar accuracy requires on average 4x more SPICE characterization runs.
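A minimal stand-in for the modeling flow described here: fit a small neural network mapping (load, slew, supply, temperature) to gate delay. The training data is synthetic (the abstract characterizes against SPICE, which is not reproduced), and the network size is arbitrary.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Columns: load, input slew, Vdd (0.9-1.1 V), temperature (-40 to 125 C)
X = rng.uniform([0.0, 0.0, 0.9, -40.0], [1.0, 1.0, 1.1, 125.0], size=(2000, 4))
# Hypothetical delay surface standing in for SPICE measurements.
y = 0.3 * X[:, 0] + 0.2 * X[:, 1] + 0.5 / X[:, 2] + 1e-3 * X[:, 3]

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0),
)
model.fit(X, y)
print("train R^2:", model.score(X, y))
```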
Abstract:
In this paper we present a novel approach to synthesizing a class of closed-loop feedback controls based on variational structure assignment. Properties of a viscoelastic system are used to design an active feedback controller for an undamped structural system with distributed sensor, actuator and controller. Wave dispersion properties of a one-dimensional beam system have been studied. The efficiency of the chosen viscoelastic model in enhancing the damping and stability properties of a one-dimensional viscoelastic bar has been analyzed. The variational structure is projected on a solution space of a closed-loop system involving a weakly damped structure with distributed sensor and actuator with controller. This assigns the phenomenology-based internal strain-rate damping parameter of a viscoelastic system to the usual elastic structure, but with active control. In the formulation, a cantilever beam model with non-collocated actuator and sensor has been considered. The formulation leads to the problem of identifying two dynamic stiffness matrices. The method has been simplified to obtain control system gains for free vibration control of a cantilever beam with collocated actuator-sensor, using quadratic optimal control and pole-placement methods.
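As a toy illustration of the quadratic optimal control step mentioned at the end, here is an LQR gain computed for a generic two-state system via the continuous-time algebraic Riccati equation; the matrices are hypothetical and stand in for a single beam mode, not the paper's model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-4.0, -0.1]])   # hypothetical lightly damped mode
B = np.array([[0.0], [1.0]])               # single collocated actuator
Q = np.eye(2)                              # state weighting
R = np.array([[1.0]])                      # control effort weighting

P = solve_continuous_are(A, B, Q, R)       # solve A'P + PA - PB R^-1 B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)            # optimal state feedback u = -K x
print("LQR gain:", K)
```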
Abstract:
Web services are now a key ingredient of software services offered by software enterprises. Many standardized web services are now available as commodity offerings from web service providers. An important problem for a web service requester is the web service composition problem, which involves selecting the right mix of web service offerings to execute an end-to-end business process. Web service offerings are now available in bundled form as composite web services, and more recently, volume discounts are also on offer, based on the number of executions of web services requested. In this paper, we develop efficient algorithms for the web service composition problem in the presence of composite web service offerings and volume discounts. We model this problem as a combinatorial auction with volume discounts. We first develop efficient polynomial-time algorithms when the end-to-end service involves a linear workflow of web services. Next, we develop efficient polynomial-time algorithms when the end-to-end service involves a tree workflow of web services.
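For intuition on the linear-workflow case, a simple shortest-path dynamic program suffices when each offering is a hypothetical (start_task, end_task, price) tuple covering a contiguous block of the workflow; volume discounts would make the prices functions of requested volume, which is omitted here for brevity.

```python
import math

def cheapest_composition(n_tasks, offerings):
    """Min-cost cover of tasks 0..n_tasks-1 by contiguous offerings."""
    best = [0.0] + [math.inf] * n_tasks    # best[i]: cost to cover tasks 0..i-1
    for i in range(1, n_tasks + 1):
        for s, e, price in offerings:
            if e == i:
                best[i] = min(best[i], best[s] + price)
    return best[n_tasks]

offerings = [(0, 2, 5.0), (2, 4, 4.0), (0, 4, 8.5), (1, 4, 6.0), (0, 1, 1.0)]
print(cheapest_composition(4, offerings))   # 7.0, via (0,1) then the (1,4) bundle
```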
Abstract:
Given an unweighted undirected or directed graph with n vertices, m edges and edge connectivity c, we present a new deterministic algorithm for edge splitting. Our algorithm splits off any specified subset S of vertices satisfying standard conditions (even degree for the undirected case and in-degree ≥ out-degree for the directed case) while maintaining connectivity c for vertices outside S, in Õ(m + nc²) time for an undirected graph and Õ(mc) time for a directed graph. This improves the current best deterministic time bounds due to Gabow [8], who splits off a single vertex in Õ(nc² + m) time for an undirected graph and Õ(mc) time for a directed graph. Further, for appropriate ranges of n, c, |S| it improves the current best randomized bounds due to Benczúr and Karger [2], who split off a single vertex in an undirected graph in Õ(n²) Monte Carlo time. We give two applications of our edge splitting algorithms. Our first application is a sub-quadratic (in n) algorithm to construct Edmonds' arborescences. A classical result of Edmonds [5] shows that an unweighted directed graph with c edge-disjoint paths from any particular vertex r to every other vertex has exactly c edge-disjoint arborescences rooted at r. For a c edge-connected unweighted undirected graph, the same theorem holds on the digraph obtained by replacing each undirected edge by two directed edges, one in each direction. The current fastest construction of these arborescences, by Gabow [7], takes Õ(n²c²) time. Our algorithm takes Õ(nc³ + m) time for the undirected case and Õ(nc⁴ + mc) time for the directed case. The second application of our splitting algorithm is a new Steiner edge connectivity algorithm for undirected graphs which matches the best known bound of Õ(nc² + m) time due to Bhalgat et al [3]. Finally, our algorithm can also be viewed as an alternative proof of the existential edge splitting theorems due to Lovász [9] and Mader [11].
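A toy demonstration of the splitting-off operation itself (not the paper's algorithm, which is far more efficient than this brute-force check): replace the edge pair (s,u), (s,v) by a direct edge (u,v) and verify that connectivity between vertices other than s is preserved, using a max-flow computation from networkx.

```python
import networkx as nx

def connectivity(edges, a, b):
    """Edge connectivity between a and b in a multigraph given as an edge list."""
    G = nx.Graph()
    for u, v in edges:
        cap = G[u][v]['capacity'] + 1 if G.has_edge(u, v) else 1
        G.add_edge(u, v, capacity=cap)
    return nx.maximum_flow_value(G, a, b)

edges = [('s', 'u'), ('s', 'v'), ('u', 'w'), ('v', 'w'), ('u', 'v')]
# Split off the pair at s: remove (s,u) and (s,v), add a parallel (u,v) edge.
split = [e for e in edges if e not in [('s', 'u'), ('s', 'v')]] + [('u', 'v')]
print(connectivity(edges, 'u', 'w'), connectivity(split, 'u', 'w'))  # 2 2
```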
Abstract:
This paper elucidates the methodology of applying an artificial neural network model (ANNM) to predict the percent swell of calcitic soil in sulphuric acid solutions, a complex phenomenon involving many parameters. The swell data required for modelling is obtained experimentally using conventional oedometer tests under nominal surcharge. The ANN phases include optimal design of the architecture, its operation, and its training. The designed optimal neural model (3-5-1) is a fully connected three-layer feed-forward network with symmetric sigmoid activation function, trained by the back-propagation algorithm to minimize a quadratic error criterion. The model requires the duration of interaction, calcite mineral content and acid concentration as parameters for prediction of swell. The observed strong correlation (R² = 0.9979) between the values determined by experiment and those predicted by the developed model demonstrates that the network can provide answers to complex problems in geotechnical engineering.
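The 3-5-1 network described here can be approximated with off-the-shelf tools: three inputs (interaction duration, calcite content, acid concentration), five sigmoid hidden units, one output (percent swell), trained against a squared-error criterion. The data below is synthetic and for shape only; the real model was fitted to oedometer measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Columns: duration (days), calcite content (%), acid concentration (N)
X = rng.uniform([1.0, 0.0, 0.5], [365.0, 100.0, 5.0], size=(200, 3))
y = 0.05 * np.sqrt(X[:, 0]) + 0.1 * X[:, 1] + 2.0 * X[:, 2]   # placeholder swell

net = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(5,), activation='logistic',
                 solver='lbfgs', max_iter=5000, random_state=0),
)
net.fit(X, y)
print("train R^2:", net.score(X, y))
```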
Abstract:
A robust aeroelastic optimization is performed to minimize helicopter vibration with uncertainties in the design variables. Polynomial response surfaces and space-filling experimental designs are used to generate a surrogate model of the aeroelastic analysis code. Aeroelastic simulations are performed at sample inputs generated by Latin hypercube sampling. Response values that do not satisfy the frequency constraints are eliminated from the model-fitting data; this step increases the accuracy of the response surface models in the feasible design space. It is found that the response surface models are able to capture the robust optimal regions of the design space. The optimal designs show a reduction of 10 percent in the objective function comprising six vibratory hub loads, and reductions of 1.5 to 80 percent in the individual vibratory forces and moments. This study demonstrates that second-order response surface models with space-filling designs can be a favorable choice for computationally intensive robust aeroelastic optimization.
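A compact version of the surrogate-modelling loop described here: sample the design space with Latin hypercube sampling, evaluate the objective (a toy function below, standing in for the aeroelastic code), and fit a second-order polynomial response surface.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

sampler = qmc.LatinHypercube(d=3, seed=0)
X = sampler.random(n=60)                    # 60 space-filling designs in [0, 1]^3
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 - 0.5 * X[:, 2]   # stand-in objective

surface = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
surface.fit(X, y)                           # second-order response surface
print("surrogate R^2:", surface.score(X, y))
```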
Abstract:
Linear stability and nonmodal transient energy growth in compressible plane Couette flow are investigated for two prototype mean flows: (a) uniform shear flow with constant viscosity, and (b) nonuniform shear flow with stratified viscosity. Both mean flows are linearly unstable for a range of supersonic Mach numbers (M). For a given M, the critical Reynolds number (Re) is significantly smaller for the uniform shear flow than for its nonuniform counterpart; for a given Re, the dominant instability (over all streamwise wave numbers, α) of each mean flow belongs to different modes for a range of supersonic M. An analysis of perturbation energy reveals that the instability is primarily caused by an excess transfer of energy from the mean flow to the perturbations. It is shown that the energy transfer from the mean flow occurs close to the moving top wall for “mode I” instability, whereas it occurs in the bulk of the flow domain for “mode II.” For the nonmodal transient growth analysis, it is shown that the maximum temporal amplification of perturbation energy, G_max, and the corresponding time scale are significantly larger for the uniform shear case than for its nonuniform counterpart. For α = 0, the linear stability operator can be partitioned as L ∼ L̄ + Re² L_p, and the Re-dependent operator L_p is shown to have a negligibly small contribution to the perturbation energy, which explains the validity of the well-known quadratic scaling law in uniform shear flow: G(t/Re) ∼ Re². In contrast, the dominance of L_p is responsible for the invalidity of this scaling law in nonuniform shear flow. An inviscid reduced model, based on an Ellingsen-Palm-type solution, is shown to capture all salient features of the transient energy growth of the full viscous problem. For both modal and nonmodal instability, it is shown that viscosity stratification of the underlying mean flow leads to a delayed transition in compressible Couette flow.
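A toy computation in the spirit of the transient growth analysis above: for a stable but non-normal operator L, the energy amplification envelope G(t) = ‖exp(tL)‖² can greatly exceed 1 before eventual decay. The 2x2 operator below is illustrative only, not the flow's stability operator.

```python
import numpy as np
from scipy.linalg import expm

L = np.array([[-0.01, 1.00],
              [ 0.00, -0.02]])   # stable (negative eigenvalues) but non-normal

for t in [0, 10, 50, 100, 200]:
    G = np.linalg.norm(expm(t * L), 2) ** 2   # G(t): max energy amplification
    print(f"t = {t:4d}   G(t) = {G:10.2f}")
```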
Abstract:
Even though several techniques have been proposed in the literature for achieving multiclass classification using the Support Vector Machine (SVM), the scalability of these approaches to large data sets still needs much exploration. The Core Vector Machine (CVM) is a technique for scaling up a two-class SVM to handle large data sets. In this paper we propose a Multiclass Core Vector Machine (MCVM). We formulate the multiclass SVM problem as a Quadratic Programming (QP) problem defining an SVM with vector-valued output. This QP problem is then solved using the CVM technique to achieve scalability to large data sets. Experiments with several large synthetic and real-world data sets show that the proposed MCVM technique gives generalization performance comparable to that of SVM at much lower computational expense. Further, it is observed that MCVM scales well with the size of the data set.
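For concreteness, here is one standard way to write a multiclass SVM as a single QP (a Crammer-Singer-style formulation solved with cvxpy on toy data); this illustrates the kind of QP being scaled, not the paper's exact vector-valued-output formulation or the CVM solver.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, size=(10, 2))
               for m in ([0, 0], [2, 0], [0, 2])])   # three toy classes
y = np.repeat([0, 1, 2], 10)
K, C = 3, 1.0

W = cp.Variable((K, 2))                 # one weight vector per class
xi = cp.Variable(len(y), nonneg=True)   # slack variables
cons = [W[int(y[i])] @ X[i] - W[k] @ X[i] >= 1 - xi[i]
        for i in range(len(y)) for k in range(K) if k != y[i]]
cp.Problem(cp.Minimize(0.5 * cp.sum_squares(W) + C * cp.sum(xi)), cons).solve()

pred = np.argmax(X @ W.value.T, axis=1)
print("train accuracy:", (pred == y).mean())
```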
Abstract:
An elementary combinatorial Tanner graph construction for a family of near-regular low-density parity-check (LDPC) codes achieving high girth is presented. These codes are near-regular in the sense that the degree of a left/right vertex may differ by at most one from the average. The construction yields, in quadratic time, an asymptotic code family with provable lower bounds on the rate and the girth for a given choice of block length and average degree. The construction gives flexibility in the choice of design parameters of the code, such as rate, girth and average degree. Performance simulations of the iterative decoding algorithm on the AWGN channel for codes designed using this method demonstrate that they perform better than regular PEG codes and MacKay codes of similar length at all values of signal-to-noise ratio.
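Girth is the key design quantity here. As a small, generic utility (not part of the paper's construction), the girth of the Tanner graph of a parity-check matrix H can be computed by running a BFS from every vertex:

```python
import numpy as np
from collections import deque

def tanner_girth(H):
    """Girth of the Tanner graph of parity-check matrix H (BFS from each vertex)."""
    m, n = H.shape
    adj = {('c', i): [('v', j) for j in range(n) if H[i, j]] for i in range(m)}
    adj.update({('v', j): [('c', i) for i in range(m) if H[i, j]] for j in range(n)})
    girth = float('inf')
    for root in adj:
        dist, parent = {root: 0}, {root: None}
        q = deque([root])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w], parent[w] = dist[u] + 1, u
                    q.append(w)
                elif parent[u] != w:       # non-tree edge closes a cycle
                    girth = min(girth, dist[u] + dist[w] + 1)
    return girth

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1],
              [0, 0, 0, 1, 1, 1]])
print(tanner_girth(H))   # 6: no two rows share two columns, so no 4-cycles
```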
Abstract:
We propose a new abstract domain for static analysis of executable code. Concrete states are abstracted using circular linear progressions (CLPs). CLPs model computations using a finite word length, as in any real-life processor. The finite abstraction allows handling overflow scenarios in a natural and straightforward manner. Abstract transfer functions have been defined for a wide range of operations, which makes this domain easily applicable when analyzing code for a wide range of ISAs. CLPs combine the scalability of interval domains with the discreteness of linear congruence domains. We also present a novel, lightweight method to track linear equality relations between static objects, which the analysis uses to improve precision. The analysis is efficient, the total space and time overhead being quadratic in the number of static objects being tracked.
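A toy model of the wrap-around behaviour CLPs are designed to capture: an abstract value is a (base, step, count) progression over fixed-width words, and arithmetic wraps modulo 2^W instead of losing precision at overflow. This is an illustrative simplification, not the paper's full domain or transfer functions.

```python
MASK = 0xFF   # 8-bit words for the example

class CLP:
    """Circular linear progression {base + i*step (mod 256) : 0 <= i < count}."""
    def __init__(self, base, step, count):
        self.base, self.step, self.count = base & MASK, step & MASK, count

    def concretize(self):
        return {(self.base + i * self.step) & MASK for i in range(self.count)}

    def add_const(self, c):
        # Adding a constant only shifts the base; overflow wraps naturally.
        return CLP(self.base + c, self.step, self.count)

x = CLP(250, 2, 5)
print(sorted(x.concretize()))               # [0, 2, 250, 252, 254]: wraps past 255
print(sorted(x.add_const(10).concretize())) # [4, 6, 8, 10, 12]
```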
Abstract:
Support Vector Clustering has gained considerable attention from researchers in exploratory data analysis due to its firm theoretical foundation in statistical learning theory. The hard partitioning of the data set achieved by Support Vector Clustering may not be acceptable in real-world scenarios. Rough Support Vector Clustering is an extension of Support Vector Clustering that attains a soft partitioning of the data set, but the Quadratic Programming problem it involves makes it computationally expensive on large datasets. In this paper, we propose the Rough Core Vector Clustering algorithm, a computationally efficient realization of Rough Support Vector Clustering. Here the Rough Support Vector Clustering problem is formulated as an approximate Minimum Enclosing Ball problem and is solved using an approximate Minimum Enclosing Ball algorithm. Experiments with several large multiclass datasets, such as Forest Cover Type and other multiclass datasets taken from the LIBSVM page, show that the proposed strategy is efficient and finds meaningful soft cluster abstractions that provide better generalization performance than the SVM classifier.
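The approximate Minimum Enclosing Ball subroutine that makes this (and CVM-style methods generally) fast can be sketched with the simple Badoiu-Clarkson iteration: repeatedly step the center toward the farthest point for O(1/ε²) rounds. This is a generic sketch, not the specific algorithm used in the paper.

```python
import numpy as np

def approx_meb(points, eps=0.05):
    """(1 + eps)-approximate minimum enclosing ball via Badoiu-Clarkson."""
    P = np.asarray(points, dtype=float)
    c = P[0].copy()
    for t in range(1, int(np.ceil(1.0 / eps ** 2)) + 1):
        far = P[np.argmax(np.linalg.norm(P - c, axis=1))]
        c += (far - c) / (t + 1)          # step toward the current farthest point
    return c, np.linalg.norm(P - c, axis=1).max()

rng = np.random.default_rng(0)
center, radius = approx_meb(rng.normal(size=(500, 3)))
print("center:", np.round(center, 3), " radius:", round(radius, 3))
```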