Abstract:
Bid optimization is becoming quite popular in sponsored search auctions on the Web. Given a keyword and the maximum willingness to pay of each advertiser interested in the keyword, the bid optimizer generates a profile of bids for the advertisers with the objective of maximizing customer retention without compromising the revenue of the search engine. In this paper, we present a bid optimization algorithm based on a Nash bargaining model in which the first player is the search engine and the second player is a virtual agent representing all the bidders. We make the realistic assumption that each bidder specifies a maximum willingness-to-pay value and a discrete, finite set of bid values. We show that the Nash bargaining solution for this problem always lies on a certain edge of the convex hull, one endpoint of which is the vector of maximum willingness-to-pay values of all the bidders. We show that the other endpoint of this edge can be computed as the solution of a linear programming problem. We also show how the solution can be transformed into a bid profile for the advertisers.
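For intuition about the bargaining objective (a toy illustration, not the paper's convex-hull/LP construction): a Nash bargaining solution maximizes the product of the two players' gains over their disagreement utilities. The sketch below does this by brute force over a small, hypothetical discrete set of candidate bid profiles; u_engine and u_bidders are stand-ins for the search engine's revenue and the aggregate bidder surplus.

```python
def nash_bargain(profiles, u_engine, u_bidders, d=(0.0, 0.0)):
    """Pick the bid profile maximizing the Nash product (toy sketch).

    profiles : iterable of candidate bid profiles (hypothetical)
    u_engine, u_bidders : utility functions for the two players
    d : disagreement point (utilities if bargaining fails)
    """
    best, best_val = None, float("-inf")
    for p in profiles:
        g1 = u_engine(p) - d[0]      # search engine's surplus
        g2 = u_bidders(p) - d[1]     # virtual bidder agent's surplus
        if g1 >= 0 and g2 >= 0 and g1 * g2 > best_val:
            best, best_val = p, g1 * g2
    return best
```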
Abstract:
We present two online algorithms for maintaining a topological order of a directed acyclic graph as arcs are added, and detecting a cycle when one is created. Our first algorithm takes O(m^(1/2)) amortized time per arc and our second algorithm takes O(n^(2.5)/m) amortized time per arc, where n is the number of vertices and m is the total number of arcs. For sparse graphs, our O(m^(1/2)) bound improves the best previous bound by a factor of log n and is tight to within a constant factor for a natural class of algorithms that includes all the existing ones. Our main insight is that the two-way search method of previous algorithms does not require an ordered search, but can be more general, allowing us to avoid the use of heaps (priority queues). Instead, the deterministic version of our algorithm uses (approximate) median-finding; the randomized version of our algorithm uses uniform random sampling. For dense graphs, our O(n^(2.5)/m) bound improves the best previously published bound by a factor of n^(1/4) and a recent bound obtained independently of our work by a factor of log n. Our main insight is that graph search is wasteful when the graph is dense and can be avoided by searching the topological order space instead. Our algorithms extend to the maintenance of strong components, in the same asymptotic time bounds.
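For a flavor of incremental topological ordering with two-way search, here is a minimal sketch in the style of the simpler Pearce-Kelly algorithm (not the paper's median-finding or sampling variants): on inserting an arc (u, v) that violates the current order, only the "affected region" of the order between v and u is searched and locally reordered.

```python
def add_arc(adj, radj, ord_, pos, u, v):
    """Insert arc u -> v and repair the topological order; raises
    ValueError if the arc creates a cycle.

    adj / radj : forward / backward adjacency, dict node -> set(nodes)
    ord_       : dict node -> position in the current order
    pos        : dict position -> node (inverse of ord_)
    """
    adj[u].add(v); radj[v].add(u)
    lb, ub = ord_[v], ord_[u]
    if ub < lb:                       # order is still consistent
        return
    # Forward search from v, confined to positions <= ub.
    fwd, stack, seen = [], [v], {v}
    while stack:
        x = stack.pop()
        if x == u:                    # a path v ~> u exists: cycle
            adj[u].discard(v); radj[v].discard(u)
            raise ValueError("arc (%r, %r) creates a cycle" % (u, v))
        fwd.append(x)
        for y in adj[x]:
            if y not in seen and ord_[y] <= ub:
                seen.add(y); stack.append(y)
    # Backward search from u, confined to positions >= lb.
    bwd, stack, seen = [], [u], {u}
    while stack:
        x = stack.pop()
        bwd.append(x)
        for y in radj[x]:
            if y not in seen and ord_[y] >= lb:
                seen.add(y); stack.append(y)
    # Reuse the affected positions: all of bwd before all of fwd,
    # each group keeping its old relative order.
    bwd.sort(key=ord_.get); fwd.sort(key=ord_.get)
    slots = sorted(ord_[x] for x in bwd + fwd)
    for x, i in zip(bwd + fwd, slots):
        ord_[x] = i; pos[i] = x
```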
Abstract:
For a homing interceptor, suitable initial conditions must be achieved by the midcourse guidance scheme for maximum effectiveness. To achieve the desired end goal of any midcourse guidance scheme, a two-point boundary value problem must be solved online under all realistic constraints. A newly developed, computationally efficient technique named Model Predictive Static Programming (MPSP) is used in this paper to obtain a suboptimal solution of the optimal midcourse guidance problem. Time-to-go uncertainty is avoided in this formulation by making use of the desired position at which midcourse guidance terminates and terminal guidance takes over. A suitable approach angle towards the desired point can also be specified in this guidance law formulation. This feature makes the law particularly attractive, because the warhead effectiveness issue can be indirectly addressed in the midcourse phase.
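Roughly, the computational core of MPSP is a closed-form control-history update: the predicted terminal output error is written as a linear function of corrections to the whole control history through sensitivity matrices, and the minimum-control-effort correction then has an analytic solution. A small numpy sketch of that update step, with sensitivity matrices B_k and weights R_k assumed to be supplied by a dynamics model elsewhere (a sketch under these assumptions, not the paper's full guidance law):

```python
import numpy as np

def mpsp_control_update(B, R, dY_N):
    """One MPSP-style control-history update (sketch).

    B    : list of sensitivity matrices B_k = d(Y_N)/d(U_k), each (p, m)
    R    : list of positive-definite control weights, each (m, m)
    dY_N : terminal output error to be removed, shape (p,)
    Returns corrections dU_k minimizing sum_k dU_k^T R_k dU_k
    subject to sum_k B_k dU_k = -dY_N.
    """
    # A = sum_k B_k R_k^{-1} B_k^T, from the Lagrangian stationarity.
    A = sum(Bk @ np.linalg.solve(Rk, Bk.T) for Bk, Rk in zip(B, R))
    lam = np.linalg.solve(A, dY_N)            # Lagrange multiplier
    return [-np.linalg.solve(Rk, Bk.T @ lam) for Bk, Rk in zip(B, R)]
```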
Abstract:
This paper addresses the problem of maximum margin classification given the moments of the class conditional densities and the false positive and false negative error rates. Using Chebyshev inequalities, the problem can be posed as a second order cone programming problem. The dual of the formulation leads to a geometric optimization problem, that of computing the distance between two ellipsoids, which is solved by an iterative algorithm. The formulation is extended to non-linear classifiers using kernel methods. The resulting classifiers are applied to the classification of unbalanced datasets with asymmetric misclassification costs. Experimental results on benchmark datasets show the efficacy of the proposed method.
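To make the cone constraints concrete: for a class with mean mu and covariance S, a Chebyshev-Cantelli bound turns "classify this class correctly with probability at least eta" into the constraint w'mu - b >= kappa * ||S^(1/2) w|| with kappa = sqrt(eta / (1 - eta)). Below is a minimal cvxpy sketch of a primal moment-based formulation of this kind, with hypothetical 2-D moments; the paper itself works with the dual via an ellipsoid-distance algorithm.

```python
import numpy as np
import cvxpy as cp

# Hypothetical class-conditional moments (assumptions for illustration).
mu1, S1 = np.array([2.0, 2.0]), np.eye(2)
mu2, S2 = np.array([-2.0, -2.0]), 0.5 * np.eye(2)
eta1, eta2 = 0.8, 0.8                      # desired per-class accuracy
k1 = np.sqrt(eta1 / (1 - eta1))            # Chebyshev-Cantelli factors
k2 = np.sqrt(eta2 / (1 - eta2))
C1, C2 = np.linalg.cholesky(S1).T, np.linalg.cholesky(S2).T

w, b = cp.Variable(2), cp.Variable()
constraints = [w @ mu1 - b >= 1 + k1 * cp.norm(C1 @ w),   # class 1 cone
               b - w @ mu2 >= 1 + k2 * cp.norm(C2 @ w)]   # class 2 cone
cp.Problem(cp.Minimize(cp.norm(w)), constraints).solve()
print(w.value, b.value)
```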
Abstract:
This paper presents an intelligent procurement marketplace for finding the best mix of web services to dynamically compose the business process desired by a web service requester. We develop a combinatorial auction approach that leads to an integer programming formulation of the web services composition problem. The model takes into account Quality of Service (QoS) and Service Level Agreements (SLAs) to differentiate among multiple service providers capable of fulfilling a functionality. An important feature of the model is interface-aware composition.
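A toy version of the resulting integer program, in a set-partitioning flavor: providers bid bundles of functionalities at quoted prices, and each functionality must be won by exactly one bid. The task names, prices, and single-sourcing constraint below are assumptions for illustration; the paper's model adds QoS, SLA, and interface-compatibility terms.

```python
import pulp

# Hypothetical provider bids: bundle of functionalities -> quoted price.
tasks = ["auth", "payment", "shipping"]
bids = {"b1": (["auth", "payment"], 10.0),
        "b2": (["payment", "shipping"], 8.0),
        "b3": (["auth"], 4.0),
        "b4": (["shipping"], 5.0)}

x = {b: pulp.LpVariable("x_" + b, cat="Binary") for b in bids}
prob = pulp.LpProblem("ws_composition", pulp.LpMinimize)
prob += pulp.lpSum(price * x[b] for b, (_, price) in bids.items())
for t in tasks:   # every functionality filled by exactly one winning bid
    prob += pulp.lpSum(x[b] for b, (bundle, _) in bids.items()
                       if t in bundle) == 1
prob.solve()
print([b for b in bids if x[b].value() == 1])
```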
Abstract:
Even though several techniques have been proposed in the literature for achieving multiclass classification using Support Vector Machines (SVMs), the scalability of these approaches to large data sets still needs much exploration. The Core Vector Machine (CVM) is a technique for scaling up a two-class SVM to handle large data sets. In this paper we propose the Multiclass Core Vector Machine (MCVM). Here we formulate the multiclass SVM problem as a Quadratic Programming (QP) problem defining an SVM with vector-valued output. This QP problem is then solved using the CVM technique to achieve scalability to large data sets. Experiments with several large synthetic and real-world data sets show that the proposed MCVM technique gives generalization performance comparable to that of SVM at much lower computational expense. Further, it is observed that MCVM scales well with the size of the data set.
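The scalability of CVM-style methods comes from reducing the QP training problem to an approximate Minimum Enclosing Ball computation, for which a (1+eps)-approximation can be found in a number of iterations independent of the data set size. A minimal sketch of the Badoiu-Clarkson-style iteration in plain numpy (the kernel-induced feature space of the actual method is omitted):

```python
import numpy as np

def approx_meb(X, eps=0.1):
    """(1 + eps)-approximate minimum enclosing ball of the rows of X,
    via Badoiu-Clarkson: repeatedly step the center towards the
    current farthest point.  The iteration count ~1/eps^2 does not
    depend on the number of points, which is the property that lets
    CVM-style methods scale to large data sets."""
    c = X[0].astype(float).copy()
    for t in range(1, int(np.ceil(1.0 / eps ** 2)) + 1):
        far = X[np.argmax(np.linalg.norm(X - c, axis=1))]
        c += (far - c) / (t + 1)
    r = np.linalg.norm(X - c, axis=1).max()
    return c, r
```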
Abstract:
Support Vector Clustering has gained reasonable attention from researchers in exploratory data analysis due to its firm theoretical foundation in statistical learning theory. However, the hard partitioning of the data set achieved by Support Vector Clustering may not be acceptable in real-world scenarios. Rough Support Vector Clustering is an extension of Support Vector Clustering that attains a soft partitioning of the data set, but the Quadratic Programming problem it involves makes it computationally expensive for large data sets. In this paper, we propose the Rough Core Vector Clustering algorithm, a computationally efficient realization of Rough Support Vector Clustering. Here the Rough Support Vector Clustering problem is formulated as an approximate Minimum Enclosing Ball problem and solved using an approximate Minimum Enclosing Ball finding algorithm. Experiments with several large multiclass data sets, such as Forest Cover Type and other multiclass data sets taken from the LIBSVM page, show that the proposed strategy is efficient and finds meaningful soft cluster abstractions that provide better generalization performance than the SVM classifier.
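A sketch of the soft-partition idea: given an (approximate) enclosing ball, a point is a certain member of the cluster (lower approximation) if it lies within an inner radius, a possible member if it lies in a boundary shell, and an outlier otherwise. The radius R and shell width delta below are hypothetical parameters, not the paper's exact construction:

```python
import numpy as np

def rough_assign(X, center, R, delta):
    """Rough (soft) membership for one cluster ball (sketch only).

    Points within radius R form the lower approximation (certain
    members); points within R + delta fall in the boundary region
    (possible members, which may be shared with other clusters);
    anything farther is treated as an outlier.
    """
    d = np.linalg.norm(X - center, axis=1)
    lower = d <= R
    boundary = (d > R) & (d <= R + delta)
    return lower, boundary
```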
Abstract:
Real-time simulation of deformable solids is essential for applications such as biological organ simulation in surgical simulators. In this work, deformable solids are approximated as linear elastic, and an easy and straightforward numerical technique, the Finite Point Method (FPM), is used to model three-dimensional linear elastostatics. A Graphics Processing Unit (GPU) is used to accelerate the computations. Results show that the Finite Point Method, together with a GPU, can compute three-dimensional linear elastostatic responses of solids at rates suitable for real-time graphics, for solids represented by a reasonable number of points.
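Whatever the spatial discretization details, FPM collocation for linear elastostatics ultimately yields a large sparse linear system K u = f for the nodal displacements. A sketch of solving it with an iterative Krylov method is below; K and f are assumed to come from an FPM assembly elsewhere, and BiCGStab is chosen because collocation matrices are generally nonsymmetric. On a GPU, CuPy's cupyx.scipy.sparse package offers analogous sparse types and solvers.

```python
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import bicgstab

def solve_elastostatics(K, f):
    """Solve the sparse system K u = f from an FPM discretization.

    K : (3N, 3N) sparse collocation matrix (assumed assembled elsewhere;
        generally nonsymmetric, hence BiCGStab rather than plain CG)
    f : (3N,) load / boundary-condition vector
    """
    u, info = bicgstab(csr_matrix(K), f)
    if info != 0:
        raise RuntimeError("BiCGStab did not converge (info=%d)" % info)
    return u
```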
Abstract:
This paper presents a novel Second Order Cone Programming (SOCP) formulation for large scale binary classification tasks. Assuming that the class conditional densities are mixture distributions, where each component of the mixture has a spherical covariance, the second order statistics of the components can be estimated efficiently using clustering algorithms like BIRCH. For each cluster, the second order moments are used to derive a second order cone constraint via a Chebyshev-Cantelli inequality. This constraint ensures that any data point in the cluster is classified correctly with high probability. This leads to a large margin SOCP formulation whose size depends on the number of clusters rather than the number of training data points. Hence, the proposed formulation scales well to large datasets compared to state-of-the-art classifiers such as Support Vector Machines (SVMs). Experiments on real-world and synthetic datasets show that the proposed algorithm outperforms SVM solvers in terms of training time while achieving similar accuracies.
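Concretely, the per-cluster moments can be estimated with scikit-learn's Birch implementation; the sketch below computes them (the constraint form in the comment follows the spherical-covariance assumption, and the threshold value is a hypothetical tuning parameter):

```python
import numpy as np
from sklearn.cluster import Birch

def cluster_moments(X, threshold=0.5):
    """Per-cluster second-order statistics via BIRCH (sketch).

    Each returned triple (mu, s, n) would later yield one cone
    constraint of the form  w @ mu - b >= 1 + kappa * s * ||w||,
    under the spherical-covariance assumption (covariance s^2 I);
    kappa encodes the desired classification probability.
    """
    labels = Birch(threshold=threshold, n_clusters=None).fit_predict(X)
    stats = []
    for k in np.unique(labels):
        C = X[labels == k]
        stats.append((C.mean(axis=0), C.std(), len(C)))
    return stats
```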
Abstract:
Fuzzy multiobjective programming for the deterministic case involves maximizing the minimum goal satisfaction level among conflicting goals of different stakeholders using the max-min approach. Uncertainty due to randomness in fuzzy multiobjective programming may be addressed by modifying the constraints using a probabilistic inequality (e.g., Chebyshev's inequality) or by adding new constraints using statistical moments (e.g., skewness). Such modifications may reduce the optimal value of the system performance. In the present study, a methodology is developed that allows some violation of the newly added and modified constraints and then minimizes the violation of those constraints while maximizing the minimum goal satisfaction level. Fuzzy goal programming is used to solve the multiobjective model. The proposed methodology is demonstrated with an application to Waste Load Allocation (WLA) in a river system.
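A toy max-min formulation in code: with linear membership (satisfaction) functions mu_i for two hypothetical goals, maximizing the minimum satisfaction level lambda is itself a linear program. All numbers below are made up for illustration.

```python
from scipy.optimize import linprog

# Decision variables z = [x1, x2, lam].  Hypothetical memberships:
#   mu1(x) = (3*x1 + 2*x2) / 12,   mu2(x) = (x1 + 4*x2) / 16,
# resource constraint x1 + x2 <= 4.  Max-min approach: maximize lam
# subject to lam <= mu_i(x) for each goal and 0 <= lam <= 1.
c = [0.0, 0.0, -1.0]                 # linprog minimizes, so use -lam
A_ub = [[-3/12, -2/12, 1.0],         # lam - mu1(x) <= 0
        [-1/16, -4/16, 1.0],         # lam - mu2(x) <= 0
        [1.0,   1.0,   0.0]]         # x1 + x2 <= 4
b_ub = [0.0, 0.0, 4.0]
bounds = [(0, None), (0, None), (0, 1)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("max-min satisfaction:", res.x[2], "at x =", res.x[:2])
```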