6 results for nonsmooth optimization

in Deakin Research Online - Australia


Relevance: 100.00%

Abstract:

We discuss the implementation of a number of modern methods of global and nonsmooth continuous optimization, based on the ideas of Rubinov, in the programming library GANSO. GANSO implements the derivative-free bundle method, the extended cutting angle method, dynamical-systems-based optimization, and various combinations and heuristics of these. We outline the main ideas behind each method and report on the interfacing with the Matlab and Maple packages.
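
For a feel of the kind of call such a library exposes, here is a minimal Python sketch; SciPy's derivative-free Nelder-Mead stands in for GANSO's methods (the abstract does not show the GANSO API, and the objective below is a toy, not from the paper):

    import numpy as np
    from scipy.optimize import minimize

    # Toy nonsmooth, nonconvex objective (assumed for illustration).
    def objective(x):
        return np.max(np.abs(x)) + 0.1 * np.sum(np.sin(5 * x) ** 2)

    x0 = np.array([2.0, -1.5])
    # Nelder-Mead requires no derivatives, like the methods GANSO implements.
    result = minimize(objective, x0, method="Nelder-Mead")
    print(result.x, result.fun)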

Relevance: 70.00%

Abstract:

We investigate the parallelization and performance of the discrete gradient method of nonsmooth optimization. This derivative-free method is shown to be an effective optimization tool, able to skip many shallow local minima of nonconvex nondifferentiable objective functions. Although this is a sequential iterative method, we were able to parallelize critical steps of the algorithm, and this led to a significant improvement in performance on multiprocessor computer clusters. We applied the method to a difficult polyatomic cluster problem in computational chemistry and found it to outperform other algorithms.
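
A minimal sketch of the parallelization pattern described: the many independent objective evaluations needed at each discrete-gradient step are farmed out to a worker pool. The objective and the perturbation scheme below are simplified assumptions, not the paper's code:

    import numpy as np
    from multiprocessing import Pool

    def objective(x):
        # Placeholder for an expensive nonsmooth objective (e.g. a cluster energy).
        return np.max(np.abs(x)) + np.linalg.norm(x, 1)

    def evaluation_points(x, h=1e-3):
        # One point per coordinate perturbation; the actual discrete gradient
        # uses a more elaborate point set, but the evaluations are independent.
        return [x + h * e for e in np.eye(len(x))]

    if __name__ == "__main__":
        x = np.array([1.0, -2.0, 0.5])
        with Pool() as pool:
            # The independent evaluations are the parallelizable critical step.
            values = pool.map(objective, evaluation_points(x))
        print(values)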

Relevance: 70.00%

Abstract:

The process of sleep stage identification is a labour-intensive task that involves the specialized interpretation of the polysomnographic signals captured from a patient's overnight sleep session. Automating this task has proven challenging for data mining algorithms because of noise, complexity, and the extreme size of the data. In this paper we apply nonsmooth optimization to extract key features that lead to better accuracy. We develop a specific procedure for identifying K-complexes, a special type of brain wave crucial for distinguishing sleep stages. The procedure contains two steps: we first extract "easily classified" K-complexes, and then apply nonsmooth optimization methods to extract features from the remaining data and refine the results of the first step. Numerical experiments show that this procedure is efficient for detecting K-complexes. It is also found that most classification methods perform significantly better on the extracted features.
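
A schematic of the two-step structure in Python; the scoring rule, feature extractor, and classifier below are placeholder stubs assumed for illustration, not the paper's actual pipeline:

    import numpy as np

    def amplitude_score(epoch):
        # Toy surrogate: scaled peak-to-peak amplitude (assumed heuristic).
        return (epoch.max() - epoch.min()) / 200.0

    def extract_features(epoch):
        # Placeholder for features the paper derives via nonsmooth optimization.
        return np.array([epoch.std(), np.abs(np.diff(epoch)).mean()])

    def classify(features, threshold=1.0):
        # Stand-in classifier; the paper evaluates several real classifiers.
        return features.sum() > threshold

    def detect_k_complexes(epochs, easy_threshold=0.9):
        # Step 1: pull out "easily classified" K-complexes by a simple rule.
        easy = [e for e in epochs if amplitude_score(e) >= easy_threshold]
        remaining = [e for e in epochs if amplitude_score(e) < easy_threshold]
        # Step 2: classify the harder remainder on extracted features.
        hard = [e for e in remaining if classify(extract_features(e))]
        return easy + hard

    epochs = [np.random.randn(100) * s for s in (30, 120, 60)]
    print(len(detect_k_complexes(epochs)))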

Relevance: 30.00%

Abstract:

We examine a mathematical model of non-destructive testing of planar waveguides, based on the numerical solution of a nonlinear integral equation. This problem is ill-posed, so the method of Tikhonov regularization is applied. To minimize the Tikhonov functional and find the parameters of the waveguide, we use two new optimization methods: the cutting angle method of global optimization and the discrete gradient method of nonsmooth local optimization. We examine how noise in the experimental data influences the solution, and how the regularization parameter should be chosen. We show that even with significant noise in the data the numerical solution is of high accuracy, and the method can be used to process real experimental data.
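
In standard form (notation assumed here, not taken from the paper), the Tikhonov-regularized problem reads

\[
  \min_{x}\; \|F(x) - d\|^2 + \alpha \|x - x_0\|^2,
\]

where F is the nonlinear integral operator, d the noisy measured data, x_0 an a priori guess for the waveguide parameters, and alpha > 0 the regularization parameter; the abstract's point is that alpha must be matched to the noise level in d.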

Relevance: 30.00%

Abstract:

We examine the numerical performance of various methods of calculating the Conditional Value-at-Risk (CVaR), and of portfolio optimization with respect to this risk measure. We concentrate on the method proposed by Rockafellar and Uryasev (Rockafellar, R.T. and Uryasev, S., 2000, Optimization of conditional value-at-risk. Journal of Risk, 2, 21-41), which converts this problem into one of convex optimization. We compare the use of linear programming techniques against the discrete gradient method of nonsmooth optimization, and establish the superiority of the latter. We show that nonsmooth optimization can be used efficiently for large portfolio optimization, and we also examine parallel execution of this method on computer clusters.
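
For reference, the Rockafellar-Uryasev construction that makes the problem convex: with portfolio x, random loss L(x, Y), and confidence level alpha,

\[
  F_\alpha(x, t) = t + \frac{1}{1-\alpha}\,\mathbb{E}\big[(L(x, Y) - t)_+\big],
  \qquad
  \mathrm{CVaR}_\alpha(x) = \min_{t} F_\alpha(x, t),
\]

so minimizing F_alpha jointly over (x, t) minimizes CVaR. With finitely many scenarios, the positive parts can be handled either through auxiliary variables in a linear program or, as the abstract advocates, directly as a nonsmooth convex objective.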

Relevance: 30.00%

Abstract:

We consider a class of nonsmooth convex optimization problems in which the objective function is a convex differentiable function regularized by the sum of the group reproducing kernel norm and the ℓ1-norm of the problem variables. This class of problems has many applications in variable selection, such as the group LASSO and sparse group LASSO. In this paper, we propose a proximal Landweber Newton method for this class of convex optimization problems, and carry out the convergence and computational complexity analysis for this method. Theoretical analysis and numerical results show that the proposed algorithm is promising.
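
In symbols (standard sparse group LASSO notation, assumed rather than quoted from the paper): for a smooth convex loss f, groups g in a partition G of the variables, and weights lambda_1, lambda_2 >= 0,

\[
  \min_{x}\; f(x) + \lambda_1 \|x\|_1 + \lambda_2 \sum_{g \in \mathcal{G}} \|x_g\|_2,
\]

where x_g denotes the subvector of x indexed by group g; the Euclidean group norm shown is the standard special case of the group reproducing kernel norm, and setting lambda_1 = 0 recovers the group LASSO. The nonsmooth regularizer is handled through its proximal operator, while the smooth part f admits Newton-type steps.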