7 results for Superlinear and Semi–Superlinear Convergence
at University of Queensland eSpace - Australia
Abstract:
In this paper we study the following p(x)-Laplacian problem: -div(a(x)|∇u|^{p(x)-2} ∇u) + b(x)|u|^{p(x)-2} u = f(x, u) in Ω, with u = 0 on ∂Ω, where 1 < p₁ ≤ p(x) ≤ p₂ < n and Ω ⊂ ℝⁿ is a bounded domain. Applying the mountain pass theorem, we obtain the existence of solutions in W₀^{1,p(x)}(Ω) for the p(x)-Laplacian problem in the superlinear and sublinear cases. © 2004 Elsevier Inc. All rights reserved.
Abstract:
In this letter, we propose a class of self-stabilizing learning algorithms for minor component analysis (MCA), which includes several well-known MCA learning algorithms. Self-stabilizing means that the sign of the change in the weight-vector length is independent of the presented input vector. For these algorithms, a rigorous global convergence proof is given and the convergence rate is discussed. By combining the positive properties of these algorithms, a new learning algorithm is proposed that improves on their performance. Simulations confirm our theoretical results.
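MCA itself seeks the eigenvector of the input covariance matrix associated with the smallest eigenvalue. As a minimal numerical sketch of what the algorithms above compute — a batch inverse power iteration on a fixed 2×2 covariance, not the letter's online self-stabilizing rules; the matrix and function names are illustrative:

```python
# Minor component analysis (MCA) seeks the eigenvector of the input
# covariance matrix with the SMALLEST eigenvalue. This is a minimal
# batch illustration via inverse power iteration; it is NOT the
# letter's online self-stabilizing learning rules.

def minor_component(C, iters=50):
    """Unit eigenvector of the symmetric 2x2 matrix C for its
    smallest eigenvalue, via inverse power iteration."""
    a, b, c, d = C[0][0], C[0][1], C[1][0], C[1][1]
    det = a * d - b * c
    # Closed-form inverse of the 2x2 matrix.
    inv = [[d / det, -b / det], [-c / det, a / det]]
    w = [1.0, 0.0]  # arbitrary non-degenerate starting vector
    for _ in range(iters):
        # Multiplying by C^{-1} amplifies the minor direction.
        w = [inv[0][0] * w[0] + inv[0][1] * w[1],
             inv[1][0] * w[0] + inv[1][1] * w[1]]
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        w = [w[0] / norm, w[1] / norm]  # renormalize each step
    return w

# Covariance [[2, 1], [1, 2]] has eigenvalues 3 and 1; its minor
# component is the direction (1, -1)/sqrt(2).
w = minor_component([[2.0, 1.0], [1.0, 2.0]])
```

The online algorithms in the letter approximate this quantity adaptively from streaming inputs, which is where the self-stabilizing property of the weight-vector length matters.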
Abstract:
Fast Classification (FC) networks were inspired by a biologically plausible mechanism for short-term memory in which learning occurs instantaneously. Both the weights and the topology of an FC network are mapped directly from the training samples by a prescriptive training scheme: only two presentations of the training data are required to train an FC network. Compared with iterative learning algorithms such as back-propagation (which may require many hundreds of presentations of the training data), training of FC networks is extremely fast, and learning convergence is always guaranteed. FC networks may therefore be suitable for applications where real-time classification is needed. In this paper, FC networks are applied to the real-time extraction of gene expressions from Chlamydia microarray data. Both the classification performance and the learning time of the FC networks are compared with Multi-Layer Perceptron (MLP) networks and support vector machines (SVMs) on the same classification task. The FC networks are shown to have extremely fast learning times and comparable classification accuracy.
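The FC construction itself is not given in this abstract. As a toy sketch of what single-pass, prescriptive training can look like in general, a nearest-prototype classifier "learns" by mapping training samples directly into stored parameters — a stand-in for the idea, not the FC-network scheme; all names are illustrative:

```python
# Toy illustration of prescriptive, single-pass training: the "weights"
# are taken directly from the training samples, so learning completes
# after one presentation of the data. This nearest-prototype classifier
# is a stand-in for the idea, NOT the actual FC-network construction.

def train(samples):
    """One pass over (vector, label) pairs: store them as prototypes."""
    return list(samples)

def classify(prototypes, x):
    """Label of the stored prototype closest (squared Euclidean) to x."""
    def sqdist(p):
        return sum((pi - xi) ** 2 for pi, xi in zip(p[0], x))
    return min(prototypes, key=sqdist)[1]

model = train([([0.0, 0.0], "A"), ([1.0, 1.0], "B")])
label = classify(model, [0.9, 0.8])  # nearest prototype is (1, 1)
```

There is no iterative weight update at all, which is the sense in which such schemes trade per-query computation for instantaneous, guaranteed-to-converge training.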
Abstract:
This paper presents an overlapping generations model with physical and human capital and income inequality. It shows that inequality impedes output growth by directly harming capital accumulation and by indirectly raising the ratio of physical to human capital. The convergence speed of output growth equals the lower of the convergence speeds of the relative capital ratio and of inequality, and varies with initial states. Among economies with the same balanced growth rate but different initial income levels, the income ranking can switch in favor of those starting from low inequality and a low ratio of physical to human capital, particularly if the growth rate converges slowly. © 2004 Elsevier B.V. All rights reserved.
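The convergence-speed claim has a simple mechanical core: when a quantity is the sum of deviations decaying at different geometric rates, its asymptotic convergence is governed by the slower rate. A stylized numerical sketch — not the paper's model; all parameter values are illustrative:

```python
# Stylized illustration (not the paper's model): output growth is the
# balanced-growth rate plus two deviations decaying at geometric rates
# lam1 and lam2. Its measured convergence is dominated by the SLOWER
# rate, i.e. the overall convergence speed is the lower of the two.

lam1, lam2 = 0.5, 0.9   # per-period persistence of the two deviations
g_star = 0.02           # illustrative balanced-growth rate

def growth(t, a=0.01, b=0.01):
    """Output growth: steady state plus two decaying deviations."""
    return g_star + a * lam1 ** t + b * lam2 ** t

# Asymptotic persistence of the total deviation approaches lam2,
# the slower-decaying (more persistent) component.
ratios = [(growth(t + 1) - g_star) / (growth(t) - g_star)
          for t in range(30, 40)]
```

In the paper's setting the two decaying states are the relative capital ratio and inequality, and the initial weights on each (the initial states) determine how long the slower component dominates.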