64 results for Continuous exercise


Relevance:

20.00%

Publisher:

Abstract:

A new search-space-updating technique for genetic algorithms is proposed for continuous optimisation problems. Rather than gradually reducing the search space during the evolution process at a fixed reduction rate set ‘a priori’, the upper and lower boundaries for each variable in the objective function are dynamically adjusted based on its distribution statistics. To test its effectiveness, the technique is applied to a number of benchmark optimisation problems and compared with three other techniques, namely the genetic algorithm with parameter space size adjustment (GAPSSA) technique [A.B. Djurišić, Elite genetic algorithms with adaptive mutations for solving continuous optimization problems – application to modeling of the optical constants of solids, Optics Communications 151 (1998) 147–159], the successive zooming genetic algorithm (SZGA) [Y. Kwon, S. Kwon, S. Jin, J. Kim, Convergence enhanced genetic algorithm with successive zooming method for solving continuous optimization problems, Computers and Structures 81 (2003) 1715–1725] and a simple GA. The tests show that for well-posed problems, existing search-space-updating techniques perform well in terms of convergence speed and solution precision; however, for some ill-posed problems these techniques are statistically inferior to a simple GA. All the tests show that the proposed technique is statistically superior to its counterparts.
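The abstract gives the update rule only at a high level. The numpy sketch below illustrates just the general idea it describes — per-variable bounds derived from the population's distribution statistics rather than from a fixed ‘a priori’ reduction rate. The function name, the mean ± k·std rule and the elite fraction are all hypothetical choices, not the authors' method.

```python
import numpy as np

def update_bounds(pop, fitness, lower, upper, k=3.0, frac=0.5):
    """One hypothetical dynamic search-space update step.

    pop          : (n, d) array of individuals
    fitness      : (n,) array, lower is better (minimisation)
    lower, upper : current per-variable bounds, each of shape (d,)

    New bounds follow the distribution statistics (mean +/- k std)
    of the best `frac` of the population, clipped to the current box,
    instead of shrinking at a fixed rate.
    """
    n_keep = max(2, int(frac * len(pop)))
    elite = pop[np.argsort(fitness)[:n_keep]]
    mu, sigma = elite.mean(axis=0), elite.std(axis=0)
    new_lower = np.maximum(lower, mu - k * sigma)
    new_upper = np.minimum(upper, mu + k * sigma)
    # Guard against collapsing a variable's range to a point.
    too_tight = new_upper - new_lower < 1e-9
    new_lower = np.where(too_tight, lower, new_lower)
    new_upper = np.where(too_tight, upper, new_upper)
    return new_lower, new_upper
```

Called once per generation after selection, such a rule lets the box track wherever the fitter individuals concentrate, which is what distinguishes it from a fixed-rate shrink.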

Relevance:

20.00%

Publisher:

Abstract:

Objectives: To determine, using unsupervised walking programmes, the effects of exercise at a level lower than currently recommended on cardiovascular risk factors and functional capacity.

Relevance:

20.00%

Publisher:

Abstract:

We study entanglement accumulation in a memory built out of two continuous-variable systems interacting with a qubit that mediates their indirect coupling. We show that, in contrast with the case of two-dimensional Hilbert spaces, more than one ebit of entanglement can be accumulated in the memory, even though no entangled resource is used. The protocol is immediately implementable, and we assess the role of the main imperfections.
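For context (this definition is standard and not taken from the paper): an ebit is the entanglement content of a maximally entangled two-qubit pair, as measured by the entropy of entanglement,

$$E\big(|\Phi^+\rangle\big) = S(\rho_A) = -\operatorname{Tr}\!\left(\rho_A \log_2 \rho_A\right) = 1, \qquad |\Phi^+\rangle = \tfrac{1}{\sqrt{2}}\big(|00\rangle + |11\rangle\big), \quad \rho_A = \operatorname{Tr}_B\, |\Phi^+\rangle\langle\Phi^+| = \tfrac{1}{2}\mathbb{1},$$

so accumulating more than one ebit means the continuous-variable memory ends up more entangled than any pair of qubits could be.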

Relevance:

20.00%

Publisher:

Abstract:

According to Michael's selection theorem, any surjective continuous linear operator from one Fréchet space onto another has a continuous (not necessarily linear) right inverse. Using this theorem, Herzog and Lemmert proved that if $E$ is a Fréchet space and $T:E\to E$ is a continuous linear operator such that the Cauchy problem $\dot x=Tx$, $x(0)=x_0$ is solvable on $[0,1]$ for every $x_0\in E$, then for every $f\in C([0,1],E)$ there exists a continuous map $S:[0,1]\times E\to E$, $(t,x)\mapsto S_tx$, such that for every $x_0\in E$ the function $x(t)=S_tx_0$ is a solution of the Cauchy problem $\dot x(t)=Tx(t)+f(t)$, $x(0)=x_0$ (they call $S$ a fundamental system of solutions of the equation $\dot x=Tx+f$). We prove the same theorem, replacing "continuous" by "sequentially continuous", for locally convex spaces from a class which contains strict inductive limits of Fréchet spaces and strong duals of Fréchet–Schwartz spaces and is closed with respect to finite products and sequentially closed subspaces. The key point of the proof is an extension of the theorem on the existence of a sequentially continuous right inverse of any surjective sequentially continuous linear operator to a class of non-metrizable locally convex spaces.
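In the finite-dimensional case $E=\mathbb{R}^n$ the fundamental system of solutions can be written explicitly via the variation-of-constants formula; this is only a heuristic for the locally convex setting of the paper, where $e^{tT}$ need not be defined:

$$x(t) = S_t x_0, \qquad S_t x_0 = e^{tT} x_0 + \int_0^t e^{(t-s)T} f(s)\, ds,$$

which indeed satisfies $\dot x(t) = Tx(t) + f(t)$ with $x(0) = x_0$.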

Relevance:

20.00%

Publisher:

Abstract:

A continuous forward algorithm (CFA) is proposed for nonlinear modelling and identification using radial basis function (RBF) neural networks. The problem considered here is simultaneous network construction and parameter optimization, well known to be a hard mixed-integer problem. The proposed algorithm performs these two tasks within an integrated analytic framework and offers two important advantages. First, the model performance can be significantly improved through continuous parameter optimization. Second, the neural representation can be built without generating and storing all candidate regressors, leading to significantly reduced memory usage and computational complexity. Computational complexity analysis and simulation results confirm the effectiveness of the approach.
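The abstract describes the algorithm only at a high level. The sketch below is a minimal, hypothetical illustration of a forward RBF construction with continuous parameter tuning — not the authors' CFA. The function names, the Gaussian basis and the width-optimisation rule are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def gaussian_rbf(X, center, width):
    """Gaussian basis function evaluated at every row of X."""
    return np.exp(-np.sum((X - center) ** 2, axis=1) / (2.0 * width ** 2))

def forward_rbf_fit(X, y, n_terms=5):
    """Greedy forward construction of an RBF model (hypothetical sketch).

    At each step every training point is tried as a candidate centre,
    its width is tuned by a continuous 1-D optimisation, and the
    candidate giving the largest residual reduction is added. Output
    weights are refitted by least squares after each addition, so no
    full candidate-regressor matrix is ever generated or stored.
    """
    y = np.asarray(y, dtype=float)
    residual = y.copy()
    centers, widths = [], []
    Phi = np.empty((len(X), 0))
    coefs = np.empty(0)
    for _ in range(n_terms):
        best = None
        for c in X:
            # Continuous width optimisation for this candidate centre
            # (log-parameterised to keep the width positive).
            def neg_gain(log_w, c=c):
                phi = gaussian_rbf(X, c, np.exp(log_w))
                denom = phi @ phi
                return 0.0 if denom < 1e-12 else -(phi @ residual) ** 2 / denom
            res = minimize_scalar(neg_gain, bounds=(-4.0, 2.0), method="bounded")
            if best is None or res.fun < best[0]:
                best = (res.fun, c, np.exp(res.x))
        _, c, w = best
        centers.append(c)
        widths.append(w)
        Phi = np.column_stack([Phi, gaussian_rbf(X, c, w)])
        # Refit all output weights, then update the residual.
        coefs, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        residual = y - Phi @ coefs
    return np.array(centers), np.array(widths), coefs

# Toy usage: fit a 1-D sine curve with four RBF terms.
X = np.linspace(-3.0, 3.0, 50).reshape(-1, 1)
y = np.sin(X).ravel()
centers, widths, coefs = forward_rbf_fit(X, y, n_terms=4)
```

The continuous width search at each step stands in for the "continuous parameter optimization" advantage the abstract mentions, and refitting the weights incrementally avoids materialising all candidate regressors at once.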