737 results for Sparse Incremental EM Algorithm


Relevance:

20.00%

Publisher:

Abstract:

The rating of perceived exertion (RPE) is determined non-invasively and is used together with the blood lactate response as an intensity indicator during incremental tests. In the field, and especially in swimming, blood sampling is difficult, so alternative protocols are used to estimate the anaerobic threshold. The objectives of this study were therefore: to prescribe an incremental test based on RPE (Borg 6-20) in order to estimate the metabolic thresholds determined by lactate-based methods [bi-segmented fit (V LL), fixed concentration of 3.5 mM (V3.5mM) and maximum distance (V Dmax)]; to relate the RPE assigned at each stage to heart rate (HR) and to mechanical swimming parameters [stroke rate (SR) and stroke length (SL)]; to analyse whether the 6-20 scale keeps the velocity increments of the test regular; and to correlate the metabolic thresholds with critical velocity (CV). To this end, 12 swimmers (16.4 ± 1.3 years) performed two maximal efforts (200 and 400 m), whose data were used to determine CV, 400 m velocity (V400m) and critical stroke rate (CSR), and an incremental test whose stage intensities were based on RPE values of 9, 11, 13, 15 and 17, respectively; at every stage HR, blood lactate and the times of four stroke cycles and of the 20 m (central part of the pool) and 50 m distances were monitored. The stage velocities, SR, SL, V LL, V3.5mM and V Dmax were then calculated. ANOVA and Pearson correlation were used for the analysis. No differences were found between CV, V Dmax and V LL, but V3.5mM was lower than the other velocities (P < 0.05). Significant correlations (P < 0.05) were observed between CV and V400m, V Dmax and V3.5mM; between V400m and V3.5mM and V Dmax; between V Dmax and V LL; and, in the incremental test, between RPE and velocity, [Lac], HR, SR and SL (P < 0.05).
We conclude that RPE is a reliable tool for controlling stage velocity during incremental swimming tests.

Relevance:

20.00%

Publisher:

Abstract:

This study compared blood glucose, heart rate at rest and during exercise, and body composition between hypertensive and normotensive subjects. The sample comprised 32 young males with a mean age of 22.6 years. Blood pressure was first measured in order to divide the subjects into two groups, hypertensive and normotensive. Fasting blood glucose, bioelectrical impedance, anthropometry, and heart rate at rest, during a maximal exercise test and during recovery were then measured. Statistical analysis consisted of Student's t-test and two-way repeated-measures analysis of variance between the groups. The significance level was set at p = 0.05. The data showed that hypertensive individuals have higher metabolic indices and hemodynamic values than normotensive individuals, these being indicators of cardiovascular risk.

Relevance:

20.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

20.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

20.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

20.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

20.00%

Publisher:

Abstract:

The scheme is based on Ami Harten's ideas (Harten, 1994), the main tools coming from wavelet theory, in the framework of multiresolution analysis for cell averages. But instead of evolving cell averages on the finest uniform level, we propose to evolve only the cell averages on the grid determined by the significant wavelet coefficients. Typically, there are few cells in each time step: big cells in smooth regions and smaller ones close to irregularities of the solution. For the numerical flux we use a simple uniform central finite difference scheme, adapted to the size of each cell; if any required neighbouring cell average is not present, it is interpolated from coarser scales. In the finest part of the grids we switch to an ENO scheme. To show the feasibility and efficiency of the method, it is applied to a system arising in polymer flooding of an oil reservoir. In terms of CPU time and memory requirements, it outperforms Harten's multiresolution algorithm.

The proposed method applies to systems of conservation laws in 1D,

$$\partial_t u(x,t) + \partial_x f(u(x,t)) = 0, \qquad u(x,t) \in \mathbb{R}^m. \qquad (1)$$

In the spirit of finite volume methods, we consider the explicit scheme

$$v_\mu^{n+1} = v_\mu^n - \frac{\Delta t}{h_\mu}\left(\bar f_\mu - \bar f_{\mu^-}\right) = [D v^n]_\mu, \qquad (2)$$

where $\mu$ is a point of an irregular grid $\Gamma$, $\mu^-$ is the left neighbour of $\mu$ in $\Gamma$, $v_\mu^n \approx \frac{1}{\mu-\mu^-}\int_{\mu^-}^{\mu} u(x,t_n)\,dx$ are approximate cell averages of the solution, $\bar f_\mu = \bar f_\mu(v^n)$ are the numerical fluxes, and $D$ is the numerical evolution operator of the scheme.

Depending on the definition of $\bar f_\mu$, several schemes of this type have been proposed and successfully applied (LeVeque, 1990); Godunov, Lax-Wendroff, and ENO are some of the popular names. The Godunov scheme resolves shocks well, but its first-order accuracy is poor in smooth regions. Lax-Wendroff is second order, but produces dangerous oscillations close to shocks. ENO schemes are good alternatives, with high order and without serious oscillations, but the price is a high computational cost.

Ami Harten proposed in (Harten, 1994) a simple strategy to save expensive ENO flux calculations. The basic tools come from multiresolution analysis for cell averages on uniform grids, and the principle is that wavelet coefficients can be used to characterize local smoothness. Typically, only a few wavelet coefficients are significant. At the finest level, they indicate discontinuity points, where ENO numerical fluxes are computed exactly; elsewhere, cheaper fluxes can be safely used, or simply interpolated from coarser scales. Different applications of this principle have been explored by several authors, see for example (G-Muller and Muller, 1998).

Our scheme also uses Ami Harten's ideas, but instead of evolving the cell averages on the finest uniform level, we propose to evolve the cell averages on sparse grids associated with the significant wavelet coefficients. This means that the total number of cells is small, with big cells in smooth regions and smaller ones close to irregularities. This task requires improved new tools, which are described next.
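As a minimal concrete instance of scheme (2), the sketch below advances cell averages of Burgers' equation on a uniform periodic grid using a Lax-Friedrichs numerical flux. It only illustrates the base finite volume update, not the adaptive multiresolution scheme itself; the function names, the test flux and the grid parameters are our own choices.

```python
import math

def lax_friedrichs_step(v, dt, h, f):
    """One explicit step of scheme (2) on a uniform periodic grid.
    fbar[i] is the Lax-Friedrichs flux at the right interface of cell i,
    a simple stand-in for the central/ENO fluxes discussed above."""
    n = len(v)
    fbar = [0.5 * (f(v[i]) + f(v[(i + 1) % n]))
            - 0.5 * (h / dt) * (v[(i + 1) % n] - v[i]) for i in range(n)]
    # v_mu^{n+1} = v_mu^n - dt/h * (fbar_mu - fbar_{mu^-})
    return [v[i] - dt / h * (fbar[i] - fbar[i - 1]) for i in range(n)]

# toy driver: Burgers' flux f(u) = u^2 / 2, CFL number dt/h * max|u| = 0.4
h, dt = 1.0 / 100, 0.004
v = [math.sin(2 * math.pi * i / 100) for i in range(100)]
for _ in range(50):
    v = lax_friedrichs_step(v, dt, h, lambda u: 0.5 * u * u)
```

Because the update is conservative and the flux telescopes over the periodic grid, the total mass `sum(v)` is preserved to rounding error, and under the CFL condition the scheme is monotone, so no new extrema appear.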

Relevance:

20.00%

Publisher:

Abstract:

We present a new algorithm for Reverse Monte Carlo (RMC) simulations of liquids. During the simulations, we calculate energy, excess chemical potentials, bond-angle distributions and three-body correlations. This allows us to test the quality and physical meaning of RMC-generated results and their limitations. It also indicates the possibility of exploring orientational correlations from simple scattering experiments. The new technique has been applied to bulk hard-sphere and Lennard-Jones systems and compared to standard Metropolis Monte Carlo results. (C) 1998 American Institute of Physics.
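The core of any RMC simulation is an acceptance rule that compares the misfit between model and experimental structure data before and after a trial move. A minimal sketch of that rule, assuming a standard chi-squared misfit (the function names and the misfit form are our illustration, not the authors' code):

```python
import math
import random

def chi2(g_model, g_exp, sigma):
    """Chi-squared misfit between a model pair-correlation histogram
    and an experimental one with uncertainty sigma."""
    return sum((m - e) ** 2 for m, e in zip(g_model, g_exp)) / sigma ** 2

def rmc_accept(chi2_old, chi2_new, rng=random):
    """RMC acceptance rule: a trial move that lowers the misfit is
    always kept; otherwise it is kept with probability
    exp(-(chi2_new - chi2_old) / 2), Metropolis-style."""
    if chi2_new <= chi2_old:
        return True
    return rng.random() < math.exp(-(chi2_new - chi2_old) / 2.0)
```

Replacing the energy difference of Metropolis Monte Carlo by a data-misfit difference is exactly what makes RMC fit experimental structure rather than a model Hamiltonian.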

Relevance:

20.00%

Publisher:

Abstract:

This work summarizes the HdHr group of Hermitian integration algorithms for dynamic structural analysis applications. It proposes a procedure for their use when nonlinear terms are present in the equilibrium equation. The simple pendulum problem is solved as a first example and the numerical results are discussed. Directions to be pursued in future research are also mentioned. Copyright (C) 2009 H.M. Bottura and A. C. Rigitano.
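The abstract does not give the HdHr formulas, so as context the sketch below simply integrates the nonlinear pendulum equation, theta'' + (g/L) sin(theta) = 0, with a classical fourth-order Runge-Kutta step, the kind of reference solution such time-integration schemes are usually compared against (all names and parameters are ours):

```python
import math

def pendulum_rk4(theta0, omega0, g_over_l, dt, steps):
    """Integrate theta'' + (g/L) sin(theta) = 0 with classical RK4,
    starting from angle theta0 and angular velocity omega0."""
    def deriv(theta, omega):
        return omega, -g_over_l * math.sin(theta)
    theta, omega = theta0, omega0
    for _ in range(steps):
        k1t, k1w = deriv(theta, omega)
        k2t, k2w = deriv(theta + 0.5 * dt * k1t, omega + 0.5 * dt * k1w)
        k3t, k3w = deriv(theta + 0.5 * dt * k2t, omega + 0.5 * dt * k2w)
        k4t, k4w = deriv(theta + dt * k3t, omega + dt * k3w)
        theta += dt * (k1t + 2 * k2t + 2 * k3t + k4t) / 6
        omega += dt * (k1w + 2 * k2w + 2 * k3w + k4w) / 6
    return theta, omega

# release from rest at 0.5 rad with g/L = 1, integrate to t = 10
theta, omega = pendulum_rk4(0.5, 0.0, 1.0, 0.01, 1000)
```

A useful correctness check for any integrator on this problem is the energy E = omega^2/2 - (g/L) cos(theta), which should stay essentially constant.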

Relevance:

20.00%

Publisher:

Abstract:

The Capacitated Centered Clustering Problem (CCCP) consists of defining a set of p groups with minimum dissimilarity on a network with n points. Demand values are associated with each point, and each group has a demand capacity. The problem is well known to be NP-hard and has many practical applications. In this paper, the hybrid method Clustering Search (CS) is implemented to solve the CCCP. This method identifies promising regions of the search space by generating solutions with a metaheuristic, such as a Genetic Algorithm, and grouping them into clusters that are then explored further with local search heuristics. Computational results on instances available in the literature are presented to demonstrate the efficacy of CS. (C) 2010 Elsevier Ltd. All rights reserved.
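To make the capacity constraint of the CCCP concrete, here is a toy constructive heuristic: assign each point to the nearest center that still has spare capacity. This is only a hypothetical illustration of the feasibility structure, not the Clustering Search method of the paper; all names and the instance format are ours.

```python
import math

def greedy_capacitated_assign(points, demands, centers, capacity):
    """Assign each (x, y) point with its demand to the nearest center
    that can still absorb it; returns the assignment and center loads.
    Points that fit nowhere get assignment None (infeasible)."""
    load = [0.0] * len(centers)
    assign = []
    for (x, y), d in zip(points, demands):
        order = sorted(range(len(centers)),
                       key=lambda c: math.hypot(x - centers[c][0],
                                                y - centers[c][1]))
        for c in order:
            if load[c] + d <= capacity:
                load[c] += d
                assign.append(c)
                break
        else:
            assign.append(None)
    return assign, load
```

In a metaheuristic like CS, solutions of this kind would be generated, grouped, and then improved by local search such as swapping points between groups.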

Relevance:

20.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

20.00%

Publisher:

Abstract:

This paper deals with approaches for sparse matrix substitutions using vector processing. Many publications have used the W-matrix method to solve the forward/backward substitutions on vector computers. Recently, a different approach has been presented using the dependency-based substitution algorithm (DBSA). In this paper the focus is on new algorithms able to exploit the sparsity of the vectors. The efficiency is tested using linear systems from power systems with 118, 320, 725 and 1729 buses. The tests were performed on a CRAY Y-MP2E/232. The speedups for fast-forward/fast-backward substitution on the 1729-bus system are near 19 and 14 for real and complex arithmetic operations, respectively. When forward/backward substitution is employed, the speedups are about 8 and 6 for the same simulations.
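The fast-forward idea exploits sparsity of the right-hand side: in forward substitution, any column whose solution entry is zero propagates nothing and can be skipped outright. A pure-Python sketch for a unit lower-triangular factor (the column storage format and names are our assumptions, unrelated to the W-matrix or DBSA formulations):

```python
def sparse_forward_substitution(cols, b):
    """Solve L x = b for unit lower-triangular L stored by columns as
    cols[j] = [(i, L_ij), ...] with i > j, and a sparse right-hand
    side b given as {index: value}.  Columns whose solution entry is
    zero are skipped -- the 'fast-forward' shortcut for sparse b."""
    x = dict(b)
    for j in range(len(cols)):
        xj = x.get(j, 0.0)
        if xj == 0.0:
            continue  # nothing to propagate from this column
        for i, lij in cols[j]:
            x[i] = x.get(i, 0.0) - lij * xj
    return x
```

On power-system matrices the set of nonzero solution entries (the reach of b in the elimination tree) is tiny compared to n, which is where the large reported speedups come from.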

Relevance:

20.00%

Publisher:

Abstract:

This article presents a well-known interior point method (IPM) used to solve the linear programming problems that appear as sub-problems in the solution of the long-term transmission network expansion planning problem. The linear programming problem appears when the transportation model is used and the planning problem is to be solved with a constructive heuristic algorithm (CHA) or a branch-and-bound algorithm. This paper shows the application of the IPM in a CHA. A good performance of the IPM was obtained, so it can be used as a tool inside the algorithm used to solve the planning problem. Illustrative tests are shown, using electrical systems known in the specialized literature. (C) 2005 Elsevier B.V. All rights reserved.
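The abstract does not say which IPM variant is used. As an illustration only, the sketch below runs an affine-scaling interior-point iteration, one of the simplest IPM variants, on a tiny equality-constrained LP; the function, the single-constraint restriction and the toy instance are all our assumptions.

```python
def affine_scaling(c, a, b, x, iters=50, step=0.9):
    """Affine-scaling IPM iteration for  min c.x  s.t.  a.x = b, x >= 0,
    with one equality constraint and a strictly feasible start x.
    Each iteration rescales by D = diag(x), projects out the constraint,
    and moves a fraction of the way to the positivity boundary."""
    assert abs(sum(ai * xi for ai, xi in zip(a, x)) - b) < 1e-9, "infeasible start"
    for _ in range(iters):
        d2 = [xi * xi for xi in x]                              # D^2
        w = sum(ai * di * ci for ai, di, ci in zip(a, d2, c)) \
            / sum(ai * ai * di for ai, di in zip(a, d2))        # dual estimate
        r = [ci - ai * w for ci, ai in zip(c, a)]               # reduced costs
        dx = [-di * ri for di, ri in zip(d2, r)]                # search direction
        ratios = [-xi / dxi for xi, dxi in zip(x, dx) if dxi < 0]
        alpha = step * min(ratios) if ratios else 1.0           # stay interior
        x = [xi + alpha * dxi for xi, dxi in zip(x, dx)]
    return x
```

For min x1 + 2 x2 subject to x1 + x2 = 1, x >= 0, the iterates slide along the constraint toward the optimum (1, 0) while staying strictly positive, which is the defining behaviour of interior point methods.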

Relevance:

20.00%

Publisher:

Abstract:

The paper presents an extended genetic algorithm for solving the optimal transmission network expansion planning problem. Two main improvements have been introduced in the genetic algorithm: (a) an initial population obtained by conventional optimisation-based methods; (b) a mutation approach inspired by the simulated annealing technique. The proposed method is general in the sense that it does not assume any particular property of the problem being solved, such as linearity or convexity. Excellent performance is reported in the test results section of the paper for a difficult large-scale real-life problem: a substantial reduction in investment costs has been obtained with respect to previous solutions found via conventional optimisation methods and simulated annealing algorithms, and statistical comparison procedures have been employed in benchmarking the different versions of the genetic algorithm and the simulated annealing methods.
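Improvement (b) can be read as a mutation operator that keeps a random perturbation according to a simulated-annealing criterion. A hedged sketch of that reading (the function and its parameters are our interpretation, not the paper's operator):

```python
import math
import random

def sa_mutation(individual, cost, neighbour, temperature, rng=random):
    """Simulated-annealing-style mutation: propose a perturbed copy of
    the individual; keep it if it improves the cost, or with Boltzmann
    probability exp(-delta / temperature) otherwise."""
    candidate = neighbour(individual)
    delta = cost(candidate) - cost(individual)
    if delta <= 0 or rng.random() < math.exp(-delta / temperature):
        return candidate
    return individual
```

Cooling the temperature over the generations makes mutation exploratory early on and nearly greedy later, which is precisely the annealing behaviour the abstract borrows.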

Relevance:

20.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)