939 results for Dynamic Threshold Algorithm
Abstract:
A decision theory framework can be a powerful technique to derive optimal management decisions for endangered species. We built a spatially realistic stochastic metapopulation model for the Mount Lofty Ranges Southern Emu-wren (Stipiturus malachurus intermedius), a critically endangered Australian bird. Using discrete-time Markov chains to describe the dynamics of a metapopulation and stochastic dynamic programming (SDP) to find optimal solutions, we evaluated the following different management decisions: enlarging existing patches, linking patches via corridors, and creating a new patch. This is the first application of SDP to optimal landscape reconstruction and one of the few times that landscape reconstruction dynamics have been integrated with population dynamics. SDP is a powerful tool that has advantages over standard Monte Carlo simulation methods because it can give the exact optimal strategy for every landscape configuration (combination of patch areas and presence of corridors) and pattern of metapopulation occupancy, as well as a trajectory of strategies. It is useful when a sequence of management actions can be performed over a given time horizon, as is the case for many endangered species recovery programs, where only fixed amounts of resources are available in each time step. However, it is generally limited by computational constraints to rather small networks of patches. The model shows that optimal metapopulation management decisions depend greatly on the current state of the metapopulation, and there is no strategy that is universally the best. The extinction probability over 30 yr for the optimal state-dependent management actions is 50-80% better than no management, whereas the best fixed state-independent sets of strategies are only 30% better than no management. This highlights the advantages of using a decision theory tool to investigate conservation strategies for metapopulations.
It is clear from these results that the sequence of management actions is critical, and this can only be effectively derived from stochastic dynamic programming. The model illustrates the underlying difficulty in determining simple rules of thumb for the sequence of management actions for a metapopulation. This use of a decision theory framework extends the capacity of population viability analysis (PVA) to manage threatened species.
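The backward-induction core of stochastic dynamic programming can be illustrated with a toy model. The sketch below is not the paper's model: the two-patch state space, the action names, and all transition probabilities are invented for illustration; it only shows how an optimal state-dependent action is computed for every occupancy state and time step.

```python
import itertools

# Toy illustration with invented numbers (not the paper's model):
# two patches, state = tuple of patch occupancies (0 = empty, 1 = occupied).
states = list(itertools.product([0, 1], repeat=2))
actions = ["do_nothing", "enlarge", "corridor"]

def transition(state, action):
    """Assumed occupancy dynamics: 'enlarge' raises patch persistence,
    'corridor' raises colonisation.  Total extinction is absorbing."""
    if not any(state):
        return {(0, 0): 1.0}
    p = 0.8 if action == "enlarge" else 0.6    # per-patch persistence
    c = 0.5 if action == "corridor" else 0.3   # colonisation probability
    dist = {}
    for nxt in states:
        prob = 1.0
        for occ, n in zip(state, nxt):
            keep = p if occ else c             # P(patch occupied next step)
            prob *= keep if n else (1.0 - keep)
        dist[nxt] = prob
    return dist

def solve_sdp(horizon):
    """Backward induction: for every time step and occupancy state, pick
    the action maximising the end-of-horizon non-extinction probability."""
    value = {s: float(any(s)) for s in states}  # terminal reward
    policy = {}
    for t in reversed(range(horizon)):
        new_value = {}
        for s in states:
            best_val, best_act = max(
                (sum(pr * value[n] for n, pr in transition(s, a).items()), a)
                for a in actions)
            new_value[s] = best_val
            policy[(t, s)] = best_act
        value = new_value
    return value, policy

value, policy = solve_sdp(horizon=30)   # optimal action for every (t, state)
```

As the abstract stresses, the optimal action is state-dependent: the policy table can prescribe different actions for the same year depending on which patches are currently occupied.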
Abstract:
Study design: Single-blind, placebo control, randomized, crossover, experimental study with repeated measures. Objective: To determine the initial effects of a taping technique on grip strength and pain in individuals with lateral epicondylalgia. Background: Taping techniques are advocated for chronic musculoskeletal conditions such as lateral epicondylalgia, a prevalent disorder with significant impact on the individual and community. Little evidence exists supporting the effects of taping techniques on musculoskeletal pain. Methods and Measures: Sixteen participants (mean age +/- SD, 45.8 +/- 10.2 years) with chronic lateral epicondylalgia (mean duration +/- SD, 13.1 +/- 9.9 months) participated in a placebo control study of an elbow taping technique. Outcome measures were pain-free grip strength and pressure pain threshold taken before, immediately after, and 30 minutes after application of tape. Results: The taping technique significantly improved pain-free grip strength by 24% from baseline (P = .028). The treatment effect was greater than that for placebo and control conditions. Changes in pressure pain threshold (19%), although positive, were not statistically significant. Conclusion: This preliminary study demonstrated an initial ameliorative effect of a taping technique for lateral epicondylalgia and suggests that it should be considered as an adjunct in the management of this condition.
Abstract:
Measurement of exchange of substances between blood and tissue has been a long-standing challenge to physiologists, and considerable theoretical and experimental accomplishments were achieved before the development of positron emission tomography (PET). Today, when modeling data from modern PET scanners, little use is made of earlier microvascular research in the compartmental models that have become the standard by which the vast majority of dynamic PET data are analysed. However, modern PET scanners provide data with a sufficient temporal resolution and good counting statistics to allow estimation of parameters in models with more physiological realism. We explore the standard compartmental model and find that incorporation of blood flow leads to paradoxes, such as kinetic rate constants being time-dependent, and tracers being cleared from a capillary faster than they can be supplied by blood flow. The inability of the standard model to incorporate blood flow consequently raises a need for models that include more physiology, and we develop microvascular models which remove the inconsistencies. The microvascular models can be regarded as a revision of the input function. Whereas the standard model uses the organ inlet concentration as the concentration throughout the vascular compartment, we consider models that make use of spatial averaging of the concentrations in the capillary volume, which is what the PET scanner actually registers. The microvascular models are developed for both single- and multi-capillary systems and include effects of non-exchanging vessels. They are suitable for analysing dynamic PET data from any capillary bed using either intravascular or diffusible tracers, in terms of physiological parameters which include regional blood flow. (C) 2003 Elsevier Ltd. All rights reserved.
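The "standard compartmental model" the abstract critiques can be illustrated with its one-tissue variant, in which the tissue concentration is driven directly by the organ inlet concentration. A minimal forward-Euler sketch, with made-up values for the input function and the rate constants K1 and k2:

```python
import math

def one_tissue_model(Ca, K1, k2, dt, n_steps):
    """Standard one-tissue compartment model
        dCt/dt = K1 * Ca(t) - k2 * Ct(t)
    integrated with a simple forward-Euler scheme."""
    Ct = 0.0
    curve = [Ct]
    for i in range(n_steps):
        t = i * dt
        Ct += dt * (K1 * Ca(t) - k2 * Ct)
        curve.append(Ct)
    return curve

# Made-up input function (a decaying bolus) and rate constants.
Ca = lambda t: math.exp(-0.5 * t)
curve = one_tissue_model(Ca, K1=0.3, k2=0.1, dt=0.01, n_steps=2000)
```

Note how Ca(t) is applied as if it held throughout the vascular compartment; the microvascular models of the paper replace this with a spatial average over the capillary volume.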
Abstract:
Water wetting is a crucial issue in carbon dioxide (CO2) corrosion of multiphase flow pipelines made from mild steel. This study demonstrates the use of a novel benchtop apparatus, a horizontal rotating cylinder, to study the effect of water wetting on CO2 corrosion of mild steel in two-phase flow. The setup is similar to a standard rotating cylinder except for its horizontal orientation and the presence of two phases, typically water and oil. The apparatus has been tested by using mass-transfer measurements and CO2 corrosion measurements in single-phase water flow. CO2 corrosion measurements were subsequently performed using a water/hexane mixture with water cuts varying between 5% and 50%. While the metal surface was primarily hydrophilic under stagnant conditions, a variety of dynamic water wetting situations was encountered as the water cut and fluid velocity were altered. Threshold velocities were identified at various water cuts when the surface became oil-wet and corrosion stopped.
Abstract:
The Lanczos algorithm is appreciated in many situations due to its speed and economy of storage. However, the advantage that the Lanczos basis vectors need not be kept is lost when the algorithm is used to compute the action of a matrix function on a vector. Either the basis vectors need to be kept, or the Lanczos process needs to be applied twice. In this study we describe an augmented Lanczos algorithm to compute a dot product relative to a function of a large sparse symmetric matrix, without keeping the basis vectors.
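To make the setting concrete, the sketch below shows the classical special case v^T f(A) v with f(x) = 1/x, which already needs no stored basis vectors: k Lanczos steps yield a small tridiagonal matrix T, and the quadratic form is approximated by ||v||^2 (T^{-1})_{11}. This is only the symmetric quadratic-form case, not the paper's augmented algorithm for general dot products.

```python
def lanczos_quadform_inv(matvec, v, k):
    """Estimate v^T A^-1 v for symmetric positive definite A from k
    Lanczos steps: v^T A^-1 v ~ ||v||^2 * (T^-1)_{11}.  Only the two
    most recent basis vectors are kept at any time."""
    n = len(v)
    beta0 = sum(x * x for x in v) ** 0.5
    q = [x / beta0 for x in v]
    q_prev = [0.0] * n
    alphas, betas = [], []
    for j in range(k):
        w = matvec(q)
        alpha = sum(wi * qi for wi, qi in zip(w, q))
        alphas.append(alpha)
        prev_beta = betas[-1] if betas else 0.0
        w = [wi - alpha * qi - prev_beta * pi
             for wi, qi, pi in zip(w, q, q_prev)]
        beta = sum(x * x for x in w) ** 0.5
        if j == k - 1 or beta < 1e-12:
            break
        betas.append(beta)
        q_prev, q = q, [x / beta for x in w]
    # (T^-1)_{11} via a symmetric tridiagonal solve of T y = e1.
    m = len(alphas)
    d, c = alphas[:], betas[:]
    rhs = [1.0] + [0.0] * (m - 1)
    for i in range(1, m):             # forward elimination
        f = c[i - 1] / d[i - 1]
        d[i] -= f * c[i - 1]
        rhs[i] -= f * rhs[i - 1]
    y = [0.0] * m
    y[-1] = rhs[-1] / d[-1]
    for i in range(m - 2, -1, -1):    # back substitution
        y[i] = (rhs[i] - c[i] * y[i + 1]) / d[i]
    return beta0 * beta0 * y[0]

# Diagonal test matrix: with k equal to the dimension the estimate is
# exact up to roundoff (here 1 + 1/2 + 1/3 + 1/4 = 25/12).
A_diag = [1.0, 2.0, 3.0, 4.0]
est = lanczos_quadform_inv(lambda q: [a * x for a, x in zip(A_diag, q)],
                           [1.0, 1.0, 1.0, 1.0], k=4)
```

The paper's contribution concerns the harder bilinear case u^T f(A) v, where the naive approach would require either storing all basis vectors or running the Lanczos process twice.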
Abstract:
Subcycling, or the use of different timesteps at different nodes, can be an effective way of improving the computational efficiency of explicit transient dynamic structural solutions. The method that has been most widely adopted uses a nodal partition, extending the central difference method, in which small timestep updates are performed interpolating on the displacement at neighbouring large timestep nodes. This approach leads to narrow bands of unstable timesteps, or statistical stability. It can also be in error due to lack of momentum conservation on the timestep interface. The author has previously proposed energy conserving algorithms that avoid the first problem of statistical stability. However, these sacrifice accuracy to achieve stability. An approach to conserve momentum on an element interface by adding partial velocities is considered here. Applied to extend the central difference method, this approach is simple and has accuracy advantages. The method can be programmed by summing impulses of internal forces, evaluated using local element timesteps, in order to predict a velocity change at a node. However, it is still only statistically stable, so an adaptive timestep size is needed to monitor accuracy and to be adjusted if necessary. By replacing the central difference method with the explicit generalized alpha method, it is possible to gain stability by dissipating the high frequency response that leads to stability problems. However, coding the algorithm is less elegant, as the response depends on previous partial accelerations. Extension to implicit integration is shown to be impractical due to the neglect of remote effects of internal forces acting across a timestep interface. (C) 2002 Elsevier Science B.V. All rights reserved.
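For reference, the central difference method that these subcycling schemes extend can be sketched for a single degree of freedom; the multi-timestep nodal partition and interface treatment themselves are not reproduced here.

```python
def central_difference(m, k, u0, v0, dt, n_steps):
    """Explicit central-difference time stepping for the undamped
    single-degree-of-freedom system m*u'' + k*u = 0."""
    u = u0
    u_prev = u0 - dt * v0 + 0.5 * dt * dt * (-k * u0 / m)  # start-up step
    history = [u]
    for _ in range(n_steps):
        u_next = 2.0 * u - u_prev + dt * dt * (-k * u / m)
        u_prev, u = u, u_next
        history.append(u)
    return history

# Unit oscillator: the exact solution is u(t) = cos(t).
hist = central_difference(m=1.0, k=1.0, u0=1.0, v0=0.0, dt=0.01, n_steps=628)
```

Subcycling replaces the single global dt with node-local timesteps; the stability and momentum-conservation issues the abstract discusses arise at the interface between the small- and large-timestep regions.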
Abstract:
This article presents Monte Carlo techniques for estimating network reliability. For highly reliable networks, techniques based on graph evolution models provide very good performance. However, they are known to have significant simulation cost. An existing hybrid scheme (based on partitioning the time space) is available to speed up the simulations; however, there are difficulties with optimizing the important parameter associated with this scheme. To overcome these difficulties, a new hybrid scheme (based on partitioning the edge set) is proposed in this article. The proposed scheme shows orders of magnitude improvement of performance over the existing techniques in certain classes of network. It also provides reliability bounds with little overhead.
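As a baseline for the schemes discussed above, crude Monte Carlo estimation of all-terminal network reliability can be sketched as follows. The graph-evolution and hybrid schemes of the article replace this naive sampling, which performs poorly for highly reliable networks because failures are rarely observed.

```python
import random

def connected(n, edges):
    """Union-find check that nodes 0..n-1 form a single component."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    return len({find(i) for i in range(n)}) == 1

def crude_mc_reliability(n, edges, p, samples, seed=0):
    """Crude Monte Carlo estimate of all-terminal reliability: each edge
    works independently with probability p; count connected states."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        up = [e for e in edges if rng.random() < p]
        hits += connected(n, up)
    return hits / samples

# Triangle network: exact reliability is p^3 + 3*p^2*(1-p) = 0.972 at p=0.9.
triangle = [(0, 1), (1, 2), (0, 2)]
est = crude_mc_reliability(3, triangle, p=0.9, samples=20000)
```

For a network with reliability 0.9999, this estimator would need millions of samples to observe enough failures, which is the motivation for the variance-reduction schemes the article studies.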
Abstract:
A combined Genetic Algorithm and Method of Moments design method is presented for the design of unusual near-field antennas for use in Magnetic Resonance Imaging systems. The method is successfully applied to the design of an asymmetric coil structure for use at 190 MHz and demonstrates excellent radiofrequency field homogeneity.
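The Genetic Algorithm half of such a combined method can be sketched generically. The code below is a minimal real-coded GA with invented operators and parameters; in the actual design problem the fitness would come from a Method of Moments field computation, which is stood in for here by a trivial objective.

```python
import random

def genetic_algorithm(fitness, n_genes, pop_size=40, generations=60,
                      mutation_rate=0.1, seed=1):
    """Minimal real-coded GA: tournament selection, one-point crossover,
    Gaussian mutation; maximises `fitness`."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1.0, 1.0) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_genes) if n_genes > 1 else 0
            child = p1[:cut] + p2[cut:]          # one-point crossover
            child = [g + rng.gauss(0.0, 0.1) if rng.random() < mutation_rate
                     else g for g in child]      # Gaussian mutation
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# Stand-in objective: the real fitness would be a Method of Moments
# field-homogeneity score; here we simply maximise -(sum of squares).
best = genetic_algorithm(lambda g: -sum(x * x for x in g), n_genes=3)
```

In the antenna application, each gene would encode a coil geometry parameter, and each fitness evaluation would require a full electromagnetic simulation, which is why GA population sizes are kept modest.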
Abstract:
This paper conducts a dynamic stability analysis of symmetrically laminated FGM rectangular plates with general out-of-plane supporting conditions, subjected to a uniaxial periodic in-plane load and undergoing uniform temperature change. Theoretical formulations are based on Reddy's third-order shear deformation plate theory, and account for the temperature dependence of material properties. A semi-analytical Galerkin-differential quadrature approach is employed to convert the governing equations into a linear system of Mathieu-Hill equations from which the boundary points on the unstable regions are determined by Bolotin's method. Free vibration and bifurcation buckling are also discussed as subset problems. Numerical results are presented in both dimensionless tabular and graphical forms for laminated plates with FGM layers made of silicon nitride and stainless steel. The influences of various parameters such as material composition, layer thickness ratio, temperature change, static load level, boundary constraints on the dynamic stability, buckling and vibration frequencies are examined in detail through parametric studies.
Abstract:
In Part 1 of this paper a methodology for back-to-back testing of simulation software was described. Residuals with error-dependent geometric properties were generated. A set of potential coding errors was enumerated, along with a corresponding set of feature matrices, which describe the geometric properties imposed on the residuals by each of the errors. In this part of the paper, an algorithm is developed to isolate the coding errors present by analysing the residuals. A set of errors is isolated when the subspace spanned by their combined feature matrices corresponds to that of the residuals. Individual feature matrices are compared to the residuals and classified as 'definite', 'possible' or 'impossible'. The status of 'possible' errors is resolved using a dynamic subset testing algorithm. To demonstrate and validate the testing methodology presented in Part 1 and the isolation algorithm presented in Part 2, a case study is presented using a model for biological wastewater treatment. Both single and simultaneous errors that are deliberately introduced into the simulation code are correctly detected and isolated. Copyright (C) 2003 John Wiley & Sons, Ltd.
Abstract:
Borderline hypertension (BH) has been associated with an exaggerated blood pressure (BP) response during laboratory stressors. However, the incidence of target organ damage in this condition and its relation to BP hyperreactivity is an unsettled issue. Thus, we assessed the Doppler echocardiographic profile of a group of BH men (N = 36) according to office BP measurements with exaggerated BP in the cycloergometric test. A group of normotensive men (NT, N = 36) with a normal BP response during the cycloergometric test was used as control. To assess vascular function and reactivity, all subjects were submitted to the cold pressor test. Before Doppler echocardiography, the BP profile of all subjects was evaluated by 24-h ambulatory BP monitoring. All subjects from the NT group presented normal monitored levels of BP. In contrast, 19 subjects from the original BH group presented normal monitored BP levels and 17 presented elevated monitored BP levels. In the NT group all Doppler echocardiographic indexes were normal. All subjects from the original BH group presented normal left ventricular mass and geometrical pattern. However, in the subjects with elevated monitored BP levels, fractional shortening was greater, isovolumetric relaxation time longer, and early to late flow velocity ratio was reduced in relation to subjects from the original BH group with normal monitored BP levels (P<0.05). These subjects also presented an exaggerated BP response during the cold pressor test. These results support the notion of an integrated pattern of cardiac and vascular adaptation during the development of hypertension.
Abstract:
Using autonomous robots capable of planning their own paths is a challenge that attracts many researchers in the field of robot navigation. In this context, this work aims to implement a hybrid PSO algorithm for path planning in static environments for holonomic and non-holonomic vehicles. The proposed algorithm has two phases: the first uses the A* algorithm to find a feasible initial trajectory, which the PSO algorithm then optimises in the second phase. Finally, a post-planning phase can be applied to the path in order to adapt it to the kinematic constraints of the non-holonomic vehicle. The Ackerman model was considered for the experiments. The CARMEN (Carnegie Mellon Robot Navigation Toolkit) robot simulation environment was used to carry out all computational experiments, considering five artificially generated map instances with obstacles. The performance of the developed algorithm, A*PSO, was compared with the A*, conventional PSO and Hybrid State A* algorithms. Analysis of the results indicated that the developed hybrid A*PSO algorithm outperformed conventional PSO in solution quality. Although it found better solutions than A* in only 40% of the instances, A*PSO produced trajectories with fewer turning points. Examining the results obtained for the non-holonomic model, A*PSO obtained longer, but smoother and safer, paths.
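The first phase of the hybrid algorithm, an A* search for a feasible initial trajectory, can be sketched on a small occupancy grid. The grid, unit step costs, and Manhattan heuristic below are illustrative choices, and the PSO refinement phase is omitted.

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected occupancy grid (1 = obstacle, 0 = free).
    Returns the cell path from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan
    open_heap = [(h(start), 0, start)]
    came_from, g = {}, {start: 0}
    while open_heap:
        _, cost, cur = heapq.heappop(open_heap)
        if cur == goal:                       # reconstruct the path
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        if cost > g[cur]:                     # stale heap entry
            continue
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                if cost + 1 < g.get(nxt, float("inf")):
                    g[nxt] = cost + 1
                    came_from[nxt] = cur
                    heapq.heappush(open_heap,
                                   (cost + 1 + h(nxt), cost + 1, nxt))
    return None

# Illustrative 3x3 map with a wall blocking the direct route.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = a_star(grid, (0, 0), (2, 0))
```

In the paper's pipeline, a path such as this would seed the PSO swarm, and a post-planning step would then smooth it to satisfy the Ackerman kinematic constraints.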
Abstract:
The Dynamic Gait Index (DGI) is a test that evaluates human balance and gait. OBJECTIVES: The objectives of this study were to culturally adapt the DGI to Portuguese and to evaluate its reliability. MATERIAL AND METHOD: The method of Guillemin et al. (1993) was followed for the cultural adaptation of the instrument. This is a prospective study in which 46 patients were evaluated during the cultural adaptation phase; items with 20% or more incomprehension were reformulated and reapplied. The final Portuguese version of the DGI was applied to 35 elderly subjects to examine intra- and inter-observer reliability. Spearman's coefficient was used to correlate the inter- and intra-observer scores, and the Wilcoxon test to compare the ratings. Internal consistency was analysed with Cronbach's alpha coefficient. RESULTS: There were statistically significant correlations between the scores obtained in the inter- and intra-observer evaluations for all items (p<0.001), classified as good to very strong (ranging from r=0.655 to r=0.951). The DGI showed high internal consistency among its items in the inter- and intra-observer evaluations (range α=0.820 to α=0.894). CONCLUSION: The DGI was culturally adapted to Brazilian Portuguese and proved to be a reliable instrument.
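Cronbach's alpha, used above to assess internal consistency, is straightforward to compute from item scores. A small sketch with made-up data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha: `items` is a list of per-item score lists,
    each covering the same subjects in the same order."""
    k = len(items)
    n = len(items[0])
    def var(xs):                       # sample variance
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1.0 - sum(var(it) for it in items) / var(totals))

# Made-up scores: 3 items rated for 4 subjects.
alpha = cronbach_alpha([[1, 2, 3, 2], [2, 3, 4, 3], [1, 3, 3, 2]])
```

Values in the 0.82-0.89 range reported by the study indicate that the DGI items vary together strongly across subjects.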
Abstract:
Many organisations need to extract useful information from huge amounts of movement data. One example is found in maritime transportation, where the automated identification of a diverse range of traffic routes is a key management issue for improving the maintenance of ports and ocean routes, and accelerating ship traffic. This paper addresses, as a first stage, the research challenge of developing an approach for the automated identification of traffic routes based on clustering motion vectors rather than reconstructed trajectories. The immediate benefit of the proposed approach is to avoid the reconstruction of trajectories in terms of the geometric shape of the path, the position in space, the life span, and changes of speed, direction and other attributes over time. For clustering the moving objects, an adapted version of the Shared Nearest Neighbour algorithm is used. The motion vectors, each with a position and a direction, are analysed in order to identify clusters of vectors moving in the same direction. These clusters represent traffic routes, and the preliminary results are promising for the automated identification of traffic routes with different shapes and densities, as well as for handling noisy data.
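A much-simplified reading of the shared-nearest-neighbour idea applied to motion vectors can be sketched as follows. The dissimilarity measure, the parameters k and min_shared, and the cluster-forming rule are illustrative stand-ins, not the adapted algorithm of the paper.

```python
import math

def snn_clusters(vectors, k=3, min_shared=2):
    """Simplified shared-nearest-neighbour clustering of motion vectors.
    Each vector is (x, y, dx, dy); dissimilarity mixes positional distance
    and heading difference.  Two vectors are linked when their k-nearest-
    neighbour sets share at least `min_shared` members; clusters are the
    connected components of these links."""
    n = len(vectors)
    def dissim(a, b):
        d_pos = math.hypot(a[0] - b[0], a[1] - b[1])
        d_dir = abs(math.atan2(a[3], a[2]) - math.atan2(b[3], b[2]))
        return d_pos + min(d_dir, 2.0 * math.pi - d_dir)
    knn = [set(sorted((j for j in range(n) if j != i),
                      key=lambda j: dissim(vectors[i], vectors[j]))[:k])
           for i in range(n)]
    parent = list(range(n))             # union-find over links
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i in range(n):
        for j in range(i + 1, n):
            if len(knn[i] & knn[j]) >= min_shared:
                parent[find(i)] = find(j)
    return [find(i) for i in range(n)]
```

Because cluster membership depends on shared neighbourhoods rather than absolute distances, SNN-style clustering can separate routes of very different densities, which is one reason the paper adapts it for traffic-route identification.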