Abstract:
Latitudinal clines provide natural systems that may allow the effect of natural selection on the genetic variance to be determined. Ten clinal populations of Drosophila serrata collected from the eastern coast of Australia were used to examine clinal patterns in the trait mean and genetic variance of the life-history trait egg-to-adult development time. Development time significantly lengthened from tropical areas to temperate areas. The additive genetic variance for development time in each population was not associated with latitude but was associated with the population mean development time. Additive genetic variance tended to be larger in populations with more extreme development times and appeared to be consistent with allele frequency change. In contrast, the nonadditive genetic variance was not associated with the population mean but was associated with latitude. Levels of nonadditive genetic variance were greatest in the region of the cline where the gradient in the change in mean was greatest, consistent with Barton's (1999) conjecture that the generation of linkage disequilibrium may become an important component of the genetic variance in systems with a spatially varying optimum.
Abstract:
A decision theory framework can be a powerful technique to derive optimal management decisions for endangered species. We built a spatially realistic stochastic metapopulation model for the Mount Lofty Ranges Southern Emu-wren (Stipiturus malachurus intermedius), a critically endangered Australian bird. Using discrete-time Markov chains to describe the dynamics of a metapopulation and stochastic dynamic programming (SDP) to find optimal solutions, we evaluated the following management decisions: enlarging existing patches, linking patches via corridors, and creating a new patch. This is the first application of SDP to optimal landscape reconstruction and one of the few times that landscape reconstruction dynamics have been integrated with population dynamics. SDP is a powerful tool that has advantages over standard Monte Carlo simulation methods because it can give the exact optimal strategy for every landscape configuration (combination of patch areas and presence of corridors) and pattern of metapopulation occupancy, as well as a trajectory of strategies. It is useful when a sequence of management actions can be performed over a given time horizon, as is the case for many endangered species recovery programs, where only fixed amounts of resources are available in each time step. However, it is generally limited by computational constraints to rather small networks of patches. The model shows that optimal metapopulation management decisions depend greatly on the current state of the metapopulation, and there is no strategy that is universally the best. The extinction probability over 30 yr for the optimal state-dependent management actions is 50-80% better than no management, whereas the best fixed state-independent sets of strategies are only 30% better than no management. This highlights the advantages of using a decision theory tool to investigate conservation strategies for metapopulations. It is clear from these results that the sequence of management actions is critical, and this can only be effectively derived from stochastic dynamic programming. The model illustrates the underlying difficulty in determining simple rules of thumb for the sequence of management actions for a metapopulation. This use of a decision theory framework extends the capacity of population viability analysis (PVA) to manage threatened species.
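To make the SDP machinery concrete, here is a minimal backward-induction (value iteration) sketch on a made-up toy version of the problem: a three-patch metapopulation whose only candidate action is building a corridor, with hypothetical extinction and recolonisation probabilities. It illustrates the technique, not the authors' model.

```python
"""Toy finite-horizon SDP (backward induction) for metapopulation management.
All probabilities and the 3-patch setup are hypothetical, for illustration."""

T = 30  # planning horizon in years, matching the 30-yr window in the abstract

def transition(n, corridor, action):
    """Return [(prob, next_state), ...] for a crude occupancy birth-death chain."""
    corridor = corridor or action == "corridor"
    ext = 0.15 if corridor else 0.25   # chance one occupied patch winks out
    col = 0.30 if corridor else 0.15   # chance one empty patch is recolonised
    out = []
    if n > 0:
        out.append((ext, (n - 1, corridor)))
    if 0 < n < 3:
        out.append((col, (n + 1, corridor)))
    out.append((1.0 - sum(p for p, _ in out), (n, corridor)))
    return out

# Terminal reward: 1 if the metapopulation persists (any patch still occupied).
V = {(n, c): float(n > 0) for n in range(4) for c in (False, True)}
policy = {}

for t in reversed(range(T)):           # backward induction over years
    new_V = {}
    for (n, c) in V:
        actions = ["none"] if c else ["none", "corridor"]
        value, act = max(
            (sum(p * V[s] for p, s in transition(n, c, a)), a) for a in actions
        )
        new_V[(n, c)] = value
        policy[(t, (n, c))] = act
    V = new_V

print("P(persist 30 yr | 2 patches occupied, no corridor):", round(V[(2, False)], 3))
print("best year-0 action in that state:", policy[(0, (2, False))])
```

Backward induction returns an action for every (year, state) pair, which is exactly the state-dependent policy that fixed state-independent strategies cannot supply.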
Abstract:
Many large-scale stochastic systems, such as telecommunications networks, can be modelled using a continuous-time Markov chain. However, it is frequently the case that a satisfactory analysis of their time-dependent, or even equilibrium, behaviour is impossible. In this paper, we propose a new method of analysing Markovian models, whereby the existing transition structure is replaced by a more amenable one. Using rates of transition given by the equilibrium expected rates of the corresponding transitions of the original chain, we are able to approximate its behaviour. We present two formulations of the idea of expected rates. The first provides a method for analysing time-dependent behaviour, while the second provides a highly accurate means of analysing equilibrium behaviour. We shall illustrate our approach with reference to a variety of models, giving particular attention to queueing and loss networks. (C) 2003 Elsevier Ltd. All rights reserved.
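To give the flavour of the expected-rates idea (an illustrative computation, not the paper's formulation): for a small birth-death chain with a hypothetical state-dependent service rate, one can compute the equilibrium distribution from the generator and collapse the state-dependent rate into a single equilibrium-expected rate that a simpler surrogate chain could use.

```python
"""Equilibrium distribution and an 'expected rate' for a small CTMC.
Illustrative only: an M/M/1/K queue with a hypothetical state-dependent
service rate, collapsed to one equilibrium-expected rate."""
import numpy as np

lam, K = 0.8, 5                       # arrival rate and buffer size
mu = lambda n: 1.0 + 0.2 * n          # hypothetical state-dependent service rate

# Generator matrix Q of the birth-death chain on states 0..K.
Q = np.zeros((K + 1, K + 1))
for n in range(K + 1):
    if n < K:
        Q[n, n + 1] = lam
    if n > 0:
        Q[n, n - 1] = mu(n)
    Q[n, n] = -Q[n].sum()

# Equilibrium: solve pi Q = 0 subject to sum(pi) = 1 (least squares).
A = np.vstack([Q.T, np.ones(K + 1)])
b = np.zeros(K + 2)
b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]

# Equilibrium expected service rate, conditional on the server being busy.
mu_bar = sum(pi[n] * mu(n) for n in range(1, K + 1)) / pi[1:].sum()
print("pi =", np.round(pi, 4))
print("equilibrium expected service rate while busy:", round(mu_bar, 4))
```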
Abstract:
The refinement calculus is a well-established theory for deriving program code from specifications. Recent research has extended the theory to handle timing requirements, as well as functional ones, and we have developed an interactive programming tool based on these extensions. Through a number of case studies completed using the tool, this paper explains how the tool helps the programmer by supporting the many forms of variables needed in the theory. These include simple state variables as in the untimed calculus, trace variables that model the evolution of properties over time, auxiliary variables that exist only to support formal reasoning, subroutine parameters, and variables shared between parallel processes.
Abstract:
A high definition, finite difference time domain (HD-FDTD) method is presented in this paper. This new method allows the FDTD method to be efficiently applied over a very large frequency range including low frequencies, which are problematic for conventional FDTD methods. In the method, no alterations to the properties of either the source or the transmission media are required. The method is essentially frequency independent and has been verified against analytical solutions within the frequency range 50 Hz-1 GHz. As an example of the lower frequency range, the method has been applied to the problem of induced eddy currents in the human body resulting from the pulsed magnetic field gradients of an MRI system. The new method only requires approximately 0.3% of the source period to obtain an accurate solution. (C) 2003 Elsevier Science Inc. All rights reserved.
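For orientation, a textbook 1-D FDTD update loop of the conventional kind the paper builds on is sketched below; the HD-FDTD modifications themselves are not reproduced here, and the grid and source values are arbitrary.

```python
"""Textbook 1-D FDTD (Yee) loop -- the conventional scheme HD-FDTD builds on,
not the HD-FDTD method itself. Grid and source values are arbitrary."""
import numpy as np

nz, nt = 200, 400
c, dz = 3e8, 1e-3
dt = dz / (2 * c)              # half the Courant limit, for stability
eps0, mu0 = 8.854e-12, 4e-7 * np.pi

ez = np.zeros(nz)              # electric field at integer grid points
hy = np.zeros(nz - 1)          # magnetic field, staggered half a cell

for t in range(nt):
    hy += dt / (mu0 * dz) * (ez[1:] - ez[:-1])          # update H from curl E
    ez[1:-1] += dt / (eps0 * dz) * (hy[1:] - hy[:-1])   # update E from curl H
    ez[nz // 2] += np.exp(-((t - 60) / 15.0) ** 2)      # soft Gaussian source

print("peak |Ez| after", nt, "steps:", float(np.abs(ez).max()))
```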
Abstract:
Subcycling, or the use of different timesteps at different nodes, can be an effective way of improving the computational efficiency of explicit transient dynamic structural solutions. The method that has been most widely adopted uses a nodal partition, extending the central difference method, in which small timestep updates are performed interpolating on the displacement at neighbouring large timestep nodes. This approach leads to narrow bands of unstable timesteps, or statistical stability. It can also be in error due to lack of momentum conservation on the timestep interface. The author has previously proposed energy-conserving algorithms that avoid the first problem of statistical stability. However, these sacrifice accuracy to achieve stability. An approach to conserve momentum on an element interface by adding partial velocities is considered here. Applied to extend the central difference method, this approach is simple and has accuracy advantages. The method can be programmed by summing impulses of internal forces, evaluated using local element timesteps, in order to predict a velocity change at a node. However, it is still only statistically stable, so an adaptive timestep size is needed to monitor accuracy and to be adjusted if necessary. By replacing the central difference method with the explicit generalized alpha method, it is possible to gain stability by dissipating the high frequency response that leads to stability problems. However, coding the algorithm is less elegant, as the response depends on previous partial accelerations. Extension to implicit integration is shown to be impractical due to the neglect of remote effects of internal forces acting across a timestep interface. (C) 2002 Elsevier Science B.V. All rights reserved.
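The widely adopted nodal-partition scheme described above fits in a few lines: the subcycled node advances in substeps, linearly interpolating the neighbouring large-timestep node's displacement. The toy 2-DOF sketch below uses hypothetical stiffness and mass values and omits the momentum-conserving impulse summation proposed in the paper.

```python
"""Nodal-partition subcycling with the central difference method: the small-
timestep node advances in substeps, linearly interpolating the large-timestep
neighbour's displacement. Toy 2-DOF values; the paper's momentum-conserving
impulse summation is omitted."""
import numpy as np

k, m = 100.0, 1.0            # spring stiffness and nodal mass (hypothetical)
dt, nsub, nsteps = 0.01, 2, 500

u = np.array([0.1, 0.0])     # u[0]: large-dt node (grounded spring), u[1]: subcycled
v = np.zeros(2)

for _ in range(nsteps):
    u0_old = u[0]
    # Large-timestep node: forces from the ground spring and the coupling spring.
    f0 = -k * u[0] - k * (u[0] - u[1])
    v[0] += (f0 / m) * dt
    u[0] += v[0] * dt
    # Subcycled node: nsub substeps, interpolating u[0] across the big step.
    for s in range(1, nsub + 1):
        u0_interp = u0_old + (s / nsub) * (u[0] - u0_old)
        f1 = -k * (u[1] - u0_interp)
        v[1] += (f1 / m) * (dt / nsub)
        u[1] += v[1] * (dt / nsub)

print("nodal displacements after", nsteps, "big steps:", np.round(u, 4))
```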
Abstract:
Signal peptides and transmembrane helices both contain a stretch of hydrophobic amino acids. This common feature makes it difficult for signal peptide and transmembrane helix predictors to correctly assign identity to stretches of hydrophobic residues near the N-terminal methionine of a protein sequence. The inability to reliably distinguish between N-terminal transmembrane helix and signal peptide is an error with serious consequences for the prediction of protein secretory status or transmembrane topology. In this study, we report a new method for differentiating protein N-terminal signal peptides and transmembrane helices. Based on the sequence features extracted from hydrophobic regions (amino acid frequency, hydrophobicity, and the start position), we set up discriminant functions and examined them on non-redundant datasets with jackknife tests. This method can incorporate other signal peptide prediction methods and achieve higher prediction accuracy. For Gram-negative bacterial proteins, 95.7% of N-terminal signal peptides and transmembrane helices can be correctly predicted (coefficient 0.90). Given a sensitivity of 90%, transmembrane helices can be identified from signal peptides with a precision of 99% (coefficient 0.92). For eukaryotic proteins, 94.2% of N-terminal signal peptides and transmembrane helices can be correctly predicted with coefficient 0.83. Given a sensitivity of 90%, transmembrane helices can be identified from signal peptides with a precision of 87% (coefficient 0.85). The method can be used to complement current transmembrane protein prediction and signal peptide prediction methods to improve their prediction accuracies. (C) 2003 Elsevier Inc. All rights reserved.
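The feature-extraction step lends itself to a compact sketch: locate the most hydrophobic N-terminal window, summarise its hydrophobicity, start position and residue frequencies, and feed these to a linear discriminant. The Kyte-Doolittle hydropathy scale used below is standard, but the window length, discriminant weights and example sequence are placeholders, not the published functions.

```python
"""Feature extraction for the N-terminal hydrophobic stretch plus a toy
linear discriminant. Kyte-Doolittle scale is standard; the window length,
weights and example sequence are placeholders, not the published functions."""

KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
      'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
      'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
      'Y': -1.3, 'V': 4.2}

def nterm_features(seq, window=18):
    """Most hydrophobic N-terminal window: (mean hydropathy, start, Leu freq)."""
    head = seq[:40]
    scores = [sum(KD[a] for a in head[i:i + window]) / window
              for i in range(len(head) - window + 1)]
    start = max(range(len(scores)), key=scores.__getitem__)
    region = head[start:start + window]
    return scores[start], start, region.count('L') / window

def toy_discriminant(seq):
    """Linear combination of the three features with placeholder weights."""
    hyd, start, leu = nterm_features(seq)
    return 1.0 * hyd + 0.2 * start - 3.0 * leu - 1.5

# Hypothetical N-terminal sequence, for demonstration only.
print("toy score:", round(toy_discriminant(
    "MKKTAIAIAVALAGFATVAQAAPKDNTWYTGAKLGWSQYHDTGFINNNGPTHENQLGAGAF"), 2))
```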
Abstract:
In this paper we propose a novel fast and linearly scalable method for solving master equations arising in the context of gas-phase reactive systems, based on an existing stiff ordinary differential equation integrator. The required solution of a linear system involving the Jacobian matrix is achieved using the GMRES iteration, preconditioned using the diffusion approximation to the master equation. In this way we avoid the cubic scaling of traditional master equation solution methods and maintain the low temperature robustness of numerical integration. The method is tested using a master equation modelling the formation of propargyl from the reaction of singlet methylene with acetylene, proceeding through long-lived isomerizing intermediates. (C) 2003 American Institute of Physics.
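The linear-algebra kernel of such a method, solving a system involving the Jacobian with preconditioned GMRES, can be illustrated generically. The sketch below uses a stand-in Jacobian and preconditions with only its cheap tridiagonal (diffusion-like) part; it is not the paper's master-equation operator.

```python
"""Preconditioned GMRES solve of the kind a stiff implicit integrator needs:
(I - gamma*J) x = b, preconditioned by the cheap tridiagonal part of J.
Generic sketch with a stand-in Jacobian, not the paper's operator."""
import numpy as np
from scipy.sparse import diags, identity, csc_matrix
from scipy.sparse.linalg import gmres, splu, LinearOperator

n, gamma = 400, 1e-3
# Stand-in Jacobian: stiff tridiagonal 'diffusion' part plus weak dense coupling.
J_tri = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) * 1e4
rng = np.random.default_rng(0)
J_dense = rng.standard_normal((n, n)) / n

A = np.eye(n) - gamma * (J_tri.toarray() + J_dense)   # system matrix (I - gamma*J)

# Preconditioner: exact factorisation of the tridiagonal part only.
M_lu = splu(csc_matrix(identity(n) - gamma * J_tri))
M = LinearOperator((n, n), matvec=M_lu.solve)

b = rng.standard_normal(n)
x, info = gmres(A, b, M=M)
print("converged:", info == 0, " residual:", np.linalg.norm(A @ x - b))
```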
Abstract:
This paper presents a review of the time-domain polarization measurement techniques for the condition assessment of aged transformer insulation. The polarization process is first described with appropriate dielectric response theories, and then commonly used polarization methods are described with special emphasis on the most widely used return voltage (RV) measurement. Recent efforts have been directed at techniques for indirectly determining the moisture content of insulation by measuring RV parameters. The major difficulty still lies with the accurate interpretation of return voltage results. This paper examines differing views on the interpretation of RV results for different moisture and ageing conditions. Other time-domain polarization measurement techniques and their results are also presented.
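The RV protocol itself (charge, ground, open-circuit recovery) is easy to simulate with a toy Debye model of the insulation, which may help fix ideas; the branch values below are illustrative and not fitted to any transformer.

```python
"""Toy Debye-model simulation of a return-voltage (RV) measurement:
charge, ground, then open-circuit recovery. Values are illustrative."""
import numpy as np

C0 = 1e-9                                  # geometric capacitance (F)
R = np.array([1e9, 5e9])                   # relaxation branch resistances (ohm)
C = np.array([2e-9, 4e-9])                 # relaxation branch capacitances (F)
U0, t_c, t_d, dt = 1000.0, 100.0, 50.0, 0.01   # charge V, charge s, short s, step s

v = np.zeros(2)                            # branch capacitor voltages
for _ in range(int(t_c / dt)):             # 1) charging at U0
    v += dt * (U0 - v) / (R * C)
for _ in range(int(t_d / dt)):             # 2) grounding (terminals shorted)
    v += dt * (-v) / (R * C)

u, peak = 0.0, 0.0                         # 3) open circuit: record the RV
for _ in range(int(500.0 / dt)):
    du = -dt * np.sum((u - v) / R) / C0    # net branch current charges C0
    v += dt * (u - v) / (R * C)
    u += du
    peak = max(peak, u)

print("peak return voltage ~", round(peak, 1), "V")
```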
Abstract:
BACKGROUND: Increasing levels of physical inactivity and sedentariness are contributing to the current overweight and obesity epidemic. In this paper, the findings of two recent studies are used to explore the relationships between sitting time (in transport, work and leisure), physical activity and body mass index (BMI) in two contrasting samples of adult Australians. METHODS: Data on sitting time, physical activity, BMI and a number of demographic characteristics were compared for participants in two studies: 529 women who were participants in a preschool health promotion project ('mothers'), and 185 men and women who were involved in a workplace pedometer study ('workers'). Relationships between age, number of children, physical activity, sitting time, BMI, gender and work patterns were explored. Logistic regression was used to predict the likelihood of being overweight or obese among participants with different physical activity, sitting time and work patterns. RESULTS: The total reported time spent sitting per day (across all domains) was almost 6 h less among the mothers than the workers (P
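As a sketch of the analysis style described in METHODS, the following fits a logistic regression of overweight/obesity on sitting time and activity status; the data and effect sizes are synthetic, not the study's.

```python
"""Logistic regression of overweight/obesity on sitting time and activity.
Synthetic data with assumed effect sizes -- analysis style only."""
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 700
sitting = rng.normal(7.0, 2.5, n).clip(1, 16)   # hours/day sitting (synthetic)
active = rng.binomial(1, 0.5, n)                # 1 = meets activity guideline

# Assumed generating model: more sitting and less activity raise the odds.
logit = -2.0 + 0.25 * sitting - 0.8 * active
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([sitting, active])
model = LogisticRegression().fit(X, y)
print("OR per extra hour of sitting:", round(float(np.exp(model.coef_[0][0])), 2))
print("OR for being physically active:", round(float(np.exp(model.coef_[0][1])), 2))
```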
Abstract:
The aim of this study was to compare the effects of two high-intensity treadmill interval-training programs on 3000-m and 5000-m running performance. Maximal oxygen uptake (VO2max), the running speed associated with VO2max (vVO2max), the time for which vVO2max can be maintained (Tmax), running economy (RE), ventilatory threshold (VT), and 3000-m and 5000-m running times were determined in 27 well-trained runners. Subjects were then randomly assigned to three groups: (1) 60% Tmax, (2) 70% Tmax, and (3) control. Subjects in the control group continued their normal training, and subjects in the two Tmax groups undertook a 4-week treadmill interval-training program with the intensity set at vVO2max and the interval duration at the assigned Tmax. These subjects completed two interval-training sessions per week (60% Tmax group = six intervals/session, 70% Tmax group = five intervals/session). Subjects were re-tested on all parameters at the completion of the training program. There was a significant improvement between pre- and post-training values in 3000-m time trial (TT) performance in the 60% Tmax group compared to the 70% Tmax and control groups [mean (SE); 60% Tmax = 17.6 (3.5) s, 70% Tmax = 6.3 (4.2) s, control = 0.5 (7.7) s]. There was no significant effect of the training program on 5000-m TT performance [60% Tmax = 25.8 (13.8) s, 70% Tmax = 3.7 (11.6) s, control = 9.9 (13.1) s]. Although there were no significant improvements in VO2max, vVO2max and RE between groups, changes in VO2max and RE were significantly correlated with the improvement in the 3000-m TT. Furthermore, VT and Tmax were significantly higher in the 60% Tmax group post-training compared to pre-training. In conclusion, 3000-m running performance can be significantly improved in a group of well-trained runners using a 4-week treadmill interval-training program at vVO2max with interval durations of 60% Tmax.
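The interval prescription reduces to simple arithmetic: run at vVO2max for 60% (or 70%) of Tmax, six (or five) times per session. A worked example with assumed runner values:

```python
"""Interval prescription from the abstract: work bouts at vVO2max lasting
60% or 70% of Tmax. The runner's values below are assumed examples."""
v_vo2max_kmh = 20.0      # velocity at VO2max (example value)
t_max_s = 360.0          # time vVO2max can be sustained (example value)

for frac, reps in [(0.60, 6), (0.70, 5)]:
    work_s = frac * t_max_s
    dist_m = v_vo2max_kmh / 3.6 * work_s
    print(f"{int(frac * 100)}% Tmax: {reps} x {work_s:.0f} s ({dist_m:.0f} m) at vVO2max")
```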