37 results for RANDOM OPTIMIZATION
Abstract:
A finite-element method is used to study the elastic properties of random three-dimensional porous materials with highly interconnected pores. We show that Young's modulus, E, is practically independent of the Poisson's ratio of the solid phase, ν_s, over the entire solid-fraction range, and that Poisson's ratio, ν, becomes independent of ν_s as the percolation threshold is approached. We represent this behaviour of ν in a flow diagram. This interesting but approximate behaviour is very similar to the exactly known behaviour in two-dimensional porous materials. In addition, the behaviour of ν versus ν_s appears to imply that information in the dilute-porosity limit can affect behaviour in the percolation-threshold limit. We summarize the finite-element results in terms of simple structure-property relations, rather than tables of data, to make the computational results easier to apply. Without accurate numerical computations, one is limited to various effective-medium theories and rigorous approximations such as bounds and expansions, whose accuracy is unknown for general porous media. To verify a particular theory it is important to check that it predicts both isotropic elastic moduli; prediction of Young's modulus alone is necessary but not sufficient. The subtleties of Poisson's ratio behaviour provide a very effective way of distinguishing between the theories and demonstrating their ranges of validity. We find that for moderate- to high-porosity materials none of the analytical theories is accurate and, at present, numerical techniques must be relied upon.
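The structure-property relations themselves are not reproduced in this abstract, but relations of this kind typically take a percolation-scaling form. The sketch below evaluates such a form with placeholder values of the threshold p_c and exponent m; both are illustrative assumptions, not the fitted parameters from the study.

```python
def youngs_modulus_porous(p, E_s=1.0, p_c=0.65, m=2.0):
    """Percolation-type structure-property relation for the Young's modulus
    of a random porous solid: E/E_s = ((p_c - p)/p_c)**m for porosity p < p_c,
    and E = 0 at and above the percolation threshold p_c.
    p_c and m here are placeholder values, not the study's fitted parameters."""
    if p >= p_c:
        return 0.0
    return E_s * ((p_c - p) / p_c) ** m
```

Fitting such a relation to finite-element data reduces a table of computed moduli to two parameters per property.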
Abstract:
Recently, several groups have investigated quantum analogues of random-walk algorithms, both on a line and on a circle. It has been found that the quantum versions have markedly different features from the classical versions. Namely, the variance on the line and the mixing time on the circle scale quadratically better in the quantum versions than in the classical versions. Here, we propose a scheme to implement the quantum random walk on a line and on a circle in an ion-trap quantum computer. With current ion-trap technology, the number of steps that could be implemented experimentally will be relatively small. However, we show how the enhanced features of these walks could still be observed experimentally. In the limit of strong decoherence, the quantum random walk tends to the classical random walk. By measuring the degree to which the walk remains 'quantum', this algorithm could serve as an important benchmarking protocol for ion-trap quantum computers.
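The quadratically enhanced spreading on the line is easy to reproduce numerically. The sketch below simulates the standard discrete-time Hadamard-coin quantum walk (the textbook construction, not the paper's ion-trap implementation) and returns the position distribution, whose variance grows quadratically with the number of steps rather than linearly as in the classical walk:

```python
import numpy as np

def quantum_walk_line(steps):
    """Discrete-time Hadamard-coin quantum walk on the integer line.
    Returns the position probability distribution over sites -steps..steps."""
    n = 2 * steps + 1
    psi = np.zeros((n, 2), dtype=complex)          # amplitudes psi[x, coin]
    # symmetric initial coin state (1, i)/sqrt(2) gives a symmetric walk
    psi[steps, 0] = 1 / np.sqrt(2)
    psi[steps, 1] = 1j / np.sqrt(2)
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # Hadamard coin
    for _ in range(steps):
        psi = psi @ H.T                            # coin toss at every site
        shifted = np.zeros_like(psi)
        shifted[1:, 0] = psi[:-1, 0]               # coin 0 steps right
        shifted[:-1, 1] = psi[1:, 1]               # coin 1 steps left
        psi = shifted
    return (np.abs(psi) ** 2).sum(axis=1)
```

For the same number of steps, the classical walk's variance equals the step count, while the quantum distribution's variance is roughly an order of magnitude larger already after a few tens of steps.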
Abstract:
Purpose: The purpose of this study was to examine the influence of three different high-intensity interval training (HIT) regimens on endurance performance in highly trained endurance athletes. Methods: Before, and after 2 and 4 wk of training, 38 cyclists and triathletes (mean ± SD: age = 25 ± 6 yr; mass = 75 ± 7 kg; V̇O2peak = 64.5 ± 5.2 mL·kg⁻¹·min⁻¹) performed: 1) a progressive cycle test to measure peak oxygen consumption (V̇O2peak) and peak aerobic power output (PPO); 2) a time-to-exhaustion test (Tmax) at their V̇O2peak power output (Pmax); and 3) a 40-km time trial (TT40). Subjects were matched and assigned to one of four training groups (G1, N = 8, 8 × 60% Tmax at Pmax, 1:2 work:recovery ratio; G2, N = 9, 8 × 60% Tmax at Pmax, recovery at 65% HRmax; G3, N = 10, 12 × 30 s at 175% PPO, 4.5-min recovery; GCON, N = 11). In addition to G1, G2, and G3 performing HIT twice per week, all athletes maintained their regular low-intensity training throughout the experimental period. Results: All HIT groups improved TT40 performance (+4.4 to +5.8%) and PPO (+3.0 to +6.2%) significantly more than GCON (-0.9 to +1.1%; P < 0.05). Furthermore, G1 (+5.4%) and G2 (+8.1%) improved their V̇O2peak significantly more than GCON (+1.0%; P < 0.05). Conclusion: The present study has shown that when HIT incorporates Pmax as the interval intensity and 60% of Tmax as the interval duration, already highly trained cyclists can significantly improve their 40-km time-trial performance. Moreover, the present data confirm prior research in that repeated supramaximal HIT can significantly improve 40-km time-trial performance.
Abstract:
This paper presents a new approach to the LU decomposition method for the simulation of stationary and ergodic random fields. The approach overcomes the size limitations of LU decomposition and is suitable for simulations of any size. It also allows fast updating of generated realizations with new data, when appropriate, without repeating the full simulation process. Based on a novel column partitioning of the L matrix, expressed in terms of successive conditional covariance matrices, the approach demonstrates that LU simulation is equivalent to the successive solution of kriging residual estimates plus random terms; consequently, it can be used for the LU decomposition of matrices of any size. The method is termed conditional simulation by successive residuals because, at each step, a small set (group) of random variables is simulated with an LU decomposition of an updated conditional covariance matrix of residuals. The simulated group is then used to estimate residuals without the need to solve large systems of equations.
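The equivalence described above, in which each group of variables is drawn as a kriging estimate from the already-simulated values plus an independent residual with the conditional covariance, can be sketched as follows. This is a simplified illustration of the idea, not the authors' implementation; the function name, block partitioning, and covariance model are all assumptions.

```python
import numpy as np

def lu_simulate_blocks(C, block=10, rng=None):
    """Simulate a zero-mean Gaussian vector with covariance C group by group.
    Each group is drawn as its kriging (conditional-mean) estimate from the
    already-simulated values plus an independent residual whose covariance is
    the conditional covariance, so only small matrices are ever factorized."""
    rng = np.random.default_rng(rng)
    n = C.shape[0]
    z = np.zeros(n)
    for start in range(0, n, block):
        stop = min(start + block, n)
        past, cur = slice(0, start), slice(start, stop)
        if start == 0:
            mean, cond = np.zeros(stop), C[cur, cur]
        else:
            w = np.linalg.solve(C[past, past], C[past, cur])  # kriging weights
            mean = w.T @ z[past]
            cond = C[cur, cur] - C[cur, past] @ w             # conditional cov
        L = np.linalg.cholesky(cond + 1e-10 * np.eye(stop - start))
        z[cur] = mean + L @ rng.standard_normal(stop - start)
    return z
```

By induction on the blocks, the resulting vector has exactly the target covariance C, which is the equivalence the column partitioning establishes.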
Abstract:
Sensitivity of the output of a linear operator to its input can be quantified in various ways. In control theory, the input is usually interpreted as a disturbance and the output is to be minimized in some sense. In stochastic worst-case design settings, the disturbance is considered random with an imprecisely known probability distribution. The prior set of probability measures can be chosen so as to quantify how far the disturbance deviates from the white-noise hypothesis of Linear Quadratic Gaussian control. Such deviation can be measured by the minimal Kullback-Leibler informational divergence from the Gaussian distributions with zero mean and scalar covariance matrices. The resulting anisotropy functional is defined for finite-power random vectors. Originally, anisotropy was introduced for directionally generic random vectors as the relative entropy of the normalized vector with respect to the uniform distribution on the unit sphere. The associated a-anisotropic norm of a matrix is then its maximum root-mean-square or average energy gain with respect to finite-power or directionally generic inputs whose anisotropy is bounded above by a ≥ 0. We give a systematic comparison of the anisotropy functionals and the associated norms. These are considered for unboundedly growing fragments of homogeneous Gaussian random fields on a multidimensional integer lattice to yield the mean anisotropy. Correspondingly, the anisotropic norms of finite matrices are extended to bounded linear translation-invariant operators over such fields.
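For a zero-mean Gaussian vector with covariance Σ in R^n, the KL divergence to N(0, λI) is ½[tr(Σ)/λ − n + n ln λ − ln det Σ]; minimizing over λ gives λ = tr(Σ)/n, so the minimal divergence reduces to −½ ln det(nΣ/tr Σ). The sketch below computes this Gaussian special case directly; the function name is ours, and the general finite-power functional of the paper is more involved.

```python
import numpy as np

def anisotropy_gaussian(Sigma):
    """Anisotropy of a zero-mean Gaussian vector with covariance Sigma:
    the minimum over lambda > 0 of D( N(0, Sigma) || N(0, lambda * I) ).
    The minimum is attained at lambda = tr(Sigma)/n and equals
    -(1/2) * log det( n * Sigma / tr(Sigma) ), which is nonnegative and
    zero exactly when Sigma is a scalar matrix (the isotropic case)."""
    Sigma = np.asarray(Sigma, dtype=float)
    n = Sigma.shape[0]
    lam = np.trace(Sigma) / n          # optimal scalar covariance level
    _, logdet = np.linalg.slogdet(Sigma / lam)
    return -0.5 * logdet
```

For example, a diagonal covariance diag(1, 4) has anisotropy ln(5/4), while any multiple of the identity has anisotropy zero.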
Abstract:
A combined Genetic Algorithm and Method of Moments design method is presented for the design of unusual near-field antennas for use in Magnetic Resonance Imaging systems. The method is successfully applied to the design of an asymmetric coil structure for use at 190 MHz and demonstrates excellent radiofrequency field homogeneity.
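The Method of Moments field solver is beyond a short sketch, but the genetic-algorithm component can be illustrated generically. The following minimal real-coded GA (tournament selection, uniform crossover, Gaussian mutation, elitism) minimizes an arbitrary cost function standing in for the field-inhomogeneity objective; all names and parameter settings here are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

def genetic_minimize(cost, bounds, pop=40, gens=120, pmut=0.2, seed=0):
    """Minimal real-coded genetic algorithm. `cost` maps a parameter vector
    (e.g. coil-element positions) to a scalar to be minimized; `bounds` is a
    list of (low, high) pairs, one per parameter."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    X = rng.uniform(lo, hi, size=(pop, dim))
    best_x, best_f = None, np.inf
    for _ in range(gens):
        f = np.array([cost(x) for x in X])
        i = int(np.argmin(f))
        if f[i] < best_f:
            best_x, best_f = X[i].copy(), f[i]
        # tournament selection: the better of two random individuals survives
        a, b = rng.integers(pop, size=(2, pop))
        parents = X[np.where(f[a] < f[b], a, b)]
        # uniform crossover against a shifted copy of the parent pool
        mask = rng.random((pop, dim)) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Gaussian mutation, clipped back into the search bounds
        mut = rng.random((pop, dim)) < pmut
        children = children + mut * rng.normal(0.0, 0.05 * (hi - lo), (pop, dim))
        X = np.clip(children, lo, hi)
        X[0] = best_x                  # elitism: keep the best-so-far
    f = np.array([cost(x) for x in X])
    i = int(np.argmin(f))
    if f[i] < best_f:
        best_x, best_f = X[i].copy(), f[i]
    return best_x, best_f
```

In the combined scheme, `cost` would invoke the electromagnetic solver to score each candidate coil geometry, which is where nearly all of the runtime goes.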