945 results for Anaerobic Threshold
Abstract:
Fatigue crack growth rate tests have been performed on Nimonic AP1, a powder-formed Ni-base superalloy, in air and vacuum at room temperature. These show that threshold values are higher, and near-threshold (faceted) crack growth rates are lower, in vacuum than in air, although at high growth rates, in the “structure-insensitive” regime, R-ratio and a dilute environment have little effect. Changing the R-ratio from 0.1 to 0.5 in vacuum does not alter near-threshold crack growth rates very much, despite more extensive secondary cracking being noticeable at R = 0.5. In vacuum, rewelding occurs at contact points across the crack as ΔK falls. This leads to the production of extensive fracture surface damage and bulky fretting debris, and is thought to be a significant contributory factor to the observed increase in threshold values.
Abstract:
Fatigue crack propagation and threshold data for two Ni-base alloys, Astroloy and Nimonic 901, are reported. At room temperature the effect which altering the load ratio (R-ratio) has on fatigue behaviour is strongly dependent on grain size. In the coarse-grained microstructures crack growth rates increase and threshold values decrease markedly as R rises from 0.1 to 0.8, whereas only small changes in behaviour occur in fine-grained material. In Astroloy, when strength level and γ grain size are kept constant, there is very little effect of processing route and γ′ distribution on room-temperature threshold and crack propagation results. The dominant microstructural effect on this type of fatigue behaviour is the matrix (γ) grain size itself.
Abstract:
Threshold stress intensity values, ranging from ∼6 to 16 MN m⁻³/², can be obtained in powder-formed Nimonic AP1 by changing the microstructure. The threshold and low-crack-growth-rate behaviour at room temperature of a number of widely differing AP1 microstructures, with both ‘necklace’ and fully recrystallized grain structures of various sizes and uniform and bimodal γ′ distributions, has been investigated. The results indicate that grain size is an important microstructural parameter which can control threshold behaviour, with the value of threshold stress intensity increasing with increasing grain size, but that the γ′ distribution is also important. In this Ni-base alloy, as in many others, near-threshold fatigue crack growth occurs in a crystallographic manner along {111} planes. This is due to the development of a dislocation structure involving persistent slip bands on {111} planes in the plastic zone, caused by the presence of ordered shearable precipitates in the microstructure. However, as the stress intensity range is increased, a striated growth mode takes over. The results presented show that this transition from faceted to striated growth is associated with a sudden increase in crack propagation rate and occurs when the size of the reverse plastic zone at the crack tip becomes equal to the grain size, independent of any other microstructural variables.
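The transition criterion above (reversed plastic zone equal to grain size) can be sketched numerically. Assuming the common Irwin-type estimate r_c ≈ (1/π)(ΔK/2σ_y)² for the cyclic (reversed) plastic zone, equating r_c to the grain size d gives a transition range ΔK_T = 2σ_y√(πd). The yield stress and grain size below are illustrative assumptions, not values from the abstract:

```python
import math

def reversed_plastic_zone(delta_K, sigma_y):
    """Irwin-type estimate of the cyclic (reversed) plastic zone size.
    delta_K in MPa*sqrt(m), sigma_y in MPa; returns metres."""
    return (1.0 / math.pi) * (delta_K / (2.0 * sigma_y)) ** 2

def transition_delta_K(grain_size, sigma_y):
    """Stress intensity range at which the reversed plastic zone
    equals the grain size (the faceted-to-striated transition)."""
    return 2.0 * sigma_y * math.sqrt(math.pi * grain_size)

# Illustrative numbers: sigma_y = 1000 MPa, grain size 20 um.
dK_t = transition_delta_K(20e-6, 1000.0)   # ~15.9 MPa*sqrt(m)
```

With these assumed inputs the transition falls in the mid-teens of MPa√m, consistent in order of magnitude with the ∼6 to 16 MN m⁻³/² threshold range quoted in the abstract.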
Abstract:
The deliberate addition of Gaussian noise to cochlear implant signals has previously been proposed to enhance the time coding of signals by the cochlear nerve. Potentially, the addition of an inaudible level of noise could also have secondary benefits: it could lower the threshold to the information-bearing signal, and by desynchronization of nerve discharges, it could increase the level at which the information-bearing signal becomes uncomfortable. Both these effects would lead to an increased dynamic range, which might be expected to enhance speech comprehension and make the choice of cochlear implant compression parameters less critical (as with a wider dynamic range, small changes in the parameters would have less effect on loudness). The hypothesized secondary effects were investigated with eight users of the Clarion cochlear implant; the stimulation was analogue and monopolar. For presentations in noise, noise at 95% of the threshold level was applied simultaneously and independently to all the electrodes. The noise was found in two-alternative forced-choice (2AFC) experiments to decrease the threshold to sinusoidal stimuli (100 Hz, 1 kHz, 5 kHz) by about 2.0 dB and increase the dynamic range by 0.7 dB. Furthermore, in 2AFC loudness balance experiments, noise was found to decrease the loudness of moderate to intense stimuli. This suggests that loudness is partially coded by the degree of phase-locking of cochlear nerve fibers. The overall gain in dynamic range was modest, and more complex noise strategies, for example, using inhibition between the noise sources, may be required to get a clinically useful benefit. © 2006 Association for Research in Otolaryngology.
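The proposed benefit of added noise rests on stochastic resonance: a sub-threshold signal plus noise crosses a hard detection threshold on some cycles, so the effective threshold to the signal drops. The sketch below is a deliberately crude stand-in for cochlear-nerve dynamics (a single hard threshold, sampling only the signal peak); all parameter values are illustrative assumptions:

```python
import random

def detection_rate(signal_amp, noise_sd, threshold=1.0, n_trials=2000, seed=1):
    """Fraction of trials in which a (possibly sub-threshold) signal peak
    plus independent Gaussian noise exceeds a fixed firing threshold."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        sample = signal_amp + rng.gauss(0.0, noise_sd)
        if sample >= threshold:
            hits += 1
    return hits / n_trials

# A signal at 80% of threshold is never detected without noise,
# but noise at a fraction of the threshold lifts it over on some trials.
silent = detection_rate(0.8, 0.0)      # exactly 0.0
with_noise = detection_rate(0.8, 0.2)  # clearly above 0
```

This is only the first-order effect; the abstract's secondary effects (loudness reduction via desynchronization) need a population model rather than a single threshold unit.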
Abstract:
The architecture and learning algorithm of a self-learning spiking neural network for fuzzy clustering tasks are outlined. Fuzzy receptive neurons for pulse-position transformation of the input data are considered. It is proposed to treat the spiking neural network in terms of the apparatus of classical automatic control theory, based on the Laplace transform. It is shown that synapse functioning can be easily modeled by a second-order damped response unit, while the spiking neuron soma is presented as a threshold detection unit. Thus, the proposed fuzzy spiking neural network is an analog-digital nonlinear pulse-position dynamic system. It is demonstrated how fuzzy probabilistic and possibilistic clustering approaches can be implemented on the basis of the presented spiking neural network.
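The two building blocks named in the abstract can be sketched directly: the alpha-shaped kernel below is the impulse response of a critically damped second-order system (the "second-order damped response unit" modeling a synapse), and the soma is a pure threshold detection unit. Time constant, weights, and firing threshold are illustrative assumptions, not the authors' parameters:

```python
import math

def alpha_psp(t, tau=10.0):
    """Impulse response of a critically damped second-order unit:
    the classic alpha-shaped postsynaptic potential, peaking at t = tau
    with unit amplitude."""
    if t < 0:
        return 0.0
    return (t / tau) * math.exp(1.0 - t / tau)

def soma_fires(spike_times, weights, t, threshold=1.5, tau=10.0):
    """Threshold detection unit: the soma fires at time t if the weighted
    sum of postsynaptic potentials reaches the firing threshold."""
    u = sum(w * alpha_psp(t - ts, tau) for ts, w in zip(spike_times, weights))
    return u >= threshold

# Two coincident unit-weight input spikes drive the soma above threshold
# at the PSP peak (u = 2.0); a single spike (u = 1.0) does not.
```

In the pulse-position scheme the abstract describes, earlier input spikes produce larger PSP contributions at a fixed readout time, which is what makes latency-coded inputs separable by such a unit.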
Abstract:
Background: Vigabatrin (VGB) is an anti-epileptic medication which has been linked to peripheral constriction of the visual field. Documenting the natural history associated with continued VGB exposure is important when making decisions about the risks and benefits associated with the treatment. Due to its speed, the Swedish Interactive Threshold Algorithm (SITA) has become the algorithm of choice when carrying out Full Threshold automated static perimetry. SITA uses prior distributions of normal and glaucomatous visual field behaviour to estimate threshold sensitivity. As the abnormal model is based on glaucomatous behaviour, this algorithm has not been validated for VGB recipients. We aim to assess the clinical utility of the SITA algorithm for accurately mapping VGB-attributed field loss. Methods: The sample comprised one randomly selected eye of 16 patients diagnosed with epilepsy and exposed to VGB therapy. A clinical diagnosis of VGB-attributed visual field loss was documented in 44% of the group. The mean age was 39.3 ± 14.5 years and the mean deviation was −4.76 ± 4.34 dB. Each patient was examined with the Full Threshold, SITA Standard and SITA Fast algorithms. Results: SITA Standard was on average approximately twice as fast (7.6 minutes) and SITA Fast approximately three times as fast (4.7 minutes) as examinations completed using the Full Threshold algorithm (15.8 minutes). In the clinical environment, the visual field outcome with both SITA algorithms was equivalent to visual field examination using the Full Threshold algorithm in terms of visual inspection of the grey scale plots, defect area and defect severity. Conclusions: Our research shows that both SITA algorithms are able to accurately map visual field loss attributed to VGB. As patients diagnosed with epilepsy are often vulnerable to fatigue, the time saving offered by SITA Fast means that this algorithm has a significant advantage for use with VGB recipients.
Abstract:
In this study, we developed a DEA-based performance measurement methodology that is consistent with performance assessment frameworks such as the Balanced Scorecard. The methodology developed in this paper takes into account the direct or inverse relationships that may exist among the dimensions of performance to construct appropriate production frontiers. The production frontiers we obtained are deemed appropriate as they consist solely of firms with desirable levels for all dimensions of performance. These levels should be at least equal to the critical values set by decision makers. The properties and advantages of our methodology against competing methodologies are presented through an application to a real-world case study from retail firms operating in the US. A comparative analysis between the new methodology and existing methodologies explains the failure of the existing approaches to define appropriate production frontiers when directly or inversely related dimensions of performance are present and to express the interrelationships between the dimensions of performance.
Abstract:
It is often assumed (for analytical convenience, but also in accordance with common intuition) that consumer preferences are convex. In this paper, we consider circumstances under which such preferences are (or are not) optimal. In particular, we investigate a setting in which goods possess some hidden quality with known distribution, and the consumer chooses a bundle of goods that maximizes the probability that he receives some threshold level of this quality. We show that if the threshold is small relative to consumption levels, preferences will tend to be convex; whereas the opposite holds if the threshold is large. Our theory helps explain a broad spectrum of economic behavior (including, in particular, certain common commercial advertising strategies), suggesting that sensitivity to information about thresholds is deeply rooted in human psychology.
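The small-versus-large threshold result can be illustrated with a toy two-good model (our own simplification, not the paper's setup): a consumer splits one unit of budget as x and 1 − x between two goods with i.i.d. normal qualities, so bundle quality is Normal with mean μ and standard deviation σ√(x² + (1−x)²). Diversifying (x = 0.5) lowers the variance, which helps exactly when the threshold lies below the mean:

```python
import math

def p_meets_threshold(x, threshold, mu=1.0, sigma=1.0):
    """P(x*Z1 + (1-x)*Z2 >= threshold) for independent Zi ~ N(mu, sigma^2).
    The bundle's quality is Normal(mu, sigma * sqrt(x^2 + (1-x)^2))."""
    sd = sigma * math.hypot(x, 1.0 - x)
    z = (threshold - mu) / sd
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Threshold below the mean quality: the diversified bundle (x = 0.5)
# beats the concentrated one (x = 1.0) -> preference for mixing (convexity).
p_div_low = p_meets_threshold(0.5, threshold=0.5)
p_con_low = p_meets_threshold(1.0, threshold=0.5)

# Threshold above the mean: concentration wins -> non-convex preferences.
p_div_hi = p_meets_threshold(0.5, threshold=2.0)
p_con_hi = p_meets_threshold(1.0, threshold=2.0)
```

The mechanism is the same as risk aversion versus risk seeking around a target: below-mean thresholds reward variance reduction, above-mean thresholds reward gambling on a single good.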
Abstract:
A hydrodynamic threshold between Darcian and non-Darcian flow conditions was found to occur in cubes of Key Largo Limestone from Florida, USA (one cube measuring 0.2 m on each side, the other 0.3 m) at an effective porosity of 33% and a hydraulic conductivity of 10 m/day. Below these values, flow was laminar and could be described as Darcian. Above these values, hydraulic conductivity increased greatly and flow was non-laminar. Reynolds numbers (Re) for these experiments ranged from
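For orientation, Darcy's law and a pore-scale Reynolds number can be evaluated as below. The hydraulic gradient and characteristic pore diameter are illustrative assumptions, not values from the study; only the K = 10 m/day threshold comes from the abstract:

```python
def darcy_flux(K, gradient):
    """Darcy's law: specific discharge q = K * i
    (K and q in consistent units; i is dimensionless)."""
    return K * gradient

def reynolds_number(q, d, rho=1000.0, mu=1.0e-3):
    """Pore-scale Reynolds number Re = rho * q * d / mu, with q in m/s,
    d a characteristic pore/grain diameter in m, water at ~20 C."""
    return rho * q * d / mu

# At the reported threshold conductivity K = 10 m/day, with an assumed
# gradient of 0.01 and an assumed 1 mm characteristic diameter:
q = darcy_flux(10.0 / 86400.0, 0.01)   # specific discharge in m/s
Re = reynolds_number(q, 0.001)         # well below the laminar limit
```

With these assumed values Re is of order 10⁻³, i.e. comfortably laminar; the non-Darcian behaviour reported above the threshold reflects the much larger conduit apertures (and hence q and d) of the high-porosity cubes.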
Abstract:
A gap exists in the knowledge of acute dehydration and its effect on anaerobic muscular power. Therefore, the purpose of this study was to examine the effects of active dehydration, induced by exercise in a hot, humid environment, on anaerobic muscular power.
Abstract:
Fixed-step-size (FSS) and Bayesian staircases are widely used methods to estimate sensory thresholds in 2AFC tasks, although a direct comparison of both types of procedure under identical conditions has not previously been reported. A simulation study and an empirical test were conducted to compare the performance of optimized Bayesian staircases with that of four optimized variants of the FSS staircase differing in their up-down rules. The ultimate goal was to determine whether FSS or Bayesian staircases are the best choice in experimental psychophysics. The comparison considered the properties of the estimates (i.e. bias and standard errors) in relation to their cost (i.e. the number of trials to completion). The simulation study showed that mean estimates of Bayesian and FSS staircases are dependable when sufficient trials are given and that, in both cases, the standard deviation (SD) of the estimates decreases with number of trials, although the SD of Bayesian estimates is always lower than that of FSS estimates (and thus, Bayesian staircases are more efficient). The empirical test did not support these conclusions, as (1) neither procedure rendered estimates converging on some value, (2) standard deviations did not follow the expected pattern of decrease with number of trials, and (3) both procedures appeared to be equally efficient. Potential factors explaining the discrepancies between simulation and empirical results are commented upon and, all things considered, a sensible recommendation is for psychophysicists to run no fewer than 18 and no more than 30 reversals of an FSS staircase implementing the 1-up/3-down rule.
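The recommended 1-up/3-down FSS staircase can be sketched as a simulation (our own minimal implementation; the logistic psychometric function, step size, and starting level are illustrative assumptions). One incorrect response raises the stimulus level, three consecutive correct responses lower it, and the threshold estimate is the mean of the last reversal levels, which for this rule converges near the 79.4%-correct point:

```python
import math
import random

def pf(x, threshold, slope=1.0, guess=0.5):
    """2AFC psychometric function: P(correct) rises from the 0.5
    guess rate toward 1 as the stimulus level x increases."""
    return guess + (1.0 - guess) / (1.0 + math.exp(-slope * (x - threshold)))

def staircase_1up3down(true_threshold, start=10.0, step=1.0,
                       n_reversals=20, seed=0):
    """Fixed-step-size 1-up/3-down staircase run on a simulated observer.
    Returns the mean of the last 12 reversal levels."""
    rng = random.Random(seed)
    x, correct_run, direction = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        correct = rng.random() < pf(x, true_threshold)
        if correct:
            correct_run += 1
            if correct_run == 3:            # three in a row: step down
                correct_run = 0
                if direction == +1:         # was going up: a reversal
                    reversals.append(x)
                direction = -1
                x -= step
        else:
            correct_run = 0                 # any error: step up
            if direction == -1:             # was going down: a reversal
                reversals.append(x)
            direction = +1
            x += step
    tail = reversals[-12:]
    return sum(tail) / len(tail)

estimate = staircase_1up3down(true_threshold=5.0)
```

Running 18 to 30 reversals, as the abstract recommends, simply means choosing `n_reversals` in that range and averaging an even number of the final reversals.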
Abstract:
Threshold estimation with sequential procedures is justifiable on the surmise that the index used in the so-called dynamic stopping rule has diagnostic value for identifying when an accurate estimate has been obtained. The performance of five types of Bayesian sequential procedure was compared here to that of an analogous fixed-length procedure. Indices for use in sequential procedures were: (1) the width of the Bayesian probability interval, (2) the posterior standard deviation, (3) the absolute change, (4) the average change, and (5) the number of sign fluctuations. A simulation study was carried out to evaluate which index renders estimates with less bias and smaller standard error at lower cost (i.e. lower average number of trials to completion), in both yes–no and two-alternative forced-choice (2AFC) tasks. We also considered the effect of the form and parameters of the psychometric function and its similarity with the model function assumed in the procedure. Our results show that sequential procedures do not outperform fixed-length procedures in yes–no tasks. However, in 2AFC tasks, sequential procedures not based on sign fluctuations all yield minimally better estimates than fixed-length procedures, although most of the improvement occurs with short runs that render undependable estimates and the differences vanish when the procedures run for a number of trials (around 70) that ensures dependability. Thus, none of the indices considered here (some of which are widespread) has the diagnostic value that would justify its use. In addition, difficulties of implementation make sequential procedures unfit as alternatives to fixed-length procedures.
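Index (2), the posterior standard deviation, is the easiest stopping rule to sketch. Below is a minimal yes-no Bayesian procedure with a grid posterior over candidate thresholds (our own illustration, not the authors' implementation; the logistic observer model, grid, and stopping criterion are all assumptions). Each trial is placed at the posterior mean, the posterior is updated, and the run stops once the posterior SD falls below a criterion:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def bayes_posterior_sd_procedure(true_threshold, sd_stop=0.3,
                                 max_trials=300, seed=0):
    """Yes-no Bayesian sequential procedure with a posterior-SD stopping
    rule. Returns (threshold estimate, trials used)."""
    rng = random.Random(seed)
    grid = [i * 0.1 for i in range(0, 101)]      # candidate thresholds 0..10
    post = [1.0 / len(grid)] * len(grid)         # uniform prior
    mean = sum(g * p for g, p in zip(grid, post))
    for trial in range(1, max_trials + 1):
        # simulate the observer at the current stimulus placement
        yes = rng.random() < sigmoid(mean - true_threshold)
        like = [sigmoid(mean - g) if yes else 1.0 - sigmoid(mean - g)
                for g in grid]
        post = [p * l for p, l in zip(post, like)]
        z = sum(post)
        post = [p / z for p in post]
        # dynamic stopping rule: posterior standard deviation
        mean = sum(g * p for g, p in zip(grid, post))
        var = sum(p * (g - mean) ** 2 for g, p in zip(grid, post))
        if math.sqrt(var) < sd_stop:
            return mean, trial
    return mean, max_trials
```

The abstract's point can be seen in such a sketch: the criterion controls when the run stops, but a small posterior SD does not by itself certify that the estimate is accurate, which is the diagnostic value the paper finds lacking.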