19 results for Weighted average power tests

in University of Queensland eSpace - Australia


Relevance: 100.00%

Abstract:

Few studies have focused on the metabolic responses to alternating high- and low-intensity exercise and, specifically, compared these responses to those seen during constant-load exercise performed at the same average power output. This study compared muscle metabolic responses between two patterns of exercise during which the intensity was either constant and just below critical power (CP) or oscillated above and below CP. Six trained males (mean +/- SD age 23.6 +/- 2.6 y) completed two 30-minute bouts of cycling (alternating and constant) at an average intensity equal to 90% of CP. The intensity during alternating exercise varied between 158% CP and 73% CP. Biopsy samples from the vastus lateralis muscle were taken before (PRE), at the midpoint and end (POST) of exercise and analysed for glycogen, lactate, PCr and pH. Although these metabolic variables in muscle changed significantly during both patterns of exercise, there were no significant differences (p > 0.05) between constant and alternating exercise for glycogen (PRE: 418.8 +/- 85 vs. 444.3 +/- 70; POST: 220.5 +/- 59 vs. 259.5 +/- 126 mmol.kg(-1) dw), lactate (PRE: 8.5 +/- 7.7 vs. 8.5 +/- 8.3; POST: 49.9 +/- 19.0 vs. 42.6 +/- 26.6 mmol.kg(-1) dw), phosphocreatine (PRE: 77.9 +/- 11.6 vs. 75.7 +/- 16.9; POST: 65.8 +/- 12.1 vs. 61.2 +/- 12.7 mmol.kg(-1) dw) or pH (PRE: 6.99 +/- 0.12 vs. 6.99 +/- 0.08; POST: 6.86 +/- 0.13 vs. 6.85 +/- 0.06), respectively. There were also no significant differences in blood lactate responses to the two patterns of exercise. These data suggest that, when the average power output is similar, large variations in exercise intensity exert no significant effect on muscle metabolism.
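
As a rough arithmetic illustration of the time-weighted average behind these figures (the work:recovery split below is inferred from the reported intensities, not stated in the abstract):

```python
# Illustrative check of the time-weighted average power for the alternating protocol.
# The abstract gives intensities of 158% CP and 73% CP and an average of 90% CP;
# the fraction of time at each intensity is inferred, not reported in the abstract.

def high_intensity_fraction(p_high, p_low, p_avg):
    """Fraction of time at p_high so the time-weighted average equals p_avg."""
    return (p_avg - p_low) / (p_high - p_low)

f = high_intensity_fraction(p_high=1.58, p_low=0.73, p_avg=0.90)
print(f"fraction of time at 158% CP: {f:.2f}")   # ~0.20, i.e. roughly 1:4 work:recovery
```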

Relevance: 100.00%

Abstract:

The integration of geo-information from multiple sources and of diverse nature in developing mineral favourability indexes (MFIs) is a well-known problem in mineral exploration and mineral resource assessment. Fuzzy set theory provides a convenient framework to combine and analyse qualitative and quantitative data independently of their source or characteristics. A novel, data-driven formulation for calculating MFIs based on fuzzy analysis is developed in this paper. Different geo-variables are considered as fuzzy sets and their appropriate membership functions are defined and modelled. A new weighted average-type aggregation operator is then introduced to generate a new fuzzy set representing mineral favourability. The membership grades of the new fuzzy set are considered as the MFI. The weights for the aggregation operation combine the individual membership functions of the geo-variables, and are derived using information from training areas and L1 regression. The technique is demonstrated in a case study of skarn tin deposits and is used to integrate geological, geochemical and magnetic data. The study area covers a total of 22.5 km(2) and is divided into 349 cells, which include nine control cells. Nine geo-variables are considered in this study. Depending on the nature of the various geo-variables, four different types of membership functions are used to model the fuzzy membership of the geo-variables involved. (C) 2002 Elsevier Science Ltd. All rights reserved.
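
A minimal sketch of a weighted average-type aggregation of fuzzy membership grades into an MFI; the membership values and weights below are invented for illustration, whereas the paper derives its weights from the control cells and regression:

```python
import numpy as np

# Weighted-average aggregation of fuzzy membership grades into a mineral
# favourability index (MFI) per cell. Grades and weights are made up here.

def mfi(memberships: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """memberships: (n_cells, n_geovariables) grades in [0, 1];
    weights: (n_geovariables,) non-negative importance weights."""
    w = weights / weights.sum()          # normalise so the aggregate stays in [0, 1]
    return memberships @ w               # weighted average per cell

cells = np.array([[0.9, 0.7, 0.4],      # e.g. geological, geochemical, magnetic grades
                  [0.2, 0.3, 0.1]])
print(mfi(cells, np.array([2.0, 1.0, 1.0])))   # favourability index per cell
```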

Relevance: 100.00%

Abstract:

The data structure of an information system can significantly impact the ability of end users to efficiently and effectively retrieve the information they need. This research develops a methodology for evaluating, ex ante, the relative desirability of alternative data structures for end user queries. This research theorizes that the data structure that yields the lowest weighted average complexity for a representative sample of information requests is the most desirable data structure for end user queries. The theory was tested in an experiment that compared queries from two different relational database schemas. As theorized, end users querying the data structure associated with the less complex queries performed better. Complexity was measured using three different Halstead metrics. Each of the three metrics provided excellent predictions of end user performance. This research supplies strong evidence that organizations can use complexity metrics to evaluate, ex ante, the desirability of alternative data structures. Organizations can use these evaluations to enhance the efficient and effective retrieval of information by creating data structures that minimize end user query complexity.
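
A minimal sketch of the Halstead metrics referred to above, assuming a simplified split of a query into operators and operands (the paper's exact counting rules are not reproduced here):

```python
import math

# Halstead metrics from operator/operand counts. The SQL token classification
# below is deliberately simplified for illustration.

def halstead(operators, operands):
    n1, n2 = len(set(operators)), len(set(operands))   # distinct operators / operands
    N1, N2 = len(operators), len(operands)             # total occurrences
    length = N1 + N2
    volume = length * math.log2(n1 + n2)
    difficulty = (n1 / 2) * (N2 / n2)
    return {"length": length, "volume": volume,
            "difficulty": difficulty, "effort": difficulty * volume}

# e.g. SELECT name FROM emp WHERE dept = 'R&D'
ops = ["SELECT", "FROM", "WHERE", "="]
opnds = ["name", "emp", "dept", "'R&D'"]
print(halstead(ops, opnds))
```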

Relevance: 100.00%

Abstract:

This paper presents a new low-complexity multicarrier modulation (MCM) technique based on lattices which achieves a peak-to-average power ratio (PAR) as low as three. The scheme can be viewed as a drop-in replacement for the discrete multitone (DMT) modulation of an asymmetric digital subscriber line modem. We show that the lattice-MCM retains many of the attractive features of sinusoidal-MCM, and does so with lower implementation complexity, O(N), compared with DMT, which requires O(N log N) operations. We also present techniques for narrowband interference rejection and power profiling. Simulation studies confirm that the performance of the lattice-MCM is superior, even compared with recent techniques for PAR reduction in DMT.
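
An illustrative sketch of how the peak-to-average power ratio of one modulated block is computed; the random DMT symbol below is synthetic and only shows why conventional DMT tends to a much higher PAR than the value of three quoted above:

```python
import numpy as np

# Peak-to-average power ratio (PAR) of one modulated block: peak instantaneous
# power divided by mean power. The 4-QAM/DMT symbol here is synthetic.

def par(x: np.ndarray) -> float:
    power = np.abs(x) ** 2
    return power.max() / power.mean()

rng = np.random.default_rng(0)
N = 256
qam = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
dmt_block = np.fft.ifft(qam) * np.sqrt(N)     # time-domain DMT symbol
print(f"PAR of a random DMT symbol: {par(dmt_block):.1f}")   # typically well above 3
```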

Relevance: 100.00%

Abstract:

The schema of an information system can significantly impact the ability of end users to efficiently and effectively retrieve the information they need. Obtaining quickly the appropriate data increases the likelihood that an organization will make good decisions and respond adeptly to challenges. This research presents and validates a methodology for evaluating, ex ante, the relative desirability of alternative instantiations of a model of data. In contrast to prior research, each instantiation is based on a different formal theory. This research theorizes that the instantiation that yields the lowest weighted average query complexity for a representative sample of information requests is the most desirable instantiation for end-user queries. The theory was validated by an experiment that compared end-user performance using an instantiation of a data structure based on the relational model of data with performance using the corresponding instantiation of the data structure based on the object-relational model of data. Complexity was measured using three different Halstead metrics: program length, difficulty, and effort. For a representative sample of queries, the average complexity using each instantiation was calculated. As theorized, end users querying the instantiation with the lower average complexity made fewer semantic errors, i.e., were more effective at composing queries. (c) 2005 Elsevier B.V. All rights reserved.
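
A minimal sketch of the weighted average complexity criterion, with invented per-query effort scores and request frequencies (not the paper's data):

```python
# Per-query Halstead effort scores are averaged with weights reflecting how
# frequently each kind of information request occurs; the instantiation with
# the lower value is predicted to be easier to query. All numbers are invented.

def weighted_avg_complexity(efforts, frequencies):
    total = sum(frequencies)
    return sum(e * f for e, f in zip(efforts, frequencies)) / total

relational        = weighted_avg_complexity([120, 340, 95], [50, 30, 20])
object_relational = weighted_avg_complexity([100, 280, 110], [50, 30, 20])
print(relational, object_relational)
```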

Relevance: 100.00%

Abstract:

The aim of the study was to perform a genetic linkage analysis for eye color, using comparative data. Similarity in eye color of mono- and dizygotic twins was rated by the twins' mother, their father and/or the twins themselves. For 4748 twin pairs the similarity in eye color was available on a three-point scale (not at all alike, somewhat alike, completely alike); absolute eye color of individuals was not assessed. The probability that twins were alike for eye color was calculated as a weighted average of the different responses of all respondents at several different time points. The mean probability of being alike for eye color was 0.98 for MZ twins (2167 pairs), whereas the mean probability for DZ twins was 0.46 (2537 pairs), suggesting very high heritability for eye color. For 294 DZ twin pairs genome-wide marker data were available. The probability of being alike for eye color was regressed on the average amount of IBD sharing. We found a peak LOD score of 2.9 at chromosome 15q, overlapping with the region recently implicated for absolute ratings of eye color in Australian twins [Zhu, G., Evans, D. M., Duffy, D. L., Montgomery, G. W., Medland, S. E., Gillespie, N. A., Ewen, K. R., Jewell, M., Liew, Y. W., Hayward, N. K., Sturm, R. A., Trent, J. M., and Martin, N. G. (2004). Twin Res. 7:197-210] and containing the OCA2 gene, which is the major candidate gene for eye color [Sturm, R. A., Teasdale, R. D., and Box, N. F. (2001). Gene 277:49-62]. Our results demonstrate that comparative measures on relatives can be used in genetic linkage analysis.
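
A hedged sketch of how such a probability of being alike could be formed as a weighted average of repeated ratings; the numeric mapping of the three response categories and the equal weights are assumptions, not taken from the paper:

```python
# Turn repeated similarity ratings from several respondents/time points into a
# single probability of being alike for eye color. Mapping and weights assumed.

SCORE = {"not at all alike": 0.0, "somewhat alike": 0.5, "completely alike": 1.0}

def prob_alike(ratings, weights=None):
    vals = [SCORE[r] for r in ratings]
    if weights is None:
        weights = [1.0] * len(vals)
    return sum(v * w for v, w in zip(vals, weights)) / sum(weights)

print(prob_alike(["completely alike", "completely alike", "somewhat alike"]))  # ~0.83
```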

Relevance: 100.00%

Abstract:

Occupational standards concerning allowable concentrations of chemical compounds in the ambient air of workplaces have been established in several countries worldwide. With the integration of the European Union (EU), there has been a need to establish harmonised Occupational Exposure Limits (OEL). The European Commission Directive 95/320/EC of 12 July 1995 tasked a Scientific Committee for Occupational Exposure Limits (SCOEL) with proposing, based on scientific data and where appropriate, occupational limit values which may include the 8-h time-weighted average (TWA), short-term limits/excursion limits (STEL) and Biological Limit Values (BLVs). In 2000, the European Union issued a list of 62 chemical substances with Occupational Exposure Limits. Of these, 25 substances received a skin notation, indicating that toxicologically significant amounts may be taken up via the skin. For such substances, monitoring of concentrations in ambient air may not be sufficient, and biological monitoring strategies appear of potential importance in the medical surveillance of exposed workers. Recent progress has been made with respect to formulation of a strategy related to health-based BLVs. (c) 2005 Elsevier Ireland Ltd. All rights reserved.
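
A minimal sketch of the standard 8-h time-weighted average calculation, with invented shift measurements:

```python
# 8-h TWA exposure: concentrations measured over sub-periods of the shift are
# weighted by their durations and divided by the 8-h reference period.

def twa_8h(samples):
    """samples: list of (concentration in mg/m^3, duration in hours)."""
    return sum(c * t for c, t in samples) / 8.0

shift = [(2.0, 3.0), (0.5, 4.0), (0.0, 1.0)]    # 3 h at 2.0, 4 h at 0.5, 1 h unexposed
print(f"8-h TWA = {twa_8h(shift):.2f} mg/m^3")  # 1.00 mg/m^3
```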

Relevance: 100.00%

Abstract:

Calibration of a groundwater model requires that hydraulic properties be estimated throughout a model domain. This generally constitutes an underdetermined inverse problem, for which a solution can only be found when some kind of regularization device is included in the inversion process. Inclusion of regularization in the calibration process can be implicit, for example through the use of zones of constant parameter value, or explicit, for example through solution of a constrained minimization problem in which parameters are made to respect preferred values, or preferred relationships, to the degree necessary for a unique solution to be obtained. The cost of uniqueness is this: no matter which regularization methodology is employed, the inevitable consequence of its use is a loss of detail in the calibrated field. This, in turn, can lead to erroneous predictions made by a model that is ostensibly well calibrated. Information made available as a by-product of the regularized inversion process allows the reasons for this loss of detail to be better understood. In particular, it is easily demonstrated that the estimated value for a hydraulic property at any point within a model domain is, in fact, a weighted average of the true hydraulic property over a much larger area. This averaging process causes loss of resolution in the estimated field. Where hydraulic conductivity is the hydraulic property being estimated, high averaging weights exist in areas that are strategically disposed with respect to measurement wells, while other areas may contribute very little to the estimated hydraulic conductivity at any point within the model domain, thus possibly making the detection of hydraulic conductivity anomalies in these latter areas almost impossible. A study of the post-calibration parameter field covariance matrix allows further insights into the loss of system detail incurred through the calibration process to be gained. A comparison of pre- and post-calibration parameter covariance matrices shows that the latter often possess a much smaller spectral bandwidth than the former. It is also demonstrated that, as an inevitable consequence of the fact that a calibrated model cannot replicate every detail of the true system, model-to-measurement residuals can show a high degree of spatial correlation, a fact which must be taken into account when assessing these residuals either qualitatively, or quantitatively in the exploration of model predictive uncertainty. These principles are demonstrated using a synthetic case in which spatial parameter definition is based on pilot points, and calibration is implemented using both zones of piecewise constancy and constrained minimization regularization. (C) 2005 Elsevier Ltd. All rights reserved.
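
A generic sketch of why this averaging happens, using a Tikhonov-regularised linear inversion in which the estimate equals the resolution matrix times the true parameters (an illustration only, not the pilot-point calibration used in the paper):

```python
import numpy as np

# For a Tikhonov-regularised linear inverse problem, p_hat = G d with d = J p_true
# (noise ignored), so p_hat = R p_true where R = G J is the resolution matrix.
# Each estimated parameter is therefore a weighted combination of the true
# parameters, with weights given by the corresponding row of R; spread-out rows
# and trace(R) << n_par mean lost spatial detail.

rng = np.random.default_rng(1)
n_obs, n_par, lam = 8, 20, 1.0
J = rng.normal(size=(n_obs, n_par))                        # sensitivities (Jacobian)
G = np.linalg.solve(J.T @ J + lam * np.eye(n_par), J.T)    # regularised generalised inverse
R = G @ J                                                  # resolution matrix

print(f"trace(R) = {R.trace():.1f} of {n_par} parameters effectively resolved")
print("weights forming the estimate of parameter 0:", np.round(R[0], 2))
```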

Relevance: 100.00%

Abstract:

Texture segmentation techniques are diversified by the existence of several approaches. In this paper, we propose fuzzy features for the segmentation of texture images. For this purpose, a membership function is constructed to represent the effect of the neighboring pixels on the current pixel in a window. Using these membership function values, we find a feature for the current pixel by a weighted average method. This is repeated for all pixels in the window, treating each pixel in turn as the current pixel. Using these fuzzy-based features, we derive three descriptors, namely maximum, entropy, and energy, for each window. To segment the texture image, the modified mountain clustering, which is unsupervised, and fuzzy c-means clustering have been used. The performance of the proposed features is compared with that of fractal features.
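
A hedged sketch of the weighted-average fuzzy feature and the three window descriptors; the exponential membership function and the normalisation used for entropy and energy are assumptions, not the paper's definitions:

```python
import numpy as np

# Within a window, each neighbour's membership expresses how strongly it supports
# the current pixel, and the feature for that pixel is the membership-weighted
# average of the neighbours. Descriptors (maximum, entropy, energy) are then
# computed over the window's features.

def fuzzy_feature(window: np.ndarray, r: int, c: int, sigma: float = 10.0) -> float:
    centre = window[r, c]
    mu = np.exp(-((window - centre) ** 2) / (2 * sigma ** 2))   # membership of each neighbour
    return float((mu * window).sum() / mu.sum())                # weighted-average feature

def window_descriptors(window: np.ndarray) -> tuple:
    feats = np.array([[fuzzy_feature(window, r, c)
                       for c in range(window.shape[1])]
                      for r in range(window.shape[0])])
    p = feats / feats.sum()                                     # normalise for entropy/energy
    entropy = float(-(p * np.log2(p + 1e-12)).sum())
    energy = float((p ** 2).sum())
    return feats.max(), entropy, energy

w = np.array([[10., 12., 11.], [13., 50., 12.], [11., 12., 10.]])
print(window_descriptors(w))    # (maximum, entropy, energy) for this window
```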

Relevance: 40.00%

Abstract:

This study has three main objectives. First, it develops a generalization of the commonly used EKS method for multilateral price comparisons. It is shown that the EKS system can be generalized so that weights can be attached to each of the link comparisons used in the EKS computations. These weights can account for differing levels of reliability of the underlying binary comparisons. Second, various reliability measures and corresponding weighting schemes are presented and their merits discussed. Finally, these new methods are applied to an international data set of manufacturing prices from the ICOP project. Although the weighted EKS method is theoretically superior, its empirical impact appears to be generally small compared with the unweighted EKS, and the impact is larger when the method is applied at lower levels of aggregation. Finally, the importance of using sector-specific PPPs in assessing relative levels of manufacturing productivity is indicated.
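
A minimal sketch of a weighted EKS (GEKS) aggregation; for simplicity the reliability weight is attached to each link country rather than to each binary comparison as in the paper, and the PPP matrix is invented:

```python
import numpy as np

# Weighted EKS: the multilateral PPP between countries j and k is a weighted
# geometric mean of all indirect links j -> l -> k, with weights reflecting the
# reliability of each link. Equal weights reproduce the plain EKS result.

def weighted_eks(ppp: np.ndarray, w: np.ndarray) -> np.ndarray:
    """ppp[j, k]: binary PPP of country k relative to j; w[l]: reliability weight of link l."""
    n = ppp.shape[0]
    wn = w / w.sum()
    out = np.ones_like(ppp)
    for j in range(n):
        for k in range(n):
            out[j, k] = np.prod((ppp[j, :] * ppp[:, k]) ** wn)
    return out

ppp = np.array([[1.0, 2.0, 0.5],
                [0.5, 1.0, 0.3],
                [2.0, 3.3, 1.0]])          # not transitive: 2.0 * 0.3 != 0.5
print(weighted_eks(ppp, np.array([1.0, 1.0, 1.0])))
```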

Relevance: 40.00%

Abstract:

Purpose: Although manufacturers of the bicycle power monitoring devices SRM and Power Tap (PT) claim accuracy to within 2.5%, there are limited scientific data available in support. The purpose of this investigation was to assess the accuracy of SRM and PT under different conditions. Methods: First, 19 SRM were calibrated, raced for 11 months, and retested using a dynamic CALRIG (50-1000 W at 100 rpm). Second, using the same procedure, five PT were repeat tested on alternate days. Third, the most accurate SRM and PT were tested for the influence of cadence (60, 80, 100, 120 rpm), temperature (8 and 21 degrees C) and time (1 h at approximately 300 W) on accuracy. Finally, the same SRM and PT were downloaded and compared after random cadence and gear surges using the CALRIG and on a training ride. Results: The mean error scores for SRM and PT factory calibration over a range of 50-1000 W were 2.3 +/- 4.9% and -2.5 +/- 0.5%, respectively. A second set of trials provided stable results for 15 calibrated SRM after 11 months (-0.8 +/- 1.7%), and follow-up testing of all PT units confirmed these findings (-2.7 +/- 0.1%). Accuracy for SRM and PT was not largely influenced by time and cadence; however, power output readings were noticeably influenced by temperature (5.2% for SRM and 8.4% for PT). During field trials, SRM average and max power were 4.8% and 7.3% lower, respectively, compared with PT. Conclusions: When operated according to manufacturers' instructions, both SRM and PT offer the coach, athlete, and sport scientist the ability to accurately monitor power output in the lab and the field. Calibration procedures matching performance tests (duration, power, cadence, and temperature) are, however, advised as the error associated with each unit may vary.
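
An illustrative sketch of the kind of error score reported above: percent deviation of the meter reading from the calibration rig's reference power at each load step, summarised as mean +/- SD (readings invented):

```python
import numpy as np

# Percent error of a power meter against a dynamic calibration rig across load steps.

rig   = np.array([50, 100, 250, 500, 750, 1000], dtype=float)   # reference power (W)
meter = np.array([51, 103, 256, 512, 771, 1028], dtype=float)   # device reading (W)

error_pct = (meter - rig) / rig * 100.0
print(f"mean error = {error_pct.mean():.1f}% +/- {error_pct.std(ddof=1):.1f}%")
```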

Relevance: 30.00%

Abstract:

The power output achieved at peak oxygen consumption (VO2 peak) and the time this power can be maintained (i.e., Tmax) have been used in prescribing high-intensity interval training. In this context, the present study examined temporal aspects of the VO2 response to exercise at the cycling power output at which well trained cyclists achieve their VO2 peak (i.e., Pmax). Following a progressive exercise test to determine VO2 peak, 43 well trained male cyclists (M age = 25 years, SD = 6; M mass = 75 kg, SD = 7; M VO2 peak = 64.8 ml.kg(-1).min(-1), SD = 5.2) performed two Tmax tests 1 week apart. Values expressed for each participant are means and standard deviations of these two tests. Participants achieved a mean VO2 peak during the Tmax test after 176 s (SD = 40; M = 74% of Tmax, SD = 12) and maintained it for 66 s (SD = 39; M = 26% of Tmax, SD = 12). Additionally, they attained 95% of VO2 peak after a mean of 147 s (SD = 31; M = 62% of Tmax, SD = 8) and maintained it for 95 s (SD = 38; M = 38% of Tmax, SD = 8). These results suggest that 60-70% of Tmax is an appropriate exercise duration for a population of well trained cyclists to attain VO2 peak during exercise at Pmax. However, due to intraparticipant variability in the temporal aspects of the VO2 response to exercise at Pmax, future research is needed to examine whether individual high-intensity interval training programs for well trained endurance athletes might best be prescribed according to an athlete's individual VO2 response to exercise at Pmax.

Relevance: 30.00%

Abstract:

Research in conditioning (all the processes of preparation for competition) has used group research designs, where multiple athletes are observed at one or more points in time. However, empirical reports of large inter-individual differences in response to conditioning regimens suggest that applied conditioning research would greatly benefit from single-subject research designs. Single-subject research designs allow us to find out the extent to which a specific conditioning regimen works for a specific athlete, as opposed to the average athlete, who is the focal point of group research designs. The aim of the following review is to outline the strategies and procedures of single-subject research as they pertain to the assessment of conditioning for individual athletes. The four main experimental designs in single-subject research are: the AB design, reversal (withdrawal) designs and their extensions, multiple baseline designs and alternating treatment designs. Visual and statistical analyses commonly used to analyse single-subject data, and their advantages and limitations, are discussed. Modelling of multivariate single-subject data using techniques such as dynamic factor analysis and structural equation modelling may identify individualised models of conditioning, leading to better prediction of performance. Despite problems associated with data analyses in single-subject research (e.g. serial dependency), sports scientists should use single-subject research designs in applied conditioning research to understand how well an intervention (e.g. a training method) works and to predict performance for a particular athlete.

Relevance: 30.00%

Abstract:

Statistical tests of Load-Unload Response Ratio (LURR) signals are carried out in order to verify the statistical robustness of the previous studies using the Lattice Solid Model (MORA et al., 2002b). In each case 24 groups of samples with the same macroscopic parameters (tidal perturbation amplitude A, period T and tectonic loading rate k) but different particle arrangements are employed. Results of uni-axial compression experiments show that before the normalized time of catastrophic failure, the ensemble average LURR value rises significantly, in agreement with the observations of high LURR prior to large earthquakes. In shearing tests, two parameters are found to control the correlation between earthquake occurrence and tidal stress. One is A/(kT), which controls the phase shift between the peak seismicity rate and the peak amplitude of the perturbation stress. With an increase of this parameter, the phase shift is found to decrease. The other parameter, AT/k, controls the height of the probability density function (Pdf) of modeled seismicity. As this parameter increases, the Pdf becomes sharper and narrower, indicating strong triggering. Statistical studies of LURR signals in shearing tests also suggest that, except in strong triggering cases, where LURR cannot be calculated due to poor data in unloading cycles, the larger events are more likely to occur in higher LURR periods than the smaller ones, supporting the LURR hypothesis.
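
A hedged sketch of one common way a LURR value can be computed, as the ratio of summed Benioff strain released during loading to that released during unloading; the synthetic event list and the loading/unloading classification are invented:

```python
import numpy as np

# LURR as the ratio of a seismic response measure (here summed Benioff strain,
# i.e. the square root of event energy) accumulated while the perturbed stress is
# increasing (loading) to that accumulated while it is decreasing (unloading).
# When the unloading cycles contain no events the ratio is undefined.

def lurr(energies, loading_flags):
    """energies: event energies; loading_flags: True if the event occurred while loading."""
    e = np.sqrt(np.asarray(energies, dtype=float))     # Benioff strain per event
    flags = np.asarray(loading_flags)
    load, unload = e[flags].sum(), e[~flags].sum()
    return load / unload if unload > 0 else np.inf

events  = [1e10, 3e9, 5e9, 2e10, 8e9, 1e9]
loading = [True, True, False, True, False, True]
print(f"LURR = {lurr(events, loading):.2f}")           # > 1 suggests an enhanced loading response
```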