119 results for Shannon's measure of uncertainty
Abstract:
Our first study develops a measure of appetitive motivation, and our second study compares several measures of Gray's (1987) behavioural activation system (BAS) in the prediction of the surface scales of personality. In particular, we were interested in determining the utility of the new appetitive motivation scale and Dickman's functional impulsivity scale. In comparison to other well-known measures, both scales were generally good predictors. We conclude that the appetitive motivation scale is a promising measure of BAS based upon construct validation. Contrary to previous studies, which have suggested that BAS is a generally poor predictor of the surface scales of personality, we found appetitive motivation to be an important predictor of personality in general. Interestingly, the scale was also predictive of scores on the Baddeley reasoning test. (C) 2003 Elsevier Ltd. All rights reserved.
Abstract:
Stochastic simulation is a recognised tool for quantifying the spatial distribution of geological uncertainty and risk in earth science and engineering. Metals mining is an area where simulation technologies are extensively used; however, applications in the coal mining industry have been limited. This is largely due to the lack of a systematic demonstration of the problem-solving capabilities these techniques offer in coal mining. This paper presents two broad and technically distinct areas of application in coal mining. The first deals with the use of simulation in the quantification of uncertainty in coal seam attributes and risk assessment to assist coal resource classification, and drillhole spacing optimisation to meet pre-specified risk levels at a required confidence. The second application presents the use of stochastic simulation in the quantification of fault risk, an area of particular interest to underground coal mining, and documents the performance of the approach. The examples presented demonstrate the advantages and positive contribution stochastic simulation approaches bring to the coal mining industry.
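The drillhole-spacing idea above can be illustrated with a toy Monte Carlo sketch (not the paper's method): equally probable realizations of a seam attribute at a block stand in for a full conditional geostatistical simulation, and the block is classified by whether the simulated spread meets a pre-specified risk level. The function names, the 15% tolerance and the 95% confidence level are illustrative assumptions.

```python
import random
import statistics

def simulate_block_thickness(mean, sd, n_realizations, seed=0):
    """Equally probable realizations of seam thickness at one block
    (a stand-in for a full conditional geostatistical simulation)."""
    rng = random.Random(seed)
    return [rng.gauss(mean, sd) for _ in range(n_realizations)]

def relative_precision(realizations, confidence=0.95):
    """Half-width of the central `confidence` interval of the
    realizations, relative to their mean."""
    xs = sorted(realizations)
    n = len(xs)
    lo = xs[int((1 - confidence) / 2 * n)]
    hi = xs[int((1 + confidence) / 2 * n) - 1]
    return (hi - lo) / 2 / statistics.mean(xs)

def classify(realizations, tolerance=0.15):
    """Label a block by whether the simulated uncertainty meets the
    pre-specified risk criterion (tolerance at the given confidence)."""
    return "measured" if relative_precision(realizations) <= tolerance else "indicated"
```

Closer drillhole spacing shrinks the conditional spread (sd here), upgrading the classification; spacing can then be widened until blocks just satisfy the criterion.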
Abstract:
Study Design. A multicenter, randomized controlled trial with unblinded treatment and blinded outcome assessment was conducted. The treatment period was 6 weeks with follow-up assessment after treatment, then at 3, 6, and 12 months. Objectives. To determine the effectiveness of manipulative therapy and a low-load exercise program for cervicogenic headache when used alone and in combination, as compared with a control group. Summary of Background Data. Headaches arising from cervical musculoskeletal disorders are common. Conservative therapies are recommended as the first treatment of choice. Evidence for the effectiveness of manipulative therapy is inconclusive and available only for the short term. There is no evidence for exercise, and no study has investigated the effect of combined therapies for cervicogenic headache. Methods. In this study, 200 participants who met the diagnostic criteria for cervicogenic headache were randomized into four groups: manipulative therapy group, exercise therapy group, combined therapy group, and a control group. The primary outcome was a change in headache frequency. Other outcomes included changes in headache intensity and duration, the Northwick Park Neck Pain Index, medication intake, and patient satisfaction. Physical outcomes included pain on neck movement, upper cervical joint tenderness, a craniocervical flexion muscle test, and a photographic measure of posture. Results. There were no differences in headache-related and demographic characteristics between the groups at baseline. The loss to follow-up evaluation was 3.5%. At the 12-month follow-up assessment, both manipulative therapy and specific exercise had significantly reduced headache frequency and intensity, as well as neck pain, and the effects were maintained (P < 0.05 for all). Combined therapy was not significantly superior to either therapy alone, but 10% more patients gained relief with the combination.
Effect sizes were at least moderate and clinically relevant. Conclusion. Manipulative therapy and exercise can reduce the symptoms of cervicogenic headache, and the effects are maintained.
Abstract:
Physical education, now often explicitly identified with health in contemporary school curricula, continues to be implicated in the (re)production of the 'cult of the body'. We argue that HPE is a form of health promotion that attempts to 'make' healthy citizens of young people in the context of the 'risk society'. In our view there is still work to be done in understanding how and why physical education (as HPE) continues to be implicated in the reproduction of values associated with the cult of the body. We are keen to understand why HPE continues to be ineffective in helping young people gain some measure of analytic and embodied 'distance' from the problematic aspects of the cult of the body. This paper offers an analysis of this enduring issue by using some contemporary analytic discourses including 'governmentality', 'risk society' and the 'new public health'.
Abstract:
The differences in spectral shape resolution abilities among cochlear implant (CI) listeners, and between CI and normal-hearing (NH) listeners, when listening with the same number of channels (12), were investigated. In addition, the effect of the number of channels on spectral shape resolution was examined. The stimuli were rippled noise signals with various ripple frequency-spacings. An adaptive 4IFC procedure was used to determine the threshold for resolvable ripple spacing, which was the spacing at which an interchange in peak and valley positions could be discriminated. The results showed poorer spectral shape resolution in CI compared to NH listeners (average thresholds of approximately 3000 and 400 Hz, respectively), and wide variability among CI listeners (range of approximately 800 to 8000 Hz). There was a significant relationship between spectral shape resolution and vowel recognition. The spectral shape resolution thresholds of NH listeners increased as the number of channels increased from 1 to 16, while the CI listeners showed a performance plateau at 4-6 channels, which is consistent with previous results using speech recognition measures. These results indicate that this test may provide a measure of CI performance which is time efficient and non-linguistic, and therefore, if verified, may provide a useful contribution to the prediction of speech perception in adults and children who use CIs.
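The adaptive tracking idea can be sketched generically. The abstract does not specify the adaptive rule, so this assumes a common 2-down/1-up transformed staircase (converging near the 70.7%-correct point) and invented function names; it is a sketch, not the study's procedure.

```python
import math
import random

def staircase_threshold(p_correct, start=3000.0, factor=1.5,
                        reversals_needed=8, seed=0):
    """Adaptive 2-down/1-up track over ripple frequency-spacing (Hz).
    `p_correct(spacing)` gives the probability the listener detects the
    peak/valley interchange at that spacing; smaller spacing is harder.
    The threshold is the geometric mean of the reversal spacings."""
    rng = random.Random(seed)
    spacing, correct_run, direction = start, 0, 0
    reversal_spacings = []
    while len(reversal_spacings) < reversals_needed:
        correct = rng.random() < p_correct(spacing)
        if correct:
            correct_run += 1
            if correct_run == 2:        # two correct in a row -> make harder
                correct_run = 0
                if direction == -1:
                    reversal_spacings.append(spacing)
                direction = 1
                spacing /= factor
        else:                           # one error -> make easier
            correct_run = 0
            if direction == 1:
                reversal_spacings.append(spacing)
            direction = -1
            spacing *= factor
    return math.exp(sum(math.log(s) for s in reversal_spacings)
                    / len(reversal_spacings))
```

With a simulated listener whose psychometric function rises at some true threshold, the track converges to that threshold's neighbourhood.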
Abstract:
Nearest-neighbour balance is considered a desirable property for an experiment to possess in situations where experimental units are influenced by their neighbours. This paper introduces a measure of the degree of nearest-neighbour balance of a design. The measure is used in an algorithm which generates nearest-neighbour balanced designs and is readily modified to obtain designs with various types of nearest-neighbour balance. Nearest-neighbour balanced designs are produced for a wide class of parameter settings, and in particular for those settings for which such designs cannot be found by existing direct combinatorial methods. In addition, designs with unequal row and column sizes, and designs with border plots, are constructed using the approach presented here.
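A minimal sketch of one plausible nearest-neighbour balance measure (the paper's exact definition is not given in the abstract): count how often each unordered pair of distinct treatments occupies neighbouring plots, and score the design by the squared deviations of those counts from their mean.

```python
from collections import Counter
from itertools import combinations

def nn_pair_counts(layout):
    """Count occurrences of each unordered pair of distinct treatments
    in neighbouring plots of a one-dimensional layout."""
    counts = Counter()
    for a, b in zip(layout, layout[1:]):
        if a != b:
            counts[frozenset((a, b))] += 1
    return counts

def nn_imbalance(layout):
    """Sum of squared deviations of the pair counts from their mean over
    all treatment pairs; 0 indicates perfect nearest-neighbour balance."""
    treatments = sorted(set(layout))
    counts = nn_pair_counts(layout)
    pairs = [frozenset(p) for p in combinations(treatments, 2)]
    mean = sum(counts[p] for p in pairs) / len(pairs)
    return sum((counts[p] - mean) ** 2 for p in pairs)
```

A generating algorithm of the kind described could then search over interchanges of plots, accepting swaps that reduce this imbalance toward zero.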
Fast Structure-Based Assignment of 15N HSQC Spectra of Selectively 15N-Labeled Paramagnetic Proteins
Abstract:
A novel strategy for fast NMR resonance assignment of N-15 HSQC spectra of proteins is presented. It requires the structure coordinates of the protein, a paramagnetic center, and one or more residue-selectively N-15-labeled samples. Comparison of sensitive undecoupled N-15 HSQC spectra recorded for paramagnetic and diamagnetic samples yields data for every cross-peak on pseudocontact shift, paramagnetic relaxation enhancement, cross-correlation between Curie-spin and dipole-dipole relaxation, and residual dipolar coupling. Comparison of these four different paramagnetic quantities with predictions from the three-dimensional structure simultaneously yields the resonance assignment and the anisotropy of the susceptibility tensor of the paramagnetic center. The method is demonstrated with the 30 kDa complex between the N-terminal domain of the epsilon subunit and the theta subunit of Escherichia coli DNA polymerase III. The program PLATYPUS was developed to perform the assignment, provide a measure of reliability of the assignment, and determine the susceptibility tensor anisotropy.
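The assignment-by-comparison idea can be sketched as a toy exhaustive matcher (not the PLATYPUS algorithm itself): observed paramagnetic quantities for each cross-peak are matched to structure-based predictions by minimizing a summed squared mismatch. The residue names and values below are invented for illustration.

```python
from itertools import permutations

def best_assignment(observed, predicted):
    """Match observed cross-peaks to residues by minimizing the summed
    squared mismatch over paramagnetic quantities (e.g. pseudocontact
    shift, PRE). Exhaustive search over permutations: feasible only for
    the handful of residues in a residue-selectively labeled sample."""
    residues = list(predicted)
    best, best_cost = None, float("inf")
    for perm in permutations(residues):
        cost = sum(
            sum((o - p) ** 2 for o, p in zip(obs, predicted[res]))
            for obs, res in zip(observed, perm)
        )
        if cost < best_cost:
            best, best_cost = list(perm), cost
    return best, best_cost
```

The gap between the best and second-best total cost could serve as a simple reliability score of the kind PLATYPUS reports.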
Abstract:
The coefficient of variation (CV; the standard deviation of response time divided by the mean response time, RT) is a measure of response time variability that corrects for differences in mean RT (Segalowitz & Segalowitz, 1993). A positive correlation between decreasing mean RTs and CVs (rCV-RT) has been proposed as an indicator of L2 automaticity, and more generally as an index of processing efficiency. The current study evaluates this claim by examining lexical decision performance by individuals from three levels of English proficiency (Intermediate ESL, Advanced ESL and L1 controls) on stimuli from four levels of item familiarity, as defined by frequency of occurrence. A three-phase model of skill development defined by changing rCV-RT values was tested. Results showed that RTs and CVs systematically decreased as a function of increasing proficiency and frequency levels, with the rCV-RT serving as a stable indicator of individual differences in lexical decision performance. The rCV-RT and automaticity/restructuring account is discussed in light of the findings. The CV is also evaluated as a more general quantitative index of processing efficiency in the L2.
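The CV and rCV-RT statistics are straightforward to compute; a minimal sketch (function names are our own):

```python
import statistics

def cv(rts):
    """Coefficient of variation: the SD of an individual's response
    times divided by their mean RT."""
    return statistics.stdev(rts) / statistics.mean(rts)

def pearson_r(xs, ys):
    """Pearson product-moment correlation."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

def r_cv_rt(per_person_rts):
    """rCV-RT: correlation across participants between mean RT and CV.
    A positive value means faster responders are also proportionally
    less variable, the proposed automaticity signature."""
    means = [statistics.mean(r) for r in per_person_rts]
    cvs = [cv(r) for r in per_person_rts]
    return pearson_r(means, cvs)
```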
Abstract:
Multi-frequency bioimpedance analysis (MFBIA) was used to determine the impedance, reactance and resistance of 103 lamb carcasses (17.1-34.2 kg) immediately after slaughter and evisceration. Carcasses were halved, frozen and one half subsequently homogenized and analysed for water, crude protein and fat content. Three measures of carcass length were obtained. Diagonal length between the electrodes (right side biceps femoris to left side of neck) explained a greater proportion of the variance in water mass than did estimates of spinal length and was selected for use in the index L²/Z to predict the mass of chemical components in the carcass. Use of impedance (Z) measured at the characteristic frequency (Zc) instead of at 50 kHz (Z50) did not improve the power of the model to predict the mass of water, protein or fat in the carcass. While L²/Z50 explained a significant proportion of the variation in the masses of body water (r² = 0.64), protein (r² = 0.34) and fat (r² = 0.35), its inclusion in multi-variate indices offered small or no increases in predictive capacity when hot carcass weight (HCW) and a measure of rib fat-depth (GR) were present in the model. Optimized equations were able to account for 65-90% of the variance observed in the weight of chemical components in the carcass. It is concluded that single-frequency impedance data do not provide better prediction of carcass composition than can be obtained from measures of HCW and GR. Indices of intracellular water mass derived from impedance at zero frequency and the characteristic frequency explained a similar proportion of the variance in carcass protein mass as did the index L²/Z50.
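The impedance index and its calibration against chemically determined masses can be sketched as follows. The cylinder relation V = ρL²/Z is what motivates the index; the calibration step is ordinary least squares (illustrative only, not the study's fitted equations).

```python
def bia_index(length_cm, impedance_ohm):
    """Impedance index L^2/Z. For a uniform conducting cylinder,
    V = rho * L^2 / Z, so the index is proportional to the volume of
    conducting (water-bearing) tissue."""
    return length_cm ** 2 / impedance_ohm

def fit_line(xs, ys):
    """Ordinary least squares y = a + b*x, used to calibrate the index
    against chemically determined water (or protein, fat) mass."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b
```

Multi-variate versions would simply add HCW and GR as further predictors alongside the index.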
Abstract:
Bioelectrical impedance analysis (BIA) offers the potential for a simple, portable and relatively inexpensive technique for the in vivo measurement of total body water (TBW). The potential of BIA as a technique of body composition analysis is even greater when one considers that body water can be used as a surrogate measure of lean body mass. However, BIA has not found universal acceptance even with the introduction of multi-frequency BIA (MFBIA) which, potentially, may improve the predictive accuracy of the measurement. There are a number of reasons for this lack of acceptance, although perhaps the major reason is that no single algorithm has been developed which can be applied to all subject groups. This may be due, in part, to the commonly used wrist-to-ankle protocol, which is not indicated by the basic theory of bioimpedance, where the body is considered as five interconnecting cylinders. Several workers have suggested the use of segmental BIA measurements to provide a protocol more in keeping with basic theory. However, there are other difficulties associated with the application of BIA, such as effects of hydration and ion status, posture and fluid distribution. A further putative advantage of MFBIA is the independent assessment not only of TBW but also of the extracellular fluid volume (ECW), hence heralding the possibility of being able to assess the fluid distribution between these compartments. Results of studies in this area have been, to date, mixed. Whereas strong relationships of impedance values at low frequencies with ECW, and at high frequencies with TBW, have been reported, changes in impedance are not always well correlated with changes in the size of the fluid compartments (assessed by alternative and more direct means) in pathological conditions. Furthermore, the theoretical advantages of Cole-Cole modelling over selected frequency prediction have not always been apparent.
This review will consider the principles, methodology and applications of BIA. The principles and methodology will be considered in relation to the basic theory of BIA and difficulties experienced in its application. The relative merits of single and multiple frequency BIA will be addressed, with particular attention to the latter's role in the assessment of compartmental fluid volumes. (C) 1998 Elsevier Science Ltd. All rights reserved.
Abstract:
The common approach of bioelectrical impedance analysis to estimate body water uses a wrist-to-ankle methodology which, although not indicated by theory, has the advantage of ease of application, particularly for clinical studies involving patients with debilitating diseases. A number of authors have suggested the use of a segmental protocol, in which the impedances of the trunk and limbs are measured separately, to provide a methodology more in keeping with basic theory. The segmental protocol has not, however, been generally adopted, partly because of the increased complexity involved in its application, and partly because studies comparing the two methodologies have not clearly demonstrated a significant improvement from the segmental methodology. We have conducted a small pilot study involving ten subjects to investigate the efficacy of the two methodologies in a group of normal subjects. The study did not require an independent measure of body water, by, for example, isotope dilution, as the subjects were maintained in a state of constant hydration with only the distribution between limbs and trunk changing as a result of change in posture. The results demonstrate a significant difference between the two methodologies in predicting the expected constancy of body water in this study, with the segmental methodology indicating a mean percentage change in extracellular water of -2.2%, which was not significantly different from the expected null result, whereas the wrist-to-ankle methodology indicated a mean percentage change in extracellular water of -6.6%. This is significantly different from the null result, and from the value obtained from the segmental methodology (p = 0.006). Similar results were obtained using estimates of total body water from the two methodologies. (C) 1998 Elsevier Science Ltd. All rights reserved.
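The series-cylinder argument for the segmental protocol can be illustrated numerically (toy geometry and values, not the study's data): shifting fluid from a limb into the trunk leaves total volume unchanged, yet moves the whole-path L²/Z index, while per-segment inversion does not.

```python
def cylinder_z(rho, length, volume):
    """Impedance of a uniform cylinder: Z = rho * L^2 / V."""
    return rho * length ** 2 / volume

def wrist_to_ankle_index(segments):
    """Whole-path index L^2/Z with the segment impedances summed in
    series, as in the conventional wrist-to-ankle protocol."""
    total_l = sum(length for _, length, _ in segments)
    total_z = sum(cylinder_z(*s) for s in segments)
    return total_l ** 2 / total_z

def segmental_volume(segments):
    """Segmental protocol: invert Z = rho*L^2/V for each segment from
    its own length and impedance, then sum the volumes."""
    return sum(rho * length ** 2 / cylinder_z(rho, length, volume)
               for rho, length, volume in segments)
```

In this demo each segment's Z is computed from its true volume, so the segmental inversion is exact by construction; the point is that the whole-path index changes when fluid is merely redistributed, mirroring the posture effect the pilot study exploited.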
Abstract:
Transpiration efficiency, W, the ratio of plant carbon produced to water transpired, and carbon isotope discrimination of leaf dry matter, Delta(d), were measured together on 30 lines of the C-4 species Sorghum bicolor in the glasshouse and on eight lines grown in the field. In the glasshouse, the mean W observed was 4.9 mmol C mol(-1) H2O and the range was 0.8 mmol C mol(-1) H2O. The mean Delta(d) was 3.0 parts per thousand and the observed range was 0.4 parts per thousand. In the field, the mean W was lower at 2.8 mmol C mol(-1) H2O and the mean Delta(d) was 4.6 parts per thousand. Significant positive correlations between W and Delta(d) were observed for plants grown in the glasshouse and in the field. The observed correlations were consistent with theory, opposite to those for C-3 species, and showed that variation in Delta(d) was an integrated measure of long-term variation in the ratio of intercellular to ambient CO2 partial pressure, p(i)/p(a). Detailed gas exchange measurements of carbon isotope discrimination during CO2 uptake, Delta(A), and of p(i)/p(a) were made on leaves of eight S. bicolor lines. The observed relationship between Delta(A) and p(i)/p(a) was linear, with a negative slope of 3.7 parts per thousand in Delta(A) for a unit change in p(i)/p(a). The slope of this linear relationship between Delta(A) and p(i)/p(a) in C-4 species depends on the leakiness of the CO2 concentrating mechanism of the C-4 pathway. We estimated the leakiness (defined as the fraction of CO2 released in the bundle sheath by C-4 acid decarboxylations which is lost by leakage) to be 0.2. We conclude that, although the variation in Delta(d) observed in the 30 lines of S. bicolor is smaller than that commonly observed in C-3 species, it also reflects variation in transpiration efficiency, W. Among the eight lines examined in detail, and in the environments used, there was considerable genotype x environment interaction.
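The slope-to-leakiness step can be made explicit. Assuming the standard linearized C-4 discrimination model Delta_A = a + (b4 + phi*(b3 - s) - a) * p(i)/p(a), with typical literature fractionation constants a = 4.4, b4 = -5.7, b3 = 30 and s = 1.8 parts per thousand (these constants are our assumption, not stated in the abstract), the reported slope of -3.7 gives phi near 0.2:

```python
def ols_slope(xs, ys):
    """Least-squares slope of y on x, e.g. Delta_A against pi/pa."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def leakiness_from_slope(slope, a=4.4, b4=-5.7, b3=30.0, s=1.8):
    """Invert the linearized C-4 model
        Delta_A = a + (b4 + phi*(b3 - s) - a) * pi/pa
    for the leakiness phi. Default fractionation constants (per mil)
    are typical literature values, assumed here for illustration."""
    return (slope + a - b4) / (b3 - s)
```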
Abstract:
The present study examined the effects of work control and job demands on employee adjustment and work performance using a multidimensional measure of work control (assessing levels of task control, decision control and work scheduling control). It was proposed that the negative effects of job demands on employee adjustment would be moderated by high levels of task control. It was also proposed that there would be evidence of main effects of both job demands and work control (particularly task-related levels of control) on employee adjustment. To test these predictions, a study of 135 university employees holding administrative positions was undertaken. Methodological improvements over previous research included the use of both self-reported adjustment measures and supervisor ratings of work performance as outcome variables, and the assessment of the predictor and outcome measures at different points in time (self-reported adjustment was assessed at both Times 1 and 2). The results revealed some support for the proposal that the effects of job demands would be buffered by high levels of task control, but not by more peripheral aspects of work control. There were also significant main effects of task control on job satisfaction.
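The buffering (moderation) hypothesis corresponds to a demands x control interaction term in a moderated regression. A self-contained ordinary-least-squares sketch with centred predictors, as is conventional (function names are our own):

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c]
                              for c in range(i + 1, n))) / M[i][i]
    return x

def moderated_fit(demands, control, adjustment):
    """OLS of adjustment on centred demands, centred control and their
    product. Returns [intercept, b_demands, b_control, b_interaction];
    the interaction coefficient carries the buffering effect."""
    md = sum(demands) / len(demands)
    mc = sum(control) / len(control)
    X = [[1.0, d - md, c - mc, (d - md) * (c - mc)]
         for d, c in zip(demands, control)]
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(4)]
           for i in range(4)]
    Xty = [sum(r[i] * y for r, y in zip(X, adjustment)) for i in range(4)]
    return solve(XtX, Xty)
```

A reliably non-zero interaction coefficient, with demands hurting adjustment less at high control, is the pattern the study tested for.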
Abstract:
Mycophenolic acid is an immunosuppressant administered as a bioavailable ester, mycophenolate mofetil. The pharmacokinetics of mycophenolic acid have been reported to be variable. Accurate measurement of concentrations of this drug could be important to adjust doses. The aim of this study was to compare the enzyme-multiplied immunoassay technique (EMIT [Dade Behring; San Jose, CA, U.S.A.]) for mycophenolic acid with a high-performance liquid chromatographic (HPLC) assay using samples collected from renal transplant recipients. The HPLC assay used solid phase extraction and a C18 stationary phase with ultraviolet (UV) detection (254 nm). The immunoassay required no manual sample preparation. Plasma samples (n = 102) from seven patients, collected at various times after a dose, were analyzed using both methods. Both assays fulfilled quality-control criteria. Higher concentrations were consistently measured in patient samples when using EMIT. The mean (+/- standard deviation [SD]) bias (EMIT-HPLC) was 1.88 +/- 0.86 mg/L. The differences in concentrations were higher in the middle of a dosage interval, suggesting that a metabolite might have been responsible for the overestimation. Measurement of glucuronide concentrations by HPLC demonstrated only a weak correlation between assay differences and glucuronide concentrations. If the cross-reacting substance is active, EMIT could provide a superior measure of immunosuppression; if inactive, further work is needed to improve antibody specificity. In conclusion, it was found that EMIT overestimates the concentration of mycophenolic acid in plasma samples from renal transplant recipients compared with HPLC analysis.
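The reported bias statistic follows a Bland-Altman-style paired comparison; a minimal sketch (function name is our own):

```python
import statistics

def bias_stats(emit, hplc):
    """Mean paired difference (EMIT - HPLC), its SD, and the 95% limits
    of agreement (mean +/- 1.96 SD), Bland-Altman style."""
    diffs = [e - h for e, h in zip(emit, hplc)]
    m = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return m, sd, (m - 1.96 * sd, m + 1.96 * sd)
```

Applied to the 102 paired samples, this is the computation behind the reported 1.88 +/- 0.86 mg/L bias.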
Abstract:
We tested the effects of four data characteristics on the results of reserve selection algorithms. The data characteristics were nestedness of features (land types in this case), rarity of features, size variation of sites (potential reserves) and size of data sets (numbers of sites and features). We manipulated data sets to produce three levels, with replication, of each of these data characteristics while holding the other three characteristics constant. We then used an optimizing algorithm and three heuristic algorithms to select sites to solve several reservation problems. We measured efficiency as the number or total area of selected sites, indicating the relative cost of a reserve system. Higher nestedness increased the efficiency of all algorithms (reduced the total cost of new reserves). Higher rarity reduced the efficiency of all algorithms (increased the total cost of new reserves). More variation in site size increased the efficiency of all algorithms expressed in terms of total area of selected sites. We measured the suboptimality of heuristic algorithms as the percentage increase of their results over optimal (minimum possible) results. Suboptimality is a measure of the reliability of heuristics as indicative costing analyses. Higher rarity reduced the suboptimality of heuristics (increased their reliability) and there is some evidence that more size variation did the same for the total area of selected sites. We discuss the implications of these results for the use of reserve selection algorithms as indicative and real-world planning tools.
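The heuristic-versus-optimal comparison can be reproduced on a toy instance: a greedy richness heuristic (pick the site covering the most unrepresented features) against an exhaustive minimum cover, with suboptimality defined, as above, as the percentage cost excess over the optimum. Site and feature names are invented.

```python
from itertools import combinations

def greedy_cover(sites, features):
    """Richness heuristic: repeatedly pick the site covering the most
    still-unrepresented features."""
    remaining, chosen = set(features), []
    while remaining:
        best = max(sites, key=lambda s: len(sites[s] & remaining))
        if not sites[best] & remaining:
            raise ValueError("some features cannot be represented")
        chosen.append(best)
        remaining -= sites[best]
    return chosen

def optimal_cover(sites, features):
    """Exhaustive minimum-site cover (feasible only for small problems)."""
    features = set(features)
    names = list(sites)
    for k in range(1, len(names) + 1):
        for combo in combinations(names, k):
            if set().union(*(sites[s] for s in combo)) >= features:
                return list(combo)
    return None

def suboptimality(sites, features):
    """Percentage by which the heuristic's cost exceeds the optimum."""
    g = len(greedy_cover(sites, features))
    o = len(optimal_cover(sites, features))
    return 100.0 * (g - o) / o
```

Efficiency here is counted as number of sites; weighting sites by area instead gives the total-area version of the measure.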