44 results for Measurement model
Abstract:
From the early 1900s, some psychologists have attempted to establish their discipline as a quantitative science. In using quantitative methods to investigate their theories, they adopted their own special definition of measurement of attributes such as cognitive abilities, as though they were quantities of the type encountered in Newtonian science. Joel Michell has presented a carefully reasoned argument that psychological attributes lack additivity, and therefore cannot be quantities in the same way as the attributes of classical Newtonian physics. In the early decades of the 20th century, quantum theory superseded Newtonian mechanics as the best model of physical reality. This paper gives a brief, critical overview of the evolution of current measurement practices in psychology, and suggests the need for a transition from a Newtonian to a quantum theoretical paradigm for psychological measurement. Finally, a case study is presented that considers the implications of a quantum theoretical model for educational measurement. In particular, it is argued that, since the OECD’s Programme for International Student Assessment (PISA) is predicated on a Newtonian conception of measurement, this may constrain the extent to which it can make accurate comparisons of the achievements of different education systems.
Abstract:
The solubility of methane in five pure electrolyte solvents for lithium-ion batteries – ethylene carbonate (EC), propylene carbonate (PC), dimethyl carbonate (DMC), ethyl methyl carbonate (EMC) and diethyl carbonate (DEC) – and in the binary (50:50 wt%) EC:DMC mixture was studied experimentally at pressures close to atmospheric and at temperatures between (280 and 343) K using an isochoric saturation technique. The effect of selected anions of a lithium salt LiX (X = hexafluorophosphate, PF6−; tris(pentafluoroethyl)trifluorophosphate, FAP−; bis(trifluoromethylsulfonyl)imide, TFSI−) on the methane solubility in lithium-ion battery electrolytes was then investigated using a model electrolyte based on the binary EC:DMC (50:50 wt%) mixture + 1 mol · dm−3 of lithium salt over the same temperature and pressure ranges. From the experimental solubility data, the Henry's law constants of methane in these solutions were deduced and compared with each other and with values predicted using the COSMO-RS methodology within the COSMOthermX software. The methane solubility in each pure solvent decreases with temperature and increases in the order EC < PC < EC:DMC (50:50 wt%) < DMC < EMC < DEC, showing that solubility increases with the van der Waals forces in solution. In all investigated EC:DMC (50:50 wt%) + 1 mol · dm−3 lithium salt electrolytes, the methane solubility likewise decreases with temperature and is highest in the electrolyte containing the LiFAP salt, followed by that based on LiTFSI. From the variation of the Henry's law constants with temperature, the partial molar thermodynamic functions of solvation, namely the standard Gibbs energy, enthalpy and entropy, were then calculated, as well as the enthalpy of mixing of the solvent with methane in its hypothetical liquid state. Finally, the effect of gas structure on solubility in the selected solutions was discussed by comparing the methane solubility data reported here with carbon dioxide solubility data available for the same solvents or mixtures, in order to discern which of the gases generated during electrolyte degradation, a process that limits battery lifetime, is the more harmful.
Abstract:
The outcomes of educational assessments undoubtedly have real implications for students, teachers, schools and education in the widest sense. Assessment results are, for example, used to award qualifications that determine future educational or vocational pathways of students. The results obtained by students in assessments are also used to gauge individual teacher quality, to hold schools to account for the standards achieved by their students, and to compare international education systems. Given the current high-stakes nature of educational assessment, it is imperative that the measurement practices involved have stable philosophical foundations. However, this paper casts doubt on the theoretical underpinnings of contemporary educational measurement models. Aspects of Wittgenstein’s later philosophy and Bohr’s philosophy of quantum theory are used to argue that a quantum theoretical rather than a Newtonian model is appropriate for educational measurement, and the associated implications for the concept of validity are elucidated. Whilst it is acknowledged that the transition to a quantum theoretical framework would not lead to the demise of educational assessment, it is argued that, where practical, current high-stakes assessments should be reformed to become as ‘low-stakes’ as possible. The paper also undermines some of the pro high-stakes testing rhetoric that has a tendency to afflict education.
Abstract:
The Consideration of Future Consequences construct has been found to relate meaningfully to several positive outcomes in temporal research. Researchers have proposed 1-factor, 2-factor, and bifactor solutions to the Consideration of Future Consequences Scale (CFCS). Using data from 313 British university undergraduates, we tested four competing models: (a) a 12-item unidimensional model, (b) a two-factor model with uncorrelated factors (CFC-Immediate and CFC-Future), (c) a two-factor model with correlated factors (CFC-I and CFC-F), and (d) a bifactor model. Results supported the bifactor model, suggesting that the two hypothesized factors are better understood as grouping factors. Accordingly, the present study supports the CFCS as a unidimensional global measure of future orientation. These results have important implications for the study of future orientation using the CFCS. Researchers using the CFCS are encouraged to examine a bifactor solution for the scores.
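As a point of reference, a bifactor measurement model of the kind compared above can be written as follows; the notation is generic (not taken from the paper), with every item loading on a general factor G and on exactly one orthogonal grouping factor:

```latex
x_{i} = \lambda_{Gi}\,G + \lambda_{ki}\,F_{k(i)} + \varepsilon_{i},
\qquad \operatorname{Cov}(G, F_{k}) = 0, \quad \operatorname{Cov}(F_{k}, F_{l}) = 0 \ (k \neq l),
```

where x_i is the response to item i, F_k(i) is the grouping factor (e.g. CFC-I or CFC-F) to which item i is assigned, and ε_i is the item-specific error.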
Abstract:
The increasing complexity and scale of cloud computing environments, driven by widespread data centre heterogeneity, make measurement-based evaluations very difficult to achieve. The use of simulation tools to support decision making in cloud computing environments is therefore an increasing trend. However, the data required to model cloud computing environments with an appropriate degree of accuracy are typically voluminous, very difficult to collect without some form of automation, often unavailable in a suitable format, and time consuming to gather manually. In this research, an automated method for cloud computing topology definition, data collection and model creation is presented, within the context of a suite of tools that have been developed and integrated to support these activities.
Abstract:
The purpose of this paper is to conceptualise and operationalise the concept of supply chain management sustainability practices. Based on a multi-stage procedure involving a literature review, an expert Q-sort and pre-test process, a pilot test, and a survey of 156 supply chain directors and managers in Ireland, we develop a multidimensional conceptualisation and measure of social and environmental supply chain management sustainability practices. The research findings show theoretically sound constructs based on four underlying sustainable supply chain management practices: monitoring, implementing systems, new product and process development, and strategy redefinition. A two-factor model, comprising process-based and market-based practices, is then identified as the most reliable.
Abstract:
An experimental study measuring the performance and wake characteristics of a 1:10 scale horizontal axis tidal turbine in steady, uniform flow conditions is presented in this paper.
Large-scale towing tests conducted in a lake were devised to model the performance of the tidal turbine and to measure the wake produced. As a simplification of the marine environment, towing the turbine in a lake provides approximately steady, uniform inflow conditions. A 16 m long × 6 m wide catamaran was constructed for the test programme. This doubled as a towing rig and flow measurement platform, providing a fixed frame of reference for measurements in the wake of a horizontal axis tidal turbine. Velocity mapping was conducted using acoustic Doppler velocimeters.
The results indicate that varying the inflow speed made little difference to the efficiency of the turbine or to the wake velocity deficit characteristics, provided the same tip speed ratio was used. Increasing the inflow velocity from 0.9 m/s to 1.2 m/s influenced the turbulent wake characteristics more markedly. The results also demonstrate that the flow field in the wake of a horizontal axis tidal turbine is strongly affected by the turbine support structure.
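For context, the tip speed ratio and turbine efficiency referred to above are conventionally defined as follows (standard definitions, not quoted from the paper), where Ω is the rotor angular speed, R the rotor radius, U the inflow speed, ρ the water density, A = πR² the swept area and P the mechanical power extracted:

```latex
\lambda = \frac{\Omega R}{U}, \qquad C_P = \frac{P}{\tfrac{1}{2}\,\rho A U^{3}}.
```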
Abstract:
This paper investigates the potential for using the windowed variance of the received signal strength to select from a set of predetermined channel models for a wireless ranging or localization system. An 868 MHz measurement system was used to characterize the received signal strength (RSS) of the off-body link formed between two wireless nodes attached to either side of a human thorax and six base stations situated in the local surroundings.
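As an illustration of the quantity at the heart of this approach, the sketch below computes the windowed variance of an RSS time series and applies a simple threshold to pick between two candidate channel models. It is a minimal sketch: the window length, threshold, and model labels are hypothetical placeholders, not values from the paper.

```python
import numpy as np

def windowed_variance(rss_dbm: np.ndarray, window: int = 50) -> np.ndarray:
    """Sliding-window variance of an RSS series (dBm), one value per full window."""
    return np.array([np.var(rss_dbm[i:i + window])
                     for i in range(len(rss_dbm) - window + 1)])

def select_channel_model(var_db2: float, threshold: float = 4.0) -> str:
    """Toy decision rule: low RSS variance -> static model, high variance -> dynamic model."""
    return "static-channel model" if var_db2 < threshold else "dynamic-channel model"

# Example with synthetic RSS samples (dBm); real data would come from the 868 MHz nodes.
rng = np.random.default_rng(0)
rss = -60 + 3.0 * rng.standard_normal(1000)
variances = windowed_variance(rss, window=50)
print(select_channel_model(variances.mean()))
```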
Abstract:
PURPOSE: To estimate the relationships between ocular parameters and tonometrically measured intraocular pressure (IOP), to determine the influence of ocular parameters on different instrument measurements of IOP, and to evaluate the association of ocular parameters with a parameter called hysteresis. METHODS: Patients presenting at a glaucoma clinic were recruited for this study. Subjects underwent IOP measurement with the Goldmann applanation tonometer (GAT), the TonoPen, and the Reichert Ocular Response Analyzer (ORA), as well as measurements of central corneal thickness (CCT), axial length, corneal curvature, corneal astigmatism, central visual acuity, and refractive error. Chart information was reviewed to determine glaucoma treatment history. The ORA instrument provided a measurement called corneal hysteresis. The association between measured IOP and the other ocular characteristics was estimated using generalized estimating equations. RESULTS: Among 230 patients, IOP measurements from the TonoPen read lowest, those from the ORA read highest, and GAT measurements were closest to the mean IOP of the 3 instruments. In a multiple regression model adjusting for age, sex, race, and other ocular characteristics, a 10 μm increase in CCT was associated with an increase of 0.79 mm Hg in measured IOP in untreated eyes (P<0.0001). Of the 3 tonometers, GAT was the least affected by CCT (0.66 mm Hg per 10 μm, P<0.0001). Hysteresis was significantly but modestly correlated with CCT (r=0.20, P<0.0007). CONCLUSIONS: Among parameters related to measured IOP, features in addition to CCT, such as hysteresis and corneal curvature, may also be important. Tonometric instruments seem to be affected differently by various physiologic characteristics.
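To put the reported coefficient in perspective, a simple worked example (not from the paper): under the model above, an untreated eye whose cornea is 50 μm thicker than another would be expected to read roughly

```latex
\Delta \mathrm{IOP} \approx 0.79\ \text{mm Hg} \times \frac{50\ \mu\text{m}}{10\ \mu\text{m}} \approx 4\ \text{mm Hg}
```

higher on tonometry, all else being equal.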
Abstract:
Statistical distributions have been used extensively to model fading effects in conventional and modern wireless communications. In the present work, we propose a novel κ − µ composite shadowed fading model, which is based on the valid assumption that the mean signal power follows the inverse gamma distribution instead of the lognormal or the commonly used gamma distribution. This distribution has a simple relationship with the gamma distribution but, most importantly, its semi-heavy-tailed characteristics make it suitable for applications relating to the modeling of shadowed fading. Furthermore, the derived probability density function of the κ − µ / inverse gamma composite distribution admits a rather simple algebraic representation that renders it convenient to handle both analytically and numerically. The validity and utility of this fading model are demonstrated by modeling the fading effects encountered in body centric communications channels, which are known to be susceptible to shadowing. To this end, extensive comparisons are provided between theoretical and corresponding real-time measurement results. These comparisons show that the new model accurately fits various measurement setups corresponding to realistic communication scenarios.
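Although the abstract does not reproduce the density itself, a composite shadowed fading model of this type is built by averaging the conditional κ − µ envelope density over an inverse gamma distributed mean power; the general construction, written here with generic shape and scale parameters α and β, is:

```latex
f_{R}(r) = \int_{0}^{\infty} f_{R\mid\bar{\Omega}}(r \mid \omega;\,\kappa,\mu)\,
           \frac{\beta^{\alpha}}{\Gamma(\alpha)}\,\omega^{-\alpha-1} e^{-\beta/\omega}\, d\omega,
```

where the first factor in the integrand is the κ − µ envelope density conditioned on the mean signal power taking the value ω, and the second factor is the inverse gamma density of the mean signal power.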
Abstract:
In this paper we propose a new composite fading model which assumes that the mean signal power of an η−µ signal envelope follows an inverse gamma distribution. The inverse gamma distribution has a simple relationship with the gamma distribution and can be used to model shadowed fading due to its semi-heavy-tailed characteristics. To demonstrate the utility of the new η−µ / inverse gamma composite fading model, we investigate the characteristics of the shadowed fading behavior observed in body centric communications channels which are known to be susceptible to shadowing effects, particularly generated by the human body. It is shown that the η−µ / inverse gamma composite fading model provided an excellent fit to the measurement data. Moreover, using Kullback-Leibler divergence, the η−µ / inverse gamma composite fading model was found to provide a better fit to the measured data than the κ−µ / inverse gamma composite fading model, for the communication scenarios considered here.
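For completeness, the Kullback-Leibler divergence used above to compare the two composite models is the standard quantity (its definition is not restated in the abstract):

```latex
D_{\mathrm{KL}}(p \,\|\, q) = \int_{0}^{\infty} p(r)\,\ln\!\frac{p(r)}{q(r)}\,dr,
```

where p is the empirical (or reference) envelope density and q is the candidate composite fading density; a smaller value indicates a closer fit.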
Abstract:
Background: Underweight and severe and morbid obesity are associated with highly elevated risks of adverse health outcomes. We estimated trends in mean body-mass index (BMI), which characterises its population distribution, and in the prevalences of a complete set of BMI categories for adults in all countries.
Methods: We analysed, with use of a consistent protocol, population-based studies that had measured height and weight in adults aged 18 years and older. We applied a Bayesian hierarchical model to these data to estimate trends from 1975 to 2014 in mean BMI and in the prevalences of BMI categories (<18·5 kg/m2 [underweight], 18·5 kg/m2 to <20 kg/m2, 20 kg/m2 to <25 kg/m2, 25 kg/m2 to <30 kg/m2, 30 kg/m2 to <35 kg/m2, 35 kg/m2 to <40 kg/m2, ≥40 kg/m2 [morbid obesity]), by sex in 200 countries and territories, organised in 21 regions. We calculated the posterior probability of meeting the target of halting by 2025 the rise in obesity at its 2010 levels, if post-2000 trends continue.
Findings: We used 1698 population-based data sources, with more than 19·2 million adult participants (9·9 million men and 9·3 million women) in 186 of 200 countries for which estimates were made. Global age-standardised mean BMI increased from 21·7 kg/m2 (95% credible interval 21·3–22·1) in 1975 to 24·2 kg/m2 (24·0–24·4) in 2014 in men, and from 22·1 kg/m2 (21·7–22·5) in 1975 to 24·4 kg/m2 (24·2–24·6) in 2014 in women. Regional mean BMIs in 2014 for men ranged from 21·4 kg/m2 in central Africa and south Asia to 29·2 kg/m2 (28·6–29·8) in Polynesia and Micronesia; for women the range was from 21·8 kg/m2 (21·4–22·3) in south Asia to 32·2 kg/m2 (31·5–32·8) in Polynesia and Micronesia. Over these four decades, age-standardised global prevalence of underweight decreased from 13·8% (10·5–17·4) to 8·8% (7·4–10·3) in men and from 14·6% (11·6–17·9) to 9·7% (8·3–11·1) in women. South Asia had the highest prevalence of underweight in 2014, 23·4% (17·8–29·2) in men and 24·0% (18·9–29·3) in women. Age-standardised prevalence of obesity increased from 3·2% (2·4–4·1) in 1975 to 10·8% (9·7–12·0) in 2014 in men, and from 6·4% (5·1–7·8) to 14·9% (13·6–16·1) in women. 2·3% (2·0–2·7) of the world's men and 5·0% (4·4–5·6) of women were severely obese (ie, have BMI ≥35 kg/m2). Globally, prevalence of morbid obesity was 0·64% (0·46–0·86) in men and 1·6% (1·3–1·9) in women.
Interpretation: If post-2000 trends continue, the probability of meeting the global obesity target is virtually zero. Rather, if these trends continue, by 2025, global obesity prevalence will reach 18% in men and surpass 21% in women; severe obesity will surpass 6% in men and 9% in women. Nonetheless, underweight remains prevalent in the world's poorest regions, especially in south Asia.
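The BMI categories analysed above follow directly from BMI = weight / height². The sketch below simply encodes those category boundaries as stated in the abstract; the function names are illustrative and are not taken from the study's analysis code.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body-mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def bmi_category(b: float) -> str:
    """Map a BMI value (kg/m^2) to the categories used in the abstract."""
    if b < 18.5:
        return "underweight (<18.5)"
    elif b < 20:
        return "18.5 to <20"
    elif b < 25:
        return "20 to <25"
    elif b < 30:
        return "25 to <30"
    elif b < 35:
        return "30 to <35 (obese)"
    elif b < 40:
        return "35 to <40 (severely obese)"
    return ">=40 (morbidly obese)"

print(bmi_category(bmi(70.0, 1.75)))  # 70 kg at 1.75 m -> BMI ~22.9 -> "20 to <25"
```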
Abstract:
In dynamic spectrum access networks, cognitive radio terminals monitor their spectral environment in order to detect and opportunistically access unoccupied frequency channels. The overall performance of such networks depends on the spectrum occupancy or availability patterns. Accurate knowledge of channel availability enables optimum performance of such networks in terms of spectrum and energy efficiency. This work proposes a novel probabilistic channel availability model that describes the channel availability in different polarizations for mobile cognitive radio terminals that are likely to change their orientation during operation. A Gaussian approximation is used to model the empirical occupancy data obtained through a measurement campaign in the cellular frequency bands within a realistic operational scenario.
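As a minimal illustration of a Gaussian approximation to empirical occupancy data (the actual, polarization-dependent model in the paper is richer than this), the sketch below fits a normal distribution to windowed duty-cycle estimates derived from binary channel-occupancy samples; all variable names, the occupancy probability, and the threshold are hypothetical.

```python
import numpy as np
from scipy.stats import norm

# Synthetic binary occupancy samples (True = channel occupied); real data would come
# from the measurement campaign in the cellular bands.
rng = np.random.default_rng(1)
occupied = rng.random(10_000) < 0.35

# Windowed duty cycle: fraction of time the channel is occupied in each window.
window = 200
duty_cycle = occupied[: len(occupied) // window * window].reshape(-1, window).mean(axis=1)

# Gaussian approximation of the duty-cycle distribution (availability = 1 - duty cycle).
mu, sigma = duty_cycle.mean(), duty_cycle.std(ddof=1)
print(f"Gaussian fit: mean duty cycle = {mu:.3f}, std = {sigma:.3f}")

# Probability, under the fit, that the channel is occupied less than 40% of the time,
# i.e. available at least 60% of the time.
print("P(duty cycle < 0.4) =", norm.cdf(0.4, loc=mu, scale=sigma))
```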
Abstract:
Increasing tungsten (W) use for industrial and military applications has resulted in greater W discharge into natural waters, soils and sediments. Risk modeling of W transport and fate in the environment relies on measurement of the release/mobilization flux of W in the bulk media and at the interfaces between matrix compartments. Diffusive gradients in thin films (DGT) is a promising passive sampling technique for acquiring such information. DGT devices equipped with the newly developed high-resolution binding gels (precipitated zirconia, PZ, or ferrihydrite, PF, gels) or with the conventional ferrihydrite slurry gel were comprehensively assessed for measuring W in waters. The ferrihydrite DGT can measure W across a range of ionic strengths (0.001–0.5 mol L−1 NaNO3) and pH (4–8), while the PZ DGT can operate across slightly wider environmental conditions. The three DGT configurations gave comparable results for soil W measurement, showing that W resupply is typically rather poorly sustained. 1D and 2D high-resolution W profiles across sediment–water and hotspot–bulk media interfaces from Lake Taihu were obtained using the PZ DGT coupled with laser ablation ICP–MS measurement, and the apparent diffusion fluxes across the interfaces were calculated using a numerical model.
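For reference, DGT-measured concentrations and interfacial fluxes of the kind reported above are conventionally obtained from the mass accumulated by the binding gel via the standard DGT relations (these are the textbook expressions for the technique, not equations quoted from this abstract):

```latex
C_{\mathrm{DGT}} = \frac{M\,\Delta g}{D\,A\,t}, \qquad
F = \frac{M}{A\,t} = \frac{D\,C_{\mathrm{DGT}}}{\Delta g},
```

where M is the mass of W accumulated over the deployment time t, Δg is the thickness of the diffusive layer, D is the diffusion coefficient of the W species in the gel, and A is the exposure area.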