886 results for Multivariate measurement model
Abstract:
Experiments were undertaken to characterize a noninvasive, chronic model of nasal congestion in which nasal patency is measured using acoustic rhinometry. Compound 48/80 was administered intranasally to elicit nasal congestion in five beagle dogs, either by syringe (0.5 ml) in thiopental sodium-anesthetized animals or as a mist (0.25 ml) in the same animals in the conscious state. Effects of mast cell degranulation on nasal cavity volume, minimal cross-sectional area (A(min)), and intranasal distance to A(min) (D(min)) were studied. Compound 48/80 caused a dose-related decrease in nasal cavity volume and A(min) together with a variable increase in D(min). Maximal responses were seen at 90-120 min. Compound 48/80 was less effective in producing nasal congestion in conscious animals, which also had significantly larger basal nasal cavity volumes. These results demonstrate the utility of acoustic rhinometry for measuring parameters of nasal patency in dogs and suggest that this model may prove useful in studies of the actions of decongestant drugs.
Abstract:
An analytical model to predict strand slip within both the transmission and anchorage lengths of pretensioned prestressed concrete members is presented. The model was derived from experimental work analysing the bond behavior and determining the transmission and anchorage lengths of seven-wire prestressing steel strands in different concrete mixes. A testing technique based on measuring the prestressing strand force in specimens with different embedment lengths was used; it allows measurement of free-end slip as well as indirect determination of strand slip at different cross sections of a member without interfering with the bond phenomena. The experimental results and the proposed model for strand slip distribution were compared with theoretical predictions from different equations in the literature and with experimental results obtained by other researchers.
Abstract:
Model selection between competing models is a key consideration in the discovery of prognostic multigene signatures. The use of appropriate statistical performance measures, as well as verification of the biological significance of the signatures, is imperative to maximise the chance of external validation of the generated signatures. Current approaches in time-to-event studies often use only a single measure of performance in model selection, such as logrank test p-values, or dichotomise the follow-up times at some phase of the study to facilitate signature discovery. In this study we improve the prognostic signature discovery process through the application of the multivariate partial Cox model combined with the concordance index, the hazard ratio of predictions, independence from available clinical covariates and biological enrichment as measures of signature performance. The proposed framework was applied to discover prognostic multigene signatures from early breast cancer data. The partial Cox model, combined with the multiple performance measures, was used both to guide the selection of the optimal panel of prognostic genes and to predict risk within cross-validation, without dichotomising the follow-up times at any stage. The signatures were successfully validated in independent external breast cancer datasets, yielding a hazard ratio of 2.55 [1.44, 4.51] for the top-ranking signature.
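For a concrete feel for this kind of pipeline, the following is a minimal Python sketch of scoring one candidate gene panel by concordance index and by the hazard ratio of its predicted risk, without dichotomising follow-up times. It uses the lifelines library as a stand-in for the multivariate partial Cox modelling described above; the DataFrame layout, column names and penaliser value are assumptions, not details taken from the study.

```python
# Minimal sketch: score one candidate gene panel by concordance index and by the
# hazard ratio of its predicted risk, without dichotomising follow-up times.
# Assumes `df` holds gene-expression columns plus "time" and "event"; the column
# names and the ridge penaliser are illustrative assumptions.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

def score_panel(df, genes):
    """Fit a penalised multigene Cox model and return performance measures."""
    cph = CoxPHFitter(penalizer=0.1)  # ridge penalty stabilises multigene fits
    cph.fit(df[genes + ["time", "event"]], duration_col="time", event_col="event")
    risk = cph.predict_partial_hazard(df[genes])      # continuous risk score
    c_index = concordance_index(df["time"], -risk, df["event"])
    # Hazard ratio of predictions: univariate Cox model on the (log) risk score.
    hr_df = pd.DataFrame({"risk": np.log(risk), "time": df["time"], "event": df["event"]})
    hr_fit = CoxPHFitter().fit(hr_df, duration_col="time", event_col="event")
    return {"c_index": c_index, "hazard_ratio": float(np.exp(hr_fit.params_["risk"]))}
```

A panel search would call score_panel on candidate gene sets inside a cross-validation loop and retain panels that also satisfy the covariate-independence and enrichment criteria.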
Abstract:
This study presents a model based on partial least squares (PLS) regression for dynamic line rating (DLR). The model has been verified using data from field measurements, lab tests and outdoor experiments. Outdoor experimentation was conducted both to verify the model-predicted DLR and to provide training data not available from field measurements, mainly heavily loaded conditions. The proposed model, unlike direct measurement-based DLR techniques, enables prediction of line rating for periods ahead of time whenever a reliable weather forecast is available. The PLS approach yields a very simple statistical model that accurately captures the physical performance of the conductor within a given environment without requiring the predetermined parameters that many physical modelling techniques need. The accuracy of the PLS model has been tested by predicting the conductor temperature for measurement sets other than those used for training. Because the model is linear, it is straightforward to estimate the conductor ampacity for a set of predicted weather parameters. The PLS-estimated ampacity was shown to be accurate in an outdoor experiment on a section of the line conductor in real weather conditions.
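As an illustration of the kind of workflow the abstract outlines, here is a hedged Python sketch: a PLS regression is trained on synthetic loading and weather measurements to predict conductor temperature and is then inverted by a simple sweep over current to estimate ampacity for a forecast. The feature set, the synthetic data and the 90 °C temperature limit are assumptions for illustration, not values from the study.

```python
# Sketch: PLS regression from loading/weather measurements to conductor temperature,
# then a simple sweep over current to estimate ampacity for a weather forecast.
# The synthetic data, feature set and 90 degC limit are illustrative assumptions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n = 500
current = rng.uniform(100, 800, n)     # line current, A
ambient = rng.uniform(-5, 35, n)       # ambient temperature, degC
wind = rng.uniform(0.1, 10, n)         # wind speed, m/s
solar = rng.uniform(0, 1000, n)        # solar irradiance, W/m2
# toy conductor temperature: Joule heating reduced by wind cooling, plus solar gain
cond_temp = ambient + 3e-5 * current**2 / (1 + 0.5 * wind) + 0.01 * solar

X = np.column_stack([current**2, ambient, wind, solar])
pls = PLSRegression(n_components=3).fit(X, cond_temp)

def ampacity(forecast, t_limit=90.0):
    """Largest current whose predicted conductor temperature stays below t_limit.
    `forecast` is [ambient, wind, solar]."""
    currents = np.arange(100.0, 2000.0, 10.0)
    Xf = np.column_stack([currents**2, np.tile(forecast, (currents.size, 1))])
    temps = pls.predict(Xf).ravel()
    ok = currents[temps <= t_limit]
    return float(ok.max()) if ok.size else 0.0

print(ampacity([25.0, 2.0, 800.0]))    # ampacity under a warm, low-wind forecast
```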
Plasma total homocysteine and carotid intima-media thickness in type 1 diabetes: A prospective study
Abstract:
Objective: Plasma total homocysteine (tHcy) has been positively associated with carotid intima-media thickness (IMT) in non-diabetic populations and in a few cross-sectional studies of diabetic patients. We investigated cross-sectional and prospective associations of a single measure of tHcy with common and internal carotid IMT over a 6-year period in type 1 diabetes. Research design and methods: tHcy levels were measured once, in plasma obtained in 1997–1999 from patients (n = 599) in the Epidemiology of Diabetes Interventions and Complications (EDIC) study, the observational follow-up of the Diabetes Control and Complications Trial (DCCT). Common and internal carotid IMT were determined twice, in EDIC “Year 6” (1998–2000) and “Year 12” (2004–2006), using B-mode ultrasonography. Results: After adjustment, plasma tHcy [median (interquartile range): 6.2 (5.1, 7.5) μmol/L] was significantly correlated with age, diastolic blood pressure, renal dysfunction, and smoking (all p < 0.05). In an unadjusted model only, increasing quartiles of tHcy correlated with common and internal carotid IMT at both EDIC time-points (p < 0.01). However, multivariate logistic regression revealed no significant associations between increasing quartiles of tHcy (highest vs. lowest quartile) and the 6-year change in common and internal carotid IMT when adjusted for conventional risk factors. Conclusions: In a type 1 diabetes cohort from the EDIC study, plasma tHcy measured in samples drawn in 1997–1999 was associated with measures of common and internal carotid IMT obtained one and seven years later, but not with IMT progression between the two time-points. The data do not support routine measurement of tHcy in people with type 1 diabetes.
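The adjusted quartile analysis described in the Results can be sketched as a multivariable logistic regression; the snippet below uses statsmodels on entirely synthetic data, and the covariate set and column names are illustrative assumptions rather than the actual EDIC variables.

```python
# Sketch of a quartile-based multivariable logistic model of IMT progression.
# Data are synthetic and the covariates are illustrative; this is not EDIC data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 599
df = pd.DataFrame({
    "thcy": rng.lognormal(mean=1.8, sigma=0.3, size=n),  # umol/L, median ~6
    "age": rng.normal(40, 8, n),
    "dbp": rng.normal(75, 9, n),
    "smoker": rng.integers(0, 2, n),
    "imt_prog": rng.integers(0, 2, n),                   # toy 6-year progression flag
})
df["thcy_q"] = pd.qcut(df["thcy"], 4, labels=["Q1", "Q2", "Q3", "Q4"])

model = smf.logit("imt_prog ~ C(thcy_q) + age + dbp + smoker", data=df).fit(disp=False)
print(model.summary())   # odds ratios per quartile via np.exp(model.params)
```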
Abstract:
Since the early 1900s, some psychologists have attempted to establish their discipline as a quantitative science. In using quantitative methods to investigate their theories, they adopted their own special definition of measurement of attributes such as cognitive abilities, treating them as though they were quantities of the type encountered in Newtonian science. Joel Michell has presented a carefully reasoned argument that psychological attributes lack additivity, and therefore cannot be quantities in the same way as the attributes of classical Newtonian physics. In the early decades of the 20th century, quantum theory superseded Newtonian mechanics as the best model of physical reality. This paper gives a brief, critical overview of the evolution of current measurement practices in psychology and suggests the need for a transition from a Newtonian to a quantum theoretical paradigm for psychological measurement. Finally, a case study is presented that considers the implications of a quantum theoretical model for educational measurement. In particular, it is argued that, since the OECD’s Programme for International Student Assessment (PISA) is predicated on a Newtonian conception of measurement, this may constrain the extent to which it can make accurate comparisons of the achievements of different education systems.
Abstract:
The methane solubility in five pure electrolyte solvents for lithium ion batteries – ethylene carbonate (EC), propylene carbonate (PC), dimethyl carbonate (DMC), ethyl methyl carbonate (EMC) and diethyl carbonate (DEC) – and in the (50:50 wt%) binary mixture EC:DMC was studied experimentally at pressures close to atmospheric and as a function of temperature between (280 and 343) K using an isochoric saturation technique. The effect of selected anions of a lithium salt LiX (X = hexafluorophosphate, PF6−; tris(pentafluoroethyl)trifluorophosphate, FAP−; bis(trifluoromethylsulfonyl)imide, TFSI−) on the methane solubility in electrolytes for lithium ion batteries was then investigated using a model electrolyte based on the binary mixture of EC:DMC (50:50 wt%) + 1 mol · dm−3 of lithium salt over the same temperature and pressure ranges. Based on the experimental solubility data, the Henry’s law constants of methane in these solutions were deduced, compared with one another, and compared with values predicted using the COSMO-RS methodology within the COSMOthermX software. From this study, it appears that the methane solubility in each pure solvent decreases with temperature and increases in the following order: EC < PC < EC:DMC (50:50 wt%) < DMC < EMC < DEC, showing that solubility increases with the van der Waals forces in solution. Additionally, in all investigated EC:DMC (50:50 wt%) + 1 mol · dm−3 lithium salt electrolytes, the methane solubility also decreases with temperature, and it is highest in the electrolyte containing the LiFAP salt, followed by that based on LiTFSI. From the variation of the Henry’s law constants with temperature, the partial molar thermodynamic functions of solvation, namely the standard Gibbs free energy, enthalpy and entropy, were then calculated, as well as the mixing enthalpy of the solvent with methane in its hypothetical liquid state. Finally, the effect of gas structure on solubility in the selected solutions was discussed by comparing the methane solubility data reported in the present work with carbon dioxide solubility data available for the same solvents or mixtures, in order to discern which gas generated during electrolyte degradation is the more harmful in limiting battery lifetime.
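For orientation, the standard thermodynamic relations typically used to go from temperature-dependent Henry's law constants to the solvation functions mentioned above take the following form (written here from general thermodynamics, not copied from the paper):

```latex
K_H(T, p) = \lim_{x_{\mathrm{CH_4}} \to 0} \frac{f_{\mathrm{CH_4}}}{x_{\mathrm{CH_4}}}, \qquad
\Delta_{\mathrm{solv}} G^{\circ} = RT \,\ln\!\left(\frac{K_H}{p^{\circ}}\right),

\Delta_{\mathrm{solv}} H^{\circ} = R \,\frac{\partial \ln K_H}{\partial (1/T)}, \qquad
\Delta_{\mathrm{solv}} S^{\circ} = \frac{\Delta_{\mathrm{solv}} H^{\circ} - \Delta_{\mathrm{solv}} G^{\circ}}{T}.
```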
Abstract:
The outcomes of educational assessments undoubtedly have real implications for students, teachers, schools and education in the widest sense. Assessment results are, for example, used to award qualifications that determine future educational or vocational pathways of students. The results obtained by students in assessments are also used to gauge individual teacher quality, to hold schools to account for the standards achieved by their students, and to compare international education systems. Given the current high-stakes nature of educational assessment, it is imperative that the measurement practices involved have stable philosophical foundations. However, this paper casts doubt on the theoretical underpinnings of contemporary educational measurement models. Aspects of Wittgenstein’s later philosophy and Bohr’s philosophy of quantum theory are used to argue that a quantum theoretical rather than a Newtonian model is appropriate for educational measurement, and the associated implications for the concept of validity are elucidated. Whilst it is acknowledged that the transition to a quantum theoretical framework would not lead to the demise of educational assessment, it is argued that, where practical, current high-stakes assessments should be reformed to become as ‘low-stakes’ as possible. The paper also undermines some of the pro high-stakes testing rhetoric that has a tendency to afflict education.
Abstract:
The Consideration of Future Consequences construct has been found to relate meaningfully to several positive outcomes in temporal research. Researchers have proposed 1-factor, 2-factor, and bifactor solutions for the Consideration of Future Consequences Scale (CFCS). Using a sample of 313 British university undergraduates, we tested four competing models: (a) a 12-item unidimensional model, (b) a model with two uncorrelated factors (CFC-Immediate and CFC-Future), (c) a model with two correlated factors (CFC-I and CFC-F), and (d) a bifactor model. Results supported the bifactor model, suggesting that the two hypothesized factors are better understood as grouping factors. Accordingly, the present study supports the CFCS as a unidimensional global measure of future orientation. These results have important implications for the study of future orientation using the CFCS, and researchers using the CFCS are encouraged to examine a bifactor solution for its scores.
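To make the model comparison concrete, a bifactor measurement model for the 12 CFCS items can be written as below, with each item loading on a general factor and on one orthogonal grouping factor; this is the generic bifactor specification, not the paper's fitted parameterisation.

```latex
x_i = \lambda_i^{G}\, G + \lambda_i^{(k)} F_k + \varepsilon_i, \qquad i = 1,\dots,12,\quad k \in \{\mathrm{CFC\text{-}I},\, \mathrm{CFC\text{-}F}\},

\operatorname{Cov}(G, F_k) = 0, \qquad \operatorname{Cov}\!\left(F_{\mathrm{CFC\text{-}I}}, F_{\mathrm{CFC\text{-}F}}\right) = 0.
```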
Abstract:
The increasing complexity and scale of cloud computing environments, driven by widespread data centre heterogeneity, make measurement-based evaluations very difficult to achieve. The use of simulation tools to support decision making in cloud computing environments is therefore an increasing trend. However, the data required to model cloud computing environments with an appropriate degree of accuracy are typically voluminous, very difficult to collect without some form of automation, often unavailable in a suitable format, and time consuming to gather manually. In this research, an automated method for cloud computing topology definition, data collection and model creation is presented, within the context of a suite of tools that have been developed and integrated to support these activities.
Abstract:
Biodegradable polymers such as PLA (polylactide) are derived from renewable resources like corn starch and, if disposed of correctly, degrade into products that are harmless to the ecosystem, making them attractive alternatives to petroleum-based polymers. PLA in particular is used in a variety of applications including medical devices, food packaging and waste disposal packaging. However, the industry faces challenges in melt processing of PLA due to its poor thermal stability, which is influenced by processing temperature and shear.
Identification and control of suitable processing conditions is extremely challenging, usually relying on trial and error, and is often sensitive to batch-to-batch variations. Off-line assessment in a lab environment can result in high scrap rates, long lead times and lengthy, expensive process development. Scrap rates are typically in the region of 25-30% for medical-grade PLA, which costs between €2000 and €5000/kg.
Additives are used to enhance properties such as mechanical performance and may also have a therapeutic role in bioresorbable medical devices; for example, the release of calcium from orthopaedic implants such as fixation screws promotes healing. Additives can also reduce costs, as less of the polymer resin is required.
This study investigates the scope for monitoring, modelling and optimising processing conditions for twin-screw extrusion of PLA and PLA with calcium carbonate to achieve desired material properties. A DAQ system was constructed to gather data from a bespoke measurement die, comprising melt temperature, pressure drop along the length of the die, and UV-Vis spectral data shown to correlate with filler dispersion. Trials were carried out under a range of processing conditions using a Design of Experiments approach, and samples were tested for mechanical properties, degradation rate and the rate of calcium release. Relationships between the recorded process data and material characterisation results are explored.
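A minimal sketch of how recorded process data and characterisation results might be related, assuming a simple linear Design of Experiments analysis in Python; all factor names, levels and the synthetic responses are illustrative assumptions, not values from these trials.

```python
# Sketch: relate DOE process settings and in-line die measurements to a material
# response. All variable names, levels and data are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 27  # e.g. a three-factor, three-level design
doe = pd.DataFrame({
    "screw_rpm": rng.choice([100, 200, 300], n),
    "barrel_C": rng.choice([180, 190, 200], n),
    "caco3_wt": rng.choice([0, 10, 20], n),
})
# toy in-line signals from the measurement die and a toy material response
doe["melt_C"] = doe["barrel_C"] + 0.02 * doe["screw_rpm"] + rng.normal(0, 1, n)
doe["dispersion_idx"] = 1 - 0.01 * doe["caco3_wt"] + rng.normal(0, 0.02, n)
doe["tensile_MPa"] = 60 - 0.3 * doe["caco3_wt"] - 0.01 * doe["screw_rpm"] + rng.normal(0, 1, n)

fit = smf.ols("tensile_MPa ~ screw_rpm + barrel_C + caco3_wt + melt_C + dispersion_idx",
              data=doe).fit()
print(fit.summary())
```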
Abstract:
The purpose of this paper is to conceptualise and operationalise the concept of supply chain management sustainability practices. Based on a multi-stage procedure involving a literature review, an expert Q-sort and pre-test process, a pilot test, and a survey of 156 supply chain directors and managers in Ireland, we develop a multidimensional conceptualisation and measure of social and environmental supply chain management sustainability practices. The research findings show theoretically sound constructs based on four underlying sustainable supply chain management practices: monitoring, implementing systems, new product and process development, and strategy redefinition. A two-factor model, comprising process-based and market-based practices, is then identified as the most reliable.
Abstract:
An experimental study measuring the performance and wake characteristics of a 1:10 scale horizontal axis tidal turbine in steady, uniform flow conditions is presented in this paper.
Large-scale towing tests conducted in a lake were devised to model the performance of the tidal turbine and to measure the wake produced. As a simplification of the marine environment, towing the turbine in a lake provides approximately steady, uniform inflow conditions. A 16 m long × 6 m wide catamaran was constructed for the test programme. This doubled as a towing rig and a flow measurement platform, providing a fixed frame of reference for measurements in the wake of a horizontal axis tidal turbine. Velocity mapping was conducted using Acoustic Doppler Velocimeters.
The results indicate that varying the inflow speed yielded little difference in the efficiency of the turbine or in the wake velocity deficit characteristics, provided the same tip speed ratio was used. Increasing the inflow velocity from 0.9 m/s to 1.2 m/s influenced the turbulent wake characteristics more markedly. The results also demonstrate that the flow field in the wake of a horizontal axis tidal turbine is strongly affected by the turbine support structure.
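For reference, the non-dimensional quantities referred to above are conventionally defined as follows (standard definitions, not values from the test programme), where \Omega is rotor speed, R rotor radius, U_\infty the tow (inflow) speed, \rho water density, A the rotor swept area and P the shaft power:

```latex
\lambda = \frac{\Omega R}{U_\infty}, \qquad
C_P = \frac{P}{\tfrac{1}{2}\rho A U_\infty^{3}}, \qquad
\frac{\Delta u}{U_\infty} = 1 - \frac{u(x,y,z)}{U_\infty}.
```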
Abstract:
This paper investigates the potential for using the windowed variance of the received signal strength to select from a set of predetermined channel models for a wireless ranging or localization system. An 868 MHz measurement system was used to characterize the received signal strength (RSS) of the off-body link formed between two wireless nodes attached to either side of a human thorax and six base stations situated in the local surroundings.
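A minimal Python sketch of the windowed-variance idea is given below; the window length, variance thresholds and model labels are illustrative assumptions, since the paper's actual channel models and decision rules are not reproduced here.

```python
# Sketch: classify RSS windows into predetermined channel models by windowed variance.
# The window length, thresholds and model labels are illustrative assumptions.
import numpy as np

def select_channel_model(rss_dbm, window=50, thresholds=(2.0, 6.0)):
    """Return one channel-model label per non-overlapping window of RSS samples."""
    labels = []
    for start in range(0, len(rss_dbm) - window + 1, window):
        var = np.var(rss_dbm[start:start + window])
        if var < thresholds[0]:
            labels.append("static / line-of-sight model")
        elif var < thresholds[1]:
            labels.append("slow-fading model")
        else:
            labels.append("dynamic / non-line-of-sight model")
    return labels

# usage: labels = select_channel_model(np.asarray(rss_samples_dbm))
```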
Abstract:
PURPOSE:
We sought to measure the impact of central corneal thickness (CCT), a possible risk factor for glaucoma damage, and corneal hysteresis, a proposed measure of corneal resistance to deformation, on various indicators of glaucoma damage.
DESIGN:
Observational study.
METHODS:
Adult patients of the Wilmer Glaucoma Service underwent measurement of corneal hysteresis with the Reichert Ocular Response Analyzer and measurement of CCT by ultrasonic pachymetry. Two glaucoma specialists (H.A.Q., N.G.C.) reviewed the charts to determine highest known intraocular pressure (IOP), target IOP, diagnosis, years with glaucoma, cup-to-disk ratio (CDR), mean defect (MD), pattern standard deviation (PSD), glaucoma hemifield test (GHT), and presence or absence of visual field progression.
RESULTS:
Among 230 subjects, the mean age was 65 ± 14 years, 127 (55%) were female, 161 (70%) were white, and 194 (85%) had a diagnosis of primary open-angle glaucoma (POAG) or suspected POAG. In multivariate generalized estimating equation models, a lower corneal hysteresis value (P = .03), but not CCT, was associated with visual field progression. When axial length was included in the model, hysteresis was not a significant risk factor (P = .09). A thinner CCT (P = .02), but not hysteresis, was associated with a higher CDR at the most recent examination. Neither CCT nor hysteresis was associated with MD, PSD, or GHT "outside normal limits."
CONCLUSIONS:
Thinner CCT was associated with the state of glaucoma damage as indicated by CDR. Axial length and corneal hysteresis were associated with progressive field worsening.
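As a hedged illustration of the multivariate generalized estimating equation models mentioned in the Results, the sketch below fits a binomial GEE with statsmodels, assuming clustering of repeated observations (e.g. two eyes) within patients; the data are synthetic and the covariate set is an assumption, not the study's actual model.

```python
# Sketch: binomial GEE with exchangeable correlation within patient clusters.
# Variable names and synthetic data are illustrative only; this is not study data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_pat, eyes = 230, 2
df = pd.DataFrame({
    "patient": np.repeat(np.arange(n_pat), eyes),
    "hysteresis": rng.normal(9.5, 1.8, n_pat * eyes),   # mmHg
    "cct_um": rng.normal(540, 35, n_pat * eyes),        # central corneal thickness, um
    "age": np.repeat(rng.normal(65, 14, n_pat), eyes),
    "progression": rng.integers(0, 2, n_pat * eyes),    # toy visual-field progression flag
})

gee = smf.gee("progression ~ hysteresis + cct_um + age",
              groups="patient", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(gee.summary())
```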