940 results for utility measurement
Abstract:
This paper revisits Diamond’s classical impossibility result regarding the ordering of infinite utility streams. We show that if no representability condition is imposed, there do exist strongly Paretian and finitely anonymous orderings of intertemporal utility streams with attractive additional properties. We extend a possibility theorem due to Svensson to a characterization theorem and we provide characterizations of all strongly Paretian and finitely anonymous rankings satisfying the strict transfer principle. In addition, infinite horizon extensions of leximin and of utilitarianism are characterized by adding an equity preference axiom and finite translation-scale measurability, respectively, to strong Pareto and finite anonymity.
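For streams that differ in only finitely many coordinates, finite anonymity and strong Pareto reduce the infinite-horizon comparison to a finite leximin comparison on the affected coordinates. A minimal sketch of that finite leximin rule (illustrative helper, not the paper's full characterization):

```python
def leximin_compare(x, y):
    """Compare two finite utility vectors x, y by leximin: sort each
    ascending and compare lexicographically (the worst-off position
    first). Returns 1 if x ranks above y, -1 if below, 0 if indifferent."""
    xs, ys = sorted(x), sorted(y)
    for a, b in zip(xs, ys):
        if a != b:
            return 1 if a > b else -1
    return 0

# For infinite streams agreeing outside a finite set of periods,
# finite anonymity lets us compare just the coordinates that differ.
x = [1, 5, 3]   # truncation of stream x on the coordinates that differ
y = [2, 2, 3]   # truncation of stream y on the same coordinates
print(leximin_compare(x, y))  # → -1 (y's worst-off period is better)
```

Note that leximin is indifferent between permutations of the same finite vector, which is exactly the finite anonymity requirement.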
Abstract:
Wind catcher systems have been employed in buildings in the Middle East for many centuries and are known by different names in different parts of the region. Recently there has been an increase in the application of this approach for natural ventilation and passive cooling in the UK and other countries. This paper presents the results of experimental wind tunnel and smoke-visualisation testing, combined with CFD modelling, to investigate the performance of the wind catcher. For this purpose, a full-scale commercial system was connected to a test room and positioned centrally in an open-boundary wind tunnel. Because much ventilation design involves the use of computational fluid dynamics, the measured performance of the system was also compared against the results of CFD analysis. Configurations included both a heated and an unheated space to determine the impact of internal heat sources on airflow rate. Good agreement between measurement and CFD analysis was obtained. Measurements showed that sufficient air change could be achieved to meet both air-quality and passive-cooling needs.
Abstract:
The authors identified several specific problems with the measurement of achievement goals in the current literature and illustrated these problems, focusing primarily on A. J. Elliot and H. A. McGregor's (2001) Achievement Goal Questionnaire (AGQ). They attended to these problems by creating the AGQ-Revised and conducting a study that examined the measure's structural validity and predictive utility with 229 (76 male, 150 female, 3 unspecified) undergraduates. The hypothesized factor and dimensional structures of the measure were confirmed and shown to be superior to a host of alternatives. The predictions were nearly uniformly supported with regard to both the antecedents (need for achievement and fear of failure) and consequences (intrinsic motivation and exam performance) of the 4 achievement goals. In discussing their work, the authors highlight the importance and value of additional precision in the area of achievement goal measurement.
Abstract:
We generalize the standard linear-response (Kubo) theory to obtain the conductivity of a system that is subject to a quantum measurement of the current. Our approach can be used to specifically elucidate how back-action inherent to quantum measurements affects electronic transport. To illustrate the utility of our general formalism, we calculate the frequency-dependent conductivity of graphene and discuss the effect of measurement-induced decoherence on its value in the dc limit. We are able to resolve an ambiguity related to the parametric dependence of the minimal conductivity.
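For context, one common form of the standard (measurement-free) Kubo current-current response that the paper generalizes can be written schematically (conventions vary; the diamagnetic contribution is omitted here):

```latex
\sigma_{\alpha\beta}(\omega)
  = \frac{1}{\hbar\omega V}\int_0^{\infty} dt\, e^{i\omega t}\,
    \left\langle \left[\hat J_\alpha(t),\, \hat J_\beta(0)\right] \right\rangle
```

where $V$ is the system volume and $\hat J_\alpha$ the current operator; the paper's generalization modifies this response to incorporate the back-action of a continuous measurement of the current.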
Abstract:
This article addresses the welfare and macroeconomic effects of fiscal policy in a framework where the government chooses tax rates and the distribution of revenues between consumption and investment. We construct and simulate a model where public consumption affects individuals' utility and public capital is an argument of the production function. The simulations suggest that by simply reallocating expenditures from consumption to investment, the government can increase the equilibrium levels of capital stock, hours worked, output and labor productivity. Furthermore, we show that the magnitude and direction of the long-run impact of fiscal policy depend on the size of the elasticity of output with respect to public capital. If this parameter is high enough, it may be the case that capital stock, within limits, increases with tax rates.
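The reallocation mechanism can be illustrated with a stylized Solow-style steady-state computation (hypothetical functional forms and parameter values, not the authors' calibrated model): output is y = k^α · g^θ, private capital k is accumulated from after-tax saving, and public capital g is financed by the share φ of tax revenue devoted to investment. Raising φ raises steady-state g, k and y:

```python
def steady_state(phi, tau=0.2, s=0.2, alpha=0.3, theta=0.1, delta=0.05):
    """Fixed-point iteration for steady-state private capital k and
    public capital g when output is y = k**alpha * g**theta.
    phi is the share of tax revenue tau*y devoted to public investment.
    Converges because alpha + theta < 1 (decreasing returns in k, g)."""
    k, g = 1.0, 1.0
    for _ in range(2000):
        y = k**alpha * g**theta
        k = s * (1 - tau) * y / delta   # private accumulation: s(1-tau)y = delta*k
        g = phi * tau * y / delta       # public accumulation: phi*tau*y = delta*g
    return k, g, k**alpha * g**theta

k_lo, g_lo, y_lo = steady_state(phi=0.1)   # most revenue consumed
k_hi, g_hi, y_hi = steady_state(phi=0.5)   # more revenue invested
print(y_hi > y_lo)  # → True: shifting revenue toward investment raises output
```

The strength of the effect is governed by θ, the elasticity of output with respect to public capital, mirroring the abstract's point that this parameter drives the sign and magnitude of the long-run impact.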
Abstract:
Nowadays, zinc oxide (ZnO) surge arresters are widely used in power systems; however, a large number of silicon carbide (SiC) surge arresters are still in service in the utilities. Since it is not possible to replace all SiC surge arresters in a short time period, it is necessary to review maintenance programs to take into account the surge arresters that are most degraded. In this context, a research project was established between the University of Sao Paulo and the electrical utility CTEEP, aimed at investigating its SiC surge arresters. This work shows that leakage current measurement, a diagnostic method for ZnO surge arresters, can also provide useful information on the condition of SiC surge arresters. Analysis of the amplitude and distortion of the leakage current, also considering thermovision measurements, resulted in a better evaluation of the SiC surge arresters.
Abstract:
The objective of this pilot investigation was to evaluate the utility and precision of existing limited cone-beam computed tomography (CBCT) scans in measuring the endodontic working length, and to compare them with standard clinical procedures.
Abstract:
We propose a new method for fitting proportional hazards models with error-prone covariates. Regression coefficients are estimated by solving an estimating equation that is the average of the partial likelihood scores based on imputed true covariates. For the purpose of imputation, a linear spline model is assumed on the baseline hazard. We discuss consistency and asymptotic normality of the resulting estimators, and propose a stochastic approximation scheme to obtain the estimates. The algorithm is easy to implement, and reduces to the ordinary Cox partial likelihood approach when the measurement error has a degenerate distribution. Simulations indicate high efficiency and robustness. We consider the special case where error-prone replicates are available on the unobserved true covariates. As expected, increasing the number of replicates for the unobserved covariates increases efficiency and reduces bias. We illustrate the practical utility of the proposed method with an Eastern Cooperative Oncology Group clinical trial where a genetic marker, c-myc expression level, is subject to measurement error.
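The replicate result has a simple intuition that can be seen even in a toy linear analogue (a simulated sketch, not the authors' Cox estimator): averaging r error-prone replicates divides the measurement-error variance σ_u² by r, so the classical attenuation factor 1/(1 + σ_u²/r) moves toward 1 and the bias shrinks:

```python
import random

random.seed(0)

def slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

n, beta, sigma_u = 5000, 1.0, 1.0
x_true = [random.gauss(0, 1) for _ in range(n)]
y = [beta * x + random.gauss(0, 0.1) for x in x_true]

def fit_with_replicates(r):
    # observe r noisy replicates of each true covariate, average them
    w = [sum(x + random.gauss(0, sigma_u) for _ in range(r)) / r
         for x in x_true]
    return slope(w, y)

b1, b5 = fit_with_replicates(1), fit_with_replicates(5)
# classical attenuation: E[b_r] ≈ beta / (1 + sigma_u**2 / r)
print(abs(b5 - beta) < abs(b1 - beta))  # → True: more replicates, less bias
```

With one replicate the slope is attenuated toward beta/2 here; with five it recovers to roughly 0.83·beta, matching the attenuation formula in the comment.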
Abstract:
Firn microstructure is accurately characterized using images obtained from scanning electron microscopy (SEM). Visibly etched grain boundaries within images are used to create a skeleton outline of the microstructure. A pixel-counting utility is applied to the outline to determine grain area. Firn grain sizes calculated using the technique described here are compared to those calculated using the techniques of Gow (1969) and Gay and Weiss (1999) on samples of the same material, and are found to be substantially smaller. The differences in grain size between the techniques are attributed to sampling deficiencies (e.g. the inclusion of pore filler in the grain area) in the earlier methods. The new technique offers the advantages of greater accuracy and the ability to determine individual components of the microstructure (grain and pore), which have important applications in ice-core analyses. The new method is validated by calculating activation energies of grain-boundary diffusion using predicted values based on the ratio of grain-size measurements between the new and existing techniques. The resulting activation energy falls within the range of values previously reported for firn/ice.
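The pixel-counting step can be sketched as a connected-component count over a binarized skeleton image (a toy grid stands in for the SEM-derived outline; 1 = grain interior, 0 = etched boundary or pore; the helper name is illustrative):

```python
from collections import deque

def grain_areas(grid):
    """Return the pixel area of each 4-connected region of 1s
    (grain interiors); 0s mark skeletonized boundaries and pore space."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    areas = []
    for i in range(rows):
        for j in range(cols):
            if grid[i][j] == 1 and not seen[i][j]:
                area, q = 0, deque([(i, j)])   # BFS flood fill from a new grain
                seen[i][j] = True
                while q:
                    r, c = q.popleft()
                    area += 1
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if 0 <= nr < rows and 0 <= nc < cols \
                                and grid[nr][nc] == 1 and not seen[nr][nc]:
                            seen[nr][nc] = True
                            q.append((nr, nc))
                areas.append(area)
    return areas

grid = [
    [1, 1, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 1],
]
print(grain_areas(grid))  # → [4, 3]
```

Because pore pixels are excluded by the 0 mask, the count avoids the pore-filler inclusion the abstract attributes to the earlier techniques.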
Abstract:
BACKGROUND: In clinical practice, the high-dose ACTH stimulation test (HDT) is frequently used in the assessment of adrenal insufficiency (AI). However, there is uncertainty regarding the optimal time points and number of blood samplings. The present study compared the utility of a single cortisol value taken either 30 or 60 minutes after ACTH stimulation with the traditional interpretation of the HDT. METHODS: Retrospective analysis of 73 HDTs performed at a single tertiary endocrine centre. Serum cortisol was measured at baseline and at 30 and 60 minutes after intravenous administration of 250 µg synthetic ACTH1-24. AI was defined as a stimulated cortisol level <550 nmol/l. RESULTS: Twenty patients (27.4%) showed an insufficient rise in serum cortisol using traditional HDT criteria and were diagnosed with AI. Ten individuals showed insufficient cortisol values after 30 minutes, rising to sufficient levels at 60 minutes. All patients with an insufficient cortisol response after 60 minutes also had an insufficient result after 30 minutes. The cortisol value taken after 30 minutes did not add incremental diagnostic value in any of the cases under investigation compared with the 60-minute sample. CONCLUSIONS: Based on the findings of the present analysis, the utility of a cortisol measurement 30 minutes after high-dose ACTH injection was low and did not add incremental diagnostic value to a single measurement after 60 minutes.
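The two readings compared in the study can be written as a small helper (assuming, as an illustration, that the traditional interpretation classifies AI from the peak of the two post-stimulation samples):

```python
CUTOFF_NMOL_L = 550  # stimulated cortisol threshold used in the study

def hdt_is_ai(cortisol_30, cortisol_60, traditional=True):
    """Classify adrenal insufficiency from post-ACTH cortisol (nmol/l).
    Traditional reading (assumption): peak of both samples below cutoff.
    Simplified reading evaluated by the study: 60-minute value alone."""
    peak = max(cortisol_30, cortisol_60) if traditional else cortisol_60
    return peak < CUTOFF_NMOL_L

# a patient insufficient at 30 min but sufficient at 60 min is not AI,
# and both readings agree — the 30-minute sample adds nothing here
print(hdt_is_ai(480, 610))         # → False
print(hdt_is_ai(480, 610, False))  # → False
```

The study's finding that every 60-minute failure was also a 30-minute failure is what makes the two readings coincide on all 73 tests.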
Abstract:
REASONS FOR PERFORMING STUDY: The diagnosis of equine back disorders is challenging. Objectively determining movement of the vertebral column may therefore be of value in a clinical setting. OBJECTIVES: To establish whether surface-mounted inertial measurement units (IMUs) can be used to establish normal values for range of motion (ROM) of the vertebral column in a uniform population of horses trotting under different conditions. STUDY DESIGN: Vertebral ROM was established in Franches-Montagnes stallions and a general population of horses, and the variability in measurements was compared between the two groups. Repeatability and the influence of specific exercise conditions on ROM were assessed. Finally, attempts were made to explain the findings of the study through the evaluation of factors that might influence ROM. METHODS: Dorsoventral (DV) and mediolateral (ML) vertebral ROM was measured at a trot under different exercise conditions in 27 Franches-Montagnes stallions and six general population horses using IMUs distributed over the vertebral column. RESULTS: Variability in the ROM measurements was significantly higher for general population horses than for Franches-Montagnes stallions (both DV and ML ROM). Repeatability was strong to very strong for DV measurements and moderate for ML measurements. Trotting under saddle significantly reduced the ROM, with sitting trot resulting in a significantly lower ROM than rising trot. Age is unlikely to explain the low variability in vertebral ROM recorded in the Franches-Montagnes horses, which may instead be associated with conformational factors. CONCLUSIONS: It was possible to establish a normal vertebral ROM for a group of Franches-Montagnes stallions. While within-breed variation was low in this population, further studies are necessary to determine variation in vertebral ROM in other breeds and to assess the utility of such measurements for the diagnosis of equine back disorders.
Abstract:
Two studies among college students were conducted to evaluate appropriate measurement methods for etiological research on computing-related upper extremity musculoskeletal disorders (UEMSDs). A cross-sectional study among 100 graduate students evaluated the utility of symptom surveys (a VAS scale and a 5-point Likert scale) compared with two UEMSD clinical classification systems (the Gerr and Moore protocols). The two symptom measures were highly concordant (Lin's rho = 0.54; Spearman's r = 0.72); the two clinical protocols were moderately concordant (Cohen's kappa = 0.50). Sensitivity and specificity, summarised by Youden's J statistic, did not reveal much agreement between the symptom surveys and the clinical examinations. It cannot be concluded that self-report symptom surveys can be used as surrogates for clinical examinations. A pilot repeated-measures study conducted among 30 undergraduate students evaluated computing exposure measurement methods. Key findings: symptoms varied over time, and the odds of experiencing symptoms increased with every hour of computer use (adjOR = 1.1, p < .10) and with every stretch break taken (adjOR = 1.3, p < .10). When posture was measured using the Computer Use Checklist, a positive association with symptoms was observed (adjOR = 1.3, p < .10), while measuring posture using a modified Rapid Upper Limb Assessment produced unexpected and inconsistent associations. The findings were inconclusive in identifying an appropriate posture assessment or a superior conceptualization of computer use exposure. A cross-sectional study of 166 graduate students evaluated their comparability to the undergraduate students given the College Computing & Health surveys. Fifty-five percent reported computing-related pain and functional limitations. Years of computer use in graduate school, and the number of years in school with weekly computer use of ≥ 10 hours, were associated with pain within an hour of computing in logistic regression analyses. The findings are consistent with current literature on both undergraduate and graduate students.
Abstract:
A simple and inexpensive method is described for analysis of uranium (U) activity and mass in water by liquid scintillation counting using α/β discrimination. This method appears to offer a solution to the need for an inexpensive protocol for monitoring U activity and mass simultaneously, and an alternative to the potential inaccuracy involved when depending on the mass-to-activity conversion factor or activity screen. U is extracted virtually quantitatively into 20 ml of extractive scintillator from a 1 L aliquot of water acidified to less than pH 2. After phase separation, the sample is counted for a 20-minute screening count with a minimum detection level of 0.27 pCi/L. α-particle emissions from the extracted U are counted with close to 100% efficiency with a Beckman LS6000 LL liquid scintillation counter equipped with pulse-shape discrimination electronics. Samples with activities higher than 10 pCi/L are recounted for 500-1000 minutes for isotopic analysis. Isotopic analysis uses events that are automatically stored in spectral files and transferred to a computer during assay. The data can be transferred to a commercially available spreadsheet and retrieved for examination or data manipulation. Values for three readily observable spectral features can be rapidly identified by data examination and substituted into a simple formula to obtain the ²³⁴U/²³⁸U ratio for most samples. U mass is calculated by substituting the isotopic ratio value into a simple equation. The utility of this method for the proposed compliance monitoring of U in public drinking water supplies was field tested with a survey of drinking water from Texas supplies previously known to contain elevated levels of gross α activity.
U concentrations in 32 samples from 27 drinking water supplies ranged from 0.26 to 65.5 pCi/L, with seven samples exceeding the proposed Maximum Contaminant Level of 20 µg/L. Four exceeded the proposed activity screening level of 30 pCi/L. Isotopic ratios ranged from 0.87 to 41.8, while one sample contained ²³⁴U activity of 34.6 pCi/L in the complete absence of its parent, ²³⁸U. U mass in the samples with elevated activity ranged from 0.0 to 103 µg/L. A limited test of screening surface waters and groundwaters for contamination by U from waste sites and natural processes was also successful.
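The mass step can be illustrated with standard physics (the abstract does not reproduce its formula, so this sketch uses textbook relations and hypothetical sample numbers): the specific activity of ²³⁸U follows from its half-life, the ²³⁸U activity is the total α activity divided by (1 + R) for isotopic ratio R = A(²³⁴U)/A(²³⁸U), and essentially all the mass is carried by ²³⁸U:

```python
import math

# Standard physical constants (assumed values, not from the paper)
AVOGADRO = 6.022e23
T_HALF_U238_YR = 4.468e9      # half-life of 238U in years
SECONDS_PER_YEAR = 3.156e7
BQ_PER_PCI = 0.037

def specific_activity_pci_per_ug(t_half_yr, atomic_mass):
    """Specific activity = ln2 * N_A / (T_half * M), converted to pCi/ug."""
    bq_per_g = math.log(2) * AVOGADRO / (t_half_yr * SECONDS_PER_YEAR * atomic_mass)
    return bq_per_g / BQ_PER_PCI / 1e6   # Bq/g -> pCi/ug

SA_U238 = specific_activity_pci_per_ug(T_HALF_U238_YR, 238.0)  # ~0.34 pCi/ug

def uranium_mass_ug_per_l(total_alpha_pci_per_l, ratio_234_to_238):
    """A(238U) = A_total / (1 + R); mass ≈ A(238U) / SA(238U)."""
    a238 = total_alpha_pci_per_l / (1.0 + ratio_234_to_238)
    return a238 / SA_U238

# hypothetical sample: 30 pCi/L total alpha at secular-equilibrium ratio R = 1
print(round(uranium_mass_ug_per_l(30.0, 1.0), 1))  # → 44.6
```

The same arithmetic explains the zero-mass sample in the survey: ²³⁴U activity with no detectable ²³⁸U (R → ∞) contributes essentially no mass.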
Abstract:
The implementation of Internet technologies has led to e-Manufacturing technologies becoming more widely used and to the development of tools for compiling, transforming and synchronising manufacturing data through the Web. In this context, a potential area for development is the extension of virtual manufacturing to performance measurement (PM) processes, a critical area for decision making and for implementing improvement actions in manufacturing. This paper proposes a PM information framework to integrate decision support systems in e-Manufacturing. Specifically, the proposed framework offers a homogeneous PM information exchange model that can be applied to decision support in an e-Manufacturing environment. Its application improves the interoperability needed in decision-making data processing tasks. It comprises three sub-systems: a data model, a PM information platform and a PM web services architecture. A practical example of data exchange for measurement processes in the area of equipment maintenance is shown to demonstrate the utility of the model.
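As a sketch of what a homogeneous PM information exchange record might look like in the equipment-maintenance example (field names here are illustrative assumptions, not the framework's actual schema), a measurement could be serialized for exchange between PM web services as:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class MaintenanceMeasurement:
    """Hypothetical PM exchange record: one measurement of one
    performance indicator for one piece of equipment."""
    equipment_id: str
    indicator: str        # e.g. a maintenance KPI such as MTBF
    value: float
    unit: str
    timestamp: str        # ISO 8601, so all sub-systems parse it alike

record = MaintenanceMeasurement("PUMP-07", "MTBF", 412.5, "h", "2024-01-15T08:30:00Z")
payload = json.dumps(asdict(record))     # what a PM web service would exchange
print(json.loads(payload)["indicator"])  # → MTBF
```

A shared, typed record like this is the point of a homogeneous exchange model: every decision-support consumer can deserialize the same fields without per-source data wrangling.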