960 results for Work Measurement


Relevance:

30.00%

Publisher:

Abstract:

PURPOSE: Precise temperature measurements in the magnetic field are indispensable for MR safety studies and for temperature calibration during MR-guided thermotherapy. In this work, the interference of two commonly used fiber-optical temperature measurement systems with the static magnetic field B0 was determined. METHODS: Two fiber-optical temperature measurement systems, a GaAs semiconductor and a phosphorescent phosphor ceramic, were compared for temperature measurements in B0. The probes and a glass thermometer for reference were placed in an MR-compatible tube phantom within a water bath. Temperature measurements were carried out on three different MR systems covering static magnetic fields up to B0 = 9.4 T, and water temperatures were varied between 25°C and 65°C. RESULTS: The GaAs probe significantly underestimated absolute temperatures by an amount related to the square of B0. A maximum difference of ΔT = -4.6°C was seen at 9.4 T. No systematic temperature difference was found with the phosphor ceramic probe. For both systems, the measurements did not depend on the orientation of the sensor relative to B0. CONCLUSION: Temperature measurements with the phosphor ceramic probe are immune to magnetic fields up to 9.4 T, whereas the GaAs probes require either recalibration inside the MR system or a correction based on the square of B0. Magn Reson Med, 2014. © 2014 Wiley Periodicals, Inc.
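The quadratic correction proposed in the conclusion can be sketched as follows. The coefficient is back-derived from the single reported maximum deviation (-4.6 °C at 9.4 T) and is purely illustrative, not a calibrated value.

```python
# Hypothetical correction for the GaAs probe's field-dependent offset.
# K_GAAS is back-derived from the reported maximum deviation
# (-4.6 C at B0 = 9.4 T); it is illustrative, not a calibrated constant.
K_GAAS = 4.6 / 9.4 ** 2  # ~0.052 C per T^2

def correct_gaas_temperature(t_measured_c: float, b0_tesla: float) -> float:
    """Add back the underestimation: T_true = T_measured + k * B0^2."""
    return t_measured_c + K_GAAS * b0_tesla ** 2
```

At 9.4 T this adds back the full 4.6 °C; at 0 T it leaves the reading unchanged.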

Relevance:

30.00%

Publisher:

Abstract:

Research has shown that physical activity serves a preventive function against the development of several major chronic diseases. However, studying physical activity and its health benefits is difficult because of the complexity of measuring physical activity. The overall aim of this research is to contribute to the knowledge of both the correlates and the measurement of physical activity. Data from the Women On The Move study were used (n = 260), and the results are presented in three papers. The first paper focuses on the measurement of physical activity and compares an alternative coding method with the standard coding method for calculating energy expenditure from a 7-day activity diary. Results indicate that the alternative coding scheme can produce results similar to the standard coding in terms of total activity expenditure. Even though agreement could not be achieved by dimension, the study lays the groundwork for a coding system that saves a considerable amount of time in coding activity and can estimate expenditure more accurately for activities performed at varying intensity levels. The second paper investigates intra-day variability in physical activity by estimating the variation in energy expenditure for workers and non-workers and identifying the number of days of diary self-report necessary to reliably estimate activity. The results indicate that 8 days of activity diaries are needed to reliably estimate total activity for individuals who do not work and 12 days for those who work. The days of diary self-report required by dimension range from 6 to 16 for those who do not work and from 6 to 113 for those who work. The final paper presents findings on the relationship between daily living activity and the Type A behavior pattern. Significant findings are observed for total activity and leisure activity with the Temperament Scale summary score. Significant findings are also observed for total activity, household chores, work, leisure activity, exercise, and inactivity with one or more of the individual items on the Temperament Scale. However, even though some significant findings were observed, the overall models did not reveal meaningful associations.
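One common way to arrive at such "days needed" figures is the Spearman-Brown prophecy formula. The sketch below assumes that approach with a hypothetical single-day reliability; the abstract does not state which computation the study actually used.

```python
# Spearman-Brown prophecy formula: number of diary days needed so that the
# mean over days reaches a target reliability, given the reliability of a
# single day. The single-day reliabilities used in any example call are
# hypothetical; the study's actual values are not reported in the abstract.
import math

def days_needed(single_day_r: float, target_r: float = 0.8) -> int:
    """Days of diary self-report needed to reach target reliability."""
    n = target_r * (1 - single_day_r) / (single_day_r * (1 - target_r))
    return math.ceil(round(n, 9))  # round first to absorb float noise
```

For instance, a hypothetical single-day reliability of 0.25 would imply 12 days for a 0.8 target, coincidentally matching the figure reported for workers.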

Relevance:

30.00%

Publisher:

Abstract:

Arterial spin labeling (ASL) is a technique for noninvasively measuring cerebral perfusion using magnetic resonance imaging. Clinical applications of ASL include functional activation studies, evaluation of the effect of pharmaceuticals on perfusion, and assessment of cerebrovascular disease, stroke, and brain tumors. The use of ASL in the clinic has been limited by poor image quality when large anatomic coverage is required and by the time required for data acquisition and processing. This research sought to address these difficulties by optimizing the ASL acquisition and processing schemes. To improve data acquisition, optimal acquisition parameters were determined through simulations, phantom studies and in vivo measurements. The scan time for ASL data acquisition was limited to fifteen minutes to reduce potential subject motion. A processing scheme was implemented that rapidly produced regional cerebral blood flow (rCBF) maps with minimal user input. To provide a measure of the precision of the rCBF values produced by ASL, bootstrap analysis was performed on a representative data set. The bootstrap analysis of single gray and white matter voxels yielded coefficients of variation of 6.7% and 29% respectively, implying that the calculated rCBF value is far more precise for gray matter than for white matter. Additionally, bootstrap analysis was performed to investigate the sensitivity of the rCBF data to the input parameters and to provide a quantitative comparison of several existing perfusion models. This study guided the selection of the optimum perfusion quantification model for further experiments. The optimized ASL acquisition and processing schemes were evaluated with two ASL acquisitions on each of five normal subjects. The gray-to-white matter rCBF ratios for nine of the ten acquisitions were within ±10% of 2.6 and none were statistically different from 2.6, the typical ratio produced by a variety of quantitative perfusion techniques. Overall, this work produced an ASL data acquisition and processing technique for quantitative perfusion and functional activation studies, while revealing the limitations of the technique through bootstrap analysis.
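A bootstrap coefficient-of-variation computation of the kind described above can be sketched as follows, using synthetic data in place of the study's voxel measurements.

```python
# Bootstrap estimate of the coefficient of variation (std/mean) of a
# voxel's mean rCBF. The "measurements" below are synthetic stand-ins;
# the study's actual data are not available from the abstract.
import random
import statistics

def bootstrap_cv(samples, n_boot=2000, seed=0):
    """CV of the bootstrap distribution of the sample mean."""
    rng = random.Random(seed)
    k = len(samples)
    boot_means = [statistics.fmean(rng.choices(samples, k=k))
                  for _ in range(n_boot)]
    return statistics.stdev(boot_means) / statistics.fmean(boot_means)

# Synthetic repeated rCBF measurements for one gray-matter voxel.
rng = random.Random(42)
gray_voxel = [rng.gauss(55.0, 6.0) for _ in range(40)]
gray_cv = bootstrap_cv(gray_voxel)
```

With a fixed seed the estimate is reproducible, which is useful when comparing voxels or perfusion models.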

Relevance:

30.00%

Publisher:

Abstract:

This small pilot study compared the effectiveness of two interventions to improve automaticity with basic addition facts, Taped Problems (TP) and Cover, Copy, Compare (CCC), in students aged 6-10. Automaticity was measured using Mathematics Curriculum-Based Measurement (M-CBM) at pretest, after 10 days, and after 20 days of intervention. Our hypothesis was that the TP group would gain higher levels of automaticity more quickly than the CCC and control groups. However, when gain scores were compared, no significant differences were found between groups. Limitations of the study include low treatment integrity and the short duration of the intervention.
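The gain-score comparison can be sketched as follows; the scores in any example are synthetic, since the study's raw data are not given.

```python
# Gain-score comparison: each student's gain is posttest minus pretest,
# and groups are compared on mean gains. All scores here are synthetic.
import statistics

def gain_scores(pretest, posttest):
    """Per-student gains from paired pretest/posttest scores."""
    return [post - pre for pre, post in zip(pretest, posttest)]

def mean_gain_difference(group_a_gains, group_b_gains):
    """Difference in mean gain between two groups (A minus B)."""
    return statistics.fmean(group_a_gains) - statistics.fmean(group_b_gains)
```

A significance test (e.g. an independent-samples t-test on the gains) would then decide whether such a difference is distinguishable from zero.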

Relevance:

30.00%

Publisher:

Abstract:

The Work Limitations Questionnaire (WLQ) is used to determine the amount of work loss and lost productivity that stem from certain health conditions, including rheumatoid arthritis and cancer. The questionnaire is currently scored using methodology from Classical Test Theory. Item Response Theory (IRT), by contrast, models the probability of each item response as a function of an underlying latent trait. This study sought to determine the validity of using IRT to analyze data from the WLQ. Item responses from 572 employed adults with dysthymia, major depressive disorder (MDD), double depression (both dysthymia and MDD), rheumatoid arthritis, as well as healthy individuals, were used to assess the validity of the IRT approach (Adler et al., 2006). PARSCALE, IRT software from Scientific Software International, Inc., was used to calculate estimates of work limitations based on the item responses from the WLQ. These estimates, also known as ability estimates, were then correlated with the raw scores calculated as the sum of all item responses. Concurrent validity, under which a measurement is considered valid if its correlation with a validated measurement is at least .90, was used to judge the IRT methodology for the WLQ. Ability estimates from IRT were found to be fairly highly correlated with the raw scores from the WLQ (above .80). However, the only subscale with a correlation high enough for IRT to be considered valid was the time-management subscale (r = .90). The other subscales, mental/interpersonal, physical, and output, did not produce valid IRT ability estimates. These lower-than-expected correlations can be partly explained by the outliers found in the sample. In addition, acquiescent-responding (AR) bias, caused by the tendency of people to respond the same way to every question on a questionnaire, and the multidimensionality of the questionnaire (the WLQ comprises four dimensions and thus four different latent variables) probably had a major impact on the IRT estimates. It is also possible that the mental/interpersonal dimension violated the monotonicity assumption of IRT, causing PARSCALE to fail to run for these estimates; this assumption should be checked for that dimension. Finally, the use of multidimensional IRT methods would most likely remove the AR bias and increase the validity of using IRT to analyze data from the WLQ.
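The concurrent-validity criterion used above (r ≥ .90 against the validated raw scores) can be sketched as follows; the ability estimates and raw scores below are hypothetical stand-ins for PARSCALE output, not the study's data.

```python
# Concurrent-validity check: a new measure is accepted as valid if its
# Pearson correlation with the validated reference reaches a threshold.
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def concurrent_validity(new_scores, reference_scores, threshold=0.90):
    """Return (r, r >= threshold) for the new measure vs. the reference."""
    r = pearson(new_scores, reference_scores)
    return r, r >= threshold

theta = [-1.2, -0.5, 0.0, 0.4, 0.9, 1.5]  # hypothetical IRT ability estimates
raw = [4, 7, 10, 12, 15, 19]              # hypothetical WLQ raw scores
r, valid = concurrent_validity(theta, raw)
```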

Relevance:

30.00%

Publisher:

Abstract:

For the past ten years, human work and employment in developed countries have been the subject of intense reflection, as a consequence of the deep changes in the productive system. In this chapter we first analyze the concept of work and its evolution, drawing on recent research. Second, we review the notions of activity, work and employment, as well as the definition and measurement of unemployment, which is a relatively recent notion. The third part deals with the role of work in human life, and finally we examine the different alternatives regarding activity and specific work contracts, proposals that nowadays replace the notion of "full employment" in France.

Relevance:

30.00%

Publisher:

Abstract:

One of the most significant aspects of a building’s acoustic behavior is the airborne sound insulation of the room façades, since this determines the protection of its inhabitants against environmental noise. For this reason, authorities in most countries have established in their acoustic regulations for buildings the minimum value of sound insulation that must be respected for façades. In order to verify compliance with legal requirements it is usual to perform acoustic measurements in the finished buildings and then compare the measurement results with the established limits. Since there is always a certain measurement uncertainty, this uncertainty must be calculated and taken into account in order to ensure compliance with specifications. The most commonly used method for measuring sound insulation on façades is the so-called Global Loudspeaker Method, specified in ISO 140-5:1998. This method uses a loudspeaker placed outside the building as a sound source. The loudspeaker directivity has a significant influence on the measurement results, and these results may change noticeably by choosing different loudspeakers, even though they all fulfill the directivity requirements of ISO 140-5. This work analyzes the influence of the loudspeaker directivity on the results of façade sound insulation measurement, and determines its contribution to measurement uncertainty. The theoretical analysis is experimentally validated by means of an intermediate precision test according to ISO 5725-3:1994, which compares the values of sound insulation obtained for a façade using various loudspeakers with different directivities.
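A minimal sketch of how measurement uncertainty can be folded into the compliance decision described above. The guarded-acceptance rule and all numbers are illustrative assumptions, not values from the study.

```python
# Guarded acceptance: a facade complies only if the measured insulation
# minus the expanded uncertainty (coverage factor k times the standard
# uncertainty) still meets the legal limit. Numbers are illustrative.

def complies(measured_db: float, limit_db: float,
             std_uncertainty_db: float, coverage_k: float = 2.0) -> bool:
    """True if measured - k*u still reaches the required insulation."""
    return measured_db - coverage_k * std_uncertainty_db >= limit_db
```

Under this rule a larger loudspeaker-related uncertainty directly shrinks the margin available to demonstrate compliance.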

Relevance:

30.00%

Publisher:

Abstract:

One important task in the design of an antenna is to carry out an analysis to find the antenna characteristics that best fulfill the specifications fixed by the application. After that, a prototype is manufactured, and the next stage in the design process is to check whether its radiation pattern differs from the designed one. Besides the radiation pattern, other radiation parameters such as directivity, gain, impedance, beamwidth, efficiency and polarization must also be evaluated. For this purpose, accurate antenna measurement techniques are needed in order to know exactly the actual electromagnetic behavior of the antenna under test. For this reason, most measurements are performed in anechoic chambers: closed, normally shielded areas covered by electromagnetic absorbing material, which simulate free-space propagation conditions. Moreover, these facilities can be used regardless of weather conditions and allow measurements free from interference. Despite all the advantages of anechoic chambers, the results obtained from both far-field and near-field measurements are inevitably affected by errors. Thus, the main objective of this Thesis is to propose algorithms that improve the quality of the results obtained in antenna measurements by using post-processing techniques, without requiring additional measurements. First, a thorough review of the state of the art was carried out in order to give a general view of the possibilities for characterizing or reducing the effects of errors in antenna measurements. Then, new methods to reduce the unwanted effects of four of the most common errors in antenna measurements are described and validated both theoretically and numerically. The basis of all of them is the same: to perform a transformation from the measurement surface to another domain where there is enough information to easily remove the contribution of the errors.
The four errors analyzed are noise, reflections, truncation errors and leakage, and the tools used to suppress them are mainly source reconstruction techniques, spatial and modal filtering, and iterative algorithms to extrapolate functions. The main idea of all the methods is therefore to modify the classical near-field-to-far-field transformations by including additional steps with which errors can be greatly suppressed. Moreover, the proposed methods are not computationally complex and, because they are applied in post-processing, require no additional measurements. Noise is the most widely studied error in this Thesis, for which a total of three alternatives are proposed to filter out an important noise contribution before obtaining the far-field pattern. The first is based on modal filtering. The second alternative uses a source reconstruction technique to obtain the extreme near-field, where it is possible to apply a spatial filtering. The last one back-propagates the measured field to a surface with the same geometry as the measurement surface but closer to the AUT, and then also applies a spatial filtering. All the alternatives are analyzed in the three most common near-field systems, including comprehensive statistical noise analyses in order to deduce the signal-to-noise ratio improvement achieved in each case. The method to suppress reflections in antenna measurements is also based on a source reconstruction technique, and the main idea is to reconstruct the field over a surface larger than the antenna aperture in order to identify and later suppress the virtual sources related to the reflected waves. The truncation error present in the results obtained from planar, cylindrical and partial spherical near-field measurements is the third error analyzed in this Thesis.
The method to reduce this error is based on an iterative algorithm to extrapolate the reliable region of the far-field pattern from knowledge of the field distribution on the AUT plane. The proper termination point of this iterative algorithm, as well as other critical aspects of the method, are also studied. The last part of this work is dedicated to the detection and suppression of the two most common leakage sources in antenna measurements. A first method estimates the leakage bias constant added by the receiver’s quadrature detector to every near-field datum and then suppresses its effect on the far-field pattern. The second method can be divided into two parts: the first finds the position of the faulty component that radiates or receives unwanted radiation, making its identification within the measurement environment and its later substitution easier; the second is able to computationally remove the leakage effect without requiring the substitution of the faulty component.
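The modal-filtering idea for planar near-field noise suppression can be sketched as follows: propagating signal lives inside the visible region (kx² + ky² ≤ k0²) of the plane-wave spectrum, so modes outside it carry only noise and evanescent content and can be zeroed. Grid size and wavelength below are illustrative assumptions, not values from the Thesis.

```python
# Minimal sketch of modal (plane-wave spectrum) filtering for planar
# near-field data: transform to the modal domain, zero the modes outside
# the visible region kx^2 + ky^2 <= k0^2, and transform back.
import numpy as np

def modal_filter(field: np.ndarray, dx: float, wavelength: float) -> np.ndarray:
    """Suppress modes outside the visible region of a sampled planar field."""
    n = field.shape[0]
    k0 = 2 * np.pi / wavelength                   # free-space wavenumber
    kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)      # modal wavenumbers
    kxx, kyy = np.meshgrid(kx, kx, indexing="ij")
    spectrum = np.fft.fft2(field)
    spectrum[kxx ** 2 + kyy ** 2 > k0 ** 2] = 0.0  # noise-only modes
    return np.fft.ifft2(spectrum)
```

A smooth (propagating) field passes through unchanged, while spatially alternating noise at the sampling limit is removed entirely.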

Relevance:

30.00%

Publisher:

Abstract:

So far, the majority of reports on on-line measurement have considered soil properties with direct spectral responses in near-infrared spectroscopy (NIRS). This work reports on the results of on-line measurement of soil properties with indirect spectral responses, namely pH, cation exchange capacity (CEC), exchangeable calcium (Caex) and exchangeable magnesium (Mgex), in one field in Bedfordshire in the UK. The on-line sensor consisted of a subsoiler coupled with an AgroSpec mobile, fibre-type, visible and near-infrared (vis–NIR) spectrophotometer (tec5 Technology for Spectroscopy, Germany), with a measurement range of 305–2200 nm, to acquire soil spectra in diffuse reflectance mode. General calibration models for the studied soil properties were developed with partial least squares regression (PLSR) with leave-one-out cross-validation, using spectra measured under non-mobile laboratory conditions of 160 soil samples collected from different fields on four farms in Europe, namely in the Czech Republic, Denmark, the Netherlands and the UK. A group of 25 samples independent of the calibration set was used as the validation set. Higher accuracy was obtained for laboratory scanning than for on-line scanning of the 25 independent samples. The prediction accuracy for the laboratory and on-line measurements was classified as excellent/very good for pH (RPD = 2.69 and 2.14 and r2 = 0.86 and 0.78, respectively), and moderately good for CEC (RPD = 1.77 and 1.61 and r2 = 0.68 and 0.62, respectively) and Mgex (RPD = 1.72 and 1.49 and r2 = 0.66 and 0.67, respectively). For Caex, very good accuracy was obtained with the laboratory method (RPD = 2.19 and r2 = 0.86), compared with the poor accuracy of the on-line method (RPD = 1.30 and r2 = 0.61). The ability to collect a large number of data points per field (about 12,800 points over 21 ha) and to simultaneously analyze several soil properties without direct spectral response in the NIR range, at relatively high operational speed and with appreciable accuracy, encourages the recommendation of the on-line measurement system for site-specific fertilisation.
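The RPD figures quoted above are the ratio of the standard deviation of the reference values to the root-mean-square error of prediction; a minimal computation is sketched below (the classification bands in the comment are common rules of thumb, not values from the study).

```python
# RPD (ratio of performance to deviation) for a calibration model:
# RPD = SD(reference) / RMSEP. Rough conventional bands: > 2 very good,
# 1.4-2 moderate, < 1.4 poor.
import math
import statistics

def rpd(reference, predicted):
    """Ratio of reference standard deviation to prediction RMSE."""
    rmsep = math.sqrt(statistics.fmean(
        (r - p) ** 2 for r, p in zip(reference, predicted)))
    return statistics.stdev(reference) / rmsep
```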

Relevance:

30.00%

Publisher:

Abstract:

In this work, the influence of both the lens characteristics and the misalignment of the incident beams on roughness measurement is presented. To investigate how the focal length and diameter of the lens affect the degree of correlation between the speckle patterns, a set of experiments with different lenses is performed. In addition, roughness is measured when the two beams, separated by a given amount, do not coincide at the same point on the sample. To conclude the study, the uncertainty of the method is calculated.
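The degree of correlation between two speckle patterns, on which such methods rely, is typically quantified as a zero-mean normalized cross-correlation; a minimal sketch with synthetic patterns standing in for recorded intensity images:

```python
# Zero-mean normalized cross-correlation between two intensity patterns:
# 1.0 for identical patterns, near 0 for fully decorrelated ones.
import numpy as np

def speckle_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized correlation coefficient of two equally sized patterns."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))
```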

Relevance:

30.00%

Publisher:

Abstract:

Accuracy in liquid-hydrocarbon custody transfer is mandatory because of its great economic impact. By far the most accurate meter is the positive displacement (PD) meter. Increasing this accuracy may, however, adversely affect the cost of the custody transfer, unless simple models are developed to lower it, which is the purpose of this work. A PD meter consists of a rotating chamber of fixed volume. For each turn a pulse is counted; hence, the measured volume is the number of pulses times the volume of the chamber. This does not coincide with the real volume, so corrections have to be made. All the corrections are grouped into a meter factor. Chief among these corrections is the slippage flow. By solving the Navier-Stokes equations one can find an analytical expression for this flow. It is neither easy nor cheap to apply the slippage correction straightforwardly; therefore we have built a simple model in which slippage is regarded as a single parameter with the dimension of time. The model has been tested on several PD meters. In our careful experiments, the meter factor grows with temperature at a constant rate of 8×10⁻⁵ °C⁻¹.
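The counting model can be sketched as follows. The 8×10⁻⁵ °C⁻¹ temperature slope is the one reported above; the reference meter factor and reference temperature are hypothetical values introduced only for illustration.

```python
# PD-meter model: indicated volume = pulses * chamber volume, and a
# meter factor converts it to real volume. MF grows linearly with
# temperature at the reported 8e-5 per degC; MF0 and T0 are hypothetical.
MF_SLOPE_PER_C = 8e-5  # reported temperature coefficient of the meter factor

def real_volume(pulses: int, chamber_volume_l: float, temp_c: float,
                mf_at_ref: float = 1.0, ref_temp_c: float = 20.0) -> float:
    """Real volume in litres: meter factor * pulses * chamber volume."""
    mf = mf_at_ref * (1.0 + MF_SLOPE_PER_C * (temp_c - ref_temp_c))
    return mf * pulses * chamber_volume_l
```

At the reference temperature the indicated and real volumes coincide; a 10 °C rise shifts the meter factor by 0.08%.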

Relevance:

30.00%

Publisher:

Abstract:

In this work we have carried out diagnostics of laser-produced plasma (LPP) by means of emission spectroscopy during Laser Shock Processing (LSP). LSP has been proposed as an alternative technology, competitive with classical surface treatments. The ionic species present in the plasma, together with the electron density and temperature, provide significant indicators of the degree of surface effect on the treated material. In order to analyze these indicators, we have performed optical emission spectroscopy studies of the laser-generated plasmas in different situations. We have worked with a Q-switched Nd:YAG laser (λ = 1.06 μm, 10 ns pulse duration, running at a 10 Hz repetition rate) focused on an aluminum sample (Al2024) in air and/or under LSP conditions (water flow). The pulse energy was set at 2.5 J per pulse. The electron density has been measured using, in every case, the Stark broadening of the H Balmer α line (656.27 nm). In the case of air, this measurement has been contrasted with the value obtained from the 281.62 nm line of Al II. Special attention has been paid to the self-absorption of the spectral lines used. The measurements were taken at different delay times after the laser pulse (1–8 μs) with a time window of 1 μs. In LSP, the electron density obtained was between 10¹⁷ cm⁻³ for the shortest delays (4–6 μs) and 10¹⁶ cm⁻³ for the longest delays (7–8 μs).
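Electron density is commonly obtained from the Stark FWHM of the Hα line through the standard approximation n_e ≈ 8.02×10¹² (Δλ/α₁/₂)^(3/2) cm⁻³, with Δλ in Å and α₁/₂ a tabulated reduced half-width that itself depends weakly on temperature and density. The sketch below implements only this relation; any α value passed in must come from published tables, and the one used in testing is an illustrative placeholder.

```python
# Electron density from the Stark FWHM of a hydrogen Balmer line:
# n_e [cm^-3] = 8.02e12 * (dlambda / alpha_half) ** 1.5, with the FWHM
# dlambda in Angstrom and alpha_half the tabulated reduced half-width.
# alpha_half values must be taken from tables; none are built in here.

def electron_density(stark_fwhm_angstrom: float, alpha_half: float) -> float:
    """Approximate electron density (cm^-3) from the Stark FWHM."""
    return 8.02e12 * (stark_fwhm_angstrom / alpha_half) ** 1.5
```

Since n_e scales as the 3/2 power of the measured width, removing self-absorption (which artificially broadens the line) is essential, as the abstract notes.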