900 results for technical error of measurement
Abstract:
Presented is an accurate swimming velocity estimation method using an inertial measurement unit (IMU), which employs a simple biomechanical constraint of motion along with Gaussian process regression to deal with inherent sensor errors. Experimental validation shows a velocity RMS error of 9.0 cm/s and a high linear correlation when compared with a commercial tethered reference system. The results confirm the practicality of the presented method for estimating swimming velocity with a single low-cost, body-worn IMU.
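The abstract gives no implementation details, but the core idea (learning a correction for a drift- and bias-affected IMU velocity estimate with Gaussian process regression) can be sketched as follows. This is a minimal illustration with synthetic data and hypothetical variable names, assuming scikit-learn; it is not the authors' pipeline.

```python
# Minimal sketch: Gaussian process regression to correct an IMU-derived
# velocity estimate against a tethered reference (all names hypothetical).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

t = np.linspace(0, 30, 300)                           # time [s]
true_v = 1.2 + 0.1 * np.sin(0.5 * t)                  # "true" velocity [m/s]
drift = 0.01 * t                                      # integration drift from sensor bias
imu_v = true_v + drift + rng.normal(0, 0.05, t.size)  # raw IMU velocity estimate

# Train a GP on the error between the IMU estimate and sparse reference
# samples, then subtract the predicted error from the IMU signal.
kernel = RBF(length_scale=5.0) + WhiteKernel(noise_level=0.05**2)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(t[::10, None], (imu_v - true_v)[::10])         # sparse "calibration" points

corrected_v = imu_v - gp.predict(t[:, None])
rmse = np.sqrt(np.mean((corrected_v - true_v) ** 2))
print(f"velocity RMS error after GP correction: {rmse * 100:.1f} cm/s")
```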
Abstract:
The OLS estimator of the intergenerational earnings correlation is biased towards zero, while the instrumental variables (IV) estimator is biased upwards. The first of these results arises because of measurement error, while the latter rests on the presumption that parental education is an invalid instrument. We propose a panel data framework for quantifying the asymptotic biases of these estimators, as well as a mis-specification test for the IV estimator.
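As a hedged illustration of the two biases, the sketch below simulates a simple assumed data-generating process: classical measurement error attenuates the OLS estimate towards zero, while an instrument with a direct effect on the child's earnings (an invalid instrument) pushes the IV estimate above the true value. The model and all numbers are assumptions for illustration, not the paper's framework.

```python
# Illustrative sketch of attenuation bias (OLS) and upward bias (invalid IV).
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
beta = 0.4                                         # true intergenerational elasticity

parent_perm = rng.normal(0, 1, n)                  # permanent parental earnings
child = beta * parent_perm + rng.normal(0, 1, n)

# OLS on error-ridden single-year earnings: attenuation towards zero.
parent_obs = parent_perm + rng.normal(0, 1, n)     # classical measurement error
b_ols = np.cov(child, parent_obs)[0, 1] / np.var(parent_obs)

# IV with parental education as instrument; if education also affects the
# child directly, the instrument is invalid and IV is biased upwards.
educ = parent_perm + rng.normal(0, 0.5, n)
child_iv = child + 0.2 * educ                      # direct effect -> invalid IV
b_iv = np.cov(child_iv, educ)[0, 1] / np.cov(parent_obs, educ)[0, 1]

print(f"true {beta:.2f}  OLS {b_ols:.2f} (biased down)  IV {b_iv:.2f} (biased up)")
```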
Abstract:
PURPOSE: Awareness of being monitored can influence participants' habitual physical activity (PA) behavior. This reactivity effect may threaten the validity of PA assessment. Reports on reactivity when measuring the PA of children and adolescents have been inconsistent. The aim of this study was to investigate whether PA outcomes measured by accelerometer devices differ from measurement day to measurement day, and whether the day of the week and the day on which measurement started influence these differences. METHODS: Accelerometer data (counts per minute [cpm]) of children and adolescents (n = 2081), pooled from eight studies in Switzerland with at least 10 h of valid recording per day, were investigated for effects of measurement day, day of the week, and start day using mixed linear regression. RESULTS: The first measurement day was the most active day. Counts per minute were significantly higher than on the second to sixth days, but not the seventh day. Differences in the age-adjusted means between the first and consecutive days ranged from 23 to 45 cpm (3.6%-7.1%). In preschool children, the differences almost reached 10%. The start day significantly influenced PA outcome measures. CONCLUSIONS: Reactivity to accelerometer measurement of PA is likely to be present to an extent of approximately 5% on the first day and may introduce a relevant bias into accelerometer-based studies. In preschool children, the effects are larger than in elementary and secondary schoolchildren. As the day of the week and the start day significantly influence PA estimates, researchers should plan for at least one familiarization day in school-age children and randomly assign start days.
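A minimal sketch of this kind of day-effect analysis, assuming statsmodels and simulated data (all variable names and effect sizes hypothetical), might look like the following:

```python
# Sketch of the day-effect analysis with a mixed linear model
# (random intercept per child, fixed effects for measurement day).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_children, n_days = 200, 7

day = np.tile(np.arange(1, n_days + 1), n_children)
child = np.repeat(np.arange(n_children), n_days)
child_effect = rng.normal(0, 60, n_children)[child]   # between-child variation
reactivity = np.where(day == 1, 30, 0)                 # ~5% boost on day 1
cpm = 600 + child_effect + reactivity + rng.normal(0, 80, day.size)

df = pd.DataFrame({"cpm": cpm, "day": day.astype(str), "child": child})

# Mixed linear regression: does the first day differ from the rest?
model = smf.mixedlm("cpm ~ C(day)", df, groups=df["child"]).fit()
print(model.summary())
```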
Abstract:
The evolution of continuous traits is the central component of comparative analyses in phylogenetics, and the comparison of alternative models of trait evolution has greatly improved our understanding of the mechanisms driving phenotypic differentiation. Several factors influence the comparison of models, and here we explore the effect of random errors in trait measurement on the accuracy of model selection. We simulate trait data under a Brownian motion (BM) model and introduce different magnitudes of random measurement error. We then evaluate the resulting statistical support for this model against two alternatives: Ornstein-Uhlenbeck (OU) and accelerating/decelerating rates (ACDC). Our analyses show that even small measurement errors (10%) consistently bias model selection towards erroneous rejection of BM in favour of more parameter-rich models (most frequently the OU model). Fortunately, methods that explicitly incorporate measurement errors in phylogenetic analyses considerably improve the accuracy of model selection. These findings call for caution in interpreting model selection in comparative analyses, especially when complex models garner only modest additional support. Importantly, as measurement errors occur in most trait data sets, we suggest that measurement error should always be estimated during comparative analysis to reduce the chance of misidentifying the underlying evolutionary process.
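A rough sketch of the simulation idea, under the strong simplifying assumption of a star phylogeny (so tip values are independent), shows how a 10% measurement error inflates the observed trait variance; a real analysis would simulate along a full tree and compare BM, OU and ACDC fits with dedicated comparative-methods software.

```python
# Minimal sketch: Brownian-motion trait simulation with added measurement
# error (star-phylogeny stand-in; not the paper's full analysis).
import numpy as np

rng = np.random.default_rng(3)
n_tips, sigma2, t_root = 50, 1.0, 10.0

# Under BM on a star tree, tip values are independent N(0, sigma2 * t).
traits = rng.normal(0, np.sqrt(sigma2 * t_root), n_tips)

# Add 10% measurement error, scaled to the trait standard deviation.
me_sd = 0.10 * traits.std()
observed = traits + rng.normal(0, me_sd, n_tips)

# The extra variance at the tips is what model selection can misread as
# OU-like dynamics when measurement error is ignored.
print(f"true var {traits.var():.2f}  observed var {observed.var():.2f}")
```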
Abstract:
The multiscale finite-volume (MSFV) method is designed to reduce the computational cost of elliptic and parabolic problems with highly heterogeneous anisotropic coefficients. The reduction is achieved by splitting the original global problem into a set of local problems (with approximate local boundary conditions) coupled by a coarse global problem. It has been shown recently that the numerical errors in MSFV results can be reduced systematically with an iterative procedure that provides a conservative velocity field after any iteration step. The iterative MSFV (i-MSFV) method can be obtained with an improved (smoothed) multiscale solution to enhance the localization conditions, with a Krylov subspace method [e.g., the generalized-minimal-residual (GMRES) algorithm] preconditioned by the MSFV system, or with a combination of both. In a multiphase-flow system, a balance between accuracy and computational efficiency should be achieved by finding the minimum number of i-MSFV iterations (on pressure) necessary to achieve the desired accuracy in the saturation solution. In this work, we extend the i-MSFV method to sequential implicit simulation of time-dependent problems. To control the error of the coupled saturation/pressure system, we analyze the transport error caused by an approximate velocity field. We then propose an error-control strategy based on the residual of the pressure equation. At the beginning of the simulation, the pressure solution is iterated until a specified accuracy is achieved. To minimize the number of iterations in a multiphase-flow problem, the solution at the previous timestep is used to improve the localization assumption at the current timestep. Additional iterations are used only when the residual becomes larger than a specified threshold value. Numerical results show that only a few iterations on average are necessary to improve the MSFV results significantly, even for very challenging problems. Therefore, the proposed adaptive strategy yields efficient and accurate simulation of multiphase flow in heterogeneous porous media.
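The paper's operators are not reproduced here, but the residual-based error-control loop can be sketched schematically; the smoothing step below is only a placeholder for an actual i-MSFV iteration (e.g. MSFV-preconditioned GMRES), and all tolerances are illustrative.

```python
# Conceptual sketch of residual-based error control for an iterative
# multiscale pressure solve (placeholders, not the actual MSFV operators).
import numpy as np

def msfv_smoothing_step(p, rhs):
    """Placeholder for one i-MSFV iteration; a damped update for illustration."""
    return p + 0.5 * (rhs - p)

def pressure_residual(p, rhs):
    return np.linalg.norm(rhs - p) / np.linalg.norm(rhs)

def adaptive_pressure_solve(p_prev, rhs, tol=1e-3, max_iters=50):
    # Reuse the previous timestep's solution to improve the localization
    # assumption; iterate only while the residual exceeds the threshold.
    p = p_prev.copy()
    iters = 0
    while pressure_residual(p, rhs) > tol and iters < max_iters:
        p = msfv_smoothing_step(p, rhs)
        iters += 1
    return p, iters

rhs = np.ones(100)
p, n = adaptive_pressure_solve(np.zeros(100), rhs)
print(f"converged in {n} iterations, residual {pressure_residual(p, rhs):.1e}")
```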
Abstract:
"Technical challenges exist with infrastructure that can be addressed by nondestructive evaluation (NDE) methods, such as detecting corrosion damage to reinforcing steel that anchor concrete bridge railings to bridge road decks. Moisture and chloride ions reach the anchors along the cold joint between the rails and deck, causing corrosion that weakens the anchors and ultimately the barriers. The Center for Nondestructive Evaluation at Iowa State University has experience in development of measurement techniques and new sensors using a variety of interrogating energies. This research evaluated feasibility of three technologies — x-ray radiation, ground-penetrating radar (GPR), and magnetic flux leakage (MFL) — for detection and quantification of corrosion of embedded reinforcing steel. Controlled samples containing pristine reinforcing steel with and without epoxy and reinforcing steel with 25 percent and 50 percent section reduction were embedded in concrete at 2.5 in. deep for laboratory evaluation. Two of the techniques, GPR and MFL, were used in a limited field test on the Iowa Highway 210 Bridge over Interstate 35 in Story County. The methods provide useful and complementary information. GPR provides a rapid approach to identify reinforcing steel that has anomalous responses. MFL provides similar detection responses but could be optimized to provide more quantitative correlation to actual condition. Full implementation could use either GPR or MFL methods to identify areas of concern, followed by radiography to give a visual image of the actual condition, providing the final guidance for maintenance actions." The full 103 page report and the 2 page Tech Transfer Summary are included in this link.
Abstract:
Due to intense international competition, demanding and sophisticated customers, and diverse transformative technological change, organizations need to renew their products and services by allocating resources to research and development (R&D). Managing R&D is complex but vital for many organizations to survive in a dynamic, turbulent environment. Thus, the increased interest among decision-makers in finding the right performance measures for R&D is understandable. The measures or evaluation methods of R&D performance can be utilized for multiple purposes: for strategic control, for justifying the existence of R&D, for providing information and improving activities, as well as for motivating and benchmarking. Earlier research in the field of R&D performance analysis has generally focused either on the activities and relevant factors and dimensions (e.g. strategic perspectives, purposes of measurement, levels of analysis, types of R&D, or phases of the R&D process) prior to the selection of R&D performance measures, or on proposed principles or actual implementation of the selection or design processes of R&D performance measures or measurement systems. This study aims at integrating the consideration of essential factors and dimensions of R&D performance analysis into developed selection processes of R&D measures, which have been applied in real-world organizations. The earlier models for corporate performance measurement found in the literature are to some extent adaptable to the development of measurement systems and the selection of measures for R&D activities. However, it is necessary to emphasize the special aspects of measuring R&D performance that make new approaches, especially for R&D performance measure selection, necessary. First, the special characteristics of R&D, such as the long time lag between inputs and outcomes, as well as the overall complexity and difficult coordination of activities, influence the R&D performance analysis problems, such as the need for more systematic, objective, balanced and multi-dimensional approaches to R&D measure selection, as well as the incompatibility of R&D measurement systems with other corporate measurement systems and vice versa. Secondly, the above-mentioned characteristics and challenges bring forth the significance of the influencing factors and dimensions that need to be recognized in order to derive the selection criteria for measures and choose the right R&D metrics, which is the most crucial step in the measurement system development process. The main purpose of this study is to support the management and control of the research and development activities of organizations by increasing the understanding of R&D performance analysis, clarifying the main factors related to the selection of R&D measures, and providing novel approaches and methods for systematizing the whole strategy- and business-based selection and development process of R&D indicators. The final aim of the research is to support management in their R&D decision making with suitable, systematically chosen measures or evaluation methods of R&D performance. Thus, the emphasis in most sub-areas of the present research has been on promoting the selection and development process of R&D indicators with the help of different tools and decision support systems; i.e. the research has normative features, providing guidelines through novel types of approaches.
The gathering of data and the case studies conducted in metal and electronics industry companies, in the information and communications technology (ICT) sector, and in non-profit organizations helped to formulate a comprehensive picture of the main challenges of R&D performance analysis in different organizations. This is essential, as recognition of the most important problem areas is a crucial element of the constructive research approach utilized in this study. Multiple practical benefits regarding the defined problem areas could be found in the various constructed approaches presented in this dissertation: 1) the selection of R&D measures became more systematic compared with the empirical analysis, as it was common that no systematic approaches had been utilized in the studied organizations earlier; 2) the evaluation methods or measures of R&D chosen with the help of the developed approaches can be utilized more directly in decision-making, because of the thorough consideration of the purpose of measurement, as well as other dimensions of measurement; 3) more balance in the set of R&D measures was desired and gained through the holistic approaches to the selection processes; and 4) more objectivity was gained through organizing the selection processes, as the earlier systems were considered subjective in many organizations. Scientifically, this dissertation aims to contribute to the present body of knowledge of R&D performance analysis by facilitating dealing with the versatility and challenges of R&D performance analysis, as well as the factors and dimensions influencing the selection of R&D performance measures, and by integrating these aspects into the novel approaches, methods and tools developed for the selection processes of R&D measures, applied in real-world organizations. Throughout the research, facilitation of dealing with the versatility and challenges of R&D performance analysis, as well as the factors and dimensions influencing R&D performance measure selection, is strongly integrated with the constructed approaches. Thus, the research meets the above-mentioned purposes and objectives of the dissertation from both the scientific and the practical point of view.
Abstract:
This study investigated the surface hardening of steels via experimental tests using a multi-kilowatt fiber laser as the laser source. The influence of laser power and laser power density on the hardening effect was investigated, and a microhardness analysis of various laser-hardened steels was performed. A thermodynamic model was developed to evaluate the thermal process of the surface treatment of a wide thin steel plate with a Gaussian laser beam, and the effect of laser linear oscillation hardening (LLOS) of steel was examined. An as-rolled ferritic-pearlitic steel and a tempered martensitic steel with 0.37 wt% C content were hardened under various laser power levels and laser power densities. The optimum power density that produced the maximum hardness was found to depend on the laser power, and the effect of laser power density on the produced hardness was revealed. The surface hardness, hardened depth and required laser power density were compared between the samples. The fiber laser was briefly compared with a high-power diode laser in hardening medium-carbon steel. Microhardness (HV0.01) testing was done on seven different laser-hardened steels, including rolled steel, quenched and tempered steel, soft-annealed alloyed steel and conventionally through-hardened steel of different carbon and alloy contents. The surface hardness and hardened depth were compared among the samples. The effect of grain size on the surface hardness of ferritic-pearlitic steel and pearlitic-cementite steel was evaluated. In-grain indentation was done to measure the hardness of pearlitic and cementite structures; the macrohardness of the base material was found to be related to the microhardness of the softer phase structure. The measured microhardness values were compared with conventional macrohardness (HV5) results. A thermodynamic model was developed to calculate the temperature cycle, the Ac1 and Ac3 boundaries, the homogenization time and the cooling rate. The equations were solved numerically with an error of less than 10⁻⁸. The temperature distributions for various thicknesses were compared under different laser traverse speeds. The calculated lag of the Ac1 and Ac3 boundaries was verified by experiments done on six different steels, and the calculated thermal cycle and hardened depth were compared with measured data. Correction coefficients were applied to the model for AISI 4340 steel. AISI 4340 steel was hardened by LLOS. Equations were derived to calculate the overlapped width of adjacent tracks and the number of overlapped scans in the center of the scanned track. The effect of oscillation frequency on the hardened depth was investigated by microscopic evaluation and hardness measurement, and the homogeneity of hardness and hardened depth under different processing parameters was investigated. The hardness profiles were compared with the results obtained with conventional single-track hardening. LLOS proved to be well suited for surface hardening over a relatively large rectangular area with a considerable depth of hardening. Compared with conventional single-track scanning, LLOS produced notably smaller hardened depths, while at 40 and 100 Hz LLOS resulted in higher hardness within a depth of about 0.6 mm.
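The thesis model itself is not reproduced here, but numerically solving a laser-heating temperature cycle can be illustrated with a 1D explicit finite-difference sketch under assumed material constants and an assumed absorbed flux; the real model additionally tracks the Ac1/Ac3 boundaries, homogenization time and cooling rate.

```python
# Illustrative 1D explicit finite-difference model of a laser-heated steel
# surface (all material and beam parameters are assumptions).
import numpy as np

k, rho, cp = 45.0, 7850.0, 460.0          # steel: W/(m K), kg/m^3, J/(kg K)
alpha = k / (rho * cp)                    # thermal diffusivity [m^2/s]
L, nx = 5e-3, 101                         # 5 mm deep slab, grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha                  # below the explicit stability limit (0.5)
q_abs, t_on = 5e7, 0.05                   # absorbed flux [W/m^2], beam-on time [s]

T = np.full(nx, 20.0)                     # initial temperature [degC]
t, T_peak = 0.0, 20.0
while t < 0.2:
    Tn = T.copy()
    # Interior nodes: explicit central-difference update of the heat equation.
    T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
    # Surface node: conduction into the slab plus absorbed laser flux while on.
    q = q_abs if t < t_on else 0.0
    T[0] = Tn[0] + alpha * dt / dx**2 * (Tn[1] - Tn[0]) + q * dt / (rho * cp * dx)
    t += dt
    T_peak = max(T_peak, T[0])

print(f"peak surface temperature: {T_peak:.0f} degC (heating/cooling cycle)")
```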
Abstract:
The digital cushion is a modified subcutaneous tissue that absorbs shock during gait, assists venous return in the hoof and supports a considerable part of the body weight. Digital cushions are of particular importance in hoof pathogenesis, since they must work properly in order to prevent compression and trauma in the soft tissues. This study aimed to measure these structures and determine their arrangement. To this end, the proportions of connective, adipose and vascular tissue, of collagen fibers, and of the collagen types found in the palmar and plantar digital cushions of cattle were established, using the fore- and hindlimbs of twelve adult zebu cattle of both sexes (11 males and one female), with a 269 kg average carcass weight and no limb disorders. Fragments of the cushions were subjected to conventional histology, cut to a thickness of 4 µm and stained with Picrosirius Red. Under a digital optical microscope, the connective tissue was quantified and the collagen types differentiated using the Image Pro Plus® software, while the adipose and vascular tissues were quantified with the test-point system. The mean and standard error were estimated with the GraphPad Prism 5.0 software, and the data were then subjected to the Kolmogorov-Smirnov normality test and Student's t-test, with the significance level set at 5%, to compare the amounts of the different tissues between the fore- and hindlimbs of the studied animals. In the forelimbs, the mean ± standard error of the connective tissue proportion was 50.10% ± 1.54, of the adipose tissue 21.34% ± 1.44, and of the vascular tissue 3.43% ± 0.28. The hindlimbs presented proportions of 61.61% ± 1.47 connective tissue, 20.66% ± 1.53 adipose tissue, and 3.06% ± 0.20 vascular tissue. A significant difference (p<0.001) was detected in the connective tissue proportion between fore- and hindlimbs. Type I and type II collagen fibers presented, respectively, proportions of 31.89% and 3.9% in the forelimbs and 34.05% and 1.78% in the hindlimbs. According to the methodology used, the digital cushions showed a clear differentiation relative to adipose tissue between fore- and hindlimbs.
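The statistical comparison can be sketched as follows, assuming scipy and simulating per-limb values consistent with the reported means and standard errors (the actual raw data are not available here):

```python
# Sketch of the fore- vs hindlimb comparison: normality check, then t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 12  # animals

# Connective-tissue proportions: mean 50.10 (SE 1.54) vs 61.61 (SE 1.47);
# SD is recovered from the standard error as SE * sqrt(n).
fore = rng.normal(50.10, 1.54 * np.sqrt(n), n)
hind = rng.normal(61.61, 1.47 * np.sqrt(n), n)

# Kolmogorov-Smirnov normality test, then Student's t-test at the 5% level.
print(stats.kstest(fore, "norm", args=(fore.mean(), fore.std(ddof=1))))
t, p = stats.ttest_ind(fore, hind)
print(f"t = {t:.2f}, p = {p:.4g}")
```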
Abstract:
Wind energy has attracted outstanding expectations due to the risks of global warming and nuclear power plant accidents. Nowadays, wind farms are often constructed in areas of complex terrain. A potential wind farm location must have its site thoroughly surveyed and the wind climatology analyzed before any hardware is installed. Therefore, modeling of Atmospheric Boundary Layer (ABL) flows over complex terrains containing, e.g., hills, forest and lakes is of great interest in wind energy applications, as it can help in locating and optimizing wind farms. Numerical modeling of wind flows using Computational Fluid Dynamics (CFD) has become a popular technique during the last few decades. Due to the inherent flow variability and large-scale unsteadiness typical of ABL flows in general, and especially over complex terrains, the flow can be difficult to predict accurately enough using the Reynolds-Averaged Navier-Stokes (RANS) equations. Large-Eddy Simulation (LES) resolves the largest, and thus most important, turbulent eddies and models only the small-scale motions, which are more universal than the large eddies and therefore easier to model. LES is thus expected to be more suitable for this kind of simulation, although it is computationally more expensive than the RANS approach. With the fast development of computers and open-source CFD software in recent years, the application of LES to atmospheric flows is becoming increasingly common. The aim of this work is to simulate atmospheric flows over realistic and complex terrains by means of LES, with the evaluation of potential in-land wind park locations as the main application. Development of the LES methodology to simulate atmospheric flows over realistic terrains is reported in the thesis, which also aims at validating the LES methodology at real scale. In the thesis, LES are carried out for flow problems ranging from basic channel flows to real atmospheric flows over one of the most recent real-life complex terrain problems, the Bolund hill. All the simulations reported in the thesis are carried out using a new OpenFOAM®-based LES solver. The solver uses a 4th-order time-accurate Runge-Kutta scheme and a fractional step method. Moreover, the development of the LES methodology pays special attention to two boundary conditions: the upstream (inflow) and wall boundary conditions. The upstream boundary condition is generated using the so-called recycling technique, in which the instantaneous flow properties are sampled on a plane downstream of the inlet and mapped back to the inlet at each time step. This technique develops the upstream boundary-layer flow together with the inflow turbulence without any precursor simulation, and thus within a single computational domain. The roughness of the terrain surface is modeled by a new wall function implemented in OpenFOAM® during the thesis work. Both the recycling method and the newly implemented wall function are validated for channel flows at relatively high Reynolds number before being applied to the atmospheric flow applications. After validating the LES model on simple flows, simulations are carried out for atmospheric boundary-layer flows over two types of hills: first, two-dimensional wind-tunnel hill profiles, and second, the Bolund hill located in Roskilde Fjord, Denmark. For the two-dimensional wind-tunnel hills, the study focuses on the overall flow behavior as a function of the hill slope.
Moreover, the simulations are repeated using another wall function suitable for smooth surfaces, which already existed in OpenFOAM®, in order to study the sensitivity of the flow to the surface roughness in ABL flows. The simulated results obtained using the two wall functions are compared against wind-tunnel measurements. It is shown that LES using the implemented wall function produces overall satisfactory results for the turbulent flow over the two-dimensional hills; the prediction of the flow separation and reattachment length for the steeper hill is closer to the measurements than other numerical studies reported in the past for the same hill geometry. The field measurement campaign performed over the Bolund hill provides the most recent field-experiment dataset for the mean flow and turbulence properties, and a number of research groups have simulated the wind flow over the hill. Due to its challenging features, such as the almost vertical hill slope, it is considered an ideal experimental test case for validating micro-scale CFD models for wind energy applications. In this work, the simulated results obtained for two wind directions are compared against the field measurements. It is shown that the present LES can reproduce the complex turbulent wind flow structures over a complicated terrain such as the Bolund hill. In particular, the present LES results show the best prediction of the turbulent kinetic energy, with an average error of 24.1%, which is 43% smaller than any other model results reported in the past for the Bolund case. Finally, the validated LES methodology is demonstrated by simulating the wind flow over the existing Muukko wind farm in south-eastern Finland. The simulation is carried out for only one wind direction, and the results for the instantaneous and time-averaged wind speeds are briefly reported. The demonstration case is followed by a discussion of the practical aspects of LES for wind resource assessment over a realistic inland wind farm.
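The recycling inflow technique described above can be sketched conceptually: velocities are sampled on a plane downstream of the inlet, the mean is rescaled to the target inflow speed, and the resolved fluctuations are mapped back to the inlet at each step. The numpy stand-in below is schematic, not the OpenFOAM® implementation.

```python
# Conceptual sketch of the recycling inflow technique for LES
# (schematic stand-in for the solver; all numbers illustrative).
import numpy as np

rng = np.random.default_rng(5)
nx, ny, nz = 128, 32, 32
u = rng.normal(10.0, 1.0, (nx, ny, nz))   # streamwise velocity field
recycle_plane = nx // 4                   # sampling plane downstream of inlet
u_target_mean = 10.0                      # desired mean inflow speed

def recycle_inlet(u):
    sample = u[recycle_plane]             # instantaneous downstream sample
    fluct = sample - sample.mean()        # keep the resolved turbulence
    u[0] = u_target_mean + fluct          # rescale mean, map back to inlet
    return u

for step in range(100):
    u = recycle_inlet(u)
    # ... the solver would advance the LES equations here ...

print(f"inlet mean {u[0].mean():.2f} m/s, inlet rms {u[0].std():.2f} m/s")
```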
Abstract:
Smith-Lemli-Opitz syndrome (SLOS) is an autosomal recessive disorder due to an inborn error of cholesterol metabolism, characterized by congenital malformations, dysmorphism of multiple organs, mental retardation and delayed neuropsychomotor development resulting from deficient cholesterol biosynthesis. A defect in 3β-hydroxysteroid-Δ7-reductase (Δ7-sterol reductase), the enzyme responsible for the conversion of 7-dehydrocholesterol (7-DHC) to cholesterol, causes an increase in 7-DHC and frequently reduces plasma cholesterol levels. The clinical diagnosis of SLOS cannot always be conclusive because of the remarkable variability in the clinical expression of the disorder; confirmation by measurement of plasma 7-DHC levels is therefore needed. In the present study, we used a simple, fast and selective method based on ultraviolet spectrophotometry to measure 7-DHC in order to diagnose SLOS. 7-DHC was extracted serially from 200 µl of plasma with ethanol and n-hexane, and the absorbance at 234 and 282 nm was determined. The method was applied to negative-control plasma samples from 23 normal individuals and to samples from 6 cases of suspected SLOS. The method proved adequate and reliable, and 2 SLOS cases were diagnosed.
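The quantification step can be illustrated with the Beer-Lambert law; the molar absorptivity below is a hypothetical placeholder, since a real assay would rely on a calibration curve built from 7-DHC standards.

```python
# Sketch of an absorbance-based 7-DHC estimate via the Beer-Lambert law.
EPSILON_282 = 11_000     # L mol^-1 cm^-1 at 282 nm (assumed, for illustration)
PATH_CM = 1.0            # cuvette path length [cm]

def dhc7_concentration(a282, dilution_factor=1.0):
    """Return an estimated 7-DHC concentration in umol/L from A(282 nm)."""
    molar = a282 / (EPSILON_282 * PATH_CM)   # Beer-Lambert: A = epsilon * c * l
    return molar * dilution_factor * 1e6     # mol/L -> umol/L

# Example: absorbance 0.35 after a (hypothetical) 10-fold concentration step.
print(f"{dhc7_concentration(0.35, dilution_factor=10):.1f} umol/L")
```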
Abstract:
This research examines the concept of social entrepreneurship, a fairly new business model that has become increasingly popular in the business field in recent years. Growing environmental awareness and concrete examples of the impact created by social entrepreneurship have encouraged entrepreneurs to address social problems: society's failures are redressed through business activities. The purpose of doing business is no longer necessarily just generating profits; instead, the business is run in order to bring about social change with the profit gained from operations. Successful social entrepreneurship requires a specific nature, constant creativity and a strong desire to make a social change. It requires constant balancing between two major objectives: both financial and non-financial issues need to be considered, but not at the expense of one another. While aiming at the social purpose, the business must be run in highly competitive markets. Therefore, both factors need to be equally integrated into an organization, as they are complementary, not mutually exclusive: business does not exist without society, and society cannot go forward without business. Social entrepreneurship, its value creation, measurement tools and reporting practices are discussed in this research. An extensive theoretical basis is covered and used to support the findings from the case enterprises studied. Most attention is focused on the concept of Social Return on Investment (SROI), and the case enterprises are analyzed through the SROI process. Social enterprises are mostly small or medium sized, which naturally sets some limitations on implementing measurement tools. The question of resources requires the most attention and therefore sets the biggest constraints. However, the size of the company does not determine everything; the nature of the business and the type of social purpose always need to be considered. The mission may be so concrete and transparent that any kind of measurement would be useless. Implementing measurement tools may be of great benefit, or a huge financial burden. Thus, the very first thing to consider carefully is the actual need for measuring value creation.
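The core SROI computation discussed here reduces to a ratio of the present value of monetized social outcomes to the investment made. A minimal sketch with hypothetical figures:

```python
# Minimal sketch of a Social Return on Investment (SROI) ratio
# (all figures hypothetical).
def present_value(cashflows, rate):
    """Discount a list of yearly cashflows back to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

investment = 100_000.0
# Monetized yearly social outcomes, already adjusted for deadweight
# (what would have happened anyway) and attribution to other actors.
outcomes = [40_000, 45_000, 50_000]

sroi = present_value(outcomes, rate=0.035) / investment
print(f"SROI ratio: {sroi:.2f} : 1")   # e.g. each euro invested returns ~1.26
```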
Abstract:
We study the role of natural resource windfalls in explaining the efficiency of public expenditures. Using a rich dataset of expenditures and public good provision for 1,836 municipalities in Peru for the period 2001-2010, we estimate a non-monotonic relationship between the efficiency of public good provision and the level of natural resource transfers. Local governments that were strongly favored by the boom in mineral prices were more efficient in using fiscal windfalls, whereas those that benefited from only modest transfers were less efficient. These results can be explained by the increase in political competition associated with the boom. However, the fact that increases in efficiency were related to reductions in public good provision casts doubt on the beneficial effects of political competition in promoting efficiency.
Abstract:
Compositional data, also called multiplicative ipsative data, are common in survey research instruments in areas such as time use, budget expenditure and social networks. Compositional data are usually expressed as proportions of a total, whose sum can only be 1. Owing to their constrained nature, statistical analysis in general, and estimation of measurement quality with a confirmatory factor analysis model for multitrait-multimethod (MTMM) designs in particular, are challenging tasks. Compositional data are highly non-normal, as they range within the 0-1 interval. One component can only increase if some other(s) decrease, which results in spurious negative correlations among components that cannot be accounted for by the MTMM model parameters. In this article we show how researchers can use the correlated uniqueness model for MTMM designs in order to evaluate the measurement quality of compositional indicators. We suggest using the additive log-ratio transformation of the data, discuss several approaches to dealing with zero components, and explain how the interpretation of MTMM designs differs from the application to standard unconstrained data. We illustrate the method on data of social network composition, expressed in percentages of partner, family, friends and other members, from which we conclude that the face-to-face collection mode is generally superior to the telephone mode, although primacy effects are higher in the face-to-face mode. Compositions of strong ties (such as partner) are measured with higher quality than those of weaker ties (such as other network members).
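A small sketch of the suggested additive log-ratio (alr) transformation, including one simple way to handle zero components (replacement by a small value followed by re-closure), assuming numpy:

```python
# Sketch of the additive log-ratio (alr) transformation for compositional
# data, with a simple zero-replacement strategy (one of several approaches).
import numpy as np

def alr(composition, eps=0.005):
    """alr(x)_i = log(x_i / x_D), using the last component as reference."""
    x = np.asarray(composition, dtype=float)
    x = np.where(x == 0, eps, x)          # replace zeros with a small value
    x = x / x.sum()                       # re-close so components sum to 1
    return np.log(x[:-1] / x[-1])

# Network composition: partner, family, friends, other (proportions).
print(alr([0.40, 0.30, 0.20, 0.10]))
print(alr([0.50, 0.50, 0.00, 0.00]))      # zeros handled by replacement
```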
Abstract:
A collection of 24 seawaters from various worldwide locations and differing depths was assembled to measure their chlorine isotopic composition (δ³⁷Cl). These samples cover all the oceans and large seas: the Atlantic, Pacific, Indian and Antarctic oceans and the Mediterranean and Red seas. The collection includes nine seawaters from three depth profiles down to 4560 mbsl. The standard deviation (2σ) of the δ³⁷Cl of this collection is ±0.08‰, which is in fact as large as our precision of measurement (±0.10‰). Thus, within error, oceanic waters seem to be a homogeneous reservoir. According to our results, any seawater could be representative of Standard Mean Ocean Chloride (SMOC) and could be used as a reference standard. An extended international cross-calibration over a large range of δ³⁷Cl has been completed. For this purpose, geological fluid samples of various chemical compositions and a manufactured CH₃Cl gas sample, with δ³⁷Cl from about -6‰ to +6‰, have been compared. Data were collected by gas-source isotope ratio mass spectrometry (IRMS) at the Paris, Reading and Utrecht laboratories and by thermal ionization mass spectrometry (TIMS) at the Leeds laboratory. Comparison of IRMS values over the range -5.3‰ to +1.4‰ plots on the Y = X line, showing very good agreement between the three laboratories. On 11 samples, the trend line between the Paris and Reading laboratories is δ³⁷Cl(Reading) = (1.007 ± 0.009) δ³⁷Cl(Paris) - (0.040 ± 0.025), with a correlation coefficient R² = 0.999. TIMS values from Leeds have been compared to IRMS values from Paris over the range -3.0‰ to +6.0‰. On six samples, the agreement between these two laboratories, using different techniques, is good: δ³⁷Cl(Leeds) = (1.052 ± 0.038) δ³⁷Cl(Paris) + (0.058 ± 0.099), with a correlation coefficient R² = 0.995. The present study completes a previous cross-calibration between the Leeds and Reading laboratories comparing TIMS and IRMS results (Anal. Chem. 72 (2000) 2261). Both studies allow a comparison of the IRMS and TIMS techniques for δ³⁷Cl values from -4.4‰ to +6.0‰ and show good agreement: δ³⁷Cl(TIMS) = (1.039 ± 0.023) δ³⁷Cl(IRMS) + (0.059 ± 0.056), with a correlation coefficient R² = 0.996. Our study shows that, for fluid samples with chlorine isotopic compositions near 0‰, measurements by either IRMS or TIMS will give comparable results within less than ±0.10‰, while for δ³⁷Cl values as far as 10‰ (either positive or negative) from SMOC, the two techniques will agree within less than ±0.30‰.
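For readers unfamiliar with the notation, the delta value and the cross-laboratory comparison can be sketched as follows; the reference ratio and the sample values below are illustrative assumptions, with the Paris-Reading line taken from the fit reported above.

```python
# Sketch of the delta notation, delta37Cl = (R_sample / R_SMOC - 1) * 1000
# with R = 37Cl/35Cl, plus an OLS line between two labs (values hypothetical).
import numpy as np

R_SMOC = 0.319                                   # assumed 37Cl/35Cl reference ratio

def delta37cl(r_sample, r_standard=R_SMOC):
    """Permil deviation of a sample's 37Cl/35Cl ratio from the standard."""
    return (r_sample / r_standard - 1.0) * 1000.0

paris = np.array([-5.3, -3.0, -1.2, 0.0, 1.4, 6.0])   # permil, lab A
reading = 1.007 * paris - 0.040                        # lab B (from the fit above)

slope, intercept = np.polyfit(paris, reading, 1)
print(f"delta37Cl(Reading) = {slope:.3f} * delta37Cl(Paris) + ({intercept:.3f})")
```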