944 results for measurement accuracy
Abstract:
The structured-light vision sensor is a key component through which a vision-based weld seam tracking system obtains seam information; its measurement error and performance directly affect the overall measurement accuracy and reliability of the tracking system. This paper analyses the errors of structured-light vision sensors used for weld seam tracking, including structural errors of the sensor hardware, laser speckle noise errors, and lens distortion errors. Mathematical models are established for vision sensors with different structural configurations, and the influence of the structural parameters on their errors is analysed in detail. An optimised design method for structured-light vision seam-tracking sensors is proposed, structural design parameters are derived from simulation results, and experiments finally verify the correctness of the optimised design method.
Abstract:
New compensation methods are presented that can greatly reduce the slit errors (i.e. transition location errors) and interval errors induced by non-idealities in square-wave optical incremental encoders. An M/T-type, constant sample-time digital tachometer (CSDT) is selected for measuring the velocity of the sensor drives. Using these data, three encoder compensation techniques (two pseudoinverse-based methods and an iterative method) are presented that improve velocity measurement accuracy. The methods do not require precise knowledge of shaft velocity. During the initial learning stage of the compensation algorithm (possibly performed in situ), slit errors/interval errors are calculated through pseudoinverse-based solutions of simple approximate linear equations, which provide fast solutions, or through an iterative method that requires very little memory. Subsequent operation of the motion system uses the adjusted slit positions for more accurate velocity calculation. The theoretical analysis of encoder error compensation considers error sources such as random electrical noise and error in the estimated reference velocity. Initially, the proposed learning compensation techniques are validated by implementing the algorithms in MATLAB, showing a 95% to 99% improvement in velocity measurement. However, the efficiency of the algorithm is observed to decrease with a higher presence of non-repetitive random noise and/or with errors in the reference velocity calculation. The performance improvement in velocity measurement is also demonstrated experimentally using motor-drive systems, each of which includes a field-programmable gate array (FPGA) for CSDT counting/timing purposes and a digital signal processor (DSP). Results from open-loop velocity measurement and closed-loop servo-control applications, on three optical incremental square-wave encoders and two motor drives, are compiled. While implementing these algorithms experimentally on different drives (with and without a flywheel) and on encoders of different resolutions, slit error reductions of 60% to 86% are obtained (typically approximately 80%).
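A minimal sketch of the pseudoinverse-based learning stage described above; the encoder model and parameters below are simplified assumptions for illustration, not the paper's exact formulation. If slit k sits at fractional position k/N plus an unknown error e_k, each interval measured at constant speed equals 1/N plus a difference of adjacent slit errors, and stacking these equations over several revolutions gives a linear system solvable with the Moore-Penrose pseudoinverse:

```python
import numpy as np

N = 8                                   # slits per revolution (toy resolution)
rng = np.random.default_rng(0)
e_true = rng.normal(0.0, 0.01, N)       # fractional slit-position errors
e_true -= e_true.mean()                 # a common offset is unobservable

# Interval k spans slit k -> k+1 (wrapping): d_k = 1/N + e_{(k+1) mod N} - e_k
A_rev = np.zeros((N, N))
for k in range(N):
    A_rev[k, (k + 1) % N] = 1.0
    A_rev[k, k] = -1.0

revs = 20                               # stack measurements over 20 revolutions
A = np.tile(A_rev, (revs, 1))
d = A @ e_true + 1.0 / N + rng.normal(0.0, 1e-4, revs * N)  # timing noise

# Minimum-norm least-squares estimate; it has zero mean automatically,
# because the unobservable direction (a constant shift) is the nullspace of A.
e_hat = np.linalg.pinv(A) @ (d - 1.0 / N)
```

Adjusted slit positions k/N + e_hat[k] would then replace the nominal positions in subsequent velocity calculations.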
Abstract:
A central question in community ecology is how the number of trophic links relates to community species richness. For simple dynamical food-web models, link density (the ratio of links to species) is bounded from above as the number of species increases; but empirical data suggest that it increases without bound. We found a new empirical upper bound on link density in large marine communities with emphasis on fish and squid, using novel methods that avoid known sources of bias in traditional approaches. Bounds are expressed in terms of the diet-partitioning function (DPF): the average number of resources contributing more than a fraction f to a consumer's diet, as a function of f. All observed DPFs follow a functional form closely related to a power law, with power-law exponents independent of species richness to within measurement accuracy. The results imply universal upper bounds on link density across the oceans. However, the inherently scale-free nature of power-law diet partitioning suggests that the DPF itself is a better-defined characterization of network structure than link density.
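The DPF defined above can be computed directly from a diet-fraction matrix; the sketch below uses hypothetical toy values, not the marine data:

```python
import numpy as np

# Rows: consumers; columns: resources; entries: diet fractions (rows sum to 1).
diets = np.array([
    [0.70, 0.20, 0.10, 0.00],
    [0.50, 0.25, 0.15, 0.10],
    [0.90, 0.05, 0.05, 0.00],
])

def dpf(diets, f):
    """Average, over consumers, of the number of resources whose
    diet fraction exceeds f."""
    return np.mean(np.sum(diets > f, axis=1))
```

Note that dpf(diets, 0.0) recovers the ordinary link density (mean number of resources per consumer), while the shape of dpf over f carries the diet-partitioning information.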
Abstract:
This paper compares the applicability of three ground survey methods for modelling terrain: one-man electronic tachymetry (TPS), real-time kinematic GPS (GPS), and terrestrial laser scanning (TLS). The vertical accuracy of digital terrain models (DTMs) derived from GPS, TLS, and airborne laser scanning (ALS) data is assessed. Point elevations acquired by the four methods represent two sections of a mountainous area in Cumbria, England, chosen so that the presence of non-terrain features was kept to a minimum. The vertical accuracy of the DTMs was assessed by subtracting each DTM from TPS point elevations. The error was examined using exploratory measures including summary statistics, histograms, and normal probability plots. The results showed that the internal measurement accuracy of TPS, GPS, and TLS was below a centimetre. TPS and GPS can be considered equally applicable alternatives for sampling terrain in areas accessible on foot. The highest DTM vertical accuracy was achieved with GPS data, both on sloped terrain (RMSE 0.16 m) and flat terrain (RMSE 0.02 m). TLS surveying was the most efficient overall, but the veracity of the terrain representation was affected by dense vegetation cover. Consequently, the DTM accuracy was lowest for the sloped area with dense bracken (RMSE 0.52 m), although it was the second highest on the flat unobscured terrain (RMSE 0.07 m). ALS data represented the sloped terrain more realistically (RMSE 0.23 m) than the TLS. However, due to a systematic bias identified on the flat terrain, the DTM accuracy there was the lowest (RMSE 0.29 m), which was above the level stated by the data provider. Error distributions were more closely approximated by a normal distribution defined using the median and the normalized median absolute deviation, which supports the use of robust measures in DEM error modelling and its propagation. © 2012 Elsevier Ltd.
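The two accuracy measures used above, RMSE and the robust normalized median absolute deviation (NMAD = 1.4826 × MAD), can be sketched on a toy set of DTM-minus-checkpoint elevation errors (invented values, in metres):

```python
import math
import statistics

errors = [0.05, -0.02, 0.03, -0.40, 0.01, 0.02, -0.03, 0.04]  # one outlier

# Root-mean-square error: sensitive to the single outlier.
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))

# NMAD: 1.4826 * median absolute deviation from the median; the factor
# makes it consistent with the standard deviation for normal data.
med = statistics.median(errors)
nmad = 1.4826 * statistics.median(abs(e - med) for e in errors)
```

On this sample the outlier inflates the RMSE to roughly 0.14 m while the NMAD stays near 0.04 m, which is the robustness property the abstract appeals to.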
Abstract:
This paper describes the design and manufacture of the filters and antireflection coatings used in the HIRDLS instrument. The multilayer design of the filters and coatings, the choice of layer materials, and the deposition techniques adopted to ensure adequate layer thickness control are discussed. The spectral assessment of the filters and coatings is carried out using an FTIR spectrometer; some measurement results are presented, together with a discussion of measurement accuracy and the identification and avoidance of measurement artifacts. The post-deposition processing of the filters by sawing to size, the writing of an identification code onto the coatings, and the environmental testing of the finished filters are also described.
Abstract:
A wide range of applications would benefit from dense-network air temperature observations, but costs, existing siting guidelines, and the risk of damage to sensors limit deployment; new methods are therefore required to gain a high-resolution understanding of the spatio-temporal patterns of urban meteorological phenomena such as the urban heat island, and to meet needs such as precision farming. With the launch of a new generation of low-cost sensors, it is possible to deploy a network to monitor air temperature at finer spatial resolutions. Here we investigate the Aginova Sentinel Micro (ASM) sensor with a bespoke radiation shield (together < US$150), which can provide secure near-real-time air temperature data to a server using existing (or user-deployed) Wireless Fidelity (Wi-Fi) networks. This makes it ideally suited for deployment where wireless communications readily exist, notably urban areas. Assessment of the performance of the ASM relative to traceable standards in a water bath and an atmospheric chamber shows it to have good measurement accuracy, with mean errors < ±0.22 °C between -25 and 30 °C and a time constant in ambient air of 110 ± 15 s. Subsequent field tests within the bespoke shield also showed excellent performance (root-mean-square error = 0.13 °C) over a range of meteorological conditions relative to a traceable operational UK Met Office platinum resistance thermometer. These results indicate that the ASM and bespoke shield are more than fit for purpose for dense network deployment in urban areas at relatively low cost compared with existing observation techniques.
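Assuming simple first-order sensor behaviour T(t) = Tf + (T0 − Tf)·exp(−t/τ), a time constant like the 110 s quoted above can be estimated from a step response by log-linear least squares; the data below are synthetic, not the paper's:

```python
import math

# Synthetic step response: sensor moved from 25 °C into 5 °C air, tau = 110 s.
tau_true, T0, Tf = 110.0, 25.0, 5.0
times = [0, 30, 60, 90, 120, 180, 240]
temps = [Tf + (T0 - Tf) * math.exp(-t / tau_true) for t in times]

# Linearise: ln((T - Tf)/(T0 - Tf)) = -t/tau, then fit the slope.
xs = times
ys = [math.log((T - Tf) / (T0 - Tf)) for T in temps]
n = len(xs)
slope = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / (
    n * sum(x * x for x in xs) - sum(xs) ** 2)
tau_est = -1.0 / slope
```

With noisy field data the same fit would be applied to the measured temperatures, and the scatter of the residuals would give the quoted ±15 s style uncertainty.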
Abstract:
Stalagmites are natural archives containing detailed information on past continental climate variability. Microthermometric measurement of fluid inclusion homogenisation temperatures allows determination of stalagmite formation temperatures by measuring the radius of stable laser-induced vapour bubbles inside the inclusions. A reliable method for precisely measuring the radius of vapour bubbles is presented and applied to stalagmite samples for which the formation temperature is known. An assessment of the bubble radius measurement accuracy, and of how this error influences the uncertainty in determining the formation temperature, is provided. We demonstrate that the nominal homogenisation temperature of a single inclusion can be determined with an accuracy of ±0.25 °C if the volume of the inclusion is larger than 10⁵ μm³. With this method, we could show in a proof-of-principle investigation that the formation temperature of 10–20 yr old inclusions in a stalagmite taken from the Milandre cave is 9.87 ± 0.80 °C, while the mean annual surface temperature, which in the case of the Milandre cave correlates well with the cave temperature, was 9.6 ± 0.15 °C, calculated from measurements made at that time, showing very good agreement. Formation temperatures of inclusions formed during the last 450 yr lie in a range between 8.4 and 9.6 °C, which corresponds to the calculated average surface temperature. Paleotemperatures can thus be determined to within ±1.0 °C.
Abstract:
Smart microgrids offer a new and challenging domain for power theories and metering techniques because they include a variety of intermittent power sources, which have a positive impact on power flow and distribution losses but may cause voltage asymmetry and frequency variation. In smart microgrids, the voltage distortion and asymmetry in the presence of poly-phase nonlinear loads can also be greater than in usual distribution lines fed by the utility, thus affecting measurement accuracy and possibly causing tripping of protections. In such a context, a reconsideration of power theories is required, since they form the basis for supply and load characterization. A revision of revenue metering techniques is also suggested, to ensure a correct penalization of the loads for their responsibility in generating reactive power, voltage asymmetry, and distortion. This paper shows that the conservative power theory provides a suitable background to cope with smart grid characterization and metering needs. Simulation and experimental results show the properties of the proposed approach.
Abstract:
Smart micro-grids offer a new and challenging domain for power theories and metering techniques, because they include a variety of intermittent power sources, which have a positive impact on power flow and distribution losses but may cause voltage asymmetry and frequency variation. Due to the limited power capability of smart micro-grids, the voltage distortion can also worsen when non-linear loads are supplied, affecting measurement accuracy and possibly causing tripping of protections. In such a context, a reconsideration of power theories is required, since they form the basis for supply and load characterization. A revision of revenue metering techniques is also needed, to ensure a correct penalization of the loads for their responsibility in generating reactive power, voltage unbalance, and distortion. This paper shows that the Conservative Power Theory (CPT) provides a suitable background to cope with smart grid characterization and metering needs. Experimental results validate the proposed approach. © 2010 IEEE.
Abstract:
Background: Patients with dementia may be unable to describe their symptoms, and caregivers frequently suffer an emotional burden that can interfere with their judgment of the patient's behavior. The Neuropsychiatric Inventory-Clinician rating scale (NPI-C) was therefore developed as a comprehensive and versatile instrument to assess and accurately measure neuropsychiatric symptoms (NPS) in dementia, drawing on information from caregiver and patient interviews and any other relevant available data. The present study is a follow-up to the original cross-national NPI-C validation, evaluating the reliability and concurrent validity of the NPI-C in quantifying psychopathological symptoms in dementia in a large Brazilian cohort. Methods: Two blinded raters evaluated 312 participants (156 patient-knowledgeable informant dyads) using the NPI-C, for a total of 624 observations in five Brazilian centers. Inter-rater reliability was determined through intraclass correlation coefficients for the NPI-C domains and the traditional NPI. Convergent validity included correlations of specific domains of the NPI-C with the Brief Psychiatric Rating Scale (BPRS), the Cohen-Mansfield Agitation Index (CMAI), the Cornell Scale for Depression in Dementia (CSDD), and the Apathy Inventory (AI). Results: Inter-rater reliability was strong for all NPI-C domains. There were high correlations between NPI-C/delusions and the BPRS, NPI-C/apathy-indifference and the AI, NPI-C/depression-dysphoria and the CSDD, NPI-C/agitation and the CMAI, and NPI-C/aggression and the CMAI. There was moderate correlation between NPI-C/aberrant vocalizations and the CMAI, and between NPI-C/hallucinations and the BPRS. Conclusion: The NPI-C is a comprehensive tool that provides accurate measurement of NPS in dementia, with high concurrent validity and inter-rater reliability in the Brazilian setting. In addition to universal assessment, the NPI-C can be completed by individual domains.
© International Psychogeriatric Association 2013.
Abstract:
Graduate program in Cartographic Sciences - FCT
Abstract:
Graduate program in Pathophysiology in Internal Medicine - FMB
Abstract:
The objective of this study was to evaluate the measurement accuracy of endodontic files obtained by digital and conventional radiography in primary teeth. Kerr and Hedström files (#20), with the apparent tooth length as reference, were inserted in the root canals of 18 extracted primary teeth, which were X-rayed by digital and conventional techniques. Measurements from a reference point to the apical end were carried out by an experienced operator twice within one week. An electronic ruler was used for the digital method and a caliper for the conventional method. The data were subjected to the Pearson correlation test and Student's t-test (p = 0.05). The correlation between the first and second measurements was r = 0.99, regardless of the type of file and method. Comparing the measurements within each method, the agreement was r = 0.96 for Kerr files and r = 0.95 for Hedström files. The file lengths measured on the digital radiographs were statistically lower than those obtained on the conventional radiographs (p = 0.02). However, the values obtained by the two methods were statistically similar to the real tooth length for both Kerr files (p = 0.29) and Hedström files (p = 0.18). Digital radiography was thus the more reliable method for obtaining the lengths of endodontic files.
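The repeatability check described above (Pearson correlation between the first and second measurement sessions) can be sketched as follows; the file-length values are invented toy data in millimetres, not the study's measurements:

```python
import math

first  = [18.2, 17.5, 19.0, 16.8, 18.9, 17.1]   # session 1 (toy data, mm)
second = [18.3, 17.4, 19.1, 16.9, 18.8, 17.2]   # session 2, one week later

def pearson(x, y):
    """Pearson correlation coefficient r between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(first, second)
```

An r close to 1, as reported in the study (r = 0.99), indicates that the two sessions rank and scale the measurements almost identically.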
Abstract:
This paper presents a proposal for the automation of the camera calibration process, locating and measuring image points in coded targets with sub-pixel precision. This automatic technique helps minimize localization errors, regardless of camera orientation and image scale. To develop the technique, several types of coded targets were analyzed, and the ARUCO type was chosen owing to its simplicity, its ability to represent up to 1024 different targets, and the availability of source code implemented with the OpenCV library. ARUCO targets were generated and two calibration sheets were assembled for the acquisition of images for camera calibration. The developed software locates the targets in the acquired images and automatically extracts the coordinates of their four corners with sub-pixel accuracy. Experiments conducted with real data show that the targets are correctly identified unless excessive noise or fragmentation occurs, mainly in the outer target square. Results from the calibration of a low-cost camera showed that the process works and that the measurement accuracy of the corners achieves sub-pixel precision.
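Sub-pixel refinement of a pixel-level detection is commonly done by fitting a parabola through the response values around the discrete maximum; the sketch below illustrates that general idea in one dimension and is not the paper's specific corner operator:

```python
def subpixel_peak(values, i):
    """Refine integer peak index i of a 1-D response by parabolic fit
    through the three samples values[i-1], values[i], values[i+1]."""
    a, b, c = values[i - 1], values[i], values[i + 1]
    offset = 0.5 * (a - c) / (a - 2.0 * b + c)   # vertex of the parabola
    return i + offset

# Response sampled from a parabola whose true peak lies at x = 4.3:
resp = [-(x - 4.3) ** 2 for x in range(8)]
i_max = max(range(8), key=resp.__getitem__)      # discrete (pixel-level) peak
peak = subpixel_peak(resp, i_max)                # refined sub-pixel location
```

In 2-D corner refinement the same principle is applied along both image axes (or via a local quadratic surface fit) to push localization below one pixel.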
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)