908 results for Error of measurement
Abstract:
A system for continuous data assimilation is presented and discussed. To simulate the dynamical development, a channel version of a balanced barotropic model is used, and geopotential (height) data are assimilated into the model's computations as they become available. In the first experiment the updating is performed every 24, 12 and 6 hours with a given network. The stations are distributed at random in four groups in order to simulate four areas with different station densities. Optimum interpolation is performed on the difference between the forecast and the valid observations. The RMS error of the analyses decreases in time, and the error is smaller the more frequently the updating is performed. Updating every 6 hours yields an analysis error smaller than the RMS error of the observations. In a second experiment the updating is performed with data from a moving satellite with a side-scan capability of about 15°. If the satellite data are analysed at every time step before they are introduced into the system, the analysis error is reduced below the RMS error of the observations already after 24 hours, and the result is on the whole better than updating from a fixed network. If the satellite data are introduced without any modification, the analysis error is reduced much more slowly, and it takes about 4 days to reach a result comparable to the one where the data have been analysed.
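The optimum-interpolation update used in these experiments corrects the forecast with a covariance-weighted sum of forecast-minus-observation differences. A minimal one-dimensional sketch, in which the covariance matrices, observation network and all numbers are illustrative assumptions rather than values from the study:

```python
import numpy as np

def oi_update(xb, y, H, B, R):
    """Optimum-interpolation analysis: xa = xb + K (y - H xb),
    with gain K = B H^T (H B H^T + R)^{-1}."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return xb + K @ (y - H @ xb)

# Toy example: 5 grid points, 2 observations at points 1 and 3.
xb = np.zeros(5)                        # background (forecast) field
H = np.zeros((2, 5)); H[0, 1] = H[1, 3] = 1.0   # observation operator
# Gaussian-shaped background error covariance (illustrative):
d = np.subtract.outer(np.arange(5), np.arange(5))
B = 0.5 * np.exp(-0.5 * (d / 2.0) ** 2)
R = 0.1 * np.eye(2)                     # observation error covariance
y = np.array([1.0, -0.5])               # observed values

xa = oi_update(xb, y, H, B, R)
print(xa)   # the analysis is pulled toward the observations near points 1 and 3
```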
Abstract:
Optimal estimation (OE) improves sea surface temperature (SST) estimated from satellite infrared imagery in the “split-window”, in comparison to SST retrieved using the usual multi-channel (MCSST) or non-linear (NLSST) estimators. This is demonstrated using three months of observations of the Advanced Very High Resolution Radiometer (AVHRR) on the first Meteorological Operational satellite (Metop-A), matched in time and space to drifter SSTs collected on the global telecommunications system. There are 32,175 matches. The prior for the OE is forecast atmospheric fields from the Météo-France global numerical weather prediction system (ARPEGE), the forward model is RTTOV8.7, and a reduced state vector comprising SST and total column water vapour (TCWV) is used. Operational NLSST coefficients give mean and standard deviation (SD) of the difference between satellite and drifter SSTs of 0.00 and 0.72 K. The “best possible” NLSST and MCSST coefficients, empirically regressed on the data themselves, give zero mean difference and SDs of 0.66 K and 0.73 K respectively. Significant contributions to the global SD arise from regional systematic errors (biases) of several tenths of a kelvin in the NLSST. With no bias corrections to either prior fields or forward model, the SSTs retrieved by OE minus drifter SSTs have mean and SD of −0.16 and 0.49 K respectively. The reduction in SD below the “best possible” regression results shows that OE deals with structural limitations of the NLSST and MCSST algorithms. Using simple empirical bias corrections to improve the OE, retrieved minus drifter SSTs are obtained with mean and SD of −0.06 and 0.44 K respectively. Regional biases are greatly reduced, such that the absolute bias is less than 0.1 K in 61% of 10°-latitude by 30°-longitude cells. OE also allows a statistic of the agreement between modelled and measured brightness temperatures to be calculated.
We show that this measure is more efficient than the current system of confidence levels at identifying reliable retrievals, and that the best 75% of satellite SSTs by this measure have negligible bias and retrieval error of order 0.25 K.
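The OE step described above can be sketched as the standard linear optimal-estimation solution with the reduced state vector [SST, TCWV]. The Jacobian below stands in for the RTTOV one, and every number is invented for illustration; this is a single linear Gauss-Newton step, not the operational retrieval:

```python
import numpy as np

x_a = np.array([288.0, 30.0])            # prior: SST (K), TCWV (kg m^-2)
S_a = np.diag([1.0**2, 5.0**2])          # prior error covariance
S_e = np.diag([0.1**2, 0.1**2, 0.1**2])  # channel noise covariance (3 channels)

K = np.array([[0.9, -0.02],              # d(BT)/d(SST), d(BT)/d(TCWV)
              [0.8, -0.05],              # per channel (illustrative values)
              [0.6, -0.10]])

y = np.array([287.5, 286.8, 285.4])      # measured brightness temperatures
F_xa = np.array([287.2, 286.4, 284.8])   # simulated BTs at the prior state

# Linear optimal-estimation solution (one Gauss-Newton step):
S_hat = np.linalg.inv(K.T @ np.linalg.inv(S_e) @ K + np.linalg.inv(S_a))
x_hat = x_a + S_hat @ K.T @ np.linalg.inv(S_e) @ (y - F_xa)
print(x_hat)   # retrieved [SST, TCWV]; S_hat is the posterior covariance
```

Since the measured brightness temperatures sit above the simulation at the prior, the retrieval warms the SST and, through the negative water-vapour Jacobians, dries the TCWV.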
Abstract:
We examine how the accuracy of real-time forecasts from models that include autoregressive terms can be improved by estimating the models on ‘lightly revised’ data instead of using data from the latest-available vintage. The benefits of estimating autoregressive models on lightly revised data are related to the nature of the data revision process and the underlying process for the true values. Empirically, we find improvements in root mean square forecasting error of 2–4% when forecasting output growth and inflation with univariate models, and of 8% with multivariate models. We show that multiple-vintage models, which explicitly model data revisions, require large estimation samples to deliver competitive forecasts.
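The mechanism can be illustrated with a small simulation: when first-release data carry more measurement error than lightly revised data, the autoregressive coefficient estimated on the noisier vintage is attenuated and real-time forecasts suffer. This is a stylised toy with invented processes and scales, not the paper's empirical design:

```python
import numpy as np

rng = np.random.default_rng(3)
T, rho = 2000, 0.6
truth = np.zeros(T)                       # "true" values follow an AR(1)
for t in range(1, T):
    truth[t] = rho * truth[t-1] + rng.normal()

first_release = truth + rng.normal(scale=0.8, size=T)    # noisy early vintage
lightly_revised = truth + rng.normal(scale=0.3, size=T)  # most error revised away

def ar1_rmsfe(estimation_data, eval_truth):
    """Fit y_t = a * y_{t-1} by OLS on a vintage, then score one-step
    forecasts of the true values (root mean square forecast error)."""
    y, x = estimation_data[1:], estimation_data[:-1]
    a = (x @ y) / (x @ x)                 # OLS slope; attenuated by noise
    fcst = a * eval_truth[:-1]
    return np.sqrt(np.mean((eval_truth[1:] - fcst) ** 2))

rmsfe_first = ar1_rmsfe(first_release, truth)
rmsfe_revised = ar1_rmsfe(lightly_revised, truth)
print(rmsfe_first, rmsfe_revised)   # the lightly revised vintage forecasts better
```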
Abstract:
Acute intermittent porphyria is an inborn error of haem synthesis which is transmitted as a dominant character with variable phenotypic expression. The disorder is caused by a partial deficiency of porphobilinogen deaminase in all tissues so far studied. The nature of the enzymatic deficiency of porphobilinogen deaminase in haemolysates from patients with acute intermittent porphyria was examined by the use of monospecific antibody probes. In affected heterozygotes from three British pedigrees of diverse ancestry, the catalytic deficiency of porphobilinogen deaminase was accompanied by diminished enzyme protein, as determined by radial immunodiffusion. No evidence of a functionally attenuated enzyme was demonstrable by kinetic studies. The molecular forms of the residual enzyme were investigated in red cell extracts and in lysed preparations of reticulocytes by a sensitive Western blotting procedure. This revealed the presence of reduced amounts of porphobilinogen deaminase polypeptide co-migrating with the wild-type enzyme (Mr ≈ 40,000), and no evidence of variant forms in situ. The studies show that porphobilinogen deaminase deficiency in acute intermittent porphyria is commonly associated with a CRM⁻ (cross-reacting immunological material negative) phenotype. The residual activity under these circumstances is thus related to expression of a single normal allele, since sensitive techniques detected neither aberrant nor degraded forms of the enzyme in erythroid tissues.
Abstract:
This technique paper describes a novel method for quantitatively and routinely identifying auroral breakup following substorm onset using the Time History of Events and Macroscale Interactions During Substorms (THEMIS) all-sky imagers (ASIs). Substorm onset is characterised by a brightening of the aurora that is followed by auroral poleward expansion and auroral breakup. This breakup can be identified by a sharp increase in the auroral intensity i(t) and in the time derivative of auroral intensity i'(t). Utilising both i(t) and i'(t), we have developed an algorithm for identifying the time interval and spatial location of auroral breakup during the substorm expansion phase within the field of view of ASI data, based solely on quantifiable characteristics of the optical auroral emissions. We compare the time interval determined by the algorithm to independently identified auroral onset times from three previously published studies. In each case the time interval determined by the algorithm is within error of the onset independently identified by the prior studies. We further show the utility of the algorithm by comparing the breakup intervals determined by the automated algorithm to an independent list of substorm onset times. We demonstrate that up to 50% of the breakup intervals characterised by the algorithm are within the uncertainty of the times identified in the independent list. The quantitative description and routine identification of an interval of auroral brightening during the substorm expansion phase provides a foundation for unbiased statistical analysis of the aurora and a new scientific tool for aiding the identification of the processes leading to auroral substorm onset.
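The core idea (thresholding both i(t) and its time derivative) can be sketched on a synthetic intensity time series. The thresholds, the quiet-period convention and the synthetic substorm below are illustrative assumptions; the paper's algorithm and its handling of THEMIS ASI data are more involved:

```python
import numpy as np

def breakup_interval(t, i, k_i=3.0, k_di=3.0):
    """Return (t_start, t_end) of the interval where the intensity i(t) and
    its derivative i'(t) both exceed k standard deviations above their
    quiet-time levels; None if no such interval exists."""
    di = np.gradient(i, t)
    quiet = slice(0, len(t) // 4)          # assume the first quarter is quiet
    hot = (i > i[quiet].mean() + k_i * i[quiet].std()) & \
          (di > di[quiet].mean() + k_di * di[quiet].std())
    idx = np.flatnonzero(hot)
    if idx.size == 0:
        return None
    return t[idx[0]], t[idx[-1]]

# Synthetic event: quiet background plus a rapid brightening around t = 60 s.
t = np.linspace(0.0, 120.0, 601)
i = 100.0 + 2.0 * np.random.default_rng(0).normal(size=t.size)
i += 400.0 / (1.0 + np.exp(-(t - 60.0)))   # sharp intensity increase

start, end = breakup_interval(t, i)
print(start, end)    # breakup onset identified near t ≈ 60 s
```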
Abstract:
The evidence for anthropogenic climate change continues to strengthen, and concerns about severe weather events are increasing. As a result, scientific interest is rapidly shifting from detection and attribution of global climate change to prediction of its impacts at the regional scale. However, nearly everything we have any confidence in when it comes to climate change is related to global patterns of surface temperature, which are primarily controlled by thermodynamics. In contrast, we have much less confidence in atmospheric circulation aspects of climate change, which are primarily controlled by dynamics and exert a strong control on regional climate. Model projections of circulation-related fields, including precipitation, show a wide range of possible outcomes, even on centennial timescales. Sources of uncertainty include low-frequency chaotic variability and the sensitivity to model error of the circulation response to climate forcing. As the circulation response to external forcing appears to project strongly onto existing patterns of variability, knowledge of errors in the dynamics of variability may provide some constraints on model projections. Nevertheless, higher scientific confidence in circulation-related aspects of climate change will be difficult to obtain. For effective decision-making, it is necessary to move to a more explicitly probabilistic, risk-based approach.
Abstract:
The techno-economic performance of a small wind turbine is very sensitive to the available wind resource. However, due to financial and practical constraints, installers rely on low-resolution wind speed databases to assess a potential site. This study investigates whether the two site assessment tools currently used in the UK, NOABL and the Energy Saving Trust wind speed estimator, are accurate enough to estimate the techno-economic performance of a small wind turbine. Both tools tend to overestimate the wind speed, with mean errors of 23% and 18% for the NOABL and Energy Saving Trust tools respectively. A techno-economic assessment of 33 small wind turbines at each site has shown that these errors can have a significant impact on the estimated load factor of an installation. Consequently, site/turbine combinations which are not economically viable can be predicted to be viable. Furthermore, both models tend to underestimate the wind resource at relatively high-wind-speed sites; this can lead to missed opportunities, as economically viable turbine/site combinations are predicted to be non-viable. These results show that a better understanding of the local wind resource is required to make small wind turbines a viable technology in the UK.
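The sensitivity of the load factor to a wind-speed overestimate can be illustrated with a generic calculation: integrate a cubic power curve over a Rayleigh wind-speed distribution. The turbine parameters and site speeds below are illustrative assumptions, not those of the 33 turbines assessed in the study:

```python
import numpy as np

def load_factor(mean_speed, cut_in=3.0, rated=11.0, cut_out=25.0, rated_kw=5.0):
    """Expected power / rated power for a generic small turbine under a
    Rayleigh distribution of wind speeds with the given mean (m/s)."""
    v = np.linspace(0.0, 30.0, 3001)
    sigma = mean_speed * np.sqrt(2.0 / np.pi)           # Rayleigh scale
    pdf = (v / sigma**2) * np.exp(-v**2 / (2.0 * sigma**2))
    power = np.where((v >= cut_in) & (v < rated),
                     rated_kw * (v / rated) ** 3, 0.0)  # cubic region
    power = np.where((v >= rated) & (v < cut_out), rated_kw, power)
    return float(np.sum(power * pdf) * (v[1] - v[0]) / rated_kw)

lf_true = load_factor(4.5)        # plausible true site mean speed (m/s)
lf_est = load_factor(4.5 * 1.2)   # a NOABL-style ~20% overestimate
print(lf_true, lf_est)            # the overestimate inflates the load factor
```

Because power scales with the cube of wind speed over much of the operating range, a ~20% speed overestimate inflates the predicted load factor far more than 20%, which is why marginal sites can wrongly appear viable.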
Abstract:
The Met Office 1 km radar-derived precipitation-rate composite over 8 years (2006–2013) is examined to evaluate whether it provides an accurate representation of annual-average precipitation over Great Britain and Ireland over long periods of time. The annual-average precipitation from the radar composite is comparable with gauge measurements, with an average error of +23 mm yr−1 over Great Britain and Ireland, +29 mm yr−1 (3%) over the United Kingdom and −781 mm yr−1 (46%) over the Republic of Ireland. The radar-derived precipitation composite is useful over the United Kingdom including Northern Ireland, but not accurate over the Republic of Ireland, particularly in the south.
Abstract:
Recent work in animals suggests that the extent of early tactile stimulation by parents of offspring is an important element in early caregiving. We evaluate the psychometric properties of a new parent-report measure designed to assess the frequency of tactile stimulation across multiple caregiving domains in infancy. We describe the full item set of the Parent-Infant Caregiving Touch Scale (PICTS) and, using data from a UK longitudinal Child Health and Development Study, the response frequencies and factor structure, and whether it was invariant over two time points in early development (5 and 9 weeks). When their infant was 9 weeks old, 838 mothers responded on the PICTS, while a stratified subsample of 268 mothers completed the PICTS at an earlier 5-week assessment (229 responded on both occasions). Three PICTS factors were identified, reflecting stroking, holding and affective communication. These were moderately to strongly correlated at each of the two time points of interest and were unrelated to, and therefore distinct from, a traditional measure of maternal sensitivity at 7 months. A wholly stable psychometry across the 5- and 9-week assessments was not identified, which suggests that behavior profiles differ slightly for younger and older infants. Tests of measurement invariance demonstrated that all three factors are characterized by full configural and metric invariance, as well as a moderate degree of evidence of scalar invariance for the stroking factor. We propose the PICTS as a valuable new measure of important aspects of caregiving in infancy.
Abstract:
The aim of this study was to assess and improve the accuracy of biotransfer models for organic pollutants (PCBs, PCDD/Fs, PBDEs, PFCAs, and pesticides) into cow's milk and beef used in human exposure assessment. Metabolic rate in cattle is known to be a key parameter for this biotransfer; however, few experimental data and no simulation methods are currently available. In this research, metabolic rate was estimated using existing QSAR biodegradation models of microorganisms (BioWIN) and fish (EPI-HL and IFS-HL). This simulated metabolic rate was then incorporated into the mechanistic cattle biotransfer models (RAIDAR, ACC-HUMAN, OMEGA, and CKow). Goodness-of-fit tests showed that the performances of the RAIDAR, ACC-HUMAN and OMEGA models were significantly improved using either of the QSARs when comparing the new model outputs to observed data. The CKow model is the only one that separates the processes in the gut and liver. This model showed the lowest residual error of all the models tested when the BioWIN model was used to represent the ruminant metabolic process in the gut and the two fish QSARs were used to represent the metabolic process in the liver. Our testing included EUSES and CalTOX, which are KOW-regression models widely used in regulatory assessment. New regressions based on the simulated rates of the two metabolic processes are also proposed as an alternative to KOW-regression models for a screening risk assessment. The modified CKow model is more physiologically realistic, but has equivalent usability to existing KOW-regression models for estimating cattle biotransfer of organic pollutants.
Abstract:
A novel technique for selecting the poles of orthonormal basis functions (OBF) in Volterra models of any order is presented. It is well known that the usually large number of parameters required to describe the Volterra kernels can be significantly reduced by representing each kernel using an appropriate basis of orthonormal functions. Such a representation results in the so-called OBF Volterra model, which has a Wiener structure consisting of linear dynamics generated by the orthonormal basis followed by a nonlinear static mapping given by the Volterra polynomial series. Aiming at optimizing the poles that fully parameterize the orthonormal bases, the exact gradients of the outputs of the orthonormal filters with respect to their poles are computed analytically by using a back-propagation-through-time technique. The expressions relative to the Kautz basis and to generalized orthonormal bases of functions (GOBF) are addressed; the ones related to the Laguerre basis follow straightforwardly as a particular case. The main innovation here is that the dynamic nature of the OBF filters is fully considered in the gradient computations. These gradients provide exact search directions for optimizing the poles of a given orthonormal basis. Such search directions can, in turn, be used as part of an optimization procedure to locate the minimum of a cost function that takes into account the error of estimation of the system output. The Levenberg-Marquardt algorithm is adopted here as the optimization procedure. Unlike previous related work, the proposed approach relies solely on input-output data measured from the system to be modeled, i.e., no information about the Volterra kernels is required. Examples are presented to illustrate the application of this approach to the modeling of dynamic systems, including a real magnetic levitation system with nonlinear oscillatory behavior.
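To make the OBF idea concrete, here is a sketch of a Laguerre filter bank (the simplest orthonormal basis, parameterized by a single real pole p) together with a finite-difference pole sensitivity. The paper derives exact analytic gradients via back-propagation through time; the finite-difference direction below is only a stand-in for illustration:

```python
import numpy as np

def laguerre_outputs(u, p, n_filters):
    """Outputs of the first n_filters discrete Laguerre filters driven by u.
    First stage: sqrt(1-p^2)/(1 - p z^-1); later stages: all-pass sections
    (z^-1 - p)/(1 - p z^-1), which preserve orthonormality."""
    n = len(u)
    X = np.zeros((n_filters, n))
    for t in range(n):
        X[0, t] = p * (X[0, t-1] if t else 0.0) + np.sqrt(1 - p**2) * u[t]
    for k in range(1, n_filters):
        for t in range(n):
            prev_y = X[k, t-1] if t else 0.0
            prev_x = X[k-1, t-1] if t else 0.0
            X[k, t] = p * prev_y + prev_x - p * X[k-1, t]
    return X

def dX_dp(u, p, n_filters, eps=1e-6):
    """Finite-difference sensitivity of the filter outputs to the pole p
    (a stand-in for the paper's exact analytic gradients)."""
    return (laguerre_outputs(u, p + eps, n_filters)
            - laguerre_outputs(u, p - eps, n_filters)) / (2 * eps)

u = np.random.default_rng(1).normal(size=2000)
X = laguerre_outputs(u, p=0.6, n_filters=4)
# Orthonormality check: for unit-variance white input, the outputs are
# (nearly) uncorrelated with unit variance.
C = X @ X.T / len(u)
print(np.round(C, 2))
```

In an OBF Volterra model these filter outputs feed the static Volterra polynomial; a sensitivity such as `dX_dp` is what a Levenberg-Marquardt loop would use to move the pole toward the minimum of the output-error cost.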
Abstract:
The purpose of this study was to evaluate the influence of different light sources and photo-activation methods on the degree of conversion (DC%) and polymerization shrinkage (PS) of a nanocomposite resin (Filtek™ Supreme XT, 3M/ESPE). Two light-curing units (LCUs), one halogen lamp (QTH) and one light-emitting diode (LED), and two different photo-activation methods (continuous and gradual) were investigated. The specimens were divided into four groups: group 1, power density (PD) of 570 mW/cm² for 20 s (QTH); group 2, PD of 0–570 mW/cm² for 10 s + 10 s at 570 mW/cm² (QTH); group 3, PD of 860 mW/cm² for 20 s (LED); and group 4, PD of 125 mW/cm² for 10 s + 10 s at 860 mW/cm² (LED). An EMIC testing machine with rectangular steel bases (6 × 1 × 2 mm) was used to record the polymerization shrinkage forces (MPa) over a period that started with photo-activation and ended after two minutes of measurement. For each group, ten repetitions (n = 40) were performed. For DC% measurements, five specimens (n = 20) per group were made in a metallic mold (2 mm thickness and 4 mm diameter, ISO 4049), then pulverized, pressed with potassium bromide (KBr) and analyzed with FT-IR spectroscopy. The PS data were analyzed by analysis of variance (ANOVA) with Welch's correction and Tamhane's test. The PS means (MPa) were 0.60 (G1), 0.47 (G2), 0.52 (G3) and 0.45 (G4), showing significant differences between the two photo-activation methods, regardless of the light source used. The continuous method provided the highest PS values. The DC% data were analyzed by ANOVA and showed significant differences for the QTH LCU, regardless of the photo-activation method used; the QTH provided the lowest DC% values. The gradual method provides lower polymerization contraction, whether with halogen lamp or LED. The degree of conversion (%) for the continuous or gradual photo-activation method was influenced by the LCU.
Thus, the presented results suggest that gradual photo-activation with the LED LCU would suffice to ensure an adequate degree of conversion and minimum polymerization shrinkage.
Abstract:
Films of amorphous aluminium nitride (AlN) were prepared by conventional radio-frequency sputtering of an Al + Cr target in a plasma of pure nitrogen. The Cr-to-Al relative area determines the Cr content, which remained in the ~0–3.5 at% concentration range in this study. Film deposition was followed by thermal annealing of the samples up to 1050 °C in an atmosphere of oxygen and by spectroscopic characterization through energy-dispersive X-ray spectrometry, photoluminescence and optical transmission measurements. According to the experimental results, the optical-electronic properties of the Cr-containing AlN films are highly influenced by both the Cr concentration and the temperature of the thermal treatments. In fact, thermal annealing at 1050 °C induces the development of structures that, because of their typical size and distinctive spectral characteristics, were designated ruby microstructures (RbMSs). These RbMSs are surrounded by a N-rich environment in which Cr³⁺ ions exhibit luminescent features not present in other Cr³⁺-containing systems such as ruby, emerald or alexandrite. The light emissions shown by the RbMSs and their surroundings were investigated according to the Cr concentration and temperature of measurement, allowing the identification of several Cr³⁺-related luminescent lines. The main characteristics of these luminescent lines and the corresponding excitation-recombination processes are presented and discussed in view of a detailed spectroscopic analysis.
Abstract:
Throughout the industrial processes of sheet metal manufacturing and refining, shear cutting is widely used for its speed and cost advantages over competing cutting methods. Industrial shears may include some force measurement possibilities, but the force is most likely influenced by friction losses between the shear tool and the point of measurement, and in general does not reflect the actual force applied to the sheet. Well-defined shears and accurate measurements of force and shear tool position are important for understanding the influence of shear parameters. Accurate experimental data are also necessary for calibration of numerical shear models. Here, a dedicated laboratory set-up with well-defined geometry and movement in the shear, and high measurability in terms of force and geometry, is designed, built and verified. Parameters important to the shear process are studied with perturbation analysis techniques, and requirements on input parameter accuracy are formulated to meet experimental output demands. Input parameters in shearing are mostly geometric parameters, but also material properties and contact conditions. Based on the accuracy requirements, a symmetric experiment with internal balancing of forces is constructed to avoid guides and the corresponding friction losses. Finally, the experimental procedure is validated through shearing of a medium-grade steel. With the obtained experimental set-up performance, force changes resulting from changes in the studied input parameters are distinguishable down to a level of 1%.
Abstract:
I start by presenting an explicit solution to Taylor's (2001) model, in order to illustrate the link between the target interest rate and the overnight interest rate prevailing in the economy. Next, I use vector autoregressions to shed some light on the evolution of key macroeconomic variables after the Central Bank of Brazil increases the target interest rate by 1%. Point estimates show a four-year accumulated output loss ranging from 0.04% (whole sample, 1980:1–2004:2, quarterly data) to 0.25% (post-Real data only), with a first-year peak output response between 0.04% and 1.0%, respectively. Prices decline between 2% and 4% over a 4-year horizon. The accumulated output response is found to be between 3.5 and 6 times higher after the Real Plan than when the whole sample is considered. The 95% confidence bands obtained using a bias-corrected bootstrap always include the null output response when the whole sample is used, but not when the data are restricted to the post-Real period. Innovations to interest rates explain between 4.9% (whole sample) and 9.2% (post-Real sample) of the forecast error of GDP.
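The impulse-response exercise described above can be sketched in miniature: estimate a VAR(1) by OLS on simulated data and trace the response of "output" to a unit interest-rate shock. The data-generating numbers are invented; the paper uses Brazilian macro data and bias-corrected bootstrap bands rather than this toy:

```python
import numpy as np

rng = np.random.default_rng(2)
A_true = np.array([[0.7, -0.15],   # [output, interest rate] dynamics:
                   [0.0,  0.8]])   # a rate rise depresses future output
T = 500
Y = np.zeros((T, 2))
for t in range(1, T):
    Y[t] = A_true @ Y[t-1] + rng.normal(scale=0.5, size=2)

# OLS estimate of the VAR(1) coefficient matrix: Y_t = A Y_{t-1} + e_t
X, Z = Y[1:], Y[:-1]
A_hat = np.linalg.lstsq(Z, X, rcond=None)[0].T

# Impulse response of output to a unit interest-rate shock, 16 quarters.
# No contemporaneous effect is modelled, so the impact response is zero.
irf = []
x = np.array([0.0, 1.0])
for h in range(16):
    irf.append(x[0])
    x = A_hat @ x
print(np.round(irf, 3))   # a negative output response that decays over time
```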