817 results for Error of measurement
Abstract:
This technique paper describes a novel method for quantitatively and routinely identifying auroral breakup following substorm onset using the Time History of Events and Macroscale Interactions During Substorms (THEMIS) all-sky imagers (ASIs). Substorm onset is characterised by a brightening of the aurora that is followed by auroral poleward expansion and auroral breakup. This breakup can be identified by a sharp increase in the auroral intensity i(t) and in the time derivative of auroral intensity i'(t). Utilising both i(t) and i'(t), we have developed an algorithm for identifying the time interval and spatial location of auroral breakup during the substorm expansion phase within the field of view of ASI data, based solely on quantifiable characteristics of the optical auroral emissions. We compare the time interval determined by the algorithm to independently identified auroral onset times from three previously published studies. In each case the interval determined by the algorithm is within error of the onset independently identified by the prior study. We further show the utility of the algorithm by comparing the breakup intervals it determines to an independent list of substorm onset times, demonstrating that up to 50% of the intervals characterised by the algorithm are within the uncertainty of the times identified in the independent list. The quantitative description and routine identification of an interval of auroral brightening during the substorm expansion phase provides a foundation for unbiased statistical analysis of the aurora and a new scientific tool for probing the physics of the auroral substorm and identifying the processes leading to its onset.
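As an illustration of the detection criterion, a breakup-like interval can be flagged wherever both i(t) and i'(t) exceed chosen thresholds. This is a hedged sketch only: the thresholds, the synthetic intensity profile, and the function name below are invented for the example and are not the paper's algorithm or values.

```python
import numpy as np

def breakup_interval(t, i, i_thresh, di_thresh):
    """Return the first contiguous interval where both the auroral
    intensity i(t) and its time derivative i'(t) exceed thresholds."""
    di = np.gradient(i, t)                    # numerical i'(t)
    mask = (i > i_thresh) & (di > di_thresh)  # both criteria satisfied
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        return None
    breaks = np.flatnonzero(np.diff(idx) > 1)  # gaps between flagged runs
    end = idx[breaks[0]] if breaks.size else idx[-1]
    return t[idx[0]], t[end]

# synthetic example: quiet background followed by a sharp brightening
t = np.linspace(0, 600, 601)                   # seconds
i = 100 + 900 / (1 + np.exp(-(t - 300) / 10))  # sigmoidal onset
start, end = breakup_interval(t, i, i_thresh=300, di_thresh=5)
```

The flagged interval brackets the steep part of the brightening, where intensity is already elevated and still rising rapidly.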
Abstract:
The evidence for anthropogenic climate change continues to strengthen, and concerns about severe weather events are increasing. As a result, scientific interest is rapidly shifting from detection and attribution of global climate change to prediction of its impacts at the regional scale. However, nearly everything we have any confidence in when it comes to climate change is related to global patterns of surface temperature, which are primarily controlled by thermodynamics. In contrast, we have much less confidence in atmospheric circulation aspects of climate change, which are primarily controlled by dynamics and exert a strong control on regional climate. Model projections of circulation-related fields, including precipitation, show a wide range of possible outcomes, even on centennial timescales. Sources of uncertainty include low-frequency chaotic variability and the sensitivity to model error of the circulation response to climate forcing. As the circulation response to external forcing appears to project strongly onto existing patterns of variability, knowledge of errors in the dynamics of variability may provide some constraints on model projections. Nevertheless, higher scientific confidence in circulation-related aspects of climate change will be difficult to obtain. For effective decision-making, it is necessary to move to a more explicitly probabilistic, risk-based approach.
Abstract:
The techno-economic performance of a small wind turbine is very sensitive to the available wind resource. However, due to financial and practical constraints, installers rely on low-resolution wind speed databases to assess a potential site. This study investigates whether the two site assessment tools currently used in the UK, NOABL and the Energy Saving Trust wind speed estimator, are accurate enough to estimate the techno-economic performance of a small wind turbine. Both tools tend to overestimate the wind speed, with mean errors of 23% and 18% for the NOABL and Energy Saving Trust tools respectively. A techno-economic assessment of 33 small wind turbines at each site has shown that these errors can have a significant impact on the estimated load factor of an installation. Consequently, site/turbine combinations that are not economically viable can be predicted to be viable. Furthermore, both models tend to underestimate the wind resource at relatively high wind speed sites; this can lead to missed opportunities, as economically viable turbine/site combinations are predicted to be non-viable. These results show that a better understanding of the local wind resource is required to make small wind turbines a viable technology in the UK.
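Because turbine power scales roughly with the cube of wind speed, even modest speed overestimates inflate yield predictions substantially. A minimal sketch of that arithmetic (the site speed below is invented; only the 23% and 18% mean errors come from the study):

```python
true_speed = 4.5                               # m/s, hypothetical site
for name, err in [("NOABL", 0.23), ("Energy Saving Trust", 0.18)]:
    predicted = true_speed * (1 + err)
    yield_inflation = (predicted / true_speed) ** 3   # cubic power scaling
    print(f"{name}: predicted {predicted:.2f} m/s, "
          f"energy estimate inflated x{yield_inflation:.2f}")
```

An 18-23% speed overestimate thus inflates a simple cubic yield estimate by roughly 64-86%, easily enough to flip a marginal site/turbine combination from non-viable to apparently viable.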
Abstract:
The Met Office 1 km radar-derived precipitation-rate composite over 8 years (2006–2013) is examined to evaluate whether it provides an accurate representation of annual-average precipitation over Great Britain and Ireland over long periods of time. The annual-average precipitation from the radar composite is comparable with gauge measurements, with an average error of +23 mm yr⁻¹ over Great Britain and Ireland, +29 mm yr⁻¹ (3%) over the United Kingdom and −781 mm yr⁻¹ (46%) over the Republic of Ireland. The radar-derived precipitation composite is useful over the United Kingdom including Northern Ireland, but not accurate over the Republic of Ireland, particularly in the south.
Abstract:
Recent work in animals suggests that the extent of early tactile stimulation of offspring by parents is an important element of early caregiving. We evaluate the psychometric properties of a new parent-report measure designed to assess the frequency of tactile stimulation across multiple caregiving domains in infancy. We describe the full item set of the Parent-Infant Caregiving Touch Scale (PICTS) and, using data from a UK longitudinal Child Health and Development Study, the response frequencies and factor structure, and whether these were invariant over two time points in early development (5 and 9 weeks). When their infant was 9 weeks old, 838 mothers responded on the PICTS, while a stratified subsample of 268 mothers completed the PICTS at an earlier 5-week assessment (229 responded on both occasions). Three PICTS factors were identified, reflecting stroking, holding and affective communication. These were moderately to strongly correlated at each of the two time points of interest and were unrelated to, and therefore distinct from, a traditional measure of maternal sensitivity at 7 months. A wholly stable psychometry across the 5- and 9-week assessments was not identified, which suggests that behavior profiles differ slightly for younger and older infants. Tests of measurement invariance demonstrated that all three factors are characterized by full configural and metric invariance, as well as a moderate degree of evidence of scalar invariance for the stroking factor. We propose the PICTS as a valuable new measure of important aspects of caregiving in infancy.
Abstract:
The aim of this study was to assess and improve the accuracy of biotransfer models for organic pollutants (PCBs, PCDD/Fs, PBDEs, PFCAs, and pesticides) into cow's milk and beef used in human exposure assessment. Metabolic rate in cattle is known to be a key parameter for this biotransfer; however, few experimental data and no simulation methods are currently available. In this research, metabolic rate was estimated using existing QSAR biodegradation models for microorganisms (BioWIN) and fish (EPI-HL and IFS-HL). This simulated metabolic rate was then incorporated into the mechanistic cattle biotransfer models (RAIDAR, ACC-HUMAN, OMEGA, and CKow). Goodness-of-fit tests showed that the RAIDAR, ACC-HUMAN, and OMEGA model performances were significantly improved using any of the QSARs when comparing the new model outputs to observed data. The CKow model is the only one that separates the processes in the gut and liver. This model showed the lowest residual error of all the models tested when the BioWIN model was used to represent the ruminant metabolic process in the gut and the two fish QSARs were used to represent the metabolic process in the liver. Our testing included EUSES and CalTOX, which are KOW-regression models widely used in regulatory assessment. New regressions based on the simulated rates of the two metabolic processes are also proposed as an alternative to KOW-regression models for screening risk assessment. The modified CKow model is more physiologically realistic, but has equivalent usability to existing KOW-regression models for estimating cattle biotransfer of organic pollutants.
Abstract:
A novel technique for selecting the poles of orthonormal basis functions (OBF) in Volterra models of any order is presented. It is well known that the usual large number of parameters required to describe the Volterra kernels can be significantly reduced by representing each kernel using an appropriate basis of orthonormal functions. Such a representation results in the so-called OBF Volterra model, which has a Wiener structure consisting of linear dynamics generated by the orthonormal basis followed by a nonlinear static mapping given by the Volterra polynomial series. Aiming at optimizing the poles that fully parameterize the orthonormal bases, the exact gradients of the outputs of the orthonormal filters with respect to their poles are computed analytically by using a back-propagation-through-time technique. The expressions relative to the Kautz basis and to generalized orthonormal bases of functions (GOBF) are addressed; the ones related to the Laguerre basis follow straightforwardly as a particular case. The main innovation here is that the dynamic nature of the OBF filters is fully considered in the gradient computations. These gradients provide exact search directions for optimizing the poles of a given orthonormal basis. Such search directions can, in turn, be used as part of an optimization procedure to locate the minimum of a cost function that takes into account the error of estimation of the system output. The Levenberg-Marquardt algorithm is adopted here as the optimization procedure. Unlike previous related work, the proposed approach relies solely on input-output data measured from the system to be modeled, i.e., no information about the Volterra kernels is required. Examples are presented to illustrate the application of this approach to the modeling of dynamic systems, including a real magnetic levitation system with nonlinear oscillatory behavior.
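The pole-optimization idea can be sketched numerically. The following example is a simplified illustration, not the paper's method: it uses the Laguerre special case and a coarse grid search over the pole in place of the analytic back-propagated gradients and Levenberg-Marquardt updates, and the system generating the data is invented. It shows that the output-estimation error is minimized when the basis pole matches the dynamics behind the data.

```python
import numpy as np

def laguerre_outputs(u, pole, n_filters):
    """Outputs of a discrete Laguerre filter bank with a real pole in (-1, 1).
    First section: sqrt(1-p^2)/(1 - p z^-1); each later section appends the
    all-pass factor (z^-1 - p)/(1 - p z^-1)."""
    N = len(u)
    X = np.zeros((n_filters, N))
    for n in range(N):  # first-order low-pass section
        X[0, n] = pole * (X[0, n-1] if n else 0.0) + np.sqrt(1 - pole**2) * u[n]
    for k in range(1, n_filters):
        for n in range(N):  # x_k[n] = p*x_k[n-1] + x_{k-1}[n-1] - p*x_{k-1}[n]
            X[k, n] = (pole * (X[k, n-1] if n else 0.0)
                       + (X[k-1, n-1] if n else 0.0) - pole * X[k-1, n])
    return X

rng = np.random.default_rng(0)
u = rng.standard_normal(400)
# data generated by a 3-filter Laguerre model with pole 0.7 (invented)
y = np.array([1.0, 0.5, -0.3]) @ laguerre_outputs(u, 0.7, 3)

def fit_error(pole):
    """Least-squares output error of a 3-filter Laguerre model at this pole."""
    X = laguerre_outputs(u, pole, 3)
    theta, *_ = np.linalg.lstsq(X.T, y, rcond=None)
    return np.sum((y - X.T @ theta) ** 2)

poles = np.linspace(0.1, 0.95, 18)      # coarse grid in place of gradients
best = poles[np.argmin([fit_error(p) for p in poles])]
```

Here `best` recovers the generating pole 0.7; the paper performs the same minimization with exact gradients and Levenberg-Marquardt, covering Kautz and GOBF bases as well.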
Abstract:
The purpose of this study was to evaluate the influence of different light sources and photo-activation methods on the degree of conversion (DC%) and polymerization shrinkage (PS) of a nanocomposite resin (Filtek™ Supreme XT, 3M/ESPE). Two light-curing units (LCUs), one halogen lamp (QTH) and one light-emitting diode (LED), and two photo-activation methods (continuous and gradual) were investigated. The specimens were divided into four groups: group 1, power density (PD) of 570 mW/cm² for 20 s (QTH); group 2, PD of 0–570 mW/cm² for 10 s + 10 s at 570 mW/cm² (QTH); group 3, PD of 860 mW/cm² for 20 s (LED); and group 4, PD of 125 mW/cm² for 10 s + 10 s at 860 mW/cm² (LED). An EMIC testing machine with rectangular steel bases (6 × 1 × 2 mm) was used to record the polymerization shrinkage forces (MPa) over a period that started with photo-activation and ended after two minutes of measurement. For each group, ten repetitions (n = 40) were performed. For DC% measurements, five specimens (n = 20) per group were made in a metallic mold (2 mm thickness and 4 mm diameter, ISO 4049), then pulverized, pressed with potassium bromide (KBr) and analyzed with FT-IR spectroscopy. The PS data were analyzed by Analysis of Variance (ANOVA) with Welch's correction and Tamhane's test. The PS means (MPa) were 0.60 (G1), 0.47 (G2), 0.52 (G3) and 0.45 (G4), showing significant differences between the two photo-activation methods regardless of the light source used. The continuous method provided the highest values for PS. The DC% data were analyzed by ANOVA and showed significant differences between the LCUs regardless of the photo-activation method used. The QTH unit provided the lowest values for DC%. The gradual method provides lower polymerization contraction, whether with halogen lamp or LED. The degree of conversion under either photo-activation method was influenced by the LCU.
Thus, the presented results suggest that gradual photo-activation with the LED LCU would suffice to ensure an adequate degree of conversion and minimum polymerization shrinkage.
Abstract:
Films of amorphous aluminium nitride (AlN) were prepared by conventional radio-frequency sputtering of an Al + Cr target in a plasma of pure nitrogen. The Cr-to-Al relative area determines the Cr content, which remained in the ~0–3.5 at% concentration range in this study. Film deposition was followed by thermal annealing of the samples up to 1050 °C in an oxygen atmosphere and by spectroscopic characterization through energy-dispersive x-ray spectrometry, photoluminescence and optical transmission measurements. According to the experimental results, the optical-electronic properties of the Cr-containing AlN films are highly influenced by both the Cr concentration and the temperature of the thermal treatments. In fact, thermal annealing at 1050 °C induces the development of structures that, because of their typical size and distinctive spectral characteristics, were designated ruby microstructures (RbMSs). These RbMSs are surrounded by a N-rich environment in which Cr³⁺ ions exhibit luminescent features not present in other Cr³⁺-containing systems such as ruby, emerald or alexandrite. The light emissions shown by the RbMSs and their surroundings were investigated as a function of Cr concentration and measurement temperature, allowing the identification of several Cr³⁺-related luminescent lines. The main characteristics of these luminescent lines and the corresponding excitation-recombination processes are presented and discussed in view of a detailed spectroscopic analysis.
Abstract:
Throughout the industrial processes of sheet metal manufacturing and refining, shear cutting is widely used for its speed and cost advantages over competing cutting methods. Industrial shears may include some force measurement capability, but the measured force is most likely influenced by friction losses between the shear tool and the point of measurement, and in general does not reflect the actual force applied to the sheet. Well-defined shears and accurate measurements of force and shear tool position are important for understanding the influence of shear parameters. Accurate experimental data are also necessary for calibration of numerical shear models. Here, a dedicated laboratory set-up with well-defined geometry and movement in the shear, and high measurability in terms of force and geometry, is designed, built and verified. Parameters important to the shear process are studied with perturbation analysis techniques, and requirements on input parameter accuracy are formulated to meet experimental output demands. Input parameters in shearing are mostly geometric, but also include material properties and contact conditions. Based on the accuracy requirements, a symmetric experiment with internal balancing of forces is constructed to avoid guides and the corresponding friction losses. Finally, the experimental procedure is validated through shearing of a medium-grade steel. With the obtained experimental set-up performance, force changes resulting from changes in the studied input parameters are distinguishable down to a level of 1%.
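The accuracy-requirement formulation can be illustrated with first-order error propagation. This is a hedged sketch: the sensitivities, parameter names, nominal force and the equal-budget allocation below are all invented; only the 1% output-resolution target comes from the text.

```python
def max_input_error(sensitivity, output_tolerance, n_params):
    """Equal-budget allocation: each input parameter may contribute at most
    output_tolerance / n_params to the first-order output error
    dF ~ sum |dF/dp_i| * dp_i."""
    return output_tolerance / (n_params * abs(sensitivity))

F_nominal = 100.0                      # kN, nominal shear force (assumed)
output_tol = 0.01 * F_nominal          # 1% force resolution target
sensitivities = {"clearance": 50.0,    # kN per mm (assumed values)
                 "tool_position": 20.0,
                 "sheet_thickness": 200.0}
tolerances = {name: max_input_error(s, output_tol, len(sensitivities))
              for name, s in sensitivities.items()}
```

The parameter with the largest sensitivity (here, hypothetically, sheet thickness) receives the tightest accuracy requirement, which mirrors how perturbation analysis translates an output demand into input specifications.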
Abstract:
I start by presenting an explicit solution to Taylor's (2001) model, in order to illustrate the link between the target interest rate and the overnight interest rate prevailing in the economy. Next, I use Vector Auto Regressions to shed some light on the evolution of key macroeconomic variables after the Central Bank of Brazil increases the target interest rate by 1%. Point estimates show a four-year accumulated output loss ranging from 0.04% (whole sample, 1980:1–2004:2, quarterly data) to 0.25% (post-Real data only), with a first-year peak output response between 0.04% and 1.0%, respectively. Prices decline between 2% and 4% over a 4-year horizon. The accumulated output response is found to be between 3.5 and 6 times higher after the Real Plan than when the whole sample is considered. The 95% confidence bands obtained using bias-corrected bootstrap always include the null output response when the whole sample is used, but not when the data are restricted to the post-Real period. Innovations to interest rates explain between 4.9% (whole sample) and 9.2% (post-Real sample) of the forecast error of GDP.
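The VAR exercise can be sketched as follows. This is an illustrative simulation only: the coefficient matrix, noise level and sample size are invented and bear no relation to the Brazilian data; it merely shows how an accumulated output response to an interest-rate innovation is read off estimated impulse responses.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0.5, -0.1],        # y_t = A @ y_{t-1} + e_t
              [0.0,  0.8]])       # variables: [output, interest rate]
T = 2000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t-1] + rng.standard_normal(2) * 0.05

# OLS estimate of A: regress y_t on y_{t-1}
Y, X = y[1:], y[:-1]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

# accumulate the output (variable 0) response to a unit rate shock
state = np.array([0.0, 1.0])
resp = []
for _ in range(16):               # four years of quarterly horizons
    state = A_hat @ state
    resp.append(state[0])
accumulated_output_response = sum(resp)
```

With the assumed negative output-rate coefficient, the accumulated response is an output loss, which is the quantity the abstract reports for the whole-sample and post-Real estimates.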
Abstract:
Although research on interorganizational trust and its relationship with performance has been conducted from the perspectives of Transaction Cost Theory, Social Exchange Theory and Marketing Channels, three important gaps in the literature require investigation. First, there is an ongoing conceptual debate about the multidimensionality of trust and how it should be operationalized and measured, split across three schools of thought: a multidimensional construct defined by non-dominant dimensions, a construct based on two dominant dimensions (affective and calculative), or a unidimensional construct. Second, there is ambiguity in how the dimensions of trust are defined, leading to equivalence artifacts in the scales and contradictory results. Third, the different perceptions that buyers and suppliers may have of each dimension of trust, and their impact on logistics performance, remain unclear. This empirical research examines trust in buyer-supplier relationships in the logistics sector in Brazil through two independent samples and studies: one examines the buyers' perception and the other the suppliers'. The two studies are then compared to determine the different perspectives on trust and the implications for logistics performance. Multivariate analysis showed that trust appears to be present in interorganizational relationships, and it is the buyer's perception that is most strongly related to logistics performance. At the same time, buyers perceive suppliers more negatively on the measurable dimensions (competence and performance), whereas no differences were found on the social aspects (honesty and benevolence), which may be a result of the environment and culture studied.
The analyses showed that, although trust can be defined as a multidimensional construct, it should be operationalized as a unidimensional construct driven by competence and credibility. This study contributes to practice by suggesting ways to increase interorganizational trust in order to improve performance.
Abstract:
Cocaine is one of the most widespread illegal stimulants used by the human population throughout the world. The aim of this study was to establish the highest no-effect dose (HNED) of cocaine on the spontaneous locomotor activity (SLA) of horses in a behavior chamber, and thereby to determine the maximal acceptable threshold of the urinary drug concentration in horses. Twelve English thoroughbred mares received 0.02, 0.03, 0.04, 0.08 or 0.12 mg kg⁻¹ cocaine i.v. or saline solution (control). It was noted that doses above 0.04 mg kg⁻¹ induced a significant increase in SLA (P < 0.05, Tukey's test). No significant increase in SLA was seen in the mares that received 0.03 mg kg⁻¹, but the animals showed important behavioral changes that did not occur after the 0.02 mg kg⁻¹ dose. It was concluded that the HNED of cocaine for horses in a behavior chamber is 0.02 mg kg⁻¹. After injection of this dose in five horses, urine samples were collected at predetermined intervals through vesical catheterization. The concentrations of cocaine, norcocaine, benzoylecgonine and ecgonine methyl ester were quantified by liquid chromatography/electrospray ionization tandem mass spectrometry. Cocaine and norcocaine concentrations remained consistently below the level of detection. Benzoylecgonine reached a mean (± SEM) maximum concentration of 531.9 ± 168.7 ng ml⁻¹ after 4 h, whereas ecgonine methyl ester peaked 2 h after injection at a concentration of 97.2 ± 26.5 ng ml⁻¹. The maximum admissible concentration for cocaine and/or metabolites in the urine of horses is difficult to establish unequivocally because of the substantial individual variation in the drug elimination pattern observed in horses, which can be inferred from the large standard error of the means obtained. Copyright © 2002 John Wiley & Sons, Ltd.
Abstract:
Ionospheric scintillations are caused by time-varying electron density irregularities in the ionosphere, occurring more often at equatorial and high latitudes. This paper focuses exclusively on experiments undertaken in Europe, at geographic latitudes between ~50°N and ~80°N, where a network of GPS receivers capable of monitoring Total Electron Content and ionospheric scintillation parameters was deployed. The widely used ionospheric scintillation indices S4 and σφ represent a practical measure of the intensity of amplitude and phase scintillation affecting GNSS receivers. However, they do not provide sufficient information regarding the actual tracking errors that degrade GNSS receiver performance. Suitable receiver tracking models, sensitive to ionospheric scintillation, allow the computation of the variance of the output error of the receiver PLL (Phase Locked Loop) and DLL (Delay Locked Loop), which expresses the quality of the range measurements used by the receiver to calculate user position. The ability of such models to incorporate phase and amplitude scintillation effects into the variance of these tracking errors underpins our proposed method of applying relative weights to measurements from different satellites. This gives the least squares stochastic model used for position computation a more realistic representation, vis-à-vis the otherwise 'equal weights' model. For pseudorange processing, relative weights were computed so that a 'scintillation-mitigated' solution could be performed and compared with the (non-mitigated) 'equal weights' solution. An improvement of between 17% and 38% in height accuracy was achieved when an epoch-by-epoch differential solution was computed over baselines ranging from 1 to 750 km.
The method was then compared with alternative approaches that can be used to improve the least squares stochastic model, such as weighting according to satellite elevation angle or by the inverse of the square of the standard deviation of the code/carrier divergence (σCCDiv). The influence of multipath effects on the proposed mitigation approach is also discussed. With the use of high-rate scintillation data in addition to the scintillation indices, a carrier-phase-based mitigated solution was also implemented and compared with the conventional solution. During a period of high phase scintillation, it was observed that the proposed mitigated solution can reduce problems related to ambiguity resolution.
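The weighting idea can be illustrated with a toy weighted least squares adjustment. All geometry, variances and noise levels below are invented; the per-satellite standard deviations simply stand in for the outputs of a scintillation-sensitive PLL/DLL tracking-error variance model.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sats = 8
# design matrix: random line-of-sight components plus a clock column
G = np.hstack([rng.uniform(-1, 1, (n_sats, 3)), np.ones((n_sats, 1))])
x_true = np.array([1.0, -2.0, 0.5, 3.0])      # position offsets + clock bias
# tracking-error standard deviation per satellite: four quiet channels
# and four strongly scintillating ones (invented values)
sigma = np.array([0.3, 0.3, 0.3, 0.3, 3.0, 3.0, 3.0, 3.0])

def solve(W, r):
    """Weighted least squares: x = (G'WG)^-1 G'Wr."""
    return np.linalg.solve(G.T @ W @ G, G.T @ W @ r)

W_equal = np.eye(n_sats)
W_scint = np.diag(1.0 / sigma**2)             # inverse-variance weights
err_equal, err_weighted = [], []
for _ in range(200):                          # average over noise draws
    r = G @ x_true + rng.standard_normal(n_sats) * sigma
    err_equal.append(np.linalg.norm(solve(W_equal, r) - x_true))
    err_weighted.append(np.linalg.norm(solve(W_scint, r) - x_true))
err_equal, err_weighted = np.mean(err_equal), np.mean(err_weighted)
```

Down-weighting the noisy channels reduces the mean position error relative to the 'equal weights' solution, which is the same stochastic-model effect the paper exploits with scintillation-derived variances.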