915 results for measurement error model
Abstract:
In this work, a new method is proposed for the separate estimation of the ARMA spectral model, based on the modified Yule-Walker equations and on the least squares method. The method consists of applying an AR filter to the generated random process to obtain a new random sequence, from which the ARMA model parameters are re-estimated, yielding a better spectrum estimate. Numerical examples are presented to illustrate the performance of the proposed method, which is evaluated by the relative error and the average coefficient of variation.
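As a rough illustration of the starting point for such estimators, the sketch below solves the ordinary Yule-Walker equations for a pure AR model. This is a generic textbook implementation, not the authors' separated-estimation method; the modified Yule-Walker equations extend this system to lags beyond the MA order so it remains valid for the AR part of an ARMA process.

```python
import numpy as np

def yule_walker_ar(x, p):
    """Estimate AR(p) coefficients from biased sample autocovariances
    via the ordinary Yule-Walker equations."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Biased sample autocovariances r[0..p]
    r = np.array([x[:n - k] @ x[k:] / n for k in range(p + 1)])
    # Toeplitz autocovariance matrix R[i, j] = r[|i - j|]
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    a = np.linalg.solve(R, r[1:])      # AR coefficients a_1..a_p
    sigma2 = r[0] - a @ r[1:]          # innovation variance
    return a, sigma2

# Recover a known AR(2) process: x_t = 0.75 x_{t-1} - 0.5 x_{t-2} + e_t
rng = np.random.default_rng(0)
e = rng.standard_normal(5000)
x = np.zeros(5000)
for t in range(2, 5000):
    x[t] = 0.75 * x[t - 1] - 0.5 * x[t - 2] + e[t]
a_hat, sigma2_hat = yule_walker_ar(x, 2)
```

In the modified Yule-Walker equations, the same Toeplitz-structured system is written for lags above the MA order, so that the moving-average terms drop out of the autocovariance relations.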
Abstract:
We present a measurement of the top quark pair (tt̄) production cross section in pp̄ collisions at √s = 1.96 TeV using events with two charged leptons in the final state. This analysis utilizes an integrated luminosity of 224–243 pb⁻¹ collected with the DØ detector at the Fermilab Tevatron Collider. We observe 13 events in the e⁺e⁻, eμ, and μ⁺μ⁻ channels with an expected background of 3.2 ± 0.7 events. For a top quark mass of 175 GeV, we measure a tt̄ production cross section of σtt̄ = 8.6 +3.2 −2.7 (stat) ± 1.1 (syst) ± 0.6 (lumi) pb, consistent with the standard model prediction. © 2005 Elsevier B.V. All rights reserved.
Abstract:
The glued-laminated lumber (glulam) technique is an efficient process for making rational use of wood. Fiber-Reinforced Polymers (FRPs) associated with glulam beams provide significant gains in strength and stiffness, and also alter the rupture mode of these structural elements. In this context, this paper presents a theoretical model for designing reinforced glulam beams. The model allows for the calculation of the bending moment and of the hypothetical distribution of linear strains along the height of the beam; it assumes that the wood exhibits linear elastic brittle behavior in tension parallel to the fibers and bilinear behavior in compression parallel to the fibers, initially elastic and subsequently inelastic, with a negative slope in the stress-strain diagram. The stiffness was calculated by the transformed section method. Twelve non-reinforced and fiberglass-reinforced glulam beams were evaluated experimentally to validate the proposed theoretical model. The results indicate good agreement between the experimental and theoretical values.
Abstract:
Type II Bartter's syndrome is a hereditary hypokalemic renal salt-wasting disorder caused by mutations in the ROMK channel (Kir1.1; Kcnj1), mediating potassium recycling in the thick ascending limb of Henle's loop (TAL) and potassium secretion in the distal tubule and cortical collecting duct (CCT). Newborns with Type II Bartter are transiently hyperkalemic, consistent with loss of ROMK channel function in potassium secretion in the distal convoluted tubule and CCT. Yet, these infants rapidly develop persistent hypokalemia owing to increased renal potassium excretion mediated by unknown mechanisms. Here, we used free-flow micropuncture and stationary microperfusion of the late distal tubule to explore the mechanism of renal potassium wasting in the Romk-deficient, Type II Bartter's mouse. We show that potassium absorption in the loop of Henle is reduced in Romk-deficient mice and can account for a significant fraction of renal potassium loss. In addition, we show that iberiotoxin (IBTX)-sensitive, flow-stimulated maxi-K channels account for sustained potassium secretion in the late distal tubule, despite loss of ROMK function. IBTX-sensitive potassium secretion is also increased in high-potassium-adapted wild-type mice. Thus, renal potassium wasting in Type II Bartter is due to both reduced reabsorption in the TAL and potassium secretion by maxi-K channels in the late distal tubule. © 2006 International Society of Nephrology.
Abstract:
A branch and bound algorithm is proposed to solve the [image omitted]-norm model reduction problem for continuous and discrete-time linear systems, with convergence to the global optimum in finite time. The lower and upper bounds in the optimization procedure are described by linear matrix inequalities (LMIs). Two methods are also proposed to reduce the convergence time of the branch and bound algorithm: the first uses the Hankel singular values as a sufficient condition to stop the algorithm, giving the method fast convergence to the global optimum; the second assumes that the reduced model is in controllable or observable canonical form. The [image omitted]-norm of the error between the original and reduced models is considered. Examples illustrate the application of the proposed method.
Abstract:
In this paper, we consider the propagation of water waves in a long-wave asymptotic regime, when the bottom topography is periodic on a short length scale. We perform a multiscale asymptotic analysis of the full potential theory model and of a family of reduced Boussinesq systems parametrized by a free parameter that is the depth at which the velocity is evaluated. We obtain explicit expressions for the coefficients of the resulting effective Korteweg-de Vries (KdV) equations. We show that it is possible to choose the free parameter of the reduced model so as to match the KdV limits of the full and reduced models. Hence the reduced model is optimal regarding the embedded linear weakly dispersive and weakly nonlinear characteristics of the underlying physical problem, which has a microstructure. We also discuss the impact of the rough bottom on the effective wave propagation. In particular, nonlinearity is enhanced and we can distinguish two regimes depending on the period of the bottom where the dispersion is either enhanced or reduced compared to the flat bottom case. © 2007 The American Physical Society.
Abstract:
The GPS observables are subject to several errors. Among them, the systematic ones have great impact, because they degrade the accuracy of the resulting positioning. These errors are mainly related to GPS satellite orbits, multipath, and atmospheric effects. Recently, a method has been suggested to mitigate these errors: the semiparametric model with the penalised least squares technique (PLS). In this method, the errors are modeled as functions varying smoothly in time. This amounts to changing the stochastic model into which the error functions are incorporated, and the results obtained are similar to those obtained by changing the functional model. As a result, the ambiguities and the station coordinates are estimated with better reliability and accuracy than with the conventional least squares method (CLS). In general, the solution requires a shorter data interval, minimizing costs. The method's performance was analyzed in two experiments using data from single-frequency receivers. The first was conducted with a short baseline, where the main error was multipath. In the second experiment, a baseline of 102 km was used; in this case, the predominant errors were due to ionospheric and tropospheric refraction. In the first experiment, using 5 minutes of data collection, the largest coordinate discrepancies with respect to the ground truth reached 1.6 cm and 3.3 cm in the h coordinate for PLS and CLS, respectively; in the second, also using 5 minutes of data, the discrepancies were 27 cm in h for PLS and 175 cm in h for CLS. In these tests, a considerable improvement in ambiguity resolution was also verified for PLS relative to CLS, with a reduced data collection interval. © Springer-Verlag Berlin Heidelberg 2007.
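The core of the penalised least squares idea, treating systematic errors as functions that vary smoothly in time and penalizing their roughness, can be sketched with a simple Whittaker-style smoother. This is a one-dimensional toy with synthetic data, not the paper's GPS observation model:

```python
import numpy as np

def penalized_ls(y, lam):
    """Whittaker-style penalised least squares: minimise
    ||y - g||^2 + lam * ||D2 g||^2, where D2 takes second differences,
    so the fitted function g is forced to vary smoothly in time."""
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)     # (n-2) x n second-difference operator
    A = np.eye(n) + lam * D2.T @ D2
    return np.linalg.solve(A, y)

# Toy data: a smooth 'systematic error' plus white measurement noise
t = np.linspace(0.0, 1.0, 200)
trend = np.sin(2 * np.pi * t)
rng = np.random.default_rng(1)
y = trend + 0.2 * rng.standard_normal(t.size)
g = penalized_ls(y, lam=100.0)
rmse_raw = float(np.sqrt(np.mean((y - trend) ** 2)))   # error of raw data
rmse_fit = float(np.sqrt(np.mean((g - trend) ** 2)))   # error after smoothing
```

In the semiparametric GPS model, a penalty of this kind acts on the error functions inside the combined functional/stochastic model rather than directly on the observable, but the smoothness-versus-fit trade-off controlled by the penalty weight is the same.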
Abstract:
The effect of the ionosphere on the signals of Global Navigation Satellite Systems (GNSS), such as the Global Positioning System (GPS) and the proposed European Galileo, depends on the ionospheric electron density, given by its Total Electron Content (TEC). Ionospheric time-varying density irregularities may cause scintillations, which are fluctuations in the phase and amplitude of the signals. Scintillations occur more often at equatorial and high latitudes. They can degrade navigation and positioning accuracy and may cause loss of signal tracking, disrupting safety-critical applications such as marine navigation and civil aviation. This paper addresses the results of initial research carried out on two fronts that are relevant to GNSS users if they are to counter ionospheric scintillations: forecasting and mitigating their effects. On the forecasting front, the dynamics of scintillation occurrence were analysed during the severe ionospheric storm that took place on the evening of 30 October 2003, using data from a network of GPS Ionospheric Scintillation and TEC Monitor (GISTM) receivers set up in Northern Europe. Previous results [1] indicated that GPS scintillations in that region can originate from ionospheric plasma structures from the American sector. In this paper we describe experiments that enabled confirmation of those findings. On the mitigation front, we used the variance of the output error of the GPS receiver DLL (Delay Locked Loop) to modify the least squares stochastic model applied by an ordinary receiver to compute position. This error was modelled according to [2], as a function of the S4 amplitude scintillation index measured by the GISTM receivers. An improvement of up to 21% in relative positioning accuracy was achieved with this technique.
Abstract:
A previous work showed that viscosity values measured at high frequency by ultrasound agreed with the values measured at low frequency by a rotational viscometer when certain conditions were met, such as relatively low frequency and viscosity. However, these conditions strongly reduce the range of the measurement cell. In order to obtain a wider measurement range and higher sensitivity, high frequencies must be used, but this causes a frequency-dependent decrease in the measured viscosity values. This work introduces a new, simple model to represent this frequency-dependent behavior. The model is based on the Maxwell model for viscoelastic fluids, but uses a variable parameter. This parameter has physical meaning, because it represents the linear behavior of the apparent elasticity measured along with the viscosity by ultrasound. Automotive oils SAE 90 and SAE 250 at 22.5 ± 0.5 °C, with low-frequency viscosities of 0.6 and 6.7 Pa.s, respectively, were tested in the range of 1-5 MHz. The model was used to fit the obtained data using a non-linear regression algorithm in Matlab. By including the low-frequency viscosity as an unknown fitting parameter, it is possible to extrapolate its value. Relative deviations between the values measured by the rotational viscometer and those extrapolated using the model were 5.0% and 15.7% for the SAE 90 and SAE 250 oils, respectively. © 2008 IEEE.
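A minimal sketch of the extrapolation step, with illustrative numbers only (the relaxation time and noise level below are assumptions, not the paper's data). For a single-element Maxwell fluid the apparent viscosity falls with frequency, and its reciprocal is linear in ω², so the low-frequency viscosity can be recovered by a straight-line fit:

```python
import numpy as np

def maxwell_eta(f, eta0, tau):
    """Apparent (real-part) viscosity of a single-element Maxwell fluid:
    eta(f) = eta0 / (1 + (2*pi*f*tau)^2), decreasing with frequency."""
    return eta0 / (1 + (2 * np.pi * f * tau) ** 2)

# Synthetic 'measurements' over the 1-5 MHz band used in the abstract
# (assumed values: eta0 = 0.6 Pa.s as for SAE 90, relaxation time 30 ns)
f = np.linspace(1e6, 5e6, 9)
rng = np.random.default_rng(2)
eta_meas = maxwell_eta(f, 0.6, 30e-9) * (1 + 0.02 * rng.standard_normal(f.size))

# Linearisation: 1/eta = 1/eta0 + (tau^2/eta0) * omega^2,
# so fitting 1/eta against omega^2 recovers both parameters.
w2 = (2 * np.pi * f) ** 2
slope, intercept = np.polyfit(w2, 1 / eta_meas, 1)
eta0_hat = 1 / intercept                 # extrapolated low-frequency viscosity
tau_hat = np.sqrt(slope / intercept)     # fitted relaxation time
```

The paper's variable-parameter model is more general than this fixed-τ Maxwell element; the sketch only shows why a low-frequency value can be extrapolated from high-frequency ultrasound data at all.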
Abstract:
A voltage reference with low sensitivity to temperature and power supply, able to generate flexible reference values (from millivolts to several volts), is proposed. Designed for the AMS 0.35 μm CMOS process, the circuit provides a stable output voltage over the temperature range of −40 to 150 °C. The proposed reference provides a nominal output voltage of 1.358 V with a 3.3 V power supply. © 2011 IEEE.
Abstract:
This paper proposes a new approach for optimal placement of phasor measurement units for fault location on electric power distribution systems, using the Greedy Randomized Adaptive Search Procedure metaheuristic and Monte Carlo simulation. The optimized placement model proposed herein is a general methodology that can be used to place devices to record voltage sag magnitudes for any fault location algorithm that uses voltage information measured at a limited set of nodes along the feeder. An overhead, three-phase, three-wire, 13.8 kV, 134-node, real-life feeder model is used to evaluate the algorithm. Tests show that the results of the fault location methodology were improved by the optimized meter placement obtained with this approach. © 2011 IEEE.
Abstract:
When searching for prospective novel peptides, it is difficult to determine the biological activity of a peptide based only on its sequence. The trial and error approach is generally laborious, expensive and time consuming due to the large number of different experimental setups required to cover a reasonable number of biological assays. To simulate a virtual model for Hymenoptera insects, 166 peptides were selected from the venoms and hemolymphs of wasps, bees and ants and applied to a mathematical model of multivariate analysis, with nine different chemometric components: GRAVY, aliphaticity index, number of disulfide bonds, total residues, net charge, pI value, Boman index, percentage of alpha helix, and flexibility prediction. Principal component analysis (PCA) with non-linear iterative projections by alternating least-squares (NIPALS) algorithm was performed, without including any information about the biological activity of the peptides. This analysis permitted the grouping of peptides in a way that strongly correlated to the biological function of the peptides. Six different groupings were observed, which seemed to correspond to the following groups: chemotactic peptides, mastoparans, tachykinins, kinins, antibiotic peptides, and a group of long peptides with one or two disulfide bonds and with biological activities that are not yet clearly defined. The partial overlap between the mastoparans group and the chemotactic peptides, tachykinins, kinins and antibiotic peptides in the PCA score plot may be used to explain the frequent reports in the literature about the multifunctionality of some of these peptides. The mathematical model used in the present investigation can be used to predict the biological activities of novel peptides in this system, and it may also be easily applied to other biological systems. © 2011 Elsevier Inc.
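The NIPALS extraction loop referred to above can be sketched generically. Random data stand in for the 166-peptide × 9-descriptor matrix; this is a textbook NIPALS implementation, not the authors' chemometric pipeline:

```python
import numpy as np

def nipals_pca(X, n_components, tol=1e-10, max_iter=500):
    """PCA by the NIPALS algorithm: extract one component at a time by
    alternating least-squares between scores and loadings, deflating X
    after each component."""
    X = X - X.mean(axis=0)               # column-centre the data matrix
    scores, loadings = [], []
    for _ in range(n_components):
        t = X[:, 0].copy()               # initialise score with a column of X
        for _ in range(max_iter):
            p = X.T @ t / (t @ t)        # loadings: regress columns of X on t
            p = p / np.linalg.norm(p)
            t_new = X @ p                # scores: project X onto p
            if np.linalg.norm(t_new - t) < tol:
                t = t_new
                break
            t = t_new
        X = X - np.outer(t, p)           # deflate: remove the fitted component
        scores.append(t)
        loadings.append(p)
    return np.array(scores).T, np.array(loadings).T

# Stand-in data: 40 samples x 6 variables with correlated columns
rng = np.random.default_rng(3)
X = rng.standard_normal((40, 6)) @ rng.standard_normal((6, 6))
T, P = nipals_pca(X, 2)
```

For a complete data matrix, NIPALS gives the same components as SVD-based PCA; its practical appeal in chemometrics is that components are extracted one at a time.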
Abstract:
Ionospheric scintillations can seriously jeopardize the reliability of the GNSS signals and consequently can cause significant error or outage on precise positioning applications. The threat is most acute at low latitudes where ionospheric irregularities are more likely to occur resulting in L-band signal scintillations. This paper describes the effort made to model the ionospheric scintillations over the Latin American region in the frame of the CIGALA project funded by the European GNSS Supervisory Authority within the 7th Framework Programme of the European Commission. Comparisons between the low-latitude model of scintillations and observations are here presented and discussed within the project perspectives. © 2011 IEEE.
Specialist tool for monitoring the measurement degradation process of induction active energy meters
Abstract:
This paper presents a methodology and a specialist tool for failure probability analysis of induction-type watt-hour meters, considering the main variables related to their measurement degradation processes. The database of the metering park of a distribution company, Elektro Electricity and Services Co., was used to determine the most relevant variables and to feed data into the software. The model developed to calculate the failure probability of the watt-hour meters was implemented in a tool with a user-friendly interface, written in Delphi. The main features of this tool include: analysis of failure probability by risk range; geographical localization of the meters in the metering park; and automatic sampling of induction-type watt-hour meters, based on a risk classification expert system, in order to obtain information to aid the management of these meters. The main goals of the specialist tool are to track and manage the measurement degradation, maintenance, and replacement processes of induction watt-hour meters. © 2011 IEEE.
Abstract:
The aim of this work is to evaluate the influence of point measurements in images with subpixel accuracy and their contribution to the calibration of digital cameras. The effect of subpixel measurements on the 3D coordinates of check points in object space is also evaluated. For this purpose, an algorithm achieving subpixel accuracy was implemented for the semi-automatic determination of points of interest, based on the Förstner operator. Experiments were conducted with a block of images acquired with the DuncanTech MS3100-CIR multispectral camera. The influence of subpixel measurements on the adjustment by the Least Squares Method (LSM) was evaluated by comparing the estimated standard deviations of the parameters in both situations: manual measurement (pixel accuracy) and subpixel estimation. Additionally, the influence of subpixel measurements on the 3D reconstruction was analyzed. Based on the obtained results, i.e., the quantified reduction of the standard deviation of the Inner Orientation Parameters (IOP) and of the relative error of the 3D reconstruction, it was shown that measurements with subpixel accuracy are relevant for some photogrammetric tasks, mainly those in which metric quality is of great importance, such as camera calibration.