939 results for "Measurement error models"
Abstract:
The purpose of this study was to develop and validate equations to estimate the aboveground phytomass of a 30-year-old plot of Atlantic Forest. In two plots of 100 m², a total of 82 trees were cut down at ground level. For each tree, height and diameter were measured. Leaves and woody material were separated in order to determine their fresh weights in field conditions. Samples of each fraction were oven-dried at 80 °C to constant weight to determine their dry weight. Tree data were divided into two random samples. One sample was used for the development of the regression equations, and the other for validation. The models were developed using simple linear regression analysis, where the dependent variable was the dry mass and the independent variables were height (h), diameter (d) and d²h. The validation was carried out using the Pearson correlation coefficient, the paired Student's t-test and the standard error of estimation. The best equations to estimate aboveground phytomass were: ln DW = -3.068 + 2.522 ln d (r² = 0.91; s_y/x = 0.67) and ln DW = -3.676 + 0.951 ln(d²h) (r² = 0.94; s_y/x = 0.56).
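As a quick illustration of how such fitted allometric equations are applied, here is a minimal sketch in Python; the units (d in cm, h in m, DW in kg) are assumptions for the example, not stated in the abstract, and no log-back-transformation bias correction is applied.

    import math

    def dry_weight_from_d(d):
        # ln DW = -3.068 + 2.522 ln d  (diameter-only model)
        return math.exp(-3.068 + 2.522 * math.log(d))

    def dry_weight_from_d2h(d, h):
        # ln DW = -3.676 + 0.951 ln(d^2 h)  (combined model)
        return math.exp(-3.676 + 0.951 * math.log(d ** 2 * h))

    # Hypothetical tree: d = 10 cm, h = 8 m
    print(dry_weight_from_d(10.0), dry_weight_from_d2h(10.0, 8.0))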
Abstract:
PHENIX has measured the e+e- pair continuum in √s_NN = 200 GeV Au+Au and p+p collisions over a wide range of mass and transverse momenta. The e+e- yield is compared to the expectations from hadronic sources, based on PHENIX measurements. In the intermediate-mass region, between the masses of the φ and the J/ψ meson, the yield is consistent with expectations from correlated cc̄ production, although other mechanisms are not ruled out. In the low-mass region, below the φ, the p+p inclusive mass spectrum is well described by known contributions from light meson decays. In contrast, the Au+Au minimum-bias inclusive mass spectrum in this region shows an enhancement by a factor of 4.7 ± 0.4(stat) ± 1.5(syst) ± 0.9(model). At low mass (m_ee < 0.3 GeV/c²) and high p_T (1 < p_T < 5 GeV/c) an enhanced e+e- pair yield is observed that is consistent with production of virtual direct photons. This excess is used to infer the yield of real direct photons. In central Au+Au collisions, the excess of the direct photon yield over p+p is exponential in p_T, with inverse slope T = 221 ± 19(stat) ± 19(syst) MeV. Hydrodynamical models with initial temperatures ranging from T_init ≈ 300-600 MeV at times of 0.6-0.15 fm/c after the collision are in qualitative agreement with the direct photon data in Au+Au. For low p_T < 1 GeV/c the low-mass region shows a further significant enhancement that increases with centrality and has an inverse slope of T ≈ 100 MeV. Theoretical models underpredict the low-mass, low-p_T enhancement.
Abstract:
We propose and analyze two different Bayesian online algorithms for learning in discrete Hidden Markov Models and compare their performance with that of the well-known Baldi-Chauvin algorithm. Using the Kullback-Leibler divergence as a measure of generalization, we draw learning curves for these algorithms in simplified situations and compare their performances.
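For reference, the generalization measure used here is the Kullback-Leibler divergence; a minimal self-contained sketch for discrete distributions (reducing the HMM comparison to plain categorical distributions is an illustrative simplification, not the paper's exact construction):

    import numpy as np

    def kl_divergence(p, q, eps=1e-12):
        # D_KL(p || q) = sum_i p_i * log(p_i / q_i), for discrete distributions
        p, q = np.asarray(p, float), np.asarray(q, float)
        return float(np.sum(p * np.log((p + eps) / (q + eps))))

    # Example: true vs. learned transition row of a discrete HMM
    print(kl_divergence([0.7, 0.2, 0.1], [0.6, 0.3, 0.1]))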
Abstract:
High-precision measurements of the differential cross sections for π⁰ photoproduction at forward angles for two nuclei, ¹²C and ²⁰⁸Pb, have been performed for incident photon energies of 4.9-5.5 GeV to extract the π⁰ → γγ decay width. The experiment was done at Jefferson Lab using the Hall B photon tagger and a high-resolution multichannel calorimeter. The π⁰ → γγ decay width was extracted by fitting the measured cross sections using recently updated theoretical models for the process. The resulting value for the decay width is Γ(π⁰ → γγ) = 7.82 ± 0.14(stat) ± 0.17(syst) eV. With a 2.8% total uncertainty, this result is a factor of 2.5 more precise than the current Particle Data Group average of this fundamental quantity, and it is consistent with current theoretical predictions.
Abstract:
This paper describes a new and simple method to determine the molecular weight of proteins in dilute solution, with an error smaller than ~10%, by using the experimental data of a single small-angle X-ray scattering (SAXS) curve measured on a relative scale. This procedure does not require the measurement of SAXS intensity on an absolute scale and does not involve a comparison with another SAXS curve determined from a known standard protein. The proposed procedure can be applied to monodisperse systems of proteins in dilute solution, either in monomeric or multimeric state, and it has been successfully tested on SAXS data experimentally determined for proteins with known molecular weights. It is shown here that the molecular weights determined by this procedure deviate from the known values by less than 10% in each case, and the average error for the test set of 21 proteins was 5.3%. Importantly, this method allows for an unambiguous determination of the multimeric state of proteins with known molecular weights.
Abstract:
Here, I investigate the use of Bayesian updating rules to model how social agents change their minds in the case of continuous opinion models. Given another agent's statement about the continuous value of a variable, interesting dynamics emerge when an agent assigns to that value a likelihood that is a mixture of a Gaussian and a uniform distribution. This represents the idea that the other agent might have no idea what is being talked about. The effect of updating only the first moment of the distribution is studied, and this generates results similar to those of the bounded confidence models. When the second moment is also updated, several different opinions always survive in the long run, as agents become more stubborn with time. However, depending on the probability of error and the initial uncertainty, those opinions might be clustered around a central value.
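A minimal sketch of this type of update rule, assuming a Gaussian belief N(x, u) for the agent's opinion and a likelihood mixing a Gaussian around the stated value with a flat "no idea" component; the moment-matching algebra below is the standard mixture-posterior computation, not necessarily the paper's exact equations, and all numerical values are illustrative:

    import math

    def gaussian_pdf(v, mu, var):
        return math.exp(-(v - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

    def update_opinion(x, u, a, s2=0.1, p=0.9, unif_density=0.5):
        # x, u : mean and variance of the agent's current (Gaussian) opinion
        # a    : value stated by the other agent
        # s2   : variance assumed when the other agent is informative
        # p    : prior probability that the other agent knows the subject
        # unif_density : height of the uniform "no idea" component (illustrative)
        # Posterior weight of the informative (Gaussian) component:
        like_g = p * gaussian_pdf(a, x, u + s2)
        like_u = (1 - p) * unif_density
        w = like_g / (like_g + like_u)
        # Conjugate Gaussian-Gaussian update for the informative component:
        post_var = 1 / (1 / u + 1 / s2)
        post_mean = post_var * (x / u + a / s2)
        # Match the two-component mixture posterior to its first two moments:
        new_x = w * post_mean + (1 - w) * x
        new_u = w * (post_var + post_mean ** 2) + (1 - w) * (u + x ** 2) - new_x ** 2
        return new_x, new_u

    print(update_opinion(x=0.0, u=1.0, a=0.8))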
Abstract:
This paper proposes a three-stage offline approach to detect, identify, and correct series and shunt branch parameter errors. In Stage 1 the branches suspected of having parameter errors are identified through an Identification Index (II). The II of a branch is the ratio between the number of measurements adjacent to that branch whose normalized residuals exceed a specified threshold and the total number of measurements adjacent to that branch. Using several measurement snapshots, in Stage 2 the suspicious parameters are estimated, in a simultaneous multiple-state-and-parameter estimation, via an augmented state and parameter estimator that extends the V-θ state vector to include the suspicious parameters. Stage 3 validates the estimates obtained in Stage 2, and is performed via a conventional weighted least squares estimator. Several simulation results (with IEEE bus systems) have demonstrated the reliability of the proposed approach in dealing with single and multiple parameter errors in adjacent and non-adjacent branches, as well as in parallel transmission lines with series compensation. Finally, the proposed approach is confirmed by tests performed on the Hydro-Quebec TransEnergie network.
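The Stage 1 index translates almost directly into code; a small sketch (the threshold value and the data layout are illustrative assumptions):

    import numpy as np

    def identification_index(residuals_norm, adjacency, threshold=3.0):
        # II per branch: fraction of adjacent measurements whose normalized
        # residual exceeds the threshold (3.0 is an illustrative cutoff).
        ii = {}
        for branch, meas_idx in adjacency.items():
            r = np.abs(residuals_norm[meas_idx])
            ii[branch] = np.count_nonzero(r > threshold) / len(meas_idx)
        return ii

    # Toy example: branch "1-2" has 4 adjacent measurements, two with large residuals
    residuals = np.array([0.5, 4.1, 3.5, 1.0, 0.2])
    print(identification_index(residuals, {"1-2": [0, 1, 2, 3]}))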
Abstract:
In this paper, a novel wire-mesh sensor based on permittivity (capacitance) measurements is applied to generate images of the phase fraction distribution and investigate the flow of viscous oil and water in a horizontal pipe. Phase fraction values were calculated from the raw data delivered by the wire-mesh sensor using different mixture permittivity models. Furthermore, these data were validated against quick-closing valve measurements. The investigated flow patterns were dispersion of oil in water (Do/w) and dispersion of oil in water combined with water in oil (Do/w&w/o). The Maxwell-Garnett mixing model is better suited for the Do/w pattern and the logarithmic model for the Do/w&w/o pattern. Images of the time-averaged cross-sectional oil fraction distribution, along with axial slice images, were used to visualize and disclose some details of the flow.
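The two mixing rules named here have standard closed forms; a sketch of how a phase fraction could be recovered from a measured effective permittivity (these are the textbook Maxwell-Garnett formula for spherical inclusions and the logarithmic Lichtenecker rule; the paper's exact implementation may differ, and the numbers are toy values):

    import math

    def frac_maxwell_garnett(eps_eff, eps_c, eps_d):
        # Dispersed-phase fraction from the Maxwell-Garnett model (spherical
        # inclusions); eps_c = continuous phase, eps_d = dispersed phase.
        return ((eps_d + 2 * eps_c) * (eps_eff - eps_c)) / \
               ((eps_d - eps_c) * (eps_eff + 2 * eps_c))

    def frac_logarithmic(eps_eff, eps_c, eps_d):
        # Dispersed-phase fraction from the logarithmic (Lichtenecker) rule:
        # ln eps_eff = f * ln eps_d + (1 - f) * ln eps_c
        return (math.log(eps_eff) - math.log(eps_c)) / (math.log(eps_d) - math.log(eps_c))

    # Toy values: water ~80, oil ~2.5, measured effective permittivity 40
    print(frac_maxwell_garnett(40.0, 80.0, 2.5), frac_logarithmic(40.0, 80.0, 2.5))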
Abstract:
Recently, semi-empirical models to estimate the flow boiling heat transfer coefficient, saturated CHF and pressure drop in micro-scale channels have been proposed. Most of these models were developed based on elongated bubbles and annular flows, given that these flow patterns are predominant in smaller channels. In these models the liquid film thickness plays an important role, which makes its accurate measurement a key point in validating them. On the other hand, several techniques have been successfully applied to measure liquid film thickness during condensation and evaporation under macro-scale conditions. However, although this subject has been targeted by several leading laboratories around the world, there appears to be no conclusive result describing a successful technique capable of measuring the dynamic liquid film thickness during evaporation inside micro-scale round channels. This work presents a comprehensive literature review of the methods used to measure liquid film thickness in macro- and micro-scale systems. The methods are described and the main difficulties related to their use in micro-scale systems are identified. Based on this discussion, the most promising methods to measure dynamic liquid film thickness in micro-scale channels are identified.
Abstract:
Void fraction sensors are important instruments not only for monitoring two-phase flow, but also for furnishing an important parameter for obtaining the flow pattern map and the two-phase flow heat transfer coefficient. This work presents the experimental results obtained with the analysis of two axially spaced multiple-electrode impedance sensors tested in an upward air-water two-phase flow in a vertical tube for void fraction measurements. An electronic circuit was developed for signal generation and post-treatment of each sensor signal. By phase-shifting the electrode supply signals, it was possible to establish a rotating electric field sweeping across the test section. The fundamental principle of using a multiple-electrode configuration is to reduce the signal's sensitivity to non-uniform cross-sectional void fraction distributions. Static calibration curves were obtained for both sensors, and dynamic signal analyses for bubbly, slug, and turbulent churn flows were carried out. Flow parameters such as Taylor bubble velocity and length were obtained by using cross-correlation techniques. As an application of the tested void fraction sensors, vertical flow pattern identification could be established by using the probability density function technique for void fractions ranging from 0% to nearly 70%.
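The cross-correlation step mentioned here is simple to reproduce; a minimal sketch with two axially spaced signals (the sampling period and sensor spacing are illustrative values, not the rig's):

    import numpy as np

    def transit_velocity(sig_up, sig_down, dt, spacing):
        # Estimate the structure (e.g., Taylor bubble) velocity from two axially
        # spaced void fraction signals: find the lag maximizing the cross-correlation
        # and divide the sensor spacing by the corresponding transit time.
        a = sig_up - sig_up.mean()
        b = sig_down - sig_down.mean()
        xcorr = np.correlate(b, a, mode="full")
        lag = np.argmax(xcorr) - (len(a) - 1)   # samples by which downstream lags
        return spacing / (lag * dt)

    # Synthetic check: the downstream signal is the upstream one delayed 25 samples
    x = np.random.default_rng(0).random(2000)
    y = np.roll(x, 25)
    print(transit_velocity(x, y, dt=1e-3, spacing=0.05))   # ~0.05 m / 0.025 s = 2 m/s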
Abstract:
Nanomaterials have triggered excitement in both fundamental science and technological applications in several fields. However, the same characteristic high interface area that is responsible for their unique properties causes unconventional instability, often leading to local collapse during application. Thermodynamically, this can be attributed to an increased contribution of the interface to the free energy, activating phenomena such as sintering and grain growth. The lack of reliable interface energy data has restricted the development of conceptual models that would allow the control of nanoparticle stability on a thermodynamic basis. Here we introduce a novel and accessible methodology to measure the interface energy of nanoparticles, exploiting the heat released during sintering to establish a quantitative relation between the solid-solid and solid-vapor interface energies. We applied this method to MgO and ZnO nanoparticles and determined that the ratio between the solid-solid and solid-vapor interface energies is 1.1 for MgO and 0.7 for ZnO. We then discuss how this ratio is responsible for a thermodynamically metastable state that may prevent collapse of nanoparticles and, therefore, may be used as a tool to design long-term stable nanoparticles.
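One schematic way to read the calorimetric idea, assuming the heat released during sintering comes from eliminating solid-vapor area at the cost of creating solid-solid area; this balance is an illustrative reconstruction, not the paper's actual derivation, and all numbers are made up:

    def gamma_ss_over_sv(q, dA_sv, dA_ss, gamma_sv):
        # Assumed energy balance: q = gamma_sv * dA_sv - gamma_ss * dA_ss,
        # with q = heat released, dA_sv = solid-vapor area eliminated,
        # dA_ss = solid-solid area created, all per gram of powder.
        gamma_ss = (gamma_sv * dA_sv - q) / dA_ss
        return gamma_ss / gamma_sv

    # Hypothetical inputs: q in J/g, areas in m^2/g, gamma_sv in J/m^2
    print(gamma_ss_over_sv(q=30.0, dA_sv=50.0, dA_ss=40.0, gamma_sv=1.2))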
Abstract:
Distribution of timing signals is an essential factor in the development of digital systems for telecommunication networks, integrated circuits and manufacturing automation. Originally, this distribution was implemented using a master-slave architecture, with a precise master clock generator sending signals to phase-locked loops (PLLs) working as slave oscillators. Nowadays, wireless networks with dynamical connectivity and the increase in size and operation frequency of integrated circuits suggest that the distribution of clock signals could be more efficient if mutually connected architectures were used. Here, mutually connected PLL networks are studied and conditions for the existence of synchronous states are analytically derived, depending on individual node parameters and network connectivity, considering that the nodes are nonlinear oscillators with nonlinear coupling conditions. An expression for the network synchronisation frequency is obtained. The lock-in range and the transmission error bounds are analysed, providing hints for the design of this kind of clock distribution system.
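To see what a synchronous state of a mutually connected network looks like numerically, here is a crude first-order phase-model simulation; this Kuramoto-style simplification is an assumption for illustration only, since the paper analyzes nonlinear PLL nodes with nonlinear coupling:

    import numpy as np

    def simulate_pll_network(omega, K, A, t_end=50.0, dt=1e-3):
        # dtheta_i/dt = omega_i + (K / deg_i) * sum_j A_ij * sin(theta_j - theta_i)
        theta = np.zeros(len(omega))
        deg = A.sum(axis=1)
        for _ in range(int(t_end / dt)):
            coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
            theta = theta + dt * (omega + K * coupling / deg)
        return theta

    # Three fully connected nodes with slightly detuned free-running frequencies
    A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
    omega = 2 * np.pi * np.array([1.00, 1.02, 0.98])
    theta = simulate_pll_network(omega, K=5.0, A=A)
    print(np.diff(theta))   # near-constant phase offsets indicate a synchronous state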
Abstract:
Although theoretical models have already been proposed, experimental data are still lacking to quantify the influence of grain size on the coercivity of electrical steels. Some authors consider a linear inverse proportionality, while others suggest a square-root inverse proportionality. Results also differ with regard to the slope of the coercive field versus reciprocal grain size relation for a given material. This paper discusses two aspects of the problem: the maximum induction used for determining the coercive force and the possible effect of lurking variables such as the breadth of the grain size distribution and the crystallographic texture. Electrical steel sheets containing 0.7% Si, 0.3% Al and 24 ppm C were cold-rolled and annealed in order to produce different grain sizes (ranging from 20 to 150 µm). The coercive field was measured along the rolling direction and found to depend linearly on the reciprocal of grain size, with a slope of approximately 0.9 (A/m)·mm at 1.0 T induction. A general relation for the coercive field as a function of grain size and maximum induction was established, yielding an average absolute error below 4%. Through measurement of B_50 and image analysis of micrographs, the effects of crystallographic texture and grain size distribution breadth were qualitatively discussed.
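The linear dependence on reciprocal grain size is a one-line least-squares fit; a sketch on synthetic points (the intercept, noise level and grain sizes are made up, only the 0.9 (A/m)·mm slope echoes the abstract):

    import numpy as np

    # Illustrative fit of H_c = a + b * (1/d) on synthetic data, not the paper's
    d = np.array([20, 40, 60, 100, 150]) * 1e-3            # grain size [mm]
    hc = 15.0 + 0.9 / d + np.random.default_rng(1).normal(0, 1, d.size)  # [A/m]

    b, a = np.polyfit(1.0 / d, hc, 1)   # slope [(A/m)*mm], intercept [A/m]
    print(f"slope = {b:.2f} (A/m)*mm, intercept = {a:.1f} A/m")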
Abstract:
In this paper, we compare three residuals to assess departures from the error assumptions as well as to detect outlying observations in log-Burr XII regression models with censored observations. These residuals can also be used for the log-logistic regression model, which is a special case of the log-Burr XII regression model. For different parameter settings, sample sizes and censoring percentages, various simulation studies are performed and the empirical distribution of each residual is displayed and compared with the standard normal distribution. These studies suggest that the residual analysis usually performed in normal linear regression models can be straightforwardly extended to the modified martingale-type residual in log-Burr XII regression models with censored data.
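For context, the martingale-type residual referred to here is conventionally built from the fitted survival function, and its "modified" (deviance-transformed) version is approximately standard normal when the model fits; a generic sketch of that standard construction (the Ŝ(t_i) values would come from the fitted log-Burr XII model, which is not reproduced here):

    import numpy as np

    def modified_martingale_residual(delta, surv_prob):
        # Martingale residual: r_M = delta + log S_hat(t), with delta = 1 for an
        # observed failure and 0 for a censored time; then the deviance-type
        # ("modified") transform, approximately N(0, 1) under a good fit.
        rm = delta + np.log(surv_prob)
        term = delta * np.log(np.clip(delta - rm, 1e-12, None))
        return np.sign(rm) * np.sqrt(-2.0 * (rm + term))

    # Toy values: S_hat(t_i) at each observed time, from the fitted model
    delta = np.array([1, 0, 1, 0])
    s_hat = np.array([0.40, 0.80, 0.15, 0.60])
    print(modified_martingale_residual(delta, s_hat))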
Abstract:
The zero-inflated negative binomial model is used to account for overdispersion detected in data that are initially analyzed under the zero-inflated Poisson model. A frequentist analysis, a jackknife estimator and a non-parametric bootstrap for parameter estimation of zero-inflated negative binomial regression models are considered. In addition, an EM-type algorithm is developed for performing maximum likelihood estimation. Then, the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes and some ways to perform global influence analysis are derived. In order to study departures from the error assumption as well as the presence of outliers, residual analysis based on the standardized Pearson residuals is discussed. The relevance of the approach is illustrated with a real data set, where it is shown that zero-inflated negative binomial regression models seem to fit the data better than the Poisson counterpart.
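A minimal sketch of fitting such a model and forming Pearson residuals, using statsmodels' ZeroInflatedNegativeBinomialP on synthetic data; the moment formulas are the standard ZINB-2 expressions, and the parameter layout (inflation parameters first, dispersion alpha last) is an assumption about the library's ordering rather than anything from the paper:

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

    # Synthetic zero-inflated, overdispersed counts (illustrative, not the paper's data)
    rng = np.random.default_rng(0)
    n = 500
    x = rng.normal(size=n)
    mu = np.exp(0.5 + 0.8 * x)
    y = rng.negative_binomial(2.0, 2.0 / (2.0 + mu))   # NB2 with alpha = 0.5
    y[rng.random(n) < 0.25] = 0                        # inflate the zeros

    X = sm.add_constant(x)
    res = ZeroInflatedNegativeBinomialP(y, X, exog_infl=np.ones((n, 1)), p=2).fit(disp=0)
    print(res.params)

    # Standardized Pearson residuals from the fitted ZINB-2 moments:
    #   E[Y] = (1 - pi) * mu,   Var[Y] = (1 - pi) * mu * (1 + mu * (pi + alpha))
    pi_hat = 1.0 / (1.0 + np.exp(-res.params[0]))      # logit-linked inflation intercept
    mu_hat = np.exp(X @ res.params[1:3])
    alpha_hat = res.params[-1]
    mean_y = (1 - pi_hat) * mu_hat
    pearson = (y - mean_y) / np.sqrt(mean_y * (1 + mu_hat * (pi_hat + alpha_hat)))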