152 results for Measurement Error Estimation
Abstract:
In this work we investigate knowledge acquisition as performed by multiple agents interacting as they infer, in the presence of observation errors, respective models of a complex system. We focus on the specific case in which, at each time step, each agent takes into account its current observation as well as the average of the models of its neighbors. The agents are connected by an interaction network of Erdos-Renyi or Barabasi-Albert type. First, we investigate situations in which one of the agents has a different probability of observation error (higher or lower). It is shown that the influence of this special agent over the quality of the models inferred by the rest of the network can be substantial, varying linearly with the degree of the agent with the different estimation error. When the degree of this agent is taken as a fitness parameter, the effect of the different estimation error is even more pronounced, becoming superlinear. To complement our analysis, we provide the analytical solution of the overall performance of the system. We also investigate the knowledge acquisition dynamics when the agents are grouped into communities. We verify that the inclusion of edges between agents (within a community) having a higher probability of observation error degrades the quality of the estimates of the agents in the other communities.
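The averaging dynamics described above can be sketched with a toy simulation. Everything numeric below (network size, edge probability, error levels, the 50/50 blending weight, and which agent gets the lower error) is an illustrative assumption, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_steps = 50, 200
true_value = 1.0                 # parameter of the observed system (invented)
sigma = np.full(n_agents, 0.5)   # per-agent observation-error standard deviation
sigma[0] = 0.05                  # one "special" agent with a lower error

# Erdos-Renyi interaction network (edge probability p is an assumption)
p = 0.1
A = (rng.random((n_agents, n_agents)) < p).astype(float)
A = np.triu(A, 1)
A = A + A.T

models = np.zeros(n_agents)
deg = A.sum(axis=1)
for _ in range(n_steps):
    obs = true_value + rng.normal(0.0, sigma)
    # isolated agents simply keep their own model as the "neighbor average"
    neigh_avg = np.where(deg > 0, A @ models / np.maximum(deg, 1), models)
    models = 0.5 * (obs + neigh_avg)  # blend own observation with neighbors' average

err = np.abs(models - true_value)
print(err.mean())
```

Varying `sigma[0]` and the degree of agent 0 in such a sketch is the kind of experiment the abstract describes.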
Abstract:
We show that measurements of finite duration performed on an open two-state system can protect the initial state from a phase-noisy environment, provided the measured observable does not commute with the perturbing interaction. When the measured observable commutes with the environmental interaction, the finite-duration measurement accelerates the rate of decoherence induced by the phase noise. For the description of the measurement of an observable that is incompatible with the interaction between system and environment, we have found an approximate analytical expression, valid at zero temperature and weak coupling with the measuring device. We have tested the validity of the analytical predictions against an exact numerical approach, based on the superoperator-splitting method, which confirms the protection of the initial state of the system. When the coupling between the system and the measuring apparatus increases beyond the range of validity of the analytical approximation, the initial state is still protected by the finite-time measurement, in agreement with the exact numerical calculations.
Abstract:
The contribution of the detector dynamics to the weak measurement is analyzed. According to the usual theory [Y. Aharonov, D. Z. Albert, and L. Vaidman, Phys. Rev. Lett. 60, 1351 (1988)] the outcome of a weak measurement with preselection and postselection can be expressed as the real part of a complex number: the weak value. By accounting for the Hamiltonian evolution of the detector, here we find that there is a contribution proportional to the imaginary part of the weak value to the outcome of the weak measurement. This is due to the coherence of the probe being essential for the concept of complex weak value to be meaningful. As a particular example, we consider the measurement of a spin component and find that the contribution of the imaginary part of the weak value is sizable.
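The weak value itself is straightforward to evaluate: A_w = ⟨φ|A|ψ⟩/⟨φ|ψ⟩ for pre-selected |ψ⟩ and post-selected |φ⟩. The sketch below computes it for an illustrative (assumed, not the paper's) choice of spin states that yields a purely imaginary weak value of σx.

```python
import numpy as np

# Pauli x operator; pre- and post-selected states are invented for illustration
sx = np.array([[0, 1], [1, 0]], dtype=complex)
psi_i = np.array([1, 0], dtype=complex)                        # pre-selection |0>
theta = 0.1
psi_f = np.array([np.cos(theta / 2), 1j * np.sin(theta / 2)])  # post-selection

# Weak value: A_w = <phi| sx |psi> / <phi|psi>
A_w = (psi_f.conj() @ sx @ psi_i) / (psi_f.conj() @ psi_i)
print(A_w)
```

For this choice A_w = -i tan(θ/2): the real part vanishes, so the detector-dynamics contribution discussed in the abstract, proportional to Im(A_w), is the entire signal.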
Abstract:
We describe the measurement of the depth of maximum, X_max, of the longitudinal development of air showers induced by cosmic rays. Almost 4000 events above 10^18 eV observed by the fluorescence detector of the Pierre Auger Observatory in coincidence with at least one surface detector station are selected for the analysis. The average shower maximum was found to evolve with energy at a rate of 106 (+35/-21) g/cm^2/decade below 10^(18.24 +/- 0.05) eV, and 24 +/- 3 g/cm^2/decade above this energy. The measured shower-to-shower fluctuations decrease from about 55 to 26 g/cm^2. The interpretation of these results in terms of the cosmic ray mass composition is briefly discussed.
Abstract:
The reverse engineering problem addressed in the present research consists of estimating the thicknesses and the optical constants of two thin films deposited on a transparent substrate using only transmittance data through the whole stack. No functional dispersion relation assumptions are made on the complex refractive index. Instead, minimal physical constraints are employed, as in previous works by some of the authors in which only one film was considered in the retrieval algorithm. To our knowledge this is the first report on the retrieval of the optical constants and the thicknesses of multiple-film structures using only transmittance data that does not make use of dispersion relations. The same methodology may be used if the available data correspond to normal reflectance. The software used in this work is freely available through the PUMA Project web page (http://www.ime.usp.br/~egbirgin/puma/). (C) 2008 Optical Society of America
Abstract:
We consider the problem of interaction neighborhood estimation from the partial observation of a finite number of realizations of a random field. We introduce a model selection rule to choose estimators of conditional probabilities among natural candidates. Our main result is an oracle inequality satisfied by the resulting estimator. We then use this selection rule in a two-step procedure to evaluate the interaction neighborhoods: the selection rule selects a small prior set of possible interacting points, and a cutting step removes the irrelevant points from this prior set. We also prove that Ising models satisfy the assumptions of the main theorems, without restrictions on the temperature, on the structure of the interaction graph or on the range of the interactions, which provides a large class of applications for our results. We give a computationally efficient procedure for these models and finally show the practical efficiency of our approach in a simulation study.
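A toy version of the two-step select-and-cut idea can be sketched as follows. The field below is invented (one binary site whose conditional law depends on exactly one of three candidate neighbors) and the influence score and cutoff are simplifications, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta = 20000, 1.0

# Toy random field: sites 1..3 are i.i.d. +/-1 spins and only site 1
# actually interacts with site 0 (a deliberately simple ground truth).
x = rng.choice([-1, 1], size=(n, 4))
p1 = 1.0 / (1.0 + np.exp(-2.0 * beta * x[:, 1]))  # P(x0 = +1 | x1)
x[:, 0] = np.where(rng.random(n) < p1, 1, -1)

# Step 1: score each candidate by how much the empirical conditional
# frequency of x0 changes with it.  Step 2: cut the low-score candidates.
scores = {}
for j in (1, 2, 3):
    p_plus = (x[x[:, j] == 1, 0] == 1).mean()
    p_minus = (x[x[:, j] == -1, 0] == 1).mean()
    scores[j] = abs(p_plus - p_minus)

neighborhood = {j for j, s in scores.items() if s > 0.2}  # cutting step
print(neighborhood)
```

With enough samples the true neighbor stands out clearly while the irrelevant sites score near zero, which is the behavior the cutting step exploits.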
Abstract:
The effect of forest-to-pasture conversion on soil carbon stocks has been intensively discussed, but few studies focus on how this land-use change affects carbon (C) distribution across soil fractions in the Amazon basin. We investigated this in the top 20 cm of soil along a chronosequence of sites from native forest to three successively older pastures. We performed a physicochemical fractionation of bulk soil samples to better understand the mechanisms by which soil C is stabilized and to evaluate the contribution of each C fraction to total soil C. Additionally, we used a two-pool model to estimate the mean residence time (MRT) of the slow- and active-pool C in each fraction. Soil C increased with conversion from forest to pasture in the particulate organic matter (>250 µm), microaggregate (53-250 µm), and d-clay (<2 µm) fractions. The microaggregate fraction held the highest soil C content after the conversion from forest to pasture. The C content of the d-silt fraction decreased with time since conversion to pasture. Forest-derived C remained in all fractions, with the highest concentration in the finest fractions and the largest proportion of forest-derived soil C associated with clay minerals. Results from this work indicate that microaggregate formation is sensitive to changes in management and might serve as an indicator for management-induced soil carbon changes, and that the soil C changes in the fractions depend on soil texture.
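As a simplified illustration of how residence times are inferred from fraction data, the sketch below combines a standard C3/C4 isotope mixing calculation with a single-pool exponential decay (a simplification of the two-pool model). Every number in it is invented for illustration, not taken from the study.

```python
import numpy as np

# Typical d13C end-members (permil): C3 forest vs C4 pasture grass (assumed values)
delta_forest, delta_pasture_grass = -27.0, -13.0
delta_sample = -20.0  # hypothetical measurement in a pasture soil fraction

# Two-source mixing: fraction of the carbon that is still forest-derived
f_forest = (delta_sample - delta_pasture_grass) / (delta_forest - delta_pasture_grass)

# Single-exponential decay of the forest-derived pool, C(t) = C0 * exp(-t/MRT),
# solved for the mean residence time MRT
t = 20.0  # years since conversion (assumed)
mrt = -t / np.log(f_forest)
print(f_forest, mrt)
```

Here half the carbon is still forest-derived after 20 years, giving an MRT of about 29 years; a two-pool fit proceeds analogously with a sum of two exponentials.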
Abstract:
Soil bulk density values are needed to convert organic carbon content to mass of organic carbon per unit area. However, field sampling and measurement of soil bulk density are labour-intensive, costly and tedious. Near-infrared reflectance spectroscopy (NIRS) is a physically non-destructive, rapid, reproducible and low-cost method that characterizes materials according to their reflectance in the near-infrared spectral region. The aim of this paper was to investigate the ability of NIRS to predict soil bulk density and to compare its performance with published pedotransfer functions. The study was carried out on a dataset of 1184 soil samples originating from a reforestation area in the Brazilian Amazon basin, and conventional soil bulk density values were obtained with metallic "core cylinders". The results indicate that the modified partial least squares regression used on spectral data is an alternative to the published pedotransfer functions tested in this study for soil bulk density prediction. The NIRS method presented the closest-to-zero accuracy error (-0.002 g cm^-3) and the lowest prediction error (0.13 g cm^-3), and the coefficient of variation of the validation sets ranged from 8.1 to 8.9% of the mean reference values. Nevertheless, further research is required to assess the limits and specificities of the NIRS method, but it may have advantages for soil bulk density prediction, especially in environments such as the Amazon forest.
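The chemometric core of such a study, partial least squares regression of a soil property on spectra, can be sketched with a minimal NIPALS PLS1 on synthetic data. The "spectra", target, sample counts and component number below are all assumptions; the paper's modified PLS differs in its details.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for NIR spectra: 200 samples x 50 wavelengths (invented)
n, m = 200, 50
X = rng.normal(size=(n, m))
true_b = np.zeros(m)
true_b[[5, 20, 35]] = [0.4, -0.3, 0.2]
y = X @ true_b + rng.normal(scale=0.05, size=n)  # "bulk density" target

def pls1_fit(X, y, k):
    """Minimal NIPALS PLS1: returns regression coefficients and training means."""
    xm, ym = X.mean(0), y.mean()
    Xc, yc = X - xm, y - ym
    W, P, q = [], [], []
    for _ in range(k):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)
        t = Xc @ w
        tt = t @ t
        p = Xc.T @ t / tt
        qk = yc @ t / tt
        Xc = Xc - np.outer(t, p)   # deflate X
        yc = yc - qk * t           # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)
    return B, xm, ym

B, xm, ym = pls1_fit(X[:150], y[:150], k=3)
pred = (X[150:] - xm) @ B + ym
rmsep = np.sqrt(np.mean((pred - y[150:]) ** 2))
print(rmsep)
```

The held-out prediction error lands near the injected noise level, which is the behavior a well-calibrated NIRS model aims for.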
Abstract:
Soils are an important component of the biogeochemical carbon cycle, storing about four times more carbon than plant biomass and nearly three times more than the atmosphere. Moreover, the carbon content is directly related to the water retention capacity and fertility of the soil, among other properties. Thus, soil carbon quantification under field conditions is an important challenge related to the carbon cycle and global climatic changes. Nowadays, Laser Induced Breakdown Spectroscopy (LIBS) can be used for qualitative elemental analyses without previous treatment of samples, and the results are obtained quickly. New optical technologies have made portable LIBS systems possible, and the great expectation now is the development of methods that enable quantitative measurements with LIBS. The goal of this work is to calibrate a portable LIBS system to carry out quantitative measurements of carbon in whole tropical soil samples. For this, six samples from the Brazilian Cerrado region (Argisol) were used. Tropical soils have large amounts of iron in their composition, so the carbon line at 247.86 nm suffers strong interference from this element (iron lines at 247.86 and 247.95 nm). For this reason, the carbon line at 193.03 nm was used in this work. Using statistical methods such as simple linear regression, multivariate linear regression and cross-validation, it was possible to obtain correlation coefficients higher than 0.91. These results show the great potential of portable LIBS systems for quantitative carbon measurements in tropical soils. (C) 2008 Elsevier B.V. All rights reserved.
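A univariate LIBS calibration with leave-one-out cross-validation, as mentioned above, can be sketched as follows. The line intensities and reference carbon contents are invented for illustration, not the paper's six Cerrado samples.

```python
import numpy as np

# Hypothetical calibration set: C-line intensity at 193.03 nm vs reference C content
intensity = np.array([0.12, 0.20, 0.27, 0.35, 0.41, 0.50])  # arbitrary units
carbon = np.array([0.55, 0.90, 1.20, 1.60, 1.85, 2.30])     # % C (invented)

# Leave-one-out cross-validation of a univariate calibration line
preds = []
for i in range(len(carbon)):
    mask = np.arange(len(carbon)) != i
    slope, intercept = np.polyfit(intensity[mask], carbon[mask], 1)
    preds.append(slope * intensity[i] + intercept)
preds = np.array(preds)

# Correlation between cross-validated predictions and reference values
r = np.corrcoef(preds, carbon)[0, 1]
print(r)
```

With only six samples, leave-one-out is the natural cross-validation choice; the correlation coefficient of the held-out predictions is the figure of merit the abstract reports.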
Abstract:
Objective. - The aim of this study was to propose a new method that allows the estimation of critical power (CP) from non-exhaustive tests using ratings of perceived exertion (RPE). Methods. - Twenty-two subjects underwent two practice trials for ergometer and Borg 15-point scale familiarization, and adaptation to severe exhaustive exercise. Subsequently, four exercise bouts were performed on different days for the estimation of CP and anaerobic work capacity (AWC) by the linear work-time equation, and CP(15), CP(17), AWC(15) and AWC(17) were estimated using the work and time to attainment of RPE15 and RPE17 on the Borg 15-point scale. Results. - CP, CP(15) and CP(17) (170-177 W) were not significantly different (P>0.05). However, AWC, AWC(15) and AWC(17) were all different from each other. The correlations of CP(15) and CP(17) with CP were strong (R=0.871 and 0.911, respectively), but AWC(15) and AWC(17) were not significantly correlated with AWC. Conclusion. - Sub-maximal RPE responses can be used for the estimation of CP from non-exhaustive exercise protocols. (C) 2009 Elsevier Masson SAS. All rights reserved.
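The linear work-time equation behind the CP and AWC estimates is W = AWC + CP * t, so CP is the slope and AWC the intercept of a regression of total work on time to exhaustion. A sketch with invented bout data (not the study's measurements):

```python
import numpy as np

# Hypothetical data from four constant-power bouts
t_lim = np.array([120.0, 240.0, 420.0, 600.0])  # time to exhaustion (s)
work = np.array([42e3, 63e3, 95e3, 126e3])      # total work performed (J)

# Linear work-time model: work = AWC + CP * t  ->  slope = CP, intercept = AWC
cp, awc = np.polyfit(t_lim, work, 1)
print(cp, awc)
```

For the RPE-based variants, the same regression is applied with the work and time accumulated at the moment RPE15 or RPE17 is reached instead of at exhaustion.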
Abstract:
This letter shows that a matrix obtained via triangular factorization of the measurement Jacobian can be used for redundancy and observability analysis of metering systems composed of PMU measurements and conventional measurements (power and voltage magnitude measurements). Observability analysis and restoration are carried out during the triangular factorization of the Jacobian matrix, and the redundancy analysis is performed by exploring the structure of the resulting matrix. As a consequence, this matrix can be used for metering system planning considering both conventional and PMU measurements. These features are outlined and illustrated by numerical examples.
Abstract:
This letter presents some notes on the use of the Gram matrix in observability analysis. This matrix is constructed by treating the rows of the measurement Jacobian matrix as vectors, and it can be employed in observability analysis and restoration methods. The determination of nonredundant pseudo-measurements (normally injection pseudo-measurements) for merging observable islands into a single observable system is carried out by analyzing the pivots of the Gram matrix. The Gram matrix can also be used to verify local redundancy, which is important in measurement system planning. Some numerical examples are used to illustrate these features. Other features of the Gram matrix are under study.
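A minimal numeric illustration of the Gram-matrix idea, using a made-up three-bus DC state-estimation model rather than the letter's examples: the network is observable when the Gram matrix of the Jacobian rows has full rank, and a redundant measurement adds a row without increasing that rank.

```python
import numpy as np

# Toy DC measurement Jacobian (3 buses, bus 1 as angle reference; all values invented)
# States: theta_2, theta_3.  Rows: flow 1-2, flow 2-3, injection at bus 3.
H = np.array([
    [-1.0,  0.0],  # P12 = b * (theta_1 - theta_2)
    [ 1.0, -1.0],  # P23 = b * (theta_2 - theta_3)
    [-1.0,  2.0],  # P3: injection at bus 3 (unit line susceptances assumed)
])

G = H @ H.T                      # Gram matrix of the measurement rows
rank = np.linalg.matrix_rank(G)  # equals the rank of H
observable = rank == H.shape[1]  # full column rank -> observable
print(observable)

# A redundant measurement: the extra row is a linear combination of existing rows,
# so the Gram matrix rank does not increase (it would surface as a zero pivot).
H_red = np.vstack([H, H[0] + 2 * H[1]])
rank_red = np.linalg.matrix_rank(H_red @ H_red.T)
print(rank_red)
```

In the letter's setting the pivots of a factorization of G play the role that `matrix_rank` plays in this sketch, additionally identifying which pseudo-measurements are nonredundant.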
Abstract:
The crossflow filtration process differs from conventional filtration in that the circulating flow runs tangentially to the filtration surface. The conventional mathematical models used to represent the process have some limitations regarding the identification and generalization of the system behaviour. In this paper, a system based on artificial neural networks is developed to overcome the problems usually found in the conventional mathematical models. More specifically, the developed system uses an artificial neural network that simulates the behaviour of the crossflow filtration process in a robust way. Imprecisions and uncertainties associated with the measurements made on the system are automatically incorporated in the neural approach. Simulation results are presented to demonstrate the validity of the proposed approach. (C) 2007 Elsevier B.V. All rights reserved.
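A neural model of a process like this can be sketched as a small feedforward network fitted to noisy measurements; the data, architecture and training settings below are all invented for illustration and do not reproduce the paper's network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "crossflow filtration" data: inputs = (pressure, velocity), target =
# permeate flux, with noise mimicking measurement uncertainty (all invented)
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = 0.8 * X[:, 0] + 0.5 * np.sin(3.0 * X[:, 1]) + rng.normal(0.0, 0.05, 200)

# One-hidden-layer network trained by plain full-batch gradient descent
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, 16);      b2 = 0.0

losses, lr = [], 0.05
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)         # hidden layer
    pred = h @ W2 + b2               # linear output
    err = pred - y
    losses.append(np.mean(err ** 2))
    # Backpropagation of the mean-squared-error gradient
    gW2 = h.T @ err / len(y); gb2 = err.mean()
    dh = np.outer(err, W2) * (1.0 - h ** 2)
    gW1 = X.T @ dh / len(y); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

print(losses[0], losses[-1])
```

Because the noise is baked into the training targets, the fitted network absorbs measurement uncertainty in the way the abstract describes, rather than requiring an explicit noise model.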
Abstract:
In this paper, a novel wire-mesh sensor based on permittivity (capacitance) measurements is applied to generate images of the phase fraction distribution and to investigate the flow of viscous oil and water in a horizontal pipe. Phase fraction values were calculated from the raw data delivered by the wire-mesh sensor using different mixture permittivity models, and these data were validated against quick-closing valve measurements. The investigated flow patterns were dispersion of oil in water (Do/w) and dispersion of oil in water combined with water in oil (Do/w&w/o). The Maxwell-Garnett mixing model is better suited for the Do/w pattern and the logarithmic model for the Do/w&w/o flow pattern. Images of the time-averaged cross-sectional oil fraction distribution, along with axial slice images, were used to visualize and disclose some details of the flow.
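The Maxwell-Garnett step, converting a measured mixture permittivity into a phase fraction, can be sketched as follows; the permittivity values and the oil fraction are assumptions for illustration, and the grid inversion is a simplification of whatever root-finding the sensor software uses.

```python
import numpy as np

def maxwell_garnett(eps_cont, eps_disp, f):
    """Effective permittivity of a dispersed phase (volume fraction f)
    in a continuous phase, per the Maxwell-Garnett mixing rule."""
    num = eps_disp + 2.0 * eps_cont + 2.0 * f * (eps_disp - eps_cont)
    den = eps_disp + 2.0 * eps_cont - f * (eps_disp - eps_cont)
    return eps_cont * num / den

# Typical relative permittivities (assumed): water ~80 continuous, oil ~2.2 dispersed
eps_w, eps_o = 80.0, 2.2

# Forward-simulate a "measured" permittivity, then invert the model on a grid
# to recover the local oil fraction, as a wire-mesh pixel conversion would
f_true = 0.3
eps_meas = maxwell_garnett(eps_w, eps_o, f_true)
grid = np.linspace(0.0, 1.0, 10001)
f_est = grid[np.argmin(np.abs(maxwell_garnett(eps_w, eps_o, grid) - eps_meas))]
print(eps_meas, f_est)
```

The rule is monotone in f for fixed phase permittivities, so the inversion is unique; swapping in a logarithmic mixing rule only changes the `maxwell_garnett` function.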
Abstract:
Recently, semi-empirical models to estimate the flow boiling heat transfer coefficient, saturated CHF and pressure drop in micro-scale channels have been proposed. Most of these models were developed based on elongated-bubble and annular flows, in view of the fact that these flow patterns are predominant in smaller channels. In these models, the liquid film thickness plays an important role, which emphasizes that accurate measurement of the liquid film thickness is a key point in validating them. On the other hand, several techniques have been successfully applied to measure liquid film thickness during condensation and evaporation under macro-scale conditions. However, although this subject has been targeted by several leading laboratories around the world, it seems that there is no conclusive result describing a successful technique capable of measuring the dynamic liquid film thickness during evaporation inside micro-scale round channels. This work presents a comprehensive literature review of the methods used to measure liquid film thickness in macro- and micro-scale systems. The methods are described and the main difficulties related to their use in micro-scale systems are identified. Based on this discussion, the most promising methods for measuring dynamic liquid film thickness in micro-scale channels are identified. (C) 2009 Elsevier Inc. All rights reserved.