994 results for 3D measurement
Abstract:
The contribution of the detector dynamics to the weak measurement is analyzed. According to the usual theory [Y. Aharonov, D. Z. Albert, and L. Vaidman, Phys. Rev. Lett. 60, 1351 (1988)], the outcome of a weak measurement with preselection and postselection can be expressed as the real part of a complex number: the weak value. By accounting for the Hamiltonian evolution of the detector, we find here that there is an additional contribution to the outcome of the weak measurement, proportional to the imaginary part of the weak value. This contribution arises because the coherence of the probe is essential for the concept of a complex weak value to be meaningful. As a particular example, we consider the measurement of a spin component and find that the contribution of the imaginary part of the weak value is sizable.
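For reference, the weak value the abstract refers to [Aharonov, Albert, and Vaidman, Phys. Rev. Lett. 60, 1351 (1988)] is, for preselected state psi and postselected state phi,

```latex
A_w \;=\; \frac{\langle \phi \mid \hat{A} \mid \psi \rangle}{\langle \phi \mid \psi \rangle},
```

a complex number in general. The standard analysis gives a mean pointer shift proportional to Re(A_w); the abstract's claim is that keeping the detector's own Hamiltonian evolution adds a contribution proportional to Im(A_w), with a coefficient that depends on the detector state.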
Abstract:
We describe the measurement of the depth of maximum, X_max, of the longitudinal development of air showers induced by cosmic rays. Almost 4000 events above 10^18 eV observed by the fluorescence detector of the Pierre Auger Observatory in coincidence with at least one surface detector station are selected for the analysis. The average shower maximum was found to evolve with energy at a rate of (106 +35/-21) g/cm^2/decade below 10^(18.24 +/- 0.05) eV, and (24 +/- 3) g/cm^2/decade above this energy. The measured shower-to-shower fluctuations decrease from about 55 to 26 g/cm^2. The interpretation of these results in terms of the cosmic-ray mass composition is briefly discussed.
Abstract:
Knowledge of the atomic structure of clusters composed of a few atoms is a basic prerequisite for obtaining insights into the mechanisms that determine their chemical and physical properties as a function of diameter, shape, and surface termination, as well as for understanding the mechanism of bulk formation. Due to the wide use of metal systems in modern life, the accurate determination of the properties of 3d, 4d, and 5d metal clusters poses a major challenge for nanoscience. In this work, we report a density functional theory study of the atomic structure, binding energies, effective coordination numbers, average bond lengths, and magnetic properties of the 3d, 4d, and 5d metal (30 elements) clusters containing 13 atoms, M(13). First, a set of lowest-energy local-minimum structures (as supported by vibrational analysis) was obtained by combining high-temperature first-principles molecular-dynamics simulation, structure crossover, and the selection of five well-known M(13) structures. Several new lower-energy configurations were identified, e.g., for Pd(13), W(13), and Pt(13), and previously known structures were confirmed by our calculations. Furthermore, the following trends were identified: (i) compact icosahedral-like forms occur at the beginning of each metal series, more open structures such as hexagonal-bilayer-like and double simple-cubic layers at the middle of each series, and structures with an increasing effective coordination number for large d-state occupations. (ii) For Au(13), we found that spin-orbit coupling favors three-dimensional (3D) structures, i.e., a 3D structure is about 0.10 eV lower in energy than the lowest-energy known two-dimensional configuration. (iii) The magnetic exchange interactions play an important role for particular systems such as Fe, Cr, and Mn.
(iv) The analysis of the binding energies and average bond lengths shows a parabola-like shape as a function of the occupation of the d states; hence, most of the properties can be explained by the chemical picture of the occupation of bonding and antibonding states.
Abstract:
This paper describes a new and simple method to determine the molecular weight of proteins in dilute solution, with an error smaller than similar to 10%, by using the experimental data of a single small-angle X-ray scattering (SAXS) curve measured on a relative scale. This procedure does not require the measurement of SAXS intensity on an absolute scale and does not involve a comparison with another SAXS curve determined from a known standard protein. The proposed procedure can be applied to monodisperse systems of proteins in dilute solution, either in monomeric or multimeric state, and it has been successfully tested on SAXS data experimentally determined for proteins with known molecular weights. It is shown here that the molecular weights determined by this procedure deviate from the known values by less than 10% in each case and the average error for the test set of 21 proteins was 5.3%. Importantly, this method allows for an unambiguous determination of the multimeric state of proteins with known molecular weights.
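The abstract does not give the formulas it uses, but a common relative-scale route to a particle volume, and from it a molecular weight, is the Porod invariant: Q = integral of q^2 I(q) dq and V_p = 2 pi^2 I(0) / Q, where I(0) and Q come from the same (arbitrarily scaled) curve so that the scale factor cancels. The sketch below is an illustration of that idea, not the authors' exact procedure; the 1.66 nm^3/kDa protein-density factor is an assumed empirical value. It recovers the volume of an ideal sphere from its simulated SAXS curve:

```python
import numpy as np

def sphere_intensity(q, R):
    # SAXS curve of a homogeneous sphere of radius R (unit contrast):
    # I(q) = V^2 * [3 (sin(qR) - qR cos(qR)) / (qR)^3]^2
    V = 4.0 / 3.0 * np.pi * R**3
    x = q * R
    return (V * 3.0 * (np.sin(x) - x * np.cos(x)) / x**3) ** 2

def porod_volume(q, I):
    # Porod invariant Q = integral of q^2 I(q) dq (trapezoid rule) and
    # apparent volume V_p = 2 pi^2 I(0) / Q; any overall scale of I cancels.
    y = q**2 * I
    Q = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(q))
    return 2.0 * np.pi**2 * I[0] / Q

R = 3.0                               # nm
q = np.linspace(1e-4, 7.0, 40000)     # nm^-1; truncating at q_max biases V_p slightly
V_true = 4.0 / 3.0 * np.pi * R**3     # ~113.1 nm^3
V_p = porod_volume(q, sphere_intensity(q, R))
MW_kDa = V_p / 1.66                   # assumed empirical protein volume-to-mass relation
```

Because the q^-4 Porod tail is cut off at q_max, the recovered volume is a few percent high; real single-curve methods correct for this truncation.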
Abstract:
Background: There are several studies in the literature depicting measurement error in gene expression data, and several others about regulatory network models. However, only a small fraction combine measurement error with mathematical regulatory network models and show how to identify these networks under different rates of noise. Results: This article investigates the effects of measurement error on the estimation of the parameters of regulatory networks. Simulation studies indicate that, in both time series (dependent) and non-time series (independent) data, measurement error strongly affects the estimated parameters of regulatory network models, biasing them as predicted by theory. Moreover, when testing the parameters of regulatory network models, p-values computed by ignoring measurement error are not reliable, since the rate of false positives is not controlled under the null hypothesis. To overcome these problems, we present an improved version of the Ordinary Least Squares estimator for independent (regression models) and dependent (autoregressive models) data when the variables are subject to noise. Measurement-error estimation procedures for microarrays are also described. Simulation results show that both corrected methods perform better than the standard ones (i.e., ignoring measurement error). The proposed methodologies are illustrated using microarray data from lung cancer patients and mouse liver time series data. Conclusions: Measurement error seriously affects the identification of regulatory network models; it must therefore be reduced or taken into account in order to avoid erroneous conclusions. This could be one of the reasons for the high biological false positive rates found in actual regulatory network models.
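The bias the Results paragraph describes, and the kind of rescaling a noise-corrected least-squares estimator applies, can be seen in a minimal simulated regression. This is a generic errors-in-variables sketch, not the authors' exact estimator; all variable names and parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
beta = 2.0
sigma_x, sigma_u, sigma_e = 1.0, 0.5, 0.3

x = rng.normal(0.0, sigma_x, n)          # true regressor (unobserved)
w = x + rng.normal(0.0, sigma_u, n)      # regressor observed with measurement error
y = beta * x + rng.normal(0.0, sigma_e, n)

# Naive OLS on the noisy w: attenuated toward zero by the factor
# sigma_x^2 / (sigma_x^2 + sigma_u^2) = 0.8, so it converges to ~1.6, not 2.0
beta_naive = np.cov(w, y)[0, 1] / np.var(w, ddof=1)

# Corrected estimator: rescale by var(w) / (var(w) - sigma_u^2),
# assuming the measurement-error variance sigma_u^2 is known or estimated
var_w = np.var(w, ddof=1)
beta_corr = beta_naive * var_w / (var_w - sigma_u**2)
```

The same attenuation argument carries over to autoregressive models for time series data, where the lagged variable is itself measured with error.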
Abstract:
The effect of conversion from forest to pasture on soil carbon stocks has been intensively discussed, but few studies focus on how this land-use change affects carbon (C) distribution across soil fractions in the Amazon basin. We investigated this in the top 20 cm of soil along a chronosequence of sites from native forest to three successively older pastures. We performed a physicochemical fractionation of bulk soil samples to better understand the mechanisms by which soil C is stabilized and to evaluate the contribution of each C fraction to total soil C. Additionally, we used a two-pool model to estimate the mean residence time (MRT) of the slow- and active-pool C in each fraction. Soil C increased with conversion from forest to pasture in the particulate organic matter (>250 µm), microaggregate (53-250 µm), and d-clay (<2 µm) fractions. The microaggregate fraction contained the highest soil C content after the conversion from forest to pasture. The C content of the d-silt fraction decreased with time since conversion to pasture. Forest-derived C remained in all fractions, with the highest concentration in the finest fractions and the largest proportion of forest-derived soil C associated with clay minerals. Results from this work indicate that microaggregate formation is sensitive to changes in management and might serve as an indicator of management-induced soil carbon changes, and that the soil C changes in the fractions depend on soil texture.
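The two-pool MRT model mentioned above has the generic form C(t) = C_active * exp(-k_active * t) + C_slow * exp(-k_slow * t), with MRT = 1/k for each pool. A minimal sketch of this form follows; the pool sizes and rate constants are illustrative, not values fitted in the study:

```python
import math

def two_pool_c(t, c_active, k_active, c_slow, k_slow):
    # Total C remaining at time t (years) in a two-pool first-order model;
    # each pool decays independently, and MRT of a pool = 1 / k.
    return c_active * math.exp(-k_active * t) + c_slow * math.exp(-k_slow * t)

# Illustrative (not measured) parameters: active pool MRT 5 yr, slow pool MRT 50 yr
c_a, k_a = 10.0, 1.0 / 5.0
c_s, k_s = 30.0, 1.0 / 50.0
remaining_20yr = two_pool_c(20.0, c_a, k_a, c_s, k_s)
```

In a chronosequence study, the decline of forest-derived C (e.g. traced by its C isotope signature) along pasture age is fitted to this curve, and the fitted k values give the MRT of each pool in each fraction.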
Abstract:
Soils are an important component of the biogeochemical carbon cycle, storing about four times more carbon than plant biomass and nearly three times more than the atmosphere. Moreover, the carbon content is directly related to water retention capacity and fertility, among other properties. Thus, soil carbon quantification under field conditions is an important challenge related to the carbon cycle and global climatic change. Laser-Induced Breakdown Spectroscopy (LIBS) can be used for qualitative elemental analyses without previous treatment of the samples, and the results are obtained quickly. New optical technologies have made portable LIBS systems possible, and the great expectation now is the development of methods that enable quantitative measurements with LIBS. The goal of this work is to calibrate a portable LIBS system to carry out quantitative measurements of carbon in whole tropical soil samples. For this, six samples from the Brazilian Cerrado region (Argisol) were used. Tropical soils have large amounts of iron in their composition, so the carbon line at 247.86 nm suffers strong interference from this element (iron lines at 247.86 and 247.95 nm). For this reason, the carbon line at 193.03 nm was used in this work. Using statistical analysis methods such as simple linear regression, multivariate linear regression, and cross-validation, it was possible to obtain correlation coefficients higher than 0.91. These results show the great potential of portable LIBS systems for quantitative carbon measurements in tropical soils.
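As a sketch of the kind of calibration the abstract describes, the snippet below fits a univariate linear calibration (emission-line intensity to carbon content) and evaluates it by leave-one-out cross-validation. The data points are hypothetical, not the Cerrado measurements:

```python
import numpy as np

def loo_predictions(x, y):
    # Leave-one-out cross-validation of a univariate linear calibration:
    # refit the intensity -> concentration line with each sample held out once,
    # then predict the held-out sample with that line.
    preds = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        slope, intercept = np.polyfit(x[mask], y[mask], 1)
        preds.append(slope * x[i] + intercept)
    return np.array(preds)

# Hypothetical calibration set: C line intensity (a.u.) vs. reference C content (%)
intensity = np.array([0.8, 1.1, 1.5, 1.9, 2.4, 2.8])
carbon = np.array([0.9, 1.2, 1.7, 2.1, 2.6, 3.1])

pred = loo_predictions(intensity, carbon)
r = np.corrcoef(pred, carbon)[0, 1]   # cross-validated correlation coefficient
```

With only six samples, as in the study, leave-one-out is the natural cross-validation scheme: every sample serves once as an independent test point.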
Abstract:
This paper proposes a three-stage offline approach to detect, identify, and correct series and shunt branch parameter errors. In Stage 1, the branches suspected of having parameter errors are identified through an Identification Index (II). The II of a branch is the ratio between the number of measurements adjacent to that branch whose normalized residuals are higher than a specified threshold value and the total number of measurements adjacent to that branch. Using several measurement snapshots, in Stage 2 the suspicious parameters are estimated, in a simultaneous multiple-state-and-parameter estimation, via an augmented state-and-parameter estimator that augments the V-theta state vector to include the suspicious parameters. Stage 3 validates the estimates obtained in Stage 2 and is performed via a conventional weighted least squares estimator. Several simulation results (with IEEE bus systems) have demonstrated the reliability of the proposed approach in dealing with single and multiple parameter errors in adjacent and non-adjacent branches, as well as in parallel transmission lines with series compensation. Finally, the proposed approach is confirmed by tests performed on the Hydro-Quebec TransEnergie network.
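Stage 1's Identification Index is simple to state in code: for each branch, II = (number of adjacent measurements whose normalized residual magnitude exceeds the threshold) / (total number of adjacent measurements). A minimal sketch with made-up residual values:

```python
def identification_index(normalized_residuals, threshold=3.0):
    # II of a branch = fraction of measurements adjacent to the branch whose
    # normalized residual magnitude exceeds the threshold value.
    n_suspect = sum(1 for r in normalized_residuals if abs(r) > threshold)
    return n_suspect / len(normalized_residuals)

# Hypothetical normalized residuals of measurements adjacent to two branches
ii_bad = identification_index([4.2, -5.1, 3.6, 0.8])   # 3 of 4 exceed the threshold
ii_ok = identification_index([0.4, -1.2, 2.1, 0.9])    # none exceed the threshold
```

Branches whose II exceeds a chosen cutoff are flagged as suspicious and passed on to the Stage 2 augmented estimator.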
Abstract:
In this study, the innovation approach is used to estimate the total measurement error associated with power system state estimation. This is required because the power system equations are strongly correlated with each other, and as a consequence part of the measurement errors is masked. For that purpose, an innovation index (II), which quantifies the new information a measurement contains, is proposed. A critical measurement is the limiting case of a measurement with low II: its II is zero and its error is totally masked. In other words, such a measurement does not bring any innovation to the gross error test. Using the II of a measurement, the gross error masked by the state estimation is recovered, and the total gross error of that measurement is then composed. Instead of the classical normalised measurement residual amplitude, the corresponding normalised composed measurement residual amplitude is used in the gross error detection and identification test, but with m degrees of freedom. The gross error processing turns out to be very simple to implement, requiring only a few adaptations to existing state estimation software. The IEEE 14-bus system is used to validate the proposed gross error detection and identification test.
Abstract:
In this paper, a novel wire-mesh sensor based on permittivity (capacitance) measurements is applied to generate images of the phase-fraction distribution and to investigate the flow of viscous oil and water in a horizontal pipe. Phase-fraction values were calculated from the raw data delivered by the wire-mesh sensor using different mixture-permittivity models, and these values were validated against quick-closing-valve measurements. The investigated flow patterns were dispersion of oil in water (Do/w) and dispersion of oil in water together with water in oil (Do/w&w/o). The Maxwell-Garnett mixing model is better suited for the Do/w flow pattern and the logarithmic model for the Do/w&w/o flow pattern. Images of the time-averaged cross-sectional oil-fraction distribution, along with axial slice images, were used to visualize and disclose some details of the flow.
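The two mixing rules named in the abstract can each be inverted to obtain the dispersed-phase fraction from a measured mixture permittivity. The sketch below uses the standard textbook forms of the Maxwell-Garnett and logarithmic rules; the permittivity values are illustrative, not the sensor's calibration data:

```python
import math

def phi_maxwell_garnett(eps_eff, eps_c, eps_d):
    # Invert the Maxwell-Garnett rule for inclusions eps_d in a host eps_c:
    # (e - ec)/(e + 2 ec) = phi * (ed - ec)/(ed + 2 ec)
    lhs = (eps_eff - eps_c) / (eps_eff + 2.0 * eps_c)
    return lhs * (eps_d + 2.0 * eps_c) / (eps_d - eps_c)

def phi_logarithmic(eps_eff, eps_c, eps_d):
    # Invert the logarithmic rule: ln e = phi * ln ed + (1 - phi) * ln ec
    return math.log(eps_eff / eps_c) / math.log(eps_d / eps_c)

eps_water, eps_oil = 80.0, 2.2   # typical relative permittivities (illustrative)

# Forward Maxwell-Garnett for phi = 0.3 oil dispersed in water, then invert it
phi = 0.3
beta = (eps_oil - eps_water) / (eps_oil + 2.0 * eps_water)
eps_mix = eps_water * (1.0 + 2.0 * phi * beta) / (1.0 - phi * beta)
```

Applied point by point to the wire-mesh raw data, such an inversion turns the measured permittivity map into the phase-fraction images described in the abstract.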
Abstract:
Recently, semi-empirical models to estimate the flow boiling heat transfer coefficient, saturated CHF, and pressure drop in micro-scale channels have been proposed. Most of these models were developed for elongated-bubble and annular flows, in view of the fact that these flow patterns are predominant in smaller channels. In these models, the liquid film thickness plays an important role, which emphasizes that accurate measurement of the liquid film thickness is a key point in validating them. Several techniques have been successfully applied to measure liquid film thickness during condensation and evaporation under macro-scale conditions. However, although this subject has been targeted by several leading laboratories around the world, there appears to be no conclusive result describing a successful technique capable of measuring the dynamic liquid film thickness during evaporation inside micro-scale round channels. This work presents a comprehensive literature review of the methods used to measure liquid film thickness in macro- and micro-scale systems. The methods are described and the main difficulties related to their use in micro-scale systems are identified. Based on this discussion, the most promising methods for measuring dynamic liquid film thickness in micro-scale channels are identified.
Abstract:
The most common finite element formulations for 3D frame analysis do not consider the warping of cross-sections as part of their kinematics. The torsional stiffness must therefore be introduced directly by the user into the computational software, and the bar is treated as if it were working under the no-warping hypothesis. This approach does not give good results for general structural elements applied in engineering: both displacement and stress calculations reveal noticeable deficiencies in linear and non-linear applications. For linear analysis, displacements can be corrected by assuming a stiffness that results in acceptable global displacements of the analyzed structure; however, the stress calculation will be far from reality. For non-linear analysis the deficiencies are even worse. In the past forty years, some special structural matrix analysis and finite element formulations have been proposed in the literature to include warping and bending-torsion effects in general 3D frame analysis, for both linear and non-linear situations. In this work, using a kinematics improvement technique, the degree of freedom "warping intensity" is introduced following a new approach for 3D frame elements. This degree of freedom is associated with the warping basic mode, a geometric characteristic of the cross-section. It does not have a direct relation with the rate of twist rotation along the longitudinal axis, as in existing formulations. Moreover, a linear strain variation mode is provided for the geometric non-linear approach, for which the complete 3D constitutive relation (Saint-Venant-Kirchhoff) is adopted. The proposed technique allows the consideration of inhomogeneous cross-sections with any geometry. Various examples are shown to demonstrate the accuracy and applicability of the proposed formulation.
Abstract:
This study presents a solid-like finite element formulation to solve geometrically non-linear three-dimensional inhomogeneous frames. To achieve the desired representation, unconstrained vectors are used instead of the classic rigid director triad; as a consequence, the resulting formulation does not use finite-rotation schemes. High-order curved elements with any cross-section are developed using a full three-dimensional constitutive elastic relation. Warping and variable-thickness strain modes are introduced to avoid locking. The warping mode is solved numerically in a FEM pre-processing code, which is coupled to the main program; the extra calculations remain relatively small as the number of finite elements with the same cross-section increases. The warping mode is based on a 2D free-torsion (Saint-Venant) problem that considers inhomogeneous material. A scheme that automatically generates the shape functions and their derivatives allows the use of any degree of approximation for the developed frame element. General examples are solved to check the objectivity, path independence, locking-free behavior, generality, and accuracy of the proposed formulation.
Abstract:
This study presents an alternative three-dimensional geometrically non-linear frame formulation, based on generalized unconstrained vectors and positions, to solve structures and mechanisms subjected to dynamic loading. The formulation is classified as total Lagrangian with an exact kinematic description. The resulting element presents warping and non-constant transverse strain modes, which guarantees locking-free behavior for the adopted three-dimensional constitutive relation (Saint-Venant-Kirchhoff, for instance). The application of generalized vectors is an alternative to the use of finite rotations and rigid-triad formulae. Spherical and revolute joints are considered, and selected dynamic and static examples are presented to demonstrate the accuracy and generality of the proposed technique.
Abstract:
A way of coupling digital image correlation (to measure displacement fields) and the boundary element method (to compute displacements and tractions along a crack surface) is presented herein. It allows for the identification of Young's modulus and of the fracture parameters associated with a cohesive model. The procedure is illustrated by analyzing the latter for an ordinary concrete in a three-point bend test on a notched beam. In view of measurement uncertainties, the results are deemed trustworthy because numerous measurement points are accessible and used as input to the identification procedure.