994 results for calibration method
Abstract:
The Houston region is home to arguably the largest petrochemical and refining complex in the world. The effluent of this complex includes many potentially hazardous compounds, and study of some of them has led to the recognition that a number of known and probable carcinogens are present at elevated levels in ambient air. Two of these, benzene and 1,3-butadiene, have been found at concentrations that may pose a health risk for residents of Houston. Recent popular journalism and publications by local research institutions have increased public interest in Houston's air quality. Much of this literature has been critical of local regulatory agencies' oversight of industrial pollution, and a number of citizens in the region have begun to volunteer with air quality advocacy groups to test community air. Inexpensive methods exist for monitoring ambient concentrations of ozone, particulate matter and airborne toxics. This study is an evaluation of a technique that has been successfully applied to airborne toxics: solid phase microextraction (SPME), which has been used to measure airborne volatile organic hydrocarbons at community-level concentrations. It has yielded accurate and rapid concentration estimates at a relatively low cost per sample. Examples of its application to the measurement of airborne benzene exist in the literature; none have been found for airborne 1,3-butadiene. These two compounds were selected for an evaluation of SPME as a community-deployed technique in order to replicate its previous application to benzene, to expand its application to 1,3-butadiene, and because of their salience in this community. This study demonstrates that SPME is a useful technique for quantification of 1,3-butadiene at concentrations observed in Houston; laboratory background levels precluded recommendation of the technique for benzene.
One type of SPME fiber, 85 μm Carboxen/PDMS, was found to be a sensitive sampling device for 1,3-butadiene under temperature and humidity conditions common in Houston. This study indicates that these variables affect instrument response, suggesting that calibration must be performed under the specific temperature and humidity conditions expected in the field. While deployment of this technique was less expensive than other methods of quantifying 1,3-butadiene, the complexity of calibration may exclude an SPME method from broad deployment by community groups.
Abstract:
The relationship between phytoplankton assemblages and the associated optical properties of the water body is important for the further development of algorithms for large-scale remote sensing of phytoplankton biomass and the identification of phytoplankton functional types (PFTs), which are often representative of different biogeochemical export scenarios. Optical in-situ measurements aid in the identification of phytoplankton groups with differing pigment compositions and are widely used to validate remote sensing data. In this study we present results from an interdisciplinary cruise aboard the RV Polarstern along a north-to-south transect in the eastern Atlantic Ocean in November 2008. Phytoplankton community composition was identified using a broad set of in-situ measurements. Water samples from the surface and the depth of maximum chlorophyll concentration were analyzed by high performance liquid chromatography (HPLC), flow cytometry, spectrophotometry and microscopy. Simultaneously, the above- and underwater light field was measured by a set of high spectral resolution (hyperspectral) radiometers. An unsupervised cluster algorithm applied to the measured parameters allowed us to define bio-optical provinces, which we compared to ecological provinces proposed elsewhere in the literature. As could be expected, picophytoplankton was responsible for most of the variability of PFTs in the eastern Atlantic Ocean. Our bio-optical clusters agreed well with established provinces and thus can be used to classify areas of similar biogeography. This method has the potential to become an automated approach in which satellite data could be used to identify shifting boundaries of established ecological provinces or to track exceptions from the rule, improving our understanding of the biogeochemical cycles in the ocean.
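The clustering step described above can be sketched with a standard k-means algorithm; the abstract does not name the specific unsupervised cluster algorithm or feature set used, so both the algorithm choice and the two-dimensional feature vectors below (imagine, e.g., a pigment ratio and a reflectance band ratio per station) are illustrative assumptions.

```python
# Minimal k-means sketch: grouping stations into "bio-optical provinces"
# from per-station feature vectors. Hypothetical stand-in for the
# unspecified cluster algorithm of the study.
import random

def _dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=50, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # assign each point to its nearest centre
        labels = [min(range(k), key=lambda j: _dist2(p, centers[j]))
                  for p in points]
        # recompute each centre as the mean of its members
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centers[j] = tuple(sum(c) / len(members)
                                   for c in zip(*members))
    return labels, centers
```

Stations whose optical/pigment features cluster together would then define one province, to be compared against published ecological provinces.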
Abstract:
The evapotranspiration (ETc) of sprinkler-irrigated rice was determined for the semiarid conditions of NE Spain during 2001, 2002 and 2003. The surface renewal method, after calibration against the eddy covariance method, was used to obtain values of sensible heat flux (H) from high-frequency temperature readings. Latent heat flux values were obtained by solving the energy balance equation. Finally, lysimeter measurements were used to validate the evapotranspiration values obtained with the surface renewal method. Seasonal rice evapotranspiration was about 750–800 mm. Average daily ETc for mid-season (from 90 to 130 days after sowing) was 5.1, 4.5 and 6.1 mm day⁻¹ for 2001, 2002 and 2003, respectively. The experimental weekly crop coefficients fluctuated in the range of 0.83–1.20 for 2001, 0.81–1.03 for 2002 and 0.84–1.15 for 2003. The total growing season was about 150–160 days. On average, the crop coefficients for the initial (Kcini), mid-season (Kcmid) and late-season stages (Kcend) were 0.92, 1.06 and 1.03, respectively, the lengths of these stages being about 55, 45 and 25 days, respectively.
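The flux derivation above follows the standard energy-balance closure, Rn = H + LE + G, solved for latent heat flux LE and converted to an evapotranspiration depth. A minimal sketch; all numeric inputs are illustrative, not values from the study:

```python
# Energy-balance closure: Rn = H + LE + G  =>  LE = Rn - H - G,
# then convert a mean daily LE (W m-2) to mm of water per day.

LAMBDA = 2.45e6  # latent heat of vaporization, J kg-1 (approx. at 20 C)

def latent_heat_flux(rn, h, g):
    """Latent heat flux LE (W m-2) from net radiation Rn,
    sensible heat flux H and soil heat flux G."""
    return rn - h - g

def et_mm_per_day(le):
    """Convert a mean daily latent heat flux (W m-2) to mm day-1.
    1 kg of water per m2 corresponds to a 1 mm depth."""
    seconds_per_day = 86400
    return le * seconds_per_day / LAMBDA
```

For example, a mean daily LE of about 150 W m⁻² corresponds to roughly 5.3 mm day⁻¹, the order of magnitude of the mid-season ETc values reported above.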
Abstract:
This article proposes a method for calibrating discontinuity sets in rock masses: a novel approach to the calibration of stochastic discontinuity network parameters based on genetic algorithms (GAs). To validate the approach, we present examples of its application to cases with known parameters of the original Poisson discontinuity network. Parameters of the model are encoded as chromosomes using a binary representation, and these chromosomes evolve as successive generations of a randomly generated initial population subjected to the GA operations of selection, crossover and mutation. The back-calculated parameters are used to assess the inference capabilities of the model under different objective functions and different probabilities of crossover and mutation. Results show that the predictive capabilities of GAs depend significantly on the type of objective function considered. They also show that the calibration capabilities of the genetic algorithm can be acceptable for practical engineering applications, since in most cases it can be expected to provide estimates with relatively small errors for those parameters of the network (such as intensity and mean size of discontinuities) that have the strongest influence on many engineering applications.
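The back-calculation loop described above can be sketched as a small binary GA: parameters (here a hypothetical discontinuity intensity and mean size) are encoded as fixed-length bit strings and evolved by tournament selection, one-point crossover and bit-flip mutation against an objective function. The 8-bit encoding, parameter ranges and quadratic fitness are illustrative assumptions, not the paper's actual setup:

```python
# Binary-encoded GA calibration sketch against known target parameters.
import random

BITS = 8
RANGES = [(0.0, 10.0), (0.0, 5.0)]  # assumed (intensity, mean size) bounds

def decode(chrom):
    """Map a bit-string chromosome to real-valued parameters."""
    params = []
    for i, (lo, hi) in enumerate(RANGES):
        gene = chrom[i * BITS:(i + 1) * BITS]
        frac = int("".join(map(str, gene)), 2) / (2 ** BITS - 1)
        params.append(lo + frac * (hi - lo))
    return params

def calibrate(target, pop_size=40, gens=60, p_cross=0.8, p_mut=0.02, seed=1):
    rng = random.Random(seed)
    n = BITS * len(RANGES)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]

    def fitness(chrom):  # negative squared error to the known target
        return -sum((p - t) ** 2 for p, t in zip(decode(chrom), target))

    for _ in range(gens):
        new = []
        while len(new) < pop_size:
            # tournament selection of two parents
            a, b = (max(rng.sample(pop, 3), key=fitness) for _ in range(2))
            c1, c2 = a[:], b[:]
            if rng.random() < p_cross:        # one-point crossover
                cut = rng.randrange(1, n)
                c1, c2 = a[:cut] + b[cut:], b[:cut] + a[cut:]
            for c in (c1, c2):                # bit-flip mutation
                for i in range(n):
                    if rng.random() < p_mut:
                        c[i] ^= 1
            new.extend([c1, c2])
        pop = new[:pop_size]
    return decode(max(pop, key=fitness))
```

In the paper's validation, the fitness would compare statistics of the simulated discontinuity network against observed ones rather than parameters directly; the simple quadratic error here just keeps the sketch self-contained.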
Abstract:
This study analyses the differences between two calculation models for guardrails on building sites that use wooden boards and tubular steel posts. Wood was considered an isotropic material in one model and an orthotropic material in a second model. The elastic constants of the wood were obtained with ultrasound. Frequencies and vibration modes were obtained for both models through linear analysis using the finite element method. The two models were experimentally calibrated through operational modal analysis. The results obtained show that for the three types of wood under analysis, the model which considered them as an orthotropic material fitted the experimental results better than the model which considered them as an isotropic material.
Abstract:
The Actively Heated Fiber Optic (AHFO) method is shown to be capable of measuring soil water content several times per hour at 0.25 m spacing along cables of multiple kilometers in length. AHFO is based on distributed temperature sensing (DTS) observation of the heating and cooling of a buried fiber-optic cable resulting from an electrical impulse of energy delivered through the steel cable jacket. The results presented were collected from 750 m of cable buried in three 240 m colocated transects at 30, 60, and 90 cm depths in an agricultural field under center pivot irrigation. The calibration curve relating soil water content to the thermal response of the soil to a heat pulse of 10 W m⁻¹ for a 1 min duration was developed in the lab. This calibration was found applicable to the 30 and 60 cm depth cables, while the 90 cm depth cable illustrated the challenges presented by soil heterogeneity for this technique. This method was used to map with high resolution the variability of soil water content and fluxes induced by the nonuniformity of water application at the surface.
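The thermal-response step above can be sketched as follows: integrate the cable's temperature rise above baseline over the heat pulse to obtain a cumulative temperature, then map it to water content through the lab-derived curve. Wetter soil conducts heat away faster, so the cumulative temperature falls as water content rises. Both the linear form of the calibration and all coefficients below are illustrative assumptions; the abstract does not give the actual curve.

```python
# Sketch of AHFO thermal response and a hypothetical lab calibration.

def cumulative_temperature(temps, baseline, dt):
    """Integral of temperature rise above baseline (C*s) over the pulse,
    from DTS samples `temps` taken every `dt` seconds."""
    return sum((t - baseline) * dt for t in temps)

def water_content(tcum, a=-0.005, b=0.65):
    """Hypothetical linear lab calibration: theta = a * Tcum + b,
    returning volumetric water content (m3/m3)."""
    return a * tcum + b
```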
Abstract:
The CENTURY soil organic matter model was adapted to the DSSAT (Decision Support System for Agrotechnology Transfer) modular format in order to better simulate the dynamics of soil organic nutrient processes (Gijsman et al., 2002). The CENTURY model divides the soil organic carbon (SOC) into three hypothetical pools: microbial or active material (SOC1), intermediate material (SOC2) and the largely inert and stable material (SOC3) (Jones et al., 2003). At the beginning of the simulation, the CENTURY model needs a value of SOC3 per soil layer, which can be estimated by the model (based on soil texture and management history) or given as an input. The model then assigns about 5% and 95% of the remaining SOC to SOC1 and SOC2, respectively. Model performance when simulating SOC and nitrogen (N) dynamics strongly depends on this initialization process. The common methods (e.g. Basso et al., 2011) of initializing the SOC pools deal mostly with carbon (C) mineralization processes and less with N. The dynamics of SOM, SOC and soil organic N are linked in the CENTURY-DSSAT model through the C/N ratio of decomposing material, which determines either mineralization or immobilization of N (Gijsman et al., 2002). The aim of this study was to evaluate an alternative method of initializing the SOC pools in the DSSAT-CENTURY model from apparent soil N mineralization (Napmin) field measurements by using automatic inverse calibration (simulated annealing). The results were compared with those obtained by the iterative initialization procedure developed by Basso et al. (2011).
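The pool-splitting rule described above can be written out directly: given a total measured SOC and an estimate of the stable pool SOC3, the remainder is divided roughly 5%/95% between the microbial and intermediate pools. A minimal sketch with illustrative numbers (units, e.g. kg C m⁻² per layer, are up to the caller):

```python
# CENTURY-DSSAT SOC pool initialization sketch: SOC3 is estimated or
# supplied, then the remaining carbon is split ~5% to SOC1 (microbial)
# and ~95% to SOC2 (intermediate).

def init_soc_pools(soc_total, soc3):
    remaining = soc_total - soc3
    soc1 = 0.05 * remaining
    soc2 = 0.95 * remaining
    return soc1, soc2, soc3
```

The study's contribution is choosing SOC3 (and hence the split) by inverse calibration against Napmin field measurements instead of fixing it from texture and management history; the simulated-annealing search itself is not shown here.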
Abstract:
Chlorarachniophytes are amoeboid algae with chlorophyll a- and b-containing plastids that are surrounded by four membranes instead of the two found in plants and green algae. These extra membranes provide important support for the hypothesis that chlorarachniophytes acquired their plastids by ingesting another eukaryotic plastid-containing alga. Chlorarachniophytes also contain a small nucleus-like structure called the nucleomorph, situated between the two inner and the two outer membranes surrounding the plastid. This nucleomorph is a remnant of the endosymbiont's nucleus and encodes, among other molecules, the small subunit ribosomal RNA. Previous phylogenetic analyses based on this molecule provided unexpected and contradictory evidence for the origin of the chlorarachniophyte endosymbiont. We developed a new method for measuring the substitution rates of the individual nucleotides of small subunit ribosomal RNA. From the resulting substitution rate distribution, we derived an equation that gives a more realistic relationship between sequence dissimilarity and evolutionary distance than equations previously available. Phylogenetic trees constructed from evolutionary distances computed by this new method clearly place the chlorarachniophyte nucleomorphs among the green algae. Moreover, this relationship is confirmed by transversion analysis of the Chlorarachnion plastid small subunit ribosomal RNA.
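For context, the classic fixed-rate correction that such rate-distribution methods refine is the Jukes-Cantor relationship between the observed proportion of differing sites p and the evolutionary distance d. The authors' own equation is not given in the abstract, so only the standard baseline is sketched here:

```python
# Jukes-Cantor distance correction: d = -(3/4) * ln(1 - (4/3) * p),
# which assumes equal substitution rates across all sites -- exactly
# the assumption the per-nucleotide rate method described above relaxes.
import math

def jukes_cantor_distance(p):
    """Evolutionary distance from observed dissimilarity p (fraction of
    differing sites). Defined only for 0 <= p < 0.75."""
    if not 0 <= p < 0.75:
        raise ValueError("p must be in [0, 0.75) for the correction to exist")
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)
```

Because d grows faster than p, ignoring rate variation among sites underestimates distances on divergent lineages, which is one way contradictory tree placements like those mentioned above can arise.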
Abstract:
Nighttime satellite imagery from the Defense Meteorological Satellite Program (DMSP) Operational Linescan System (OLS) has a unique capability to observe nocturnal light emissions from sources including cities, wild fires, and gas flares. Data from the DMSP OLS is used in a wide range of studies including mapping urban areas, estimating informal economies, and estimating urban populations. Given the extensive and increasing list of applications, a repeatable method for assessing geolocation accuracy, performing inter-calibration, and defining the minimum detectable brightness would be beneficial. An array of portable lights was designed and taken to multiple field sites known to have no other light sources. The lights were operated during nighttime overpasses by the DMSP OLS and observed in the imagery. A first estimate of the minimum detectable brightness is presented based on the field experiments conducted. An assessment of the geolocation accuracy was performed by measuring the distance between the GPS-measured location of the lights and the observed location in the imagery. A systematic shift was observed, with a mean distance of 2.9 km. A method for in situ radiance calibration of the DMSP OLS using a ground-based light source as an active target is presented. The wattage of light used by the active target strongly correlates with the signal measured by the DMSP OLS. This approach can be used to enhance our ability to make inter-temporal and inter-satellite comparisons of DMSP OLS imagery. Exploring the possibility of establishing a permanent active target for the calibration of nocturnal imaging systems is recommended. The methods used to assess the minimum detectable brightness, assess the geolocation accuracy, and build inter-calibration models lay the groundwork for assessing the energy expended on light emitted into the sky at night. An estimate of the total energy consumed to light the night sky globally is presented.
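The active-target calibration above amounts to a linear regression of the OLS digital signal against the known wattage of the ground light array, which can then be inverted to estimate source wattage from imagery. A minimal sketch; the sample points are invented for illustration, not measurements from the experiment:

```python
# Active-target radiance calibration sketch: fit signal ~= a * watts + b
# from paired (known wattage, observed OLS signal) points, then invert.

def fit_linear(x, y):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# hypothetical calibration points from overpasses of the light array
watts = [500.0, 1000.0, 1500.0, 2000.0]
signal = [12.0, 22.0, 32.0, 42.0]
a, b = fit_linear(watts, signal)

def watts_from_signal(s):
    """Invert the fit to estimate source wattage from an OLS signal."""
    return (s - b) / a
```

Fitting the same model against targets observed by different satellites or years is what enables the inter-satellite and inter-temporal comparisons mentioned above.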
Abstract:
Nowadays, RGB-D sensors have attracted a great deal of research in computer vision and robotics. Sensors of this kind, such as the Kinect, provide 3D data together with color information. However, their working range is limited to less than 10 meters, making them unsuitable for some robotics applications, such as outdoor mapping. In these environments, 3D lasers, working at ranges of 20-80 meters, are better suited. But 3D lasers do not usually provide color information. A simple 2D camera can be used to add color information to the point cloud, but a calibration between camera and laser must first be performed. In this paper we present a portable calibration system to calibrate any traditional camera with a 3D laser in order to assign color information to the 3D points obtained. Thus, we can exploit laser precision and color information simultaneously. Unlike other techniques that use a three-dimensional body of known dimensions in the calibration process, this system is highly portable because it uses small catadioptric targets that can be placed easily in the environment. We use our calibration system in a 3D mapping pipeline, including Simultaneous Localization and Mapping (SLAM), to obtain a colored 3D map that can be used in different tasks. We show that an additional problem arises: the 2D camera's color information varies when lighting conditions change, so when 3D point clouds from two different views are merged, several points in a given neighborhood can carry different color information. A new method for color fusion is presented that yields correctly colored maps. The system is tested by applying it to 3D reconstruction.
Abstract:
Paper submitted to the 43rd International Symposium on Robotics (ISR2012), Taipei, Taiwan, Aug. 29-31, 2012.
Abstract:
The Gauss-Marquardt-Levenberg (GML) method of computer-based parameter estimation, in common with other gradient-based approaches, suffers from the drawback that it may become trapped in local objective function minima, and thus report optimized parameter values that are not, in fact, optimized at all. This can seriously degrade its utility in the calibration of watershed models where local optima abound. Nevertheless, the method also has advantages, chief among these being its model-run efficiency, and its ability to report useful information on parameter sensitivities and covariances as a by-product of its use. It is also easily adapted to maintain this efficiency in the face of potential numerical problems (that adversely affect all parameter estimation methodologies) caused by parameter insensitivity and/or parameter correlation. The present paper presents two algorithmic enhancements to the GML method that retain its strengths, but which overcome its weaknesses in the face of local optima. Using the first of these methods an intelligent search for better parameter sets is conducted in parameter subspaces of decreasing dimensionality when progress of the parameter estimation process is slowed either by numerical instability incurred through problem ill-posedness, or when a local objective function minimum is encountered. The second methodology minimizes the chance of successive GML parameter estimation runs finding the same objective function minimum by starting successive runs at points that are maximally removed from previous parameter trajectories. As well as enhancing the ability of a GML-based method to find the global objective function minimum, the latter technique can also be used to find the locations of many non-global optima (should they exist) in parameter space. 
This can provide a useful means of inquiring into the well-posedness of a parameter estimation problem, and of detecting the presence of bimodal parameter and predictive probability distributions. The new methodologies are demonstrated by calibrating a Hydrological Simulation Program-FORTRAN (HSPF) model against a time series of daily flows. Comparison with the SCE-UA method in this calibration context demonstrates a high level of comparative model run efficiency for the new method.
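The second enhancement above, restarting successive runs at points maximally removed from previous parameter trajectories, can be sketched as a maximin search over a random candidate pool: pick the candidate whose minimum distance to all previously visited parameter points is largest. The candidate-pool strategy and Euclidean metric are illustrative stand-ins for the paper's exact scheme:

```python
# Restart-point selection sketch for multi-start gradient calibration:
# choose the next starting point in parameter space that is maximally
# removed from everything previous runs have already visited.
import random

def next_start(previous_points, bounds, n_candidates=200, seed=0):
    """previous_points: tuples visited by earlier runs;
    bounds: (lo, hi) per parameter dimension."""
    rng = random.Random(seed)

    def min_dist(p):  # distance to the nearest previously visited point
        return min(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
                   for q in previous_points)

    candidates = [tuple(rng.uniform(lo, hi) for lo, hi in bounds)
                  for _ in range(n_candidates)]
    return max(candidates, key=min_dist)
```

Starting each new GML run from such a point reduces the chance of re-converging to an already-found local optimum, and the set of distinct optima found this way is itself diagnostic of problem ill-posedness.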
Abstract:
Optical coherence tomography (OCT) is a non-invasive three-dimensional imaging system that is capable of producing high resolution in-vivo images. OCT is approved for use in clinical trials in Japan, USA and Europe. For OCT to be used effectively in clinical diagnosis, a method of standardisation is required to assess performance across different systems. This standardisation can be implemented using highly accurate and reproducible artefacts for calibration both at installation and throughout the lifetime of a system. Femtosecond lasers can write highly reproducible and highly localised micro-structured calibration artefacts within a transparent medium. We report on the fabrication of high quality OCT calibration artefacts in fused silica using a femtosecond laser. The calibration artefacts were written in fused silica due to its high purity and ability to withstand high energy femtosecond pulses. An Amplitude Systemes s-Pulse Yb:YAG femtosecond laser with an operating wavelength of 1026 nm was used to inscribe three-dimensional patterns within the highly optically transmissive substrate. Four unique artefacts have been designed to measure a wide variety of parameters, including the point spread function (PSF), modulation transfer function (MTF), sensitivity, distortion and resolution - key parameters which define the performance of the OCT. The calibration artefacts have been characterised using an optical microscope and tested on a swept source OCT. The results demonstrate that the femtosecond laser inscribed artefacts have the potential to validate, both quantitatively and qualitatively, the performance of any OCT system.
Abstract:
Respiratory-volume monitoring is an indispensable part of mechanical ventilation. Here we present a new method of respiratory-volume measurement based on a single fibre-optic long-period bend sensor and the correlation between torso curvature and lung volume. Unlike the commonly used air-flow based measurement methods, the proposed sensor is drift-free and immune to air leaks. In the paper, we explain the working principle of the sensor and a two-step calibration-test measurement procedure, and present results that establish a linear correlation between the change in local thorax curvature and the change in lung volume. We also discuss the advantages and limitations of these sensors with respect to the current standards.
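The linear correlation underpinning the two-step procedure above can be sketched as a calibration step (fit the slope relating curvature change to volume change against a reference measurement) followed by a test step (estimate volume from curvature alone). The through-origin linear model and all data values are illustrative assumptions:

```python
# Curvature-to-volume calibration sketch for the fibre-optic bend sensor.

def fit_slope(dkappa, dvolume):
    """Least-squares slope through the origin for dV = k * dkappa."""
    return sum(x * y for x, y in zip(dkappa, dvolume)) / \
           sum(x * x for x in dkappa)

# calibration step: paired (curvature change, volume change in litres)
# samples, e.g. recorded against a reference spirometer (hypothetical data)
k = fit_slope([0.01, 0.02, 0.03], [0.25, 0.50, 0.75])

def volume_change(dkappa_obs):
    """Test step: lung-volume change estimated from curvature alone."""
    return k * dkappa_obs
```

Because the estimate depends only on the instantaneous curvature, it carries no cumulative integration error, which is the drift-free property claimed above relative to air-flow integration methods.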
Abstract:
Nanoindentation has become a common technique for measuring the hardness and elastic-plastic properties of materials, including coatings and thin films. In recent years, different nanoindenter instruments have been commercialised and used for this purpose. Each instrument is equipped with its own analysis software for deriving the hardness and reduced Young's modulus from the raw data, which are mostly analysed using the Oliver and Pharr method. In all cases, calibration of the compliance and the area function is mandatory. The present work illustrates and describes a calibration procedure and an approach to raw data analysis carried out on six different nanoindentation instruments through several round-robin experiments. Three different indenters were used (Berkovich, cube corner and spherical) and three standardised reference samples were chosen (hard fused quartz, soft polycarbonate and sapphire). It was clearly shown that the use of these common procedures consistently limited the spread of the hardness and reduced Young's modulus data compared to the same measurements performed using instrument-specific procedures. The following recommendations for nanoindentation calibration must be followed: (a) use only sharp indenters; (b) set an upper cut-off value for the penetration depth below which measurements must be considered unreliable; (c) perform nanoindentation measurements with limited thermal drift; (d) ensure that the load-displacement curves are as smooth as possible; (e) perform stiffness measurements specific to each instrument/indenter couple; (f) use fused quartz (Fq) and sapphire (Sa) as calibration reference samples for stiffness and area function determination; (g) use a function, rather than a single value, for the stiffness; and (h) adopt a unique protocol and software for raw data analysis in order to limit the data spread related to the instruments (i.e. the level of drift or noise, defects of a given probe) and to make the H and Er data intercomparable.
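The Oliver and Pharr quantities that the round-robin compares can be sketched directly: hardness H = Pmax / A(hc) and reduced modulus Er = sqrt(pi) * S / (2 * sqrt(A)), with contact depth hc = hmax - 0.75 * Pmax / S for a Berkovich tip. The ideal area function A(hc) = 24.5 * hc^2 is used here for simplicity; in practice the calibrated area function recommended above replaces it, and the numeric inputs are illustrative:

```python
# Oliver-Pharr analysis sketch (consistent units: load in mN, depth in um,
# stiffness in mN/um, giving H and Er in GPa).
import math

def oliver_pharr(p_max, h_max, stiffness):
    """Hardness and reduced Young's modulus from peak load P_max,
    maximum depth h_max and unloading contact stiffness S."""
    hc = h_max - 0.75 * p_max / stiffness   # contact depth (eps = 0.75)
    area = 24.5 * hc ** 2                   # ideal Berkovich area function
    hardness = p_max / area
    er = math.sqrt(math.pi) * stiffness / (2.0 * math.sqrt(area))
    return hardness, er
```

With inputs of roughly fused-quartz magnitude (P_max = 10 mN, h_max = 0.3 um, S = 100 mN/um), the sketch returns H and Er of the expected order for that reference material, which is why recommendation (f) above uses fused quartz for area-function determination.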