962 results for Measurement errors


Relevance:

60.00%

Publisher:

Abstract:

Weighing lysimeters are the standard method for directly measuring evapotranspiration (ET). This paper discusses the construction, installation, and performance of two 1.52 m × 1.52 m × 2.13 m deep repacked weighing lysimeters for measuring ET of corn and soybean in West Central Nebraska. The cost of constructing and installing each lysimeter was approximately US $12,500, which could vary depending on the availability and cost of equipment and labor. The resolution of the lysimeters was 0.0001 mV V⁻¹, which was limited by the data processing and storage resolution of the datalogger. This resolution was equivalent to 0.064 and 0.078 mm of ET for the north and south lysimeters, respectively. Since the percent measurement error decreases as the magnitude of the measured ET increases, this resolution is adequate for measuring ET for daily and longer periods, but not for shorter time steps. This resolution would result in measurement errors of less than 5% for ET values of ≥3 mm, but the percent error rapidly increases for lower ET values. The resolution of the lysimeters could potentially be improved by choosing a datalogger that could process and store data with a higher resolution than the one used in this study.
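
As an illustration (not from the paper), the following minimal Python sketch reproduces the resolution-to-percent-error arithmetic described above, assuming the quoted per-count resolutions of 0.064 and 0.078 mm of ET and a worst-case error of one resolution step.

```python
# Illustrative only: percent measurement error implied by a fixed ET resolution.
# The resolutions (mm of ET per datalogger count) are those quoted in the abstract.
resolutions_mm = {"north": 0.064, "south": 0.078}

def percent_error(et_mm: float, resolution_mm: float) -> float:
    """Worst-case percent error when ET is quantised to one resolution step."""
    return 100.0 * resolution_mm / et_mm

for name, res in resolutions_mm.items():
    for et in (0.5, 1.0, 3.0, 5.0):
        print(f"{name} lysimeter, ET = {et:.1f} mm: error <= {percent_error(et, res):.1f}%")
```

Under these assumptions the worst-case error at 3 mm of ET is below 3%, consistent with the sub-5% figure quoted for ET ≥ 3 mm, while at 0.5 mm it exceeds 12%.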

Relevance:

60.00%

Publisher:

Abstract:

This thesis studies human gene expression space using high-throughput gene expression data from DNA microarrays. In molecular biology, high-throughput techniques allow numerical measurement of the expression of tens of thousands of genes simultaneously. In a single study, these data are traditionally obtained from a limited number of sample types with a small number of replicates. For organism-wide analysis, such data have been largely unavailable and the global structure of the human transcriptome has remained unknown. This thesis introduces a human transcriptome map of different biological entities and an analysis of its general structure. The map is constructed from gene expression data from the two largest public microarray data repositories, GEO and ArrayExpress. The creation of this map contributed to the development of ArrayExpress by identifying and retrofitting previously unusable and missing data and by improving access to its data. It also contributed to the creation of several new tools for microarray data manipulation and to the establishment of data exchange between GEO and ArrayExpress. The data integration for the global map required the creation of a new large ontology of human cell types, disease states, organism parts and cell lines. The ontology was used in a new text-mining and decision-tree-based method for automatic conversion of human-readable free-text microarray data annotations into a categorised format. Data comparability in this large cross-laboratory integrated dataset, and minimisation of the systematic measurement errors characteristic of each laboratory, were ensured by computing a range of microarray data quality metrics and excluding incomparable data. The structure of the global map of human gene expression was then explored by principal component analysis and hierarchical clustering, using heuristics and help from another purpose-built sample ontology. A preface and motivation for the construction and analysis of a global map of human gene expression are given by the analysis of two microarray datasets of human malignant melanoma. The analysis of these sets incorporates an indirect comparison of statistical methods for finding differentially expressed genes and points to the need to study gene expression at a global level.
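
The abstract contains no code; the following is a minimal, hypothetical sketch of the kind of principal component analysis and hierarchical clustering used to explore such a global expression map, assuming a samples-by-genes matrix that has already been normalised and quality-filtered (the random matrix below is only a placeholder).

```python
# Illustrative sketch: explore the structure of an expression matrix
# (samples x genes) with PCA followed by hierarchical clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
expr = rng.normal(size=(200, 5000))      # placeholder for a normalised expression matrix

# PCA via SVD of the mean-centred matrix
centred = expr - expr.mean(axis=0)
u, s, vt = np.linalg.svd(centred, full_matrices=False)
scores = u[:, :10] * s[:10]              # sample coordinates in the first 10 components

# Average-linkage hierarchical clustering of samples in PCA space
tree = linkage(scores, method="average", metric="euclidean")
labels = fcluster(tree, t=5, criterion="maxclust")
print(np.bincount(labels))               # cluster sizes
```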

Relevance:

60.00%

Publisher:

Abstract:

Radiation therapy (RT) currently plays a significant role in the curative treatment of several cancers. External beam RT is carried out mostly using megavoltage beams from linear accelerators. Tumor eradication and normal tissue complications correlate with the dose absorbed in tissue. This dependence is normally steep, so it is crucial that the actual dose within the patient accurately corresponds to the planned dose. All factors in an RT procedure carry uncertainties and therefore require strict quality assurance. From a hospital physicist's point of view, technical quality control (QC), dose calculations and methods for verifying the correct treatment location are the most important subjects. The most important factor in technical QC is verifying that the radiation production of an accelerator, called the output, stays within narrow acceptable limits. The output measurements are carried out according to a locally chosen dosimetric QC program that defines the measurement time interval and the action levels. Dose calculation algorithms need to be configured for the accelerators using measured beam data, and the uncertainty of these data sets a limit on the best achievable calculation accuracy. All of these dosimetric measurements require considerable experience, are laborious, consume resources needed for treatments and are prone to several random and systematic sources of error. Appropriate verification of the treatment location is more important in intensity modulated radiation therapy (IMRT) than in conventional RT, because steep dose gradients are produced within, or close to, healthy tissues located only a few millimetres from the targeted volume. The thesis concentrated on the quality of dosimetric measurements, the efficacy of dosimetric QC programs, the verification of measured beam data, and the effect of positional errors on the dose received by the major salivary glands in head and neck IMRT. A method was developed for estimating the effect of different dosimetric QC programs on the overall uncertainty of dose, and data were provided to facilitate the choice of a sufficient QC program. The method takes into account local output stability and the reproducibility of the dosimetric QC measurements; a method based on model fitting of the QC measurement results was proposed for estimating both of these factors. The reduction of random measurement errors and the optimization of the QC procedure were also investigated, and a method and suggestions were presented for these purposes. The accuracy of beam data was evaluated in Finnish RT centres, a sufficient accuracy level was estimated for the beam data, and a method based on the use of reference beam data was developed for the QC of beam data. Dosimetric and geometric accuracy requirements were evaluated for head and neck IMRT when the function of the major salivary glands is to be spared; these criteria are based on the dose response obtained for the glands. Random measurement errors could be reduced, enabling action levels to be lowered and the measurement time interval to be prolonged from 1 month to as much as 6 months while maintaining dose accuracy. The combined effect of the proposed methods, suggestions and criteria was found to help avoid maximal dose errors of up to about 8%. In addition, their use may make the strictest recommended overall dose accuracy level of 3% (1 SD) achievable.
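
As a hypothetical illustration of the model-fitting idea mentioned above (not the thesis' actual method), the sketch below fits a linear drift to a series of accelerator output QC measurements and takes the residual scatter as an estimate of measurement reproducibility; all numbers are invented.

```python
# Illustrative only: separate long-term output drift from random measurement scatter
# by fitting a simple linear trend to dosimetric QC output measurements (invented data).
import numpy as np

rng = np.random.default_rng(1)
days = np.arange(0, 366, 30, dtype=float)                  # hypothetical monthly QC checks
output = 100.0 + 0.004 * days + rng.normal(0.0, 0.3, days.size)   # output in % of nominal

slope, intercept = np.polyfit(days, output, 1)             # drift model: output = a*t + b
residuals = output - (slope * days + intercept)
reproducibility_sd = residuals.std(ddof=2)                 # random scatter about the trend

print(f"estimated drift: {slope * 365:.2f} % per year")
print(f"reproducibility (1 SD): {reproducibility_sd:.2f} %")
```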

Relevance:

60.00%

Publisher:

Abstract:

A new finite element method is developed to analyse non-conservative structures with more than one parameter behaving in a stochastic manner. As a generalization, the paper treats the non-self-adjoint random eigenvalue problem that arises when the material property values of a non-conservative structural system exhibit stochastic fluctuations resulting from manufacturing and measurement errors. The free vibration problems of a stochastic Beck's column and a stochastic Leipholz column, whose Young's modulus and mass density are distributed stochastically, are considered. The stochastic finite element method thus developed is implemented to arrive at a random non-self-adjoint algebraic eigenvalue problem, and the stochastic characteristics of the eigensolutions are derived in terms of the stochastic material property variations. Numerical examples are given. It is demonstrated that, through this formulation, the finite element discretization need not depend on the characteristics of the stochastic processes describing the fluctuations in material property values.
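
The following is not the paper's stochastic finite element formulation, but a hypothetical Monte Carlo sketch of the underlying problem type: a nonsymmetric (follower-force style) stiffness matrix scaled by a randomly fluctuating Young's modulus yields a random non-self-adjoint generalized eigenvalue problem whose eigenvalue scatter can be sampled directly.

```python
# Illustrative Monte Carlo sketch of a random non-self-adjoint eigenvalue problem.
# Not the perturbation-based SFEM of the paper; the matrices below are placeholders.
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(42)

# Placeholder 2-DOF system: K0 is nonsymmetric, as produced by a follower load;
# M is a lumped mass matrix. Neither matrix is taken from the paper.
K0 = np.array([[ 2.0, -1.5],
               [-1.0,  1.0]])
M = np.diag([1.0, 0.5])

fundamental = []
for _ in range(2000):
    e = 1.0 + 0.05 * rng.standard_normal()    # Young's modulus with 5% random fluctuation
    lam, _ = eig(e * K0, M)                   # random non-self-adjoint generalized eigenproblem
    fundamental.append(lam[np.argmin(np.abs(lam))])

fundamental = np.array(fundamental)
print("mean of fundamental eigenvalue:", fundamental.mean())
print("standard deviation:", fundamental.std())
```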

Relevance:

60.00%

Publisher:

Abstract:

Galaxies evolve throughout the history of the universe from the first star-forming sources, through gas-rich asymmetric structures with rapid star formation rates, to the massive symmetrical stellar systems observed at the present day. Determining the physical processes which drive galaxy formation and evolution is one of the most important questions in observational astrophysics. This thesis presents four projects aimed at improving our understanding of galaxy evolution from detailed measurements of star forming galaxies at high redshift.

We use resolved spectroscopy of gravitationally lensed z ≃ 2-3 star forming galaxies to measure their kinematic and star formation properties. The combination of lensing with adaptive optics yields a physical resolution of ≃100 pc, sufficient to resolve giant Hii regions. We find that ~70% of the galaxies in our sample display ordered rotation with high local velocity dispersion, indicating turbulent thick disks. The rotating galaxies are gravitationally unstable and are expected to fragment into giant clumps. The size and dynamical mass of giant Hii regions are in agreement with predictions for such clumps, indicating that gravitational instability drives the rapid star formation. The remainder of our sample comprises ongoing major mergers. Merging galaxies display star formation rates, morphologies, and local velocity dispersions similar to those of isolated sources, but their velocity fields are more chaotic, with no coherent rotation.

We measure resolved metallicity in four lensed galaxies at z = 2.0 − 2.4 from optical emission line diagnostics. Three rotating galaxies display radial gradients with higher metallicity at smaller radii, while the fourth is undergoing a merger and has an inverted gradient with lower metallicity at the center. Strong gradients in the rotating galaxies indicate that they are growing inside-out with star formation fueled by accretion of metal-poor gas at large radii. By comparing measured gradients with an appropriate comparison sample at z = 0, we demonstrate that metallicity gradients in isolated galaxies must flatten at later times. The amount of size growth inferred from the gradients is in rough agreement with direct measurements of massive galaxies. We develop a chemical evolution model to interpret these data and conclude that metallicity gradients are established by a gradient in the outflow mass loading factor, combined with radial inflow of metal-enriched gas.

We present the first rest-frame optical spectroscopic survey of a large sample of low-luminosity galaxies at high redshift (L < L*, 1.5 < z < 3.5). This population dominates the star formation density of the universe at high redshifts, yet such galaxies are normally too faint to be studied spectroscopically. We take advantage of strong gravitational lensing magnification to compile observations for a sample of 29 galaxies using modest integration times with the Keck and Palomar telescopes. Balmer emission lines confirm that the sample has a median SFR ∼ 10 M_sun yr^−1 and extends to lower SFR than has been probed by other surveys at similar redshift. We derive the metallicity, dust extinction, SFR, ionization parameter, and dynamical mass from the spectroscopic data, providing the first accurate characterization of the star-forming environment in low-luminosity galaxies at high redshift. For the first time, we directly test the proposal that the relation between galaxy stellar mass, star formation rate, and gas phase metallicity does not evolve. We find lower gas phase metallicity in the high redshift galaxies than in local sources with equivalent stellar mass and star formation rate, arguing against a time-invariant relation. While our result is preliminary and may be biased by measurement errors, this represents an important first measurement that will be further constrained by ongoing analysis of the full data set and by future observations.

We present a study of composite rest-frame ultraviolet spectra of Lyman break galaxies at z = 4 and discuss implications for the distribution of neutral outflowing gas in the circumgalactic medium. In general we find similar spectroscopic trends to those found at z = 3 by earlier surveys. In particular, absorption lines which trace neutral gas are weaker in less evolved galaxies with lower stellar masses, smaller radii, lower luminosity, less dust, and stronger Lyα emission. Typical galaxies are thus expected to have stronger Lyα emission and weaker low-ionization absorption at earlier times, and we indeed find somewhat weaker low-ionization absorption at higher redshifts. In conjunction with earlier results, we argue that the reduced low-ionization absorption is likely caused by lower covering fraction and/or velocity range of outflowing neutral gas at earlier epochs. This result has important implications for the hypothesis that early galaxies were responsible for cosmic reionization. We additionally show that fine structure emission lines are sensitive to the spatial extent of neutral gas, and demonstrate that neutral gas is concentrated at smaller galactocentric radii in higher redshift galaxies.

The results of this thesis present a coherent picture of galaxy evolution at high redshifts 2 ≲ z ≲ 4. Roughly 1/3 of massive star forming galaxies at this period are undergoing major mergers, while the rest are growing inside-out with star formation occurring in gravitationally unstable thick disks. Star formation, stellar mass, and metallicity are limited by outflows which create a circumgalactic medium of metal-enriched material. We conclude by describing some remaining open questions and prospects for improving our understanding of galaxy evolution with future observations of gravitationally lensed galaxies.

Relevance:

60.00%

Publisher:

Abstract:

A technique is presented for testing the optical quality of target mirrors using a scanning Hartmann test device. The device improves the aperture screen of the conventional Hartmann test: through rotational scanning of the scanning Hartmann screen, the full aperture of the target mirror under test can be sampled continuously. The scanning Hartmann device was used to test the energy concentration and wavefront aberration of an aspheric target mirror with an aperture of φ270 mm, and the results agree with those obtained with a laser digital wavefront interferometer; the relative measurement error was 7.7% for the energy concentration and 10.2% for the wavefront aberration, verifying the effectiveness of the testing technique.

Relevance:

60.00%

Publisher:

Abstract:

Accurate determination of the refractive index of a crystal is the basis for the design of thin-film devices on that crystal. A method for measuring crystal refractive indices with a spectrophotometer is introduced. The influence of back-surface reflection on the measured crystal reflectance is eliminated by the back-surface influence coefficient method, by coating the back surface with an antireflection film, or by a combination of the two; the detailed procedure is given and the measurement errors are analysed. Because crystals are optically anisotropic, a polarizer-scanning method is used to measure the variation of the optical properties with direction. Measurement of the refractive index of a LiB3P5 crystal confirms the feasibility of the method, which can also be applied to the measurement of the refractive index of other optical crystals.

Relevance:

60.00%

Publisher:

Abstract:

We describe the application of two types of stereo camera systems in fisheries research, including the design, calibration, analysis techniques, and precision of the data obtained with these systems. The first is a stereo video system deployed using a quick-responding winch with a live feed to provide species- and size-composition data adequate to produce acoustically based biomass estimates of rockfish. This system was tested on the eastern Bering Sea slope, where rockfish were measured. Rockfish sizes were similar to those sampled with a bottom trawl, and the relative error in multiple measurements of the same rockfish in multiple still-frame images was small. Measurement errors of up to 5.5% were found on a calibration target of known size. The second system consisted of a pair of still-image digital cameras mounted inside a midwater trawl. Processing of the stereo images allowed fish length, fish orientation in relation to the camera platform, and the relative distance of the fish to the trawl netting to be determined. The video system was useful for surveying fish in Alaska, but it could also be used broadly in other situations where it is difficult to obtain species- or size-composition information. Likewise, the still-image system could be used in fisheries research to obtain data on the size, position, and orientation of fish.
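
The abstract does not give implementation details; below is a hypothetical sketch of the basic stereo-triangulation step such systems rely on, assuming a calibrated and rectified camera pair. The focal length, baseline and matched pixel coordinates are invented, not taken from the study.

```python
# Illustrative stereo triangulation: recover the 3-D positions of a fish's snout and tail
# from matched pixel coordinates in a calibrated, rectified stereo pair, then take their
# distance as the length estimate. All numbers are hypothetical.
import math

def triangulate(x_left, x_right, y, focal_px, baseline_m, cx, cy):
    """Pinhole/rectified stereo: depth from disparity, then back-projection."""
    disparity = x_left - x_right
    z = focal_px * baseline_m / disparity
    return ((x_left - cx) * z / focal_px, (y - cy) * z / focal_px, z)

# Hypothetical calibration and matched points for one fish
f, B, cx, cy = 1200.0, 0.30, 640.0, 480.0
snout = triangulate(700.0, 620.0, 500.0, f, B, cx, cy)
tail = triangulate(760.0, 676.0, 510.0, f, B, cx, cy)
print("estimated fish length (m):", math.dist(snout, tail))
```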

Relevance:

60.00%

Publisher:

Abstract:

During the VITAL cruise in the Bay of Biscay in summer 2002, two devices for measuring the length of swimming fish were tested: 1) a mechanical crown, mounted on the main camera, that emitted a pair of parallel laser beams, and 2) an underwater auto-focus video camera. The precision and accuracy of these devices were compared, and the various sources of measurement error were estimated by repeatedly measuring fixed and mobile objects and live fish. It was found that fish mobility is the main source of error for these devices because they require that the objects to be measured be perpendicular to the field of vision. The best performance was obtained with the laser method, in which a video replay of the laser spots (projected on fish bodies) carrying real-time size information was used. The auto-focus system performed poorly because of a delay in obtaining focus and because of some technical problems.
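
As a hypothetical illustration of the scaling step implied by the parallel-laser method (not code from the study): the known physical spacing of the two projected laser spots provides a millimetres-per-pixel scale in the plane of the fish, from which the snout-to-tail pixel distance is converted to a length.

```python
# Illustrative scaling step for the parallel-laser method: the known spacing of the two
# projected laser spots gives the mm-per-pixel scale in the plane of the fish body.
import math

def fish_length_mm(snout_px, tail_px, spot1_px, spot2_px, laser_spacing_mm=100.0):
    # laser_spacing_mm is a hypothetical beam separation, not the value used on the cruise
    scale = laser_spacing_mm / math.dist(spot1_px, spot2_px)   # mm per pixel
    return scale * math.dist(snout_px, tail_px)

# Hypothetical pixel coordinates digitised from one video frame
print(fish_length_mm((120, 340), (760, 355), (400, 500), (400, 610)))
```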

Relevance:

60.00%

Publisher:

Abstract:

In previous papers (S. Adhikari and J. Woodhouse 2001 Journal of Sound and Vibration 243, 43-61; 63-88; S. Adhikari and J. Woodhouse 2002 Journal of Sound and Vibration 251, 477-490) methods were proposed to obtain the coefficient matrix for a viscous damping model or a non-viscous damping model with an exponential relaxation function, from measured complex natural frequencies and modes. In all these works, it has been assumed that exact complex natural frequencies and complex modes are known. In reality, this will not be the case. The purpose of this paper is to analyze the sensitivity of the identified damping matrices to measurement errors. By using numerical and analytical studies it is shown that the proposed methods can indeed be expected to give useful results from moderately noisy data provided a correct damping model is selected for fitting. Indications are also given of what level of noise in the measured modal properties is needed to mask the true physical behaviour.
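
As a hypothetical illustration of the noise-sensitivity question studied above (not the papers' procedure), the sketch below perturbs a set of exact complex natural frequencies with random noise and observes the scatter in the identified modal viscous damping coefficients, using the standard relation c_j = -2 Re(λ_j) for unit modal mass; all numbers are invented.

```python
# Illustrative Monte Carlo check of how noise in measured complex natural frequencies
# propagates into identified modal viscous damping (unit modal mass assumed).
import numpy as np

rng = np.random.default_rng(3)
omega = np.array([10.0, 25.0, 40.0])          # undamped natural frequencies (rad/s), invented
zeta = np.array([0.02, 0.015, 0.01])          # modal damping ratios, invented
lam_exact = -zeta * omega + 1j * omega * np.sqrt(1 - zeta**2)
c_exact = -2.0 * lam_exact.real               # exact modal damping coefficients

for noise in (0.001, 0.01, 0.05):             # relative noise on the measured eigenvalues
    perturb = rng.standard_normal((500, 3)) + 1j * rng.standard_normal((500, 3))
    lam_noisy = lam_exact * (1.0 + noise * perturb)
    c_ident = -2.0 * lam_noisy.real           # identified damping from noisy data
    rel_scatter = c_ident.std(axis=0) / c_exact
    print(f"noise {noise:.3f}: relative scatter in identified damping {rel_scatter}")
```

In this toy example even a few per cent of noise produces a much larger relative scatter in the identified damping, because the damping is encoded in the small real part of the eigenvalues.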

Relevance:

60.00%

Publisher:

Abstract:

This paper presents a Bayesian probabilistic framework to assess soil properties and model uncertainty to better predict excavation-induced deformations using field deformation data. The potential correlations between deformations at different depths are accounted for in the likelihood function needed in the Bayesian approach. The proposed approach also accounts for inclinometer measurement errors. The posterior statistics of the unknown soil properties and the model parameters are computed using the Delayed Rejection (DR) method and the Adaptive Metropolis (AM) method. As an application, the proposed framework is used to assess the unknown soil properties of multiple soil layers using deformation data at different locations and for incremental excavation stages. The developed approach can be used for the design of optimal revisions for supported excavation systems. © 2010 ASCE.
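
No implementation is given in the abstract; the following is a minimal, hypothetical adaptive-Metropolis sketch (without the delayed-rejection stage) for sampling the posterior of a two-parameter deformation model under a correlated Gaussian likelihood. The model, data and covariance are placeholders, not those of the paper.

```python
# Illustrative adaptive-Metropolis sampler for soil/model parameters given deformation
# data with correlated measurement errors. Everything below is a placeholder sketch.
import numpy as np

rng = np.random.default_rng(0)

def model(theta, depths):
    # Placeholder deformation model: wall deflection decaying exponentially with depth.
    return theta[0] * np.exp(-depths / theta[1])

depths = np.linspace(1.0, 20.0, 10)
Sigma = np.exp(-np.abs(depths[:, None] - depths[None, :]) / 5.0)   # correlated errors
Sigma_inv = np.linalg.inv(Sigma)
data = model(np.array([30.0, 8.0]), depths) + rng.multivariate_normal(np.zeros(10), Sigma)

def log_post(theta):
    if np.any(theta <= 0.0):
        return -np.inf                        # flat prior restricted to positive parameters
    r = data - model(theta, depths)
    return -0.5 * r @ Sigma_inv @ r

theta = np.array([20.0, 5.0])
lp = log_post(theta)
cov = np.diag([1.0, 0.5])                     # initial proposal covariance
chain = []
for i in range(5000):
    proposal = rng.multivariate_normal(theta, cov)
    lp_prop = log_post(proposal)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = proposal, lp_prop
    chain.append(theta)
    if i > 500:                               # adaptive step: update the proposal covariance
        cov = 2.38**2 / 2.0 * np.cov(np.array(chain).T) + 1e-6 * np.eye(2)

print("posterior mean:", np.array(chain)[1000:].mean(axis=0))
```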

Relevance:

60.00%

Publisher:

Abstract:

In geotechnical engineering, soil classification is an essential component in the design process. Field methods such as the cone penetration test (CPT) can be used as less expensive and faster alternatives to sample retrieval and testing. Unfortunately, current soil classification charts based on CPT data and laboratory measurements are too generic, and may not provide an accurate prediction of the soil type. A probabilistic approach is proposed here to update and modify soil identification charts based on site-specific CPT data. The probability that a soil is correctly classified is also estimated. The updated identification chart can be used for a more accurate prediction of the classification of the soil, and can account for prior information available before conducting the tests, site-specific data, and measurement errors. As an illustration, the proposed approach is implemented using CPT data from the Treporti Test Site (TTS) near Venice (Italy) and the National Geotechnical Experimentation Sites (NGES) at Texas A&M University. The applicability of the site-specific chart for other sites in Venice Lagoon is assessed using data from the Malamocco test site, approximately 20 km from TTS.
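
As a schematic illustration (not the paper's formulation), the sketch below updates the soil-class probabilities in one cell of a CPT classification chart by combining generic chart proportions, treated as a prior, with hypothetical site-specific boring classifications.

```python
# Schematic Bayes update of soil-class probabilities in one cell of a CPT classification
# chart: generic chart proportions act as the prior and are combined with hypothetical
# site-specific boring classifications falling in the same cell.
prior = {"clay": 0.60, "silt": 0.30, "sand": 0.10}   # generic chart proportions (prior)
site_counts = {"clay": 4, "silt": 9, "sand": 1}      # co-located site-specific borings (invented)

prior_weight = 10.0                                   # pseudo-count weight given to the prior
posterior = {k: prior_weight * prior[k] + site_counts[k] for k in prior}
total = sum(posterior.values())
posterior = {k: round(v / total, 3) for k, v in posterior.items()}

print(posterior)   # updated probability of each soil class for this chart cell
```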

Relevance:

60.00%

Publisher:

Abstract:

Hip fracture is the leading cause of acute orthopaedic hospital admission amongst the elderly, with around a third of patients not surviving one year post-fracture. Although various preventative therapies are available, patient selection is difficult. The current state-of-the-art risk assessment tool (FRAX) ignores focal structural defects, such as cortical bone thinning, a critical component in characterizing hip fragility. Cortical thickness can be measured using CT, but this is expensive and involves a significant radiation dose. Instead, Dual-Energy X-ray Absorptiometry (DXA) is currently the preferred imaging modality for assessing hip fracture risk and is used routinely in clinical practice. Our ambition is to develop a tool to measure cortical thickness using multi-view DXA instead of CT. In this initial study, we work with digitally reconstructed radiographs (DRRs) derived from CT data as a surrogate for DXA scans: this enables us to compare directly the thickness estimates with the gold standard CT results. Our approach involves a model-based femoral shape reconstruction followed by a data-driven algorithm to extract numerous cortical thickness point estimates. In a series of experiments on the shaft and trochanteric regions of 48 proximal femurs, we validated our algorithm and established its performance limits using 20 views in the range 0°-171°: estimation errors were 0.19 ± 0.53 mm (mean ± one standard deviation). In a more clinically viable protocol using four views in the range 0°-51°, where no other bony structures obstruct the projection of the femur, measurement errors were -0.07 ± 0.79 mm. © 2013 SPIE.

Relevance:

60.00%

Publisher:

Abstract:

The standard design process for the Siemens Industrial Turbomachinery, Lincoln (SITL), Dry Low Emissions (DLE) combustion systems has adopted the Eddy Dissipation Model with Finite Rate Chemistry (EDM/FRC) for reacting computational fluid dynamics simulations. The major drawbacks of this model have been the over-prediction of temperature and the lack of species data, limiting the applicability of the model. A novel combustion model, referred to as the Scalar Dissipation Rate Model (SDRM), has recently been developed based on a flamelet-type assumption. Previous attempts to adopt the flamelet philosophy with alternative closure models have failed, with the prediction of unphysical phenomena. The SDRM was developed from a physical understanding of the scalar dissipation rate, signifying the rate of mixing of hot and cold fluids at scales relevant to sustaining combustion in flames, and was validated using direct numerical simulation data and experimental measurements. This paper reports on the first industrial application of the SDRM to the SITL DLE combustion system; previous applications have considered ideally premixed laboratory-scale flames. The industrial application differs significantly in the complexity of the geometry, the unmixedness and the operating pressures. The model was implemented into ANSYS-CFX using its inbuilt command language. Simulations were run transiently using the Scale Adaptive Simulation turbulence model, which switches between Large Eddy Simulation and Unsteady Reynolds Averaged Navier-Stokes using a blending function. The model was validated in a research SITL DLE combustion system prior to being applied to the actual industrial geometry at real operating conditions. This system consists of the SGT-100 burner with a glass square-sectioned combustor allowing for detailed diagnostics. This paper shows the successful validation of the SDRM against time-averaged temperature and velocity, within measurement errors. The successful validation allowed application of the SDRM to the SGT-100 twin shaft at the relevant full-load conditions. Limited validation data were available due to the complexity of measurement in the real geometry. Comparison of surface temperatures and combustor exit temperature profiles showed an improvement over the EDM/FRC model. Furthermore, no unphysical phenomena were predicted. This paper presents the successful application of the SDRM to the industrial combustion system. The model shows a marked improvement in the prediction of temperature over the EDM/FRC model previously used. This is of significant importance for future applications of combustion CFD to the understanding of hardware mechanical integrity, combustion emissions and the dynamics of the flame. Copyright © 2012 by ASME.

Relevance:

60.00%

Publisher:

Abstract:

A computer program, QtUCP, has been developed based on several well-established algorithms, using GCC 4.0 and Qt® 4.0 (Open Source Edition) under Debian GNU/Linux 4.0r0. It can determine the unit-cell parameters from an electron diffraction tilt series obtained from both double-tilt and rotation-tilt holders. In this approach, two or more primitive cells of the reciprocal lattice are determined from the experimental data; at the same time, the measurement errors of the tilt angles are checked and minimized. Subsequently, the derived primitive cells are converted into the reduced form and then transformed into the reduced direct primitive cell. All the patterns are then indexed, and least-squares refinement is employed to obtain optimized lattice parameters. Finally, two examples are given to show the application of the program: one based on experiment, the other on simulation. © 2008 Elsevier B.V. All rights reserved.