Abstract:
Permafrost dynamics play an important role in high-latitude peatland carbon balance and are key to understanding the future response of soil carbon stocks. Permafrost aggradation can control the magnitude of the carbon feedback in peatlands through effects on peat properties. We compiled peatland plant macrofossil records for the northern permafrost zone (515 cores from 280 sites) and classified samples by vegetation type and environmental class (fen, bog, tundra and boreal permafrost, thawed permafrost). We examined differences in peat properties (bulk density, carbon (C), nitrogen (N) and organic matter content, C/N ratio) and C accumulation rates among vegetation types and environmental classes.
Abstract:
Two different slug test field methods are conducted in wells completed in a Puget Lowland aquifer and are examined for systematic error resulting from the water column displacement technique. Slug tests using the standard slug rod and the pneumatic method were repeated on the same wells, and hydraulic conductivity estimates were calculated according to the Bouwer & Rice and Hvorslev methods before applying a non-parametric statistical test. Practical considerations of performing the tests in real-life settings are also weighed in the method comparison. Statistical analysis indicates that the slug rod method yields hydraulic conductivity values up to 90% larger than the pneumatic method, with at least 95% certainty that the error is method related. This confirms, in a real-world setting, the slug-rod bias previously demonstrated by others in synthetic aquifers. In addition to more accurate values, the pneumatic method requires less field labor and less decontamination, and allows the magnitude of the initial displacement to be controlled, making it the superior slug test procedure.
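For context, a minimal sketch of how a Hvorslev conductivity estimate is typically computed from slug-test recovery data (the standard textbook formulation for a partially penetrating screen; this is not the authors' code, and the variable names are illustrative):

```python
import numpy as np

def hvorslev_k(r_casing, r_screen, screen_len, t, head_ratio):
    """Estimate hydraulic conductivity K [m/s] with the Hvorslev method.

    t, head_ratio: arrays of elapsed time [s] and normalized head h/h0.
    Applies to a partially penetrating screen with screen_len/r_screen > 8.
    """
    # Fit ln(h/h0) against t; the basic time lag T0 is the time at which
    # h/h0 falls to 0.37, i.e. T0 = -1/slope.
    slope, _ = np.polyfit(t, np.log(head_ratio), 1)
    T0 = -1.0 / slope
    return r_casing**2 * np.log(screen_len / r_screen) / (2 * screen_len * T0)
```

The Bouwer & Rice estimate follows the same log-linear fitting pattern with a different geometric factor.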
Abstract:
Before puberty, there are only small sex differences in body shape and composition. During adolescence, sexual dimorphism in bone, lean, and fat mass increases, giving rise to the greater size and strength of the male skeleton. The question remains as to whether there are sex differences in bone strength or simply differences in anthropometric dimensions. To test this, we applied hip structural analysis (HSA) to derive strength and geometric indices of the femoral neck using bone densitometry scans (DXA) from a 6-year longitudinal study in Canadian children. Seventy boys and sixty-eight girls were assessed annually for 6 consecutive years. At the femoral neck, cross-sectional area (CSA, an index of axial strength), subperiosteal width (SPW), and section modulus (Z, an index of bending strength) were determined, and data were analyzed using a hierarchical (random effects) modeling approach. Biological age (BA) was defined as years from age at peak height velocity (PHV). When BA, stature, and total-body lean mass (TB lean) were controlled, boys had significantly higher Z than girls at all maturity levels (P < 0.05). When CSA was adjusted for height and TB lean, a significant independent sex-by-BA interaction effect emerged (P < 0.05). That is, CSA was greater in boys before PHV but higher in girls after PHV. The coefficients contributing the greatest proportion to the prediction of CSA, SPW, and Z were height and lean mass. Because the significant sex difference in Z was relatively small and close to the error of measurement, we questioned its biological significance. The sex difference in bending strength was therefore explained by anthropometric differences. In contrast to recent hypotheses, we conclude that the CSA-lean ratio does not imply altered mechanosensitivity in girls because bending dominates loading at the neck, and the Z-lean ratio remained similar between the sexes throughout adolescence. That is, despite the greater CSA in girls, the bone is strategically placed to resist bending; hence, the bones of girls and boys adapt to mechanical challenges in a similar way.
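As a rough illustration of the geometric indices involved, the sketch below treats the femoral neck as a hollow circular cross-section and derives a section modulus from CSA and SPW. Actual HSA integrates the DXA bone-mass profile across the neck, so this is only a simplified analogue:

```python
import math

def annulus_strength(csa_cm2, spw_cm):
    """Hollow-circle model of the femoral neck (illustrative only).

    csa_cm2: cortical cross-sectional area (cm^2); spw_cm: subperiosteal
    width, i.e. the outer diameter (cm). Returns (CSMI, Z) in cm^4, cm^3.
    """
    r_out = spw_cm / 2.0
    # Inner radius consistent with the measured cortical area.
    r_in = math.sqrt(max(r_out**2 - csa_cm2 / math.pi, 0.0))
    csmi = math.pi * (r_out**4 - r_in**4) / 4.0  # cross-sectional moment of inertia
    z = csmi / r_out                             # section modulus (bending strength index)
    return csmi, z
```

The key intuition matches the abstract's conclusion: for a fixed CSA, placing bone farther from the neutral axis (larger SPW) raises Z, so bending strength depends on geometry as much as on the amount of bone.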
Abstract:
The sources of covariation among cognitive measures of Inspection Time, Choice Reaction Time, Delayed Response Speed and Accuracy, and IQ were examined in a classical twin design that included 245 monozygotic (MZ) and 298 dizygotic (DZ) twin pairs. Results indicated that a factor model comprising additive genetic and unique environmental effects was the most parsimonious. In this model, a general genetic cognitive factor emerged with factor loadings ranging from 0.28 to 0.64. Three other genetic factors explained the remaining genetic covariation between the various speed and Delayed Response measures and IQ. However, a large proportion of the genetic variation in verbal (54%) and performance (25%) IQ was unrelated to these lower-order cognitive measures. The independent genetic IQ variation may reflect information processes not captured by the elementary cognitive tasks (Inspection Time and Choice Reaction Time) or by our working memory task (Delayed Response). Unique environmental effects were mostly nonoverlapping and partly represented test measurement error.
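The classical twin-design logic behind such models can be illustrated with Falconer's variance decomposition from MZ and DZ twin correlations, a back-of-envelope version of the structural model fitted in the study:

```python
def falconer_estimates(r_mz, r_dz):
    """Classical twin-design decomposition (Falconer's formulas).

    Assumes additive genetic (A), shared-environment (C), and unique-
    environment (E) components; the model retained in the abstract
    dropped C, so c2 near zero is the expected outcome here.
    """
    a2 = 2 * (r_mz - r_dz)   # additive genetic share of variance
    c2 = 2 * r_dz - r_mz     # shared environment share
    e2 = 1 - r_mz            # unique environment, including measurement error
    return a2, c2, e2

# e.g. falconer_estimates(0.6, 0.3) -> (0.6, 0.0, 0.4)
```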
Abstract:
In vitro evolution imitates the natural evolution of genes and has been very successfully applied to the modification of coding sequences, but it has not yet been applied to promoter sequences. We propose an alternative method for functional promoter analysis by applying an in vitro evolution scheme consisting of rounds of error-prone PCR, followed by DNA shuffling and selection of mutant promoter activities. We modified the activity, in embryogenic sugarcane cells, of the promoter region of the Goldfinger isolate of banana streak virus and obtained mutant promoter sequences that showed an average mutation rate of 2.5% after one round of error-prone PCR and DNA shuffling. Selection and sequencing of promoter sequences with decreased or unaltered activity allowed us to rapidly map the position of one cis-acting element that influenced promoter activity in embryogenic sugarcane cells and to discover neutral mutations that did not affect promoter function. Applied immediately after the promoter boundaries have been defined by 5' deletion analysis, this selective-shotgun approach dramatically reduces the labor associated with traditional linker-scanning deletion analysis for revealing the position of functional promoter domains. Furthermore, this method allows the entire promoter to be investigated at once, rather than selected domains or nucleotides, increasing the prospect of identifying interacting promoter regions.
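A toy sketch of the mutagenesis step, using substitutions only at the roughly 2.5% per-base rate the abstract reports; real error-prone PCR has a biased mutation spectrum, and DNA shuffling additionally recombines fragments between variants:

```python
import random

BASES = "ACGT"

def error_prone_pcr(seq, rate=0.025, rng=random.Random(0)):
    """Introduce random base substitutions at a fixed per-base rate
    (illustrative stand-in for one round of error-prone PCR)."""
    return "".join(
        rng.choice([b for b in BASES if b != base]) if rng.random() < rate else base
        for base in seq
    )

# A library is then just many such variants, screened for activity:
library = [error_prone_pcr("ATGCATGCATGC" * 20) for _ in range(96)]
```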
Abstract:
The aerated stirred reactor (ASR) has been widely used in biochemical and wastewater treatment processes, yet the literature lacks information on how activated sludge properties and operating conditions affect its hydrodynamics and mass transfer coefficient. The aim of this study was to investigate the influence of flow regime, superficial gas velocity (U_G), power consumption per unit volume (P/V_L), sludge loading, and apparent viscosity (μ_ap) of the activated sludge fluid on the mixing time (t_m), gas hold-up (ε), and volumetric mass transfer coefficient (k_La) in an activated sludge aerated stirred column reactor (ASCR). The activated sludge fluid exhibited non-Newtonian rheological behavior. Sludge loading significantly affected the fluid hydrodynamics and mass transfer. With an increase in U_G and P/V_L, ε and k_La increased and t_m decreased. The ε, k_La, and t_m changed dramatically as the flow regime shifted from homogeneous to heterogeneous. The proposed mathematical models predicted the experimental results well under the conditions tested, indicating that U_G, P/V_L, and μ_ap had a significant impact on t_m, ε, and k_La; the models reproduced these values to within about ±8%, and always within ±10%.
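One common way to capture such dependencies is a power-law correlation fitted in log space; the form below is a plausible literature choice, not necessarily the exact model the authors fitted:

```python
import numpy as np

def fit_kla_correlation(u_g, p_vl, mu_ap, kla):
    """Fit k_La = a * U_G^b * (P/V_L)^c * mu_ap^d by least squares on logs.

    All inputs are 1-D NumPy arrays of observations in consistent units.
    Returns the prefactor a and exponents (b, c, d).
    """
    X = np.column_stack([np.ones_like(u_g), np.log(u_g),
                         np.log(p_vl), np.log(mu_ap)])
    coef, *_ = np.linalg.lstsq(X, np.log(kla), rcond=None)
    return np.exp(coef[0]), coef[1], coef[2], coef[3]
```

The same template fits t_m and ε; the reported ±8% agreement would correspond to the residual scatter of such fits.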
Abstract:
QTL detection experiments in livestock species commonly use the half-sib design. Each male is mated to a number of females, each female producing a limited number of progeny. Analysis consists of attempting to detect associations between phenotype and genotype measured on the progeny. When family sizes are limiting, experimenters may wish to incorporate as much information as possible into a single analysis. However, combining information across sires is problematic because of incomplete linkage disequilibrium between the markers and the QTL in the population. This study describes formulae for obtaining maximum likelihood estimates (MLEs) via the expectation-maximization (EM) algorithm for use in a multiple-trait, multiple-family analysis. A model specifying a QTL with only two alleles and a common within-sire error variance is assumed. Compared to single-family analyses, power can be improved up to fourfold with multi-family analyses. The accuracy and precision of QTL location estimates are also substantially improved. With small family sizes, the multi-family, multi-trait analyses substantially reduce, but do not entirely remove, biases in QTL effect estimates. In situations where multiple QTL alleles are segregating, the multi-family analysis will average out the effects of the different QTL alleles.
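The core of such an analysis is an EM fit of a mixture model. A bare-bones analogue for a two-component normal mixture with a common error variance is sketched below; a real half-sib analysis conditions the mixing weights on marker transmission probabilities rather than estimating a free weight:

```python
import numpy as np

def em_two_normal(y, n_iter=200):
    """EM for a two-component normal mixture with common variance,
    a simplified stand-in for the two-allele QTL model."""
    mu1, mu2 = y.min(), y.max()          # crude starting values
    sigma2, w = y.var(), 0.5
    for _ in range(n_iter):
        # E-step: posterior probability each record carries allele 1.
        d1 = w * np.exp(-(y - mu1) ** 2 / (2 * sigma2))
        d2 = (1 - w) * np.exp(-(y - mu2) ** 2 / (2 * sigma2))
        r = d1 / (d1 + d2)
        # M-step: update weights, means, and the common error variance.
        w = r.mean()
        mu1 = (r * y).sum() / r.sum()
        mu2 = ((1 - r) * y).sum() / (1 - r).sum()
        sigma2 = (r * (y - mu1) ** 2 + (1 - r) * (y - mu2) ** 2).mean()
    return mu1, mu2, sigma2, w
```

The multi-family extension shares sigma2 across families while letting each sire have its own QTL genotype configuration, which is where the pooled power gain comes from.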
Abstract:
A field study was performed in a hospital pharmacy to identify positive and negative influences on the detection of, and subsequent recovery from, initial errors or other failures, thus avoiding negative consequences. Confidential reports and follow-up interviews provided data on 31 near-miss incidents involving such recovery processes. Analysis revealed that the organizational culture around following procedures needed reinforcement, that some procedures could be improved, that building in extra checks was worthwhile, and that supporting unplanned recovery was essential for problems not covered by procedures. Guidance is given on how recovery performance could be measured. A case is made for supporting recovery as a complement to prevention-based safety methods.
Abstract:
The consensus from published studies is that plasma lipids are each influenced by genetic factors, and that this contributes to genetic variation in risk of cardiovascular disease. Heritability estimates for lipids and lipoproteins are in the range 0.48 to 0.87 when measured once per study participant. However, this ignores the confounding effects of biological variation, measurement error, and ageing, and a truer assessment of genetic effects on cardiovascular risk may be obtained from analysis of longitudinal twin or family data. We have analyzed information on plasma high-density lipoprotein (HDL) and low-density lipoprotein (LDL) cholesterol, and triglycerides, from 415 adult twins who provided blood on two to five occasions over 10 to 17 years. Multivariate modeling of genetic and environmental contributions to variation within and across occasions was used to assess the extent to which genetic and environmental factors have long-term effects on plasma lipids. Results indicated that more than one genetic factor influenced the HDL and LDL components of cholesterol, and triglycerides, over time in all studies. Nonshared environmental factors did not have significant long-term effects except for HDL. We conclude that when heritability of lipid risk factors is estimated on only one occasion, the existence of biological variation and measurement error leads to underestimation of the importance of genetic factors as a cause of variation in long-term risk within the population. In addition, our data suggest that different genes may affect the risk profile at different ages.
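The underestimation mechanism is simple arithmetic: occasion-specific noise inflates the phenotypic variance without adding genetic variance. A sketch with made-up numbers:

```python
def one_occasion_h2(var_g, var_e, var_error):
    """Compare long-term heritability with what a single measurement
    occasion yields (all variance components are illustrative)."""
    h2_true = var_g / (var_g + var_e)             # error-free, long-term h2
    h2_obs = var_g / (var_g + var_e + var_error)  # single-occasion estimate
    return h2_true, h2_obs

print(one_occasion_h2(0.6, 0.2, 0.2))  # (0.75, 0.6): noise deflates h2
```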
Abstract:
Formal methods have significant benefits for developing safety-critical systems: they allow correctness proofs, model checking of safety and liveness properties, deadlock checking, and so on. However, formal methods do not scale well and demand specialist skills when applied to real-world systems. For these reasons, development and analysis of large-scale safety-critical systems will require effective integration of formal and informal methods. In this paper, we use such an integrative approach to automate Failure Modes and Effects Analysis (FMEA), a widely used system safety analysis technique, using a high-level graphical modelling notation (Behavior Trees) and model checking. We inject component failure modes into the Behavior Trees and translate the resulting Behavior Trees to SAL code. This enables us to model-check whether the system, in the presence of these faults, satisfies its safety properties, specified as temporal logic formulas. The benefit of this process is tool support that automates the tedious and error-prone aspects of FMEA.
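In miniature, the automated FMEA loop amounts to enumerating failure-mode combinations and checking a safety property against each. The hypothetical two-component system below illustrates the idea only; the paper's actual approach translates Behavior Trees to SAL and uses a model checker:

```python
from itertools import product

# Hypothetical tank with a level sensor and a drain valve. None = no fault.
FAILURE_MODES = {"sensor": [None, "stuck_low"], "valve": [None, "stuck_open"]}

def is_safe(sensor_fail, valve_fail, level):
    """Safety property: the valve must not be open at high level."""
    reading = 0 if sensor_fail == "stuck_low" else level
    valve_open = True if valve_fail == "stuck_open" else reading < 10
    return not (level >= 10 and valve_open)

# Exhaustively 'model-check' every fault combination over all states.
for s_fail, v_fail in product(*FAILURE_MODES.values()):
    bad = [lvl for lvl in range(15) if not is_safe(s_fail, v_fail, lvl)]
    print(f"sensor={s_fail}, valve={v_fail}: violations at levels {bad}")
```

Running it shows the fault-free system is safe while either failure mode produces violations, which is exactly the per-failure-mode verdict an FMEA table records.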
Abstract:
Finite mixture models are being increasingly used to model the distributions of a wide variety of random phenomena. While normal mixture models are often used to cluster data sets of continuous multivariate data, a more robust clustering can be obtained by considering the t mixture model-based approach. Mixtures of factor analyzers enable model-based density estimation to be undertaken for high-dimensional data where the number of observations n is not very large relative to their dimension p. As the approach using the multivariate normal family of distributions is sensitive to outliers, it is more robust to adopt the multivariate t family for the component error and factor distributions. The computational aspects associated with robustness and high dimensionality in these approaches to cluster analysis are discussed and illustrated.
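The practical payoff of the factor-analyzer constraint, Sigma = Lambda Lambda^T + Psi with Lambda of size p x q and Psi diagonal, is a drastic reduction in per-component covariance parameters for large p; a quick count makes this concrete:

```python
def mfa_param_count(p, q, g):
    """Covariance parameters for g components: full covariances versus
    the factor-analyzer structure Sigma = Lambda Lambda^T + Psi."""
    full = p * (p + 1) // 2                    # unconstrained covariance
    factored = p * q - q * (q - 1) // 2 + p    # identified Lambda + diagonal Psi
    return g * full, g * factored

print(mfa_param_count(p=100, q=5, g=3))  # (15150, 1770)
```

Swapping the normal for the multivariate t family adds only one degrees-of-freedom parameter per component while downweighting outliers, which is the robustness argument the abstract makes.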
Abstract:
The XSophe computer simulation software suite, consisting of a daemon, the XSophe interface, and the computational program Sophe, is a state-of-the-art package for the simulation of electron paramagnetic resonance spectra. The Sophe program performs the computer simulation and incorporates a number of new technologies, including the SOPHE partition and interpolation schemes, a field segmentation algorithm, homotopy, parallelisation, and spectral optimisation. The SOPHE partition and interpolation scheme, together with the field segmentation algorithm, greatly increases the speed of simulations for most systems. Multidimensional homotopy provides an efficient method for accurately tracing energy levels, and hence transitions, in the presence of energy level anticrossings and looping transitions, and allows computer simulations in frequency space. Recent enhancements to Sophe include a generalised treatment of distributions of orientational parameters, termed the mosaic misorientation linewidth model, and a faster, more efficient algorithm for the calculation of resonant field positions and transition probabilities. For complex systems, parallelisation enables their simulation on a parallel computer, and the optimisation algorithms in the suite give the experimentalist the possibility of finding the spin Hamiltonian parameters in a systematic manner rather than by trial and error. The XSophe software suite has been used to simulate multifrequency EPR spectra (200 MHz to 600 GHz) from isolated spin systems (S >= 1/2) and coupled centres (S_i, S_j >= 1/2). Griffin, M.; Muys, A.; Noble, C.; Wang, D.; Eldershaw, C.; Gates, K.E.; Burrage, K.; Hanson, G.R. "XSophe, a Computer Simulation Software Suite for the Analysis of Electron Paramagnetic Resonance Spectra", Mol. Phys. Rep. 1999, 26, 60-84.
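For orientation, here is the textbook limiting case of the resonant-field problem that Sophe solves in full generality (isotropic g, S = 1/2, no hyperfine or exchange couplings):

```python
import scipy.constants as const

def resonance_field(freq_hz, g=2.0023):
    """First-order resonance field for an isotropic S = 1/2 centre:
    B = h * nu / (g * mu_B). Anisotropic g, zero-field splittings, and
    coupled spins require the numerical machinery described above."""
    return const.h * freq_hz / (g * const.value("Bohr magneton"))

print(resonance_field(9.5e9))  # X-band, ~0.339 T
```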
Abstract:
Software simulation models are computer programs that need to be verified and debugged like any other software. In previous work, a method for error isolation in simulation models was proposed. The method relies on a set of feature matrices that can be used to determine which part of the model implementation is responsible for deviations in the output of the model. Currently these feature matrices have to be generated by hand from the model implementation, which is a tedious and error-prone task. In this paper, a method based on mutation analysis, together with prototype tool support, is presented for verifying the manually generated feature matrices. The application of the method and tool to a model for wastewater treatment shows that the feature matrices can be verified effectively using a minimal number of mutants.
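A toy version of the mutation-analysis idea: perturb the model implementation and check that a (hand-made) feature flag catches the resulting output deviation. All names here are illustrative, not the authors' tooling:

```python
def model(x, rate=0.5):
    """Original (reference) implementation."""
    return x * rate

# Mutants: small deliberate changes to the implementation.
MUTANTS = [lambda x: x * 0.5 + 1,   # mutant 1: spurious offset
           lambda x: x * 0.25]      # mutant 2: wrong coefficient

def feature_flags(reference, candidate, xs, tol=1e-9):
    """1 where the candidate's output deviates from the reference."""
    return [int(abs(reference(x) - candidate(x)) > tol) for x in xs]

for i, mut in enumerate(MUTANTS, 1):
    print("mutant", i, feature_flags(model, mut, xs=[0.0, 1.0, 2.0]))
```

A feature matrix is verified when every mutant of a given model part is flagged by the matrix entries attributed to that part, and only those.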
Abstract:
We investigate the performance of error-correcting codes, where the code word comprises products of K bits selected from the original message and decoding is carried out utilizing a connectivity tensor with C connections per index. Shannon's bound for the channel capacity is recovered for large K and zero temperature when the code rate K/C is finite. Close to optimal error-correcting capability is obtained for finite K and C. We examine the finite-temperature case to assess the use of simulated annealing for decoding and extend the analysis to accommodate other types of noisy channels.
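The Shannon bound referred to is, for a binary symmetric channel, the capacity 1 - H2(p); a one-liner makes the target concrete (note this capacity is distinct from the connectivity C of the abstract):

```python
import numpy as np

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with flip probability
    0 < p < 1: 1 - H2(p), the limit such codes approach for large K."""
    h2 = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    return 1.0 - h2

print(bsc_capacity(0.1))  # ~0.531 bits per channel use
```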