921 results for approximation error


Relevance: 20.00%

Publisher:

Abstract:

Most previous investigations on tide-induced watertable fluctuations in coastal aquifers have been based on one-dimensional models that describe the processes in the cross-shore direction alone, assuming negligible along-shore variability. A recent study proposed a two-dimensional approximation for tide-induced watertable fluctuations that took into account coastline variations. Here, we further develop this approximation in two ways, by extending the approximation to second order and by taking into account capillary effects. Our results demonstrate that both effects can markedly influence watertable fluctuations. In particular, with the first-order approximation, the local damping rate of the tidal signal could be subject to sizable errors.
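
As a point of reference for the damping-rate discussion above, the sketch below evaluates the classic first-order, one-dimensional solution that such two-dimensional approximations extend: a tidal signal decaying and lagging with distance inland at a rate set by aquifer properties. This is the standard linearized result, not the paper's second-order or capillarity-corrected approximation, and all parameter values are illustrative.

```python
import numpy as np

def tidal_watertable_1d(x, t, A, omega, n_e, K, d):
    """First-order 1-D solution for tide-driven water table fluctuations:
    h(x, t) = A exp(-k x) cos(omega t - k x), with damping rate / wave
    number k = sqrt(n_e * omega / (2 K d)). Illustrative only."""
    k = np.sqrt(n_e * omega / (2.0 * K * d))
    return A * np.exp(-k * x) * np.cos(omega * t - k * x)

# Illustrative values: semi-diurnal (M2) tide over a sandy aquifer
omega = 2 * np.pi / (12.42 * 3600)  # tidal frequency [rad/s]
h = tidal_watertable_1d(x=50.0, t=0.0, A=1.0, omega=omega,
                        n_e=0.3, K=1e-4, d=10.0)
print(f"water table perturbation 50 m inland: {h:.4f} m")
```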

Relevance: 20.00%

Publisher:

Abstract:

We theoretically study the Hilbert space structure of two neighboring P-donor electrons in silicon-based quantum computer architectures. To use electron spins as qubits, a crucial condition is the isolation of the electron spins from their environment, including the electronic orbital degrees of freedom. We provide detailed electronic structure calculations of both the single-donor electron wave function and the two-electron pair wave function, adopting a molecular orbital method for the two-electron problem and forming a basis from the calculated single-donor electron orbitals. Our two-electron basis contains many singlet and triplet orbital excited states, in addition to the two simple ground-state singlet and triplet orbitals usually used in the Heitler-London approximation to describe the two-electron donor-pair wave function. We determine the excitation spectrum of the two-donor system and study its dependence on strain, lattice position, and interdonor separation. This allows us to determine how isolated the ground-state singlet and triplet orbitals are from the rest of the excited-state Hilbert space. In addition to calculating the energy spectrum, we evaluate the exchange coupling between the two donor electrons and the double-occupancy probability that both electrons reside on the same P donor. These two quantities are very important for logical operations in solid-state quantum computing devices: a large exchange coupling achieves faster gating times, while the magnitude of the double-occupancy probability can affect the error rate.
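
To make the gating-time remark concrete: for a Heisenberg exchange coupling H = J S1·S2, a full SWAP operation requires a pulse area of roughly ∫J dt/ħ = π under one common convention, so a larger J means a shorter gate. A back-of-envelope sketch, not the paper's calculation (conventions differ by factors of 2, and the J value is an assumed order of magnitude):

```python
import math

HBAR_EV_S = 6.582119569e-16  # reduced Planck constant [eV s]

def swap_gate_time(J_eV):
    """Duration of a full SWAP for Heisenberg coupling H = J S1.S2,
    taking the pulse-area condition integral(J dt)/hbar = pi.
    Illustrative convention only."""
    return math.pi * HBAR_EV_S / J_eV

J = 0.1e-3  # assumed singlet-triplet splitting of 0.1 meV
print(f"SWAP time for J = 0.1 meV: {swap_gate_time(J) * 1e12:.1f} ps")
```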

Relevance: 20.00%

Publisher:

Abstract:

Analysis of a major multi-site epidemiologic study of heart disease has required estimation of the pairwise correlations of several measurements across sub-populations. Because the measurements from each sub-population were subject to sampling variability, the Pearson product-moment estimator of these correlations produces biased estimates. This paper proposes a model that takes into account within- and between-sub-population variation, provides algorithms for obtaining maximum likelihood estimates of these correlations, and discusses several approaches for obtaining interval estimates. (C) 1997 by John Wiley & Sons, Ltd.
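
A minimal simulation of the bias in question, assuming a simple two-level model: within-sub-population sampling noise attenuates the naive Pearson estimate of the between-sub-population correlation by the classic reliability factor. This illustrates the problem, not the paper's maximum likelihood estimator; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# True sub-population means for two measurements, correlated with rho = 0.8
n_pops, rho = 200, 0.8
true_means = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n_pops)

# Each mean is estimated from a finite sample, so observed values carry
# within-sub-population sampling noise with variance sigma2 / n_per_pop
n_per_pop, sigma2 = 25, 4.0
observed = true_means + rng.normal(0, np.sqrt(sigma2 / n_per_pop),
                                   size=(n_pops, 2))

r_naive = np.corrcoef(observed[:, 0], observed[:, 1])[0, 1]
# Classic attenuation: E[r] ~ rho * reliability
reliability = 1.0 / (1.0 + sigma2 / n_per_pop)
print(f"naive Pearson r = {r_naive:.3f}; attenuation predicts ~ {rho * reliability:.3f}")
```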

Relevance: 20.00%

Publisher:

Abstract:

Background: Biochemical analysis of fluid is the primary laboratory approach in pleural effusion diagnosis. Standardization of the steps between collection and laboratory analysis is fundamental to maintaining the quality of the results. We evaluated the influence of temperature and storage time on sample stability. Methods: Pleural fluid from 30 patients was submitted to analyses of proteins, albumin, lactic dehydrogenase (LDH), cholesterol, triglycerides, and glucose. Aliquots were stored at 21 °C, 4 °C, and −20 °C, and concentrations were determined after 1, 2, 3, 4, 7, and 14 days. LDH isoenzymes were quantified in 7 random samples. Results: Due to the instability of isoenzymes 4 and 5, a decrease in LDH was observed in the first 24 h in samples maintained at −20 °C and after 2 days in samples maintained at 4 °C. Aside from glucose, all parameters were stable through at least day 4 when stored at room temperature or 4 °C. Conclusions: Temperature and storage time are potential preanalytical sources of error in pleural fluid analyses, mainly given the instability of glucose and LDH. The ideal procedure is to execute all the tests immediately after collection; however, most of the tests can be done on refrigerated samples, except for LDH analysis. (C) 2010 Elsevier B.V. All rights reserved.

Relevance: 20.00%

Publisher:

Abstract:

Parenteral anticoagulation is a cornerstone in the management of venous and arterial thrombosis. Unfractionated heparin has a highly variable dose-response relationship, requiring frequent and troublesome laboratory follow-up. Because of these factors, low-molecular-weight heparin use has been increasing. Inadequate dosage has been pointed out as a potential problem, because the use of subjectively estimated weight instead of measured weight is common practice in the emergency department (ED). To evaluate the impact of inadequate weight estimation on enoxaparin dosage, we investigated the adequacy of anticoagulation of patients in a tertiary ED where subjective weight estimation is common practice. We obtained the estimated, informed, and measured weights of 28 patients in need of parenteral anticoagulation. Basal and steady-state (after the second subcutaneous dose of enoxaparin) anti-Xa activity was obtained as a measure of adequate anticoagulation. The patients were divided into 2 groups according to anticoagulation adequacy. Of the 28 patients enrolled, 75% (group 1, n = 21) received at least 0.9 mg/kg per dose BID of enoxaparin and 25% (group 2, n = 7) received less than 0.9 mg/kg per dose BID. Only 4 (14.3%) of all patients had anti-Xa activity below the inferior limit of the therapeutic range (<0.5 IU/mL), all of them from group 2. In conclusion, when weight estimation was used to determine the enoxaparin dosage, 25% of the patients were inadequately anticoagulated (anti-Xa activity <0.5 IU/mL) during the initial, crucial phase of treatment. (C) 2011 Elsevier Inc. All rights reserved.
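
The arithmetic behind the adequacy criterion is simple, and shows how a modest weight underestimate pushes a patient below the 0.9 mg/kg-per-dose threshold. The weights in this sketch are hypothetical, not study data:

```python
def dose_adequate(dose_mg, weight_kg, min_mg_per_kg=0.9):
    """True if a twice-daily enoxaparin dose meets the study's adequacy
    threshold of at least 0.9 mg/kg per dose. Weights are hypothetical."""
    return dose_mg / weight_kg >= min_mg_per_kg

dose = 0.9 * 70                 # 63 mg, set from an estimated weight of 70 kg
print(dose_adequate(dose, 70))  # True  -> adequate for the estimated weight
print(dose_adequate(dose, 82))  # False -> under-dosed at the measured 82 kg
```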

Relevance: 20.00%

Publisher:

Abstract:

Activated sludge models are used extensively in the study of wastewater treatment processes. While various commercial implementations of these models are available, many people need to code the models themselves using the simulation packages available to them, and quality assurance of such models is difficult. While benchmarking problems have been developed and are available, the comparison of simulation data with that of commercial models leads only to the detection, not the isolation, of errors, and identifying the errors in the code is time-consuming. In this paper, we address the problem by developing a systematic and largely automated approach to the isolation of coding errors. There are three steps: first, possible errors are classified according to their place in the model structure and a feature matrix is established for each class of errors; second, an observer is designed to generate residuals, such that each class of errors imposes a subspace, spanned by its feature matrix, on the residuals; finally, localising the residuals in a subspace isolates the coding errors. The algorithm proved capable of rapidly and reliably isolating a variety of single and simultaneous errors in a case study using the ASM1 activated sludge model, in which a newly coded model was verified against a known implementation. The method is also applicable to the simultaneous verification of any two independent implementations, and hence is useful in commercial model development.
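
A generic sketch of the isolation step described above, assuming the observer residuals and per-class feature matrices are already in hand: each class of coding errors is scored by how much of the residual its feature subspace explains (a least-squares projection), and the best-explaining class is reported. The feature directions here are toy values, not ASM1 quantities:

```python
import numpy as np

def isolate_error_class(residual, feature_matrices):
    """Score each error class by projecting the residual onto the span of
    its feature matrix (least squares) and return the class whose subspace
    leaves the smallest unexplained norm."""
    scores = {}
    for name, F in feature_matrices.items():
        coef, *_ = np.linalg.lstsq(F, residual, rcond=None)
        scores[name] = np.linalg.norm(residual - F @ coef)
    return min(scores, key=scores.get), scores

# Toy feature directions for two error classes in a 3-D residual space
F = {"stoichiometry": np.array([[1.0], [0.5], [0.0]]),
     "kinetics":      np.array([[0.0], [1.0], [1.0]])}
r = 2.0 * F["kinetics"][:, 0] + 0.01 * np.random.default_rng(1).normal(size=3)
best, scores = isolate_error_class(r, F)
print(best)  # "kinetics"
```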

Relevance: 20.00%

Publisher:

Abstract:

Loss networks have long been used to model various types of telecommunication network, including circuit-switched networks. Such networks often use admission controls, such as trunk reservation, to optimize revenue or stabilize the behaviour of the network. Unfortunately, an exact analysis of such networks is not usually possible, and reduced-load approximations such as the Erlang Fixed Point (EFP) approximation have been widely used. The performance of these approximations is typically very good for networks without controls, under several regimes. There is evidence, however, that in networks with controls, these approximations will in general perform less well. We propose an extension to the EFP approximation that gives marked improvement for a simple ring-shaped network with trunk reservation. It is based on the idea of considering pairs of links together, thus making greater allowance for dependencies between neighbouring links than does the EFP approximation, which only considers links in isolation.
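
For reference, a minimal implementation of the baseline (uncontrolled) Erlang Fixed Point approximation the paper extends: each link's blocking probability is computed from Erlang's B formula under a reduced (thinned) offered load, and the system is iterated to a fixed point. The pairwise-link extension and trunk reservation are not modelled here; the example network is invented.

```python
from math import prod

def erlang_b(load, capacity):
    """Erlang B blocking probability via the standard stable recursion."""
    b = 1.0
    for m in range(1, capacity + 1):
        b = load * b / (m + load * b)
    return b

def erlang_fixed_point(routes, capacities, tol=1e-10, max_iter=1000):
    """Baseline EFP: routes is a list of (offered_load, [link indices]).
    Each link's offered load is thinned by blocking on the other links of
    every route through it; iterate until blocking probabilities settle."""
    B = [0.0] * len(capacities)
    for _ in range(max_iter):
        new_B = []
        for l, C in enumerate(capacities):
            rho = sum(a * prod(1 - B[k] for k in links if k != l)
                      for a, links in routes if l in links)
            new_B.append(erlang_b(rho, C))
        if max(abs(x - y) for x, y in zip(B, new_B)) < tol:
            return new_B
        B = new_B
    return B

# Two single-hop streams plus one two-hop stream sharing both links
print(erlang_fixed_point([(8.0, [0]), (8.0, [1]), (3.0, [0, 1])], [10, 10]))
```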

Relevance: 20.00%

Publisher:

Abstract:

The numerical implementation of the complex image approach for the Green's function of a mixed-potential integral-equation formulation is examined and is found to be limited to low values of k0ρ (in this context k0ρ = 2πρ/λ0, where ρ is the distance between the source and field points of the Green's function and λ0 is the free-space wavelength). This is a clear limitation for problems of large dimension or high frequency, where this limit is easily exceeded. This paper examines the various strategies and proposes a hybrid method whereby most of the above problems can be avoided. An efficient integral method that is valid for large k0ρ is combined with the complex image method in order to take advantage of the relative merits of both schemes. It is found that a wide overlapping region exists between the two techniques, allowing a very efficient and consistent approach for accurately calculating the Green's functions. In this paper, the method developed for the computation of the Green's function is used for planar structures containing both lossless and lossy media.
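
A skeletal illustration of the hybrid strategy, assuming some crossover value inside the overlap region; the threshold below is an arbitrary placeholder, not a value from the paper:

```python
import math

def k0_rho(rho_m, wavelength_m):
    """Electrical source-field separation k0*rho = 2*pi*rho / lambda0."""
    return 2 * math.pi * rho_m / wavelength_m

def greens_function_scheme(rho_m, wavelength_m, crossover=10.0):
    """Pick a scheme by electrical distance; `crossover` is an assumed
    placeholder somewhere inside the overlap region."""
    if k0_rho(rho_m, wavelength_m) < crossover:
        return "complex-image approximation"
    return "large-argument integral method"

print(greens_function_scheme(0.05, 0.1))  # k0*rho ~ 3.1 -> complex images
print(greens_function_scheme(1.00, 0.1))  # k0*rho ~ 63  -> integral method
```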

Relevance: 20.00%

Publisher:

Abstract:

A new algorithm has been developed for smoothing the surfaces in finite element formulations of contact-impact. A key feature of this method is that the smoothing is done implicitly, by constructing smooth signed distance functions for the bodies. These functions are then employed for the computation of the gap and other variables needed for the implementation of contact-impact. The smoothed signed distance functions are constructed by a moving least-squares approximation with a polynomial basis. Results show that when nodes are placed on a surface, the surface can be reproduced with an error of about one per cent or less with either a quadratic or a linear basis. With a quadratic basis, the method exactly reproduces a circle or a sphere even for coarse meshes. Results are presented for contact problems involving the contact of circular bodies. Copyright (C) 2002 John Wiley & Sons, Ltd.
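
A minimal one-dimensional sketch of the moving least-squares building block, assuming a Gaussian weight function and a centered polynomial basis; the paper applies the same idea to signed distance data on surfaces in higher dimensions:

```python
import numpy as np

def mls_eval(x_eval, nodes, values, degree=2, support=0.5):
    """Moving least-squares value at x_eval from scattered 1-D data,
    using a polynomial basis centered at x_eval and a Gaussian weight."""
    P = np.vander(nodes - x_eval, degree + 1, increasing=True)  # [1, dx, dx^2, ...]
    w = np.exp(-((nodes - x_eval) / support) ** 2)
    A = P.T @ (w[:, None] * P)   # weighted moment matrix
    b = P.T @ (w * values)
    coef = np.linalg.solve(A, b)
    return coef[0]               # basis is centered, so the fit at x_eval is coef[0]

# Signed distance to a unit circle along a radial line: d(r) = r - 1 (linear)
nodes = np.linspace(0.0, 2.0, 9)
print(mls_eval(1.3, nodes, nodes - 1.0))  # ~0.3: linear data reproduced exactly
```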

Relevance: 20.00%

Publisher:

Abstract:

Combinatorial optimization problems share an interesting property with spin glass systems in that their state spaces can exhibit ultrametric structure. We use sampling methods to analyse the error surfaces of feedforward multi-layer perceptron neural networks learning encoder problems. The third-order statistics of the sampled points of attraction (the local minima of the error surface) are examined and found to be arranged in a highly ultrametric way. This is a unique result for a finite, continuous parameter space. The implications of this result are discussed.
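
The ultrametric property being tested is the strengthened triangle inequality d(x, z) ≤ max(d(x, y), d(y, z)), equivalently that the two largest sides of every triangle are (nearly) equal. A generic check over sampled points, not the paper's third-order statistics:

```python
import numpy as np
from itertools import combinations

def ultrametric_violations(points, tol=1e-9):
    """Fraction of point triples violating d(x, z) <= max(d(x, y), d(y, z)),
    i.e. triangles whose two largest sides differ."""
    D = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    bad = total = 0
    for i, j, k in combinations(range(len(points)), 3):
        a, b, c = sorted((D[i, j], D[j, k], D[i, k]))
        total += 1
        bad += c > b + tol
    return bad / total

# Random Gaussian points are far from ultrametric: most triples violate
rng = np.random.default_rng(0)
print(ultrametric_violations(rng.normal(size=(30, 5))))
```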

Relevance: 20.00%

Publisher:

Abstract:

Matrix spalling, or crushing, is one of the important mechanisms of fiber-matrix interaction in fiber reinforced cementitious composites (FRCC). The fiber pullout mechanisms have been extensively studied for an aligned fiber, but matrix failure is rarely investigated since it is thought not to be a major effect. For an inclined fiber, however, the matrix failure should not be neglected. Due to the complex process of matrix spalling, experimental investigations and analytical studies of this mechanism are rarely found in the literature. In this paper, it is assumed that the load transfer is concentrated within a short length of the inclined fiber, from the exit point towards the anchored end, and follows an exponential law. The Mindlin formulation is employed to calculate the 3D stress field, and the simulation provides detailed information about this field. The 3D approximation of the stress state around an inclined fiber helps to qualitatively understand the mechanism of matrix failure. Finally, a spalling criterion is proposed by which matrix spalling occurs only when the stress in a certain volume, rather than the stress at a single point, exceeds the material strength. This implies some local stress redistribution after first yield; the redistribution results in more energy input and a higher load-bearing capacity of the matrix. In accordance with this hypothesis, the evolution of matrix spalling is demonstrated. Accurate prediction of matrix spalling requires careful determination of the parameters in this model, which is left for further study. (C) 2002 Elsevier Science Ltd. All rights reserved.
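
A toy rendering of the proposed nonlocal criterion, assuming a simple volume average as the stress measure (the paper does not specify this exact form): spalling is declared only when the stress averaged over a finite volume, rather than the peak point stress, exceeds the matrix strength. All numbers are invented.

```python
import numpy as np

def spalls(stress_samples_mpa, strength_mpa):
    """Nonlocal check: spalling only if the stress averaged over the
    sampled volume exceeds the strength (assumed threshold form)."""
    return np.mean(stress_samples_mpa) >= strength_mpa

sigma = np.array([9.0, 4.0, 3.5, 3.0, 2.5])  # stress samples over a volume [MPa]
print(max(sigma) >= 5.0)   # True:  a point-stress criterion would fail the matrix
print(spalls(sigma, 5.0))  # False: the volume-averaged criterion does not
```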

Relevance: 20.00%

Publisher:

Abstract:

The choice of genotyping families vs unrelated individuals is a critical factor in any large-scale linkage disequilibrium (LD) study. The use of unrelated individuals for such studies is promising, but in contrast to family designs, unrelated samples do not facilitate detection of genotyping errors, which have been shown to be of great importance for LD and linkage studies and may be even more important in genotyping collaborations across laboratories. Here we employ some of the most commonly used analysis methods to examine the relative accuracy of haplotype estimation using families vs unrelated individuals in the presence of genotyping error. The results suggest that even slight amounts of genotyping error can significantly decrease haplotype frequency and reconstruction accuracy, and that the ability to detect such errors in large families is essential when the number and complexity of haplotypes is high (low LD/common alleles). In contrast, in situations of low haplotype complexity (high LD and/or many rare alleles), unrelated individuals offer such a high degree of accuracy that there is little reason for less efficient family designs. Moreover, parent-child trios, which comprise the most popular family design and the most efficient in terms of the number of founder chromosomes per genotype, but which contain little information for error detection, offer little or no gain over unrelated samples in nearly all cases, and thus do not seem a useful sampling compromise between unrelated individuals and large families. The implications of these results are discussed in the context of large-scale LD mapping projects such as the proposed genome-wide haplotype map.
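
For context, a compact version of the standard EM algorithm for two-SNP haplotype frequency estimation from unphased, unrelated samples, the kind of commonly used analysis method the abstract refers to; the genotype counts in the example are invented:

```python
def em_haplotypes(n, iters=200):
    """EM estimates of two-SNP haplotype frequencies from unphased genotype
    counts. n[(i, j)] = number of individuals carrying i copies of allele A
    at locus 1 and j copies of allele B at locus 2; only the double
    heterozygote (1, 1) has ambiguous phase."""
    p = {"AB": 0.25, "Ab": 0.25, "aB": 0.25, "ab": 0.25}
    N = sum(n.values())
    for _ in range(iters):
        # E-step: split double heterozygotes between the two possible phasings
        num, den = p["AB"] * p["ab"], p["AB"] * p["ab"] + p["Ab"] * p["aB"]
        x = n.get((1, 1), 0) * (num / den if den > 0 else 0.5)
        # M-step: expected haplotype counts over 2N chromosomes
        c = {"AB": 2 * n.get((2, 2), 0) + n.get((2, 1), 0) + n.get((1, 2), 0) + x,
             "Ab": 2 * n.get((2, 0), 0) + n.get((2, 1), 0) + n.get((1, 0), 0)
                   + n.get((1, 1), 0) - x,
             "aB": 2 * n.get((0, 2), 0) + n.get((1, 2), 0) + n.get((0, 1), 0)
                   + n.get((1, 1), 0) - x,
             "ab": 2 * n.get((0, 0), 0) + n.get((1, 0), 0) + n.get((0, 1), 0) + x}
        p = {h: c[h] / (2 * N) for h in c}
    return p

counts = {(2, 2): 20, (1, 1): 40, (0, 0): 20, (2, 1): 5, (1, 2): 5,
          (1, 0): 5, (0, 1): 5}
print(em_haplotypes(counts))
```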