939 results for Omission Error
Abstract:
The truncation errors associated with finite difference solutions of the advection-dispersion equation with first-order reaction are formulated from a Taylor analysis. The error expressions are based on a general form of the corresponding difference equation, and a temporally and spatially weighted parametric approach is used for differentiating among the various finite difference schemes. The numerical truncation errors are defined using Peclet and Courant numbers and a new Sink/Source dimensionless number. It is shown that all of the finite difference schemes suffer from truncation errors. In particular, it is shown that the Crank-Nicolson approximation scheme does not have second-order accuracy for this case. The effects of these truncation errors on the solution of an advection-dispersion equation with a first-order reaction term are demonstrated by comparison with an analytical solution. The results show that these errors are not negligible and that correcting the finite difference scheme for them results in a more accurate solution. (C) 1999 Elsevier Science B.V. All rights reserved.
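A minimal sketch of the setting this abstract analyzes: a Crank-Nicolson discretization of c_t + v c_x = D c_xx - k c, checked against the standard steady-state analytical solution. All parameter values are illustrative, and the sink/source number is written here simply as k*dt, which may not match the paper's definition.

```python
# Hedged sketch: Crank-Nicolson solution of  c_t + v c_x = D c_xx - k c,
# compared against the standard steady-state analytical solution
# c(x) = c0 * exp(x * (v - sqrt(v^2 + 4 k D)) / (2 D)).
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

v, D, k, c0 = 1.0, 0.1, 0.5, 1.0       # illustrative, not the paper's values
L, nx = 5.0, 201
dx = L / (nx - 1)
dt = 0.5 * dx / v                       # Courant number Cr = v*dt/dx = 0.5
x = np.linspace(0.0, L, nx)

Pe, Cr, Kr = v * dx / D, v * dt / dx, k * dt   # dimensionless groups
print(f"Pe={Pe:.2f}  Cr={Cr:.2f}  Kr={Kr:.3f}")

# Centred-in-space operator A c = (-v c_x + D c_xx - k c) at interior nodes
main = -2.0 * D / dx**2 - k
off_m = D / dx**2 + v / (2 * dx)        # coefficient of c_{i-1}
off_p = D / dx**2 - v / (2 * dx)        # coefficient of c_{i+1}
A = diags([off_m * np.ones(nx - 1), main * np.ones(nx), off_p * np.ones(nx - 1)],
          offsets=[-1, 0, 1], format="csc")
I = diags([np.ones(nx)], [0], format="csc")

lhs = (I - 0.5 * dt * A).tolil()
rhs_op = I + 0.5 * dt * A
# Dirichlet inlet c(0) = c0, zero-gradient outlet
lhs[0, :], lhs[0, 0] = 0.0, 1.0
lhs[-1, :], lhs[-1, -1], lhs[-1, -2] = 0.0, 1.0, -1.0
lhs = lhs.tocsc()

c = np.zeros(nx)
for _ in range(5000):                   # march to steady state
    b = rhs_op @ c
    b[0], b[-1] = c0, 0.0
    c = spsolve(lhs, b)

exact = c0 * np.exp(x * (v - np.sqrt(v**2 + 4 * k * D)) / (2 * D))
print("max abs error vs analytical:", np.abs(c - exact).max())
```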
Abstract:
The extension of Adachi's model with a Gaussian-like broadening function, in place of a Lorentzian one, is used to model the optical dielectric function of the alloy AlxGa1-xAs. Gaussian-like broadening is accomplished by replacing the damping constant in the Lorentzian line shape with a frequency-dependent expression. In this way, the comparative simplicity of the analytic formulas of the model is preserved, while the accuracy becomes comparable to that of more intricate models and/or models with significantly more parameters. The employed model accurately describes the optical dielectric function in the spectral range from 1.5 to 6.0 eV within the entire alloy composition range. The relative rms error obtained for the refractive index is below 2.2% for all compositions. (C) 1999 American Institute of Physics. [S0021-8979(99)00512-5].
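A sketch of the broadening idea the abstract describes: the damping constant of a Lorentz oscillator is replaced by a frequency-dependent expression so the line shape falls off like a Gaussian in the wings. The oscillator parameters and the exact form of gamma(E) below are assumptions for illustration, not the fitted AlxGa1-xAs values.

```python
# Hedged sketch: a single Lorentz oscillator with constant damping versus
# a frequency-dependent (Gaussian-like) damping. All values illustrative.
import numpy as np

def lorentz_eps(E, A, E0, gamma):
    """Lorentzian oscillator contribution to the dielectric function,
    with a vacuum background of 1 added so n is physically sensible."""
    return 1.0 + A / (E0**2 - E**2 - 1j * E * gamma)

def gaussian_like_eps(E, A, E0, gamma0, alpha=0.3):
    """Same oscillator, but gamma is made frequency dependent so the
    line shape decays like a Gaussian in the wings (assumed form)."""
    gamma = gamma0 * np.exp(-alpha * ((E - E0) / gamma0) ** 2)
    return 1.0 + A / (E0**2 - E**2 - 1j * E * gamma)

E = np.linspace(1.5, 6.0, 500)          # spectral range from the abstract, eV
eps_L = lorentz_eps(E, A=30.0, E0=3.0, gamma=0.2)
eps_G = gaussian_like_eps(E, A=30.0, E0=3.0, gamma0=0.2)
n_L, n_G = np.sqrt(eps_L).real, np.sqrt(eps_G).real
# The key difference shows up far from resonance: the Lorentzian keeps a
# slowly decaying absorption tail, the Gaussian-like form does not.
print("far-wing Im(eps), Lorentzian vs Gaussian-like:",
      eps_L.imag[-1], eps_G.imag[-1])
```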
Abstract:
We use theoretical and numerical methods to investigate the general pore-fluid flow patterns near geological lenses in hydrodynamic and hydrothermal systems respectively. Analytical solutions have been rigorously derived for the pore-fluid velocity, stream function and excess pore-fluid pressure near a circular lens in a hydrodynamic system. These analytical solutions provide not only a better understanding of the physics behind the problem, but also a valuable benchmark solution for validating any numerical method. Since a geological lens is surrounded by a medium of large extent in nature, and the finite element method is efficient at modelling only media of finite size, the determination of the size of the computational domain of a finite element model, which is often overlooked by numerical analysts, is very important to ensure both the efficiency of the method and the accuracy of the numerical solution obtained. To highlight this issue, we use the derived analytical solutions to deduce a rigorous mathematical formula for designing the computational domain size of a finite element model. The proposed formula indicates that, no matter how fine the mesh or how high the order of elements, the desired accuracy of a finite element solution for pore-fluid flow near a geological lens cannot be achieved unless the size of the finite element model is determined appropriately. Once the finite element computational model has been appropriately designed and validated in a hydrodynamic system, it is used to examine general pore-fluid flow patterns near geological lenses in hydrothermal systems. Some interesting conclusions on the behaviour of geological lenses in hydrodynamic and hydrothermal systems have been reached through the analytical and numerical analyses carried out in this paper.
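The classical inclusion solution for a Laplacian pressure field gives the flavour of the analytical results described above, and of how a domain-size rule can follow from them. Everything below (permeabilities, tolerance, the R >= a*sqrt(|A|/eps) rule) is an illustrative reconstruction; the paper's own expressions and its domain-size formula may differ in detail.

```python
# Hedged sketch: Darcy pressure around a circular lens in a uniform
# far-field flow (textbook inclusion solution for a Laplacian field),
# plus the domain-size rule the dipole decay suggests.
import numpy as np

k_matrix, k_lens, a = 1.0, 10.0, 1.0   # permeabilities, lens radius (illustrative)
G0 = 1.0                                # far-field pressure gradient magnitude
A = (k_matrix - k_lens) / (k_matrix + k_lens)   # dipole strength of the lens

def pressure(r, theta):
    """Pressure: uniform gradient plus the lens's dipole disturbance."""
    inside = r < a
    p_out = -G0 * np.cos(theta) * (r + A * a**2 / np.maximum(r, 1e-12))
    p_in = -G0 * np.cos(theta) * (2 * k_matrix / (k_matrix + k_lens)) * r
    return np.where(inside, p_in, p_out)

print("p at r=2a on the flow axis:", float(pressure(np.array(2 * a), np.array(0.0))))

# The disturbance decays like (a/r)^2, so truncating the domain at radius R
# leaves a relative boundary error ~ |A| (a/R)^2. Requiring this to stay
# below a tolerance eps gives a minimum computational domain size:
eps = 1e-3
R_min = a * np.sqrt(abs(A) / eps)
print(f"domain radius for {eps:.0e} boundary error: R >= {R_min:.1f} lens radii")
```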
Abstract:
This is the first paper in a study on the influence of the environment on the crack tip strain field for AISI 4340. A stressing stage for the environmental scanning electron microscope (ESEM) was constructed, capable of applying loads up to 60 kN to fracture-mechanics samples. Measurement of the crack tip strain field required preparation (by electron lithography or chemical etching) of a system of reference points spaced at ~5 μm intervals on the sample surface, loading the sample inside an electron microscope, image processing procedures to measure the displacement at each reference point, and calculation of the strain field. Two algorithms to calculate strain were evaluated. Possible sources of error were calculation errors due to the algorithm, errors inherent in the image processing procedure, and errors due to the limited precision of the displacement measurements. The contribution of each source of error was estimated. The technique allows measurement of the crack tip strain field over an area of 50 × 40 μm with a strain precision better than ±0.02 at distances larger than 5 μm from the crack tip. (C) 1999 Kluwer Academic Publishers.
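A sketch of the final step of the technique: recovering a strain field from measured reference-point displacements by central differences. The paper evaluates two strain algorithms; this is only the simplest plausible one, and the synthetic displacement field and noise level are assumptions.

```python
# Hedged sketch: small-strain field from a grid of reference-point
# displacements via central differences (np.gradient).
import numpy as np

spacing = 5.0                              # reference-point spacing, micrometres
ny, nx = 8, 10                             # roughly a 50 x 40 um field of view
y, x = np.mgrid[0:ny, 0:nx] * spacing

# Synthetic displacement field (micrometres) plus measurement noise
u = 0.01 * x + 0.002 * y + np.random.normal(0, 0.05, (ny, nx))
v = 0.004 * y + np.random.normal(0, 0.05, (ny, nx))

# Small-strain tensor components from displacement gradients
du_dy, du_dx = np.gradient(u, spacing)
dv_dy, dv_dx = np.gradient(v, spacing)
exx, eyy = du_dx, dv_dy
exy = 0.5 * (du_dy + dv_dx)

# Error propagation: a central difference over the gap 2h turns a
# displacement precision of std delta into a strain precision of
# roughly sqrt(2) * delta / (2 h)
delta = 0.05
print("expected strain noise ~", np.sqrt(2) * delta / (2 * spacing))
print("measured exx mean:", exx.mean())
```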
Abstract:
Community awareness of the sustainable use of land, water and vegetation resources is increasing. The sustainable use of these resources is pivotal to sustainable farming systems. However, techniques for monitoring the sustainable management of these resources are poorly understood and untested. We propose a framework to benchmark and monitor resources in the grains industry. The following eight steps achieve these objectives: (i) define industry issues; (ii) identify the issues through grower, stakeholder and community consultation; (iii) identify indicators (measurable attributes, properties or characteristics) of sustainability through consultation with growers, stakeholders, experts and community members, relating to: crop productivity; resource maintenance/enhancement; biodiversity; economic viability; community viability; and institutional structure; (iv) develop and use selection criteria to select indicators that consider: responsiveness to change; ease of capture; community acceptance and involvement; interpretation; measurement error; stability, frequency and cost of measurement; spatial scale issues; and mapping capability in space and through time. The appropriateness of indicators can be evaluated using a decision-making system such as a multiobjective decision support system (MO-DSS, a method to assist in decision making from multiple and conflicting objectives); (v) involve stakeholders and the community in defining goals and setting benchmarking and monitoring targets for sustainable farming; (vi) take preventive and corrective/remedial action; (vii) evaluate the effectiveness of actions taken; and (viii) revise indicators as part of a continual improvement principle designed to achieve best management practice for sustainable farming systems. The major recommendations are to: (i) implement the framework for resource (land, water and vegetation, economic, community and institutional) benchmarking and monitoring, and integrate this process with current activities so that awareness, implementation and evolution of sustainable resource management practices become normal practice in the grains industry; (ii) empower the grains industry to take the lead by using relevant sustainability indicators to benchmark and monitor resources; (iii) adopt a collaborative approach by involving various industry, community, catchment management and government agency groups to minimise implementation time. Monitoring programs such as Waterwatch, Soilcheck, Grasscheck and Topcrop should be utilised; (iv) encourage the adoption of a decision-making system by growers and industry representatives as a participatory decision and evaluation process. Widespread use of sustainability indicators would assist in validating and refining these indicators and evaluating sustainable farming systems. The indicators could also assist in evaluating best management practices for the grains industry.
Abstract:
Rates of cell size increase are an important measure of success during the baculovirus infection process. Batch and fed-batch cultures sustain large fluctuations in osmolarity that can affect the measured cell volume if this parameter is not considered during the sizing protocol. Where osmolarity differences between the sizing diluent and the culture broth exist, biased measurements of size are obtained as a result of the cell's osmometer response. Spodoptera frugiperda (Sf9) cells are highly sensitive to volume change when subjected to a change in osmolarity. Using a modified protocol in which culture supernatant serves as the sample diluent prior to sizing removed the observed measurement error.
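A sketch of the kind of osmometer correction the abstract implies, assuming a Boyle-van't Hoff relation with an osmotically inactive volume fraction. The functional form, the inactive fraction, and the example values are assumptions, not fitted Sf9 parameters.

```python
# Hedged sketch: map a cell volume measured in a diluent of one osmolarity
# back to the volume at the culture-broth osmolarity, assuming the cell
# behaves as a linear (Boyle-van't Hoff) osmometer.
def corrected_volume(v_measured, osm_diluent, osm_culture, inactive_frac=0.4):
    """Correct a sized volume for the diluent/broth osmolarity mismatch."""
    # Split the measured volume into osmotically inactive and active parts
    # (referencing the inactive fraction to the measured volume is itself
    # a simplification).
    vb = inactive_frac * v_measured
    active = v_measured - vb
    # The osmotically active volume scales inversely with osmolarity
    return vb + active * osm_diluent / osm_culture

# Example: a cell sized in a 280 mOsm diluent but grown in a 360 mOsm broth
print(corrected_volume(1200.0, 280.0, 360.0))  # femtolitres, illustrative
```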
Abstract:
We analyze the fidelity of teleportation protocols, as a function of resource entanglement, for three kinds of two-mode oscillator states: states with fixed total photon number, number states entangled at a beam splitter, and the two-mode squeezed vacuum state. We define corresponding teleportation protocols for each case, including phase noise to model degraded entanglement of each resource.
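For the two-mode squeezed vacuum resource, the standard benchmark for unit-gain teleportation of coherent states is F = 1/(1 + e^{-2r}). The sketch below computes it, together with a toy dephasing model (a Gaussian phase-noise assumption, not the paper's noise model) that attenuates the inter-mode correlations.

```python
# Hedged sketch: coherent-state teleportation fidelity with a two-mode
# squeezed vacuum resource, with and without a toy phase-noise model.
import numpy as np

def fidelity_tmsv(r):
    """Standard unit-gain benchmark: F = 1 / (1 + e^{-2r})."""
    return 1.0 / (1.0 + np.exp(-2.0 * r))

def fidelity_with_dephasing(r, sigma_phi):
    """Toy assumption: Gaussian phase noise of std sigma_phi scales the
    inter-mode correlation term by <e^{i phi}> = exp(-sigma_phi^2 / 2)."""
    c = np.exp(-sigma_phi**2 / 2.0)
    # EPR correlation variance: cosh(2r) - c*sinh(2r); equals e^{-2r} at c=1
    var = np.cosh(2 * r) - c * np.sinh(2 * r)
    return 1.0 / (1.0 + var)

for r in (0.0, 0.5, 1.0, 2.0):
    print(f"r={r}: F={fidelity_tmsv(r):.3f}, "
          f"with dephasing F={fidelity_with_dephasing(r, 0.3):.3f}")
# At r = 0 both give F = 0.5, the classical boundary for coherent states.
```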
Abstract:
Ischaemic preconditioning in rats was studied using MRI. Ischaemic preconditioning was induced, using an intraluminal filament method, by 30 min middle cerebral artery occlusion (MCAO), and imaged 24 h later. The secondary insult of 100 min MCAO was induced 3 days following preconditioning and imaged 24 and 72 h later. Twenty-four hours following ischaemic preconditioning most rats showed small sub-cortical hyperintense regions not seen in sham-preconditioned rats. Twenty-four and 72 h following the secondary insult, preconditioned animals showed significantly smaller lesions (24 h = 112 ± 31 mm³, mean ± standard error; 72 h = 80 ± 35 mm³), which were confined to the striatum, than controls (24 h = 234 ± 32 mm³, p = 0.026; 72 h = 275 ± 37 mm³, p = 0.003). In addition, during lesion maturation from 24 to 72 h post-secondary MCAO, preconditioned rats displayed an average reduction in lesion size as measured by MRI whereas sham-preconditioned rats displayed increases in lesion size; this is the first report of such differential lesion volume evolution in cerebral ischaemic preconditioning. Copyright (C) 2001 John Wiley & Sons, Ltd.
Abstract:
We present a method of estimating HIV incidence rates in epidemic situations from data on age-specific prevalence and changes in the overall prevalence over time. The method is applied to women attending antenatal clinics in Hlabisa, a rural district of KwaZulu/Natal, South Africa, where transmission of HIV is overwhelmingly through heterosexual contact. A model which gives age-specific prevalence rates in the presence of a progressing epidemic is fitted to prevalence data for 1998 using maximum likelihood methods and used to derive the age-specific incidence. Error estimates are obtained using a Monte Carlo procedure. Although the method is quite general, some simplifying assumptions are made concerning the form of the risk function, and sensitivity analyses are performed to explore the importance of these assumptions. The analysis shows that in 1998 the annual incidence of infection per susceptible woman increased from 5.4 per cent (3.3-8.5 per cent; here and elsewhere ranges give 95 per cent confidence limits) at age 15 years to 24.5 per cent (20.6-29.1 per cent) at age 22 years and declined to 1.3 per cent (0.5-2.9 per cent) at age 50 years; standardized to a uniform age distribution, the overall incidence per susceptible woman aged 15 to 59 was 11.4 per cent (10.0-13.1 per cent); per woman in the population it was 8.4 per cent (7.3-9.5 per cent). Standardized to the age distribution of the female population, the average incidence per woman was 9.6 per cent (8.4-11.0 per cent); standardized to the age distribution of women attending antenatal clinics, it was 11.3 per cent (9.8-13.3 per cent). The estimated incidence depends on the values used for the epidemic growth rate and the AIDS-related mortality. To ensure that, for this population, errors in these two parameters change the age-specific estimates of the annual incidence by less than the standard deviation of the estimates of the age-specific incidence, the AIDS-related mortality should be known to within ±50 per cent and the epidemic growth rate to within ±25 per cent, both of which conditions are met. In the absence of cohort studies to measure the incidence of HIV infection directly, useful estimates of the age-specific incidence can be obtained from cross-sectional, age-specific prevalence data and repeat cross-sectional data on the overall prevalence of HIV infection. Several assumptions were made because of the lack of data but sensitivity analyses show that they are unlikely to affect the overall estimates significantly. These estimates are important in assessing the magnitude of the public health problem, for designing vaccine trials and for evaluating the impact of interventions. Copyright (C) 2001 John Wiley & Sons, Ltd.
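A sketch of the two statistical ingredients named in the abstract: a binomial maximum-likelihood fit of an age-prevalence curve, and Monte Carlo (parametric bootstrap) error estimates. The peaked Gaussian-in-age curve and the counts below are synthetic stand-ins; the paper's risk function and likelihood are more elaborate.

```python
# Hedged sketch: ML fit of age-specific prevalence + Monte Carlo errors.
import numpy as np
from scipy.optimize import minimize

ages = np.arange(15, 50, 5)
n = np.array([200, 350, 400, 380, 300, 220, 150])   # women tested (synthetic)
pos = np.array([10, 80, 140, 120, 80, 45, 20])      # HIV positive (synthetic)

def prevalence(age, c, m, s):
    """Illustrative peaked age-prevalence curve (not the paper's form)."""
    return np.clip(c * np.exp(-((age - m) ** 2) / (2 * s**2)), 1e-9, 1 - 1e-9)

def neg_loglik(params, counts):
    p = prevalence(ages, *params)
    return -np.sum(counts * np.log(p) + (n - counts) * np.log(1 - p))

fit = minimize(neg_loglik, x0=[0.3, 27.0, 8.0], args=(pos,), method="Nelder-Mead")
p_hat = prevalence(ages, *fit.x)

# Monte Carlo error estimates: simulate counts from the fitted model,
# refit, and take percentiles of the refitted curves.
rng = np.random.default_rng(0)
curves = []
for _ in range(200):
    sim = rng.binomial(n, p_hat)
    refit = minimize(neg_loglik, x0=fit.x, args=(sim,), method="Nelder-Mead")
    curves.append(prevalence(ages, *refit.x))
lo_b, hi_b = np.percentile(curves, [2.5, 97.5], axis=0)
for age, p, lo, hi in zip(ages, p_hat, lo_b, hi_b):
    print(f"age {age}: {p:.3f}  95% MC interval ({lo:.3f}, {hi:.3f})")
```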
Abstract:
Surge flow phenomena, e.g. as a consequence of a dam failure or a flash flood, represent free boundary problems. The extending computational domain, together with the discontinuities involved, renders their numerical solution a cumbersome procedure. This contribution proposes an analytical solution to the problem. It is based on the slightly modified zero-inertia (ZI) differential equations for nonprismatic channels and uses exclusively physical parameters. Employing the concept of a momentum-representative cross section of the moving water body, together with a specific relationship for describing the cross-sectional geometry, leads, after considerable mathematical calculus, to the analytical solution. The hydrodynamic analytical model is free of numerical troubles, easy to run, computationally efficient, and fully satisfies the law of volume conservation. In a first test series, the hydrodynamic analytical ZI model compares very favorably with a full hydrodynamic numerical model with respect to published results of surge flow simulations in different types of prismatic channels. In order to extend these considerations to natural rivers, the accuracy of the analytical model in describing an irregular cross section is investigated and tested successfully. A sensitivity and error analysis reveals the important impact of the hydraulic radius on the velocity of the surge, and this underlines the importance of an adequate description of the topography. The new approach is finally applied to simulate a surge propagating down the irregularly shaped Isar Valley in the Bavarian Alps after a hypothetical dam failure. The straightforward and fully stable computation of the flood hydrograph along the Isar Valley clearly reflects the impact of the strongly varying topographic characteristics on the flow phenomenon. Apart from treating surge flow phenomena as a whole, the analytical solution also offers a rigorous alternative to both (a) the approximate Whitham solution, for generating initial values, and (b) the rough volume balance techniques used to model the wave tip in numerical surge flow computations.
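The sensitivity finding quoted above can be illustrated with a Manning-type friction closure, for which velocity scales as the 2/3 power of the hydraulic radius. The zero-inertia analytical solution itself is not reproduced here, and the numbers are illustrative rather than Isar Valley values.

```python
# Hedged sketch: in a zero-inertia (friction ~ slope) balance with a
# Manning closure V = R^(2/3) S^(1/2) / n, a relative error dR/R in the
# hydraulic radius propagates to (2/3) dR/R in the surge velocity.
def manning_velocity(R, S, n):
    """Cross-sectionally averaged velocity from Manning's equation (SI units)."""
    return R ** (2.0 / 3.0) * S ** 0.5 / n

R, S, n = 2.0, 0.005, 0.035      # hydraulic radius (m), slope, roughness
V = manning_velocity(R, S, n)
V_pert = manning_velocity(1.1 * R, S, n)   # +10% error in hydraulic radius
print(f"V = {V:.2f} m/s; +10% in R -> {100 * (V_pert / V - 1):.1f}% change in V")
```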
Abstract:
The principle of using induction rules based on spatial environmental data to model a soil map has previously been demonstrated. Whilst the general pattern of classes of large spatial extent, and of those with close association with geology, was delineated, small classes and the detailed spatial pattern of the map were less well rendered. Here we examine several strategies to improve the quality of the soil map models generated by rule induction. Terrain attributes that are better suited to landscape description at a resolution of 250 m are introduced as predictors of soil type. A map sampling strategy is developed. Classification error is reduced by using boosting rather than cross-validation to improve the model. Further, the benefit of incorporating the local spatial context for each environmental variable into the rule induction is examined. The best model was achieved by sampling in proportion to the spatial extent of the mapped classes, boosting the decision trees, and using spatial contextual information extracted from the environmental variables.
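A sketch of the winning recipe in scikit-learn terms: sample cells in proportion to mapped class extent, append a local neighbourhood mean of each environmental variable as spatial context, and boost decision trees. AdaBoost over shallow trees stands in for the paper's rule-induction boosting, and the gridded variables are synthetic.

```python
# Hedged sketch: boosted trees with spatial-context features for soil
# class prediction. Variable names and the 3x3 mean "context" are
# illustrative stand-ins for the paper's terrain attributes.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
h, w = 100, 100
elevation = rng.random((h, w))
wetness = rng.random((h, w))                     # stand-ins for terrain attributes
soil_map = (elevation > 0.5).astype(int) + (wetness > 0.7)   # synthetic classes

# Local spatial context: a 3x3 neighbourhood mean of each variable
features = []
for grid in (elevation, wetness):
    features += [grid, uniform_filter(grid, size=3)]
X = np.stack([f.ravel() for f in features], axis=1)
y = soil_map.ravel()

# Sampling in proportion to the spatial extent of each mapped class:
# a simple random sample of cells achieves exactly that.
idx = rng.choice(h * w, size=2000, replace=False)

model = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=5), n_estimators=50)
model.fit(X[idx], y[idx])
print("training accuracy:", model.score(X[idx], y[idx]))
```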
Abstract:
This paper is concerned with the use of scientific visualization methods for the analysis of feedforward neural networks (NNs). Inevitably, the kinds of data associated with the design and implementation of neural networks are of very high dimensionality, presenting a major challenge for visualization. A method is described using the well-known statistical technique of principal component analysis (PCA). This is found to be an effective and useful method of visualizing the learning trajectories of many learning algorithms such as back-propagation and can also be used to provide insight into the learning process and the nature of the error surface.
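A sketch of the visualization method itself: snapshot the full weight vector during back-propagation-style training and project the trajectory onto its first two principal components. The tiny network and task are illustrative.

```python
# Hedged sketch: PCA projection of a network's learning trajectory.
import warnings
import numpy as np
from sklearn.decomposition import PCA
from sklearn.exceptions import ConvergenceWarning
from sklearn.neural_network import MLPClassifier

warnings.filterwarnings("ignore", category=ConvergenceWarning)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)          # XOR-like toy task

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1, warm_start=True,
                    solver="sgd", learning_rate_init=0.05, random_state=0)
snapshots = []
for _ in range(300):                              # one SGD epoch per fit() call
    net.fit(X, y)
    # Flatten all weights and biases into a single high-dimensional vector
    w = np.concatenate([c.ravel() for c in net.coefs_] +
                       [b.ravel() for b in net.intercepts_])
    snapshots.append(w)

# Project the weight-space trajectory onto its first two principal components
trajectory = PCA(n_components=2).fit_transform(np.array(snapshots))
print("trajectory start:", trajectory[0], " end:", trajectory[-1])
```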
Abstract:
We discuss quantum error correction for errors that occur at random times as described by a conditional Poisson process. We show how a class of such errors, detected spontaneous emission, can be corrected by continuous closed-loop feedback.
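A toy illustrating why detected spontaneous emissions arrive as a conditional Poisson process: in a quantum-trajectory unravelling, the jump rate at each instant depends on the current state. The encoded states and the closed-loop recovery of the paper are not reproduced here.

```python
# Hedged sketch: jump times for a single decaying qubit in a quantum-
# trajectory unravelling; the rate gamma * <sigma+ sigma-> is conditioned
# on the evolving state, hence a *conditional* Poisson process.
import numpy as np

rng = np.random.default_rng(1)
gamma, dt, T = 1.0, 1e-3, 5.0
jumps = []
for trial in range(3):
    psi = np.array([1.0, 1.0]) / np.sqrt(2)      # (|0> + |1>)/sqrt(2)
    t, times = 0.0, []
    while t < T:
        p_exc = abs(psi[1]) ** 2                  # <sigma+ sigma-> in state psi
        if rng.random() < gamma * p_exc * dt:     # conditional jump probability
            psi = np.array([1.0, 0.0])            # sigma- applied: collapse to |0>
            times.append(round(t, 3))
        else:
            # no-jump evolution under the non-Hermitian effective Hamiltonian
            psi[1] *= np.exp(-0.5 * gamma * dt)
            psi /= np.linalg.norm(psi)
        t += dt
    jumps.append(times)
print("detected-emission times per trajectory:", jumps)
```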