63 results for Error analysis (Mathematics)
in University of Queensland eSpace - Australia
Abstract:
Surge flow phenomena, e.g., as a consequence of a dam failure or a flash flood, represent free boundary problems. The extending computational domain together with the discontinuities involved renders their numerical solution a cumbersome procedure. This contribution proposes an analytical solution to the problem. It is based on the slightly modified zero-inertia (ZI) differential equations for nonprismatic channels and uses exclusively physical parameters. Employing the concept of a momentum-representative cross section of the moving water body together with a specific relationship for describing the cross-sectional geometry leads, after considerable mathematical calculus, to the analytical solution. The hydrodynamic analytical model is free of numerical troubles, easy to run, computationally efficient, and fully satisfies the law of volume conservation. In a first test series, the hydrodynamic analytical ZI model compares very favorably with a full hydrodynamic numerical model with respect to published results of surge flow simulations in different types of prismatic channels. In order to extend these considerations to natural rivers, the accuracy of the analytical model in describing an irregular cross section is investigated and tested successfully. A sensitivity and error analysis reveals the important impact of the hydraulic radius on the velocity of the surge, and this underlines the importance of an adequate description of the topography. The new approach is finally applied to simulate a surge propagating down the irregularly shaped Isar Valley in the Bavarian Alps after a hypothetical dam failure. The straightforward and fully stable computation of the flood hydrograph along the Isar Valley clearly reflects the impact of the strongly varying topographic characteristics on the flow phenomenon. Apart from treating surge flow phenomena as a whole, the analytical solution also offers a rigorous alternative to both (a) the approximate Whitham solution, for generating initial values, and (b) the rough volume balance techniques used to model the wave tip in numerical surge flow computations.
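For context, the zero-inertia approximation named above is conventionally obtained by dropping the inertia terms from the Saint-Venant momentum equation; a standard textbook form (not the paper's modified nonprismatic version) is:

```latex
% Zero-inertia (diffusive wave) open-channel equations
\frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} = 0
\qquad \text{(continuity)}
\frac{\partial y}{\partial x} = S_0 - S_f
\qquad \text{(momentum with inertia terms dropped)}
S_f = \frac{n^2 Q^2}{A^2 R^{4/3}} \quad \text{(e.g., Manning friction)}
```

Here A is flow area, Q discharge, y depth, S_0 bed slope, S_f friction slope, n Manning's roughness, and R the hydraulic radius, the quantity whose influence the sensitivity analysis highlights.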
Abstract:
We describe in detail the theory underpinning the measurement of density matrices of a pair of quantum two-level systems (qubits). Our particular emphasis is on qubits realized by the two polarization degrees of freedom of a pair of entangled photons generated in a down-conversion experiment; however, the discussion applies in general, regardless of the actual physical realization. Two techniques are discussed, namely, a tomographic reconstruction (in which the density matrix is linearly related to a set of measured quantities) and a maximum likelihood technique which requires numerical optimization (but has the advantage of producing density matrices that are always non-negative definite). In addition, a detailed error analysis is presented, allowing errors in quantities derived from the density matrix, such as the entropy or entanglement of formation, to be estimated. Examples based on down-conversion experiments are used to illustrate our results.
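As a concrete illustration of the linear (tomographic) reconstruction described above, here is a single-qubit sketch in Python: the density matrix is recovered from Stokes parameters estimated from polarization counts. The counts are hypothetical and the sign/ordering conventions are one common choice; the paper treats the two-qubit case.

```python
import numpy as np

# Pauli matrices (SIGMA[0] is the identity)
SIGMA = [np.eye(2),
         np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]

def linear_tomography(stokes):
    """Reconstruct a single-qubit density matrix from measured
    Stokes parameters S = (S1, S2, S3), where Si = Tr(rho sigma_i)."""
    rho = 0.5 * SIGMA[0].astype(complex)
    for s, sigma in zip(stokes, SIGMA[1:]):
        rho += 0.5 * s * sigma
    return rho

# Hypothetical counts from three polarization bases (per 1000 pairs)
s1 = (700 - 300) / 1000.0   # P_D - P_A (diagonal/antidiagonal)
s2 = (505 - 495) / 1000.0   # P_R - P_L (right/left circular)
s3 = (480 - 520) / 1000.0   # P_H - P_V (horizontal/vertical)
rho = linear_tomography((s1, s2, s3))
print(np.round(rho, 3), "trace:", np.trace(rho).real)
```

With noisy counts this linear estimate can fail to be non-negative definite, which is exactly what motivates the maximum likelihood technique the abstract mentions.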
Bias, precision and heritability of self-reported and clinically measured height in Australian twins
Abstract:
Many studies of quantitative and disease traits in human genetics rely upon self-reported measures. Such measures are based on questionnaires or interviews and are often cheaper and more readily available than alternatives. However, the precision and potential bias cannot usually be assessed. Here we report a detailed quantitative genetic analysis of stature. We characterise the degree of measurement error by utilising a large sample of Australian twin pairs (857 MZ, 815 DZ) with both clinical and self-reported measures of height. Self-reported height measurements are shown to be more variable than clinical measures, which has led to lowered estimates of heritability in many previous studies of stature. In our twin sample the heritability estimate for clinical height exceeded 90%. Repeated-measures analysis shows that 2-3 times as many self-report measures are required to recover heritability estimates similar to those obtained from clinical measures. Bivariate genetic repeated-measures analysis of self-report and clinical height measures showed an additive genetic correlation > 0.98. We show that self-reported height is upwardly biased in older individuals and in individuals of short stature. By comparing clinical and self-report measures we also showed that there was a genetic component to females systematically misreporting their height; this phenomenon appeared to be absent in males. The results from the measurement error analysis were subsequently used to assess the effects of error on the power to detect linkage in a genome scan. A moderate reduction in error (through the use of accurate clinical or multiple self-report measures) increased the effective sample size by 22%; elimination of measurement error led to an increase in effective sample size of 41%.
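To see why noisier self-report measures depress heritability estimates, here is a minimal sketch using the classical Falconer twin formula together with classical-test-theory attenuation (the numbers are illustrative, not the study's estimates):

```python
def falconer_h2(r_mz, r_dz):
    """Classical Falconer estimate of heritability from
    MZ and DZ twin correlations: h2 = 2 * (rMZ - rDZ)."""
    return 2.0 * (r_mz - r_dz)

def attenuated(r_true, reliability):
    """Observed twin correlation when both measures carry error:
    r_obs = r_true * reliability (classical attenuation)."""
    return r_true * reliability

# Illustrative twin correlations for a highly heritable trait
r_mz, r_dz = 0.92, 0.51
print("h2 from clinical measures:", falconer_h2(r_mz, r_dz))

# With self-report reliability of ~0.85 both correlations shrink by
# the same factor, so the heritability estimate is biased downward:
rel = 0.85
print("h2 from self-report:",
      falconer_h2(attenuated(r_mz, rel), attenuated(r_dz, rel)))
```

Averaging several self-report measures raises the effective reliability, which is why the abstract reports that 2-3 repeated self-reports recover clinical-grade heritability estimates.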
Abstract:
An inherent incomputability in the specification of a functional language extension that combines assertions with dynamic type checking is isolated in an explicit derivation from mathematical specifications. The combination of types and assertions (into "dynamic assertion-types", DATs) is a significant issue: because the two are congruent means of establishing program correctness, benefit arises from their closer integration, in contrast to the harm resulting from their unnecessary separation. However, projecting the "set membership" view of assertion checking into dynamic types results in some incomputable combinations. Refinement of the specification of DAT checking into an implementation by rigorous application of mathematical identities becomes feasible through the addition of a "best-approximate" pseudo-equality that isolates the incomputable component of the specification. This formal treatment leads to an improved, more maintainable outcome with further development potential.
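A rough Python illustration of the "set membership" reading of a dynamic assertion-type, and of the incomputable corner, may help; the class and names below are hypothetical stand-ins, not the paper's formal construction:

```python
from typing import Any, Callable

class DAT:
    """A 'dynamic assertion-type': a base type paired with a predicate,
    checked as a runtime membership test (hypothetical illustration)."""
    def __init__(self, base: type, pred: Callable[[Any], bool]):
        self.base, self.pred = base, pred

    def check(self, value: Any) -> bool:
        return isinstance(value, self.base) and self.pred(value)

Even = DAT(int, lambda n: n % 2 == 0)
print(Even.check(4), Even.check(3))     # True False

# The incomputable corner: a DAT over *functions* would have to decide
# a property of f for every possible input, which is undecidable in
# general (Rice's theorem). Only a 'best-approximate' check is
# computable, e.g. sampling finitely many points:
FuncType = type(lambda: None)
IntToInt = DAT(FuncType,
               lambda f: all(isinstance(f(i), int) for i in range(10)))
print(IntToInt.check(lambda n: n * 2))  # True, but only approximately checked
```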
Abstract:
This paper presents a method for estimating the posterior probability density of the cointegrating rank of a multivariate error correction model. A second contribution is the careful elicitation of the prior for the cointegrating vectors derived from a prior on the cointegrating space. This prior obtains naturally from treating the cointegrating space as the parameter of interest in inference, and it overcomes problems previously encountered in Bayesian cointegration analysis. Using this new prior and the Laplace approximation, an estimator for the posterior probability of the rank is given. The approach performs well compared with information criteria in Monte Carlo experiments.
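The Laplace approximation used to obtain posterior rank probabilities has the generic form below (the paper's particular prior and parameterization are not reproduced here):

```latex
p(y \mid r) = \int p(y \mid \theta_r, r)\, p(\theta_r \mid r)\, d\theta_r
\approx (2\pi)^{d_r/2}\, \bigl| -H(\hat\theta_r) \bigr|^{-1/2}\,
p(y \mid \hat\theta_r, r)\, p(\hat\theta_r \mid r),
\qquad
p(r \mid y) \propto p(y \mid r)\, p(r)
```

where \hat\theta_r is the posterior mode of the rank-r model, d_r its parameter dimension, and H the Hessian of the log integrand at the mode.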
Abstract:
In Part 1 of this paper a methodology for back-to-back testing of simulation software was described. Residuals with error-dependent geometric properties were generated. A set of potential coding errors was enumerated, along with a corresponding set of feature matrices, which describe the geometric properties imposed on the residuals by each of the errors. In this part of the paper, an algorithm is developed to isolate the coding errors present by analysing the residuals. A set of errors is isolated when the subspace spanned by their combined feature matrices corresponds to that of the residuals. Individual feature matrices are compared to the residuals and classified as 'definite', 'possible' or 'impossible'. The status of 'possible' errors is resolved using a dynamic subset testing algorithm. To demonstrate and validate the testing methodology presented in Part 1 and the isolation algorithm presented in Part 2, a case study is presented using a model for biological wastewater treatment. Both single and simultaneous errors that are deliberately introduced into the simulation code are correctly detected and isolated.
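The subspace comparison at the heart of the isolation step can be sketched in a few lines of Python; the three-way labelling below is a simplified reading of the 'definite'/'possible'/'impossible' classification, not the authors' published algorithm:

```python
import numpy as np

def proj(B, A):
    """Orthogonal projection of the columns of A onto col(B)."""
    coeff, *_ = np.linalg.lstsq(B, A, rcond=None)
    return B @ coeff

def classify(features, R, tol=1e-8):
    """Label each candidate error by comparing its feature matrix F
    with the residual matrix R: 'definite' if the residuals lie wholly
    in col(F), 'impossible' if the subspaces do not overlap at all,
    'possible' otherwise (a sketch of the idea only)."""
    labels = {}
    for name, F in features.items():
        P = proj(F, R)
        if np.linalg.norm(R - P) < tol:
            labels[name] = "definite"
        elif np.linalg.norm(P) < tol:
            labels[name] = "impossible"
        else:
            labels[name] = "possible"
    return labels

# Toy example: residuals generated by error 'e1' alone
F1 = np.eye(3)[:, :2]                # feature matrix of error e1
F2 = np.eye(3)[:, 2:]                # feature matrix of error e2
R = F1 @ np.array([[2.0], [-1.0]])   # observed residual directions
print(classify({"e1": F1, "e2": F2}, R))
# {'e1': 'definite', 'e2': 'impossible'}
```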
Abstract:
This letter presents an analytical model for evaluating the Bit Error Rate (BER) of a Direct Sequence Code Division Multiple Access (DS-CDMA) system, with M-ary orthogonal modulation and noncoherent detection, employing an array antenna operating in a Nakagami fading environment. An expression of the Signal to Interference plus Noise Ratio (SINR) at the output of the receiver is derived, which allows the BER to be evaluated using a closed form expression. The analytical model is validated by comparing the obtained results with simulation results.
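For orientation, the classical AWGN expression for noncoherent detection of M-ary orthogonal signals can be evaluated at the post-combining SINR, as sketched below; the paper's closed form additionally averages over the Nakagami fading statistics, which this sketch omits:

```python
import math

def ser_mary_noncoherent(M, snr):
    """Symbol error rate of M-ary orthogonal signalling with noncoherent
    detection in AWGN (standard textbook alternating sum); 'snr' is the
    per-symbol signal-to-(interference-plus-)noise ratio.
    Note: the sum is numerically delicate for large M (e.g. M = 64)."""
    return sum((-1) ** (k + 1) * math.comb(M - 1, k)
               / (k + 1) * math.exp(-k * snr / (k + 1))
               for k in range(1, M))

def ber_from_ser(M, ser):
    """Standard conversion from symbol to bit error rate for
    orthogonal signalling."""
    return ser * (M / 2) / (M - 1)

# Example: M = 8, post-combining SINR of 8 dB
sinr = 10 ** (8 / 10)
ser = ser_mary_noncoherent(8, sinr)
print(f"SER = {ser:.3e}, BER = {ber_from_ser(8, ser):.3e}")
```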
Abstract:
Potential errors in the application of mixture theory to the analysis of multiple-frequency bioelectrical impedance data for the determination of body fluid volumes are assessed. Potential sources of error include conductive length, tissue fluid resistivity, body density, weight, and technical errors of measurement. Inclusion of inaccurate estimates of body density and weight introduces errors of typically < ±3%, but incorrect assumptions regarding conductive length or fluid resistivities may each incur errors of up to 20%.
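The roughly 20% figure for conductive-length errors is what one expects from the quadratic dependence of volume on length. A sketch using the simple uniform-conductor relationship V = ρL²/R (a simplification of the mixture-theory equations the paper actually assesses):

```python
def volume(rho, L, R):
    """Uniform-conductor relationship V = rho * L**2 / R underlying
    bioimpedance volume estimation (illustrative simplification)."""
    return rho * L ** 2 / R

# Baseline: resistivity in ohm.cm, length in cm, resistance in ohm
base = volume(40.0, 170.0, 50.0) / 1000.0   # litres

cases = [("10% high resistivity", dict(rho=44.0, L=170.0, R=50.0)),
         ("10% high length",      dict(rho=40.0, L=187.0, R=50.0))]
for name, kwargs in cases:
    v = volume(**kwargs) / 1000.0
    print(f"{name}: {100 * (v - base) / base:+.1f}% volume error")
```

A 10% resistivity error propagates linearly (about +10% in volume), while a 10% length error propagates quadratically (about +21%), consistent with the magnitudes quoted above.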
Abstract:
This study examined the relationship between isokinetic hip extensor/hip flexor strength, 1-RM squat strength, and sprint running performance for both a sprint-trained and a non-sprint-trained group. Eleven male sprinters and 8 male controls volunteered for the study. On the same day, subjects ran 20-m sprints from both a stationary start and with a 50-m acceleration distance, completed isokinetic hip extension/flexion exercises at 1.05, 4.74, and 8.42 rad·s⁻¹, and had their squat strength estimated. Stepwise multiple regression analysis showed that equations for predicting both 20-m maximum-velocity run time and 20-m acceleration time may be calculated with an error of less than 0.05 s using only isokinetic and squat strength data. However, a single regression equation for predicting both 20-m acceleration and maximum-velocity run times from isokinetic or squat tests was not found. The regression analysis indicated that hip flexor strength at all test velocities was a better predictor of sprint running performance than hip extensor strength.
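A generic forward-stepwise selection of the kind described can be sketched as follows; the demonstration data are random and the predictor names are hypothetical stand-ins for the study's isokinetic and squat measures:

```python
import numpy as np

def forward_stepwise(X, y, names, max_terms=3):
    """Greedy forward selection for a linear model: at each step add the
    predictor that most reduces the residual sum of squares (a generic
    sketch, not the exact procedure used in the study)."""
    chosen, remaining = [], list(range(X.shape[1]))
    def rss(cols):
        A = np.column_stack([np.ones(len(y)), X[:, cols]])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        return np.sum((y - A @ beta) ** 2)
    while remaining and len(chosen) < max_terms:
        best = min(remaining, key=lambda j: rss(chosen + [j]))
        chosen.append(best)
        remaining.remove(best)
        print([names[j] for j in chosen], "RSS =", round(rss(chosen), 4))
    return chosen

# Hypothetical predictors: hip flexor/extensor torque at three
# velocities plus estimated 1-RM squat; n = 19 subjects as in the study
rng = np.random.default_rng(0)
X = rng.normal(size=(19, 7))
y = 3.0 - 0.05 * X[:, 0] + 0.02 * rng.normal(size=19)  # 20-m time (s)
forward_stepwise(X, y, ["flex1.05", "flex4.74", "flex8.42",
                        "ext1.05", "ext4.74", "ext8.42", "squat"])
```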
Abstract:
The performance of three analytical methods for multiple-frequency bioelectrical impedance analysis (MFBIA) data was assessed. The methods were the established method of Cole and Cole, the newly proposed method of Siconolfi and co-workers, and a modification of this procedure. Method performance was assessed from the adequacy of the curve-fitting techniques, as judged by the correlation coefficient and standard error of the estimate, and from the accuracy of the different methods in determining the theoretical values of impedance parameters describing a set of model electrical circuits. The experimental data were well fitted by all curve-fitting procedures (r = 0.9 with SEE 0.3 to 3.5% or better for most circuit-procedure combinations). Cole-Cole modelling provided the most accurate estimates of circuit impedance values, generally within 1-2% of the theoretical values, followed by the Siconolfi procedure using a sixth-order polynomial regression (1-6% variation). None of the methods, however, accurately estimated circuit parameters when the measured impedances were low (< 20 Ω), reflecting the electronic limits of the impedance meter used. These data suggest that Cole-Cole modelling remains the preferred method for the analysis of MFBIA data.
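A minimal Python sketch of Cole-Cole model fitting of the kind compared in this study, applied to synthetic impedance data (the parameter values and frequency range are illustrative):

```python
import numpy as np
from scipy.optimize import least_squares

def cole(params, f):
    """Cole impedance model Z(f) = Rinf + (R0 - Rinf) / (1 + (j f/fc)**alpha)."""
    r0, rinf, fc, alpha = params
    return rinf + (r0 - rinf) / (1 + (1j * f / fc) ** alpha)

def residuals(params, f, z):
    """Stack real and imaginary misfits for least-squares fitting."""
    d = cole(params, f) - z
    return np.concatenate([d.real, d.imag])

# Synthetic MFBIA sweep, ~3 kHz to 1 MHz (frequencies in kHz);
# true values: R0 = 600, Rinf = 400, fc = 50 kHz, alpha = 0.7
f = np.logspace(0.5, 3, 50)
z = cole([600.0, 400.0, 50.0, 0.7], f)
z += np.random.default_rng(1).normal(scale=0.5, size=f.size)  # noise (real part)

fit = least_squares(residuals, x0=[500.0, 300.0, 30.0, 0.6], args=(f, z))
print("R0, Rinf, fc (kHz), alpha =", np.round(fit.x, 3))
```

The fitted R0 and Rinf are the quantities that feed the body-fluid volume equations, which is why the adequacy of this curve fit drives the accuracy comparison reported above.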