104 results for Mathematical Techniques--Error Analysis
in University of Queensland eSpace - Australia
Abstract:
Surge flow phenomena, e.g., as a consequence of a dam failure or a flash flood, represent free boundary problems. The extending computational domain together with the discontinuities involved renders their numerical solution a cumbersome procedure. This contribution proposes an analytical solution to the problem. It is based on the slightly modified zero-inertia (ZI) differential equations for nonprismatic channels and uses exclusively physical parameters. Employing the concept of a momentum-representative cross section of the moving water body, together with a specific relationship for describing the cross-sectional geometry, leads, after considerable mathematical calculus, to the analytical solution. The hydrodynamic analytical model is free of numerical troubles, easy to run, computationally efficient, and fully satisfies the law of volume conservation. In a first test series, the hydrodynamic analytical ZI model compares very favorably with a full hydrodynamic numerical model with respect to published results of surge flow simulations in different types of prismatic channels. In order to extend these considerations to natural rivers, the accuracy of the analytical model in describing an irregular cross section is investigated and tested successfully. A sensitivity and error analysis reveals the important impact of the hydraulic radius on the velocity of the surge, which underlines the importance of an adequate description of the topography. The new approach is finally applied to simulate a surge propagating down the irregularly shaped Isar Valley in the Bavarian Alps after a hypothetical dam failure. The straightforward and fully stable computation of the flood hydrograph along the Isar Valley clearly reflects the impact of the strongly varying topographic characteristics on the flow phenomenon. Apart from treating surge flow phenomena as a whole, the analytical solution also offers a rigorous alternative to both (a) the approximate Whitham solution, for generating initial values, and (b) the rough volume balance techniques used to model the wave tip in numerical surge flow computations.
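For context, the zero-inertia (diffusive-wave) approximation underlying the ZI model is conventionally written as follows; this is the standard textbook form, not the authors' slightly modified nonprismatic version:

\begin{align}
  \frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} &= 0
    && \text{(continuity)} \\
  S_f &= S_0 - \frac{\partial y}{\partial x}
    && \text{(momentum with inertia terms dropped)}
\end{align}

where $A$ is the wetted cross-sectional area, $Q$ the discharge, $y$ the flow depth, $S_0$ the bed slope, and $S_f$ the friction slope (e.g., from Manning's equation).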
Abstract:
We describe in detail the theory underpinning the measurement of density matrices of a pair of quantum two-level systems (qubits). Our particular emphasis is on qubits realized by the two polarization degrees of freedom of a pair of entangled photons generated in a down-conversion experiment; however, the discussion applies in general, regardless of the actual physical realization. Two techniques are discussed, namely, a tomographic reconstruction (in which the density matrix is linearly related to a set of measured quantities) and a maximum likelihood technique which requires numerical optimization (but has the advantage of producing density matrices that are always non-negative definite). In addition, a detailed error analysis is presented, allowing errors in quantities derived from the density matrix, such as the entropy or entanglement of formation, to be estimated. Examples based on down-conversion experiments are used to illustrate our results.
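As a concrete illustration of the linear (tomographic) reconstruction described above, here is a minimal single-qubit sketch in Python; the paper treats two-qubit density matrices, and the Stokes parameters S below are hypothetical measured expectation values, not experimental data:

import numpy as np

# Pauli basis
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Hypothetical measured Stokes parameters S_i = <sigma_i>
S = {'I': 1.0, 'X': 0.35, 'Y': -0.20, 'Z': 0.89}

# Linear tomographic reconstruction: rho = (1/2) * sum_i S_i * sigma_i
rho = 0.5 * (S['I'] * I + S['X'] * X + S['Y'] * Y + S['Z'] * Z)

# Measurement noise can make rho non-physical (negative eigenvalues),
# which is why a maximum-likelihood fit constrained to positive
# semidefinite matrices is often preferred.
print(np.linalg.eigvalsh(rho))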
Abstract:
Normal mixture models are often used to cluster continuous data. However, conventional approaches for fitting these models will have problems in producing nonsingular estimates of the component-covariance matrices when the dimension of the observations is large relative to the number of observations. In this case, methods such as principal components analysis (PCA) and the mixture of factor analyzers model can be adopted to avoid these estimation problems. We examine these approaches applied to the Cabernet wine data set of Ashenfelter (1999), considering the clustering of both the wines and the judges, and comparing our results with another analysis. The mixture of factor analyzers model proves particularly effective in clustering the wines, accurately classifying many of the wines by location.
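A minimal sketch of the PCA-based route mentioned above, using scikit-learn; the mixture of factor analyzers has no direct scikit-learn implementation, and the data here are random stand-ins rather than the Ashenfelter wine data:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 30))  # stand-in for a wines-by-attributes matrix

# Reduce the dimension first so the component-covariance estimates stay
# nonsingular, then fit a normal mixture in the reduced space.
Z = PCA(n_components=5).fit_transform(X)
labels = GaussianMixture(n_components=3, covariance_type='full',
                         random_state=0).fit_predict(Z)
print(labels)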
Abstract:
The critical process parameter for mineral separation is the degree of mineral liberation achieved by comminution. The degree of liberation provides an upper limit of efficiency for any physical separation process. The standard approach to measuring mineral liberation uses mineralogical analysis based on two-dimensional sections of particles, which may be acquired using a scanning electron microscope and back-scatter electron analysis or from an analysis of an image acquired using an optical microscope. Over the last 100 years, mathematical techniques have been developed to use this two-dimensional information to infer three-dimensional information about the particles. For mineral processing, a particle that contains more than one mineral (a composite particle) may appear to be liberated (contain only one mineral) when analysed using only its revealed particle section. The mathematical techniques used to interpret three-dimensional information belong to a branch of mathematics called stereology. However, methods to obtain the full mineral liberation distribution of particles from particle sections are relatively new. To verify these adjustment methods, an experimental method is required that can accurately measure both sectional and three-dimensional properties. Micro cone beam tomography provides such a method for suitable particles and hence provides a way to validate methods used to convert two-dimensional measurements to three-dimensional estimates. For this study, ore particles from a well-characterised sample were subjected to conventional mineralogical analysis (using particle sections) to estimate three-dimensional properties of the particles. A subset of these particles was analysed using a micro cone beam tomograph. This paper presents a comparison of the three-dimensional properties predicted from measured two-dimensional sections with the measured three-dimensional properties.
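A toy stereological example, for orientation only: Delesse's classical principle states that the areal fraction of a phase on a random section is an unbiased estimator of its volume fraction (A_A = V_V). Recovering the full liberation distribution, as the paper addresses, is a substantially harder inverse problem; the mask below is synthetic:

import numpy as np

rng = np.random.default_rng(1)
# Hypothetical binary section image: True where the mineral phase appears.
section = rng.random((512, 512)) < 0.3

# Delesse: areal fraction on the section estimates the volume fraction.
areal_fraction = section.mean()
print(f"Estimated volume fraction: {areal_fraction:.3f}")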
Abstract:
An inherent incomputability in the specification of a functional language extension that combines assertions with dynamic type checking is isolated in an explicit derivation from mathematical specifications. The combination of types and assertions (into "dynamic assertion-types" - DATs) is a significant issue: since the two are congruent means for establishing program correctness, benefit arises from their better integration, in contrast to the harm resulting from their unnecessary separation. However, projecting the "set membership" view of assertion-checking into dynamic types results in some incomputable combinations. Refinement of the specification of DAT checking into an implementation by rigorous application of mathematical identities becomes feasible through the addition of a "best-approximate" pseudo-equality that isolates the incomputable component of the specification. This formal treatment leads to an improved, more maintainable outcome with further development potential.
Abstract:
This article considers the question of what specific actions a teacher might take to create a culture of inquiry in a secondary school mathematics classroom. Sociocultural theories of learning provide the framework for examining teaching and learning practices in a single classroom over a two-year period. The notion of the zone of proximal development (ZPD) is invoked as a fundamental framework for explaining learning as increasing participation in a community of practice characterized by mathematical inquiry. The analysis draws on classroom observation and interviews with students and the teacher to show how the teacher established norms and practices that emphasized mathematical sense-making and justification of ideas and arguments and to illustrate the learning practices that students developed in response to these expectations.
Abstract:
Quantum computers hold great promise for solving interesting computational problems, but it remains a challenge to find efficient quantum circuits that can perform these complicated tasks. Here we show that finding optimal quantum circuits is essentially equivalent to finding the shortest path between two points in a certain curved geometry. By recasting the problem of finding quantum circuits as a geometric problem, we open up the possibility of using the mathematical techniques of Riemannian geometry to suggest new quantum algorithms or to prove limitations on the power of quantum computers.
Bias, precision and heritability of self-reported and clinically measured height in Australian twins
Abstract:
Many studies of quantitative and disease traits in human genetics rely upon self-reported measures. Such measures are based on questionnaires or interviews and are often cheaper and more readily available than alternatives. However, the precision and potential bias cannot usually be assessed. Here we report a detailed quantitative genetic analysis of stature. We characterise the degree of measurement error by utilising a large sample of Australian twin pairs (857 MZ, 815 DZ) with both clinical and self-reported measures of height. Self-report height measurements are shown to be more variable than clinical measures. This has led to lowered estimates of heritability in many previous studies of stature. In our twin sample the heritability estimate for clinical height exceeded 90%. Repeated measures analysis shows that 2-3 times as many self-report measures are required to recover heritability estimates similar to those obtained from clinical measures. Bivariate genetic repeated measures analysis of self-report and clinical height measures showed an additive genetic correlation > 0.98. We show that self-reported height is upwardly biased in older individuals and in individuals of short stature. By comparing clinical and self-report measures we also showed that there was a genetic component to females systematically reporting their height incorrectly; this phenomenon appeared not to be present in males. The results from the measurement error analysis were subsequently used to assess the effects of error on the power to detect linkage in a genome scan. Moderate reduction in error (through the use of accurate clinical or multiple self-report measures) increased the effective sample size by 22%; elimination of measurement error led to an increase in effective sample size of 41%.
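A back-of-envelope sketch of the repeated-measures logic, with hypothetical reliabilities rather than the paper's estimates: measurement error attenuates the apparent heritability by the reliability r of the measure, and averaging k repeated measures raises reliability via the Spearman-Brown formula r_k = k*r / (1 + (k-1)*r):

h2_true = 0.92      # assumed true heritability of stature
r_clinical = 0.98   # assumed reliability of a clinical measurement
r_self = 0.93       # assumed reliability of a single self-report

def spearman_brown(r, k):
    # Reliability of the mean of k parallel measures.
    return k * r / (1 + (k - 1) * r)

for k in (1, 2, 3):
    r_k = spearman_brown(r_self, k)
    print(f"k={k}: reliability={r_k:.3f}, apparent h2={h2_true * r_k:.3f}")

# Under these assumed values, 2-3 averaged self-reports bring the
# reliability close to the clinical figure, consistent with the
# repeated-measures argument in the abstract.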
Abstract:
Performance prediction models for partial face mechanical excavators, when developed in laboratory conditions, depend on relating the results of a set of rock property tests and indices to specific cutting energy (SE) for various rock types. Some studies in the literature aim to correlate the geotechnical properties of intact rocks with the SE, especially for massive and widely jointed rock environments. However, those including direct and/or indirect measures of rock fracture parameters, such as rock brittleness and fracture toughness, along with the other rock parameters expressing different aspects of rock behavior under drag tools (picks), are rather limited. This study investigates the relationships between indirect measures of rock brittleness and fracture toughness and the SE, drawing on the results of one new and two previous linear rock cutting programmes. Relationships between the SE, rock strength parameters, and rock index tests have also been investigated. Sandstone samples taken from different fields around Ankara, Turkey were used in the new testing programme. Detailed mineralogical analyses, petrographic studies, and rock mechanics and rock cutting tests were performed on these selected sandstone specimens. The assessment of rock cuttability was based on the SE. Three different brittleness indices (B1, B2, and B4) were calculated for the sandstone samples, whereas a toughness index (T-i), developed by Atkinson et al. (1), was employed to represent indirect rock fracture toughness. The relationships between the SE and the large amount of new data obtained from the mineralogical analyses, petrographic studies, rock mechanics, and linear rock cutting tests were evaluated using bivariate correlation and curve fitting techniques, variance analysis, and Student's t-test. Rock cutting and rock property testing data from the well-known studies of McFeat-Smith and Fowell (2) and Roxborough and Philips (3) were also employed in the statistical analyses together with the new data. Laboratory tests and subsequent analyses revealed close correlations between the SE and B4, whereas no statistically significant correlation was found between the SE and T-i. Uniaxial compressive and Brazilian tensile strengths and Shore scleroscope hardness of the sandstones also exhibited strong relationships with the SE. The NCB cone indenter test had the greatest influence on the SE among the engineering properties of rocks, confirming previous studies in rock cutting and mechanical excavation. Therefore, in the absence of a rock cutting rig, the easy-to-use NCB cone indenter and Shore scleroscope index tests are recommended for estimating the laboratory SE of sandstones ranging from very low to high strength, until easy-to-use universal measures of rock brittleness and, especially, rock fracture toughness (an intrinsic rock property) are developed.
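A sketch of the kind of bivariate correlation analysis described, using SciPy; the SE and brittleness-index values below are invented for illustration and are not the paper's data:

import numpy as np
from scipy import stats

se = np.array([18.2, 22.5, 25.1, 30.4, 33.8, 41.0])   # SE, MJ/m^3 (hypothetical)
b4 = np.array([ 4.1,  5.0,  5.6,  6.9,  7.4,  9.2])   # brittleness index (hypothetical)

# Pearson correlation and its two-sided p-value (a t-test on r).
r, p = stats.pearsonr(se, b4)
print(f"Pearson r = {r:.3f}, p = {p:.4f}")

# A small p-value (e.g., < 0.05) would indicate a statistically
# significant linear relationship between SE and the index.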
Abstract:
We present the first characterization of the mechanical properties of lysozyme films formed by self-assembly at the air-water interface using the Cambridge interfacial tensiometer (CIT), an apparatus capable of subjecting protein films to a much higher level of extensional strain than traditional dilatational techniques. CIT analysis, which is insensitive to surface pressure, provides a direct measure of the extensional stress-strain behavior of an interfacial film without the need to assume a mechanical model (e.g., viscoelastic), and without requiring difficult-to-test assumptions regarding low-strain material linearity. This testing method has revealed that the bulk solution pH from which assembly of an interfacial lysozyme film occurs influences the mechanical properties of the film more significantly than is suggested by the observed differences in elastic moduli or surface pressure. We have also identified a previously undescribed pH dependency in the effect of solution ionic strength on the mechanical strength of the lysozyme films formed at the air-water interface. Increasing solution ionic strength was found to increase lysozyme film strength when assembly occurred at pH 7, but it caused a decrease in film strength at pH 11, close to the pI of lysozyme. This result is discussed in terms of the significant contribution made to protein film strength by both electrostatic interactions and the hydrophobic effect. Washout experiments to remove protein from the bulk phase have shown that a small percentage of the interfacially adsorbed lysozyme molecules are reversibly adsorbed. Finally, the washout tests have probed the role played by additional adsorption to the fresh interface formed by the application of a large strain to the lysozyme film and have suggested the movement of reversibly bound lysozyme molecules from a subinterfacial layer to the interface.