31 results for Millionaire Problem, Efficiency, Verifiability, Zero Test, Batch Equation
at University of Queensland eSpace - Australia
Abstract:
The step size determines the accuracy of a discrete element simulation. Because the position and velocity updates are computed from a pre-calculated table, step size control cannot rely on the usual integration formulas. A step size control scheme for use with the table-driven velocity and position calculation instead uses the difference between the result of one big step and that of two small steps. This variable time step method automatically chooses a suitable time step size for each particle at each step according to the local conditions. Simulations using the fixed time step method are compared with those using the variable time step method. The difference in computation time for the same accuracy using a variable step size (compared with a fixed step) depends on the particular problem. For a simple test case the times are roughly similar. However, the variable step size gives the required accuracy on the first run, whereas a fixed step size may require several runs to check the simulation accuracy, or a conservative step size that results in longer run times. (C) 2001 Elsevier Science Ltd. All rights reserved.
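To illustrate the big-step/two-half-steps idea in its simplest form, the sketch below adjusts the time step from the discrepancy between one full step and two half steps (a minimal illustration only, not the paper's table-driven scheme; the update rule, tolerances and names are assumed):

def advance(x, v, a, dt):
    # Semi-implicit Euler update; stand-in for the table-driven update of one particle.
    v_new = v + a(x) * dt
    return x + v_new * dt, v_new

def adaptive_step(x, v, a, dt, tol=1e-6):
    """Compare one big step with two half steps and adjust dt accordingly."""
    x_big, v_big = advance(x, v, a, dt)
    x_half, v_half = advance(x, v, a, dt / 2)
    x_two, v_two = advance(x_half, v_half, a, dt / 2)
    err = max(abs(x_big - x_two), abs(v_big - v_two))
    if err > tol:                        # too inaccurate: halve the step and retry
        return adaptive_step(x, v, a, dt / 2, tol)
    if err < tol / 10:                   # comfortably accurate: enlarge the next step
        return x_two, v_two, 2 * dt
    return x_two, v_two, dt              # accept with the current step size

x, v, dt = adaptive_step(1.0, 0.0, lambda x: -x, 0.1)   # e.g. a linear spring force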
Abstract:
Mixed confined and unconfined groundwater flow occurs in a bounded, initially dry aquifer when the hydraulic head at the side boundary suddenly rises above the elevation of the aquifer's top boundary. The flow problem as modelled by the Boussinesq equation is non-trivial because two moving boundaries are involved. The transformed equation (based on a similarity transformation) can, however, be dealt with more easily. Here, we present an approximate analytical solution for this flow problem. The approximate solution is compared with an 'exact' numerical solution and found to give a very accurate description of the mixed confined and unconfined flow in the confined aquifer. (C) 2002 Elsevier Science B.V. All rights reserved.
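For reference, problems of this type are governed by the one-dimensional Boussinesq equation, and a Boltzmann-type similarity variable is the standard route to the transformed equation mentioned above (the notation here is assumed, not taken from the paper):

\[
\frac{\partial h}{\partial t} = \frac{K}{n_e}\,\frac{\partial}{\partial x}\!\left(h\,\frac{\partial h}{\partial x}\right),
\qquad
\eta = \frac{x}{\sqrt{t}},
\]

where $h$ is the hydraulic head, $K$ the hydraulic conductivity and $n_e$ the effective porosity. Writing $h(x,t)=f(\eta)$ reduces the partial differential equation to an ordinary one, and boundaries that move like $\sqrt{t}$ become fixed points in $\eta$.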
Abstract:
Surge flow phenomena, e.g. as a consequence of a dam failure or a flash flood, represent free boundary problems. The extending computational domain together with the discontinuities involved renders their numerical solution a cumbersome procedure. This contribution proposes an analytical solution to the problem. It is based on the slightly modified zero-inertia (ZI) differential equations for nonprismatic channels and uses exclusively physical parameters. Employing the concept of a momentum-representative cross section of the moving water body together with a specific relationship for describing the cross-sectional geometry leads, after considerable mathematical calculus, to the analytical solution. The hydrodynamic analytical model is free of numerical troubles, easy to run, computationally efficient, and fully satisfies the law of volume conservation. In a first test series, the hydrodynamic analytical ZI model compares very favorably with a full hydrodynamic numerical model with respect to published results of surge flow simulations in different types of prismatic channels. In order to extend these considerations to natural rivers, the accuracy of the analytical model in describing an irregular cross section is investigated and tested successfully. A sensitivity and error analysis reveals the important impact of the hydraulic radius on the velocity of the surge, and this underlines the importance of an adequate description of the topography. The new approach is finally applied to simulate a surge propagating down the irregularly shaped Isar Valley in the Bavarian Alps after a hypothetical dam failure. The straightforward and fully stable computation of the flood hydrograph along the Isar Valley clearly reflects the impact of the strongly varying topographic characteristics on the flow phenomenon. Apart from treating surge flow phenomena as a whole, the analytical solution also offers a rigorous alternative to both (a) the approximate Whitham solution, for generating initial values, and (b) the rough volume balance techniques used to model the wave tip in numerical surge flow computations.
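In the zero-inertia (diffusion wave) approximation on which such models are built, the inertial terms of the Saint-Venant momentum equation are dropped, leaving continuity plus a balance of water-surface slope, bed slope and friction slope (a generic statement; the paper's slightly modified form for nonprismatic channels is not reproduced here):

\[
\frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} = 0,
\qquad
\frac{\partial y}{\partial x} = S_0 - S_f,
\]

with $A$ the flow area, $Q$ the discharge, $y$ the flow depth, $S_0$ the bed slope and $S_f$ the friction slope.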
Abstract:
Smoothing the potential energy surface for structure optimization is a general and commonly applied strategy. We propose a combination of soft-core potential energy functions and a variation of the diffusion equation method to smooth potential energy surfaces, which is applicable to complex systems such as protein structures. The performance of the method was demonstrated by comparison with simulated annealing using the refinement of the undecapeptide Cyclosporin A as a test case. Simulations were repeated many times using different initial conditions and structures since the methods are heuristic and results are only meaningful in a statistical sense.
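The diffusion equation method referred to above deforms the surface by evolving the potential under a diffusion (heat) equation, which is equivalent to convolving it with a Gaussian of growing width; minimization is carried out on the smoothed surface and the deformation is then gradually reversed (a standard statement of the method, with symbols assumed here):

\[
\frac{\partial V(\mathbf{r},t)}{\partial t} = \nabla^{2} V(\mathbf{r},t),
\qquad
V(\mathbf{r},t) = \frac{1}{(4\pi t)^{d/2}} \int V(\mathbf{r}',0)\, e^{-\lVert \mathbf{r}-\mathbf{r}'\rVert^{2}/4t}\, d\mathbf{r}',
\]

where $d$ is the dimension of the configuration space and the smoothing "time" $t$ controls how aggressively barriers are washed out.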
Abstract:
We present Ehrenfest relations for the high temperature stochastic Gross-Pitaevskii equation description of a trapped Bose gas, including the effect of growth noise and the energy cutoff. A condition for neglecting the cutoff terms in the Ehrenfest relations is found which is more stringent than the usual validity condition of the truncated Wigner or classical field method-that all modes are highly occupied. The condition requires a small overlap of the nonlinear interaction term with the lowest energy single particle state of the noncondensate band, and gives a means to constrain dynamical artefacts arising from the energy cutoff in numerical simulations. We apply the formalism to two simple test problems: (i) simulation of the Kohn mode oscillation for a trapped Bose gas at zero temperature, and (ii) computing the equilibrium properties of a finite temperature Bose gas within the classical field method. The examples indicate ways to control the effects of the cutoff, and that there is an optimal choice of plane wave basis for a given cutoff energy. This basis gives the best reproduction of the single particle spectrum, the condensate fraction and the position and momentum densities.
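For orientation, when the noise and cutoff terms can be neglected the Ehrenfest relations for a harmonically trapped gas reduce to the centre-of-mass equations behind the Kohn mode used in test problem (i) (a generic statement rather than the paper's full result; notation assumed):

\[
\frac{d\langle x\rangle}{dt} = \frac{\langle p_x\rangle}{m},
\qquad
\frac{d\langle p_x\rangle}{dt} = -\,m\omega_x^{2}\,\langle x\rangle,
\]

so the centre of mass oscillates at the bare trap frequency $\omega_x$ regardless of interactions, which is what makes the Kohn mode a sensitive probe of cutoff artefacts.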
Abstract:
To account for the preponderance of zero counts and the simultaneous correlation of observations, a class of zero-inflated Poisson mixed regression models can be applied to accommodate the within-cluster dependence. In this paper, a score test for zero-inflation is developed for assessing correlated count data with excess zeros. The sampling distribution and the power of the test statistic are evaluated by simulation studies. The results show that the test statistic performs satisfactorily under a wide range of conditions. The test procedure is further illustrated using a data set on recurrent urinary tract infections. Copyright (c) 2005 John Wiley & Sons, Ltd.
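For the independent-data case the classical score statistic for zero-inflation against a plain Poisson null can be written down directly; the sketch below implements only that simpler version as a pointer to the idea (the paper's test for correlated, covariate-dependent data is more involved; the formula used here follows the usual i.i.d. form and the function name is invented):

import numpy as np
from scipy.stats import chi2

def zip_score_test(y):
    """Score test for zero-inflation against an i.i.d. Poisson null."""
    y = np.asarray(y, dtype=float)
    n, ybar = y.size, y.mean()
    p0 = np.exp(-ybar)                  # Poisson probability of a zero under the null
    n0 = np.sum(y == 0)                 # observed number of zeros
    stat = (n0 / p0 - n) ** 2 / (n * (1.0 / p0 - 1.0) - n * ybar)
    return stat, chi2.sf(stat, df=1)    # asymptotically chi-square with 1 d.f.

stat, pval = zip_score_test(np.random.poisson(1.5, size=200))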
Abstract:
Calcium oxide has been identified as one of the best candidates for CO2 capture in zero-emission power-generation systems. However, it suffers from a well-known problem of loss-in-capacity (i.e., its capacity for CO2 capture decreases after it undergoes cycles of carbonation/decarbonation). This problem is a potential obstacle to the adoption of the new technologies. This paper proposes a method of fabricating a CaO-based adsorbent without the problem of loss-in-capacity. An adsorbent was fabricated using the method and tested on a thermogravimetric analyzer. It was shown that the adsorbent attained a utilization efficiency of more than 90% after 9 cycles of carbonation/decarbonation.
Abstract:
The development of models in the Earth Sciences, e.g. for earthquake prediction and for the simulation of mantle convection, is far from finalized. There is therefore a need for a modelling environment that allows scientists to implement and test new models in an easy but flexible way. After being verified, the models should be easy to apply within their scope, typically by setting input parameters through a GUI or web services. It should be possible to link certain parameters to external data sources, such as databases and other simulation codes. Moreover, as typically large-scale meshes have to be used to achieve appropriate resolutions, the computational efficiency of the underlying numerical methods is important. Conceptually, this leads to a software system with three major layers: the application layer, the mathematical layer, and the numerical algorithm layer. The latter is implemented as a C/C++ library to solve a basic, computationally intensive linear problem, such as a linear partial differential equation. The mathematical layer allows the model developer to define his model and to implement high-level solution algorithms (e.g. the Newton-Raphson scheme or the Crank-Nicolson scheme) or to choose these algorithms from an algorithm library. The kernels of the model are generic, typically linear, solvers provided through the numerical algorithm layer. Finally, to provide an easy-to-use application environment, a web interface is (semi-automatically) built to edit the XML input file for the modelling code. In the talk, we will discuss the advantages and disadvantages of this concept in more detail. We will also present the modelling environment escript, which is a prototype implementation of such a software system in Python (see www.python.org). Key components of escript are the Data class and the PDE class. Objects of the Data class allow generating, holding, accessing, and manipulating data in such a way that the representation that is best in the particular context is transparent to the user. They are also the key to establishing connections with external data sources. PDE class objects describe (linear) partial differential equations to be solved by a numerical library. The current implementation of escript has been linked to the finite element code Finley to solve general linear partial differential equations. We will give a few simple examples which illustrate the usage of escript. Moreover, we show the usage of escript together with Finley for the modelling of interacting fault systems and for the simulation of mantle convection.
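As a flavour of how the layering looks to a model developer, a minimal escript-style script is sketched below (modelled on typical escript/Finley usage; exact module and class names may differ between versions and are assumptions here):

# Solve a simple Poisson problem on a Finley rectangle through the escript PDE layer.
from esys.escript import whereZero
from esys.escript.linearPDEs import Poisson
from esys.finley import Rectangle                # Finley supplies the finite element mesh

domain = Rectangle(l0=1.0, l1=1.0, n0=40, n1=40) # unit square, 40 x 40 elements
x = domain.getX()

pde = Poisson(domain)                            # PDE-class object solved by the numerical layer
pde.setValue(f=1.0,                              # right-hand side
             q=whereZero(x[0]))                  # Dirichlet condition on the x0 = 0 face
u = pde.getSolution()                            # Data-class object holding the solution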
Abstract:
In the context of cancer diagnosis and treatment, we consider the problem of constructing an accurate prediction rule on the basis of a relatively small number of tumor tissue samples of known type containing the expression data on very many (possibly thousands) genes. Recently, results have been presented in the literature suggesting that it is possible to construct a prediction rule from only a few genes such that it has a negligible prediction error rate. However, in these results the test error or the leave-one-out cross-validated error is calculated without allowance for the selection bias. There is no allowance because the rule is either tested on tissue samples that were used in the first instance to select the genes being used in the rule or because the cross-validation of the rule is not external to the selection process; that is, gene selection is not performed in training the rule at each stage of the cross-validation process. We describe how in practice the selection bias can be assessed and corrected for by either performing a cross-validation or applying the bootstrap external to the selection process. We recommend using 10-fold rather than leave-one-out cross-validation, and concerning the bootstrap, we suggest using the so-called .632+ bootstrap error estimate designed to handle overfitted prediction rules. Using two published data sets, we demonstrate that when correction is made for the selection bias, the cross-validated error is no longer zero for a subset of only a few genes.
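The point that gene selection must be repeated inside every training fold can be made concrete with a small sketch (illustrative only; scikit-learn is used as a stand-in for the software of the paper, and the sample sizes and parameters are invented):

import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5000))          # 60 tissue samples, 5000 genes, pure noise
y = rng.integers(0, 2, size=60)          # two tumour classes assigned at random

# Biased: genes selected on ALL samples, only the classifier is cross-validated.
X_sel = SelectKBest(f_classif, k=10).fit_transform(X, y)
biased = cross_val_score(LinearSVC(), X_sel, y, cv=StratifiedKFold(10)).mean()

# External (unbiased): selection is refitted inside each training fold via a pipeline.
pipe = Pipeline([("select", SelectKBest(f_classif, k=10)), ("clf", LinearSVC())])
unbiased = cross_val_score(pipe, X, y, cv=StratifiedKFold(10)).mean()

print(biased, unbiased)   # the biased estimate looks impressive on noise; the external one stays near 0.5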
Abstract:
An approximate analytical technique employing a finite integral transform is developed to solve the reaction diffusion problem with Michaelis-Menten kinetics in a solid of general shape. A simple infinite series solution for the substrate concentration is obtained as a function of the Thiele modulus, modified Sherwood number, and Michaelis constant. An iteration scheme is developed to bring the approximate solution closer to the exact solution. Comparison with the known exact solutions for slab geometry (quadrature) and numerically exact solutions for spherical geometry (orthogonal collocation) shows excellent agreement for all values of the Thiele modulus and Michaelis constant.
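In the usual dimensionless statement of this problem the governing equation and the external mass-transfer boundary condition take the form below (standard notation assumed here, since the abstract does not define it):

\[
\nabla^{2} u = \phi^{2}\,\frac{u}{1+\beta u} \ \ \text{in the solid},
\qquad
\left.\frac{\partial u}{\partial n}\right|_{\text{surface}} = \mathrm{Sh}\,(1-u),
\]

where $u$ is the dimensionless substrate concentration, $\phi$ the Thiele modulus, $\beta$ a dimensionless Michaelis constant and $\mathrm{Sh}$ the modified Sherwood number.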
Abstract:
A reversible linear master equation model is presented for pressure- and temperature-dependent bimolecular reactions proceeding via multiple long-lived intermediates. This kinetic treatment, which applies when the reactions are measured under pseudo-first-order conditions, facilitates accurate and efficient simulation of the time dependence of the populations of reactants, intermediate species and products. Detailed exploratory calculations have been carried out to demonstrate the capabilities of the approach, with applications to the bimolecular association reaction C3H6 + H ⇌ C3H7 and the bimolecular chemical activation reaction C2H2 + ¹CH2 → C3H3 + H. The efficiency of the method can be dramatically enhanced through use of a diffusion approximation to the master equation, and a methodology for exploiting the sparse structure of the resulting rate matrix is established.
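Because the pseudo-first-order master equation is linear, the populations evolve as dp/dt = Kp with a sparse rate matrix K, and the time dependence can be propagated directly; a toy sketch follows (the states and rate constants below are invented purely for illustration):

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import expm_multiply

# Toy three-state linear master equation dp/dt = K p (reactant, intermediate, product).
# Off-diagonal K[i, j] is the rate j -> i; columns sum to zero, so probability is conserved.
k_ri, k_ir, k_ip = 2.0, 0.5, 1.0                  # invented pseudo-first-order rates (1/s)
K = csr_matrix(np.array([[-k_ri,  k_ir,          0.0],
                         [ k_ri, -(k_ir + k_ip), 0.0],
                         [ 0.0,   k_ip,          0.0]]))

p0 = np.array([1.0, 0.0, 0.0])                    # everything starts in the reactant state
times = np.linspace(0.0, 5.0, 11)
pops = expm_multiply(K, p0, start=0.0, stop=5.0, num=11)   # populations at each time
for t, p in zip(times, pops):
    print(f"t={t:4.1f}  reactant={p[0]:.3f}  intermediate={p[1]:.3f}  product={p[2]:.3f}")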
Abstract:
Heat transfer and entropy generation analysis of the thermally developing forced convection in a porous-saturated duct of rectangular cross-section, with walls maintained at a constant and uniform heat flux, is investigated based on the Brinkman flow model. The classical Galerkin method is used to obtain the fully developed velocity distribution. To solve the thermal energy equation, with the effects of viscous dissipation being included, the Extended Weighted Residuals Method (EWRM) is applied. The local (three dimensional) temperature field is solved by utilizing the Green’s function solution based on the EWRM where symbolic algebra is being used for convenience in presentation. Following the computation of the temperature field, expressions are presented for the local Nusselt number and the bulk temperature as a function of the dimensionless longitudinal coordinate, the aspect ratio, the Darcy number, the viscosity ratio, and the Brinkman number. With the velocity and temperature field being determined, the Second Law (of Thermodynamics) aspect of the problem is also investigated. Approximate closed form solutions are also presented for two limiting cases of MDa values. It is observed that decreasing the aspect ratio and MDa values increases the entropy generation rate.
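For context, the fully developed axial velocity in the Brinkman model satisfies the momentum balance below, in which the viscosity ratio M and Darcy number Da combine into the group MDa referred to above (a standard form; the symbols are assumed rather than taken from the paper):

\[
\mu_{\mathrm{eff}}\left(\frac{\partial^{2} u}{\partial y^{2}}+\frac{\partial^{2} u}{\partial z^{2}}\right)
-\frac{\mu}{K}\,u = \frac{dp}{dx},
\qquad
M=\frac{\mu_{\mathrm{eff}}}{\mu},
\quad
\mathrm{Da}=\frac{K}{a^{2}},
\]

where $u$ is the axial velocity, $K$ the permeability and $a$ a characteristic duct dimension; small MDa pushes the flow toward the Darcy limit, large MDa toward the clear-fluid limit.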
Abstract:
To translate and transfer solution data between two totally different meshes (i.e. mesh 1 and mesh 2), a consistent point-searching algorithm for solution interpolation in unstructured meshes consisting of 4-node bilinear quadrilateral elements is presented in this paper. The proposed algorithm has the following significant advantages: (1) The use of a point-searching strategy allows a point in one mesh to be accurately related to an element (containing this point) in another mesh. Thus, to translate/transfer the solution of any particular point from mesh 2 to mesh 1, only one element in mesh 2 needs to be inversely mapped. This certainly minimizes the number of elements to which the inverse mapping is applied. In this regard, the present algorithm is very effective and efficient. (2) Analytical solutions to the local coordinates of any point in a four-node quadrilateral element, which are derived in a rigorous mathematical manner in the context of this paper, make it possible to carry out an inverse mapping process very effectively and efficiently. (3) The use of consistent interpolation enables the interpolated solution to be compatible with an original solution and therefore guarantees an interpolated solution of extremely high accuracy. After the mathematical formulations of the algorithm are presented, the algorithm is tested and validated through a challenging problem. The related results from the test problem have demonstrated the generality, accuracy, effectiveness, efficiency and robustness of the proposed consistent point-searching algorithm. Copyright (C) 1999 John Wiley & Sons, Ltd.
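For readers who want to experiment with the inverse mapping, the sketch below recovers the local coordinates of a point inside a 4-node bilinear quadrilateral by Newton iteration (the paper instead derives closed-form expressions for the same mapping; this generic iterative version is given only as an illustration, and the function name is invented):

import numpy as np

def inverse_map_quad4(nodes, p, tol=1e-12, max_iter=20):
    """Local coordinates (xi, eta) of global point p in a 4-node bilinear
    quadrilateral with corners `nodes` (4x2 array, counter-clockwise)."""
    xi = np.zeros(2)                                   # start at the element centre
    for _ in range(max_iter):
        s, t = xi
        N = 0.25 * np.array([(1-s)*(1-t), (1+s)*(1-t), (1+s)*(1+t), (1-s)*(1+t)])
        dN = 0.25 * np.array([[-(1-t), -(1-s)],
                              [ (1-t), -(1+s)],
                              [ (1+t),  (1+s)],
                              [-(1+t),  (1-s)]])
        r = N @ nodes - p                              # residual in global coordinates
        if np.linalg.norm(r) < tol:
            break
        J = nodes.T @ dN                               # 2x2 Jacobian d(x,y)/d(xi,eta)
        xi -= np.linalg.solve(J, r)                    # Newton update
    return xi                                          # inside the element if both |xi|, |eta| <= 1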
Abstract:
Intelligence (IQ) can be seen as reflecting the efficiency of mental processes or cognition, as can performance on basic information processing (IP) tasks like those used in our ongoing Memory, Attention and Problem Solving (MAPS) study. Measures of IQ and IP are correlated and both have a genetic component, so we are studying how the genetic variance in IQ is related to the genetic variance in IP. We measured intelligence with five subscales of the Multidimensional Aptitude Battery (MAB). The IP tasks included four variants of choice reaction time (CRT) and a visual inspection time (IT) task. The influence of genetic factors on the variance in each of the IQ, IP, and IT measures was investigated in 250 identical and nonidentical twin pairs aged 16 years. For a subset of 50 pairs we have test–retest data that allow us to estimate the stability of the measures. MX was used for a multivariate genetic analysis that addresses whether the variance in IQ and IP measures is possibly mediated by common genetic factors. Analyses that show the modeled genetic and environmental influences on these measures of cognitive efficiency will be presented and their relevance to ideas on intelligence will be discussed.