934 results for Millionaire Problem, Efficiency, Verifiability, Zero Test, Batch Equation
Abstract:
Two experiments implement and evaluate a training scheme for learning to apply frequency formats to probability judgements couched in terms of percentages. Results indicate that both conditional and cumulative probability judgements can be improved in this manner; however, the scheme is insufficient to promote any deeper understanding of the problem structure. In both experiments, training on one problem type only (either conditional or cumulative risk judgements) resulted in an inappropriate transfer of the learned method at test. The obstacles facing a frequency-based training programme for teaching appropriate use of probability data are discussed. Copyright (c) 2006 John Wiley & Sons, Ltd.
Abstract:
The perspex machine arose from the unification of projective geometry with the Turing machine. It uses a total arithmetic, called transreal arithmetic, that contains real arithmetic and allows division by zero. Transreal arithmetic is redefined here. The new arithmetic has both a positive and a negative infinity, which lie at the extremes of the number line, and a number, nullity, that lies off the number line. We prove that nullity, 0/0, is a number. Hence a number may have one of four signs: negative, zero, positive, or nullity. It is, therefore, impossible to encode the sign of a number in one bit, as floating-point arithmetic attempts to do, resulting in the difficulty of having both positive and negative zeros and NaNs. Transrational arithmetic is consistent with Cantor arithmetic. In an extension to real arithmetic, the product of zero, an infinity, or nullity with its reciprocal is nullity, not unity. This avoids the usual contradictions that follow from allowing division by zero. Transreal arithmetic has a fixed algebraic structure and does not admit options as IEEE floating-point arithmetic does. Most significantly, nullity has a simple semantics that is related to zero. Zero means "no value" and nullity means "no information." We argue that nullity is as useful to a manufactured computer as zero is to a human computer. The perspex machine is intended to offer one solution to the mind-body problem by showing how the computable aspects of mind, and perhaps the whole of mind, relate to the geometrical aspects of body and, perhaps, the whole of body. We review some of Turing's writings and show that he held the view that his machine has spatial properties; in particular, that it has the property of being a 7D lattice of compact spaces. Thus, we read Turing as believing that his machine relates computation to geometrical bodies. We simplify the perspex machine by substituting an augmented Euclidean geometry for projective geometry. This leads to a general-linear perspex machine which is very much easier to program than the original perspex machine. We then show how to map the whole of perspex space into a unit cube. This allows us to construct a fractal of perspex machines with the cardinality of a real-numbered line or space. This fractal is the universal perspex machine. It can solve, in unit time, the halting problem for itself and for all perspex machines instantiated in real-numbered space, including all Turing machines. We cite an experiment that has been proposed to test the physical reality of the perspex machine's model of time, but we make no claim that the physical universe works this way or that it has the cardinality of the perspex machine. We leave it that the perspex machine provides an upper bound on the computational properties of physical things, including manufactured computers and biological organisms, that have a cardinality no greater than the real-number line.
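As an illustration of the division rules described in this abstract, here is a minimal sketch of transreal division in Python; the representation (floats plus a NULLITY sentinel) and the function name transreal_div are illustrative assumptions, not the authors' implementation.

```python
import math

NULLITY = "nullity"  # the number off the number line: 0/0 in transreal arithmetic

def transreal_div(a, b):
    """Sketch of transreal division as described in the abstract:
    division is total, with 0/0 defined as nullity ("no information")."""
    if a == NULLITY or b == NULLITY:
        return NULLITY                      # nullity propagates through operations
    if b == 0:
        if a == 0:
            return NULLITY                  # 0/0 = nullity
        return math.inf if a > 0 else -math.inf   # a/0 takes the sign of a
    if b in (math.inf, -math.inf):
        if a in (math.inf, -math.inf):
            return NULLITY                  # an infinity times its reciprocal is nullity, not unity
        return 0.0                          # finite divided by infinite is zero
    return a / b

# Examples of the behaviour stated in the abstract.
assert transreal_div(0, 0) == NULLITY
assert transreal_div(1, 0) == math.inf
```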
Abstract:
This paper presents an efficient construction algorithm for obtaining sparse kernel density estimates based on a regression approach that directly optimizes model generalization capability. Computational efficiency of the density construction is ensured using an orthogonal forward regression, and the algorithm incrementally minimizes the leave-one-out test score. A local regularization method is incorporated naturally into the density construction process to further enforce sparsity. An additional advantage of the proposed algorithm is that it is fully automatic and the user is not required to specify any criterion to terminate the density construction procedure. This is in contrast to an existing state-of-the-art kernel density estimation method using the support vector machine (SVM), where the user is required to specify some critical algorithm parameter. Several examples are included to demonstrate the ability of the proposed algorithm to effectively construct a very sparse kernel density estimate with comparable accuracy to that of the full sample optimized Parzen window density estimate. Our experimental results also demonstrate that the proposed algorithm compares favorably with the SVM method, in terms of both test accuracy and sparsity, for constructing kernel density estimates.
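For context, the kind of kernel density estimate targeted by such construction algorithms can be written as a weighted mixture of kernels; the notation below is a generic sketch, not the paper's exact formulation:

$$\hat p(\mathbf{x}) \;=\; \sum_{i=1}^{N} \beta_i\, K_h(\mathbf{x}, \mathbf{x}_i), \qquad \beta_i \ge 0, \quad \sum_{i=1}^{N} \beta_i = 1,$$

where the Parzen window estimate fixes every $\beta_i = 1/N$, while a sparse construction retains only a small subset of nonzero weights, selected one kernel at a time by orthogonal forward regression so as to minimise the leave-one-out test score.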
Abstract:
The Boltzmann equation in the presence of boundary and initial conditions, which describes the general case of carrier transport in microelectronic devices, is analysed in terms of Monte Carlo theory. The classical Ensemble Monte Carlo algorithm, which was devised from merely phenomenological considerations of the initial and boundary carrier contributions, is now derived in a formal way. The approach makes it possible to suggest a set of event-biasing algorithms for statistical enhancement as an alternative to the population control technique, which is virtually the only algorithm currently used in particle simulators. The scheme of the self-consistent coupling of the Boltzmann and Poisson equations is considered for the case of weighted particles. It is shown that the particles survive the successive iteration steps.
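For reference, the semiclassical Boltzmann transport equation for carriers that such Monte Carlo analyses start from has the standard form below (a generic statement, not the paper's exact notation):

$$\frac{\partial f}{\partial t} + \mathbf{v}(\mathbf{k}) \cdot \nabla_{\mathbf{r}} f + \frac{\mathbf{F}}{\hbar} \cdot \nabla_{\mathbf{k}} f \;=\; Q[f],$$

where $f(\mathbf{r},\mathbf{k},t)$ is the carrier distribution function, $\mathbf{F}$ is the force obtained self-consistently from the Poisson equation, and $Q[f]$ is the scattering operator; the boundary and initial conditions supply the carrier contributions discussed above.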
Abstract:
A one-dimensional shock-reflection test problem in the case of slab, cylindrical, or spherical symmetry is discussed. The differential equations for a similarity solution are derived and solved numerically in conjunction with the Rankine-Hugoniot shock relations.
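For completeness, the Rankine-Hugoniot relations referred to above express conservation of mass, momentum, and energy across the shock; for a shock moving with speed $s$ they take the standard form (not the paper's notation):

$$\rho_1 (u_1 - s) = \rho_2 (u_2 - s), \qquad \rho_1 (u_1 - s)^2 + p_1 = \rho_2 (u_2 - s)^2 + p_2, \qquad h_1 + \tfrac{1}{2}(u_1 - s)^2 = h_2 + \tfrac{1}{2}(u_2 - s)^2,$$

where $\rho$, $u$, $p$, and $h$ denote density, velocity, pressure, and specific enthalpy on the two sides of the shock.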
Abstract:
An analysis of various arithmetic averaging procedures for approximate Riemann solvers is made with a specific emphasis on efficiency and a jump capturing property. The various alternatives discussed are intended for future work, as well as the more immediate problem of steady, supercritical free-surface flows. Numerical results are shown for two test problems.
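To make the contrast between averaging procedures concrete: for the Euler equations the classical Roe linearisation uses density-square-root weighting, whereas the cheaper alternative is the plain arithmetic mean (generic formulas, not the paper's exact notation):

$$\tilde u_{\text{Roe}} = \frac{\sqrt{\rho_L}\, u_L + \sqrt{\rho_R}\, u_R}{\sqrt{\rho_L} + \sqrt{\rho_R}}, \qquad \tilde u_{\text{arith}} = \tfrac{1}{2}\,(u_L + u_R),$$

with the averaged state used to evaluate the flux Jacobian in the approximate Riemann solver.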
Abstract:
An efficient numerical method is presented for the solution of the Euler equations governing the compressible flow of a real gas. The scheme is based on the approximate solution of a specially constructed set of linearised Riemann problems. An average of the flow variables across the interface between cells is required, and this is chosen to be the arithmetic mean for computational efficiency, which is in contrast to the usual square root averaging. The scheme is applied to a test problem for five different equations of state.
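The following is a minimal sketch, for a scalar conservation law rather than the real-gas Euler system of the paper, of how an interface flux is built from a linearised Riemann problem using an arithmetic-mean average state; the function names are illustrative assumptions only.

```python
import numpy as np

def interface_flux(uL, uR, f, df):
    """Approximate (linearised) Riemann solver flux for a scalar law u_t + f(u)_x = 0.

    The Riemann problem at the cell interface is linearised about an average
    state; here the arithmetic mean is used, mirroring the cheap averaging
    discussed in the abstract (for systems one would average each variable).
    """
    u_bar = 0.5 * (uL + uR)          # arithmetic-mean average state
    a = df(u_bar)                    # wave speed of the linearised problem
    # Central flux plus upwind dissipation proportional to |a|.
    return 0.5 * (f(uL) + f(uR)) - 0.5 * abs(a) * (uR - uL)

# Example: Burgers' equation with f(u) = u^2 / 2.
f  = lambda u: 0.5 * u**2
df = lambda u: u
print(interface_flux(1.0, 0.0, f, df))   # flux at a right-moving shock
```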
Abstract:
A sparse kernel density estimator is derived based on the zero-norm constraint, in which the zero-norm of the kernel weights is incorporated to enhance model sparsity. The classical Parzen window estimate is adopted as the desired response for density estimation, and an approximate function of the zero-norm is used for achieving mathematical tractability and algorithmic efficiency. Under the mild condition of a positive definite design matrix, the kernel weights of the proposed density estimator based on the zero-norm approximation can be obtained using the multiplicative nonnegative quadratic programming algorithm. Using the D-optimality based selection algorithm as the preprocessing to select a small significant subset design matrix, the proposed zero-norm based approach offers an effective means for constructing very sparse kernel density estimates with excellent generalisation performance.
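The sketch below shows a multiplicative nonnegative update of the Lee-Seung type for fitting nonnegative kernel weights to a Parzen window response; it is a simplification for illustration, not the exact multiplicative nonnegative quadratic programming algorithm used in the paper, and the function name and normalisation step are assumptions.

```python
import numpy as np

def nonneg_multiplicative_fit(K, d, n_iter=500, eps=1e-12):
    """Fit nonnegative kernel weights beta >= 0 so that K @ beta ~ d.

    K : (N, M) kernel design matrix (entries nonnegative, e.g. Gaussian kernels)
    d : (N,)   desired response, e.g. the Parzen window estimate at the data points
    A Lee-Seung style multiplicative update keeps the weights nonnegative, so
    many of them can be driven towards zero, yielding a sparse estimate.
    """
    M = K.shape[1]
    beta = np.full(M, 1.0 / M)              # start from uniform weights
    KtK, Ktd = K.T @ K, K.T @ d
    for _ in range(n_iter):
        beta *= Ktd / (KtK @ beta + eps)    # multiplicative nonnegative update
    beta /= beta.sum()                      # crude renormalisation so the weights sum to one
    return beta
```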
Abstract:
In recent years nonpolynomial finite element methods have received increasing attention for the efficient solution of wave problems. As with their close cousin the method of particular solutions, high efficiency comes from using solutions to the Helmholtz equation as basis functions. We present and analyze such a method for the scattering of two-dimensional scalar waves from a polygonal domain that achieves exponential convergence purely by increasing the number of basis functions in each element. Key ingredients are the use of basis functions that capture the singularities at corners and the representation of the scattered field towards infinity by a combination of fundamental solutions. The solution is obtained by minimizing a least-squares functional, which we discretize in such a way that a matrix least-squares problem is obtained. We give computable exponential bounds on the rate of convergence of the least-squares functional that are in very good agreement with the observed numerical convergence. Challenging numerical examples, including a nonconvex polygon with several corner singularities, and a cavity domain, are solved to around 10 digits of accuracy with a few seconds of CPU time. The examples are implemented concisely with MPSpack, a MATLAB toolbox for wave computations with nonpolynomial basis functions, developed by the authors. A code example is included.
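To indicate the kind of basis functions involved: the fields are built from exact solutions of the Helmholtz equation $\Delta u + k^2 u = 0$. The expressions below are the standard forms of such bases (a sketch, not the notation of the paper or of MPSpack):

$$u(r,\theta) \;\approx\; \sum_{n=1}^{N} c_n\, J_{n\pi/\alpha}(k r)\, \sin\!\big(n\pi\theta/\alpha\big)$$

near a corner of interior angle $\alpha$, capturing the corner singularity with fractional-order Bessel functions, and

$$u_{\text{scat}}(\mathbf{x}) \;\approx\; \sum_{j=1}^{M} d_j\, H^{(1)}_0\!\big(k\,|\mathbf{x} - \mathbf{y}_j|\big)$$

towards infinity, a combination of fundamental solutions with source points $\mathbf{y}_j$ placed inside the scatterer; the coefficients are then determined by minimising the least-squares functional.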
Abstract:
This paper first points out the important fact that the rectangle formulas of continuous convolution discretization, which were widely used in conventional digital deconvolution algorithms, can result in a zero-time error. Then, an improved digital deconvolution equation is suggested which is equivalent to the trapezoid formulas of continuous convolution discretization and overcomes the disadvantage of the conventional equation satisfactorily. Finally, a computer simulation is given, confirming the theoretical result.
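A minimal sketch of the difference between the two discretisations is given below; the recursive deconvolution is an illustrative reconstruction from the standard rectangle and trapezoid rules, not the paper's equation, and the variable names are assumptions.

```python
import numpy as np

def deconvolve_trapezoid(y, h, dt, x0=0.0):
    """Recover the input x from the output y = (h * x)(t), with the convolution
    discretised by the trapezoid rule:
        y_n ~= dt * [ h_0 x_n / 2 + sum_{k=1}^{n-1} h_k x_{n-k} + h_n x_0 / 2 ].

    Compared with the rectangle-rule discretisation y_n ~= dt * sum_k h_k x_{n-k},
    the trapezoid form weights the end points by 1/2, avoiding the zero-time
    error discussed in the abstract.  Assumes h[0] != 0; x0 is the assumed
    input value at t = 0.
    """
    n = len(y)
    x = np.zeros(n)
    x[0] = x0
    for i in range(1, n):
        s = np.dot(h[1:i], x[i-1:0:-1])              # interior terms h_1 x_{i-1} ... h_{i-1} x_1
        x[i] = (y[i] / dt - s - 0.5 * h[i] * x[0]) / (0.5 * h[0])
    return x
```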
Abstract:
A novel algorithm for solving nonlinear discrete time optimal control problems with model-reality differences is presented. The technique uses Dynamic Integrated System Optimisation and Parameter Estimation (DISOPE), which has been designed to achieve the correct optimal solution in spite of deficiencies in the mathematical model employed in the optimisation procedure. A method based on Broyden's ideas is used for approximating the required derivative trajectories. Ways of handling constraints on both manipulated and state variables are described. Further, a method for coping with batch-to-batch dynamic variations in the process, which are common in practice, is introduced. It is shown that the iterative procedure associated with the algorithm naturally suits applications to batch processes. The algorithm is successfully applied to a benchmark problem consisting of the input profile optimisation of a fed-batch fermentation process.
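The skeleton below is a schematic rendering of the kind of iterative structure described in this abstract, with hypothetical function names; it is not the DISOPE formulation itself, only an illustration of the solve/measure/update loop for handling model-reality differences.

```python
def model_reality_iteration(model_ocp_solve, plant_measure, update_parameters,
                            tol=1e-6, max_iter=50):
    """Schematic iteration for optimal control with model-reality differences.

    Each pass solves a model-based optimal control problem whose parameters
    have been corrected from plant (or batch) measurements, so the iterates
    move towards the optimum of the real process despite model deficiencies.
    All function arguments are hypothetical placeholders.
    """
    params = None                       # current model correction terms
    u_prev = None                       # previous input profile
    u = None
    for _ in range(max_iter):
        u = model_ocp_solve(params)     # solve the (modified) model-based problem
        y = plant_measure(u)            # apply the profile to the real/batch process
        params = update_parameters(params, u, y)   # estimate model-reality corrections
        if u_prev is not None and max(abs(a - b) for a, b in zip(u, u_prev)) < tol:
            break
        u_prev = u
    return u
```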
Abstract:
A novel iterative procedure is described for solving nonlinear optimal control problems subject to differential algebraic equations. The procedure iterates on an integrated modified linear quadratic model based problem with parameter updating in such a manner that the correct solution of the original nonlinear problem is achieved. The resulting algorithm has a particular advantage in that the solution is achieved without the need to solve the differential algebraic equations. Convergence aspects are discussed and a simulation example is described which illustrates the performance of the technique. When modelling industrial processes, the resulting equations often consist of coupled differential and algebraic equations (DAEs); in many situations these equations are nonlinear and cannot readily be reduced directly to ordinary differential equations.
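For orientation, the class of problems addressed has the generic form below (a standard statement, not the paper's notation):

$$\min_{u(t)}\; \varphi\big(x(t_f)\big) + \int_{t_0}^{t_f} L\big(x(t), z(t), u(t), t\big)\, dt \quad \text{subject to} \quad \dot x = f(x, z, u, t), \qquad 0 = g(x, z, u, t),$$

where $x$ are the differential states, $z$ the algebraic variables, and $u$ the controls; the iterative procedure replaces this by a sequence of modified linear quadratic model based problems with updated parameters.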
Abstract:
A new boundary integral operator is introduced for the solution of the sound-soft acoustic scattering problem, i.e., for the exterior problem for the Helmholtz equation with Dirichlet boundary conditions. We prove that this integral operator is coercive in L2(Γ) (where Γ is the surface of the scatterer) for all Lipschitz star-shaped domains. Moreover, the coercivity is uniform in the wavenumber k = ω/c, where ω is the frequency and c is the speed of sound. The new boundary integral operator, which we call the "star-combined" potential operator, is a slight modification of the standard combined potential operator, and is shown to be as easy to implement as the standard one. Additionally, to the authors' knowledge, it is the only second-kind integral operator for which convergence of the Galerkin method in L2(Γ) is proved without smoothness assumptions on Γ except that it is Lipschitz. The coercivity of the star-combined operator implies frequency-explicit error bounds for the Galerkin method for any approximation space. In particular, these error estimates apply to several hybrid asymptotic-numerical methods developed recently that provide robust approximations in the high-frequency case. The proof of coercivity of the star-combined operator critically relies on an identity first introduced by Morawetz and Ludwig in 1968, supplemented further by more recent harmonic analysis techniques for Lipschitz domains.
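For reference, the standard combined potential (combined-field) operator that the star-combined operator modifies is usually written as follows (standard form; the star-combined variant introduces geometry-dependent factors not reproduced here):

$$A_{k,\eta} \;=\; \tfrac{1}{2} I + D_k' - i\eta\, S_k,$$

where $S_k$ and $D_k'$ are the single-layer operator and the adjoint double-layer operator on $\Gamma$ and $\eta > 0$ is a coupling parameter; coercivity of the star-combined analogue in $L^2(\Gamma)$ is what yields the frequency-explicit Galerkin error bounds quoted above.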
Abstract:
The problem of reconstructing the (otherwise unknown) source and sink field of a tracer in a fluid is studied by developing and testing a simple tracer transport model of a single-level global atmosphere and a dynamic data assimilation system. The source/sink field (taken to be constant over a 10-day assimilation window) and the initial tracer field are analysed together by assimilating imperfect tracer observations over the window. Experiments show that useful information about the source/sink field may be determined from relatively few observations when the initial tracer field is known very accurately a priori, even when the a priori source/sink information is biased (the a priori source/sink field is set to zero). In this case each observation provides information about the source/sink field at positions upstream, and the assimilation of many observations together can reasonably determine the location and strength of a test source.
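To sketch the assimilation formulation: analysing the initial tracer field $c_0$ and the constant source/sink field $s$ together amounts to minimising a cost function of the generic four-dimensional variational form (a schematic statement, not the paper's exact system):

$$J(c_0, s) = \tfrac{1}{2}(c_0 - c_0^b)^{\!\top} B_c^{-1} (c_0 - c_0^b) + \tfrac{1}{2}(s - s^b)^{\!\top} B_s^{-1} (s - s^b) + \tfrac{1}{2} \sum_{t=0}^{T} \big(H_t c_t - y_t\big)^{\!\top} R_t^{-1} \big(H_t c_t - y_t\big),$$

where $c_t$ is the tracer field propagated by the transport model from $c_0$ and $s$, $y_t$ are the observations over the 10-day window, $H_t$ the observation operators, and $B_c$, $B_s$, $R_t$ the background and observation error covariances; setting the source/sink background $s^b$ to zero corresponds to the biased a priori case studied above.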
Abstract:
This article describes a number of velocity-based moving mesh numerical methods for multidimensional nonlinear time-dependent partial differential equations (PDEs). It consists of a short historical review followed by a detailed description of a recently developed multidimensional moving mesh finite element method based on conservation. Finite element algorithms are derived for both mass-conserving and non-mass-conserving problems, and results are shown for a number of multidimensional nonlinear test problems, including the second-order porous medium equation and the fourth-order thin film equation as well as a two-phase problem. Further applications and extensions are referenced.
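To indicate the conservation principle underlying such velocity-based moving mesh methods: the mesh velocity $\mathbf{v}$ is chosen so that the integral of the solution over each moving patch of the domain is preserved in time (a schematic statement of the principle, not the paper's full algorithm):

$$\frac{d}{dt} \int_{\Omega(t)} u \, d\mathbf{x} \;=\; \int_{\Omega(t)} \Big( \frac{\partial u}{\partial t} + \nabla \cdot (u \mathbf{v}) \Big)\, d\mathbf{x} \;=\; 0,$$

which, combined with the PDE for $u_t$ (for example the porous medium equation $u_t = \nabla\cdot(u^m \nabla u)$), yields a weak-form equation for the mesh velocity and hence for the finite element node movement.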