56 results for Computer arithmetic


Relevance: 20.00%

Abstract:

The perspex machine arose from the unification of projective geometry with the Turing machine. It uses a total arithmetic, called transreal arithmetic, that contains real arithmetic and allows division by zero. Transreal arithmetic is redefined here. The new arithmetic has both a positive and a negative infinity, which lie at the extremes of the number line, and a number, nullity, that lies off the number line. We prove that nullity, 0/0, is a number. Hence a number may have one of four signs: negative, zero, positive, or nullity. It is, therefore, impossible to encode the sign of a number in one bit, as floating-point arithmetic attempts to do, resulting in the difficulty of having both positive and negative zeros and NaNs. Transrational arithmetic is consistent with Cantor arithmetic. In an extension to real arithmetic, the product of zero, an infinity, or nullity with its reciprocal is nullity, not unity. This avoids the usual contradictions that follow from allowing division by zero. Transreal arithmetic has a fixed algebraic structure and does not admit options, as IEEE floating-point arithmetic does. Most significantly, nullity has a simple semantics that is related to zero. Zero means "no value" and nullity means "no information." We argue that nullity is as useful to a manufactured computer as zero is to a human computer. The perspex machine is intended to offer one solution to the mind-body problem by showing how the computable aspects of mind and, perhaps, the whole of mind relate to the geometrical aspects of body and, perhaps, the whole of body. We review some of Turing's writings and show that he held the view that his machine has spatial properties; in particular, that it has the property of being a 7D lattice of compact spaces. Thus, we read Turing as believing that his machine relates computation to geometrical bodies. We simplify the perspex machine by substituting an augmented Euclidean geometry for projective geometry. This leads to a general-linear perspex-machine which is very much easier to program than the original perspex-machine. We then show how to map the whole of perspex space into a unit cube. This allows us to construct a fractal of perspex machines with the cardinality of a real-numbered line or space. This fractal is the universal perspex machine. It can solve, in unit time, the halting problem for itself and for all perspex machines instantiated in real-numbered space, including all Turing machines. We cite an experiment that has been proposed to test the physical reality of the perspex machine's model of time, but we make no claim that the physical universe works this way or that it has the cardinality of the perspex machine. We leave it that the perspex machine provides an upper bound on the computational properties of physical things, including manufactured computers and biological organisms, that have a cardinality no greater than that of the real-number line.
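The arithmetic rules described above are simple enough to state in code. The following is a minimal Python sketch of the transreal operations; the names Nullity, PHI, tr_add, tr_mul, tr_recip and tr_div are our own illustrative choices, not the authors' implementation:

    import math

    class Nullity:
        """Transreal nullity, Phi = 0/0: 'no information', off the number line."""
        def __repr__(self):
            return "Phi"

    PHI = Nullity()

    def tr_add(a, b):
        # Phi absorbs every operation; inf + (-inf) is Phi, not an exception.
        if a is PHI or b is PHI:
            return PHI
        if math.isinf(a) and math.isinf(b) and a != b:
            return PHI
        return a + b

    def tr_mul(a, b):
        if a is PHI or b is PHI:
            return PHI
        # 0 times an infinity is Phi, so a number times its reciprocal
        # need not be unity -- exactly the extension the abstract describes.
        if (a == 0 and math.isinf(b)) or (b == 0 and math.isinf(a)):
            return PHI
        return a * b

    def tr_recip(a):
        if a is PHI:
            return PHI
        if a == 0:
            return math.inf   # transreal 1/0 = +inf (there is no signed zero)
        if math.isinf(a):
            return 0.0
        return 1.0 / a

    def tr_div(a, b):
        # Division is multiplication by the reciprocal, so 0/0 = 0 * inf = Phi.
        return tr_mul(a, tr_recip(b))

    print(tr_div(0.0, 0.0))       # Phi  (0/0 is a number, not an error)
    print(tr_div(1.0, 0.0))       # inf
    print(tr_div(-1.0, 0.0))      # -inf
    print(tr_mul(0.0, math.inf))  # Phi

Every operation is total: no exception is raised and no NaN is produced, which is the sense in which transreal arithmetic removes the signed zeros and NaNs of floating point.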

Relevance: 20.00%

Abstract:

Transreal arithmetic is a total arithmetic that contains real arithmetic, but which has no arithmetical exceptions. It allows the specification of the Universal Perspex Machine which unifies geometry with the Turing Machine. Here we axiomatise the algebraic structure of transreal arithmetic so that it provides a total arithmetic on any appropriate set of numbers. This opens up the possibility of specifying a version of floating-point arithmetic that does not have any arithmetical exceptions and in which every number is a first-class citizen. We find that literal numbers in the axioms are distinct. In other words, the axiomatisation does not require special axioms to force non-triviality. It follows that transreal arithmetic must be defined on a set of numbers that contains {−∞, −1, 0, 1, ∞, Φ} as a proper subset. We note that the axioms have been shown to be consistent by machine proof.
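To give the flavour of the axiomatisation, a few transreal laws can be written out. This selection is paraphrased from published accounts of transreal arithmetic rather than quoted from the paper's axiom list, so treat it as indicative:

\[
\Phi = \frac{0}{0}, \qquad -\Phi = \Phi, \qquad \Phi + a = \Phi, \qquad \Phi \times a = \Phi,
\]
\[
0^{-1} = \infty, \qquad \infty^{-1} = 0, \qquad \infty - \infty = \Phi, \qquad 0 \times \infty = \Phi.
\]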

Relevance: 20.00%

Abstract:

BCI systems require correct classification of signals interpreted from the brain for useful operation. To this end, this paper investigates a method proposed in [1] to correctly classify a series of images presented to a group of subjects in [2]. We show that it is possible to use the proposed methods to correctly recognise the original stimuli presented to a subject from analysis of their EEG. Additionally, we use a verification set to show that the trained classification method can be applied to a different set of data. We go on to investigate the issue of invariance in EEG signals, that is, whether the brain's representation of similar stimuli is recognisable across different subjects. Finally, we consider the usefulness of the methods investigated towards an improved BCI system and discuss how it could potentially lead to great improvements in ease of use for the end user by offering an alternative, more intuitive, control-based mode of operation.
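Since the classification method itself is attributed to [1] and not restated in the abstract, the following Python sketch uses a generic stand-in, regularised linear discriminant analysis, which is a common choice for ERP features; the feature shapes and the random placeholder data are ours:

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # X: trials x features (e.g. flattened channel-time ERP amplitudes),
    # y: stimulus class labels. Placeholder data stands in for real EEG.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 64))
    y = rng.integers(0, 2, size=200)

    # Train on one portion, then score on a held-out verification set,
    # mirroring the paper's train/verify protocol.
    train, verify = slice(0, 150), slice(150, 200)
    clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
    clf.fit(X[train], y[train])
    print("verification accuracy:", clf.score(X[verify], y[verify]))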

Relevance: 20.00%

Abstract:

This paper describes a study conducted to learn more about how older adults use the tools in a GUI to undertake tasks in Windows applications. The objective was to gain insight into what people did and what they found most difficult. File and folder manipulation and some aspects of formatting presented difficulties, and these were thought to be related to a lack of understanding of the task model, to the correct interpretation of the visual cues presented by the interface, and to the recall and translation of the task model into a suitable sequence of actions.

Relevance: 20.00%

Abstract:

The paper describes the implementation of an offline, low-cost Brain Computer Interface (BCI) alternative to more expensive commercial models. Using inexpensive, general-purpose clinical EEG acquisition hardware (Truscan32, Deymed Diagnostic) as the base unit, a synchronisation module was constructed to allow the EEG hardware to be operated precisely in time, allowing the recording of automatically time-stamped EEG signals. The synchronising module allows the EEG recordings to be aligned in a stimulus-time-locked fashion for further processing by the classifier to establish the class of the stimulus, sample by sample. This allows for the acquisition of signals from the subject's brain for a goal-oriented BCI application based on the oddball paradigm. An appropriate graphical user interface (GUI) was constructed and implemented as the method to elicit the required responses (in this case Event Related Potentials, or ERPs) from the subject.
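The stimulus-time-locked alignment step can be sketched in Python as follows; the function name, array shapes and the window lengths are illustrative assumptions, not details of the Truscan32 interface:

    import numpy as np

    def epoch_eeg(eeg, stamps, fs, pre=0.1, post=0.7):
        """Cut stimulus-time-locked epochs out of a continuous recording.

        eeg    : (n_channels, n_samples) continuous EEG
        stamps : stimulus onset times in seconds (from the sync module)
        fs     : sampling rate in Hz
        pre    : seconds kept before each stimulus (baseline)
        post   : seconds kept after each stimulus (ERP window)
        """
        a, b = int(pre * fs), int(post * fs)
        epochs = [eeg[:, int(t * fs) - a : int(t * fs) + b]
                  for t in stamps
                  if int(t * fs) - a >= 0 and int(t * fs) + b <= eeg.shape[1]]
        return np.stack(epochs)   # (n_trials, n_channels, n_times)

Once epochs are stacked this way, each trial carries its class label from the oddball sequence, and the classifier can be applied sample by sample as described.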

Relevance: 20.00%

Abstract:

Different types of mental activity are utilised as an input in Brain-Computer Interface (BCI) systems. One such activity type is based on Event-Related Potentials (ERPs). The characteristics of ERPs are not visible in single trials, so averaging over a number of trials is necessary before the signals become usable. An improvement in ERP-based BCI operation and system usability could be obtained if the use of single-trial ERP data were possible. The method of Independent Component Analysis (ICA) can be utilised to separate single-trial recordings of ERP data into components that correspond to ERP characteristics, background electroencephalogram (EEG) activity, and other components of non-cerebral origin. Choosing specific components and using them to reconstruct "denoised" single-trial data could improve the signal quality, thus allowing the successful use of single-trial data without the need for averaging. This paper assesses single-trial ERP signals reconstructed using a selection of estimated components from the application of ICA to the raw ERP data. Signal improvement is measured using contrast-to-noise measures. It was found that such analysis improves the signal quality in all single trials.
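The select-and-reconstruct step can be sketched with scikit-learn's FastICA as a stand-in for whichever ICA variant the paper used; the contrast-to-noise definition below (ERP-window peak against baseline variability) is one common choice, not necessarily the paper's:

    import numpy as np
    from sklearn.decomposition import FastICA

    def denoise_trial(trial, keep, n_components=None, seed=0):
        """Reconstruct one single-trial epoch from a chosen subset of ICs.

        trial : (n_times, n_channels) single-trial epoch
        keep  : indices of components judged to carry ERP activity
        """
        ica = FastICA(n_components=n_components, random_state=seed)
        sources = ica.fit_transform(trial)        # (n_times, n_components)
        drop = [c for c in range(sources.shape[1]) if c not in keep]
        sources[:, drop] = 0.0                    # zero out non-ERP components
        return ica.inverse_transform(sources)     # back to channel space

    def cnr(x, baseline, window):
        """Contrast-to-noise for one channel: peak in the ERP window
        relative to the mean and spread of the pre-stimulus baseline."""
        return (np.abs(x[window]).max() - x[baseline].mean()) / x[baseline].std()

Comparing cnr on a raw trial against the same trial after denoise_trial gives a per-trial measure of the improvement reported in the paper.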

Relevance: 20.00%

Abstract:

This paper describes a prototype grid infrastructure, called the eMinerals minigrid, for molecular simulation scientists, which is based on an integration of shared compute and data resources. We describe the key components, namely the use of Condor pools; Linux/Unix clusters with PBS and IBM's LoadLeveler job handling tools; the use of Globus for security handling; the use of Condor-G tools for wrapping Globus job-submission commands; Condor's DAGMan tool for handling workflow; the Storage Resource Broker for handling data; and the CCLRC dataportal and associated tools for both archiving data with metadata and making data available to other workers.
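By way of illustration, a two-node DAGMan workflow of the kind described can be generated and submitted from Python as below. The job names and submit files are hypothetical (the Globus and Storage Resource Broker details would live inside the .sub files); condor_submit_dag is the standard HTCondor entry point:

    import pathlib
    import subprocess

    # Hypothetical two-step workflow: run a simulation, then archive output.
    # 'simulate.sub' and 'archive.sub' are ordinary Condor(-G) submit files.
    dag = """\
    JOB simulate simulate.sub
    JOB archive  archive.sub
    PARENT simulate CHILD archive
    """
    pathlib.Path("eminerals.dag").write_text(dag)
    subprocess.run(["condor_submit_dag", "eminerals.dag"], check=True)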

Relevance: 20.00%

Abstract:

A shock-capturing scheme is presented for the equations of isentropic flow, based on upwind differencing applied to a locally linearised set of Riemann problems. This includes the two-dimensional shallow water equations via the familiar gas dynamics analogy. An average of the flow variables across the interface between cells is required, and this average is chosen to be the arithmetic mean for computational efficiency, leading to arithmetic averaging. This is in contrast to the usual 'square root' averages found in this type of Riemann solver, where the computational expense can be prohibitive. The scheme is applied to a two-dimensional dam-break problem, and the approximate solution compares well with those given by other authors.
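The efficiency point is easiest to see in formulas. For left and right states at a cell interface, the scheme takes plain arithmetic means, whereas a Roe-type solver uses density-weighted 'square root' averages; shown here for the velocity u with depth h playing the role of density in the gas dynamics analogy (the notation is ours, not the paper's):

\[
\bar{u} = \tfrac{1}{2}\,(u_L + u_R)
\qquad\text{versus}\qquad
\tilde{u} = \frac{\sqrt{h_L}\,u_L + \sqrt{h_R}\,u_R}{\sqrt{h_L} + \sqrt{h_R}} .
\]

Each interface then costs an addition and a halving rather than two square roots and a division, which is the saving the abstract refers to.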

Relevance: 20.00%

Abstract:

An analysis of various arithmetic averaging procedures for approximate Riemann solvers is made, with specific emphasis on efficiency and a jump-capturing property. The various alternatives discussed are intended for future work, as well as for the more immediate problem of steady, supercritical free-surface flows. Numerical results are shown for two test problems.

Relevance: 20.00%

Abstract:

An efficient numerical method is presented for the solution of the Euler equations governing the compressible flow of a real gas. The scheme is based on the approximate solution of a specially constructed set of linearised Riemann problems. An average of the flow variables across the interface between cells is required, and this is chosen to be the arithmetic mean for computational efficiency, which is in contrast to the usual square root averaging. The scheme is applied to a test problem for five different equations of state.

Relevance: 20.00%

Abstract:

A finite difference scheme based on flux difference splitting is presented for the solution of the two-dimensional shallow water equations of ideal fluid flow. A linearised problem, analogous to that of Riemann for gas dynamics, is defined, and a scheme based on numerical characteristic decomposition is presented for obtaining approximate solutions to the linearised problem; it incorporates the technique of operator splitting. An average of the flow variables across the interface between cells is required, and this average is chosen to be the arithmetic mean for computational efficiency, leading to arithmetic averaging. This is in contrast to the usual 'square root' averages found in this type of Riemann solver, where the computational expense can be prohibitive. The method of upwind differencing is used for the resulting scalar problems, together with a flux limiter for obtaining a second-order scheme which avoids nonphysical, spurious oscillations. An extension to the two-dimensional equations with source terms is included. The scheme is applied to the one-dimensional problems of a breaking dam and reflection of a bore, and in each case the approximate solution is compared to the exact solution of ideal fluid flow. The scheme is also applied to a problem of stationary bore generation in a channel of variable cross-section. Finally, the scheme is applied to two other dam-break problems, this time in two dimensions, one of which has cylindrical symmetry. Each approximate solution compares well with those given by other authors.
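A minimal Python sketch of the first-order core of such a scheme on the one-dimensional dam-break problem is given below. The arithmetic-mean interface states stand in for Roe averages as in the paper, but the grid, initial depths, and the omission of the flux limiter and source terms are our simplifications:

    import numpy as np

    g = 9.81  # gravitational acceleration

    def swe_dam_break(nx=200, L=1.0, t_end=0.05, cfl=0.8):
        """First-order flux-difference-splitting solve of a 1D dam break."""
        dx = L / nx
        h = np.where(np.arange(nx) * dx < L / 2, 1.0, 0.2)  # dam at x = L/2
        q = np.zeros(nx)                                    # discharge h*u
        t = 0.0
        while t < t_end:
            c = np.sqrt(g * h)
            dt = min(cfl * dx / np.max(np.abs(q / h) + c), t_end - t)
            fh, fq = q, q**2 / h + 0.5 * g * h**2           # physical fluxes
            # arithmetic-mean interface state (the cheap average of the paper)
            ub = 0.5 * (q[:-1] / h[:-1] + q[1:] / h[1:])
            cb = np.sqrt(g * 0.5 * (h[:-1] + h[1:]))
            dh, dq = np.diff(h), np.diff(q)
            # wave strengths from the characteristic decomposition
            a1 = ((ub + cb) * dh - dq) / (2 * cb)
            a2 = (dq - (ub - cb) * dh) / (2 * cb)
            l1, l2 = ub - cb, ub + cb                       # wave speeds u -/+ c
            # upwinded interface fluxes (eigenvectors (1, u-c) and (1, u+c))
            Fh = 0.5 * (fh[:-1] + fh[1:]) \
                 - 0.5 * (np.abs(l1) * a1 + np.abs(l2) * a2)
            Fq = 0.5 * (fq[:-1] + fq[1:]) \
                 - 0.5 * (np.abs(l1) * a1 * l1 + np.abs(l2) * a2 * l2)
            h[1:-1] -= dt / dx * np.diff(Fh)
            q[1:-1] -= dt / dx * np.diff(Fq)
            t += dt
        return h, q

Adding the flux limiter to the scalar characteristic problems is what lifts this first-order core to the second-order, oscillation-free scheme described in the abstract.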

Relevance: 20.00%

Abstract:

Population size estimation with discrete or nonparametric mixture models is considered, and reliable ways of constructing the nonparametric mixture model estimator are reviewed and set in perspective. Construction of the maximum likelihood estimator of the mixing distribution is done for any number of components, up to the global nonparametric maximum likelihood bound, using the EM algorithm. In addition, the estimators of Chao and Zelterman are considered, with some generalisations of Zelterman's estimator. All computations are done with CAMCR, special-purpose software developed for population size estimation with mixture models. Several examples and data sets are discussed and the estimators illustrated. Problems in using the mixture model-based estimators are highlighted.
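The two simple estimators mentioned can be computed directly from the frequency-of-frequencies counts, as in this Python sketch (CAMCR itself is not reproduced here; the example counts are illustrative, not the paper's data):

    import math

    def chao(n, f1, f2):
        """Chao's lower-bound estimator of population size.

        n  : number of distinct units observed
        f1 : units seen exactly once; f2 : units seen exactly twice
        """
        return n + f1 * f1 / (2.0 * f2)

    def zelterman(n, f1, f2):
        """Zelterman's estimator: a truncated-Poisson rate lambda = 2*f2/f1
        plugged into the Horvitz-Thompson form n / (1 - P(zero count))."""
        lam = 2.0 * f2 / f1
        return n / (1.0 - math.exp(-lam))

    print(chao(n=80, f1=40, f2=15))       # about 133.3
    print(zelterman(n=80, f1=40, f2=15))  # about 151.6

Both use only the lowest count frequencies, which is why they remain usable when the full mixture likelihood is hard to maximise.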