18 results for fixed-point arithmetic

in CentAUR: Central Archive at the University of Reading - UK


Relevance:

100.00%

Abstract:

Dual Carrier Modulation (DCM) is currently used as the higher data rate modulation scheme for Multiband Orthogonal Frequency Division Multiplexing (MB-OFDM) in the ECMA-368 defined Ultra-Wideband (UWB) radio platform. ECMA-368 has been chosen as the physical radio platform for many systems including Wireless USB (W-USB), Bluetooth 3.0 and Wireless HDMI; hence ECMA-368 is important to consumer electronics and the user's experience of these products. In this paper, a Log Likelihood Ratio (LLR) demapping method is used for the DCM demapper, implemented in a fixed-point model. A Channel State Information (CSI) aided scheme, coupled with the band hopping information, is used as a further technique to improve the DCM demapping performance. The receiver performance for the fixed-point DCM is simulated in realistic multi-path environments.
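
The abstract does not give implementation details, so the following is only a minimal sketch of the general idea: a CSI-weighted, max-log LLR soft demapper evaluated in a crude fixed-point (Q1.14) model. The QPSK constellation, word length and noise variance are assumptions for illustration; the actual ECMA-368 DCM demapper operates on paired 16-point constellations across two subcarriers.

```python
# Illustrative sketch only: a CSI-weighted max-log LLR demapper for Gray-mapped
# QPSK, evaluated in an assumed Q1.14 fixed-point format. The real ECMA-368 DCM
# demapper works on paired 16-point constellations; this only shows the
# LLR-plus-CSI principle in fixed point.
import numpy as np

FRAC_BITS = 14                       # assumed Q1.14 format (16-bit words)
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    """Quantise a float to the assumed Q1.14 grid (round to nearest)."""
    return int(round(float(x) * SCALE))

def fixed_mul(a, b):
    """Multiply two Q1.14 integers, rescaling the product back to Q1.14."""
    return (a * b) >> FRAC_BITS

def qpsk_llrs_fixed(r, h, noise_var=0.1):
    """Max-log LLRs for the I and Q bits of one received QPSK symbol.

    r is the received sample and h the channel estimate (complex floats);
    the arithmetic below runs on fixed-point integers. For Gray-mapped QPSK
    the max-log LLR separates per axis, and multiplying by conj(h) folds the
    CSI weight |h|^2 into the metric:
        LLR_I ~ Re(conj(h) * r) / noise_var,  LLR_Q ~ Im(conj(h) * r) / noise_var.
    """
    z = np.conj(h) * r
    zr, zi = to_fixed(z.real), to_fixed(z.imag)
    inv_nv = to_fixed(min(1.0 / noise_var, 1.99))    # clamp to the Q1.14 range
    return fixed_mul(zr, inv_nv) / SCALE, fixed_mul(zi, inv_nv) / SCALE

# Tiny usage example: one faded, noisy QPSK symbol.
h = 0.8 * np.exp(1j * 0.3)                           # assumed channel coefficient
s = (1 - 1j) / np.sqrt(2)                            # transmitted QPSK point
r = h * s + 0.05 * (np.random.randn() + 1j * np.random.randn())
print(qpsk_llrs_fixed(r, h))                         # signs follow the bit labelling
```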

Relevance:

100.00%

Abstract:

IEEE 754 floating-point arithmetic is widely used in modern, general-purpose computers. It is based on real arithmetic and is made total by adding both a positive and a negative infinity, a negative zero, and many Not-a-Number (NaN) states. Transreal arithmetic is total. It also has a positive and a negative infinity but no negative zero, and it has a single, unordered number, nullity. Modifying the IEEE arithmetic so that it uses transreal arithmetic has a number of advantages. It removes one redundant binade from IEEE floating-point objects, doubling the numerical precision of the arithmetic. It removes eight redundant, relational, floating-point operations and removes the redundant total order operation. It replaces the non-reflexive, floating-point equality operator with a reflexive equality operator, and it indicates that some of the exceptions may be removed as redundant, subject to issues of backward compatibility and transient future compatibility as programmers migrate to the transreal paradigm.
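
The non-reflexive IEEE equality mentioned above is easy to demonstrate; the sketch below contrasts it with the reflexive behaviour a transreal nullity value would have. The Nullity class is purely illustrative, not an existing library type.

```python
# The IEEE 754 equality operator is not reflexive: a NaN does not equal itself.
import math

nan = float("nan")
print(nan == nan)          # False: non-reflexive IEEE comparison
print(math.isnan(nan))     # True: NaN must be detected with a predicate instead

# Purely illustrative stand-in for the transreal nullity described above:
# a single unordered number that is nevertheless equal to itself.
class Nullity:
    def __eq__(self, other):
        return isinstance(other, Nullity)   # reflexive equality
    def __lt__(self, other):
        return False                        # unordered with every other number
    def __gt__(self, other):
        return False

phi = Nullity()
print(phi == phi)          # True: equality stays reflexive
print(phi < 0, phi > 0)    # False False: nullity is unordered with the reals
```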

Relevance:

100.00%

Abstract:

The IEEE 754 standard for floating-point arithmetic is widely used in computing. It is based on real arithmetic and is made total by adding both a positive and a negative infinity, a negative zero, and many Not-a-Number (NaN) states. The IEEE infinities are said to have the behaviour of limits. Transreal arithmetic is total. It also has a positive and a negative infinity but no negative zero, and it has a single, unordered number, nullity. We elucidate the transreal tangent and extend real limits to transreal limits. Arguing from this firm foundation, we maintain that there are three category errors in the IEEE 754 standard. Firstly, the claim that IEEE infinities are limits of real arithmetic confuses limiting processes with arithmetic. Secondly, a defence of IEEE negative zero confuses the limit of a function with the value of a function. Thirdly, the definition of IEEE NaNs confuses undefined with unordered. Furthermore, we prove that the tangent function, with the infinities given by geometrical construction, has a period of an entire rotation, not half a rotation as is commonly understood. This illustrates a category error, confusing the limit with the value of a function, in an important area of applied mathematics: trigonometry. We briefly consider the wider implications of this category error. Another paper proposes transreal arithmetic as a basis for floating-point arithmetic; here we take the profound step of proposing transreal arithmetic as a replacement for real arithmetic to remove the possibility of certain category errors in mathematics. Thus we propose both theoretical and practical advantages of transmathematics. In particular, we argue that implementing transreal analysis in trans-floating-point arithmetic would extend the coverage, accuracy and reliability of almost all computer programs that exploit real analysis: essentially all programs in science and engineering and many in finance, medicine and other socially beneficial applications.

Relevance:

90.00%

Abstract:

The perspex machine arose from the unification of projective geometry with the Turing machine. It uses a total arithmetic, called transreal arithmetic, that contains real arithmetic and allows division by zero. Transreal arithmetic is redefined here. The new arithmetic has both a positive and a negative infinity, which lie at the extremes of the number line, and a number nullity that lies off the number line. We prove that nullity, 0/0, is a number. Hence a number may have one of four signs: negative, zero, positive, or nullity. It is, therefore, impossible to encode the sign of a number in one bit, as floating-point arithmetic attempts to do, resulting in the difficulty of having both positive and negative zeros and NaNs. Transrational arithmetic is consistent with Cantor arithmetic. In an extension to real arithmetic, the product of zero, an infinity, or nullity with its reciprocal is nullity, not unity. This avoids the usual contradictions that follow from allowing division by zero. Transreal arithmetic has a fixed algebraic structure and does not admit options as IEEE floating-point arithmetic does. Most significantly, nullity has a simple semantics that is related to zero. Zero means "no value" and nullity means "no information." We argue that nullity is as useful to a manufactured computer as zero is to a human computer. The perspex machine is intended to offer one solution to the mind-body problem by showing how the computable aspects of mind and, perhaps, the whole of mind relate to the geometrical aspects of body and, perhaps, the whole of body. We review some of Turing's writings and show that he held the view that his machine has spatial properties. In particular, that it has the property of being a 7D lattice of compact spaces. Thus, we read Turing as believing that his machine relates computation to geometrical bodies. We simplify the perspex machine by substituting an augmented Euclidean geometry for projective geometry. This leads to a general-linear perspex-machine which is very much easier to program than the original perspex-machine. We then show how to map the whole of perspex space into a unit cube. This allows us to construct a fractal of perspex machines with the cardinality of a real-numbered line or space. This fractal is the universal perspex machine. It can solve, in unit time, the halting problem for itself and for all perspex machines instantiated in real-numbered space, including all Turing machines. We cite an experiment that has been proposed to test the physical reality of the perspex machine's model of time, but we make no claim that the physical universe works this way or that it has the cardinality of the perspex machine. We leave it that the perspex machine provides an upper bound on the computational properties of physical things, including manufactured computers and biological organisms, that have a cardinality no greater than the real-number line.
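
As a concrete reading of the semantics described above, here is a minimal sketch of transreal division and multiplication over Python floats, with a hypothetical sentinel standing in for nullity. It follows the rules stated in the abstract (0/0 is nullity, x/0 is a signed infinity for non-zero real x, and the product of zero, an infinity or nullity with its reciprocal is nullity, not unity); it is an illustration, not the paper's definition.

```python
# Minimal sketch of the division and reciprocal-product rules described above.
# NULLITY is a hypothetical sentinel for the transreal number "nullity"; the
# two infinities are borrowed from IEEE floats purely for convenience.
import math

class _Nullity:
    def __repr__(self):
        return "nullity"

NULLITY = _Nullity()                         # illustrative stand-in, "no information"
INF, NINF = math.inf, -math.inf

def trans_div(a, b):
    """Total division: defined for every pair of transreal arguments."""
    if a is NULLITY or b is NULLITY:
        return NULLITY                       # nullity absorbs every operation
    if b == 0:
        if a == 0:
            return NULLITY                   # 0 / 0 = nullity
        return INF if a > 0 else NINF        # x / 0 = +/- infinity for non-zero x
    if b in (INF, NINF):
        return NULLITY if a in (INF, NINF) else 0.0
    return a / b                             # ordinary real division otherwise

def trans_mul(a, b):
    """Just enough multiplication to show that x * (1/x) is not always 1."""
    if a is NULLITY or b is NULLITY:
        return NULLITY
    if 0 in (a, b) and (INF in (a, b) or NINF in (a, b)):
        return NULLITY                       # 0 * (+/- infinity) = nullity
    return a * b

# The product of zero, an infinity or nullity with its reciprocal is nullity:
for x in (0.0, INF, NULLITY):
    print(x, "* reciprocal ->", trans_mul(x, trans_div(1.0, x)))
print(2.0, "* reciprocal ->", trans_mul(2.0, trans_div(1.0, 2.0)))   # reals still give 1.0
```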

Relevance:

90.00%

Abstract:

Transreal arithmetic is a total arithmetic that contains real arithmetic, but which has no arithmetical exceptions. It allows the specification of the Universal Perspex Machine which unifies geometry with the Turing Machine. Here we axiomatise the algebraic structure of transreal arithmetic so that it provides a total arithmetic on any appropriate set of numbers. This opens up the possibility of specifying a version of floating-point arithmetic that does not have any arithmetical exceptions and in which every number is a first-class citizen. We find that literal numbers in the axioms are distinct. In other words, the axiomatisation does not require special axioms to force non-triviality. It follows that transreal arithmetic must be defined on a set of numbers that contains {−∞, −1, 0, 1, ∞, Φ} as a proper subset. We note that the axioms have been shown to be consistent by machine proof.

Relevance:

80.00%

Abstract:

The results from three types of study with broilers, namely nitrogen (N) balance, bioassays and growth experiments, provided the data used herein. Sets of data on N balance and protein accretion (bioassay studies) were used to assess the ability of the monomolecular equation to describe the relationship between (i) N balance and amino acid (AA) intake and (ii) protein accretion and AA intake. The model estimated the levels of isoleucine, lysine, valine, threonine, methionine, total sulphur AAs and tryptophan resulting in zero balance to be 58, 59, 80, 96, 23, 85 and 32 mg/kg live weight (LW)/day, respectively. These estimates show good agreement with those obtained in previous studies. For the growth experiments, four models, specifically re-parameterized for analysing energy balance data, were evaluated for their ability to determine crude protein (CP) intake at maintenance and the efficiency of utilization of CP intake for producing gain. They were: a straight line, two equations representing diminishing returns behaviour (monomolecular and rectangular hyperbola) and one equation describing smooth sigmoidal behaviour with a fixed point of inflexion (Gompertz). The estimates of the CP requirement for maintenance and the efficiency of utilization of CP intake for producing gain varied from 5.4 to 5.9 g/kg LW/day and from 0.60 to 0.76, respectively, depending on the model.
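
As an illustration of how the maintenance-type estimate can be read off a fitted monomolecular curve, here is a sketch with made-up data; the parameterisation shown is one common form of the monomolecular equation and may differ from the re-parameterised version used in the paper.

```python
# Illustrative sketch with made-up data (not the paper's): fit a monomolecular
# curve to nitrogen balance versus amino acid intake and read off the intake
# that gives zero balance. The parameterisation y = a - (a - b)*exp(-c*x) is
# one common form; the paper's re-parameterised version may differ.
import numpy as np
from scipy.optimize import curve_fit

def monomolecular(x, a, b, c):
    """a = asymptotic balance, b = balance at zero intake (negative), c = rate."""
    return a - (a - b) * np.exp(-c * x)

rng = np.random.default_rng(1)
intake  = np.array([0, 20, 40, 60, 80, 120, 160, 220], dtype=float)   # mg/kg LW/day
balance = np.array([-55, -28, -8, 6, 16, 28, 33, 36], dtype=float)    # hypothetical N balance
balance += rng.normal(0.0, 1.0, balance.size)                         # measurement noise

(a, b, c), _ = curve_fit(monomolecular, intake, balance, p0=(40.0, -55.0, 0.02))

# Intake at zero balance: solve a - (a - b)*exp(-c*x) = 0 for x.
zero_balance_intake = np.log((a - b) / a) / c
print(f"estimated intake at zero N balance ~ {zero_balance_intake:.0f} mg/kg LW/day")
```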

Relevance:

80.00%

Abstract:

A total of 86 profiles from meat and egg strains of chickens (male and female) were used in this study. Different flexible growth functions were evaluated with regard to their ability to describe the relationship between live weight and age and were compared with the Gompertz and logistic equations, which have a fixed point of inflection. Six growth functions were used: Gompertz, logistic, Lopez, Richards, France, and von Bertalanffy. A comparative analysis was carried out based on model behavior and statistical performance. The results of this study confirmed the initial concern about the limitation of a fixed point of inflection, such as in the Gompertz equation. Therefore, consideration of flexible growth functions as alternatives to the simpler equations (with a fixed point of inflection) for describing the relationship between live weight and age is recommended for the following reasons: they are easy to fit; they very often give a closer fit to the data points, and therefore a smaller RSS value, than the simpler models because of their flexibility; and they encompass the simpler models through the addition of an extra parameter, which is especially important when the behavior of a particular data set is not known in advance.
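
To illustrate the fixed versus variable point of inflection that motivates the comparison, the sketch below defines the Gompertz curve (inflection always at 1/e of mature weight) and one common parameterisation of the Richards curve (inflection moved by an extra shape parameter). All parameter values are invented for illustration.

```python
# Sketch of two of the growth functions compared above: Gompertz, whose point of
# inflection is fixed at 1/e of mature weight, and one common parameterisation
# of the Richards curve, whose extra shape parameter moves the inflection point.
import numpy as np

def gompertz(t, w_inf, b, c):
    """W(t) = W_inf * exp(-b * exp(-c * t)); inflection always at W = W_inf / e."""
    return w_inf * np.exp(-b * np.exp(-c * t))

def richards(t, w_inf, w0, k, nu):
    """W(t) = W_inf * (1 + ((W_inf/w0)**nu - 1) * exp(-k*t)) ** (-1/nu)."""
    return w_inf * (1.0 + ((w_inf / w0) ** nu - 1.0) * np.exp(-k * t)) ** (-1.0 / nu)

t = np.linspace(0, 20, 5)                       # age in weeks (illustrative)
print(gompertz(t, w_inf=4.0, b=4.0, c=0.3))     # live weight in kg (illustrative)
print(richards(t, w_inf=4.0, w0=0.05, k=0.4, nu=0.5))

# Fraction of mature weight reached at the point of inflection:
print("Gompertz:", 1.0 / np.e)                                   # always ~0.368
for nu in (0.2, 1.0, 3.0):
    print(f"Richards (nu = {nu}):", (1.0 + nu) ** (-1.0 / nu))   # moves with nu
```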

Relevance:

80.00%

Abstract:

Data from six studies with male broilers fed diets covering a wide range of energy and protein were used in the current two analyses. In the first analysis, five models, specifically re-parameterized for analysing energy balance data, were evaluated for their ability to determine metabolizable energy intake at maintenance and efficiency of utilization of metabolizable energy intake for producing gain. In addition to the straight line, two types of functional form were used. They were forms describing (i) diminishing returns behaviour (monomolecular and rectangular hyperbola) and (ii) sigmoidal behaviour with a fixed point of inflection (Gompertz and logistic). These models determined metabolizable energy requirement for maintenance to be in the range 437-573 kJ/kg of body weight/day depending on the model. The values determined for average net energy requirement for body weight gain varied from 7.9 to 11.2 kJ/g of body weight. These values show good agreement with previous studies. In the second analysis, three types of function were assessed as candidates for describing the relationship between body weight and cumulative metabolizable energy intake. The functions used were: (a) monomolecular (diminishing returns behaviour), (b) Gompertz (smooth sigmoidal behaviour with a fixed point of inflection) and (c) Lopez, France and Richards (diminishing returns and sigmoidal behaviour with a variable point of inflection). The results of this analysis demonstrated that equations capable of mimicking the law of diminishing returns describe accurately the relationship between body weight and cumulative metabolizable energy intake in broilers.

Relevance:

80.00%

Abstract:

The suitability of models specifically re-parameterized for analyzing energy balance data relating metabolizable energy intake to growth rate has recently been investigated in male broilers. In this study, the more adequate of those models was applied to growing turkeys to provide estimates of their energy needs for maintenance and growth. Three functional forms were used. They were: two equations representing diminishing returns behaviour (monomolecular and rectangular hyperbola); and one equation describing smooth sigmoidal behaviour with a fixed point of inflexion (Gompertz). The models estimated the metabolizable energy requirement for maintenance in turkeys to be 359-415 kJ/kg of live-weight/day. The predicted values of average net energy requirement for producing 1 g of gain in live-weight, between 1 and 4 times maintenance, varied from 8.7 to 10.9 kJ. These results and those previously reported for broilers are a basis for accepting the general validity of these models.

Relevance:

80.00%

Abstract:

The transreal numbers are a total number system in which every arithmetical operation is well defined everywhere. This has many benefits over the real numbers as a basis for computation and, possibly, for physical theories. We define the topology of the transreal numbers and show that it gives a more coherent interpretation of two's complement arithmetic than the conventional integer model. Trans-two's-complement arithmetic handles the infinities and 0/0 more coherently, and with very much less circuitry, than floating-point arithmetic. This reduction in circuitry is especially beneficial in parallel computers, such as the Perspex machine, and the increase in functionality makes Digital Signal Processing chips better suited to general computation.
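
The abstract does not spell out the encoding, so the sketch below is purely hypothetical: it reserves three codes of an ordinary 8-bit two's-complement word for the two infinities and nullity and shows that division then needs no exception handling. Which codes are reserved, and the decision to reserve exactly three, are assumptions for illustration, not the paper's scheme.

```python
# Purely hypothetical illustration of the idea behind trans-two's-complement
# arithmetic: reserve a few codes of an ordinary 8-bit two's-complement word
# for +infinity, -infinity and nullity so that every division result is
# representable and no exception machinery is needed.
INF, NINF, NULLITY = "+inf", "-inf", "nullity"        # symbolic, illustrative values

def decode(word, bits=8):
    """Interpret an n-bit word, with the three most extreme codes reserved."""
    top = 1 << (bits - 1)
    value = word - (1 << bits) if word >= top else word   # ordinary two's complement
    if value == top - 1:
        return INF                # assumed: largest positive code is +infinity
    if value == -top:
        return NULLITY            # assumed: most negative code is nullity
    if value == -top + 1:
        return NINF               # assumed: next code is -infinity
    return value

def trans_int_div(a, b):
    """Total integer division on decoded values: never raises an exception."""
    if NULLITY in (a, b):
        return NULLITY
    a_inf, b_inf = a in (INF, NINF), b in (INF, NINF)
    if a_inf and b_inf:
        return NULLITY                          # infinity / infinity: no information
    if b_inf:
        return 0                                # finite / infinite = 0
    if b == 0:
        if a_inf:
            return a                            # +/- infinity / 0 keeps its sign
        return NULLITY if a == 0 else (INF if a > 0 else NINF)
    if a_inf:
        return a if b > 0 else (NINF if a is INF else INF)
    return int(a / b)                           # truncate toward zero, like a hardware divider

print(decode(0b01111111), decode(0b10000000), decode(0b10000001))   # +inf nullity -inf
print(trans_int_div(5, 0), trans_int_div(0, 0), trans_int_div(7, -2))
```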

Relevance:

80.00%

Abstract:

The relatively fast processing speed requirements of Wireless Personal Area Network (WPAN) consumer products are often in conflict with their low power and cost requirements. In order to resolve this conflict, the efficiency and cost effectiveness of these products and their underlying functional modules become paramount. This paper presents a low-cost, simple, yet high performance solution for the receiver Channel Estimator and Equalizer for the Multiband OFDM (MB-OFDM) system, particularly directed at the WiMedia Consortium Physical Layer (ECMA-368) consumer implementation for Wireless-USB and Fast Bluetooth. In this paper, the receiver fixed-point performance is measured and the results indicate excellent performance compared to the current literature [1].
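
For orientation only, here is a minimal sketch of a per-subcarrier channel estimator and one-tap zero-forcing equaliser, run once in floating point and once on values rounded to a 2^-13 grid as a crude fixed-point model. The FFT size, preamble, channel and word length are all assumptions; none of this reflects the paper's actual architecture.

```python
# Illustrative sketch: estimate the channel from a known preamble symbol, then
# apply one-tap zero-forcing per subcarrier, in floating point and in a crude
# fixed-point model (rounding to a 2**-13 grid, no saturation modelled).
import numpy as np

N_FFT = 128                                   # assumed MB-OFDM-like FFT size
FRAC = 13                                     # assumed fractional word length

def quantize(x):
    """Round a complex array onto a 2**-FRAC grid."""
    scale = float(1 << FRAC)
    return (np.round(x.real * scale) + 1j * np.round(x.imag * scale)) / scale

rng = np.random.default_rng(0)
preamble = np.exp(1j * np.pi / 2 * rng.integers(0, 4, N_FFT))   # known QPSK preamble
data     = np.exp(1j * np.pi / 2 * rng.integers(0, 4, N_FFT))   # QPSK payload symbol

h_time = 0.4 * (rng.standard_normal(6) + 1j * rng.standard_normal(6))   # multipath taps
H = np.fft.fft(h_time, N_FFT)                                           # channel per subcarrier

noise_p = 0.02 * (rng.standard_normal(N_FFT) + 1j * rng.standard_normal(N_FFT))
noise_d = 0.02 * (rng.standard_normal(N_FFT) + 1j * rng.standard_normal(N_FFT))
rx_preamble = H * preamble + noise_p
rx_data     = H * data + noise_d

# Channel estimate from the known preamble, then one-tap zero-forcing per subcarrier.
eq_float = rx_data / (rx_preamble / preamble)
eq_fixed = quantize(quantize(rx_data) / quantize(quantize(rx_preamble) / quantize(preamble)))

for name, eq in (("float", eq_float), ("fixed", eq_fixed)):
    print(name, "EVM:", np.sqrt(np.mean(np.abs(eq - data) ** 2)))
```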

Relevance:

80.00%

Abstract:

The Perspex Machine arose from the unification of computation with geometry. We now report significant redevelopment of both a partial C compiler that generates perspex programs and of a Graphical User Interface (GUI). The compiler is constructed with standard compiler-generator tools and produces both an explicit parse tree for C and an Abstract Syntax Tree (AST) that is better suited to code generation. The GUI uses a hash table and a simpler software architecture to achieve an order of magnitude speed up in processing and, consequently, an order of magnitude increase in the number of perspexes that can be manipulated in real time (now 6,000). Two perspex-machine simulators are provided, one using trans-floating-point arithmetic and the other using transrational arithmetic. All of the software described here is available on the world wide web. The compiler generates code in the neural model of the perspex. At each branch point it uses a jumper to return control to the main fibre. This has the effect of pruning out an exponentially increasing number of branching fibres, thereby greatly increasing the efficiency of perspex programs as measured by the number of neurons required to implement an algorithm. The jumpers are placed at unit distance from the main fibre and form a geometrical structure analogous to a myelin sheath in a biological neuron. Both the perspex jumper-sheath and the biological myelin-sheath share the computational function of preventing cross-over of signals to neurons that lie close to an axon. This is an example of convergence driven by similar geometrical and computational constraints in perspex and biological neurons.

Relevance:

80.00%

Abstract:

We consider the motion of a spring, with a mass attached to it, that initially hangs in equilibrium from a fixed point and is then detached from that point.
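
For context, the standard idealisation of this problem (a light spring of stiffness k carrying a point mass m, which is not necessarily the model analysed in the paper) separates the motion after release into free fall of the centre of mass and an oscillation of the extension about the natural length:

```latex
% Centre of mass: the only external force after release is gravity, so
% \ddot z_{cm} = -g, independent of the internal spring dynamics.
% Extension x about the natural length, viewed in the freely falling frame:
\[
  m\,\ddot{x} = -k\,x, \qquad x(0) = \frac{mg}{k}, \qquad \dot{x}(0) = 0
  \;\Longrightarrow\; x(t) = \frac{mg}{k}\cos\!\Big(\sqrt{\tfrac{k}{m}}\,t\Big).
\]
```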

Relevance:

80.00%

Abstract:

DNA G-quadruplexes are one of the targets being actively explored for anti-cancer therapy by inhibiting them with small molecules. This computational study was conducted to predict the binding strengths and orientations of a set of novel dimethyl-amino-ethyl-acridine (DACA) analogues that were designed and synthesized in our laboratory but did not diffract in synchrotron light. The crystal structure of the DNA G-quadruplex (TGGGGT)4 (PDB: 1O0K) was used as the target for their binding properties in our studies. We used both force field (FF) and QM/MM derived atomic charge schemes simultaneously to compare the predicted drug binding modes and their energetics. This study evaluates the comparative performance of fixed point charge based Glide XP docking and the quantum polarized ligand docking schemes. These results will provide insights into the effects of including or ignoring drug-receptor interfacial polarization events in molecular docking simulations, which in turn will aid the rational selection of computational methods at different levels of theory in future drug design programs. Plenty of molecular modelling tools and methods currently exist for modelling drug-receptor, protein-protein, or DNA-protein interactions at different levels of complexity. Yet the capacity of such tools to describe various physico-chemical properties more accurately is the next step ahead in current research. In particular, the usage of the most accurate quantum mechanics (QM) methods is severely restricted by their tedious nature. Though the usage of massively parallel supercomputing environments has resulted in a tremendous improvement in molecular mechanics (MM) calculations such as molecular dynamics, QM methods are still capable of dealing with only a couple of tens to hundreds of atoms. One efficient strategy that utilizes the powers of both MM and QM is the QM/MM hybrid method. Lately, attempts have been directed towards the goal of deploying several different QM methods to improve force field based simulations, but with practical restrictions in place. One such method utilizes the inclusion of charge polarization events at the drug-receptor interface, which are not explicitly present in the MM FF.

Relevance:

30.00%

Abstract:

The conventional method for the assessment of acute dermal toxicity (OECD Test Guideline 402, 1987) uses death of animals as an endpoint to identify the median lethal dose (LD50). A new OECD Testing Guideline called the dermal fixed dose procedure (dermal FDP) is being prepared to provide an alternative to Test Guideline 402. In contrast to Test Guideline 402, the dermal FDP does not provide a point estimate of the LD50, but aims to identify that dose of the substance under investigation that causes clear signs of nonlethal toxicity. This is then used to assign classification according to the new Globally Harmonised System of Classification and Labelling scheme (GHS). The dermal FDP has been validated using statistical modelling rather than by in vivo testing. The statistical modelling approach enables calculation of the probability of each GHS classification and the expected numbers of deaths and animals used in the test for imaginary substances with a range of LD50 values and dose-response curve slopes. This paper describes the dermal FDP and reports the results from the statistical evaluation. It is shown that the procedure will be completed with considerably less death and suffering than guideline 402, and will classify substances either in the same or a more stringent GHS class than that assigned on the basis of the LD50 value.
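
The kind of calculation such a statistical evaluation rests on can be sketched as follows: a probit dose-response model giving, for an imaginary substance with a chosen LD50 and slope, the probability of death at each fixed dose level and the expected number of deaths in a dose group. The dose levels, group size and the decision rules of the dermal FDP itself are assumptions for illustration only and are not reproduced from the guideline.

```python
# Illustrative sketch only: the probit dose-response model underlying this kind
# of simulation study. It gives, for an imaginary substance with a given dermal
# LD50 and slope, the probability of death at each fixed dose level and the
# expected number of deaths in a group of animals. The actual FDP decision
# rules (evident-toxicity criteria, sequential dosing) are not reproduced.
import math

FIXED_DOSES = (50, 200, 1000, 2000)        # mg/kg; dose levels assumed for illustration

def p_death(dose, ld50, slope):
    """Probit model: P(death) = Phi(slope * (log10(dose) - log10(ld50)))."""
    z = slope * (math.log10(dose) - math.log10(ld50))
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_deaths(ld50, slope, group_size=5):
    """Expected deaths per dose group for an imaginary substance."""
    return {d: group_size * p_death(d, ld50, slope) for d in FIXED_DOSES}

# An imaginary substance with LD50 = 700 mg/kg and a moderately steep slope.
for dose, deaths in expected_deaths(ld50=700, slope=2.0).items():
    print(f"{dose:>5} mg/kg: P(death) = {p_death(dose, 700, 2.0):.2f}, "
          f"expected deaths in 5 animals = {deaths:.1f}")
```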