943 results for Arithmetic.
Abstract:
The primary aim of this study was to determine the relationship between telomere length and age in a range of marine invertebrates, including abalone (Haliotis spp.), oysters (Saccostrea glomerata), spiny lobsters (Sagmariasus verreauxi, formerly Jasus verreauxi, and Jasus edwardsii) and school prawns (Metapenaeus macleayi). Additionally, this relationship was studied in a vertebrate using the freshwater fish silver perch (Bidyanus bidyanus). Telomere length differences between tissues were also examined in some species, such as Saccostrea glomerata, Sagmariasus verreauxi and Bidyanus bidyanus. In some cases cultured specimens of known age were used, and this is noted in the spreadsheets. For other, wild-caught specimens where age was not known, size was used as a proxy for age; this may be a broad size class, or be determined by shell size or carapace length depending on the organism. Each spreadsheet contains raw data of telomere length estimates from Terminal Restriction Fragment (TRF) assays for individuals of each species, including appropriate details such as age or size and tissue. Telomere length estimates are given in base pairs (bp). In most cases replicate experiments were conducted on groups of samples three times, but on a small number of occasions only two replicate experiments were conducted. Further description of the samples can be found in the final report of FRDC 2007/033. The arithmetic average for each individual (sample ID) across the two or three replicate experiments is also given.

Bidyanus bidyanus (silver perch): Two sheets are contained within. a) Comparison of telomere length between different tissues (heart, liver and muscle) within the three-year-old age class; two replicate experiments were conducted. b) Comparison of telomere length between fish of different but known ages (0.25, 1, 2 and 3 years old) in each of three tissues (heart, liver and muscle); three replicate experiments were conducted per tissue.

Haliotis spp. (abalone): Three species were tested. H. asinina: telomere length was compared in two age classes (11-month and 18-month-old abalone) using muscle tissue from the foot; within-gel variation was also estimated using a single sample run three times on one gel (replicate experiment). H. laevigata x H. rubra hybrids: telomere length was compared in three known age classes (two, three and four years old) using muscle tissue from the foot. H. rubra: telomere length was compared in a range of different-sized abalone using muscle tissue from the foot; shell size is also given for each abalone.

Saccostrea glomerata: Three sheets are contained within the file. a) Samples came from Moreton Bay, Queensland, in 2007; telomere length was compared in two tissues (gill and mantle) of oysters in three age groups (1, 3 and 4 years). b) Samples came from Moreton Bay, Queensland, in 2009; telomere length was compared in three age classes using DNA from gill tissue only. c) Samples came from Wallis Lake, New South Wales; telomere length was estimated from the whole body minus the shell for 1-year-old oysters, from gill tissue of three age classes (1.5, 3 and 4 years), and from mantle tissue of two age classes (3 and 4 years).

Sagmariasus verreauxi (formerly Jasus verreauxi): Telomere length was estimated from abdomen tissue of puerulus, and from gill and muscle tissue of three-year-old, large and very large size classes of lobsters.

Jasus edwardsii: Telomere length was measured in two size classes of lobsters: adults of varying sizes using muscle tissue, and puerulus using tissue from the abdomen minus the exoskeleton.

Metapenaeus macleayi: Telomere length was measured in three size classes of adult school prawns, using muscle tissue minus the exoskeleton.
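The arithmetic averaging of replicates described above can be reproduced in a few lines; the sketch below uses made-up sample IDs and telomere lengths, not values from the FRDC spreadsheets.

```python
from statistics import mean

# Hypothetical replicate TRF measurements (bp) keyed by sample ID;
# the real data come from the spreadsheets described above.
replicates = {
    "oyster_01": [8200, 8450, 8310],   # three replicate experiments
    "oyster_02": [7900, 8050],         # only two replicates were run
}

# Arithmetic average of telomere length per individual, as reported
# alongside the raw replicate values.
averages = {sample: mean(lengths) for sample, lengths in replicates.items()}

for sample, avg in averages.items():
    print(f"{sample}: mean telomere length = {avg:.0f} bp")
```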
Abstract:
A global recursive bisection algorithm is described for computing the complex zeros of a polynomial. It has complexity O(n^3 p), where n is the degree of the polynomial and p the bit precision requirement. If n processors are available, it can be realized in parallel with complexity O(n^2 p); it can also be implemented using exact arithmetic. A combined Wilf-Hansen algorithm is suggested for reduction in complexity.
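For readers unfamiliar with the underlying principle, the sketch below shows elementary bisection on a real interval. It only illustrates the bisection idea; it is not the paper's global complex-plane algorithm or its exact-arithmetic implementation.

```python
def bisect(f, lo, hi, tol=1e-12):
    """Standard real-interval bisection: f(lo) and f(hi) must differ in sign.

    Elementary principle only; the paper generalizes this to a global
    recursive bisection of regions in the complex plane, runnable with
    exact arithmetic, with complexity O(n^3 p).
    """
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if fmid == 0.0:
            return mid
        if flo * fmid < 0:       # zero lies in the left half
            hi = mid
        else:                    # zero lies in the right half
            lo, flo = mid, fmid
    return 0.5 * (lo + hi)

# Example: the real zero of x^3 - 2, approximately 1.2599.
print(bisect(lambda x: x**3 - 2, 0.0, 2.0))
```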
Abstract:
An important question which has to be answered in evaluating the suitability of a microcomputer for a control application is the time it would take to execute the specified control algorithm. In this paper, we present a method of obtaining closed-form formulas to estimate this time. These formulas are applicable to control algorithms in which arithmetic operations and matrix manipulations dominate. The method does not require writing detailed programs to implement the control algorithm. Using this method, the execution times of a variety of control algorithms on a range of 16-bit mini- and recently announced microcomputers are calculated. The formulas have been verified independently by an analysis program, which computes the execution time bounds of control algorithms coded in Pascal when they are run on a specified micro- or minicomputer.
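The flavour of such closed-form estimates can be shown with a short sketch; the per-operation times and the operation counts for a matrix-vector product below are placeholder assumptions, not the paper's verified formulas.

```python
# Hypothetical per-operation times in microseconds for a 16-bit processor;
# actual figures depend on the specific mini- or microcomputer.
OP_TIME_US = {"add": 4.0, "mul": 20.0, "load_store": 3.0}

def matvec_time(n, op_time=OP_TIME_US):
    """Closed-form estimate of the time to compute y = A x for an n x n A.

    Counts n^2 multiplications, n(n-1) additions and roughly 2 n^2 memory
    accesses; this mirrors the style of formula described, not the paper's
    formulas themselves.
    """
    mults = n * n
    adds = n * (n - 1)
    mems = 2 * n * n
    return (mults * op_time["mul"]
            + adds * op_time["add"]
            + mems * op_time["load_store"])

# Example: an 8-state feedback update dominated by one matrix-vector product.
print(f"estimated time: {matvec_time(8):.0f} us")
```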
Abstract:
An error-free computational approach is employed for finding the integer solution to a system of linear equations, using finite-field arithmetic. This approach is also extended to find the optimum solution for linear inequalities such as those arising in interval linear programming problems.
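A minimal sketch of the finite-field idea, assuming a single large prime modulus and a solution small enough to be read off from symmetric residues; the paper's approach is more general.

```python
# Error-free solution of A x = b via arithmetic in GF(p).  A single prime is
# used and the integer answer is read off from symmetric residues, which is
# valid only when the true solution components are smaller than p/2 in
# magnitude; combining several moduli removes this restriction.

P = 1_000_003  # prime modulus (an assumption for this sketch)

def solve_mod_p(A, b, p=P):
    """Gauss-Jordan elimination over GF(p); returns x with A x = b (mod p)."""
    n = len(A)
    M = [[a % p for a in row] + [bi % p] for row, bi in zip(A, b)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        inv = pow(M[col][col], p - 2, p)            # Fermat inverse
        M[col] = [(v * inv) % p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(vr - f * vc) % p for vr, vc in zip(M[r], M[col])]
    return [row[n] for row in M]

def to_symmetric(x, p=P):
    """Map a residue to the integer in (-p/2, p/2] it represents."""
    return x - p if x > p // 2 else x

A = [[2, 1], [5, 3]]        # det = 1, so the solution is integral
b = [3, 8]
print([to_symmetric(v) for v in solve_mod_p(A, b)])   # [1, 1]
```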
Abstract:
Nuclear magnetic resonance (NMR) spectroscopy provides us with many means to study biological macromolecules in solution. Proteins in particular are the most intriguing targets for NMR studies. Protein functions are usually ascribed to specific three-dimensional structures, but more recently tails, long loops and non-structural polypeptides have also been shown to be biologically active. Examples include prions, α-synuclein, amylin and the HIV protein Nef. However, conformational preferences in coil-like molecules are difficult to study by traditional methods. Residual dipolar couplings (RDCs) have opened up new opportunities; however, their analysis is not trivial. Here we show how to interpret RDCs from these weakly structured molecules. The most notable residual dipolar couplings arise from steric obstruction effects. In dilute liquid crystalline media, as well as in anisotropic gels, polypeptides encounter nematogens. The shape of a polypeptide conformation limits the encounter with the nematogen: the most elongated conformations may come closest, whereas the most compact remain furthest away. As a result there is slightly more room in the solution for the extended than for the compact conformations. This conformation-dependent concentration effect leads to a bias in the measured data. The measured values are not arithmetic averages but essentially weighted averages over conformations. The overall effect can be calculated for random flight chains and simulated for more realistic molecular models. Earlier there was an implicit assumption that weakly structured or non-structural molecules would not yield any observable residual dipolar couplings. However, in the pioneering study by Shortle and Ackerman, RDCs were clearly observed. We repeated the study for a urea-denatured protein at high temperature and also observed RDCs indisputably. This was very convincing to us, but we could not accept the proposed explanation for the non-zero RDCs, namely that some residual structure remained in a protein that to our understanding was fully denatured. We proceeded to gain understanding via simulations and elementary experiments. In the measurements we used simple homopolymers with only two labelled residues, and we simulated the data to learn more about the origin of RDCs. We realized that RDCs depend on the position of the residue as well as on the length of the polypeptide. These investigations resulted in a theoretical model for RDCs from coil-like molecules. Later we extended the studies with molecular dynamics. Somewhat surprisingly, the effects are small for non-structured molecules, whereas the bias may be large for a small compact protein. All in all, the work gave clear and unambiguous results on how to interpret RDCs as structural and dynamic parameters of weakly structured proteins.
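The difference between an arithmetic and a conformation-weighted average can be illustrated with a toy ensemble; all numbers below are placeholders, not data from the study.

```python
import numpy as np

# Toy conformational ensemble: per-conformer RDC value (Hz) and a relative
# statistical weight reflecting how much room the conformer has near the
# nematogen (extended shapes get slightly higher weights).
rdc_per_conformer = np.array([2.1, 0.4, -1.3, 3.0, 0.2])
accessible_weight = np.array([1.05, 0.98, 0.95, 1.10, 1.00])

plain_mean = rdc_per_conformer.mean()
weighted_mean = np.average(rdc_per_conformer, weights=accessible_weight)

# The measured RDC corresponds to the weighted mean: conformations that can
# approach the nematogen more closely are slightly over-represented.
print(f"arithmetic average: {plain_mean:.3f} Hz")
print(f"weighted average:   {weighted_mean:.3f} Hz")
```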
Abstract:
After Gödel's incompleteness theorems and the collapse of Hilbert's programme, Gerhard Gentzen continued the quest for consistency proofs of Peano arithmetic. He considered a finitistic or constructive proof still possible and necessary for the foundations of mathematics. For a proof to be meaningful, the principles relied on should be considered more reliable than the doubtful elements of the theory concerned. He worked out a total of four proofs between 1934 and 1939. This thesis examines Gentzen's consistency proofs for arithmetic from different angles. The consistency of Heyting arithmetic is shown both in a sequent calculus notation and in natural deduction. The former proof includes a cut elimination theorem for the calculus and a syntactical study of the purely arithmetical part of the system. A consistency proof in standard natural deduction had remained an open problem since the publication of Gentzen's proofs. The solution to this problem for an intuitionistic calculus is based on a normalization proof by Howard. The proof is performed in the manner of Gentzen, by giving a reduction procedure for derivations of falsity. In contrast to Gentzen's proof, the procedure contains a vector assignment. The reduction reduces the first component of the vector, and this component can be interpreted as an ordinal less than epsilon_0, thus ordering the derivations by complexity and proving termination of the process.
Abstract:
In a storage system where individual storage nodes are prone to failure, the redundant storage of data in a distributed manner across multiple nodes is a must to ensure reliability. Reed-Solomon codes possess the reconstruction property under which the stored data can be recovered by connecting to any k of the n nodes in the network across which data is dispersed. This property can be shown to lead to vastly improved network reliability over simple replication schemes. Also of interest in such storage systems is the minimization of the repair bandwidth, i.e., the amount of data needed to be downloaded from the network in order to repair a single failed node. Reed-Solomon codes perform poorly here as they require the entire data to be downloaded. Regenerating codes are a new class of codes which minimize the repair bandwidth while retaining the reconstruction property. This paper provides an overview of regenerating codes including a discussion on the explicit construction of optimum codes.
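The bandwidth saving can be illustrated with the cut-set bound at the minimum-storage regenerating (MSR) point; the parameter values below are illustrative, and the bound is the standard one from the regenerating-codes literature rather than a result derived here.

```python
# Repair-bandwidth comparison for a file of B symbols stored over n nodes
# with the "any k of n" reconstruction property.
def rs_repair_bandwidth(B, k):
    """A classical Reed-Solomon repair downloads k fragments, i.e. all of B."""
    return k * (B / k)

def msr_repair_bandwidth(B, k, d):
    """Cut-set lower bound on repair bandwidth at the minimum-storage point,
    when the replacement node contacts d >= k surviving helper nodes."""
    return d * B / (k * (d - k + 1))

B, n, k, d = 1_000_000, 10, 5, 9   # file size, nodes, reconstruction and repair degrees
print(f"RS repair:  {rs_repair_bandwidth(B, k):,.0f} symbols (entire file)")
print(f"MSR repair: {msr_repair_bandwidth(B, k, d):,.0f} symbols")
```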
Abstract:
This thesis attempts to improve models for predicting forest stand structure for practical use, e.g. forest management planning (FMP) purposes in Finland. Comparisons were made between the Weibull and Johnson's SB distributions and between alternative regression estimation methods. The data used for the preliminary studies were local, but the final models were based on representative data. Models were validated mainly in terms of bias and RMSE of the main stand characteristics (e.g. volume) using independent data. The bivariate SBB distribution model was used to mimic realistic variation in tree dimensions by including within-diameter-class height variation. With the traditional method, a diameter distribution with the expected height resulted in reduced height variation, whereas the alternative bivariate method utilized the error term of the height model. The lack of models for FMP was covered to some extent by models for peatland and juvenile stands. The validation of these models showed that the more sophisticated regression estimation methods provided slightly improved accuracy. A flexible prediction application for stand structure consisted of seemingly unrelated regression models for eight stand characteristics, the parameters of three optional distributions and Näslund's height curve. The cross-model covariance structure was used in a linear prediction application, in which the expected values of the models were calibrated with the known stand characteristics. This provided a framework to validate the optional distributions and the optional sets of stand characteristics. The height distribution is recommended for the earliest stage of stand development because of its continuous nature. From a mean height of about 4 m, the Weibull dbh-frequency distribution is recommended in young stands if the input variables consist of arithmetic stand characteristics. In advanced stands, basal area-dbh distribution models are recommended. Näslund's height curve proved useful. Some efficient transformations of stand characteristics are introduced, e.g. the shape index, which combines the basal area, the stem number and the median diameter. The shape index enabled the SB model for peatland stands to capture the large variation in stand densities. This model also demonstrated reasonable behaviour for stands on mineral soils.
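As a rough illustration of how a predicted dbh-frequency Weibull distribution is turned into a stand-level characteristic, the sketch below integrates a placeholder per-tree volume function over the distribution; the parameter values and the volume function are assumptions, not the thesis models.

```python
import math

def weibull_pdf(d, shape, scale):
    """Two-parameter Weibull density for diameter d (cm)."""
    if d <= 0:
        return 0.0
    z = (d / scale) ** shape
    return (shape / scale) * (d / scale) ** (shape - 1) * math.exp(-z)

# Placeholder Weibull parameters and per-tree volume proxy -- the thesis
# predicts the distribution parameters from stand characteristics and uses
# Naslund's height curve plus volume models instead.
shape, scale = 3.2, 18.0          # dbh-frequency Weibull (cm)
stems_per_ha = 900

def tree_volume(d_cm):
    """Crude per-tree volume proxy (m^3), for this illustration only."""
    return 0.0004 * d_cm ** 2.2

# Integrate volume over the diameter distribution with a simple Riemann sum.
step = 0.5
volume_per_ha = sum(
    weibull_pdf(d, shape, scale) * tree_volume(d) * step
    for d in [i * step for i in range(1, 201)]      # 0.5 ... 100 cm
) * stems_per_ha

print(f"predicted stand volume: {volume_per_ha:.0f} m^3/ha (illustration only)")
```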
Abstract:
This monograph describes the emergence of independent research on logic in Finland. The emphasis is placed on three well-known students of Eino Kaila: Georg Henrik von Wright (1916-2003), Erik Stenius (1911-1990), and Oiva Ketonen (1913-2000), and their research between the early 1930s and the early 1950s. The early academic work of these scholars laid the foundations for today's strong tradition in logic in Finland and also became internationally recognized. However, due attention has not been given to these works later, nor have they been comprehensively presented together. Each chapter of the book focuses on the life and work of one of Kaila's aforementioned students, with a fourth chapter discussing works on logic by authors who would later become known within other disciplines. Through an extensive use of correspondence and other archived material, some insight has been gained into the persons behind the academic personae. Unique and unpublished biographical material has been available for this task. The chapter on Oiva Ketonen focuses primarily on his work on what is today known as proof theory, especially on his proof-theoretical system with invertible rules that permits a terminating root-first proof search. The independence of the parallel postulate is proved as an example of the strength of root-first proof search. Ketonen was, to our knowledge, the only student of Gerhard Gentzen (the 'father' of proof theory). Correspondence and a hitherto unavailable autobiographical manuscript, in addition to an unpublished article on the relationship between logic and epistemology, are presented. The chapter on Erik Stenius discusses his work on paradoxes and set theory, more specifically on how a rigid theory of definitions is employed to avoid these paradoxes. A presentation by Paul Bernays on Stenius' attempt at a proof of the consistency of arithmetic is reconstructed based on Bernays' lecture notes. Stenius' correspondence with Paul Bernays, Evert Beth, and Georg Kreisel is discussed. The chapter on Georg Henrik von Wright presents his early work on probability and epistemology, along with his later work on modal logic that made him internationally famous. Correspondence from various archives (especially with Kaila and Charlie Dunbar Broad) further illuminates his academic achievements and his experiences during the challenging circumstances of the 1940s.
Abstract:
A symmetrizer of the matrix A is a symmetric solution X of the matrix equation XA = A′X. An exact matrix symmetrizer is computed by obtaining a general algorithm and superimposing a modified multiple-modulus residue arithmetic on this algorithm. A procedure is presented that uses a symmetrizer to obtain a symmetric matrix, called here an equivalent symmetric matrix, whose eigenvalues are the same as those of a given real nonsymmetric matrix.
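A floating-point sketch of what a symmetrizer is (the paper computes it exactly with multiple-modulus residue arithmetic): for a matrix A with distinct eigenvalues, the solutions of XA = A′X form the null space of a Kronecker-product system and turn out to be symmetric automatically. The matrix A below is an arbitrary example.

```python
import numpy as np
from scipy.linalg import null_space

# Numerically find a symmetrizer X of A, i.e. a symmetric X with XA = A'X
# (A' denotes the transpose of A).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 4.0]])
n = A.shape[0]
I = np.eye(n)

# vec(XA) = kron(A', I) vec(X) and vec(A'X) = kron(I, A') vec(X), so the
# symmetrizers span the null space of the matrix below.  When A has distinct
# eigenvalues, every solution of XA = A'X is symmetric.
M = np.kron(A.T, I) - np.kron(I, A.T)
basis = null_space(M)
X = basis[:, 0].reshape(n, n, order="F")   # column-major reshape matches vec()

print("symmetry error:   ", np.abs(X - X.T).max())
print("equation residual:", np.abs(X @ A - A.T @ X).max())
```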
Abstract:
Three-dimensional clipping is a critical component of the 3D graphics pipeline. A new 3D clipping algorithm is presented in this paper. An efficient 2D clipping routine reported earlier has been used as a submodule. This algorithm uses a new classification scheme for lines of all possible orientations with respect to a rectangular parallelepiped view volume. The performance of this algorithm has been evaluated using exact arithmetic operation counts. It is shown that our algorithm requires fewer arithmetic operations than the Cyrus-Beck 3D clipping algorithm in all cases. It is also shown that for lines that intersect the clipping volume, our algorithm performs better than the Liang-Barsky 3D clipping algorithm.
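For context, the sketch below is a standard Liang-Barsky style parametric clip of a segment against an axis-aligned box, i.e. one of the baselines compared against; it is not the paper's new classification-based algorithm.

```python
def clip_segment_to_box(p0, p1, box_min, box_max):
    """Liang-Barsky style parametric clipping of segment p0->p1 against an
    axis-aligned box.  Returns the clipped endpoints, or None if the segment
    misses the box.  Shown only as the familiar baseline, not the paper's
    new algorithm.
    """
    t0, t1 = 0.0, 1.0
    d = [b - a for a, b in zip(p0, p1)]
    for axis in range(3):
        for p, q in ((-d[axis], p0[axis] - box_min[axis]),
                     ( d[axis], box_max[axis] - p0[axis])):
            if p == 0.0:
                if q < 0.0:
                    return None          # parallel to and outside this slab
                continue
            t = q / p
            if p < 0.0:
                t0 = max(t0, t)          # entering intersection
            else:
                t1 = min(t1, t)          # leaving intersection
            if t0 > t1:
                return None              # segment misses the box
    def point_at(t):
        return [a + t * (b - a) for a, b in zip(p0, p1)]
    return point_at(t0), point_at(t1)

# Example: a segment passing through the unit cube.
print(clip_segment_to_box((-1, 0.5, 0.5), (2, 0.5, 0.5), (0, 0, 0), (1, 1, 1)))
```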
Abstract:
Let O be a monomial curve in the affine algebraic e-space over a field K, and let P be the relation ideal of O. If O is defined by a sequence of e positive integers, some e - 1 of which form an arithmetic sequence, then we construct a minimal set of generators for P and write an explicit formula for μ(P).
Abstract:
We address the problem of computing the level-crossings of an analog signal from samples measured on a uniform grid. Such a problem is important, for example, in multilevel analog-to-digital (A/D) converters. The first operation in such sampling modalities is a comparator, which gives rise to a bilevel waveform. Since bilevel signals are not bandlimited, measuring the level-crossing times exactly becomes impractical within the conventional framework of Shannon sampling. In this paper, we propose a novel sub-Nyquist sampling technique for making measurements on a uniform grid and thereby exactly computing the level-crossing times from those samples. The computational complexity of the technique is low, comprising only simple arithmetic operations. We also present a finite-rate-of-innovation sampling perspective of the proposed approach and show how exponential splines fit naturally into the proposed sampling framework. We also discuss some concrete practical applications of the sampling technique.
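As a naive baseline (not the paper's sub-Nyquist method), level-crossing times can be estimated from uniform samples by linear interpolation between neighbouring samples, as sketched below.

```python
import math

def level_crossings(samples, level, T):
    """Estimate level-crossing times from uniform samples (period T) by
    linear interpolation between neighbouring samples.

    Naive baseline only; the paper's technique computes the crossing times
    exactly from sub-Nyquist measurements of the comparator output within a
    finite-rate-of-innovation framework.
    """
    times = []
    for i in range(len(samples) - 1):
        a, b = samples[i] - level, samples[i + 1] - level
        if a == 0.0:
            times.append(i * T)
        elif a * b < 0.0:                          # sign change between samples
            times.append((i + a / (a - b)) * T)    # linear-interpolation estimate
    return times

# Example: a 1 Hz sine sampled at 50 Hz, crossing the level 0.5
# (true crossings at 1/12, 5/12, 13/12, 17/12 seconds).
T = 0.02
samples = [math.sin(2 * math.pi * k * T) for k in range(100)]
print([round(t, 3) for t in level_crossings(samples, 0.5, T)])
```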
Abstract:
Gauss and Fourier have together provided us with the essential techniques for symbolic computation with linear arithmetic constraints over the reals and the rationals. These variable elimination techniques for linear constraints have particular significance in the context of the constraint logic programming languages that have been developed in recent years. Variable elimination in linear equations (Gaussian elimination) is a fundamental technique in computational linear algebra and is therefore quite familiar to most of us. Elimination in linear inequalities (Fourier elimination), on the other hand, is intimately related to polyhedral theory and aspects of linear programming that are not quite as familiar. In addition, the high complexity of elimination in inequalities has forced the consideration of intricate specializations of Fourier's original method. The intent of this survey article is to acquaint the reader with these connections and developments. The latter part of the article dwells on the thesis that variable elimination in linear constraints over the reals extends quite naturally to constraints in certain discrete domains.
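A single textbook Fourier elimination step, in exact rational arithmetic, looks as follows; the redundancy-removal refinements the article surveys are omitted.

```python
from fractions import Fraction

def fourier_eliminate(ineqs, j):
    """Eliminate variable j from a system of inequalities a.x <= b, given as
    (coefficient list, bound) pairs.  Plain textbook Fourier elimination:
    every inequality with a positive coefficient of x_j is combined with
    every inequality with a negative coefficient so that x_j cancels.
    """
    zero, pos, neg = [], [], []
    for a, b in ineqs:
        (zero if a[j] == 0 else pos if a[j] > 0 else neg).append((a, b))
    out = list(zero)
    for ap, bp in pos:
        for an, bn in neg:
            lam, mu = -an[j], ap[j]                   # both positive multipliers
            a_new = [lam * x + mu * y for x, y in zip(ap, an)]
            b_new = lam * bp + mu * bn
            out.append((a_new, b_new))
    return out

# Example: x + y <= 4, -x + y <= 2, -y <= 0.  Eliminating x (variable 0)
# leaves the projection onto y:  2y <= 6  and  -y <= 0, i.e. 0 <= y <= 3.
F = Fraction
system = [([F(1), F(1)], F(4)), ([F(-1), F(1)], F(2)), ([F(0), F(-1)], F(0))]
print(fourier_eliminate(system, 0))
```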
Abstract:
A linear programming problem in inequality form having a bounded solution is solved error-free using an algorithm that sorts the inequalities, removes the redundant ones, and uses p-adic arithmetic.
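A sketch of the error-free idea, with Python's exact rationals standing in for the paper's p-adic arithmetic: once the binding inequalities of a bounded LP have been identified, the optimal vertex is obtained exactly from the corresponding equality system. The example LP below is an illustration, not taken from the paper.

```python
from fractions import Fraction

def solve_exact(A, b):
    """Solve A x = b exactly with rational (Fraction) Gauss-Jordan elimination.

    Exact rationals stand in here for p-adic arithmetic: both give error-free
    results, free of floating-point round-off.
    """
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[piv] = M[piv], M[c]
        M[c] = [v / M[c][c] for v in M[c]]
        for r in range(n):
            if r != c and M[r][c] != 0:
                f = M[r][c]
                M[r] = [vr - f * vc for vr, vc in zip(M[r], M[c])]
    return [row[n] for row in M]

# Example: maximize x + y subject to 3x + y <= 7, x + 3y <= 7, x >= 0, y >= 0.
# The first two inequalities bind at the optimum, so the optimal vertex is the
# exact solution of the corresponding equality system.
print(solve_exact([[3, 1], [1, 3]], [7, 7]))    # [Fraction(7, 4), Fraction(7, 4)]
```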