928 results for "Decimal numbers and fractional numbers"
Abstract:
This work is concerned with finite volume methods for flows at low Mach numbers subject to buoyancy and heat sources. As a particular application, fires in car tunnels are considered. To extend the compressible-flow scheme into the low Mach number regime, a preconditioning technique is used and a stability result for it is proven. The source terms for gravity and heat are incorporated using operator splitting, and the resulting method is analyzed.
Abstract:
Exercises, exams and solutions for a third year maths course.
Abstract:
Report of a systematic review of Mersenne numbers 2^p-1 for p < 62982.
Abstract:
Reports the factor-filtering and primality-testing of Mersenne Numbers Mp for p < 100000, the latter using the ICL 'DAP' Distributed Array Processor.
Abstract:
This document provides and comments on the results of the Lucas-Lehmer testing and/or partial factorisation of all Mersenne numbers Mp = 2^p-1 where p is prime and less than 100,000. Previous computations have either been confirmed or corrected. The LLT computations on the ICL DAP are the first implementation of Fast-Fermat-Number-Transform multiplication in connection with Mersenne number testing. This paper championed the disciplines of systematically testing the Mp, and of double-sourcing results that were not manifestly correct. Both disciplines were adopted by the later GIMPS initiative, the 'Great Internet Mersenne Prime Search', which was itself one of the first web-based distributed-community projects.
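The Lucas-Lehmer test mentioned in the abstracts above can be sketched in a few lines. This is the standard form of the test, not the DAP's Fast-Fermat-Number-Transform implementation (Python's built-in big integers handle the large multiplications):

```python
def lucas_lehmer(p: int) -> bool:
    """Return True iff the Mersenne number M_p = 2**p - 1 is prime (p prime)."""
    if p == 2:
        return True            # M_2 = 3 is prime
    m = (1 << p) - 1           # M_p = 2^p - 1
    s = 4                      # Lucas-Lehmer seed
    for _ in range(p - 2):     # iterate s -> s^2 - 2 (mod M_p)
        s = (s * s - 2) % m
    return s == 0

# Mersenne prime exponents among the first odd primes; 11 fails (2047 = 23 * 89)
print([p for p in [2, 3, 5, 7, 11, 13, 17, 19, 31] if lucas_lehmer(p)])
# → [2, 3, 5, 7, 13, 17, 19, 31]
```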
Abstract:
The paper concerns the design and analysis of serial dilution assays to estimate the infectivity of a sample of tissue when it is assumed that the sample contains a finite number of indivisible infectious units such that a subsample will be infectious if it contains one or more of these units. The aim of the study is to estimate the number of infectious units in the original sample. The standard approach to the analysis of data from such a study is based on the assumption of independence of aliquots both at the same dilution level and at different dilution levels, so that the numbers of infectious units in the aliquots follow independent Poisson distributions. An alternative approach is based on calculation of the expected value of the total number of samples tested that are not infectious. We derive the likelihood for the data on the basis of the discrete number of infectious units, enabling calculation of the maximum likelihood estimate and likelihood-based confidence intervals. We use the exact probabilities that are obtained to compare the maximum likelihood estimate with those given by the other methods in terms of bias and standard error and to compare the coverage of the confidence intervals. We show that the methods have very similar properties and conclude that for practical use the method that is based on the Poisson assumption is to be recommended, since it can be implemented by using standard statistical software. Finally we consider the design of serial dilution assays, concluding that it is important that neither the dilution factor nor the number of samples that remain untested should be too large.
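The Poisson-assumption analysis that the abstract recommends for practical use can be sketched as follows. This is not the paper's exact discrete likelihood; it is the standard independent-Poisson model, with illustrative function names, fitted here by a simple ternary search rather than standard statistical software:

```python
import math

def neg_log_lik(lam, data):
    """Negative log-likelihood under the Poisson-independence assumption.
    data: list of (fraction_of_original_sample, n_tested, n_infectious)."""
    ll = 0.0
    for f, n, k in data:
        p = 1.0 - math.exp(-lam * f)          # P(aliquot is infectious)
        p = min(max(p, 1e-12), 1.0 - 1e-12)   # guard the logarithms
        ll += k * math.log(p) + (n - k) * math.log(1.0 - p)
    return -ll

def mle_lambda(data, lo=1e-3, hi=1e6, iters=100):
    """Ternary search on a log scale for the lambda minimising neg_log_lik."""
    a, b = math.log(lo), math.log(hi)
    for _ in range(iters):
        m1, m2 = a + (b - a) / 3, b - (b - a) / 3
        if neg_log_lik(math.exp(m1), data) < neg_log_lik(math.exp(m2), data):
            b = m2
        else:
            a = m1
    return math.exp((a + b) / 2)

# tenfold dilutions, 5 aliquots each: all, 3 of 5, and none infectious
data = [(0.1, 5, 5), (0.01, 5, 3), (0.001, 5, 0)]
```

The estimate is driven mainly by the dilution level at which roughly half the aliquots are infectious; here `mle_lambda(data)` lands near 70-90 infectious units.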
Abstract:
A mapping between chains in the Protein Databank and Enzyme Classification numbers is invaluable for research into structure-function relationships. Mapping at the chain level is a non-trivial problem and we present an automatically updated Web-server, which provides this link in a queryable form and as a downloadable XML or flat file.
Abstract:
Numbers of leucocytes in squirrels with gametocytes of Hepatozoon in their blood (infected) were compared with animals without gametocytes (uninfected). Typical values for leucocytes/mm^3 blood in uninfected squirrels were: leucocytes 5.7 × 10^3, granulocytes 3.4 × 10^3, lymphocytes 2.0 × 10^3 and monocytes 0.3 × 10^3 cells. Infection caused an increase in monocytes, lymphocytes and granulocytes, and there was a significant positive association between parasitaemia level and numbers of both total leucocytes and monocytes. Infected animals had more uninfected monocytes/mm^3 blood than did uninfected animals. The proportions of monocytes were more variable over time in infected animals, but no shift between infected and uninfected status was detected. Transfer of serum from infected squirrels to mice resulted in elevated counts of total blood leucocytes, monocytes and granulocytes, but not of lymphocytes, as compared with controls. Serum from squirrels with high parasitaemias had a more marked effect than serum from squirrels with low parasitaemias. Results indicate an infection-related monocytosis, possibly controlled by cytokines, that increases the number of cells available for invasion by gametocytes, thus enhancing the chances of parasite transmission.
Abstract:
Optical density measurements were used to estimate the effect of heat treatments on the single-cell lag times of Listeria innocua fitted to a shifted gamma distribution. The single-cell lag time was subdivided into repair time (the shift of the distribution, assumed to be uniform for all cells) and adjustment time (varying randomly from cell to cell). After heat treatments in which all of the cells recovered (sublethal), the repair time and the mean and the variance of the single-cell adjustment time increased with the severity of the treatment. When the heat treatments resulted in a loss of viability (lethal), the repair time of the survivors increased with the decimal reduction of the cell numbers independently of the temperature, while the mean and variance of the single-cell adjustment times remained the same irrespective of the heat treatment. Based on these observations and modeling of the effect of time and temperature of the heat treatment, we propose that the severity of a heat treatment can be characterized by the repair time of the cells whether the heat treatment is lethal or not, an extension of the F value concept for sublethal heat treatments. In addition, the repair time could be interpreted as the extent or degree of injury with a multiple-hit lethality model. Another implication of these results is that the distribution of the time for cells to reach unacceptable numbers in food is not affected by the time-temperature combination resulting in a given decimal reduction.
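The decimal-reduction language used above builds on classical D-value arithmetic, which can be stated in two lines. This is the textbook relationship only, not the authors' repair-time model:

```python
def log_reduction(t, d_value):
    """Decimal (log10) reductions achieved by holding for time t at a
    temperature whose D-value (time for a tenfold drop in numbers) is d_value."""
    return t / d_value

def surviving_fraction(t, d_value):
    """Fraction of the initial cell population surviving the treatment."""
    return 10.0 ** (-t / d_value)

# e.g. 6 min at a temperature with D = 2 min gives a 3-log (99.9%) reduction
```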
Abstract:
The transreal numbers are a total number system in which every arithmetical operation is well defined everywhere. This has many benefits over the real numbers as a basis for computation and, possibly, for physical theories. We define the topology of the transreal numbers and show that it gives a more coherent interpretation of two's complement arithmetic than the conventional integer model. Trans-two's-complement arithmetic handles the infinities and 0/0 more coherently, and with very much less circuitry, than floating-point arithmetic. This reduction in circuitry is especially beneficial in parallel computers, such as the Perspex machine, and the increase in functionality makes Digital Signal Processing chips better suited to general computation.
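The totalised division described above can be sketched as follows. Note the stand-in: nullity (Φ) is modelled here as an IEEE NaN for convenience, although transreal Φ obeys its own algebraic rules rather than NaN semantics:

```python
import math

INF, NINF = math.inf, -math.inf
NULLITY = float('nan')   # stand-in for transreal Phi; see caveat above

def transreal_div(a, b):
    """Total division: x/0 is +inf or -inf by the sign of x, and 0/0 is nullity."""
    if b == 0:
        if a > 0:
            return INF
        if a < 0:
            return NINF
        return NULLITY   # 0/0 = Phi, defined rather than an error
    return a / b
```

Because every case returns a value, no divide-by-zero trap or exception path is needed, which is one source of the circuitry savings the abstract claims.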
Abstract:
A processing system comprises: input means arranged to receive at least one input group of bits representing at least one respective input number; output means arranged to output at least one output group of bits representing at least one respective output number; and processing means arranged to perform an operation on the at least one input group of bits to produce the at least one output group of bits such that the at least one output number is related to the at least one input number by a mathematical operation; and wherein each of the numbers can be any of a set of numbers which includes a series of numbers, positive infinity, negative infinity and nullity.
Abstract:
Objective: To examine the impact of increasing numbers of metabolic syndrome (MetS) components on postprandial lipaemia. Methods: Healthy men (n = 112) underwent a sequential meal postprandial investigation, in which blood samples were taken at regular intervals after a test breakfast (0 min) and lunch (330 min). Lipids and glucose were measured in the fasting sample, with triacylglycerol (TAG), non-esterified fatty acids and glucose analysed in the postprandial samples. Results: Subjects were grouped according to the number of MetS components regardless of the combinations of components (0/1, 2, 3 and 4/5). As expected, there was a trend for an increase in body mass index, blood pressure, fasting TAG, glucose and insulin, and a decrease in fasting high-density lipoprotein cholesterol with increasing numbers of MetS components (P≤0.0004). A similar trend was observed for the summary measures of the postprandial TAG and glucose responses. For TAG, the area under the curve (AUC) and maximum concentration (maxC) were significantly greater in men with ≥ 3 than < 3 components (P < 0.001), whereas incremental AUC was greater in those with 3 than 0/1 and 2, and 4/5 compared with 2 components (P < 0.04). For glucose, maxC after the test breakfast (0-330 min) and total AUC (0-480 min) were higher in men with ≥ 3 than < 3 components (P≤0.001). Conclusions: Our data analysis has revealed a linear trend between increasing numbers of MetS components and magnitude (AUC) of the postprandial TAG and glucose responses. Furthermore, the two meal challenge discriminated a worsening of postprandial lipaemic control in subjects with ≥ 3 MetS components.
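The AUC summary measures above are conventionally computed with the trapezoidal rule over the sampled concentration-time points. A minimal sketch, noting that "incremental AUC" conventions vary between studies (some clip negative segments; the simple baseline-subtracted form is used here):

```python
def auc_trapezoid(times, concs):
    """Total area under the concentration-time curve (trapezoidal rule)."""
    return sum((t1 - t0) * (c0 + c1) / 2
               for t0, t1, c0, c1 in zip(times, times[1:], concs, concs[1:]))

def incremental_auc(times, concs):
    """AUC above the fasting (t = 0) baseline, one common convention."""
    base = concs[0]
    return auc_trapezoid(times, [c - base for c in concs])

# e.g. TAG sampled at 0, 60 and 120 min
print(auc_trapezoid([0, 60, 120], [1.0, 2.0, 3.0]))   # → 240.0
```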
Abstract:
We investigate the super-Brownian motion with a single point source in dimensions 2 and 3 as constructed by Fleischmann and Mueller in 2004. Using analytic facts we derive the long time behavior of the mean in dimension 2 and 3 thereby complementing previous work of Fleischmann, Mueller and Vogt. Using spectral theory and martingale arguments we prove a version of the strong law of large numbers for the two dimensional superprocess with a single point source and finite variance.
Abstract:
This paper presents a software-based study of a hardware-oriented, non-sorting median calculation method for sets of integer numbers. The method divides the binary representation of each integer element in the set into bit slices in order to find the element located at the middle position. The method exhibits a linear complexity order, and our analysis shows that the best performance in execution time is obtained when 4-bit slices are used for 8-bit and 16-bit integers, for almost any data set size. Results suggest that a software implementation of the bit-slice method for median calculation outperforms sorting-based methods, with the improvement increasing for larger data set sizes. For data set sizes of N > 5, our simulations show an improvement of at least 40%.
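The bit-slice idea can be sketched as a radix-style selection: examine slices from the most significant downward, counting how many elements fall in each slice value to locate the slice containing the median, then keep only those candidates. This is our illustrative reading of the abstract; the paper's hardware method may differ in detail (here `bits` must be a multiple of `slice_width`):

```python
def median_bitslice(values, bits=16, slice_width=4):
    """Median (k-th smallest, k = n // 2) of unsigned integers, found without
    sorting by narrowing candidates on successive bit slices, MSB slice first."""
    k = len(values) // 2
    cand = list(values)
    mask = (1 << slice_width) - 1
    for shift in range(bits - slice_width, -1, -slice_width):
        # histogram the remaining candidates by the current slice value
        counts = [0] * (1 << slice_width)
        for v in cand:
            counts[(v >> shift) & mask] += 1
        # walk slice values in increasing order to find the median's slice
        for s, c in enumerate(counts):
            if k < c:
                cand = [v for v in cand if (v >> shift) & mask == s]
                break
            k -= c
    return cand[0]   # remaining candidates agree on every examined slice

print(median_bitslice([5, 1, 9, 3, 7]))   # → 5
```

Each pass touches every remaining candidate once, giving the linear complexity order the abstract reports, with the slice width trading histogram size against the number of passes.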