941 results for Folkman Numbers
Abstract:
Exercises, exams and solutions for a third year maths course.
Abstract:
ODE designed for fifth-year primary school pupils.
Abstract:
Offers photocopiable activities, games and puzzles for pupils at Key Stage 2 of the National Curriculum for England and Wales, i.e. at primary level. The activities are designed to develop children's understanding of and facility with numbers, and thus to provide a sound foundation for the development of their mathematical skills. They are grouped into three sections for the 7-9, 8-10 and 9-11 age groups.
Abstract:
Abstract taken partially from the journal. The article forms part of a monograph devoted to the Psychology of Mathematics.
Abstract:
Report of a systematic review of Mersenne numbers 2^p-1 for p < 62982.
Abstract:
These notes have been issued on a small scale in 1983 and 1987 and on request at other times. This issue follows two items of news. First, Walter Colquitt and Luther Welsh found the 'missed' Mersenne prime M110503 and advanced the frontier of complete Mp-testing to 139,267. In so doing, they terminated Slowinski's significant string of four consecutive Mersenne primes. Secondly, a team of five established a non-Mersenne number as the largest known prime. This result terminated the 1952-89 reign of Mersenne primes. All the original Mersenne numbers with p < 258 were factorised some time ago. The Sandia Laboratories team of Davis, Holdridge & Simmons, with some little assistance from a CRAY machine, cracked M211 in 1983 and M251 in 1984. They contributed their results to the 'Cunningham Project', care of Sam Wagstaff. That project is now moving apace thanks to developments in technology, factorisation and primality testing. New levels of computer power and new computer architectures motivated by the open-ended promise of parallelism are now available. Once again, the suppliers may be offering free buildings with the computer. However, the Sandia '84 CRAY-1 implementation of the quadratic-sieve method is now outpowered by the number-field sieve technique. This is deployed on either purpose-built hardware or large syndicates, even distributed world-wide, of collaborating standard processors. New factorisation techniques of both special and general applicability have been defined and deployed. The elliptic-curve method finds large factors with helpful properties, while the number-field sieve approach is breaking down composites with over one hundred digits. The material is updated on an occasional basis to follow the latest developments in primality-testing large Mp and factorising smaller Mp; all dates derive from the published literature or referenced private communications. Minor corrections, additions and changes merely advance the issue number after the decimal point.
The reader is invited to report any errors and omissions that have escaped the proof-reading, to answer the unresolved questions noted and to suggest additional material associated with this subject.
Abstract:
Reports the factor-filtering and primality-testing of Mersenne Numbers Mp for p < 100000, the latter using the ICL 'DAP' Distributed Array Processor.
Abstract:
This document provides and comments on the results of the Lucas-Lehmer testing and/or partial factorisation of all Mersenne Numbers Mp = 2^p-1 where p is prime and less than 100,000. Previous computations have either been confirmed or corrected. The LLT computations on the ICL DAP were the first implementation of Fast-Fermat-Number-Transform multiplication in connection with Mersenne Number testing. This paper championed the disciplines of systematically testing the Mp, and of double-sourcing results which were not manifestly correct. Both disciplines were adopted by the later GIMPS initiative, the 'Great Internet Mersenne Prime Search', which was itself one of the first web-based distributed-community projects.
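The Lucas-Lehmer test applied to each Mp can be sketched as below. This is a minimal illustration using Python's built-in big integers; the DAP implementation described in the abstract instead used Fast-Fermat-Number-Transform multiplication for the squaring step.

```python
def lucas_lehmer(p):
    """Lucas-Lehmer primality test for the Mersenne number M_p = 2^p - 1.

    For an odd prime p, M_p is prime iff s_{p-2} == 0 (mod M_p),
    where s_0 = 4 and s_{k+1} = s_k^2 - 2.
    """
    m = (1 << p) - 1          # M_p = 2^p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m   # square and reduce mod M_p
    return s == 0
```

For example, the test confirms M13 = 8191 prime while rejecting the composite M11 = 2047 = 23 × 89.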
Abstract:
Tomato plants inoculated with Meloidogyne javanica juveniles infected with Pasteuria penetrans were grown in a glasshouse (20-32°C) for 36, 53, 71 and 88 days and in a growth room (26-29°C) for 36, 53, 71 and 80 days. Over these periods the numbers of P. penetrans endospores in infected M. javanica females and the weights of individual infected females increased. In the growth room, most spores (2.03 × 10^6) were found after 71 days. However, in the glasshouse the rate of increase was slower and spore numbers were still increasing at the final sampling at 88 days (2.04 × 10^6), as was the weight of the nematodes (72 µg). Weights of uninfected females reached a maximum of 36.2 and 43.1 µg after 71 days in the growth room and glasshouse, respectively.
Abstract:
The paper concerns the design and analysis of serial dilution assays to estimate the infectivity of a sample of tissue when it is assumed that the sample contains a finite number of indivisible infectious units such that a subsample will be infectious if it contains one or more of these units. The aim of the study is to estimate the number of infectious units in the original sample. The standard approach to the analysis of data from such a study is based on the assumption of independence of aliquots both at the same dilution level and at different dilution levels, so that the numbers of infectious units in the aliquots follow independent Poisson distributions. An alternative approach is based on calculation of the expected value of the total number of samples tested that are not infectious. We derive the likelihood for the data on the basis of the discrete number of infectious units, enabling calculation of the maximum likelihood estimate and likelihood-based confidence intervals. We use the exact probabilities that are obtained to compare the maximum likelihood estimate with those given by the other methods in terms of bias and standard error and to compare the coverage of the confidence intervals. We show that the methods have very similar properties and conclude that for practical use the method that is based on the Poisson assumption is to be recommended, since it can be implemented by using standard statistical software. Finally we consider the design of serial dilution assays, concluding that it is important that neither the dilution factor nor the number of samples that remain untested should be too large.
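The Poisson-based analysis recommended above can be sketched as follows. The `levels` encoding (fraction of the sample per aliquot, positive count, negative count) and the crude grid-search maximiser are illustrative assumptions, not the paper's actual procedure, which would use a proper optimiser in standard statistical software.

```python
import math

def dilution_loglik(mu, levels):
    """Log-likelihood of positive/negative aliquot counts under the Poisson
    assumption: an aliquot receiving fraction f of a sample containing mu
    infectious units is negative with probability exp(-mu * f)."""
    ll = 0.0
    for f, n_pos, n_neg in levels:
        p_neg = math.exp(-mu * f)
        ll += n_neg * math.log(p_neg) + n_pos * math.log(1.0 - p_neg)
    return ll

def dilution_mle(levels, mu_grid=None):
    """Grid-search maximiser for mu (a stand-in for a proper optimiser)."""
    if mu_grid is None:
        mu_grid = [0.5 * k for k in range(1, 2001)]  # mu in (0, 1000]
    return max(mu_grid, key=lambda mu: dilution_loglik(mu, levels))
```

As a sanity check, with a single dilution level where half of the aliquots are infectious, the estimate solves exp(-mu·f) = 1/2, i.e. mu = ln 2 / f.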