36 results for Prime numbers
Abstract:
Prebiotics are nondigestible food ingredients that encourage proliferation of selected groups of the colonic microflora, thereby altering the composition toward a more beneficial community. In the present study, the prebiotic potential of a novel galactooligosaccharide (GOS) mixture, produced by the activity of galactosyltransferases from Bifidobacterium bifidum 41171 on lactose, was assessed in vitro and in a parallel continuous randomized pig trial. Fluorescent in situ hybridization with 16S rRNA-targeted probes was used to investigate changes in total bacteria, bifidobacteria, lactobacilli, bacteroides, and the Clostridium histolyticum group in response to supplementing the novel GOS mixture. In a 3-stage continuous culture system, the bifidobacterial numbers for the first 2 vessels, which represented the proximal and transverse colon, increased (P < 0.05) after the addition of the oligosaccharide mixture. In addition, the oligosaccharide mixture strongly inhibited the attachment of enteropathogenic Escherichia coli (P < 0.01) and Salmonella enterica serotype Typhimurium (P < 0.01) to HT29 cells. Addition of the novel mixture at 4% (wt:wt) to a commercial diet increased the density of bifidobacteria (P < 0.001) and the acetate concentration (P < 0.001), and decreased the pH (P < 0.001) compared with the control diet and the control diet supplemented with inulin, suggesting a great prebiotic potential for the novel oligosaccharide mixture. J. Nutr. 135: 1726-1731, 2005.
Abstract:
Three experiments investigated the influence of implicit memory for familiar brand names on consumer choice. Priming was measured using modified preference judgment tasks that comprised both brand consideration and choice components. Experiment 1 used a 'complex choice task' in which the consideration and choice stages were characterized as acting in sequence. Experiment 2 explored a different formulation whereby consideration and choice were assumed to act in parallel. Both experiments demonstrated that priming had an influence on brand consideration but not on final or preferred choice. Finally, Experiment 3 replicated and extended these findings under more realistic conditions where participants actually received some of the products that they selected. Overall, the experiments suggested that for many decisions involving the consideration of familiar brands prior to choice, previous exposure to brand names can increase the likelihood that they will enter the consumers' consideration set. However, the advantage does not appear to extend to choice itself. Copyright (C) 2004 John Wiley & Sons, Ltd.
Abstract:
The transreal numbers are a total number system in which every arithmetical operation is well defined everywhere. This has many benefits over the real numbers as a basis for computation and, possibly, for physical theories. We define the topology of the transreal numbers and show that it gives a more coherent interpretation of two's complement arithmetic than the conventional integer model. Trans-two's-complement arithmetic handles the infinities and 0/0 more coherently, and with very much less circuitry, than floating-point arithmetic. This reduction in circuitry is especially beneficial in parallel computers, such as the Perspex machine, and the increase in functionality makes Digital Signal Processing chips better suited to general computation.
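To make the totality claim concrete, here is a minimal Python sketch of transreal-style division, in which 0/0 is assigned a dedicated value (nullity) rather than raising an error. The sentinel name PHI and the float-based representation are illustrative choices; the paper's trans-two's-complement encoding is not reproduced here.

```python
# Minimal illustration of totalised (transreal-style) division.
# Values: ordinary floats, +inf, -inf, and a distinct "nullity" sentinel.
# The sentinel name PHI is our choice, not the paper's notation.
import math

PHI = object()  # nullity: the value assigned to 0/0

def trans_div(a, b):
    """Division that is defined for every pair of inputs."""
    if a is PHI or b is PHI:
        return PHI                      # nullity absorbs all operations
    if b == 0.0:
        if a == 0.0:
            return PHI                  # 0/0 = nullity, not an error
        return math.inf if a > 0 else -math.inf
    return a / b

# Every call returns a value; none raises ZeroDivisionError.
print(trans_div(1.0, 0.0))          # inf
print(trans_div(-1.0, 0.0))         # -inf
print(trans_div(0.0, 0.0) is PHI)   # True
```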
Abstract:
A processing system comprises: input means arranged to receive at least one input group of bits representing at least one respective input number; output means arranged to output at least one output group of bits representing at least one respective output number; and processing means arranged to perform an operation on the at least one input group of bits to produce the at least one output group of bits such that the at least one output number is related to the at least one input number by a mathematical operation; and wherein each of the numbers can be any of a set of numbers which includes a series of numbers, positive infinity, negative infinity and nullity.
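As a purely illustrative sketch (the claim above fixes no particular bit layout), one way to realise a bit-group representation of a number set containing a series of numbers plus positive infinity, negative infinity and nullity is to reserve dedicated bit patterns for the three special values. The patterns below are hypothetical and are not taken from the patent.

```python
# Hypothetical 8-bit encoding of a number set containing ordinary
# integers plus +infinity, -infinity and nullity. The reserved bit
# patterns below are our own choice for illustration; the patent's
# actual encoding may differ.
POS_INF = 0b01111111   # reserved pattern for positive infinity
NEG_INF = 0b10000001   # reserved pattern for negative infinity
NULLITY = 0b10000000   # reserved pattern for nullity

def decode(bits):
    """Map an 8-bit group onto the extended number set."""
    if bits == POS_INF:
        return float("inf")
    if bits == NEG_INF:
        return float("-inf")
    if bits == NULLITY:
        return "nullity"
    # Remaining patterns: ordinary two's-complement integers.
    return bits - 256 if bits & 0x80 else bits
```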
Abstract:
The major component of skeletal muscle is the myofibre. Genetic intervention inducing over-enlargement of myofibres beyond a certain threshold through acellular growth causes a reduction in the specific tension-generating capacity of the muscle. However, the physiological parameters of a genetic model that harbours reduced skeletal muscle mass have yet to be analysed. Genetic deletion of Meox2 in mice leads to reduced limb muscle size and causes some patterning defects. The loss of Meox2 is not embryonically lethal, and a small percentage of animals survive to adulthood, making it an excellent model with which to investigate how skeletal muscle responds to reductions in mass. In this study we have performed a detailed analysis of both late foetal and adult muscle development in the absence of Meox2. In the adult, we show that the loss of Meox2 results in smaller limb muscles that harbour reduced numbers of myofibres. However, these fibres are enlarged. These myofibres display a molecular and metabolic fibre-type switch towards a more oxidative phenotype that is induced through abnormalities in foetal fibre formation. In spite of these changes, the muscle from Meox2 mutant mice is able to generate increased levels of specific tension compared with that of the wild type.
Abstract:
We study generalised prime systems (both discrete and continuous) for which the 'integer counting function' N(x) has the property that N(x) − cx is periodic for some c > 0. We show that this is extremely rare. In particular, we show that the only such system for which N is continuous is the trivial system with N(x) − cx constant, while if N has finitely many discontinuities per bounded interval, then N must be the counting function of the g-prime system containing the usual primes except for finitely many. Keywords and phrases: Generalised prime systems.
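Concretely, the periodicity hypothesis is a direct unfolding of the definition: N(x) − cx is periodic with period T > 0 exactly when, for all x > 0,

```latex
N(x+T) - c\,(x+T) \;=\; N(x) - c\,x
\quad\Longleftrightarrow\quad
N(x+T) \;=\; N(x) + c\,T ,
```

that is, the g-integer count grows by exactly cT over every window of length T.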
Abstract:
Objective: To examine the impact of increasing numbers of metabolic syndrome (MetS) components on postprandial lipaemia.
Methods: Healthy men (n = 112) underwent a sequential meal postprandial investigation, in which blood samples were taken at regular intervals after a test breakfast (0 min) and lunch (330 min). Lipids and glucose were measured in the fasting sample, with triacylglycerol (TAG), non-esterified fatty acids and glucose analysed in the postprandial samples.
Results: Subjects were grouped according to the number of MetS components regardless of the combinations of components (0/1, 2, 3 and 4/5). As expected, there was a trend for an increase in body mass index, blood pressure, fasting TAG, glucose and insulin, and a decrease in fasting high-density lipoprotein cholesterol with increasing numbers of MetS components (P ≤ 0.0004). A similar trend was observed for the summary measures of the postprandial TAG and glucose responses. For TAG, the area under the curve (AUC) and maximum concentration (maxC) were significantly greater in men with ≥ 3 than < 3 components (P < 0.001), whereas incremental AUC was greater in those with 3 than with 0/1 and 2 components, and in those with 4/5 than with 2 components (P < 0.04). For glucose, maxC after the test breakfast (0-330 min) and total AUC (0-480 min) were higher in men with ≥ 3 than < 3 components (P ≤ 0.001).
Conclusions: Our data analysis revealed a linear trend between increasing numbers of MetS components and the magnitude (AUC) of the postprandial TAG and glucose responses. Furthermore, the two-meal challenge discriminated a worsening of postprandial lipaemic control in subjects with ≥ 3 MetS components.
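For readers unfamiliar with the summary measures named above, the sketch below computes total AUC, incremental AUC and maxC from a response curve with the trapezoidal rule. The sample times and TAG values are invented for illustration and are not data from the study; treating incremental AUC as the area above the fasting baseline is one common definition, assumed here.

```python
# Trapezoidal summary measures for a postprandial response curve.
# The times and TAG concentrations below are invented for illustration;
# they are not data from the study.
import numpy as np

t = np.array([0, 60, 120, 180, 240, 330, 420, 480])       # minutes
tag = np.array([1.1, 1.5, 2.0, 2.4, 2.2, 1.9, 1.6, 1.3])  # mmol/L

auc = np.trapz(tag, t)                               # total area under the curve
iauc = np.trapz(np.clip(tag - tag[0], 0, None), t)   # area above fasting baseline
maxc = tag.max()                                     # maximum concentration

print(f"AUC = {auc:.0f} mmol/L*min, iAUC = {iauc:.0f}, maxC = {maxc:.1f}")
```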
Abstract:
We investigate the super-Brownian motion with a single point source in dimensions 2 and 3 as constructed by Fleischmann and Mueller in 2004. Using analytic facts we derive the long-time behavior of the mean in dimensions 2 and 3, thereby complementing previous work of Fleischmann, Mueller and Vogt. Using spectral theory and martingale arguments we prove a version of the strong law of large numbers for the two-dimensional superprocess with a single point source and finite variance.
Abstract:
Systematic climate shifts have been linked to multidecadal variability in observed sea surface temperatures in the North Atlantic Ocean [1]. These links are extensive, influencing a range of climate processes such as hurricane activity [2] and African Sahel [3, 4, 5] and Amazonian [5] droughts. The variability is distinct from historical global-mean temperature changes and is commonly attributed to natural ocean oscillations [6, 7, 8, 9, 10]. A number of studies have provided evidence that aerosols can influence long-term changes in sea surface temperatures [11, 12], but climate models have so far failed to reproduce these interactions [6, 9] and the role of aerosols in decadal variability remains unclear. Here we use a state-of-the-art Earth system climate model to show that aerosol emissions and periods of volcanic activity explain 76 per cent of the simulated multidecadal variance in detrended 1860–2005 North Atlantic sea surface temperatures. After 1950, simulated variability is within observational estimates; our estimates for 1910–1940 capture twice the warming of previous generation models but do not explain the entire observed trend. Other processes, such as ocean circulation, may also have contributed to variability in the early twentieth century. Mechanistically, we find that inclusion of aerosol–cloud microphysical effects, which were included in few previous multimodel ensembles, dominates the magnitude (80 per cent) and the spatial pattern of the total surface aerosol forcing in the North Atlantic. Our findings suggest that anthropogenic aerosol emissions influenced a range of societally important historical climate events such as peaks in hurricane activity and Sahel drought. Decadal-scale model predictions of regional Atlantic climate will probably be improved by incorporating aerosol–cloud microphysical interactions and estimates of future concentrations of aerosols, emissions of which are directly addressable by policy actions.
Abstract:
This paper presents a software-based study of a hardware-based non-sorting median calculation method on a set of integer numbers. The method divides the binary representation of each integer element in the set into bit slices in order to find the element located in the middle position. The method exhibits a linear complexity order, and our analysis shows that the best execution-time performance is obtained when 4-bit slices are used for 8-bit and 16-bit integers, for almost any data set size. Results suggest that a software implementation of the bit-slice method for median calculation outperforms sorting-based methods, with the improvement increasing for larger data set sizes. For data set sizes of N > 5, our simulations show an improvement of at least 40%.
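The sketch below is a minimal software illustration of the underlying bit-slice (radix) selection idea, not the authors' implementation: the function name, the slice handling and the choice of the lower median are assumptions. It finds the median of non-negative integers by repeatedly bucketing the remaining candidates on successive bit slices, from most to least significant.

```python
# Software sketch of median selection by bit slices (radix select).
# Illustrative only: not the paper's hardware design.

def bitslice_median(values, bit_width=16, slice_bits=4):
    """Return the lower median of a non-empty list of non-negative ints.

    Each element is examined once per slice, so the cost is
    O(n * bit_width / slice_bits) -- linear in n for fixed widths.
    Assumes bit_width is a multiple of slice_bits.
    """
    target = (len(values) - 1) // 2        # rank of the lower median
    candidates = list(values)
    mask = (1 << slice_bits) - 1
    for shift in range(bit_width - slice_bits, -1, -slice_bits):
        # Bucket the remaining candidates by the current bit slice.
        buckets = [[] for _ in range(1 << slice_bits)]
        for v in candidates:
            buckets[(v >> shift) & mask].append(v)
        # Walk buckets in ascending order to find the one holding the rank.
        for bucket in buckets:
            if target < len(bucket):
                candidates = bucket
                break
            target -= len(bucket)
    return candidates[0]   # all survivors share every slice, i.e. are equal

print(bitslice_median([7, 3, 9, 1, 4], bit_width=8))   # -> 4
```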