48 results for exponential
Abstract:
This article introduces a new general method for genealogical inference that samples independent genealogical histories using importance sampling (IS) and then samples other parameters with Markov chain Monte Carlo (MCMC). It is then possible to more easily utilize the advantages of importance sampling in a fully Bayesian framework. The method is applied to the problem of estimating recent changes in effective population size from temporally spaced gene frequency data. The method gives the posterior distribution of effective population size at the time of the oldest sample and at the time of the most recent sample, assuming a model of exponential growth or decline during the interval. The effect of changes in number of alleles, number of loci, and sample size on the accuracy of the method is described using test simulations, and it is concluded that these have an approximately equivalent effect. The method is used on three example data sets and problems in interpreting the posterior densities are highlighted and discussed.
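The importance-sampling half of such a scheme can be illustrated with a generic self-normalised importance sampler (a hypothetical toy, not the paper's genealogical sampler): draws come from a tractable proposal, weights are target/proposal density ratios, and posterior expectations are weight-normalised averages.

```python
import math
import random

random.seed(0)

def importance_sample_mean(target_logpdf, proposal_sample, proposal_logpdf, n=100_000):
    """Self-normalised importance-sampling estimate of E[x] under the target."""
    xs = [proposal_sample() for _ in range(n)]
    logw = [target_logpdf(x) - proposal_logpdf(x) for x in xs]
    m = max(logw)
    w = [math.exp(lw - m) for lw in logw]  # subtract max for numerical stability
    return sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)

# Toy target: standard normal (log-density up to a constant);
# proposal: wider normal with sd = 2, so its tails cover the target's.
target_logpdf = lambda x: -0.5 * x * x
proposal_logpdf = lambda x: -0.125 * x * x - math.log(2.0)
est = importance_sample_mean(target_logpdf,
                             lambda: random.gauss(0.0, 2.0),
                             proposal_logpdf)
# est is close to 0, the mean of the target
```

Normalising constants cancel in the self-normalised estimator, which is why unnormalised log-densities suffice.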
Abstract:
Microsatellites are widely used in genetic analyses, many of which require reliable estimates of microsatellite mutation rates, yet the factors determining mutation rates are uncertain. The most straightforward and conclusive method by which to study mutation is direct observation of allele transmissions in parent-child pairs, and studies of this type suggest a positive, possibly exponential, relationship between mutation rate and allele size, together with a bias toward length increase. Except for microsatellites on the Y chromosome, however, previous analyses have not made full use of available data and may have introduced bias: mutations have been identified only where child genotypes could not be generated by transmission from parents' genotypes, so that the probability that a mutation is detected depends on the distribution of allele lengths and varies with allele length. We introduce a likelihood-based approach that has two key advantages over existing methods. First, we can make formal comparisons between competing models of microsatellite evolution; second, we obtain asymptotically unbiased and efficient parameter estimates. Application to data composed of 118,866 parent-offspring transmissions of AC microsatellites supports the hypothesis that mutation rate increases exponentially with microsatellite length, with a suggestion that contractions become more likely than expansions as length increases. This would lead to a stationary distribution for allele length maintained by mutational balance. There is no evidence that contractions and expansions differ in their step size distributions.
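The exponential length dependence can be sketched as a toy Bernoulli likelihood. This is an illustrative model only: the parameter names `a` (baseline rate) and `b` (length effect) and the toy data are invented, not taken from the paper.

```python
import math

def mutation_rate(length, a, b):
    """Exponential length dependence: mu(L) = a * exp(b * L)."""
    return a * math.exp(b * length)

def log_likelihood(transmissions, a, b):
    """transmissions: (allele_length, mutated) pairs; Bernoulli log-likelihood."""
    ll = 0.0
    for length, mutated in transmissions:
        mu = mutation_rate(length, a, b)
        ll += math.log(mu) if mutated else math.log1p(-mu)
    return ll

# Toy data: one observed mutation among four transmissions
data = [(10, False), (20, False), (30, True), (25, False)]
ll = log_likelihood(data, a=1e-4, b=0.1)
```

Maximising such a likelihood over `(a, b)`, and comparing it against a length-independent alternative, is the kind of formal model comparison the abstract describes.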
Abstract:
The role of ribosome modulation factor (RMF) in protecting heat-stressed Escherichia coli cells was identified by the observation that cultures of a mutant strain lacking functional RMF (HMY15) were highly heat sensitive in stationary phase compared to those of the parent strain (W3110). No difference in heat sensitivity was observed between these strains in exponential phase, during which RMF is not synthesised. Studies by differential scanning calorimetry demonstrated that the ribosomes of stationary-phase cultures of the mutant strain had lower thermal stability than those of the parent strain in stationary phase, or exponential-phase ribosomes. More rapid breakdown of ribosomes in the mutant strain during heating was confirmed by rRNA analysis and sucrose density gradient centrifugation. Analyses of ribosome composition showed that the 100S dimers dissociated more rapidly during heating than 70S particles. Thus, although ribosome dimerisation is a consequence of the conformational changes caused by RMF binding, it may not be essential for RMF-mediated ribosome stabilisation.
Abstract:
Ribosome modulation factor (RMF) was shown to have an influence on the survival of Escherichia coli under acid stress during stationary phase, since the viability of cultures of a mutant strain lacking functional RMF decreased more rapidly than that of the parent strain at pH 3. Loss of ribosomes was observed in both strains when exposed to low pH, although this occurred at a higher rate in the RMF-deficient mutant strain, which also suffered from higher levels of rRNA degradation. It was concluded that the action of RMF in limiting the damage to rRNA contributed to the protection of E. coli under acid stress. Expression of the rmf gene was lower during stationary phase after growth in acidified media compared to media containing no added acid, and the increased rmf expression associated with transition from exponential phase to stationary phase was much reduced in acidified media. It was demonstrated that RMF was not involved in the stationary-phase acid-tolerance response in E. coli by which growth under acidic conditions confers protection against subsequent acid shock. This response was sufficient to overcome the increased vulnerability of the RMF-deficient mutant strain to acid stress at pH values between 6.5 and 5.5.
The activity of ribosome modulation factor during growth of Escherichia coli under acidic conditions
Abstract:
Expression of the gene encoding ribosome modulation factor (RMF), as measured using an rmf-lacZ gene fusion, increased with decreasing pH in exponential phase cultures of Escherichia coli. Expression was inversely proportional to the growth rate and independent of the acidifying agent used and it was concluded that expression of rmf was growth rate controlled in exponential phase under acid conditions. Increased rmf expression during exponential phase was not accompanied by the formation of ribosome dimers as occurs during stationary phase. Nor did it appear to have a significant effect on cell survival under acid stress since the vulnerability of an RMF-deficient mutant strain was similar to that of the parent strain. Ribosome degradation was increased in the mutant strain compared to the parent strain at pH 3.75. Also, the peptide elongation rate was reduced in the mutant strain but not the parent during growth under acid conditions. It is speculated that the function of RMF during stress-induced reduction in growth rate is two-fold: firstly to prevent reduced elongation efficiency by inactivating surplus ribosomes and thus limiting competition for available protein synthesis factors, and secondly to protect inactivated ribosomes from degradation.
Abstract:
G-protein-coupled receptors are desensitized by a two-step process. In a first step, G-protein-coupled receptor kinases (GRKs) phosphorylate agonist-activated receptors that subsequently bind to a second class of proteins, the arrestins. GRKs can be classified into three subfamilies, which have been implicated in various diseases. The physiological role(s) of GRKs have been difficult to study as selective inhibitors are not available. We have used SELEX (systematic evolution of ligands by exponential enrichment) to develop RNA aptamers that potently and selectively inhibit GRK2. This process has yielded an aptamer, C13, which bound to GRK2 with a high affinity and inhibited GRK2-catalyzed rhodopsin phosphorylation with an IC50 of 4.1 nM. Phosphorylation of rhodopsin catalyzed by GRK5 was also inhibited, albeit with 20-fold lower potency (IC50 of 79 nM). Furthermore, C13 reveals significant specificity, since almost no inhibitory activity was detectable testing it against a panel of 14 other kinases. The aptamer is two orders of magnitude more potent than the best GRK2 inhibitors described previously and shows high selectivity for the GRK family of protein kinases.
Abstract:
Rationale: Central cannabinoid systems have been implicated in appetite control through the respective hyperphagic and anorectic actions of CB1 agonists and antagonists. The motivational changes underlying these actions remain to be determined, but may involve alterations to food palatability. Objectives: The mode of action of cannabinoids on ingestion was investigated by examining the effects of exogenous and endogenous agonists, and a selective CB1 receptor antagonist, on licking microstructure in rats ingesting a palatable sucrose solution. Methods: Microstructural analyses of licking for a 10% sucrose solution were performed over a range of agonist and antagonist doses administered to non-deprived, male Lister hooded rats. Results: Delta(9)-tetrahydrocannabinol (0.5, 1 and 3 mg/kg) and anandamide (1 mg/kg and 3 mg/kg) significantly increased the total number of licks. This was primarily due to an increase in bout duration rather than bout number. There was a nonsignificant increase in total licks following administration of 2-arachidonoyl glycerol (0.2, 1.0 and 2.0 mg/kg), whereas administration of the CB1 antagonist SR141716 (1 mg/kg and 3 mg/kg) significantly decreased total licks. All drugs, with the exception of anandamide, significantly decreased the intra-bout lick rate. An exponential function fitted to the cumulative lick rate curves for each drug revealed that all compounds altered the asymptote of this function without having any marked effects on the exponent. Conclusions: These data are consistent with endocannabinoid involvement in the mediation of food palatability.
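The asymptote-versus-exponent distinction can be sketched with a saturating exponential fitted to cumulative lick counts. The model form is standard for such curves, but the session parameters and data below are invented for illustration.

```python
import math

def cumulative_licks(t, asymptote, rate):
    """Saturating exponential: licks(t) = A * (1 - exp(-k * t))."""
    return asymptote * (1.0 - math.exp(-rate * t))

def fit_asymptote(times, licks, rate):
    """Least-squares estimate of A with the exponent k held fixed."""
    f = [1.0 - math.exp(-rate * t) for t in times]
    return sum(fi * yi for fi, yi in zip(f, licks)) / sum(fi * fi for fi in f)

# Hypothetical sessions: the drug shifts the asymptote A but not the exponent k
times = [60 * i for i in range(1, 11)]                        # seconds
control = [cumulative_licks(t, 3000.0, 0.004) for t in times]
drug = [cumulative_licks(t, 4200.0, 0.004) for t in times]

a_hat = fit_asymptote(times, drug, rate=0.004)  # recovers ~4200
```

A drug that changes only `A` raises the whole curve toward a higher ceiling without changing how quickly it saturates, which is the pattern the abstract reports.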
Abstract:
We introduce transreal analysis as a generalisation of real analysis. We find that the generalisation of the real exponential and logarithmic functions is well defined for all transreal numbers. Hence, we derive well defined values of all transreal powers of all non-negative transreal numbers. In particular, we find a well defined value for zero to the power of zero. We also note that the computation of products via the transreal logarithm is identical to the transreal product, as expected. We then generalise all of the common, real, trigonometric functions to transreal functions and show that transreal (sin x)/x is well defined everywhere. This raises the possibility that transreal analysis is total, in other words, that every function and every limit is everywhere well defined. If so, transreal analysis should be an adequate mathematical basis for analysing the perspex machine - a theoretical, super-Turing machine that operates on a total geometry. We go on to dispel all of the standard counter "proofs" that purport to show that division by zero is impossible. This is done simply by carrying the proof through in transreal arithmetic or transreal analysis. We find that either the supposed counter proof has no content or else that it supports the contention that division by zero is possible. The supposed counter proofs rely on extending the standard systems in arbitrary and inconsistent ways and then showing, tautologously, that the chosen extensions are not consistent. This shows only that the chosen extensions are inconsistent and does not bear on the question of whether division by zero is logically possible. By contrast, transreal arithmetic is total and consistent so it defeats any possible "straw man" argument. Finally, we show how to arrange that a function has finite or else unmeasurable (nullity) values, but no infinite values. This arithmetical arrangement might prove useful in mathematical physics because it outlaws naked singularities in all equations.
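The totality claim for division can be sketched as a small total function. This follows the transreal conventions for division by zero (positive/0 gives infinity, negative/0 gives negative infinity, 0/0 gives nullity); using `float('nan')` as a stand-in for nullity is my own representational choice, not part of transreal arithmetic.

```python
import math

INF = float('inf')
NULLITY = float('nan')  # stand-in for the transreal nullity, Phi

def transreal_div(x, y):
    """Total division following the transreal conventions: never raises."""
    if y != 0:
        return x / y
    if x > 0:
        return INF       # x / 0 = +infinity for positive x
    if x < 0:
        return -INF      # x / 0 = -infinity for negative x
    return NULLITY       # 0 / 0 = Phi (nullity)
```

Unlike ordinary float division, this function is defined for every pair of inputs, which is the sense in which transreal arithmetic is "total".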
Abstract:
Aims: To study the development of resistance responses in Campylobacter jejuni to High Hydrostatic Pressure (HHP) treatments after exposure to different stressful conditions that may be encountered in food processing environments, such as acid pH, elevated temperatures and cold storage. Methods and Results: C. jejuni cells in exponential and stationary growth phase were exposed to different sublethal stresses (acid, heat and cold shocks) before evaluating the development of resistance responses to HHP. For exponential-phase cells, none of the conditions tested either increased or decreased HHP resistance of C. jejuni. For stationary-phase cells, acid and heat adaptation sensitized C. jejuni cells to the subsequent pressure treatment. On the contrary, cold-adapted stationary-phase cells developed resistance to HHP. Conclusions: Whereas C. jejuni can be classified as a stress-sensitive microorganism, our findings have demonstrated that it can develop resistance responses under different stress conditions. The resistance of stationary-phase C. jejuni to HHP was increased after cells were exposed to cold temperatures. Significance and Impact of the Study: The results of this study contribute to a better knowledge of the physiology of C. jejuni and its survival when exposed to food preservation agents. The results presented here may help in the design of combined processes for food preservation based on HHP technology.
Abstract:
A poor representation of cloud structure in a general circulation model (GCM) is widely recognised as a potential source of error in the radiation budget. Here, we develop a new way of representing both horizontal and vertical cloud structure in a radiation scheme. This combines the ‘Tripleclouds’ parametrization, which introduces inhomogeneity by using two cloudy regions in each layer as opposed to one, each with different water content values, with ‘exponential-random’ overlap, in which clouds in adjacent layers are not overlapped maximally, but according to a vertical decorrelation scale. This paper, Part I of two, aims to parametrize the two effects such that they can be used in a GCM. To achieve this, we first review a number of studies for a globally applicable value of fractional standard deviation of water content for use in Tripleclouds. We obtain a value of 0.75 ± 0.18 from a variety of different types of observations, with no apparent dependence on cloud type or gridbox size. Then, through a second short review, we create a parametrization of decorrelation scale for use in exponential-random overlap, which varies the scale linearly with latitude from 2.9 km at the Equator to 0.4 km at the poles. When applied to radar data, both components are found to have radiative impacts capable of offsetting biases caused by cloud misrepresentation. Part II of this paper implements Tripleclouds and exponential-random overlap into a radiation code and examines both their individual and combined impacts on the global radiation budget using re-analysis data.
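The latitude dependence of the decorrelation scale, and how such a scale enters exponential-random overlap, can be sketched directly from the stated endpoints (2.9 km at the Equator, 0.4 km at the poles). The linear interpolation and the overlap-parameter form below are assumptions for illustration, not the paper's exact formulation.

```python
import math

def decorrelation_scale_km(latitude_deg):
    """Linear in latitude: 2.9 km at the Equator down to 0.4 km at the poles."""
    return 2.9 + (0.4 - 2.9) * abs(latitude_deg) / 90.0

def overlap_parameter(layer_separation_km, latitude_deg):
    """Exponential-random overlap: alpha = exp(-dz / z0) blends maximum
    overlap (alpha = 1) with random overlap (alpha = 0)."""
    return math.exp(-layer_separation_km / decorrelation_scale_km(latitude_deg))
```

With a smaller decorrelation scale at high latitudes, clouds in adjacent layers decorrelate over shorter vertical distances, so overlap there is closer to random.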
Abstract:
Reliably representing both horizontal cloud inhomogeneity and vertical cloud overlap is fundamentally important for the radiation budget of a general circulation model. Here, we build on the work of Part One of this two-part paper by applying a pair of parameterisations that account for horizontal inhomogeneity and vertical overlap to global re-analysis data. These are applied both together and separately in an attempt to quantify the effects of poor representation of the two components on radiation budget. Horizontal inhomogeneity is accounted for using the “Tripleclouds” scheme, which uses two regions of cloud in each layer of a gridbox as opposed to one; vertical overlap is accounted for using “exponential-random” overlap, which aligns vertically continuous cloud according to a decorrelation height. These are applied to a sample of scenes from a year of ERA-40 data. The largest radiative effect of horizontal inhomogeneity is found to be in areas of marine stratocumulus; the effect of vertical overlap is found to be fairly uniform, but with larger individual short-wave and long-wave effects in areas of deep, tropical convection. The combined effect of the two parameterisations is found to reduce the magnitude of the net top-of-atmosphere cloud radiative forcing (CRF) by 2.25 W m−2, with shifts of up to 10 W m−2 in areas of marine stratocumulus. The effects of the uncertainty in our parameterisations on radiation budget are also investigated. It is found that the uncertainty in the impact of horizontal inhomogeneity is of order ±60%, while the uncertainty in the impact of vertical overlap is much smaller. This suggests an insensitivity of the radiation budget to the exact nature of the global decorrelation height distribution derived in Part One.
Abstract:
In recent years nonpolynomial finite element methods have received increasing attention for the efficient solution of wave problems. As with their close cousin the method of particular solutions, high efficiency comes from using solutions to the Helmholtz equation as basis functions. We present and analyze such a method for the scattering of two-dimensional scalar waves from a polygonal domain that achieves exponential convergence purely by increasing the number of basis functions in each element. Key ingredients are the use of basis functions that capture the singularities at corners and the representation of the scattered field towards infinity by a combination of fundamental solutions. The solution is obtained by minimizing a least-squares functional, which we discretize in such a way that a matrix least-squares problem is obtained. We give computable exponential bounds on the rate of convergence of the least-squares functional that are in very good agreement with the observed numerical convergence. Challenging numerical examples, including a nonconvex polygon with several corner singularities, and a cavity domain, are solved to around 10 digits of accuracy with a few seconds of CPU time. The examples are implemented concisely with MPSpack, a MATLAB toolbox for wave computations with nonpolynomial basis functions, developed by the authors. A code example is included.
Abstract:
We consider the classical coupled, combined-field integral equation formulations for time-harmonic acoustic scattering by a sound soft bounded obstacle. In recent work, we have proved lower and upper bounds on the $L^2$ condition numbers for these formulations, and also on the norms of the classical acoustic single- and double-layer potential operators. These bounds to some extent make explicit the dependence of condition numbers on the wave number $k$, the geometry of the scatterer, and the coupling parameter. For example, with the usual choice of coupling parameter they show that, while the condition number grows like $k^{1/3}$ as $k\to\infty$, when the scatterer is a circle or sphere, it can grow as fast as $k^{7/5}$ for a class of `trapping' obstacles. In this paper we prove further bounds, sharpening and extending our previous results. In particular we show that there exist trapping obstacles for which the condition numbers grow as fast as $\exp(\gamma k)$, for some $\gamma>0$, as $k\to\infty$ through some sequence. This result depends on exponential localisation bounds on Laplace eigenfunctions in an ellipse that we prove in the appendix. We also clarify the correct choice of coupling parameter in 2D for low $k$. In the second part of the paper we focus on the boundary element discretisation of these operators. We discuss the extent to which the bounds on the continuous operators are also satisfied by their discrete counterparts and, via numerical experiments, we provide supporting evidence for some of the theoretical results, both quantitative and asymptotic, indicating further which of the upper and lower bounds may be sharper.
Abstract:
The extinction of dinosaurs at the Cretaceous/Paleogene (K/Pg) boundary was the seminal event that opened the door for the subsequent diversification of terrestrial mammals. Our compilation of maximum body size at the ordinal level by sub-epoch shows a near-exponential increase after the K/Pg. On each continent, the maximum size of mammals leveled off after 40 million years ago and thereafter remained approximately constant. There was remarkable congruence in the rate, trajectory, and upper limit across continents, orders, and trophic guilds, despite differences in geological and climatic history, turnover of lineages, and ecological variation. Our analysis suggests that although the primary driver for the evolution of giant mammals was diversification to fill ecological niches, environmental temperature and land area may have ultimately constrained the maximum size achieved.
Abstract:
Greater attention has been focused on the use of CDMA for future cellular mobile communications. A near-far resistant detector for asynchronous code-division multiple-access (CDMA) systems operating in additive white Gaussian noise (AWGN) channels is presented. The multiuser interference caused by K users transmitting simultaneously, each with a specific signature sequence, is completely removed at the receiver. The complexity of this detector grows only linearly with the number of users, as compared to the optimum multiuser detector which requires exponential complexity in the number of users. A modified algorithm based on time diversity is described. It performs detection on a bit-by-bit basis and overcomes the complexity of using a sequence detector. The performance of this detector is shown to be superior to that of the conventional receiver.
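The paper's detector handles the asynchronous case; why a linear (decorrelating) detector resists the near-far problem can be illustrated with a two-user synchronous sketch. The signal model and numbers below are invented for illustration, not taken from the paper.

```python
def decorrelate_2user(y1, y2, rho):
    """Decorrelating detection for two synchronous users: invert the
    cross-correlation matrix R = [[1, rho], [rho, 1]], then take signs."""
    det = 1.0 - rho * rho
    b1 = (y1 - rho * y2) / det
    b2 = (y2 - rho * y1) / det
    return (1 if b1 >= 0 else -1, 1 if b2 >= 0 else -1)

# Near-far scenario: user 2 is ten times stronger (A1 = 1, A2 = 10),
# bits b = (+1, -1), cross-correlation rho = 0.7, noiseless matched-filter
# outputs y = R A b:
y1 = 1.0 * (+1) + 0.7 * 10.0 * (-1)   # -6.0: user 1's sign is swamped
y2 = 0.7 * 1.0 * (+1) + 10.0 * (-1)   # approximately -9.3
bits = decorrelate_2user(y1, y2, 0.7)  # recovers (1, -1)
```

A conventional (matched-filter) receiver would slice `y1 < 0` and decide user 1's bit wrongly; inverting the cross-correlation matrix removes the interference at a cost that grows only polynomially in the number of users.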