925 results for exponential sums
Abstract:
A One-Dimensional Time to Explosion (ODTX) apparatus has been used to study the times to explosion of a number of compositions based on RDX and HMX over a range of contact temperatures. The times to explosion at any given temperature tend to increase from RDX to HMX and with the proportion of HMX in the composition. Thermal ignition theory has been applied to the time to explosion data to calculate kinetic parameters. The apparent activation energy for all of the compositions lay between 127 kJ mol−1 and 146 kJ mol−1. There were large differences in the pre-exponential factor, and it was this factor, rather than the activation energy, that controlled the time to explosion.
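In thermal ignition theory the time to explosion follows an Arrhenius-type law, so ln(t) is linear in 1/T with slope Ea/R. A minimal sketch of recovering the apparent activation energy from such data; the rate constant, temperatures and times below are synthetic illustrations, not the measured values:

```python
import math

R = 8.314  # J mol^-1 K^-1

# Hypothetical Arrhenius-type model: t = C * exp(Ea / (R * T)),
# so ln(t) is linear in 1/T with slope Ea/R.
Ea_true = 1.35e5   # J/mol, within the 127-146 kJ/mol range quoted above
C = 1e-14          # pre-exponential-related constant (illustrative)

temps = [450.0, 475.0, 500.0, 525.0, 550.0]            # contact temperatures, K
times = [C * math.exp(Ea_true / (R * T)) for T in temps]

# Least-squares slope of ln(t) against 1/T recovers Ea/R.
xs = [1.0 / T for T in temps]
ys = [math.log(t) for t in times]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
Ea_fit = slope * R
print(f"fitted Ea = {Ea_fit / 1000:.1f} kJ/mol")
```

Differences in the pre-exponential factor C shift the intercept of this line without changing its slope, which is why C rather than Ea can dominate the absolute time to explosion.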
Abstract:
This study was designed to determine the response of in vitro fermentation parameters to incremental levels of polyethylene glycol (PEG) when tanniniferous tree fruits (Dichrostachys cinerea, Acacia erioloba, A. erubiscens, A. nilotica and Piliostigma thonningii) were fermented using the Reading Pressure Technique. The trivalent ytterbium precipitable phenolics content of fruit substrates ranged from 175 g/kg DM in A. erubiscens to 607 g/kg DM in A. nilotica, while the soluble condensed tannin content ranged from 0.09 AU550nm/40 mg in A. erioloba to 0.52 AU550nm/40 mg in D. cinerea. The ADF was highest in P. thonningii fruits (402 g/kg DM) and lowest in A. nilotica fruits (165 g/kg DM). Increasing the level of PEG caused an exponential rise to a maximum (asymptote) for cumulative gas production, rate of gas production and nitrogen degradability in all substrates except P. thonningii fruits. Dry matter degradability for fruits containing higher levels of soluble condensed tannins (D. cinerea and P. thonningii) showed little response to incremental levels of PEG after incubation for 24 h. The minimum levels of PEG required to maximize in vitro fermentation of tree fruits were found to be 200 mg PEG/g DM of sample for all tree species except A. erubiscens fruits, which required 100 mg PEG/g DM of sample. The study provides evidence that PEG levels lower than 1 g/g DM of sample can be used for in vitro tannin bioassays to reduce the cost of evaluating non-conventional tanniniferous feedstuffs used in developing countries in the tropics and subtropics. The use of in vitro nitrogen degradability in place of the favoured dry matter degradability improved the accuracy of PEG as a diagnostic tool for tannins in in vitro fermentation systems.
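An "exponential rise to an asymptote" of the kind described can be written as gas(PEG) = a·(1 − exp(−c·PEG)), and the minimum effective dose is then the smallest level that approaches the asymptote. A sketch with invented parameters (a and c are illustrative, not fitted values from the study):

```python
import math

# Hypothetical asymptotic response of cumulative gas production to PEG dose:
# gas(PEG) = a * (1 - exp(-c * PEG)); a and c are illustrative, not fitted.
a = 45.0    # asymptotic gas production, ml
c = 0.015   # rate constant per mg PEG/g DM

def gas(peg_mg_per_g):
    return a * (1.0 - math.exp(-c * peg_mg_per_g))

# Smallest dose (on a 10 mg grid) giving >= 95% of the asymptote.
dose = next(d for d in range(0, 1001, 10) if gas(d) >= 0.95 * a)
print(dose, round(gas(dose), 1))
```

With these illustrative parameters the 95% threshold lands at 200 mg PEG/g DM, the same order as the minimum effective level reported above.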
Abstract:
Exponential spectra are found to characterize variability of the Northern Annular Mode (NAM) for periods less than 36 days. This corresponds to the observed rounding of the autocorrelation function at lags of a few days. The characteristic persistence timescale during winter and summer is found to be ∼5 days at these high frequencies. Beyond periods of 36 days the characteristic decorrelation timescale is ∼20 days during winter and ∼6 days in summer. We conclude that the NAM cannot be described by autoregressive models at high frequencies; the spectra are more consistent with low-order chaos. We also propose that the NAM exhibits regime behaviour; however, the nature of this has yet to be identified.
Abstract:
A UK field experiment compared a complete factorial combination of three backgrounds (cvs Mercia, Maris Huntsman and Maris Widgeon), three alleles at the Rht-B1 locus as Near Isogenic Lines (NILs: rht-B1a (tall), Rht-B1b (semi-dwarf), Rht-B1c (severe dwarf)) and four nitrogen (N) fertilizer application rates (0, 100, 200 and 350 kg N/ha). Linear+exponential functions were fitted to grain yield (GY) and nitrogen-use efficiency (NUE; GY/available N) responses to N rate. Averaged over N rate and background, Rht-B1b conferred significantly (P<0.05) greater GY, NUE, N uptake efficiency (NUpE; N in above-ground crop/available N) and N utilization efficiency (NUtE; GY/N in above-ground crop) compared with rht-B1a and Rht-B1c. However, the economically optimal N rate (Nopt) for N:grain price ratios of 3.5:1 to 10:1 was also greater for Rht-B1b, and because NUE, NUpE and NUtE all declined with N rate, Rht-B1b failed to increase NUE or its components at Nopt. The adoption of semi-dwarf lines in temperate and humid regions, and the greater N rates that such adoption justifies economically, greatly increases land-use efficiency, but not necessarily NUE.
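The economically optimal N rate in a linear+exponential response can be found by maximizing grain value minus fertilizer cost. A sketch under assumed parameters (the function form GY(N) = a + b·r^N + c·N is the one named above, but a, b, r and c below are invented, not the fitted values):

```python
# Illustrative linear+exponential yield response (parameters are assumptions,
# not the fitted values from the experiment): GY(N) = a + b*r**N + c*N
a, b, r, c = 9.0, -4.0, 0.99, -0.004   # GY in t/ha, N in kg/ha

def grain_yield(n):
    return a + b * r**n + c * n

# Economic optimum: maximise grain value minus fertilizer cost, where one
# kg of N costs `price_ratio` times one kg of grain (GY converted to kg).
def n_opt(price_ratio, step=1):
    return max(range(0, 351, step),
               key=lambda n: grain_yield(n) * 1000 - price_ratio * n)

print(n_opt(3.5), n_opt(10.0))
```

As in the abstract, a higher N:grain price ratio pushes the optimum down, so Nopt at 10:1 is well below Nopt at 3.5:1.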
Abstract:
This article presents and assesses an algorithm that constructs 3D distributions of cloud from passive satellite imagery and collocated 2D nadir profiles of cloud properties inferred synergistically from lidar, cloud radar and imager data. It effectively widens the active–passive retrieved cross-section (RXS) of cloud properties, thereby enabling computation of radiative fluxes and radiances that can be compared with measured values in an attempt to perform radiative closure experiments that aim to assess the RXS. For this introductory study, A-train data were used to verify the scene-construction algorithm and only 1D radiative transfer calculations were performed. The construction algorithm fills off-RXS recipient pixels by computing sums of squared differences (a cost function F) between their spectral radiances and those of potential donor pixels/columns on the RXS. Of the RXS pixels with F lower than a certain value, the one with the smallest Euclidean distance to the recipient pixel is designated as the donor, and its retrieved cloud properties and other attributes such as 1D radiative heating rates are consigned to the recipient. It is shown that both the RXS itself and Moderate Resolution Imaging Spectroradiometer (MODIS) imagery can be reconstructed extremely well using just visible and thermal infrared channels. Suitable donors usually lie within 10 km of the recipient. RXSs and their associated radiative heating profiles are reconstructed best for extensive planar clouds and less reliably for broken convective clouds. 
Domain-average 1D broadband radiative fluxes at the top of the atmosphere (TOA) for (21 km)² domains constructed from MODIS, CloudSat and Cloud–Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) data agree well with coincidental values derived from Clouds and the Earth's Radiant Energy System (CERES) radiances: differences between modelled and measured reflected shortwave fluxes are within ±10 W m−2 for ∼35% of the several hundred domains constructed for eight orbits. Correspondingly, for outgoing longwave radiation, ∼65% are within ±10 W m−2.
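The donor-selection step described above can be sketched directly: compute the cost F (sum of squared spectral-radiance differences) against every RXS pixel, keep those under a threshold, and take the one nearest in ground distance. The pixel records, two-channel radiances and threshold value below are invented for illustration:

```python
import math

# Sketch of the donor-selection step; field names and f_max are assumptions.
def select_donor(recipient, rxs_pixels, f_max=0.01):
    """Return the RXS pixel whose radiances match the recipient within
    f_max (sum of squared differences) and whose ground distance is
    smallest; None if no candidate qualifies."""
    candidates = []
    for p in rxs_pixels:
        f = sum((a - b) ** 2
                for a, b in zip(recipient["radiance"], p["radiance"]))
        if f <= f_max:
            dist = math.hypot(recipient["x"] - p["x"], recipient["y"] - p["y"])
            candidates.append((dist, p))
    return min(candidates, key=lambda t: t[0])[1] if candidates else None

rxs = [{"x": 0, "y": 0, "radiance": (0.30, 0.55), "cloud": "stratus"},
       {"x": 5, "y": 0, "radiance": (0.31, 0.54), "cloud": "stratus"},
       {"x": 9, "y": 0, "radiance": (0.80, 0.20), "cloud": "clear"}]
recipient = {"x": 6, "y": 3, "radiance": (0.31, 0.55)}
donor = select_donor(recipient, rxs)
print(donor["x"], donor["cloud"])
```

The recipient then inherits the donor's retrieved cloud properties and attributes such as 1D heating rates, exactly as the construction algorithm describes.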
Abstract:
This study proposes a utility-based framework for the determination of optimal hedge ratios (OHRs) that can allow for the impact of higher moments on hedging decisions. We examine the entire hyperbolic absolute risk aversion family of utilities, which includes quadratic, logarithmic, power, and exponential utility functions. We find that for both moderate and large spot (commodity) exposures, the performance of out-of-sample hedges constructed allowing for nonzero higher moments is better than the performance of the simpler OLS hedge ratio. The picture is, however, not uniform throughout our seven spot commodities, as there is one instance (cotton) for which the modeling of higher moments decreases welfare out-of-sample relative to the simpler OLS. We support our empirical findings by a theoretical analysis of optimal hedging decisions, and we uncover a novel link between OHRs and the minimax hedge ratio, that is, the ratio that minimizes the largest loss of the hedged position.
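The OLS benchmark mentioned above is the minimum-variance hedge ratio, h* = Cov(spot, futures) / Var(futures), i.e. the slope of a regression of spot returns on futures returns. A minimal sketch with made-up return series:

```python
# Minimum-variance (OLS) hedge ratio used as the benchmark in the study:
# h* = Cov(spot returns, futures returns) / Var(futures returns).
# The return series below are invented for illustration.
spot = [0.010, -0.004, 0.006, -0.012, 0.008, 0.002]
fut  = [0.012, -0.005, 0.004, -0.010, 0.009, 0.001]

n = len(spot)
ms, mf = sum(spot) / n, sum(fut) / n
cov = sum((s - ms) * (f - mf) for s, f in zip(spot, fut)) / (n - 1)
var = sum((f - mf) ** 2 for f in fut) / (n - 1)
h = cov / var
print(round(h, 3))
```

The utility-based OHRs in the study adjust this benchmark when skewness and kurtosis matter to the hedger.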
Abstract:
By eliminating the short range negative divergence of the Debye–Hückel pair distribution function, but retaining the exponential charge screening known to operate at large interparticle separation, the thermodynamic properties of one-component plasmas of point ions or charged hard spheres can be well represented even in the strong coupling regime. Predicted electrostatic free energies agree within 5% of simulation data for typical Coulomb interactions up to a factor of 10 times the average kinetic energy. Here, this idea is extended to the general case of a uniform ionic mixture, comprising an arbitrary number of components, embedded in a rigid neutralizing background. The new theory is implemented in two ways: (i) by an unambiguous iterative algorithm that requires numerical methods and breaks the symmetry of cross correlation functions; and (ii) by invoking generalized matrix inverses that maintain symmetry and yield completely analytic solutions, but which are not uniquely determined. The extreme computational simplicity of the theory is attractive when considering applications to complex inhomogeneous fluids of charged particles.
Abstract:
Grassland restoration is a key management tool contributing to the long-term maintenance of insect populations, providing functional connectivity and mitigating against extinction debt across landscapes. As knowledge of grassland insect communities is limited, the lag between the initiation of restoration and the ability of these new habitats to contribute to such processes is unclear. Using ten data sets, ranging from 3 to 14 years, we investigate the lag between restoration and the establishment of phytophagous beetle assemblages typical of species-rich grasslands. We used traits and ecological characteristics to determine factors limiting beetle colonisation, and also considered how food-web structure changed during restoration. For sites where seed addition of host-plants occurred, the success in replicating beetle assemblages increased over time following a negative exponential function. Extrapolation beyond the existing data set tentatively suggested that success would plateau after 20 years, representing a c. 60% increase in assemblage similarity to target grasslands. In the absence of seed addition, similarity to the target grasslands showed no increase over time. Where seed addition was used, the connectance of plant-herbivore food webs decreased over time, approaching values typical of species-rich grasslands after c. 7 years. This trend was, however, dependent on the inclusion of a single site containing data in excess of 6 years of restoration management. Beetles not capable of flight, those showing high degrees of host-plant specialisation, and species feeding on nationally rare host plants take between 1 and 3 years longer to colonise. Successful grassland restoration is underpinned by the establishment of host-plants, although individual species traits compound the effects of poor host-plant establishment to slow colonisation.
The use of pro-active grassland restoration to mitigate against future environmental change should account for lag periods in excess of 10 years if the value of these habitats is to be fully realised.
Abstract:
Small propagules like pollen or fungal spores may be dispersed by the wind over distances of hundreds or thousands of kilometres, even though the median dispersal may be only a few metres. Such long-distance dispersal is a stochastic event which may be exceptionally important in shaping a population. It has been found repeatedly in field studies that subpopulations of wind-dispersed fungal pathogens virulent on cultivars with newly introduced, effective resistance genes are dominated by one or very few genotypes. The role of propagule dispersal distributions with distinct behaviour at long distances in generating this characteristic population structure was studied by computer simulation of dispersal of clonal organisms in a heterogeneous environment with fields of unselective and selective hosts. Power-law distributions generated founder events in which new, virulent genotypes rapidly colonized fields of resistant crop varieties and subsequently dominated the pathogen population on both selective and unselective varieties, in agreement with data on rust and powdery mildew fungi. An exponential dispersal function, with extremely rare dispersal over long distances, resulted in slower colonization of resistant varieties by virulent pathogens or even no colonization if the distance between susceptible source and resistant target fields was sufficiently large. The founder events resulting from long-distance dispersal were highly stochastic, and exact quantitative prediction of genotype frequencies will therefore always be difficult.
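The contrast between the two kernel families can be seen by matching an exponential and a power-law (Pareto-type) dispersal distribution at the same median distance and comparing their tails; all parameter values below are illustrative:

```python
import math

# Two dispersal kernels with the same median distance (a few metres) but
# very different tails; the median, alpha and functional forms are
# illustrative, not those used in the simulations.
median = 3.0

# Exponential: P(D > d) = exp(-d / lam); median = lam * ln 2.
lam = median / math.log(2)
def tail_exp(d):
    return math.exp(-d / lam)

# Pareto-type power law: P(D > d) = (d0 / d)**alpha for d >= d0;
# median = d0 * 2**(1 / alpha).
alpha = 1.5
d0 = median / 2 ** (1 / alpha)
def tail_pow(d):
    return (d0 / d) ** alpha if d > d0 else 1.0

for d in (3, 100, 10_000):
    print(d, tail_exp(d), tail_pow(d))
```

Both kernels give P(D > 3 m) = 0.5, but at 10 km the power-law tail is still non-negligible while the exponential tail is effectively zero, which is exactly why the power-law kernel generates founder events in distant resistant fields and the exponential one often does not.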
Abstract:
In this paper a support vector machine (SVM) approach for characterizing the feasible parameter set (FPS) in non-linear set-membership estimation problems is presented. It iteratively solves a regression problem from which an approximation of the boundary of the FPS can be determined. To guarantee convergence to the boundary, the procedure includes a derivative-free line search, and for an appropriate coverage of points on the FPS boundary it is suggested to start with a sequential box pavement procedure. The SVM approach is illustrated on a simple sine and exponential model with two parameters and on an agro-forestry simulation model.
Abstract:
Sigma B (σB) is an alternative sigma factor that controls the transcriptional response to stress in Listeria monocytogenes and is also known to play a role in the virulence of this human pathogen. In the present study we investigated the impact of a sigB deletion on the proteome of L. monocytogenes grown in a chemically defined medium both in the presence and in the absence of osmotic stress (0.5 M NaCl). Two new phenotypes associated with the sigB deletion were identified using this medium. (i) Unexpectedly, the strain with the ΔsigB deletion was found to grow faster than the parent strain in the growth medium, but only when 0.5 M NaCl was present. This phenomenon was independent of the carbon source provided in the medium. (ii) The ΔsigB mutant was found to have unusual Gram staining properties compared to the parent, suggesting that σB contributes to the maintenance of an intact cell wall. A proteomic analysis was performed by two-dimensional gel electrophoresis, using cells growing in the exponential and stationary phases. Overall, 11 proteins were found to be differentially expressed in the wild type and the ΔsigB mutant; 10 of these proteins were expressed at lower levels in the mutant, and 1 was overexpressed in the mutant. All 11 proteins were identified by tandem mass spectrometry, and putative functions were assigned based on homology to proteins from other bacteria. Five proteins had putative functions related to carbon utilization (Lmo0539, Lmo0783, Lmo0913, Lmo1830, and Lmo2696), while three proteins were similar to proteins whose functions are unknown but that are known to be stress inducible (Lmo0796, Lmo2391, and Lmo2748). To gain further insight into the role of σB in L. monocytogenes, we deleted the genes encoding four of the proteins, lmo0796, lmo0913, lmo2391, and lmo2748. 
Phenotypic characterization of the mutants revealed that Lmo2748 plays a role in osmotolerance, while Lmo0796, Lmo0913, and Lmo2391 were all implicated in acid stress tolerance to various degrees. Invasion assays performed with Caco-2 cells indicated that none of the four genes was required for mammalian cell invasion. Microscopic analysis suggested that loss of Lmo2748 might contribute to the cell wall defect observed in the ΔsigB mutant. Overall, this study highlighted two new phenotypes associated with the loss of σB. It also demonstrated clear roles for σB in both osmotic and low-pH stress tolerance and identified specific components of the σB regulon that contribute to the responses observed.
Abstract:
Volume determination of tephra deposits is necessary for the assessment of the dynamics and hazards of explosive volcanoes. Several methods have been proposed during the past 40 years that include the analysis of crystal concentration of large pumices, integrations of various thinning relationships, and the inversion of field observations using analytical and computational models. Regardless of their strong dependence on tephra-deposit exposure and distribution of isomass/isopach contours, empirical integrations of deposit thinning trends still represent the most widely adopted strategy due to their practical and fast application. The most recent methods involve the best fitting of thinning data using various exponential segments or a power-law curve on semilog plots of thickness (or mass/area) versus square root of isopach area. The exponential method is mainly sensitive to the number and the choice of straight segments, whereas the power-law method can better reproduce the natural thinning of tephra deposits but is strongly sensitive to the proximal or distal extreme of integration. We analyze a large data set of tephra deposits and propose a new empirical method for the determination of tephra-deposit volumes that is based on the integration of the Weibull function. The new method shows a better agreement with observed data, reconciling the debate on the use of the exponential versus power-law method. In fact, the Weibull best fitting only depends on three free parameters, can well reproduce the gradual thinning of tephra deposits, and does not depend on the choice of arbitrary segments or of arbitrary extremes of integration.
Abstract:
Statistical methods of inference typically require the likelihood function to be computable in a reasonable amount of time. The class of “likelihood-free” methods termed Approximate Bayesian Computation (ABC) is able to eliminate this requirement, replacing the evaluation of the likelihood with simulation from it. Likelihood-free methods have gained in efficiency and popularity in the past few years, following their integration with Markov Chain Monte Carlo (MCMC) and Sequential Monte Carlo (SMC) in order to better explore the parameter space. They have been applied primarily to estimating the parameters of a given model, but can also be used to compare models. Here we present novel likelihood-free approaches to model comparison, based upon the independent estimation of the evidence of each model under study. Key advantages of these approaches over previous techniques are that they allow the exploitation of MCMC or SMC algorithms for exploring the parameter space, and that they do not require a sampler able to mix between models. We validate the proposed methods using a simple exponential family problem before providing a realistic problem from human population genetics: the comparison of different demographic models based upon genetic data from the Y chromosome.
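The evidence-based comparison can be caricatured with plain rejection ABC: simulate datasets from each model's prior predictive and use the acceptance rate at a fixed tolerance as a rough estimate of the evidence. The two toy models, the summary statistic and the tolerance below are invented for illustration and are far simpler than the MCMC/SMC schemes the paper develops:

```python
import random

random.seed(1)

# Toy rejection-ABC model comparison: acceptance rate under each model's
# prior predictive approximates that model's evidence at tolerance EPS.
obs_mean = 2.0     # summary statistic of the "observed" data (invented)
EPS = 0.2
N = 20_000

def accept_rate(draw_rate):
    hits = 0
    for _ in range(N):
        rate = draw_rate()                              # parameter from prior
        sample = [random.expovariate(rate) for _ in range(20)]
        if abs(sum(sample) / 20 - obs_mean) < EPS:      # distance to data
            hits += 1
    return hits / N

# Model 1: exponential data, rate prior near 0.5 (mean near the data).
ev1 = accept_rate(lambda: random.uniform(0.3, 0.7))
# Model 2: exponential data, rate prior far from the data.
ev2 = accept_rate(lambda: random.uniform(2.0, 3.0))
print(ev1, ev2)
```

The model whose prior predictive sits near the observed summary accumulates a much higher acceptance rate, so the ratio ev1/ev2 acts as a crude Bayes factor; the paper's contribution is estimating each evidence independently with samplers that actually explore the parameter space.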
Abstract:
Undirected graphical models are widely used in statistics, physics and machine vision. However, Bayesian parameter estimation for undirected models is extremely challenging, since evaluation of the posterior typically involves the calculation of an intractable normalising constant. This problem has received much attention, but very little of it has focussed on the important practical case where the data consist of noisy or incomplete observations of the underlying hidden structure. This paper specifically addresses this problem, comparing two alternative methodologies. In the first of these approaches, particle Markov chain Monte Carlo (Andrieu et al., 2010) is used to efficiently explore the parameter space, combined with the exchange algorithm (Murray et al., 2006) for avoiding the calculation of the intractable normalising constant (a proof showing that this combination targets the correct distribution is found in a supplementary appendix online). This approach is compared with approximate Bayesian computation (Pritchard et al., 1999). Applications to estimating the parameters of Ising models and exponential random graphs from noisy data are presented. Each algorithm used in the paper targets an approximation to the true posterior due to the use of MCMC to simulate from the latent graphical model, in lieu of being able to do this exactly in general. The supplementary appendix also describes the nature of the resulting approximation.
Abstract:
An isolate of L. monocytogenes Scott A that is tolerant to high hydrostatic pressure (HHP), named AK01, was isolated upon a single pressurization treatment of 400 MPa for 20 min and was further characterized. The survival of exponential- and stationary-phase cells of AK01 in ACES [N-(2-acetamido)-2-aminoethanesulfonic acid] buffer was at least 2 log units higher than that of the wild type over a broad range of pressures (150 to 500 MPa), while both strains showed higher HHP tolerance (piezotolerance) in the stationary than in the exponential phase of growth. In semiskim milk, exponential-phase cells of both strains showed lower reductions upon pressurization than in buffer, but again, AK01 was more piezotolerant than the wild type. The piezotolerance of AK01 was retained for at least 40 generations in rich medium, suggesting a stable phenotype. Interestingly, cells of AK01 lacked flagella, were elongated, and showed slightly lower maximum specific growth rates than the wild type at 8, 22, and 30°C. Moreover, the piezotolerant strain AK01 showed increased resistance to heat, acid, and H2O2 compared with the wild type. The difference in HHP tolerance between the piezotolerant strain and the wild-type strain could not be attributed to differences in membrane fluidity, since strain AK01 and the wild type had identical in situ lipid melting curves as determined by Fourier transform infrared spectroscopy. The demonstrated occurrence of a piezotolerant isolate of L. monocytogenes underscores the need to further investigate the mechanisms underlying HHP resistance of food-borne microorganisms, which in turn will contribute to the appropriate design of safe, accurate, and feasible HHP treatments.