882 results for "exponential sum on elliptic curve"
Abstract:
There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult, and makes the estimation of trends or the assessment of optimal sampling regimes impossible. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates by minimizing the biases and making use of possible predictive variables. The load estimation procedure can be summarized by the following four steps:
- (i) output the flow rates at regular time intervals (e.g. 10 minutes) using a time series model that captures all the peak flows;
- (ii) output the predicted flow rates as in (i) at the concentration sampling times, if the corresponding flow rates are not collected;
- (iii) establish a predictive model for the concentration data, which incorporates all possible predictor variables, and output the predicted concentrations at the regular time intervals as in (i); and
- (iv) obtain the sum of all the products of the predicted flow and the predicted concentration over the regular time intervals to represent an estimate of the load.
The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized regression (rating-curve) approach with additional predictors that capture unique features in the flow data, namely the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall), and cumulative discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. The model also has the capacity to accommodate autocorrelation in model errors, which results from intensive sampling during floods.
Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. The method can also incorporate measurement error incurred through the sampling of flow. We illustrate this approach using concentrations of total suspended sediment (TSS) and nitrogen oxides (NOx) and gauged flow data from the Burdekin River, a catchment delivering to the Great Barrier Reef. The sampling biases for NOx concentrations range from 2- to 10-fold, indicating severe bias. As expected, the traditional average and extrapolation methods produce much higher estimates than those obtained when sampling bias is taken into account.
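Step (iv) above is a simple rectangle-rule aggregation of flow-by-concentration products. A minimal sketch, with the function name and units assumed for illustration (not taken from the paper):

```python
def estimate_load(flows, concs, dt_seconds):
    """Approximate pollutant load as the sum of flow x concentration
    products over regular time intervals (step (iv) above).

    flows: predicted flow rates (m^3/s) at regular time intervals
    concs: predicted concentrations (mg/L = g/m^3) at the same times
    dt_seconds: interval length, e.g. 600 for 10-minute steps
    Returns the load in grams.
    """
    if len(flows) != len(concs):
        raise ValueError("flow and concentration series must align")
    return sum(q * c * dt_seconds for q, c in zip(flows, concs))

# Constant flow of 2 m^3/s at 5 g/m^3 over two 600 s intervals:
# 2 * 5 * 600 * 2 = 12000 g
load = estimate_load([2.0, 2.0], [5.0, 5.0], 600)
```

In the paper's procedure both series are model predictions, so the standard error of the load must also propagate model, spatial and/or temporal errors; this sketch covers only the summation itself.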
Abstract:
The magnetic moment of the Λ hyperon is calculated using the QCD sum-rule approach of Ioffe and Smilga. It is shown that μΛ has the structure μΛ = (2/3)(eu+ed+4es)(eħ/2MΛc)(1+δΛ), where δΛ is small. In deriving the sum rules, special attention is paid to the strange-quark mass-dependent terms and to several additional terms not considered in earlier works. These terms are now appropriately incorporated. The sum rule is analyzed using the ratio method. Using the external-field-induced susceptibilities determined earlier, we find that the calculated value of μΛ is in agreement with experiment.
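As a quick arithmetic check of the quoted structure, the quark-charge factor can be evaluated with the standard charges; the mass values below are rough PDG-style numbers, not taken from the paper:

```python
from fractions import Fraction

# Quark charges in units of e
e_u, e_d, e_s = Fraction(2, 3), Fraction(-1, 3), Fraction(-1, 3)

# Leading factor in mu_Lambda = (2/3)(e_u + e_d + 4 e_s)(e hbar / 2 M_Lambda c)(1 + delta)
factor = Fraction(2, 3) * (e_u + e_d + 4 * e_s)   # = -2/3

# Rough comparison with experiment (masses in MeV, approximate):
M_p, M_Lambda = 938.3, 1115.7
mu_leading = float(factor) * M_p / M_Lambda   # in nuclear magnetons
# mu_leading is about -0.56, against a measured value near -0.61 mu_N,
# so delta_Lambda is of order 0.09 -- indeed small, as the abstract states.
```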
Abstract:
The Wilson coefficient corresponding to the gluon-field strength GμνGμν is evaluated for the nucleon current correlation function in the presence of a static external electromagnetic field, using a regulator mass Λ to separate the high-momentum part of the Feynman diagrams. The magnetic-moment sum rules are analyzed by two different methods, and the sensitivity of the results to variations in Λ is discussed.
Abstract:
Whilst the topic of soil salinity has received a substantive research effort over the years, the accurate measurement and interpretation of salinity tolerance data remain problematic. The tolerance of four perennial grass species (non-halophytes) to sodium chloride (NaCl) dominated salinity was determined in a free-flowing sand culture system. Although the salinity tolerance of non-halophytes is often represented by the threshold salinity model (bent-stick model), none of the species in the current study displayed any observable salinity threshold. Further, the observed yield decrease was not linear as suggested by the model. On re-examination of earlier datasets, we conclude that the threshold salinity model does not adequately describe the physiological processes limiting growth of non-halophytes in saline soils. Therefore, the use of the threshold salinity model is not recommended for non-halophytes, but rather, a model which more accurately reflects the physiological response observed in these saline soils, such as an exponential regression curve.
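The exponential alternative the authors recommend can be fitted with a simple log-linear regression. A minimal sketch on synthetic data; the decay constant and yield values are invented for illustration, not taken from the study:

```python
import math

def fit_exponential(salinity, yields):
    """Fit relative yield = y0 * exp(-k * salinity) by ordinary least
    squares on log(yield) -- the kind of exponential regression curve
    suggested for non-halophytes in place of the threshold
    ('bent-stick') model.
    """
    xs, ys = salinity, [math.log(y) for y in yields]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    return math.exp(ybar - slope * xbar), -slope  # y0, k

# Synthetic, noise-free data generated from y = 1.0 * exp(-0.05 * EC):
ec = [0, 5, 10, 20, 40]
y = [math.exp(-0.05 * x) for x in ec]
y0, k = fit_exponential(ec, y)
# Recovers y0 = 1.0 and k = 0.05 exactly on this noise-free data.
```

Note that, unlike the bent-stick model, this curve has no threshold: yield declines from the first increment of salinity, which matches the response observed in the study.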
Abstract:
A small population of tall slender conifers was discovered in 1994 in a deep rainforest canyon of the Wollemi National Park, New South Wales, Australia. The living trees closely resembled fossils that were more than 65 million years old, and this ‘living fossil’ was recognised as a third extant genus in the Araucariaceae (Araucaria, Agathis and now Wollemia). The species was named the Wollemi pine (W. nobilis). Extensive searches uncovered very few populations, with the total number of adult trees being less than 100. Ex situ collections were quickly established in Sydney as part of the Wollemi Pine Recovery Plan. The majority of the ex situ population was later transferred to our custom-built facility in Queensland for commercial multiplication. Domestication has relied very heavily on the species’ amenability to vegetative propagation because seed collection from the natural populations is dangerous, expensive, and undesirable for conservation reasons. Early propagation success was poor, with only about 25% of cuttings producing roots. However, small increases in propagation success have a very large impact on a domestication program because plant production can be modelled on an exponential curve where each rooted cutting develops into a mother plant that, in turn, provides more rooted cuttings. An extensive research program elevated rooting percentages to greater than 80% and also provided in vitro methods for plant multiplication. These successes have enabled international release of the Wollemi pine as a new and attractive species for ornamental horticulture.
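The compounding effect of rooting percentage can be made concrete with a toy multiplication model. Only the 25% and 80% rooting rates come from the abstract; the cuttings-per-mother figure and cycle count are hypothetical:

```python
def plants_after(generations, cuttings_per_mother, rooting_rate, start=1):
    """Model vegetative multiplication as an exponential curve: every
    rooted cutting becomes a mother plant for the next cycle, so stock
    grows by a factor of (cuttings * rooting_rate) per cycle.
    """
    plants = start
    for _ in range(generations):
        plants *= cuttings_per_mother * rooting_rate
    return plants

# With a hypothetical 10 cuttings per mother plant per cycle:
low = plants_after(4, 10, 0.25)   # 2.5^4 = 39.0625
high = plants_after(4, 10, 0.80)  # 8.0^4 = 4096.0
# A ~3x improvement in rooting yields ~100x more plants after 4 cycles,
# which is why modest propagation gains transform the program.
```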
Abstract:
A new test for pathogenic Leptospira isolates, based on RAPD-PCR and high-resolution melt (HRM) analysis (which measures the melting temperature of amplicons in real time, using a fluorescent DNA-binding dye), has recently been developed. A characteristic profile of the amplicons can be used to define serovars or detect genotypes. Ten serovars of leptospires, from the species Leptospira interrogans (serovars Australis, Robinsoni, Hardjo, Pomona, Zanoni, Copenhageni and Szwajizak), L. borgpetersenii (serovar Arborea), L. kirschneri (serovar Cynopteri) and L. weilii (serovar Celledoni), were typed against 13 previously published RAPD primers, using a real-time cycler (the Corbett Life Science RotorGene 6000) and the optimised reagents from a commercial kit (Quantace SensiMix). RAPD-HRM at specific temperatures generated defining amplicon melt profiles for each of the tested serovars. These profiles were evaluated as difference-curve graphs generated using the RotorGene software package, with a cut-off of at least 8 'U' (plus or minus). The results demonstrated that RAPD-HRM can be used to measure serovar diversity and establish identity, with a high degree of stability. The characterisation of Leptospira serotypes using a DNA-based methodology is now possible. As an objective and relatively inexpensive and rapid method of serovar identification, at least for cultured isolates, RAPD-HRM assays show convincing potential.
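The difference-curve decision rule described above can be sketched loosely as follows; the data, function name, and point-by-point comparison are hypothetical simplifications of what the RotorGene software's difference-curve graphs actually show:

```python
def distinct_serovar(sample, reference, cutoff=8.0):
    """Loose sketch of the difference-curve call: subtract the
    reference melt profile point-by-point and flag the sample as a
    different serovar if the difference exceeds the cut-off
    (8 'U', plus or minus) at any temperature point.
    """
    diffs = [s - r for s, r in zip(sample, reference)]
    return any(abs(d) > cutoff for d in diffs)

# Identical profiles never cross the cut-off:
same = distinct_serovar([10.0, 20.0, 30.0], [10.0, 20.0, 30.0])
# A 12-U excursion at one temperature point crosses the +/-8 U threshold:
other = distinct_serovar([10.0, 32.0, 30.0], [10.0, 20.0, 30.0])
```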
Abstract:
High-resolution melt-curve analysis of random amplified polymorphic DNA (RAPD-HRM) is a novel technology that has emerged as a possible method to characterise leptospires to serovar level. RAPD-HRM has recently been used to measure intra-serovar convergence between strains of the same serovar as well as inter-serovar divergence between strains of different serovars. The results indicate that intra-serovar heterogeneity and inter-serovar homogeneity may limit the application of RAPD-HRM in routine diagnostics. They also indicate that genetic attenuation of aged, high-passage-number isolates could undermine the use of RAPD-HRM or any other molecular technology. Such genetic attenuation may account for a general decrease seen in titres of rabbit hyperimmune antibodies over time. Before RAPD-HRM can be further advanced as a routine diagnostic tool, strains more representative of the wild-type serovars of a given region need to be identified. Further, RAPD-HRM analysis of reference strains indicates that the routine renewal of reference collections, with new isolates, may be needed to maintain the genetic integrity of the collections.
Abstract:
The magnetic moment μB of a baryon B with quark content (aab) is written as μB = 4ea(1+δB)(eħ/2cMB), where ea is the charge of the quark of flavor type a. The experimental values of δB have a simple pattern and have a natural explanation within QCD. Using the ratio method, the QCD sum rules are analyzed and the values of δB are computed. We find good agreement with data (≈10%) for the nucleons and the Σ multiplet, while for the cascade the agreement is not as good. In our analysis we have incorporated additional terms in the operator-product expansion as compared to previous authors. We also clarify some points of disagreement between the previous authors. External-field-induced correlations describing the magnetic properties of the vacuum are estimated from the baryon magnetic-moment sum rules themselves as well as by independent spectral representations, and the results are contrasted.
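A back-of-envelope check of the leading term for the proton (uud, so a = u); the measured moment used below is the standard value in nuclear magnetons, not quoted in the abstract:

```python
from fractions import Fraction

# mu_B = 4 e_a (1 + delta_B) (e hbar / 2 c M_B), baryon content (aab).
# For the proton, e hbar / 2 c M_p is exactly one nuclear magneton.
e_u = Fraction(2, 3)
leading = 4 * e_u            # 8/3 ~ 2.667 in units of e hbar / 2 c M_p

mu_p_exp = 2.793             # measured proton moment, nuclear magnetons
delta_p = mu_p_exp / float(leading) - 1
# delta_p comes out near 0.047: a small correction, consistent with the
# abstract's remark that the experimental delta_B follow a simple pattern.
```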
Abstract:
The paper reports a detailed determination of the coexistence curve for the binary liquid system acetonitrile + cyclohexane, whose components have very closely matched densities, so that the data points are affected by gravity only for t = (Tc − T)/Tc ≈ 10⁻⁶. About 100 samples were measured over the range 10⁻⁶
Abstract:
Crop models for herbaceous ornamental species typically include functions for temperature and photoperiod responses, but very few incorporate vernalization, which is a requirement of many traditional crops. This study investigated the development of floriculture crop models, which describe temperature responses, plus photoperiod or vernalization requirements, using the Australian native ephemerals Brunonia australis and Calandrinia sp. A novel approach involved the use of a field crop modelling tool, DEVEL2. This optimization program estimates the parameters of selected functions within the development rate models using an iterative process that minimizes the sum-of-squares residual between estimated and observed days for the phenological event. Parameter profiling and jack-knifing are included in DEVEL2 to remove bias from parameter estimates and introduce rigour into the parameter selection process. Development rate of B. australis from planting to first visible floral bud (VFB) was predicted using a multiplicative approach with a curvilinear function to describe temperature responses and a broken linear function to explain photoperiod responses. A similar model was used to describe the development rate of Calandrinia sp., except the photoperiod function was replaced with an exponential vernalization function, which explained a facultative cold requirement and included a coefficient for determining the vernalization ceiling temperature. Temperature was the main environmental factor influencing development rate for VFB to anthesis of both species and was predicted using a linear model. The phenology models for B. australis and Calandrinia sp. described development rate from planting to VFB and from VFB to anthesis in response to temperature and photoperiod or vernalization, and may assist modelling efforts of other herbaceous ornamental plants.
In addition to crop management, the vernalization function could be used to identify plant communities most at risk from predicted increases in temperature due to global warming.
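The multiplicative structure described above can be sketched as a toy model. All parameter values and function shapes below are invented for illustration; in practice DEVEL2 estimates them from phenology data:

```python
import math

def development_rate(temp, photoperiod, t_base=10.0, t_opt=25.0,
                     p_crit=12.0, p_slope=0.1):
    """Toy multiplicative development-rate model in the spirit of the
    abstract: a curvilinear temperature term multiplied by a
    broken-linear photoperiod term, each scaled to a maximum of 1.
    """
    # Curvilinear (humped) temperature response: 0 at t_base, peak at t_opt
    f_t = max(0.0, (temp - t_base) / (t_opt - t_base))
    f_t = f_t * math.exp(1 - f_t)
    # Broken-linear photoperiod response, capped at 1 above the break
    g_p = min(1.0, max(0.0, 1 + p_slope * (photoperiod - p_crit)))
    return f_t * g_p

def vernalization_factor(cold_days, k=0.05):
    """Exponential vernalization response for a facultative cold
    requirement: approaches 1 as cold exposure accumulates, so the
    requirement accelerates but does not gate development."""
    return 1 - math.exp(-k * cold_days)

# At optimal temperature and critical photoperiod the rate is maximal:
peak = development_rate(25.0, 12.0)   # 1.0
```

For a species like Calandrinia sp., the photoperiod term would be replaced by the vernalization factor in the product.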
Abstract:
Choy sum (Brassica rapa subsp. parachinensis) is a dark green leafy vegetable that contains high folate (vitamin B9) levels comparable to spinach. Folate is essential for the maintenance of human health and is obtained solely through dietary means. Analysis of the edible portion of choy sum by both microbiological assay and LC-MS/MS indicated that total folate activity remained significantly unchanged over 3 weeks of storage at 4 °C. Inedible fractions consisted primarily of outer leaves, which showed signs of rotting after 14 d, and a combination of rotting and yellowing after 21 d, contributing to 20% and 40% of product removal, respectively. Following deconjugation of the folate present in choy sum to monoglutamate and diglutamate derivatives, the principal forms (vitamers) of folate detected in choy sum were 5-methyltetrahydrofolate and 5-formyltetrahydrofolate, followed by tetrahydrofolate (THF), 5,10-methenyl-THF, and 10-formyl folic acid. During storage, a significant decline in 5-formyl-THF was observed, with a slight but not significant increase in the combined 5-methyl-THF derivatives. The decline in 5-formyl-THF relative to the other folate vitamers present may indicate that 5-formyl-THF is being utilised as a folate storage reserve, being interconverted to more metabolically active forms of folate, such as 5-methyl-THF. Although the folate vitamer profile changed over the storage period, total folate activity did not significantly change. From a human nutritional perspective this is important: while particular folate vitamers (e.g. 5-methyl-THF) are necessary for maintaining vital aspects of plant metabolism, the vitamer profile is less important to the human diet, as humans can absorb and interconvert multiple forms of folate. The current trial indicates that it is possible to store choy sum for up to 3 weeks at 4 °C without significantly affecting the total folate concentration of the edible portion.
Abstract:
Curves are a common feature of road infrastructure; however, crashes on road curves are associated with increased risk of injury and fatality to vehicle occupants. Countermeasures require the identification of contributing factors. However, current approaches to identifying contributors use traditional statistical methods and have not used self-reported narrative claims to identify factors related to the driver, vehicle and environment in a systemic way. Text mining of 3434 road-curve crash claim records filed between 1 January 2003 and 31 December 2005 at a major insurer in Queensland, Australia, was undertaken to identify risk levels and contributing factors. Rough set analysis was used on insurance claim narratives to identify significant contributing factors to crashes and their associated severity. New contributing factors unique to curve crashes were identified (e.g., tree, phone, over-steer) in addition to those previously identified via traditional statistical analysis of Police and licensing authority records. Text mining is a novel methodology to improve knowledge of risk and contributing factors to road-curve crash severity. Future road-curve crash countermeasures should more fully consider the interrelationships between the environment, the road, the driver and the vehicle, and education campaigns in particular could highlight the increased risk of crashes on road curves.
Abstract:
- Provided a practical variable-stepsize implementation of the exponential Euler method (EEM).
- Introduced a new second-order variant of the scheme that enables the local error to be estimated at the cost of a single additional function evaluation.
- The new EEM implementation outperformed sophisticated implementations of the backward differentiation formulae (BDF) of order 2 and was competitive with BDF of order 5 for moderate to high tolerances.
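The basic exponential Euler step underlying this summary can be sketched for a scalar semilinear ODE. This fixed-step sketch omits the paper's variable-stepsize control and second-order error estimator:

```python
import math

def phi1(z):
    """phi_1(z) = (exp(z) - 1) / z, with the z -> 0 limit handled."""
    return 1.0 if abs(z) < 1e-12 else (math.exp(z) - 1.0) / z

def exponential_euler(lam, g, y0, t0, t1, n):
    """Fixed-step exponential Euler for the scalar semilinear ODE
    y' = lam * y + g(t, y): the stiff linear part lam * y is treated
    exactly via the matrix (here scalar) exponential, and the
    remainder g via the phi_1 function.
    """
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = math.exp(h * lam) * y + h * phi1(h * lam) * g(t, y)
        t += h
    return y

# For a purely linear problem y' = lam * y (g = 0) the scheme is exact,
# regardless of step size -- the property that makes EEM attractive
# for stiff problems where BDF must take small steps:
y_num = exponential_euler(-3.0, lambda t, y: 0.0, 1.0, 0.0, 1.0, 10)
# y_num equals exp(-3) up to rounding.
```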
Abstract:
This thesis, which consists of an introduction and four peer-reviewed original publications, studies the problems of haplotype inference (haplotyping) and local alignment significance. The problems studied here belong to the broad area of bioinformatics and computational biology. The presented solutions are computationally fast and accurate, which makes them practical in high-throughput sequence data analysis. Haplotype inference is a computational problem where the goal is to estimate haplotypes from a sample of genotypes as accurately as possible. This problem is important because the direct measurement of haplotypes is difficult, whereas genotypes are easier to quantify. Haplotypes are the key players when studying, for example, the genetic causes of diseases. In this thesis, three methods are presented for the haplotype inference problem, referred to as HaploParser, HIT, and BACH. HaploParser is based on a combinatorial mosaic model and hierarchical parsing that together mimic recombinations and point mutations in a biologically plausible way. In this mosaic model, the current population is assumed to have evolved from a small founder population. Thus, the haplotypes of the current population are recombinations of the (implicit) founder haplotypes with some point mutations. HIT (Haplotype Inference Technique) uses a hidden Markov model for haplotypes, and efficient algorithms are presented to learn this model from genotype data. The model structure of HIT is analogous to the mosaic model of HaploParser with founder haplotypes. Therefore, it can be seen as a probabilistic model of recombinations and point mutations. BACH (Bayesian Context-based Haplotyping) utilizes a context tree weighting algorithm to efficiently sum over all variable-length Markov chains to evaluate the posterior probability of a haplotype configuration. Algorithms are presented that find haplotype configurations with high posterior probability.
BACH is the most accurate method presented in this thesis and has performance comparable to the best available software for haplotype inference. Local alignment significance is a computational problem where one is interested in whether the local similarities in two sequences are due to the sequences being related or merely to chance. Similarity of sequences is measured by their best local alignment score, and from that a p-value is computed. This p-value is the probability of picking two sequences from the null model that have an equally good or better best local alignment score. Local alignment significance is used routinely, for example, in homology searches. In this thesis, a general framework is sketched that allows one to compute a tight upper bound for the p-value of a local pairwise alignment score. Unlike previous methods, the presented framework is not affected by so-called edge effects and can handle gaps (deletions and insertions) without troublesome sampling and curve fitting.
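The best-local-alignment score to which such p-values are assigned is the Smith-Waterman score. A minimal sketch, with arbitrary illustrative scoring parameters (the thesis's p-value framework itself is not reproduced here):

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Best local alignment score of strings a and b (Smith-Waterman):
    dynamic programming where each cell is clamped at 0, so the score
    of the best-scoring local region anywhere in the matrix is returned.
    """
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best

# "ACGT" occurs exactly inside "AACGTT", so the best local alignment
# is a 4-character exact match scoring 4 * match = 8:
score = smith_waterman("ACGT", "AACGTT")
```

A significance method then asks how often two unrelated (null-model) sequences would achieve a score this high or higher.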