973 results for Approximations
Abstract:
Utility functions in Bayesian experimental design are usually based on the posterior distribution. When the posterior is found by simulation, it must be sampled from for each future data set drawn from the prior predictive distribution, so many thousands of posterior distributions are often required. A popular technique in the Bayesian experimental design literature for rapidly obtaining samples from the posterior is importance sampling, using the prior as the importance distribution. However, importance sampling tends to break down when there are more than a few experimental observations and/or the model parameter is high-dimensional. In this paper we explore the use of Laplace approximations in the design setting to overcome this drawback. Furthermore, we consider using the Laplace approximation to form the importance distribution, obtaining a more efficient importance distribution than the prior. The methodology is motivated by a pharmacokinetic study which investigates the effect of extracorporeal membrane oxygenation on the pharmacokinetics of antibiotics in sheep. The design problem is to find 10 near-optimal plasma sampling times that produce precise estimates of pharmacokinetic model parameters/measures of interest. We consider several different utility functions of interest in these studies, which involve the posterior distribution of parameter functions.
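As a rough illustration of the idea above, the following sketch (hypothetical function names; not the authors' code) fits a Laplace approximation to a posterior and uses it as the importance distribution in place of the prior:

    # Minimal sketch, assuming a user-supplied log_posterior(theta) function;
    # the names below are illustrative, not taken from the paper.
    import numpy as np
    from scipy import optimize, stats

    def laplace_approximation(log_posterior, theta_init):
        # Gaussian N(mode, Sigma), with Sigma taken from the inverse Hessian at the mode.
        res = optimize.minimize(lambda th: -log_posterior(th), theta_init, method="BFGS")
        return res.x, res.hess_inv

    def importance_sample(log_posterior, mode, cov, n_draws=1000):
        # Draw from the Laplace approximation and self-normalise the weights.
        proposal = stats.multivariate_normal(mean=mode, cov=cov)
        theta = proposal.rvs(size=n_draws)
        log_w = np.array([log_posterior(t) for t in theta]) - proposal.logpdf(theta)
        w = np.exp(log_w - log_w.max())
        return theta, w / w.sum()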
Abstract:
Spreading cell fronts play an essential role in many physiological processes. Classically, models of this process are based on the Fisher-Kolmogorov equation; however, such continuum representations are not always suitable as they do not explicitly represent behaviour at the level of individual cells. Additionally, many models examine only the large-time asymptotic behaviour, where a travelling wave front with a constant speed has been established. Many experiments, such as a scratch assay, never display this asymptotic behaviour, and in these cases the transient behaviour must be taken into account. We examine the transient and asymptotic behaviour of moving cell fronts using techniques that go beyond the continuum approximation, via a volume-excluding birth-migration process on a regular one-dimensional lattice. We approximate the averaged discrete results using three methods: (i) mean-field, (ii) pair-wise, and (iii) one-hole approximations. We discuss the performance of these methods, in comparison to the averaged discrete results, across a range of parameter space, examining both the transient and asymptotic behaviours. The one-hole approximation, based on techniques from statistical physics, is not capable of predicting transient behaviour but provides excellent agreement with the asymptotic behaviour of the averaged discrete results, provided that cells are proliferating fast enough relative to their rate of migration. The mean-field and pair-wise approximations give indistinguishable asymptotic results, which agree with the averaged discrete results when cells are migrating much more rapidly than they are proliferating. The pair-wise approximation performs better in the transient region than the mean-field approximation, despite having the same asymptotic behaviour. Our results show that each approximation only works in specific situations, so a suitable approximation must be chosen for a given system; otherwise inaccurate predictions could be made.
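For orientation, the mean-field (continuum) limit of such a volume-excluding birth-migration process is usually written as a Fisher-Kolmogorov equation (notation assumed here, not taken from the paper):

\[ \frac{\partial C}{\partial t} = D\,\frac{\partial^2 C}{\partial x^2} + \lambda\, C\,(1 - C), \]

where C(x,t) is the average lattice occupancy, D is a diffusivity set by the migration rate and lattice spacing, and λ is the proliferation rate; the pair-wise and one-hole approximations retain correlation information that this mean-field description discards.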
Abstract:
This paper presents a new algorithm based on honey-bee mating optimization (HBMO) to estimate harmonic state variables in distribution networks that include distributed generators (DGs). The proposed algorithm estimates both the amplitude and phase of each harmonic by minimizing the error between the values measured by phasor measurement units (PMUs) and the values computed from the estimated parameters during the estimation process. Simulation results on two distribution test systems demonstrate that the proposed distribution harmonic state estimation (DHSE) algorithm is considerably faster and more accurate than conventional algorithms such as weighted least squares (WLS), the genetic algorithm (GA) and tabu search (TS).
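For comparison, the weighted least squares baseline mentioned above reduces to a standard normal-equations solve; a minimal sketch with assumed notation (linearised measurement model z ≈ H x, weight matrix W; not the paper's code) is:

    import numpy as np

    def wls_estimate(H, W, z):
        # Minimise (z - H x)^T W (z - H x) over the state vector x.
        return np.linalg.solve(H.T @ W @ H, H.T @ W @ z)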
Abstract:
Density functional calculations of the electronic band structure of superconducting and semiconducting metal hexaborides are compared using a consistent suite of assumptions, with emphasis on the physical implications of the computed models. Spin polarization improves the accuracy of the functional approximations and adds significant physical meaning to the interpretation of the models. For YB6 and LaB6, differences between the alpha and beta spin projections occur near the Fermi energy. These differences are pronounced for superconducting hexaborides but do not occur for other metal hexaborides.
Abstract:
The Quantum Probability Ranking Principle (QPRP) has recently been proposed; it accounts for interdependent document relevance when ranking. However, to be instantiated, the QPRP requires a method to approximate the "interference" between two documents. In this poster, we empirically evaluate a number of different approximation methods on two TREC test collections for subtopic retrieval. It is shown that these approximations can lead to significantly better retrieval performance than the state of the art.
Abstract:
In this thesis we investigate the use of quantum probability theory for ranking documents. Quantum probability theory is used to estimate the probability of relevance of a document given a user's query. We posit that quantum probability theory can lead to a better estimation of the probability of a document being relevant to a user's query than the common approach, i.e. the Probability Ranking Principle (PRP), which is based upon Kolmogorovian probability theory. Following our hypothesis, we formulate an analogy between the document retrieval scenario and a physical scenario, that of the double slit experiment. Through the analogy, we propose a novel ranking approach, the quantum probability ranking principle (qPRP). Key to our proposal is the presence of quantum interference. Mathematically, this is the statistical deviation between empirical observations and the expected values predicted by the Kolmogorovian rule of additivity of probabilities of disjoint events in configurations such as that of the double slit experiment. We propose an interpretation of quantum interference in the document ranking scenario, and examine how quantum interference can be effectively estimated for document retrieval. To validate our proposal and to gain more insights about approaches for document ranking, we (1) analyse PRP, qPRP and other ranking approaches, exposing the assumptions underlying their ranking criteria and formulating the conditions for the optimality of the two ranking principles, (2) empirically compare three ranking principles (i.e. PRP, interactive PRP, and qPRP) and two state-of-the-art ranking strategies in two retrieval scenarios, those of ad-hoc retrieval and diversity retrieval, (3) analytically contrast the ranking criteria of the examined approaches, exposing similarities and differences, and (4) study the ranking behaviours of approaches alternative to PRP in terms of the kinematics they impose on relevant documents, i.e. by considering the extent and direction of the movements of relevant documents across the ranking, recorded when comparing PRP against its alternatives. Our findings show that the effectiveness of the examined ranking approaches strongly depends upon the evaluation context. In the traditional evaluation context of ad-hoc retrieval, PRP is empirically shown to be better than or comparable to alternative ranking approaches. However, when we turn to evaluation contexts that account for interdependent document relevance (i.e. when the relevance of a document is assessed also with respect to other retrieved documents, as is the case in the diversity retrieval scenario), the use of quantum probability theory, and thus of qPRP, is shown to improve retrieval and ranking effectiveness over the traditional PRP and alternative ranking strategies, such as Maximal Marginal Relevance, Portfolio theory, and Interactive PRP. This work represents a significant step forward regarding the use of quantum theory in information retrieval. It demonstrates that the application of quantum theory to problems within information retrieval can lead to improvements both in modelling power and retrieval effectiveness, allowing the construction of models that capture the complexity of information retrieval situations. Furthermore, the thesis opens up a number of lines for future research.
These include: (1) investigating estimations and approximations of quantum interference in qPRP; (2) exploiting complex numbers for the representation of documents and queries; and (3) applying the concepts underlying qPRP to tasks other than document ranking.
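As a hedged sketch of the interference referred to above (notation assumed here, not taken from the thesis): for two events with Kolmogorovian probabilities p_A and p_B, the quantum rule adds an interference term,

\[ p_{A \cup B} = p_A + p_B + 2\sqrt{p_A\,p_B}\,\cos\theta_{AB}, \]

so the deviation from classical additivity is governed by the phase θ_{AB}; in the document-ranking analogy this term is what qPRP estimates in order to capture interdependent document relevance.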
Abstract:
This brief paper provides a novel derivation of the known asymptotic values of three-dimensional (3D) added mass and damping of marine structures in waves. The derivation is based on the properties of the convolution terms in Cummins's equation as derived by Ogilvie. The new derivation is simple, and no approximations or series expansions are made. The results follow directly from the relative degree and low-frequency asymptotic properties of the rational representation of the convolution terms in the frequency domain. As an application, the extrapolation of damping values at high frequencies for the computation of retardation functions is also discussed.
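For context, Cummins's equation and Ogilvie's frequency-domain relations referred to above take the following standard form (notation assumed here):

\[ (M + A_\infty)\,\ddot{\xi}(t) + \int_0^{t} K(t-\tau)\,\dot{\xi}(\tau)\,\mathrm{d}\tau + C\,\xi(t) = f(t), \]
\[ A(\omega) = A_\infty - \frac{1}{\omega}\int_0^{\infty} K(t)\sin(\omega t)\,\mathrm{d}t, \qquad B(\omega) = \int_0^{\infty} K(t)\cos(\omega t)\,\mathrm{d}t, \]

where K is the retardation (convolution) kernel, A(ω) and B(ω) are the frequency-dependent added mass and damping, and A_∞ is the infinite-frequency added mass.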
Abstract:
NLS is one of the stream ciphers submitted to the eSTREAM project. We present a distinguishing attack on NLS using the Crossword Puzzle (CP) attack method introduced in this paper. We build the distinguisher by using linear approximations of both the non-linear feedback shift register (NFSR) and the nonlinear filter function (NLF). Since the bias of the distinguisher depends on the value of Konst, a key-dependent word, we present a graph showing how the bias of the distinguisher varies with Konst. As a result, we estimate the bias of the distinguisher to be around O(2^−30). Therefore, we claim that NLS is distinguishable from a truly random cipher after observing O(2^60) keystream words. The experiments also show that our distinguishing attack is successful for 90.3% of the 2^32 possible Konst values. We extend the CP attack to NLSv2, which is a tweaked version of NLS. As a result, we build a distinguisher with a bias of around 2^−48. Even though this attack is below the eSTREAM criterion (2^−40), the security margin of NLSv2 seems to be too low.
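The data complexities quoted above follow the usual rule of thumb for linear distinguishers: a bias of ε requires on the order of ε^{-2} samples, so

\[ \varepsilon \approx 2^{-30} \;\Rightarrow\; N \approx \left(2^{-30}\right)^{-2} = 2^{60} \text{ keystream words}. \]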
Abstract:
We have developed a technique that circumvents the process of elimination of secular terms and reproduces the uniformly valid approximations, amplitude equations, and first integrals. The technique is based on a rearrangement of secular terms and their grouping into the secular series that multiplies the constants of the asymptotic expansion. We illustrate the technique by deriving amplitude equations for standard nonlinear oscillator and boundary-layer problems. © 2008 The American Physical Society.
Abstract:
The effect of ambipolar fluxes on nanoparticle charging in a typical low-pressure parallel-plate glow discharge is considered. It is shown that the equilibrium values of the nanoparticle charge in the plasma bulk and in the near-electrode areas are strongly affected by the ratio S of the ambipolar flux velocity to the ion thermal velocity. Under typical experimental conditions this ratio satisfies neither S ≪ 1 nor S ≫ 1, which often renders the commonly used approximations of purely thermal or "ion wind" ion charging currents inaccurate. By using a general approximation for the ambipolar drift-affected ion flux onto the nanoparticle surface, it is possible to obtain more accurate values of the nanoparticle charge, which usually deviate by 10-25% from the values obtained without properly accounting for the ambipolar ion fluxes. The implications of the results for glow discharge modeling and nanoparticle manipulation in low-pressure plasmas are discussed.
Abstract:
NLS is a stream cipher which was submitted to the eSTREAM project. A linear distinguishing attack against NLS, called the Crossword Puzzle (CP) attack, was presented by Cho and Pieprzyk. NLSv2 is a tweaked version of NLS which aims mainly at avoiding the CP attack. In this paper, a new distinguishing attack against NLSv2 is presented. The attack exploits high correlation amongst neighboring bits of the cipher. The paper first shows that modular addition preserves pairwise correlations, as demonstrated by the existence of linear approximations with large biases. Next, it shows how to combine these results with the high correlation between bits 29 and 30 of the S-box to obtain a distinguisher whose bias is around 2^−37. Consequently, we claim that NLSv2 is distinguishable from a random cipher after observing around 2^74 keystream words.
Contrast transfer function correction applied to cryo-electron tomography and sub-tomogram averaging
Abstract:
Cryo-electron tomography together with averaging of sub-tomograms containing identical particles can reveal the structure of proteins or protein complexes in their native environment. The resolution of this technique is limited by the contrast transfer function (CTF) of the microscope. The CTF is not routinely corrected in cryo-electron tomography because of difficulties with both CTF detection, due to the low signal-to-noise ratio, and CTF correction, since the images are characterised by a spatially variant CTF. Here we simulate the effects of the CTF on the resolution of the final reconstruction, before and after CTF correction, and consider the effect of errors and approximations in defocus determination. We show that errors in defocus determination are well tolerated when correcting a series of tomograms collected at a range of defocus values. We apply methods for determining the CTF parameters in low signal-to-noise images of tilted specimens, for monitoring defocus changes using observed magnification changes, and for correcting the CTF prior to reconstruction. Using bacteriophage PRD1 as a test sample, we demonstrate that this approach improves the structure obtained by sub-tomogram averaging from cryo-electron tomograms.
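For reference, the weak-phase-object form of the CTF that such corrections target can be written as follows (sign conventions vary; amplitude contrast neglected; notation assumed here):

\[ \mathrm{CTF}(k) = -\sin\!\left(\pi \lambda\, \Delta z\, k^{2} - \frac{\pi}{2}\, C_s\, \lambda^{3} k^{4}\right), \]

where k is spatial frequency, λ the electron wavelength, Δz the defocus and C_s the spherical aberration coefficient; the defocus, and hence the CTF, varies across a tilted specimen, which is why per-region defocus determination matters.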
Abstract:
In the Bayesian framework a standard approach to model criticism is to compare some function of the observed data to a reference predictive distribution. The result of the comparison can be summarized in the form of a p-value, and it is well known that computing some kinds of Bayesian predictive p-values can be challenging. The use of regression adjustment approximate Bayesian computation (ABC) methods is explored for this task. Two problems are considered. The first is the calibration of posterior predictive p-values so that they are uniformly distributed under some reference distribution for the data. Computation is difficult because the calibration process requires repeated approximation of the posterior for different data sets under the reference distribution. The second problem considered is approximation of distributions of prior predictive p-values for the purpose of choosing weakly informative priors in the case where the model-checking statistic is expensive to compute. Here the computation is difficult because of the need to repeatedly sample from a prior predictive distribution for different values of a prior hyperparameter. In both these problems we argue that high accuracy in the computations is not required, which makes fast approximations such as regression adjustment ABC very useful. We illustrate our methods with several examples.
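A minimal sketch of the linear regression adjustment step underlying such ABC methods (assumed array shapes and names; not the authors' code):

    # theta_sim: (n, p) simulated parameters; s_sim: (n, q) simulated summaries;
    # s_obs: (q,) observed summary statistic.
    import numpy as np

    def regression_adjust(theta_sim, s_sim, s_obs):
        # Local-linear fit of parameters on (summaries - observed summary),
        # then subtract the fitted trend so draws are adjusted towards s_obs.
        X = np.column_stack([np.ones(len(s_sim)), s_sim - s_obs])
        beta, *_ = np.linalg.lstsq(X, theta_sim, rcond=None)
        return theta_sim - (s_sim - s_obs) @ beta[1:]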
Abstract:
In this paper, a class of unconditionally stable difference schemes based on the Padé approximation is presented for the Riesz space-fractional telegraph equation. Firstly, we introduce a new variable to transform the original differential equation into an equivalent differential equation system. Then, we apply a second-order fractional central difference scheme to discretise the Riesz space-fractional operator. Finally, we use the (1, 1), (2, 2) and (3, 3) Padé approximations to give a fully discrete difference scheme for the resulting linear system of ordinary differential equations. Matrix analysis is used to show the unconditional stability of the proposed algorithms. Two examples with known exact solutions are chosen to assess the proposed difference schemes. Numerical results demonstrate that these schemes provide accurate and efficient methods for solving a space-fractional hyperbolic equation.
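As a brief illustration of how the Padé approximants enter (notation assumed here): after spatial discretisation the problem becomes a linear system du/dt = A u, whose exact step is u(t+τ) = e^{τA} u(t); replacing the exponential by its (1, 1) Padé approximant

\[ R_{1,1}(z) = \frac{1 + z/2}{1 - z/2} \quad\Longrightarrow\quad \Big(I - \tfrac{\tau}{2}A\Big)\,u^{n+1} = \Big(I + \tfrac{\tau}{2}A\Big)\,u^{n}, \]

gives a Crank-Nicolson-type scheme, while the (2, 2) and (3, 3) approximants yield higher-order rational schemes of the same implicit form.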
Abstract:
In this paper we present a new method for performing Bayesian parameter inference and model choice for low-count time series models with intractable likelihoods. The method incorporates an alive particle filter within a sequential Monte Carlo (SMC) algorithm to create a novel pseudo-marginal algorithm, which we refer to as alive SMC^2. The advantages of this approach over competing approaches are that it is naturally adaptive, it does not involve the between-model proposals required in reversible jump Markov chain Monte Carlo, and it does not rely on potentially rough approximations. The algorithm is demonstrated on Markov process and integer autoregressive moving average models applied to real biological datasets of hospital-acquired pathogen incidence, animal health time series and the cumulative number of prion disease cases in mule deer.