939 results for Non-gaussian Random Functions
Abstract:
Several methods based on Kriging have recently been proposed for calculating a probability of failure involving costly-to-evaluate functions. A closely related problem is to estimate the set of inputs leading to a response exceeding a given threshold. Now, estimating such a level set—and not solely its volume—and quantifying uncertainties on it are not straightforward. Here we use notions from random set theory to obtain an estimate of the level set, together with a quantification of estimation uncertainty. We give explicit formulae in the Gaussian process set-up and provide a consistency result. We then illustrate how space-filling versus adaptive design strategies may sequentially reduce level set estimation uncertainty.
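A minimal sketch of the kind of quantity involved (not the paper's random-set estimator): fit a GP to a handful of evaluations of a placeholder function, then form a plug-in estimate of the level set together with pointwise exceedance probabilities. The simulator `f`, the threshold `T`, and the design are all hypothetical.

```python
# Plug-in level-set estimate and pointwise exceedance probabilities
# from a fitted Gaussian process (illustrative sketch only).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x) + 0.5 * x          # hypothetical costly simulator
X = rng.uniform(0, 3, size=(15, 1))            # small space-filling design
y = f(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6).fit(X, y)

T = 1.0                                        # threshold defining {x : f(x) > T}
grid = np.linspace(0, 3, 200).reshape(-1, 1)
m, s = gp.predict(grid, return_std=True)

p_exceed = norm.cdf((m - T) / np.maximum(s, 1e-12))  # P(f(x) > T | data), pointwise
plug_in_set = grid[m > T]                             # plug-in level-set estimate
membership_uncertainty = p_exceed * (1 - p_exceed)    # largest where membership is ambiguous
```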
Abstract:
Two regions in the 3′ domain of 16S rRNA (the RNA of the small ribosomal subunit) have been implicated in decoding of termination codons. Using segment-directed PCR random mutagenesis, I isolated 33 translational suppressor mutations in the 3′ domain of 16S rRNA. Characterization of the mutations by both genetic and biochemical methods indicated that some of the mutations are defective in UGA-specific peptide chain termination and that others may be defective in peptide chain termination at all termination codons. Studies of the mutations at an internal loop of helix 44 also indicated that this structure, in a non-conserved region of 16S rRNA, is involved in both peptide chain termination and assembly of 16S rRNA. With a suppressible trpA UAG nonsense mutation, a spontaneously arising translational suppressor mutation was isolated in the rrnB operon cloned into a pBR322-derived plasmid. The mutation caused suppression of UAG at two codon positions in trpA but did not suppress UAA or UGA mutations at the same trpA positions. The specificity of the rRNA suppressor mutation suggests that it may cause a defect in UAG-specific peptide chain termination. The mutation is a single nucleotide deletion (G2484Δ) in helix 89 of 23S rRNA (the RNA of the large ribosomal subunit). The result indicates a functional interaction between two regions of 23S rRNA. Furthermore, it provides suggestive in vivo evidence for the involvement of the peptidyl-transferase center of 23S rRNA in peptide chain termination. The Δ2484 and A1093/Δ2484 (double) mutations were also observed to alter the decoding specificity of the suppressor tRNA lysT(U70), which has a mutation in its acceptor stem. That result suggests that there is an interaction between the stem-loop region of helix 89 of 23S rRNA and the acceptor stem of tRNA during decoding and that this interaction is important for the decoding specificity of tRNA. Using gene manipulation procedures, I constructed a new expression vector to express and purify the cellular protein factors required for a recently developed, realistic in vitro termination assay. The gene for each protein was cloned into the newly constructed vector in such a way that expression yielded a protein with an N-terminal affinity tag for specific, rapid purification. The amino terminus was engineered so that, after purification, the unwanted N-terminal tag can be completely removed from the protein by thrombin cleavage, yielding the natural amino acid sequence of each protein. I have cloned the genes for EF-G and all three release factors into this new expression vector and the genes for all the other protein factors into a pCAL-n expression vector. These constructs will allow our laboratory group to quickly and inexpensively purify all the protein factors needed for the new in vitro termination assay. (Abstract shortened by UMI.)
Abstract:
The annular beam divergence produced by the optical reorientation induced in nematics by a Gaussian beam is well known. Recent works have found a new effect in colored liquid crystals (MBBA, Phase V, ...) showing a similar spatial distribution: a new set of randomly oscillating rings appears for light intensities above a certain threshold. The beam divergence due to this effect is greater than that induced by molecular reorientation.
Abstract:
Over four hundred years ago, Sir Walter Raleigh asked his mathematical assistant to find formulas for the number of cannonballs in regularly stacked piles. These investigations aroused the curiosity of the astronomer Johannes Kepler and led to a problem that went centuries without a solution: why is the familiar cannonball stack the most efficient arrangement possible? Here we discuss the solution that Hales found in 1998; almost every part of the 282-page proof relies on long computer verifications. Random matrix theory was developed by physicists to describe the spectra of complex nuclei. In particular, the statistical fluctuations of the eigenvalues ("the energy levels") follow certain universal laws based on symmetry types. We describe these laws and then discuss their remarkable appearance for the zeros of the Riemann zeta function (the generating function for the prime numbers, and the last special function from the last century that is not understood today). Explaining this phenomenon is a central problem. These topics are distinct, so we present them separately with their own introductory remarks.
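For concreteness, one of the universal laws alluded to above is the GUE pair-correlation density, which Montgomery conjectured also governs the normalized spacings of zeros of the zeta function (quoted here as standard background, not as part of the abstract's argument):

```latex
R_2(u) = 1 - \left(\frac{\sin \pi u}{\pi u}\right)^{2}
```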
Abstract:
In this work we present an analysis of non-slanted reflection gratings using the exact solution, in terms of Mathieu functions, of the second-order differential equation derived from Maxwell's equations. The results obtained with this method are compared to those obtained with Kogelnik's well-known Coupled Wave Theory, which predicts with great accuracy the efficiency of the zero and first orders for volume phase gratings, both reflection and transmission.
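For reference, a sketch of the Kogelnik benchmark being compared against: for a lossless, unslanted grating at Bragg incidence, with index modulation n1, thickness d, wavelength lambda and internal Bragg angle theta, the textbook coupled-wave efficiencies of the first order are (standard forms stated here under assumed notation, not quoted from the paper):

```latex
\eta_{\mathrm{transmission}} = \sin^{2}\!\left(\frac{\pi n_{1} d}{\lambda \cos\theta}\right),
\qquad
\eta_{\mathrm{reflection}} = \tanh^{2}\!\left(\frac{\pi n_{1} d}{\lambda \cos\theta}\right)
```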
Estimation of pharmacokinetic parameters from non-compartmental variables using Microsoft Excel®
Abstract:
This study was conducted to develop a method, termed 'back analysis' (BA), for converting non-compartmental variables to compartment-model-dependent pharmacokinetic parameters for both one- and two-compartment models. A Microsoft Excel® spreadsheet was implemented with the use of Solver® and Visual Basic functions. The performance of the BA method in estimating pharmacokinetic parameter values was evaluated by comparing the estimates to those obtained with a standard modelling software program, NONMEM, using simulated data. The results show that the BA method was reasonably precise and provided low bias in estimating fixed and random effect parameters for both one- and two-compartment models. The pharmacokinetic parameters estimated by the BA method were similar to those of the NONMEM estimation.
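As a minimal illustration of the kind of conversion the BA method automates (a one-compartment IV-bolus case; this is not the paper's Excel/Solver workbook, and the numbers are made up), non-compartmental AUC and terminal half-life map onto clearance, elimination rate constant and volume of distribution as follows:

```python
# Hypothetical example: one-compartment IV bolus, converting
# non-compartmental variables (AUC, terminal half-life) to
# compartmental parameters (CL, ke, V).
import math

def one_compartment_from_nca(dose, auc, t_half):
    """dose in mg, auc in mg*h/L, t_half in h."""
    cl = dose / auc            # clearance, L/h
    ke = math.log(2) / t_half  # elimination rate constant, 1/h
    v = cl / ke                # volume of distribution, L
    return cl, ke, v

cl, ke, v = one_compartment_from_nca(dose=100.0, auc=25.0, t_half=4.0)
print(f"CL = {cl:.2f} L/h, ke = {ke:.3f} 1/h, V = {v:.1f} L")
```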
Abstract:
Gaussian processes provide natural non-parametric prior distributions over regression functions. In this paper we consider regression problems where there is noise on the output, and the variance of the noise depends on the inputs. If we assume that the noise is a smooth function of the inputs, then it is natural to model the noise variance using a second Gaussian process, in addition to the Gaussian process governing the noise-free output value. We show that prior uncertainty about the parameters controlling both processes can be handled and that the posterior distribution of the noise rate can be sampled from using Markov chain Monte Carlo methods. Our results on a synthetic data set give a posterior noise variance that approximates the true variance well.
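A rough generative sketch of the model structure just described, with two independent GP priors, one for the regression function and one for the log noise variance (the kernels, hyperparameters and the MCMC inference itself are placeholder choices, not the paper's):

```python
# Generative sketch: y(x) = f(x) + eps(x), with
# f ~ GP(0, K_f) and log noise variance g ~ GP(mu_g, K_g).
import numpy as np

def rbf_kernel(x, lengthscale, variance):
    d = x[:, None] - x[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 100)

K_f = rbf_kernel(x, lengthscale=0.2, variance=1.0) + 1e-8 * np.eye(100)
K_g = rbf_kernel(x, lengthscale=0.5, variance=0.5) + 1e-8 * np.eye(100)

f = rng.multivariate_normal(np.zeros(100), K_f)         # noise-free function
g = rng.multivariate_normal(-3.0 * np.ones(100), K_g)   # log noise variance
y = f + rng.normal(0.0, np.sqrt(np.exp(g)))             # input-dependent noise
```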
Abstract:
This report outlines the derivation and application of a non-zero-mean Gaussian process with a polynomial-exponential covariance function, which forms the prior wind field model used in 'autonomous' disambiguation. It is used principally because the non-zero mean permits the computation of realistic local wind vector prior probabilities, required when applying the scaled-likelihood trick, as the marginals of the full wind field prior. As the full prior is multivariate normal, these marginals are very simple to compute.
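The simplicity claimed in the last sentence is just the marginalisation property of the multivariate normal: if the joint wind-field prior over all grid sites is N(m, K), the local prior needed for the scaled-likelihood trick at site i is read off directly from the corresponding sub-vector and sub-block (generic notation, not the report's):

```latex
\mathbf{u} \sim \mathcal{N}(\mathbf{m}, K)
\quad\Longrightarrow\quad
\mathbf{u}_i \sim \mathcal{N}(\mathbf{m}_i,\; K_{ii})
```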
Abstract:
In recent years there has been increased interest in applying non-parametric methods to real-world problems. Significant research has been devoted to Gaussian processes (GPs) due to their increased flexibility compared with parametric models. These methods use Bayesian learning, which generally leads to analytically intractable posteriors. This thesis proposes a two-step solution to construct a probabilistic approximation to the posterior. In the first step we adapt Bayesian online learning to GPs: the final approximation to the posterior is the result of propagating the first and second moments of intermediate posteriors obtained by combining a new example with the previous approximation. The propagation of functional forms is solved by showing the existence of a parametrisation of the posterior moments that uses combinations of the kernel function at the training points, transforming the Bayesian online learning of functions into a parametric formulation. The drawback is the prohibitive quadratic scaling of the number of parameters with the size of the data, making the method inapplicable to large datasets. The second step solves the problem of the exploding parameter size and makes GPs applicable to arbitrarily large datasets. The approximation is based on a measure of distance between two GPs, the KL-divergence between GPs. This second approximation uses a constrained GP in which only a small subset of the whole training dataset is used to represent the GP. This subset is called the Basis Vector, or BV, set, and the resulting GP is a sparse approximation to the true posterior. As this sparsity is based on KL-minimisation, it is probabilistic and independent of the way the posterior approximation from the first step is obtained. We combine the sparse approximation with an extension to the Bayesian online algorithm that allows multiple iterations for each input, thus approximating a batch solution. The resulting sparse learning algorithm is generic: for different problems we only change the likelihood. The algorithm is applied to a variety of problems, and we examine its performance both on more classical regression and classification tasks and on data assimilation and a simple density estimation problem.
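Schematically, the parametrisation referred to in the first step expresses the approximate posterior GP entirely through kernel evaluations at the m retained points (the BV set), with coefficients alpha and a matrix C maintained online (a sketch of the standard sparse online GP form; the update rules are omitted):

```latex
\hat{\mu}(x) = \sum_{i=1}^{m} \alpha_i\, K(x, x_i),
\qquad
\hat{\Sigma}(x, x') = K(x, x') + \sum_{i,j=1}^{m} K(x, x_i)\, C_{ij}\, K(x_j, x')
```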
Abstract:
In this paper we develop a set of novel Markov chain Monte Carlo algorithms for Bayesian smoothing of partially observed non-linear diffusion processes. The sampling algorithms developed herein use a deterministic approximation to the posterior distribution over paths as the proposal distribution for a mixture of an independence sampler and a random walk sampler. The approximating distribution is sampled by simulating an optimized time-dependent linear diffusion process derived from the recently developed variational Gaussian process approximation method. Flexible blocking strategies are introduced to further improve the mixing, and thus the efficiency, of the sampling algorithms. The algorithms are tested on two diffusion processes: one with a double-well potential drift and another with a sine drift. The new algorithms' accuracy and efficiency are compared with state-of-the-art hybrid Monte Carlo based path sampling. It is shown that in practical, finite-sample applications the algorithms are accurate except in the presence of large observation errors and low observation densities, which lead to a multi-modal structure in the posterior distribution over paths. More importantly, the variational-approximation-assisted sampling algorithm outperforms hybrid Monte Carlo in terms of computational efficiency, except when the diffusion process is densely observed with small errors, in which case both algorithms are equally efficient.
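For concreteness, a minimal Euler-Maruyama simulation of a double-well diffusion of the kind used as the first test process (the drift 4x(1 - x^2), the noise level and the step size below are illustrative choices, not the paper's):

```python
# Euler-Maruyama simulation of a double-well diffusion:
# dX_t = 4 X_t (1 - X_t^2) dt + sigma dW_t  (illustrative drift and parameters)
import numpy as np

def simulate_double_well(x0=0.0, sigma=0.5, dt=1e-3, n_steps=50_000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        drift = 4.0 * x[k] * (1.0 - x[k] ** 2)   # double-well potential drift
        x[k + 1] = x[k] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

path = simulate_double_well()
# The path switches intermittently between the wells at x = -1 and x = +1,
# which is what can make the smoothing posterior over paths multi-modal.
```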