903 results for Gaussian curvature
Abstract:
We present a kinetic model for transformations between different self-assembled lipid structures. The model shows how data on the rates of phase transitions between mesophases of different geometries can be used to provide information on the mechanisms of the transformations and the transition states involved. This can be used, for example, to gain insight into intermediate structures in cell membrane fission or fusion. In cases where the monolayer curvature changes on going from the initial to the final mesophase, we consider the phase transition to be driven primarily by the change in the relaxed curvature with pressure or temperature, which alters the relative curvature elastic energies of the two mesophase structures. Using this model, we have analyzed previously published kinetic data on the interconversion of inverse bicontinuous cubic phases in the 1-monoolein/30 wt% water system. The data are for a transition between the QII(G) and QII(D) phases, and our analysis indicates that the transition state more closely resembles the QII(D) phase than the QII(G) phase. Using estimated monolayer mean curvatures for the QII(G) and QII(D) phases of −0.123 nm⁻¹ and −0.133 nm⁻¹, respectively, gives a monolayer mean curvature for the transition state of between −0.131 nm⁻¹ and −0.132 nm⁻¹. Furthermore, we estimate that several thousand molecules undergo the phase transition cooperatively within one "cooperative unit", equivalent to 1–2 unit cells of QII(G) or 4–10 unit cells of QII(D).
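As an illustration of what these numbers imply, the quoted curvatures can be combined into a fractional position of the transition state along the curvature coordinate. The coordinate `alpha` below is our own illustrative construction, not a quantity defined in the abstract; the curvature values are taken directly from it.

```python
# Curvature values quoted in the abstract above (units: nm^-1).
H_G = -0.123   # monolayer mean curvature of the QII(G) phase
H_D = -0.133   # monolayer mean curvature of the QII(D) phase

def alpha(H_ts):
    """Fractional position of the transition state between the phases:
    0 would mean QII(G)-like, 1 would mean QII(D)-like. This coordinate
    is our illustrative construction, not a quantity from the abstract."""
    return (H_ts - H_G) / (H_D - H_G)

positions = [alpha(H) for H in (-0.131, -0.132)]  # reported range for H_ts
```

The resulting values, 0.8–0.9, put a number on the statement that the transition state more closely resembles the QII(D) phase.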
Gabor wavelets and Gaussian models to separate ground and non-ground for airborne scanned LIDAR data
Abstract:
Gaussian multi-scale representation is a mathematical framework that allows images to be analysed at different scales in a consistent manner, and derivatives to be handled in a way deeply connected to scale. This paper uses Gaussian multi-scale representation to investigate several aspects of the derivation of atmospheric motion vectors (AMVs) from water vapour imagery. The contribution of different spatial frequencies to the tracking is studied for a range of tracer sizes, and a number of tracer selection methods are presented and compared, using WV 6.2 images from the geostationary satellite MSG-2.
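As background, the core construction of a Gaussian multi-scale representation can be sketched in a few lines: smooth the data with Gaussians of increasing width, and take scale-normalised derivatives of the smoothed data. The following is our own minimal 1-D toy illustration, not the paper's AMV tracking code.

```python
import numpy as np

def gaussian_kernel(sigma):
    """Sampled 1-D Gaussian kernel, normalised to sum to one."""
    radius = int(4 * sigma)
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t**2 / (2.0 * sigma**2))
    return k / k.sum()

def scale_space(signal, sigmas):
    """Gaussian multi-scale representation of a 1-D signal:
    one Gaussian-smoothed copy per scale in sigmas."""
    return [np.convolve(signal, gaussian_kernel(s), mode="same") for s in sigmas]

def scale_derivative(signal, sigma):
    """Scale-normalised first derivative at scale sigma: smoothing
    commutes with differentiation, so differentiating the smoothed
    signal equals convolving with a Gaussian derivative kernel."""
    smoothed = np.convolve(signal, gaussian_kernel(sigma), mode="same")
    return sigma * np.gradient(smoothed)
```

Coarser scales suppress high spatial frequencies, which is what makes the per-frequency analysis of the tracking possible.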
Abstract:
Radial basis function networks can be trained quickly using linear optimisation once the centres and other associated parameters have been initialised. The authors propose a small adjustment to a well-accepted initialisation algorithm which improves network accuracy over a range of problems. The algorithm is described and results are presented.
Abstract:
A new incremental four-dimensional variational (4D-Var) data assimilation algorithm is introduced. The algorithm does not require the computationally expensive integrations with the nonlinear model in the outer loops. Nonlinearity is accounted for by modifying the linearization trajectory of the observation operator based on integrations with the tangent linear (TL) model. This allows us to update the linearization trajectory of the observation operator in the inner loops at negligible computational cost. As a result the distinction between inner and outer loops is no longer necessary. The key idea on which the proposed 4D-Var method is based is that by using Gaussian quadrature it is possible to get an exact correspondence between the nonlinear time evolution of perturbations and the time evolution in the TL model. It is shown that J-point Gaussian quadrature can be used to derive the exact adjoint-based observation impact equations and furthermore that it is straightforward to account for the effect of multiple outer loops in these equations if the proposed 4D-Var method is used. The method is illustrated using a three-level quasi-geostrophic model and the Lorenz (1996) model.
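The exactness property of Gaussian quadrature that this construction relies on is easy to demonstrate in isolation: a J-point Gauss–Legendre rule integrates polynomials of degree up to 2J − 1 exactly. A minimal standalone sketch (not the 4D-Var code itself):

```python
import numpy as np

def gauss_legendre_integrate(f, J):
    """Integrate f over [-1, 1] with a J-point Gauss-Legendre rule,
    which is exact for polynomials of degree up to 2J - 1."""
    nodes, weights = np.polynomial.legendre.leggauss(J)
    return float(np.sum(weights * f(nodes)))
```

With J = 3 the rule reproduces the integral of x⁴ (degree 4 ≤ 5) exactly, but not that of x⁶.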
Abstract:
Non-Gaussian/non-linear data assimilation is becoming an increasingly important area of research in the Geosciences as the resolution and non-linearity of models are increased and more and more non-linear observation operators are being used. In this study, we look at the effect of relaxing the assumption of a Gaussian prior on the impact of observations within the data assimilation system. Three different measures of observation impact are studied: the sensitivity of the posterior mean to the observations, mutual information and relative entropy. The sensitivity of the posterior mean is derived analytically when the prior is modelled by a simplified Gaussian mixture and the observation errors are Gaussian. It is found that the sensitivity is a strong function of the value of the observation and proportional to the posterior variance. Similarly, relative entropy is found to be a strong function of the value of the observation. However, the errors in estimating these two measures using a Gaussian approximation to the prior can differ significantly. This hampers conclusions about the effect of the non-Gaussian prior on observation impact. Mutual information does not depend on the value of the observation and is seen to be close to its Gaussian approximation. These findings are illustrated with the particle filter applied to the Lorenz ’63 system. This article is concluded with a discussion of the appropriateness of these measures of observation impact for different situations.
Abstract:
The analytical model proposed by Teixeira, Miranda, and Valente is modified to calculate the gravity wave drag exerted by a stratified flow over a 2D mountain ridge. The drag is found to be more strongly affected by the vertical variation of the background velocity than for an axisymmetric mountain. In the hydrostatic approximation, the corrections to the drag due to this effect do not depend on the detailed shape of the ridge as long as this is exactly 2D. Besides the drag, all the perturbed quantities of the flow at the surface, including the pressure, may be calculated analytically.
Abstract:
An analytical model is developed to predict the surface drag exerted by internal gravity waves on an isolated axisymmetric mountain over which there is a stratified flow with a velocity profile that varies relatively slowly with height. The model is linear with respect to the perturbations induced by the mountain, and solves the Taylor–Goldstein equation with variable coefficients using a Wentzel–Kramers–Brillouin (WKB) approximation, formally valid for high Richardson numbers, Ri. The WKB solution is extended to a higher order than in previous studies, enabling a rigorous treatment of the effects of shear and curvature of the wind profile on the surface drag. In the hydrostatic approximation, closed formulas for the drag are derived for generic wind profiles, where the relative magnitude of the corrections to the leading-order drag (valid for a constant wind profile) does not depend on the detailed shape of the orography. The drag is found to vary proportionally to Ri⁻¹, decreasing as Ri decreases for a wind that varies linearly with height, and increasing as Ri decreases for a wind that rotates with height maintaining its magnitude. In these two cases the surface drag is predicted to be aligned with the surface wind. When one of the wind components varies linearly with height and the other is constant, the surface drag is misaligned with the surface wind, especially for relatively small Ri. All these results are shown to be in fairly good agreement with numerical simulations of mesoscale nonhydrostatic models, for high and even moderate values of Ri.
Abstract:
We generalize the popular ensemble Kalman filter to an ensemble transform filter, in which the prior distribution can take the form of a Gaussian mixture or a Gaussian kernel density estimator. The design of the filter is based on a continuous formulation of the Bayesian filter analysis step. We call the new filter algorithm the ensemble Gaussian-mixture filter (EGMF). The EGMF is implemented for three simple test problems (Brownian dynamics in one dimension, Langevin dynamics in two dimensions and the three-dimensional Lorenz-63 model). It is demonstrated that the EGMF is capable of tracking systems with non-Gaussian uni- and multimodal ensemble distributions.
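The analysis step underlying a Gaussian-mixture filter of this kind can be sketched for a scalar state: a Gaussian kernel density prior (one kernel per ensemble member) multiplied by a Gaussian likelihood is again a Gaussian mixture, so the update is available in closed form. The sketch below is our own minimal illustration in the spirit of the EGMF, not the paper's implementation.

```python
import numpy as np

def gm_analysis(ensemble, h2, y, r):
    """Exact Bayesian analysis step for a scalar state with a Gaussian
    kernel density prior (one kernel of variance h2 centred on each
    ensemble member) and a Gaussian observation y with error variance r.
    The posterior is again a Gaussian mixture: each component gets the
    standard Kalman update, and the weights are the evidence of y under
    the corresponding kernel."""
    ensemble = np.asarray(ensemble, dtype=float)
    post_var = 1.0 / (1.0 / h2 + 1.0 / r)            # common component variance
    post_means = post_var * (ensemble / h2 + y / r)  # per-component means
    w = np.exp(-0.5 * (y - ensemble) ** 2 / (h2 + r))
    w /= w.sum()                                     # normalised weights
    return post_means, post_var, w
```

The posterior mean is then `np.dot(w, post_means)`; the paper's filter additionally transforms the prior ensemble itself into a posterior one.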
Abstract:
Viral replication occurs within cells, with release (and onward infection) primarily achieved through two alternative mechanisms: lysis, in which virions emerge as the infected cell dies and bursts open; or budding, in which virions emerge gradually from a still living cell by appropriating a small part of the cell membrane. Virus budding is a poorly understood process that challenges current models of vesicle formation. Here, a plausible mechanism for arenavirus budding is presented, building on recent evidence that viral proteins embed in the inner lipid layer of the cell membrane. Experimental results confirm that viral protein is associated with increased membrane curvature, whereas a mathematical model is used to show that localized increases in curvature alone are sufficient to generate viral buds. The magnitude of the protein-induced curvature is calculated from the size of the amphipathic region hypothetically removed from the inner membrane as a result of translation, with a change in membrane stiffness estimated from observed differences in virion deformation as a result of protein depletion. Numerical results are based on experimental data and estimates for three arenaviruses, but the mechanisms described are more broadly applicable. The hypothesized mechanism is shown to be sufficient to generate spontaneous budding that matches well both qualitatively and quantitatively with experimental observations.
Abstract:
A class of identification algorithms is introduced for Gaussian process (GP) models. The fundamental approach is to propose a new kernel function which leads to a covariance matrix with low rank, a property that is consequently exploited for computational efficiency in both model parameter estimation and model prediction. Either the marginal likelihood or the Kullback–Leibler (K–L) divergence between the estimated output probability density function (pdf) and the true pdf is used as the cost function. For each cost function, an efficient coordinate descent algorithm is proposed to estimate the kernel parameters using a one-dimensional derivative-free search, and the noise variance using a fast gradient descent algorithm. Numerical examples are included to demonstrate the effectiveness of the new identification approaches.
Abstract:
Data assimilation methods which avoid the assumption of Gaussian error statistics are being developed for geoscience applications. We investigate how the relaxation of the Gaussian assumption affects the impact observations have within the assimilation process. The effect of non-Gaussian observation error (described by the likelihood) is compared to previously published work studying the effect of a non-Gaussian prior. The observation impact is measured in three ways: the sensitivity of the analysis to the observations, the mutual information, and the relative entropy. These three measures have all been studied in the case of Gaussian data assimilation and, in this case, have a known analytical form. It is shown that the analysis sensitivity can also be derived analytically when at least one of the prior or likelihood is Gaussian. This derivation shows an interesting asymmetry in the relationship between analysis sensitivity and analysis error covariance when the two different sources of non-Gaussian structure are considered (likelihood vs. prior). This is illustrated for a simple scalar case and used to infer the effect of the non-Gaussian structure on mutual information and relative entropy, which are more natural choices of metric in non-Gaussian data assimilation. It is concluded that approximating non-Gaussian error distributions as Gaussian can give significantly erroneous estimates of observation impact. The degree of the error depends not only on the nature of the non-Gaussian structure, but also on the metric used to measure the observation impact and the source of the non-Gaussian structure.
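For reference, the three impact measures have simple closed forms in the fully Gaussian scalar case, which is the baseline such studies compare against. A minimal sketch of those known analytic forms (variable names are ours):

```python
import numpy as np

def gaussian_obs_impact(xb, b, y, r):
    """Scalar Gaussian assimilation of one observation y with error
    variance r, given prior mean xb and variance b. Returns the three
    observation-impact measures in their analytic Gaussian forms."""
    k = b / (b + r)                    # gain = sensitivity of analysis to y
    xa = xb + k * (y - xb)             # analysis (posterior) mean
    a = (1.0 - k) * b                  # analysis (posterior) variance
    mutual_info = 0.5 * np.log(b / a)  # does not depend on the value of y
    rel_entropy = 0.5 * (np.log(b / a) + a / b - 1.0 + (xa - xb) ** 2 / b)
    return k, mutual_info, rel_entropy
```

Note that the mutual information is independent of the observed value, while the relative entropy grows with the innovation, the same contrast the abstract draws in the non-Gaussian setting.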
Abstract:
The analysis step of the (ensemble) Kalman filter is optimal when (1) the distribution of the background is Gaussian, (2) state variables and observations are related via a linear operator, and (3) the observational error is of additive nature and has Gaussian distribution. When these conditions are largely violated, a pre-processing step known as Gaussian anamorphosis (GA) can be applied. The objective of this procedure is to obtain state variables and observations that better fulfil the Gaussianity conditions in some sense. In this work we analyse GA from a joint perspective, paying attention to the effects of transformations in the joint state variable/observation space. First, we study transformations for state variables and observations that are independent of each other. Then, we introduce a targeted joint transformation with the objective of obtaining joint Gaussianity in the transformed space. We focus primarily on the univariate case, and briefly comment on the multivariate one. A key point of this paper is that, when conditions (1)–(3) are violated, using the analysis step of the EnKF will not recover the exact posterior density, in spite of any transformations one may perform. These transformations, however, provide approximations of different quality to the Bayesian solution of the problem. Using an example in which the Bayesian posterior can be computed analytically, we assess the quality of the analysis distributions generated after applying the EnKF analysis step in conjunction with different GA options. The value of the targeted joint transformation is particularly clear when the prior is Gaussian, the marginal density of the observations is close to Gaussian, and the likelihood is a Gaussian mixture.
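A common univariate realisation of Gaussian anamorphosis is the empirical rank-based transform: map each sample to the standard normal quantile of its empirical CDF value. The sketch below is our own illustration of the marginal, independent-transformation case, not the paper's targeted joint transform.

```python
import numpy as np
from statistics import NormalDist

def anamorphosis(samples):
    """Empirical univariate Gaussian anamorphosis: replace each sample
    by the standard normal quantile of its empirical CDF value, giving
    a monotone map of the data onto an approximately Gaussian variable."""
    samples = np.asarray(samples, dtype=float)
    n = len(samples)
    ranks = samples.argsort().argsort()   # rank of each sample, 0..n-1
    u = (ranks + 0.5) / n                 # empirical CDF values in (0, 1)
    nd = NormalDist()
    return np.array([nd.inv_cdf(p) for p in u])
```

Because the map is monotone, it preserves the ordering of the data while reshaping its marginal distribution towards a standard Gaussian.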
Abstract:
A new class of parameter estimation algorithms is introduced for Gaussian process regression (GPR) models. It is shown that integrating the GPR model with probability distance measures, namely (i) the integrated square error and (ii) the Kullback–Leibler (K–L) divergence, is analytically tractable. An efficient coordinate descent algorithm is proposed that iteratively estimates the kernel width using golden-section search, with a fast gradient descent algorithm as an inner loop to estimate the noise variance. Numerical examples are included to demonstrate the effectiveness of the new identification approaches.
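The coordinate-descent idea of a one-dimensional derivative-free search over the kernel width can be sketched generically. In the simplified version below, which is our own illustration, the paper's probability distance measures are replaced by the standard negative log marginal likelihood, and the noise variance is held fixed rather than estimated by an inner gradient descent.

```python
import numpy as np

def gpr_nll(x, y, width, noise):
    """Negative log marginal likelihood (up to a constant) of a GPR
    model with an RBF kernel of the given width and noise variance."""
    d2 = (x[:, None] - x[None, :]) ** 2
    K = np.exp(-0.5 * d2 / width**2) + noise * np.eye(len(x))
    L = np.linalg.cholesky(K)
    a = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ a + np.log(np.diag(L)).sum()

def golden_section(f, lo, hi, tol=1e-5):
    """One-dimensional derivative-free minimisation of a unimodal f
    on [lo, hi] by golden-section search."""
    g = (np.sqrt(5.0) - 1.0) / 2.0
    while hi - lo > tol:
        c, d = hi - g * (hi - lo), lo + g * (hi - lo)
        if f(c) < f(d):
            hi = d
        else:
            lo = c
    return 0.5 * (lo + hi)

# estimate the kernel width for samples of a smooth function
x = np.linspace(0.0, 5.0, 30)
y = np.sin(x)
width = golden_section(lambda w: gpr_nll(x, y, w, 1e-2), 0.05, 5.0)
```

A full coordinate descent would alternate this width search with an update of the noise variance until both converge.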