14 results for Prescribed mean-curvature problem

in CentAUR: Central Archive University of Reading - UK


Relevance:

100.00%

Publisher:

Abstract:

We present a kinetic model for transformations between different self-assembled lipid structures. The model shows how data on the rates of phase transitions between mesophases of different geometries can be used to provide information on the mechanisms of the transformations and the transition states involved. This can be used, for example, to gain an insight into intermediate structures in cell membrane fission or fusion. In cases where the monolayer curvature changes on going from the initial to the final mesophase, we consider the phase transition to be driven primarily by the change in the relaxed curvature with pressure or temperature, which alters the relative curvature elastic energies of the two mesophase structures. Using this model, we have analyzed previously published kinetic data on the interconversion of inverse bicontinuous cubic phases in the 1-monoolein-30 wt% water system. The data are for a transition between QII(G) and QII(D) phases, and our analysis indicates that the transition state more closely resembles the QII(D) than the QII(G) phase. Using estimated values for the monolayer mean curvatures of the QII(G) and QII(D) phases of -0.123 nm^-1 and -0.133 nm^-1, respectively, gives values for the monolayer mean curvature of the transition state of between -0.131 nm^-1 and -0.132 nm^-1. Furthermore, we estimate that several thousand molecules undergo the phase transition cooperatively within one "cooperative unit", equivalent to 1-2 unit cells of QII(G) or 4-10 unit cells of QII(D).
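A quick arithmetic check (an illustration added here, not part of the paper) confirms the claim that the transition state more closely resembles QII(D), by locating the reported transition-state curvature on the line between the two phase values:

```python
# Quick arithmetic check (not from the paper): where the reported transition-state
# curvature sits on the line between the Q_II(G) and Q_II(D) values.
H_G = -0.123             # monolayer mean curvature of Q_II(G), nm^-1
H_D = -0.133             # monolayer mean curvature of Q_II(D), nm^-1
H_TS = (-0.131, -0.132)  # transition-state range reported in the abstract

for h in H_TS:
    frac = (h - H_G) / (H_D - H_G)  # fraction of the change towards Q_II(D)
    print(f"H_TS = {h} nm^-1 -> {frac:.0%} of the way towards Q_II(D)")
```

The transition state lies 80-90% of the way along the curvature change, consistent with the abstract's conclusion.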

Relevance:

100.00%

Publisher:

Abstract:

This paper considers the problem of estimation when one of a number of populations, assumed normal with known common variance, is selected on the basis of it having the largest observed mean. Conditional on selection of the population, the observed mean is a biased estimate of the true mean. This problem arises in the analysis of clinical trials in which selection is made between a number of experimental treatments that are compared with each other either with or without an additional control treatment. Attempts to obtain approximately unbiased estimates in this setting have been proposed by Shen [2001. An improved method of evaluating drug effect in a multiple dose clinical trial. Statist. Medicine 20, 1913–1929] and Stallard and Todd [2005. Point estimates and confidence regions for sequential trials involving selection. J. Statist. Plann. Inference 135, 402–419]. This paper explores the problem in the simple setting in which two experimental treatments are compared in a single analysis. It is shown that in this case the estimate of Stallard and Todd is the maximum-likelihood estimate (m.l.e.), and this is compared with the estimate proposed by Shen. In particular, it is shown that the m.l.e. has infinite expectation whatever the true value of the mean being estimated. We show that there is no conditionally unbiased estimator, and propose a new family of approximately conditionally unbiased estimators, comparing these with the estimators suggested by Shen.
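The selection bias at issue is easy to demonstrate by simulation. A toy Monte Carlo sketch (not Shen's or Stallard and Todd's estimator; all numerical values are assumptions for illustration) with two arms of identical true mean:

```python
import random

# Toy Monte Carlo (not the estimators discussed in the paper): two experimental
# arms with identical true means; report the observed mean of whichever arm
# looks better. Conditional on selection, this observed mean is biased upward.
random.seed(0)
true_mean, sigma = 0.0, 1.0   # assumed values for illustration
trials = 20000

selected = []
for _ in range(trials):
    x1 = random.gauss(true_mean, sigma)   # observed mean, arm 1
    x2 = random.gauss(true_mean, sigma)   # observed mean, arm 2
    selected.append(max(x1, x2))          # select the larger observed mean

bias = sum(selected) / trials - true_mean
print(f"conditional bias of the selected mean: {bias:.3f}")  # ~ sigma/sqrt(pi)
```

Even with both true means equal, the selected mean overestimates by roughly sigma/sqrt(pi), the expectation of the maximum of two standard normals.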

Relevance:

100.00%

Publisher:

Abstract:

We develop and analyze a class of efficient Galerkin approximation methods for uncertainty quantification of nonlinear operator equations. The algorithms are based on sparse Galerkin discretizations of tensorized linearizations at nominal parameters. Specifically, we consider abstract, nonlinear, parametric operator equations J(α, u) = 0 for random input α(ω) with almost sure realizations in a neighborhood of a nominal input parameter α_0. Under some structural assumptions on the parameter dependence, we prove existence and uniqueness of a random solution, u(ω) = S(α(ω)). We derive a multilinear, tensorized operator equation for the deterministic computation of k-th order statistical moments of the random solution's fluctuations u(ω) − S(α_0). We introduce and analyze sparse tensor Galerkin discretization schemes for the efficient, deterministic computation of the k-th statistical moment equation. We prove a shift theorem for the k-point correlation equation in anisotropic smoothness scales and deduce that sparse tensor Galerkin discretizations of this equation converge, in accuracy versus complexity, at a rate which equals, up to logarithmic terms, that of the Galerkin discretization of a single instance of the mean field problem. We illustrate the abstract theory for nonstationary diffusion problems in random domains.
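By way of contrast with the deterministic moment equations above, the k-th moments of the fluctuations can also be estimated by brute-force Monte Carlo. A minimal sketch for a scalar stand-in equation (the model J, the parameter values, and the sample count are all illustrative assumptions, not from the paper):

```python
import random

# Illustrative scalar stand-in (an assumption, not the paper's operator):
# J(alpha, u) = u + alpha*u**3 - 1 = 0, solved by Newton's method, with
# alpha(omega) a small random perturbation of the nominal parameter alpha0.
def solve(alpha, u0=1.0, tol=1e-12):
    u = u0
    for _ in range(50):
        residual = u + alpha * u**3 - 1.0
        step = residual / (1.0 + 3.0 * alpha * u**2)  # Newton update
        u -= step
        if abs(step) < tol:
            break
    return u

random.seed(1)
alpha0 = 0.5
u_nominal = solve(alpha0)                                   # S(alpha0)
samples = [solve(alpha0 + random.uniform(-0.1, 0.1)) for _ in range(5000)]

# brute-force k-th statistical moments of the fluctuation u(omega) - S(alpha0)
for k in (1, 2):
    m_k = sum((u - u_nominal) ** k for u in samples) / len(samples)
    print(f"moment k={k}: {m_k:+.3e}")
```

The point of the paper's sparse tensor approach is to obtain these moments deterministically, at roughly the cost of one mean-field solve, rather than by sampling as above.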

Relevance:

40.00%

Publisher:

Abstract:

In a series of papers, Killworth and Blundell have proposed to study the effects of a background mean flow and topography on Rossby wave propagation by means of a generalized eigenvalue problem formulated in terms of the vertical velocity, obtained from a linearization of the primitive equations of motion. However, it has been known for a number of years that this eigenvalue problem contains an error, which Killworth was prevented from correcting himself by his unfortunate passing and whose correction is therefore taken up in this note. Here, the author shows in the context of quasigeostrophic (QG) theory that the error can ultimately be traced to the fact that the eigenvalue problem for the vertical velocity is fundamentally a nonlinear one (the eigenvalue appears both in the numerator and denominator), unlike that for the pressure. The reason that this nonlinear term is lacking in the Killworth and Blundell theory comes from neglecting the depth dependence of a depth-dependent term. This nonlinear term is shown on idealized examples to alter significantly the Rossby wave dispersion relation in the high-wavenumber regime but is otherwise irrelevant in the long-wave limit, in which case the eigenvalue problems for the vertical velocity and pressure are both linear. In the general dispersive case, however, one should first solve the generalized eigenvalue problem for the pressure vertical structure and, if needed, diagnose the vertical structure of the vertical velocity from the latter.
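As a sketch of the linear case recommended in the closing sentence, the following toy example (a resting-ocean analogue with constant stratification, not the Killworth and Blundell operator; grid size and domain are assumed values) discretizes a vertical-structure equation and solves the resulting generalized eigenvalue problem:

```python
import numpy as np

# Minimal sketch (not the Killworth-Blundell operator): a linear vertical-structure
# problem -p''(z) = lam * p(z) with p(0) = p(H) = 0, discretized by centred finite
# differences and solved as a generalized eigenproblem A p = lam B p.
n, H = 200, 1.0
h = H / n
A = (np.diag(np.full(n - 1, 2.0)) + np.diag(np.full(n - 2, -1.0), 1)
     + np.diag(np.full(n - 2, -1.0), -1)) / h**2
B = np.eye(n - 1)  # trivial here; a stratification profile would make B nontrivial

lam = np.sort(np.linalg.eig(np.linalg.solve(B, A))[0].real)
print("gravest eigenvalues:", lam[:3])  # close to (k*pi/H)^2 = 9.87, 39.5, 88.8
```

Because the eigenvalue appears only linearly here, a single standard solve suffices; the nonlinear problem for the vertical velocity would instead require iterating on the eigenvalue.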

Relevance:

30.00%

Publisher:

Abstract:

In this paper we consider the impedance boundary value problem for the Helmholtz equation in a half-plane with piecewise constant boundary data, a problem which models, for example, outdoor sound propagation over inhomogeneous flat terrain. To achieve good approximation at high frequencies with a relatively low number of degrees of freedom, we propose a novel Galerkin boundary element method, using a graded mesh with smaller elements adjacent to discontinuities in impedance and a special set of basis functions so that, on each element, the approximation space contains polynomials (of degree ≤ ν) multiplied by traces of plane waves on the boundary. We prove stability and convergence and show that the error in computing the total acoustic field is O(N^-(ν+1) log^(1/2) N), where the number of degrees of freedom is proportional to N log N. This error estimate is independent of the wavenumber, and thus the number of degrees of freedom required to achieve a prescribed level of accuracy does not increase as the wavenumber tends to infinity.
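A graded mesh of the kind described, with smaller elements adjacent to a discontinuity, can be sketched as follows (the algebraic grading rule and the exponent q are assumed choices for illustration, not the grading prescribed in the paper):

```python
# Sketch of an algebraically graded mesh on [a, b], refined towards an impedance
# discontinuity at x = a (the grading exponent q is an assumed choice, not the
# grading prescribed in the paper).
def graded_mesh(a, b, n, q=3.0):
    return [a + (b - a) * (j / n) ** q for j in range(n + 1)]

mesh = graded_mesh(0.0, 1.0, 8)
widths = [x1 - x0 for x0, x1 in zip(mesh, mesh[1:])]
print([f"{w:.4f}" for w in widths])  # element widths grow away from x = 0
```

Clustering nodes near the discontinuity resolves the singular behaviour of the solution there without increasing the total element count.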

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a new iterative algorithm for orthogonal frequency division multiplexing (OFDM) joint data detection and phase noise (PHN) cancellation based on minimum mean square prediction error. We particularly highlight the relatively less studied problem of "overfitting", whereby the iterative approach may converge to a trivial solution. Specifically, we apply a hard-decision procedure at every iterative step to overcome the overfitting. Moreover, compared with existing algorithms, a more accurate Padé approximation is used to represent the PHN, and finally a more robust and compact fast process based on Givens rotation is proposed to reduce the complexity to a practical level. Numerical simulations are also given to verify the proposed algorithm.
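The role of the hard decision can be illustrated with a much simpler relative of the problem: a common phase-offset loop (a toy analogue only, not the paper's algorithm; it omits the Padé PHN model and the Givens-rotation process, and all signal parameters are assumed values):

```python
import cmath, math, random

# Toy illustration only (not the paper's algorithm): alternate between a common
# phase-offset estimate and QPSK hard decisions. The hard decision anchors the
# estimate to the constellation, preventing it from fitting the noise.
QPSK = [cmath.exp(1j * (math.pi / 4 + k * math.pi / 2)) for k in range(4)]

def hard_decision(y):
    # project a noisy sample onto the nearest constellation point
    return min(QPSK, key=lambda s: abs(y - s))

random.seed(7)
true_phase = 0.3          # unknown phase offset in radians (assumed toy value)
tx = [random.choice(QPSK) for _ in range(200)]
rx = [s * cmath.exp(1j * true_phase)
      + complex(random.gauss(0, 0.05), random.gauss(0, 0.05)) for s in tx]

est = 0.0
for _ in range(5):        # iterative detection / phase cancellation
    decisions = [hard_decision(y * cmath.exp(-1j * est)) for y in rx]
    est = cmath.phase(sum(y * d.conjugate() for y, d in zip(rx, decisions)))

print(f"estimated phase offset: {est:.3f} rad (true value 0.300)")
```

Without the projection onto the constellation, the phase estimate and the soft symbol estimates could jointly absorb the noise, the trivial-solution failure mode the paper guards against.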

Relevance:

30.00%

Publisher:

Abstract:

The impact of pronounced positive and negative sea surface temperature (SST) anomalies in the tropical Pacific associated with the El Niño/Southern Oscillation (ENSO) phenomenon on the atmospheric circulation in the Northern Hemisphere extratropics during the boreal winter season is investigated. This includes both the impact on the seasonal mean flow and on the intraseasonal variability on synoptic time scales. Moreover, the interaction between the transient fluctuations on these time scales and the mean circulation is examined. Both data from an ensemble of five simulations with the ECHAM3 atmospheric general circulation model at a horizontal resolution of T42, each covering the period from 1979 through 1992, and operational analyses from ECMWF for the corresponding period are examined. In each of the simulations observed SSTs for the period of investigation are given as lower boundary forcing, but different atmospheric initial conditions are prescribed. The simulations with ECHAM3 reveal a distinct impact of the pronounced SST anomalies in the tropical Pacific on the atmospheric circulation in the Northern Hemisphere extratropics during El Niño as well as during La Niña events. These changes in the atmospheric circulation, which are found to be highly significant in the Pacific/North American as well as in the Atlantic/European region, are consistent with the essential results obtained from the analyses. The pronounced SST anomalies in the tropical Pacific lead to changes in the mean circulation, which are characterized by typical circulation patterns. These changes in the mean circulation are accompanied by marked variations of the activity of the transient fluctuations on synoptic time scales, that is, changes in both the kinetic energy on these time scales and the atmospheric transports of momentum and heat accomplished by the short baroclinic waves.
The synoptic disturbances, on the other hand, also play an important role in controlling the changes in the mean circulation associated with the ENSO phenomenon. They maintain these typical circulation patterns via barotropic processes, but counteract them via baroclinic processes. The hypothesis of an impact of the ENSO phenomenon in the Atlantic/European region can be supported. The determining factor is identified as the intensification (reduction) of the Aleutian low and the simultaneous reduction (intensification) of the Icelandic low during El Niño (La Niña) events. The changes in the intensity of the Aleutian low during the ENSO events are accompanied by an alteration of the transport of momentum caused by the short baroclinic waves over the North American continent, in such a way that the changes in the intensity of the Icelandic low during El Niño as well as during La Niña events are maintained.

Relevance:

30.00%

Publisher:

Abstract:

A study is made of the zonal-mean motions induced by a growing baroclinic wave in several contexts, under the framework of three different analysis schemes: the conventional Eulerian mean (EM), the transformed Eulerian mean (TEM), and the generalized Lagrangian mean (GLM). The effect of meridional shear in the initial jet on these induced mean motions is considered by treating the instability problem in the context of the two-layer model. The conceptual simplicity of the TEM formulation is shown to be useful in diagnosing the dynamics of instability, much as it has been found helpful in many problems of wave-mean-flow interaction. In addition, it is found that the TEM vertical velocity is a very good indicator of the GLM vertical velocity. However, the GLM meridional velocity is always convergent towards the centre of instability activity, and is not at all well represented by the nondivergent TEM meridional velocity. In comparing the results with Uryu's (1979) calculation of the GLM circulation induced by a growing Eady wave, it is found that the inclusion of meridional jet shear in the present work leads to some strikingly different effects in the GLM zonal wind acceleration. In the case of pure baroclinic instability treated by Uryu, the Eulerian and Stokes accelerations nearly cancel each other in the centre of the channel, leaving a weak Lagrangian acceleration opposed to the Eulerian one. In the more general case of mixed baroclinic-barotropic instability, however, the Eulerian and Stokes accelerations can reinforce one another, leading to a very strong Lagrangian zonal wind acceleration.

Relevance:

30.00%

Publisher:

Abstract:

Through study of observations and coupled climate simulations, it is argued that the mean position of the Inter-Tropical Convergence Zone (ITCZ) north of the equator is a consequence of a northwards heat transport across the equator by ocean circulation. Observations suggest that the hemispheric net radiative forcing of climate at the top of the atmosphere is almost perfectly symmetric about the equator, and so the total (atmosphere plus ocean) heat transport across the equator is small (order 0.2 PW northwards). Due to the Atlantic ocean’s meridional overturning circulation, however, the ocean carries significantly more heat northwards across the equator (order 0.4 PW) than does the coupled system. There are two primary consequences. First, atmospheric heat transport is southwards across the equator to compensate (0.2 PW southwards), resulting in the ITCZ being displaced north of the equator. Second, the atmosphere, and indeed the ocean, is slightly warmer (by perhaps 2 °C) in the northern hemisphere than in the southern hemisphere. This leads to the northern hemisphere emitting slightly more outgoing longwave radiation than the southern hemisphere by virtue of its relative warmth, supporting the small northward heat transport by the coupled system across the equator. To conclude, the coupled nature of the problem is illustrated through study of atmosphere–ocean–ice simulations in the idealized setting of an aquaplanet, resolving the key processes at work.
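The heat-transport budget quoted above closes by simple arithmetic, as the following check shows (a back-of-envelope illustration using only the numbers in the abstract; the sign convention, positive northward, is an assumption):

```python
# Back-of-envelope budget using the numbers quoted in the abstract
# (sign convention assumed: positive = northward, units PW).
total_cross_equator = 0.2   # coupled atmosphere + ocean transport
ocean_cross_equator = 0.4   # carried northwards by the ocean (AMOC)

atmos_cross_equator = total_cross_equator - ocean_cross_equator
print(f"implied atmospheric transport: {atmos_cross_equator:+.1f} PW (southward)")
```

The implied southward atmospheric transport of 0.2 PW is what displaces the ITCZ north of the equator in the argument above.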

Relevance:

30.00%

Publisher:

Abstract:

Simulated multi-model “diversity” in aerosol direct radiative forcing estimates is often perceived as a measure of aerosol uncertainty. However, current models used for aerosol radiative forcing calculations vary considerably in model components relevant for forcing calculations and the associated “host-model uncertainties” are generally convoluted with the actual aerosol uncertainty. In this AeroCom Prescribed intercomparison study we systematically isolate and quantify host model uncertainties on aerosol forcing experiments through prescription of identical aerosol radiative properties in twelve participating models. Even with prescribed aerosol radiative properties, simulated clear-sky and all-sky aerosol radiative forcings show significant diversity. For a purely scattering case with globally constant optical depth of 0.2, the global-mean all-sky top-of-atmosphere radiative forcing is −4.47 Wm−2 and the inter-model standard deviation is 0.55 Wm−2, corresponding to a relative standard deviation of 12 %. For a case with partially absorbing aerosol with an aerosol optical depth of 0.2 and single scattering albedo of 0.8, the forcing changes to 1.04 Wm−2, and the standard deviation increases to 1.01 Wm−2, corresponding to a significant relative standard deviation of 97 %. However, the top-of-atmosphere forcing variability owing to absorption (subtracting the scattering case from the case with scattering and absorption) is low, with absolute (relative) standard deviations of 0.45 Wm−2 (8 %) clear-sky and 0.62 Wm−2 (11 %) all-sky. Scaling the forcing standard deviation for a purely scattering case to match the sulfate radiative forcing in the AeroCom Direct Effect experiment demonstrates that host model uncertainties could explain about 36 % of the overall sulfate forcing diversity of 0.11 Wm−2 in the AeroCom Direct Radiative Effect experiment.
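The quoted relative standard deviations follow directly from the stated means and spreads, as this check (an illustration using only the numbers in the abstract) confirms:

```python
# Check of the quoted relative standard deviations (numbers from the abstract,
# in W m^-2; the sign of the mean forcing is irrelevant to the ratio).
cases = {
    "purely scattering, all-sky": (0.55, -4.47),   # (std dev, mean forcing)
    "partially absorbing, all-sky": (1.01, 1.04),
}
for name, (std, mean) in cases.items():
    print(f"{name}: relative std dev = {abs(std / mean):.0%}")
```

The 97 % figure for the absorbing case is large mainly because the mean forcing is close to zero, not because the absolute spread doubles.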

Relevance:

30.00%

Publisher:

Abstract:

As a part of the Atmospheric Model Intercomparison Project (AMIP), the behaviour of 15 general circulation models has been analysed in order to diagnose and compare the ability of the different models in simulating Northern Hemisphere midlatitude atmospheric blocking. In accordance with the established AMIP procedure, the 10-year model integrations were performed using prescribed, time-evolving monthly mean observed SSTs spanning the period January 1979–December 1988. Atmospheric observational data (ECMWF analyses) over the same period have also been used to verify the model results. The models involved in this comparison represent a wide spectrum of model complexity, with different horizontal and vertical resolution, numerical techniques and physical parametrizations, and exhibit large differences in blocking behaviour. Nevertheless, a few common features can be found, such as the general tendency to underestimate both blocking frequency and the average duration of blocks. The problem of the possible relationship between model blocking and model systematic errors has also been assessed, although without resorting to ad hoc numerical experimentation it is impossible to relate with certainty particular model deficiencies in representing blocking to precise parts of the model formulation.


Relevance:

30.00%

Publisher:

Abstract:

The purpose of this paper is to investigate several analytical methods of solving the first passage (FP) problem for the Rouse model, the simplest model of a polymer chain. We show that this problem has to be treated as a multi-dimensional Kramers' problem, which presents rich and unexpected behavior. We first perform direct and forward-flux sampling (FFS) simulations, and measure the mean first-passage time τ(z) for the free end to reach a certain distance z away from the origin. The results show that the mean FP time decreases as the Rouse chain is represented by more beads. Two scaling regimes of τ(z) are observed, with the transition between them varying as a function of chain length. We use these simulation results to test two theoretical approaches. One is a well-known asymptotic theory valid in the limit of zero temperature. We show that this limit corresponds to a fully extended chain in which each chain segment is stretched, which is not particularly realistic. A new theory based on the well-known Freidlin-Wentzell theory is proposed, in which the dynamics is projected onto the minimal action path. The new theory predicts both scaling regimes correctly, but fails to get the correct numerical prefactor in the first regime. Combining our theory with the FFS simulations leads us to a simple analytical expression valid for all extensions and chain lengths. One application of the polymer FP problem occurs in the context of branched polymer rheology. In this paper, we consider the arm-retraction mechanism in the tube model, which maps exactly onto the model we have solved. The results are compared to the Milner-McLeish theory without constraint release, which is found to overestimate the FP time by a factor of 10 or more.
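The direct-simulation side of such a study can be sketched in miniature with a single overdamped bead in a harmonic well (a stand-in for the tethered chain end, not the full multi-bead Rouse problem; the spring constant, temperature, time step and trial count are all assumed values):

```python
import math, random

# Toy first-passage sketch: one overdamped bead in a harmonic well, stepped by
# Euler-Maruyama until the coordinate first reaches z. A stand-in illustration,
# not the paper's multi-bead Rouse simulations or its FFS scheme.
def mean_fp_time(z, k=1.0, kT=1.0, dt=1e-3, trials=100, seed=42):
    rng = random.Random(seed)
    noise = math.sqrt(2.0 * kT * dt)   # fluctuation-dissipation amplitude
    total = 0.0
    for _ in range(trials):
        x, t = 0.0, 0.0
        while x < z:
            x += -k * x * dt + noise * rng.gauss(0.0, 1.0)
            t += dt
        total += t
    return total / trials

taus = {z: mean_fp_time(z) for z in (1.0, 2.0)}
for z, tau in taus.items():
    print(f"z = {z}: mean FP time ~ {tau:.2f}")  # grows steeply with z
```

Direct simulation of this kind becomes prohibitively slow for rare (large-z) events, which is why the paper turns to forward-flux sampling and asymptotic theory.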

Relevance:

30.00%

Publisher:

Abstract:

Various studies show moral intuitions to be susceptible to framing effects. Many have argued that this susceptibility is a sign of unreliability and that this poses a methodological challenge for moral philosophy. Recently, doubt has been cast on this idea. It has been argued that extant evidence of framing effects does not show that moral intuitions have an unreliability problem. I argue that, even if the extant evidence suggests that moral intuitions are fairly stable with respect to which intuitions we have, the effect of framing on the strength of those intuitions still needs to be taken into account. I argue that this by itself poses a methodological challenge for moral philosophy.