904 results for SOLUTION-PHASE APPROACH
Abstract:
We have calculated the concentrations of Mg in the bulk and surfaces of aragonite CaCO3 in equilibrium with aqueous solution, based on molecular dynamics simulations and grand-canonical statistical mechanics. Mg is incorporated in the surfaces, in particular in the (001) terraces, rather than in the bulk of aragonite particles. However, the total Mg content in the bulk and surface of aragonite particles was found to be too small to account for the measured Mg/Ca ratios in corals. We therefore argue that most Mg in corals is either highly metastable in the aragonite lattice, or is located outside the aragonite phase of the coral skeleton, and we discuss the implications of this finding for Mg/Ca paleothermometry.
Abstract:
Urbanization-related alterations to the surface energy balance impact urban warming ('heat islands'), the growth of the boundary layer, and many other biophysical processes. Traditionally, in situ heat flux measurements have been used to quantify such processes, but these typically represent only a small local-scale area within the heterogeneous urban environment. For this reason, remote sensing approaches are very attractive for obtaining more spatially representative information. Here we use hyperspectral imagery from a new airborne sensor, the Operational Modular Imaging Spectrometer (OMIS), along with a survey map and meteorological data, to derive the land cover information and surface parameters required to map spatial variations in turbulent sensible heat flux (QH). The results from two spatially explicit flux retrieval methods, which use contrasting approaches and, to a large degree, different input data, are compared for a central urban area of Shanghai, China: (1) the Local-scale Urban Meteorological Parameterization Scheme (LUMPS) and (2) an Aerodynamic Resistance Method (ARM). Sensible heat fluxes are determined at the full 6 m spatial resolution of the OMIS sensor, and at lower resolutions via pixel aggregation and spatial averaging. At the 6 m spatial resolution, the sensible heat flux of rooftop-dominated pixels exceeds that of roads, water and vegetated areas, with values peaking at ∼350 W m−2, whilst the storage heat flux is greatest for road-dominated pixels (peaking at around 420 W m−2). We investigate the use of both OMIS-derived land surface temperatures retrieved using a Temperature–Emissivity Separation (TES) approach, and land surface temperatures estimated from air temperature measurements. Sensible heat flux differences between the two approaches over the entire 2 × 2 km study area are less than 30 W m−2, suggesting that methods employing either strategy may be practical when operated using low spatial resolution (e.g. 1 km) data. Due to the differing methodologies, direct comparisons between results obtained with the LUMPS and ARM methods are most sensibly made at reduced spatial scales. At 30 m spatial resolution, both approaches produce similar results, with a difference in mean QH averaged over the entire study area of less than 15 W m−2. This is encouraging given the differing architecture and data requirements of the LUMPS and ARM methods. Furthermore, in terms of mean study-area QH, the results obtained by averaging the original 6 m spatial resolution LUMPS-derived QH values to 30 and 90 m spatial resolution are within ∼5 W m−2 of those derived from averaging the original surface parameter maps prior to input into LUMPS, suggesting that the use of much lower spatial resolution spaceborne imagery, for example from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), is likely to be a practical solution for heat flux determination in urban areas.
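To make the aerodynamic-resistance side of this comparison concrete, below is a minimal sketch of the bulk-transfer relation at its core, QH = ρ cp (Ts − Ta) / ra, applied per pixel. The constants and the example resistance value are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

RHO = 1.2    # air density, kg m^-3 (assumed)
CP = 1005.0  # specific heat of air at constant pressure, J kg^-1 K^-1

def sensible_heat_flux(t_surface, t_air, r_a):
    """Q_H per pixel: t_surface (K), e.g. from TES; t_air (K); r_a (s m^-1)."""
    return RHO * CP * (np.asarray(t_surface) - t_air) / r_a

# e.g. a rooftop pixel ~15 K warmer than the air, with an assumed r_a of 60 s m^-1:
print(sensible_heat_flux(315.0, 300.0, 60.0))  # ~301 W m^-2
```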
Abstract:
In its default configuration, the Hadley Centre climate model (GA2.0) simulates roughly one-half the observed level of Madden–Julian oscillation activity, with MJO events often lasting fewer than seven days. We use initialised, climate-resolution hindcasts to examine the sensitivity of the GA2.0 MJO to a range of changes in sub-grid parameterisations and model configurations. All 22 changes are tested for two cases during the Years of Tropical Convection. Improved skill comes only from (a) disabling vertical momentum transport by convection and (b) increasing mixing entrainment and detrainment for deep and mid-level convection. These changes are subsequently tested in a further 14 hindcast cases; only (b) consistently improves MJO skill, from 12 to 22 days. In a 20-year integration, (b) produces near-observed levels of MJO activity, but propagation through the Maritime Continent remains weak. With default settings, GA2.0 produces precipitation too readily, even in anomalously dry columns. Implementing (b) decreases the efficiency of convection, permitting instability to build during the suppressed MJO phase and producing a more favourable environment for the active phase. The distribution of daily rain rates is more consistent with satellite data; default entrainment produces 6–12 mm/day too frequently. These results are consistent with recent studies showing that greater sensitivity of convection to moisture improves the representation of the MJO.
Abstract:
A key step in many numerical schemes for time-dependent partial differential equations with moving boundaries is to rescale the problem to a fixed numerical mesh. An alternative approach is to use a moving mesh that can be adapted to focus on specific features of the model. In this paper we present and discuss two different velocity-based moving mesh methods applied to a two-phase model of avascular tumour growth formulated by Breward et al. (2002) J. Math. Biol. 45(2), 125-152. Each method has one moving node which tracks the moving boundary. The first moving mesh method uses a mesh velocity proportional to the boundary velocity. The second moving mesh method uses local conservation of volume fraction of cells (masses). Our results demonstrate that these moving mesh methods produce accurate results, offering higher resolution where desired whilst preserving the balance of fluxes and sources in the governing equations.
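As an illustration of the first method, the following sketch (a minimal setup of our own, not the authors' code) moves a 1D mesh on [0, b(t)] so that each node's velocity is proportional to the boundary velocity, i.e. nodes keep fixed fractional positions; the boundary-speed function is a hypothetical placeholder.

```python
import numpy as np

n = 11
xi = np.linspace(0.0, 1.0, n)  # fixed fractional node positions in [0, 1]
b, t, dt = 1.0, 0.0, 0.01      # boundary position, time, time step (assumed)

def boundary_velocity(t):
    # Hypothetical boundary speed; in the tumour model this would come from
    # the two-phase equations evaluated at the moving boundary.
    return 0.5 * np.exp(-t)

for _ in range(100):
    db = boundary_velocity(t)
    b += dt * db               # advance the boundary
    t += dt
x = xi * b                     # node i moves with velocity xi[i] * db/dt
print("boundary:", round(b, 4))
print("mesh:", np.round(x, 4))
```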
Abstract:
Radar refractivity retrievals have the potential to accurately capture near-surface humidity fields from the phase change of ground clutter returns. In practice, phase changes are very noisy and the required smoothing will diminish large radial phase change gradients, leading to severe underestimates of large refractivity changes (ΔN). To mitigate this, the mean refractivity change over the field (ΔN_field) must be subtracted prior to smoothing. However, both observations and simulations indicate that highly correlated returns (e.g., when single targets straddle neighboring gates) result in underestimates of ΔN_field when pulse-pair processing is used. This may contribute to reported differences of up to 30 N units between surface observations and retrievals. This effect can be avoided if ΔN_field is estimated using a linear least squares fit to azimuthally averaged phase changes. Nevertheless, subsequent smoothing of the phase changes will still tend to diminish the all-important spatial perturbations in retrieved refractivity relative to ΔN_field; an iterative estimation approach may be required. The uncertainty in the target location within the range gate leads to additional phase noise proportional to ΔN, pulse length, and radar frequency. The use of short pulse lengths is recommended, not only to reduce this noise but to increase both the maximum detectable refractivity change and the number of suitable targets. Retrievals of refractivity fields must allow for large ΔN relative to an earlier reference field. This should be achievable for short pulses at S band, but phase noise due to target motion may prevent this at C band, while at X band even the retrieval of ΔN over shorter periods may at times be impossible.
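A minimal sketch of the recommended ΔN_field estimate, assuming the standard two-way phase-range relation Δφ(r) = (4πf/c) × 10⁻⁶ × ΔN_field × r; the frequency is an assumed S-band example, not a value from the paper.

```python
import numpy as np

C_LIGHT = 3.0e8  # speed of light, m s^-1
FREQ = 3.0e9     # radar frequency, Hz (assumed S-band value)

def dn_field(ranges_m, mean_dphi_rad):
    """Fit azimuthally averaged phase changes vs. range; return dN_field (N units)."""
    slope, _ = np.polyfit(ranges_m, mean_dphi_rad, 1)      # rad per metre
    return slope * C_LIGHT / (4.0 * np.pi * FREQ) * 1.0e6  # invert the phase-range relation

# synthetic check: a true field-mean change of 20 N units is recovered from the slope
r = np.linspace(2e3, 30e3, 50)
dphi = 4.0 * np.pi * FREQ / C_LIGHT * 1e-6 * 20.0 * r
print(round(dn_field(r, dphi), 2))  # -> 20.0
```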
Abstract:
Seamless phase II/III clinical trials are conducted in two stages with treatment selection at the first stage. In the first stage, patients are randomized to a control or one of k > 1 experimental treatments. At the end of this stage, interim data are analysed, and a decision is made concerning which experimental treatment should continue to the second stage. If the primary endpoint is observable only after some period of follow-up, at the interim analysis data may be available on some early outcome for a larger number of patients than those for whom the primary endpoint is available. These early endpoint data can thus be used for treatment selection. For two previously proposed approaches, the power has been shown to be greater for one or the other method depending on the true treatment effects and correlations. We propose a new approach that builds on the previously proposed approaches and uses data available at the interim analysis to estimate these parameters and then, on the basis of these estimates, chooses the treatment selection method with the highest probability of correctly selecting the most effective treatment. This method is shown to perform well compared with the two previously described methods for a wide range of true parameter values. In most cases, the performance of the new method is similar to, and in some cases better than, that of the two previously proposed methods.
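The adaptive idea can be sketched as follows; all names, the two candidate rules, and the bivariate-normal model are illustrative assumptions, not the authors' method. Given interim estimates of the effects and the early/primary correlation, simulate each candidate selection rule and adopt the one with the higher estimated probability of correct selection.

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_correct_selection(rule, eff_early, eff_final, corr, n_sim=2000):
    """Estimated probability that `rule` picks the truly best treatment."""
    best = int(np.argmax(eff_final))
    cov = np.array([[1.0, corr], [corr, 1.0]])  # early/primary correlation (assumed unit variances)
    hits = 0
    for _ in range(n_sim):
        # one (early, primary) estimate per experimental treatment
        z = np.stack([rng.multivariate_normal([e, f], cov)
                      for e, f in zip(eff_early, eff_final)])
        hits += int(rule(z) == best)
    return hits / n_sim

rule_early = lambda z: int(np.argmax(z[:, 0]))  # select on the early endpoint
rule_final = lambda z: int(np.argmax(z[:, 1]))  # select on the primary endpoint

# plug in interim estimates, then adopt whichever rule scores higher
p_e = prob_correct_selection(rule_early, [0.2, 0.4, 0.5], [0.1, 0.3, 0.6], 0.7)
p_f = prob_correct_selection(rule_final, [0.2, 0.4, 0.5], [0.1, 0.3, 0.6], 0.7)
print("use early-endpoint rule" if p_e > p_f else "use primary-endpoint rule")
```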
Abstract:
Solution calorimetry offers a reproducible technique for measuring the enthalpy of solution (ΔsolH) of a solute dissolving into a solvent. The ΔsolH values of two solutes, propranolol HCl and mannitol, were determined in simulated intestinal fluid (SIF) solutions designed to model the fed and fasted states within the gut, and in Hanks’ balanced salt solution (HBSS) of varying pH. The bile salt and lipid within the SIF solutions formed mixed micelles. Both solutes exhibited endothermic reactions in all solvents. The ΔsolH for propranolol HCl in the SIF solutions differed from those in the HBSS and was lower in the fed-state than the fasted-state SIF solution, revealing an interaction between propranolol and the micellar phase in both SIF solutions. In contrast, for mannitol the ΔsolH was constant in all solutions, indicating minimal interaction between mannitol and the micellar phases of the SIF solutions. In this study, solution calorimetry proved to be a simple method for measuring the enthalpy associated with the dissolution of model drugs in complex biological media such as SIF solutions. In addition, the derived power–time curves allowed the time taken for the powdered solutes to form solutions to be estimated.
Abstract:
The analysis step of the (ensemble) Kalman filter is optimal when (1) the distribution of the background is Gaussian, (2) state variables and observations are related via a linear operator, and (3) the observational error is of additive nature and has Gaussian distribution. When these conditions are largely violated, a pre-processing step known as Gaussian anamorphosis (GA) can be applied. The objective of this procedure is to obtain state variables and observations that better fulfil the Gaussianity conditions in some sense. In this work we analyse GA from a joint perspective, paying attention to the effects of transformations in the joint state-variable/observation space. First, we study transformations for state variables and observations that are independent of each other. Then, we introduce a targeted joint transformation with the objective of obtaining joint Gaussianity in the transformed space. We focus primarily on the univariate case, and briefly comment on the multivariate one. A key point of this paper is that, when (1)-(3) are violated, the analysis step of the EnKF will not recover the exact posterior density regardless of any transformations one may perform. These transformations do, however, provide approximations of different quality to the Bayesian solution of the problem. Using an example in which the Bayesian posterior can be computed analytically, we assess the quality of the analysis distributions generated after applying the EnKF analysis step in conjunction with different GA options. The value of the targeted joint transformation is particularly clear when the prior is Gaussian, the marginal density for the observations is close to Gaussian, and the likelihood is a Gaussian mixture.
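One common way to realise GA (an assumption about implementation, not necessarily the transformation used in the paper) is a rank-based empirical anamorphosis that maps each marginal ensemble to a standard Gaussian before the analysis step:

```python
import numpy as np
from scipy.stats import norm, rankdata

def gaussian_anamorphosis(samples):
    """Map one marginal ensemble to ~N(0, 1) via its empirical ranks."""
    u = rankdata(samples) / (len(samples) + 1.0)  # ranks mapped into (0, 1)
    return norm.ppf(u)                            # Gaussian quantile transform

# e.g. transform a skewed ensemble before the EnKF analysis step
x = np.random.default_rng(1).lognormal(size=50)
print(np.round(gaussian_anamorphosis(x)[:5], 3))
```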
Abstract:
Lipid cubic phase films are of increasingly widespread importance, both in the analysis of the cubic phases themselves by techniques including microscopy and X-ray scattering, and in their applications, especially as electrode coatings for electrochemical sensors and for templates for the electrodeposition of nanostructured metal. In this work we demonstrate that the crystallographic orientation adopted by these films is governed by minimization of interfacial energy. This is shown by the agreement between experimental data obtained using grazing-incidence small-angle X-ray scattering (GI-SAXS), and the predicted lowest energy orientation determined using a theoretical approach we have recently developed. GI-SAXS data show a high degree of orientation for films of both the double diamond phase and the gyroid phase, with the [111] and [110] directions respectively perpendicular to the planar substrate. In each case, this matches the lowest energy facet calculated for that particular phase.
Abstract:
Seamless phase II/III clinical trials in which an experimental treatment is selected at an interim analysis have been the focus of much recent research interest. Many of the methods proposed are based on the group sequential approach. This paper considers designs of this type in which the treatment selection can be based on short-term endpoint information for more patients than have primary endpoint data available. We show that in such a case, the familywise type I error rate may be inflated if previously proposed group sequential methods are used and the treatment selection rule is not specified in advance. A method is proposed to avoid this inflation by considering the treatment selection that maximises the conditional error given the data available at the interim analysis. A simulation study is reported that illustrates the type I error rate inflation and compares the power of the new approach with two other methods: a combination testing approach and a group sequential method that does not use the short-term endpoint data, both of which also strongly control the type I error rate. The new method is also illustrated through application to a study in Alzheimer's disease.
Abstract:
The simulated annealing approach to crystal structure determination from powder diffraction data, as implemented in the DASH program, is readily amenable to parallelization at the individual run level. Very large increases in execution speed can be achieved by distributing individual DASH runs over a network of computers. The CDASH program delivers this by using scalable on-demand computing clusters built on the Amazon Elastic Compute Cloud service. By way of example, a 360 vCPU cluster returned the crystal structure of racemic ornidazole (Z′ = 3, 30 degrees of freedom) ca 40 times faster than a typical modern quad-core desktop CPU. Whilst used here specifically for DASH, this approach is of general applicability to other packages that are amenable to coarse-grained parallelism strategies.
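The coarse-grained strategy amounts to launching many independent runs and keeping the best result. The sketch below illustrates this with Python's multiprocessing; run_one is a hypothetical stand-in for a single DASH simulated annealing run, not a real DASH or CDASH API.

```python
import random
from multiprocessing import Pool

def run_one(seed):
    # Hypothetical stand-in for one independent simulated annealing run;
    # returns (figure of merit, seed) so the best run can be identified.
    rng = random.Random(seed)
    chi2 = rng.uniform(5.0, 50.0)  # placeholder for a profile chi-squared
    return chi2, seed

if __name__ == "__main__":
    with Pool() as pool:              # one worker per core; a cloud cluster scales this out
        results = pool.map(run_one, range(360))
    print("best run:", min(results))  # keep the best-fitting solution
```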
Abstract:
Mesoporous metal structures featuring a bicontinuous cubic morphology have a wide range of potential applications and novel opto-electronic properties, often orientation-dependent. We describe the production of nanostructured metal films 1–2 microns thick featuring 3D-periodic ‘single diamond’ morphology that show high out-of-plane alignment, with the (111) plane oriented parallel to the substrate. These are produced by electrodeposition of platinum through a lipid cubic phase (Q_II) template. Further investigation into the mechanism for the orientation revealed the surprising result that the Q_II template, which is tens of microns thick, is polydomain with no overall orientation. When thicker platinum films are grown, they also show increased orientational disorder. These results suggest that polydomain Q_II samples display a region of uniaxial orientation at the lipid/substrate interface up to approximately 2.8 ± 0.3 μm away from the solid surface. Our approach gives previously unavailable information on the arrangement of cubic phases at solid interfaces, which is important for many applications of Q_II phases. Most significantly, we have produced a previously unreported class of oriented nanomaterial, with potential applications including metamaterials and lithographic masks.
Abstract:
Predicting the evolution of ice sheets requires numerical models able to accurately track the migration of ice sheet continental margins or grounding lines. We introduce a physically based moving-point approach for the flow of ice sheets based on the conservation of local masses. This allows the ice sheet margins to be tracked explicitly. Our approach is also well suited to capture waiting-time behaviour efficiently. A finite-difference moving-point scheme is derived and applied in a simplified context (continental radially symmetrical shallow ice approximation). The scheme, which is inexpensive, is verified by comparing the results with steady states obtained from an analytic solution and with exact moving-margin transient solutions. In both cases the scheme is able to track the position of the ice sheet margin with high accuracy.
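One way to realise local mass conservation (a sketch under our own assumptions, not the paper's finite-difference scheme) is to place the moving nodes at fixed fractions of the cumulative mass of the thickness profile, so that the last node tracks the margin automatically; the profile below is purely illustrative.

```python
import numpy as np

xf = np.linspace(0.0, 1.0, 1001)         # fine reference grid (assumed)
u = np.maximum(1.0 - xf**2, 0.0) ** 0.4  # an illustrative ice-thickness profile
mass = np.concatenate(([0.0],
        np.cumsum(0.5 * (u[1:] + u[:-1]) * np.diff(xf))))  # cumulative mass (trapezium rule)

fractions = np.linspace(0.0, 1.0, 11)    # each node owns an equal share of the mass
nodes = np.interp(fractions * mass[-1], mass, xf)
print(np.round(nodes, 4))  # nodes cluster where the ice is thick; last node = margin
```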
Abstract:
Bloom filters are a data structure for storing data in compressed form. They offer excellent space and time efficiency at the cost of some loss of accuracy (so-called lossy compression). This work presents a yes-no Bloom filter, which is a data structure consisting of two parts: the yes-filter, a standard Bloom filter, and the no-filter, another Bloom filter whose purpose is to represent those objects that were recognised incorrectly by the yes-filter (that is, to recognise the false positives of the yes-filter). By querying the no-filter after an object has been recognised by the yes-filter, we get a chance of rejecting it, which improves the accuracy of data recognition in comparison with a standard Bloom filter of the same total length. A further increase in accuracy is possible if one chooses the objects to include in the no-filter so that it recognises as many false positives as possible but no true positives, thus producing the most accurate yes-no Bloom filter among all yes-no Bloom filters. This paper studies how optimization techniques can be used to maximize the number of false positives recognised by the no-filter, under the constraint that it recognise no true positives. To achieve this aim, an Integer Linear Program (ILP) is proposed for the optimal selection of false positives. In practice the problem size is normally large, making the optimal solution intractable. Given the similarity of the ILP to the Multidimensional Knapsack Problem, an Approximate Dynamic Programming (ADP) model is developed, making use of a reduced ILP for the value function approximation. Numerical results show that the ADP model performs best compared with a number of heuristics as well as the CPLEX built-in branch-and-bound (B&B) solver, and it is therefore the approach we recommend for use in yes-no Bloom filters. In the wider context of the study of lossy compression algorithms, our research is an example of how the arsenal of optimization methods can be applied to improve the accuracy of compressed data.
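A minimal sketch of the data structure follows (illustrative, not the paper's implementation; the hash scheme and sizes are assumptions). Membership is reported only when an object passes the yes-filter and is absent from the no-filter.

```python
import hashlib

class BloomFilter:
    """A standard Bloom filter with m bits and k hash functions."""
    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests (assumed scheme).
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))

class YesNoBloomFilter:
    def __init__(self, m_yes, m_no, k):
        self.yes = BloomFilter(m_yes, k)   # stores the actual set
        self.no = BloomFilter(m_no, k)     # stores known false positives of the yes-filter

    def add(self, item):
        self.yes.add(item)

    def mark_false_positive(self, item):
        # Only items known to be false positives of the yes-filter (and never
        # true positives) belong here, e.g. those selected by the ILP/ADP step.
        self.no.add(item)

    def __contains__(self, item):
        return item in self.yes and item not in self.no
```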
Abstract:
This paper proposes a novel adaptive multiple-modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models, which are all linear. With data available in an online fashion, the performance of all candidate sub-models is monitored based on the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-models are adapted via the recursive least squares (RLS) algorithm, while the coefficients of the remaining sub-models are left unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error over a recent data window and to apply a sum-to-one constraint to the combination parameters, leading to a closed-form solution, so that maximal computational efficiency can be achieved. In addition, at each time step, the model prediction is chosen as either the resultant multiple-model output or the best sub-model's prediction, whichever is better. Simulation results are given in comparison with some typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
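Two pieces of the algorithm lend themselves to a short sketch: the standard RLS update applied to each selected sub-model, and the closed-form sum-to-one combination weights w = C⁻¹1 / (1ᵀC⁻¹1) that minimise the windowed mean square error. This is a sketch under stated assumptions, not the authors' code.

```python
import numpy as np

def rls_update(theta, P, x, y, lam=0.99):
    """One RLS step with forgetting factor lam for a linear sub-model y ~ x @ theta."""
    Px = P @ x
    k = Px / (lam + x @ Px)              # gain vector
    theta = theta + k * (y - x @ theta)  # update weight coefficients
    P = (P - np.outer(k, Px)) / lam      # update inverse correlation matrix
    return theta, P

def combination_weights(E):
    """E: (window, M) matrix of sub-model errors; minimise w'Cw s.t. sum(w) = 1."""
    C = E.T @ E                          # windowed error (cross-)products
    ones = np.ones(E.shape[1])
    w = np.linalg.solve(C + 1e-8 * np.eye(len(ones)), ones)  # small ridge for stability
    return w / w.sum()                   # closed form: C^-1 1 / (1' C^-1 1)
```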