82 results for Independent-particle shell model
Abstract:
Almost all research fields in the geosciences use numerical models and observations and combine these using data-assimilation techniques. With ever-increasing resolution and complexity, the numerical models tend to be highly nonlinear, and observations too become more complicated and more nonlinearly related to the models. Standard data-assimilation techniques like (ensemble) Kalman filters and variational methods like 4D-Var rely on linearizations and are likely to fail in one way or another. Nonlinear data-assimilation techniques are available, but are only efficient for low-dimensional problems, hampered by the so-called ‘curse of dimensionality’. Here we present a fully nonlinear particle filter that can be applied to higher-dimensional problems by exploiting the freedom of the proposal density inherent in particle filtering. The method is illustrated for the three-dimensional Lorenz model using three particles and for the much more complex 40-dimensional Lorenz model using 20 particles. By also applying the method to the 1000-dimensional Lorenz model, again using only 20 particles, we demonstrate the strong scale-invariance of the method, leading to the optimistic conjecture that the method is applicable to realistic geophysical problems. Copyright © 2010 Royal Meteorological Society
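The abstract does not spell out the filter mechanics, but the machinery it builds on (weighting an ensemble of model states against observations and resampling) can be sketched compactly. Below is a minimal, illustrative Python sketch of a standard bootstrap (sequential importance resampling) particle filter for the three-variable Lorenz-63 system; the observation operator (observing only x), the noise levels and the ensemble size are arbitrary assumptions, and the paper's key contribution, the engineered proposal density, is not reproduced here.

```python
import numpy as np

def lorenz63(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Time derivative of the three-variable Lorenz-63 system."""
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def rk4_step(x, dt=0.01):
    """Advance the Lorenz-63 state by one fourth-order Runge-Kutta step."""
    k1 = lorenz63(x)
    k2 = lorenz63(x + 0.5 * dt * k1)
    k3 = lorenz63(x + 0.5 * dt * k2)
    k4 = lorenz63(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def particle_filter_step(particles, obs, rng, obs_std=1.0, model_std=0.05):
    """Propagate the ensemble, weight it against an observation of x[0], resample."""
    particles = np.array([rk4_step(p) for p in particles])
    particles += rng.normal(0.0, model_std, particles.shape)     # additive model error
    w = np.exp(-0.5 * ((particles[:, 0] - obs) / obs_std) ** 2)  # Gaussian likelihood
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)   # multinomial resampling
    return particles[idx]

# Hypothetical usage with a tiny ensemble, echoing the abstract's three-particle example.
rng = np.random.default_rng(0)
ensemble = rng.normal(0.0, 1.0, size=(3, 3)) + np.array([1.0, 1.0, 25.0])
ensemble = particle_filter_step(ensemble, obs=1.2, rng=rng)
```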
Abstract:
In this paper sequential importance sampling is used to assess the impact of observations on an ensemble prediction of the decadal path transitions of the Kuroshio Extension (KE). This particle-filtering approach gives access to the probability density of the state vector, which allows us to determine the predictive power (an entropy-based measure) of the ensemble prediction. The proposed set-up makes use of an ensemble that, at each time, samples the climatological probability distribution. Then, in a post-processing step, the impact of different sets of observations is measured by the increase in predictive power of the ensemble over the climatological signal during one year. The method is applied in an identical-twin experiment for the Kuroshio Extension using a reduced-gravity shallow-water model. We investigate the impact of assimilating velocity observations from different locations during the elongated and the contracted meandering states of the KE. Optimal observation locations correspond to regions with strong potential vorticity gradients. For the elongated state the optimal location is in the first meander of the KE. During the contracted state of the KE it is located south of Japan, where the Kuroshio separates from the coast.
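The abstract defines predictive power only as an entropy-based measure obtained from importance weights. As a hedged illustration, the sketch below computes importance weights of climatological ensemble members given a single observation and a simple entropy-based predictive power, 1 - H(weighted)/H(climatological), over a binned scalar index; the exact definition and the multi-observation, one-year set-up used in the paper may differ.

```python
import numpy as np

def sequential_importance_weights(ensemble_obs_equiv, obs, obs_std):
    """Importance weights of climatological ensemble members given one observation."""
    w = np.exp(-0.5 * ((ensemble_obs_equiv - obs) / obs_std) ** 2)
    return w / w.sum()

def predictive_power(index_values, weights, bins=20):
    """Entropy-based predictive power, 1 - H(posterior)/H(climatology), on a binned index."""
    p_clim, edges = np.histogram(index_values, bins=bins)
    p_clim = p_clim / p_clim.sum()
    p_post, _ = np.histogram(index_values, bins=edges, weights=weights)
    p_post = p_post / p_post.sum()
    eps = 1e-12                         # guard against log(0) in empty bins
    h_clim = -np.sum(p_clim * np.log(p_clim + eps))
    h_post = -np.sum(p_post * np.log(p_post + eps))
    return 1.0 - h_post / h_clim
```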
Abstract:
Using the virtual porous carbon model proposed by Harris et al., we study the effect of carbon surface oxidation on the pore size distribution (PSD) curve determined from simulated Ar, N2 and CO2 isotherms. It is assumed that surface oxidation does not destroy the carbon skeleton and that all pores remain accessible to the studied molecules (i.e., only the effect of the change in surface chemical composition is studied). The results show two important things: oxidation of the carbon surface changes the absolute porosity (calculated with the geometric method of Bhattacharya and Gubbins (BG)) only very slightly; however, PSD curves calculated from the simulated isotherms are, to a greater or lesser extent, affected by the presence of surface oxides. The most reliable results are obtained from Ar adsorption data. Not only is adsorption of this adsorbate practically independent of the presence of surface oxides but, more importantly, for this molecule one can apply the slit-like pore model as a first approach to recover the average pore diameter of a real carbon structure. For nitrogen, the effect of the carbon surface chemical composition is observed because of the quadrupole moment of this molecule, which shifts the PSD curves relative to Ar. The largest differences are seen for CO2, and it is clearly demonstrated that PSD curves obtained from adsorption isotherms of this molecule contain artificial peaks, and that the average pore diameter is strongly influenced by electrostatic adsorbate-adsorbate as well as adsorbate-adsorbent interactions.
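For context on how a PSD curve is obtained from an isotherm: a common approach (not necessarily the exact procedure used in the paper) discretises the adsorption integral equation over a set of precomputed single-pore kernel isotherms and solves for non-negative pore-width weights. A minimal sketch, assuming the kernel matrix is already available from simulation:

```python
import numpy as np
from scipy.optimize import nnls

def psd_from_isotherm(kernel, measured_isotherm):
    """Recover a pore size distribution from an adsorption isotherm by solving the
    discretised adsorption integral equation N(p) = sum_i f_i * K(p, H_i) for
    non-negative weights f_i.  `kernel` has shape (n_pressures, n_pore_widths) and
    holds single-pore model isotherms (e.g. for the slit-like pores mentioned in the
    abstract), assumed to be precomputed; the weights would still need to be
    normalised by the pore-width bin sizes to give a density."""
    f, residual_norm = nnls(kernel, measured_isotherm)
    return f, residual_norm
```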
Abstract:
The response to dietary fat manipulation is highly heterogeneous, yet generic population-based recommendations aimed at reducing the burden of CVD are given. The APOE epsilon genotype has been proposed to be an important determinant of this response. The present study reports on the dietary strategy employed in the SATgenɛ (SATurated fat and gene APOE) study to assess the impact of altered fat content and composition on the blood lipid profile according to the APOE genotype. A flexible dietary exchange model was developed to implement three isoenergetic diets: a low-fat (LF) diet (target composition: 24 % of energy (%E) as fat, 8 %E SFA and 59 %E carbohydrate), a high-saturated-fat (HSF) diet (38 %E fat, 18 %E SFA and 45 %E carbohydrate) and a HSF-DHA diet (HSF diet with 3 g DHA/d). Free-living participants (n 88; n 44 E3/E3 and n 44 E3/E4) followed the diets in a sequential design, for 8 weeks each, using commercially available spreads, oils and snacks with specific fatty acid profiles. Dietary compositional targets were broadly met, with significantly higher total fat (42·8 %E and 41·0 %E v. 25·1 %E, P ≤ 0·0011) and SFA (19·3 %E and 18·6 %E v. 8·33 %E, P ≤ 0·0011) intakes during the HSF and HSF-DHA diets compared with the LF diet, in addition to significantly higher DHA intake during the HSF-DHA diet (P ≤ 0·0011). Plasma phospholipid fatty acid analysis revealed a 2-fold increase in the proportion of DHA after consumption of the HSF-DHA diet for 8 weeks, which was independent of the APOE genotype. In summary, the dietary strategy was successfully implemented in a free-living population, resulting in well-tolerated diets which broadly met the dietary targets set.
Abstract:
In this paper a new system identification algorithm is introduced for Hammerstein systems based on observational input/output data. The nonlinear static function in the Hammerstein system is modelled using a non-uniform rational B-spline (NURB) neural network. The proposed system identification algorithm for this NURB-network-based Hammerstein system consists of two successive stages. First, the shaping parameters in the NURB network are estimated using a particle swarm optimization (PSO) procedure. Then the remaining parameters are estimated by singular value decomposition (SVD). Numerical examples, including a model-based controller, are used to demonstrate the efficacy of the proposed approach. The controller consists of computing the inverse of the nonlinear static function approximated by the NURB network, followed by a linear pole-assignment controller.
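The two-stage split can be illustrated with the classical bilinear trick for Hammerstein models: once the basis of the static nonlinearity is fixed, the output is linear in the outer product of the linear-block taps and the nonlinearity weights, so the combined parameter matrix can be estimated by least squares and split by a rank-one SVD. The sketch below uses simple monomials in place of the paper's NURB network (whose shaping parameters the first, PSO-based stage would tune), so it is an assumption-laden stand-in rather than the authors' algorithm.

```python
import numpy as np

def hammerstein_fit(u, y, order_b=3, basis_degree=3):
    """Stage-two sketch of Hammerstein identification with a fixed nonlinearity basis:
    estimate the combined parameter matrix Theta ~ outer(b, c) by least squares and
    recover b (dynamic block taps) and c (nonlinearity weights) from its rank-one SVD,
    up to sign and scale."""
    # Basis functions of the static nonlinearity evaluated at the input samples.
    phi = np.column_stack([u ** k for k in range(1, basis_degree + 1)])
    T, K = len(u), phi.shape[1]
    # Regression matrix: phi_k(u(t-j)) for lags j = 0..order_b-1.
    rows = [np.concatenate([phi[t - j] for j in range(order_b)])
            for t in range(order_b - 1, T)]
    X = np.array(rows)
    theta, *_ = np.linalg.lstsq(X, y[order_b - 1:], rcond=None)
    Theta = theta.reshape(order_b, K)
    U, s, Vt = np.linalg.svd(Theta)
    b = U[:, 0] * np.sqrt(s[0])     # linear dynamic block taps
    c = Vt[0] * np.sqrt(s[0])       # static nonlinearity weights
    return b, c
```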
Abstract:
The ability to run General Circulation Models (GCMs) at ever-higher horizontal resolutions has meant that tropical cyclone simulations are increasingly credible. A hierarchy of atmosphere-only GCMs, based on the Hadley Centre Global Environmental Model (HadGEM1), with horizontal resolution increasing from approximately 270 km to 60 km (at 50°N), is used to systematically investigate the impact of spatial resolution on the simulation of global tropical cyclone activity, independent of model formulation. Tropical cyclones are extracted from ensemble simulations and reanalyses of comparable resolutions using a feature-tracking algorithm. Resolution is critical for simulating storm intensity, and convergence to observed storm intensities is not achieved with the model hierarchy. Resolution is less critical for simulating the annual number of tropical cyclones and their geographical distribution, which are well captured at resolutions of 135 km or higher, particularly for Northern Hemisphere basins. Simulating the interannual variability of storm occurrence requires resolutions of 100 km or higher; however, the level of skill is basin dependent. Higher-resolution GCMs are increasingly able to capture the interannual variability of the large-scale environmental conditions that contribute to tropical cyclogenesis. Different environmental factors contribute to the interannual variability of tropical cyclones in the different basins: in the North Atlantic basin the vertical wind shear, potential intensity and low-level absolute vorticity are dominant, while in the North Pacific basins mid-level relative humidity and low-level absolute vorticity are dominant. Model resolution is crucial for a realistic simulation of tropical cyclone behaviour, and high-resolution GCMs are found to be valuable tools for investigating the global location and frequency of tropical cyclones.
Abstract:
Statistical methods of inference typically require the likelihood function to be computable in a reasonable amount of time. The class of “likelihood-free” methods termed Approximate Bayesian Computation (ABC) removes this requirement, replacing the evaluation of the likelihood with simulation from it. Likelihood-free methods have gained in efficiency and popularity in the past few years, following their integration with Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC) methods in order to better explore the parameter space. They have been applied primarily to estimating the parameters of a given model, but can also be used to compare models. Here we present novel likelihood-free approaches to model comparison, based upon the independent estimation of the evidence of each model under study. Key advantages of these approaches over previous techniques are that they allow the exploitation of MCMC or SMC algorithms for exploring the parameter space, and that they do not require a sampler able to mix between models. We validate the proposed methods using a simple exponential-family problem before considering a realistic problem from human population genetics: the comparison of different demographic models based upon genetic data from the Y chromosome.
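To make the evidence-estimation idea concrete, the toy sketch below estimates a model's evidence with plain ABC rejection, i.e. the fraction of prior-predictive simulations whose summary statistics fall within a tolerance of the observed summary, and compares two models through the ratio of such estimates. The paper's approaches replace this naive rejection step with MCMC/SMC exploration of the parameter space; the function names and tolerance here are purely illustrative.

```python
import numpy as np

def abc_evidence(simulate, summarize, observed_summary, n_sims=10000, tol=0.1,
                 rng=None):
    """Crude ABC estimate of one model's evidence: the proportion of prior-predictive
    simulations whose summary statistics land within `tol` of the observed summary."""
    rng = rng or np.random.default_rng(1)
    hits = 0
    for _ in range(n_sims):
        data = simulate(rng)  # draw theta ~ prior, then data | theta
        if np.linalg.norm(summarize(data) - observed_summary) < tol:
            hits += 1
    return hits / n_sims

# Hypothetical usage: compare two toy models by their Bayes factor.
# bf_12 = abc_evidence(sim_model1, summ, s_obs) / abc_evidence(sim_model2, summ, s_obs)
```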
Abstract:
The mechanisms involved in Atlantic meridional overturning circulation (AMOC) decadal variability and predictability over the last 50 years are analysed in the IPSL–CM5A–LR model using historical and initialised simulations. The initialisation procedure uses only nudging towards sea surface temperature anomalies, with a physically based restoring coefficient. When compared to two independent AMOC reconstructions, both the historical and nudged ensemble simulations exhibit skill at reproducing AMOC variations from 1977 onwards, and in particular two maxima occurring around 1978 and 1997 respectively. We argue that one source of skill is related to the large Mount Agung volcanic eruption starting in 1963, which reset an internal 20-year variability cycle of the North Atlantic in the model. This cycle involves the East Greenland Current intensity and the advection of active tracers along the subpolar gyre, which leads to an AMOC maximum around 15 years after the Mount Agung eruption. The 1997 maximum occurs approximately 20 years after the former one. The nudged simulations reproduce this second maximum better than the historical simulations. This is due to the initialisation of a cooling of the convection sites in the 1980s under the effect of a persistent positive phase of the North Atlantic Oscillation (NAO), a feature not captured in the historical simulations. Hence we argue that the 20-year cycle excited by the 1963 Mount Agung eruption and the NAO forcing both contributed to the 1990s AMOC maximum. These results support the existence of a 20-year cycle in the North Atlantic in the observations. Hindcasts following the CMIP5 protocol are launched from a nudged simulation every 5 years for the 1960–2005 period. They exhibit significant correlation skill scores against an independent reconstruction of the AMOC from a 4-year lead-time average onwards. This encouraging result is accompanied by increased correlation skill in reproducing the observed 2-m air temperature in the regions bordering the North Atlantic, as compared to non-initialised simulations. To a lesser extent, predicted precipitation tends to correlate with the nudged simulation in the tropical Atlantic. We argue that this skill is due to the initialisation and predictability of the AMOC in the present prediction system. The mechanisms evidenced here support the idea of volcanic eruptions as a pacemaker for internal variability of the AMOC. Together with the existence of a 20-year cycle in the North Atlantic, they offer a novel and complementary explanation for the AMOC variations over the last 50 years.
Abstract:
Adherence of pathogenic Escherichia coli and Salmonella spp. to host cells is in part mediated by curli fimbriae which, along with other virulence determinants, are positively regulated by RpoS. Interested in the role and regulation of curli (SEF17) fimbriae of Salmonella enteritidis in poultry infection, we tested the virulence of naturally occurring S. enteritidis PT4 strains 27655R and 27655S, which displayed constitutive and null expression of curli (SEF17) fimbriae respectively, in a chick invasion assay, and analysed their rpoS alleles. Both strains were shown to be equally invasive, and as invasive as a wild-type phage type 4 strain and an isogenic derivative defective for the elaboration of curli. We showed that the rpoS allele of 27655S was intact even though this strain was non-curliated, and we confirmed that an S. enteritidis rpoS::str(r) null mutant was unable to express curli, as anticipated. Strain 27655R, constitutively curliated, possessed a frameshift mutation at position 697 of the rpoS coding sequence which resulted in a truncated product, and it remained curliated even when transduced to rpoS::str(r). Additionally, rpoS mutants are known to be cold-sensitive, a phenotype confirmed for strain 27655R. Collectively, these data indicated that curliation was not a significant factor for pathogenesis of S. enteritidis in this model and that curliation of strains 27655R and 27655S was independent of RpoS. Significantly, strain 27655R possessed a defective rpoS allele and remained virulent. This provides evidence supporting the concept that different naturally occurring rpoS alleles may generate varying virulence phenotypes. © 1998 Federation of European Microbiological Societies. Published by Elsevier Science B.V. All rights reserved.
Abstract:
In this paper we report on a study conducted using the Middle Atmospheric Nitrogen TRend Assessment (MANTRA) balloon measurements of stratospheric constituents and temperature and the Canadian Middle Atmosphere Model (CMAM). Three different kinds of data are used to assess the inter-consistency of the combined dataset: single profiles of long-lived species from MANTRA 1998, sparse climatologies from the ozonesonde measurements during the four MANTRA campaigns and from HALOE satellite measurements, and the CMAM climatology. In doing so, we evaluate the ability of the model to reproduce the measured fields and thereby test our ability to describe mid-latitude summertime stratospheric processes. The MANTRA campaigns were conducted at Vanscoy, Saskatchewan, Canada (52° N, 107° W) in late August and early September of 1998, 2000, 2002 and 2004. During late summer at mid-latitudes, the stratosphere is close to photochemical control, providing an ideal scenario for the study reported here. From this analysis we find that: (1) reducing the value of the vertical diffusion coefficient in CMAM to a more physically reasonable value results in the model better reproducing the measured profiles of long-lived species; (2) the existence of compact correlations among the constituents, as expected from independent measurements in the literature and from models, confirms the self-consistency of the MANTRA measurements; and (3) the 1998 measurements show structures in the chemical species profiles that can be associated with transport, adding to the growing evidence that the summertime stratosphere can be much more disturbed than anticipated. The mechanisms responsible for such disturbances need to be understood in order to assess the representativeness of the measurements and to isolate long-term trends.
Abstract:
Pitch-angle scattering of electrons can limit the stably trapped particle flux in the magnetosphere and precipitate energetic electrons into the ionosphere. Whistler-mode waves generated by a temperature anisotropy can mediate this pitch-angle scattering over a wide range of radial distances and latitudes, but in order to correctly predict the phase-space diffusion it is important to characterise the whistler-mode wave distributions that result from the instability. We use previously published observations of number density, pitch-angle anisotropy and phase space density to model the plasma in the quiet pre-noon magnetosphere (defined as periods when AE < 100 nT). We investigate the global propagation and growth of whistler-mode waves by studying millions of growing ray paths and demonstrate that the wave distribution at any one location is a superposition of many waves at different points along their trajectories and with different histories. We show that for the observed electron plasma properties very few ray paths undergo magnetospheric reflection; most rays grow and decay within 30° of the magnetic equator. The frequency range of the wave distribution at large L can be adequately described by the solutions of the local dispersion relation, but the range of wave normal angles is different. The wave distribution is asymmetric with respect to the wave normal angle. The numerical results suggest that it is important to determine the variation of magnetospheric parameters as a function of latitude, as well as of local time and L-shell.
Abstract:
A novel two-stage construction algorithm for linear-in-the-parameters classifiers is proposed, aimed at noisy two-class classification problems. The purpose of the first stage is to produce a prefiltered signal that is used as the desired output for the second stage, which constructs a sparse linear-in-the-parameters classifier. For the first-stage learning that generates the prefiltered signal, a two-level algorithm is introduced to maximise the model's generalisation capability: an elastic-net model identification algorithm using singular value decomposition is employed at the lower level, while the two regularisation parameters are selected by maximising the Bayesian evidence using a particle swarm optimization algorithm. Analysis is provided to demonstrate how “Occam's razor” is embodied in this approach. The second stage of sparse classifier construction is based on orthogonal forward regression with the D-optimality algorithm. Extensive experimental results demonstrate that the proposed approach is effective and yields competitive results for noisy data sets.
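As a rough illustration of why an SVD is attractive inside an evidence-driven search over regularisation parameters, the sketch below evaluates the ridge (L2) part of an elastic-net fit for many regularisation values from a single SVD of the design matrix. The L1 part of the elastic net, the Bayesian-evidence criterion and the particle swarm search described in the abstract are deliberately omitted, so this is only a partial, assumed reading of the first-stage machinery.

```python
import numpy as np

def ridge_path_via_svd(X, y, lambdas):
    """Reuse one SVD of the design matrix to solve ridge regression for many
    regularisation parameters: w(lam) = V diag(s / (s^2 + lam)) U^T y."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Uty = U.T @ y
    solutions = []
    for lam in lambdas:
        w = Vt.T @ (s / (s ** 2 + lam) * Uty)
        solutions.append(w)
    return np.array(solutions)
```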
Abstract:
Tests of the new Rossby wave theories that have been developed over the past decade to account for discrepancies between theoretical wave speeds and those observed by satellite altimeters have focused primarily on the surface signature of such waves. It appears, however, that the surface signature of the waves acts only as a rather weak constraint, and that information on the vertical structure of the waves is required to better discriminate between competing theories. Due to the lack of 3-D observations, this paper uses high-resolution model data to construct realistic vertical structures of Rossby waves and compares these to structures predicted by theory. The meridional velocity of a section at 24° S in the Atlantic Ocean is pre-processed using the Radon transform to select the dominant westward signal. Normalized profiles are then constructed using three complementary methods based respectively on: (1) averaging vertical profiles of velocity, (2) diagnosing the amplitude of the Radon transform of the westward propagating signal at different depths, and (3) EOF analysis. These profiles are compared to profiles calculated using four different Rossby wave theories: standard linear theory (SLT), SLT plus mean flow, SLT plus topographic effects, and theory including mean flow and topographic effects. Our results support the classical theoretical assumption that westward propagating signals have a well-defined vertical modal structure associated with a phase speed independent of depth, in contrast with the conclusions of a recent study using the same model but for different locations in the North Atlantic. The model structures are in general surface intensified, with a sign reversal at depth in some regions, notably occurring at shallower depths in the East Atlantic. SLT provides a good fit to the model structures in the top 300 m, but grossly overestimates the sign reversal at depth. The addition of mean flow slightly improves the latter issue, but is too surface intensified. SLT plus topography rectifies the overestimation of the sign reversal, but overestimates the amplitude of the structure for much of the layer above the sign reversal. Combining the effects of mean flow and topography provided the best fit for the mean model profiles, although small errors at the surface and mid-depths are carried over from the individual effects of mean flow and topography respectively. Across the section the best fitting theory varies between SLT plus topography and topography with mean flow, with, in general, SLT plus topography performing better in the east where the sign reversal is less pronounced. None of the theories could accurately reproduce the deeper sign reversals in the west. All theories performed badly at the boundaries. The generalization of this method to other latitudes, oceans, models and baroclinic modes would provide greater insight into the variability in the ocean, while better observational data would allow verification of the model findings.
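As an illustration of method (3), the sketch below extracts a normalised vertical structure as the leading EOF of a depth-by-time matrix of meridional velocity anomalies at one longitude. The Radon-transform pre-filtering of the westward-propagating signal and the paper's exact normalisation convention are not reproduced here, so this is a simplified, assumed version of the procedure.

```python
import numpy as np

def leading_vertical_eof(v_section):
    """Given meridional velocity anomalies v_section with shape (n_depth, n_time) at one
    longitude (assumed already filtered for the westward-propagating signal), return the
    leading EOF as a vertical structure."""
    anomalies = v_section - v_section.mean(axis=1, keepdims=True)
    # EOFs are the left singular vectors of the depth-by-time anomaly matrix.
    U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
    eof1 = U[:, 0]
    # One possible normalisation: surface value set to 1 (assumes a non-zero surface value;
    # the paper's convention is not stated in the abstract).
    return eof1 / eof1[0]
```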
Abstract:
We derive simple analytic expressions for the continuum light curves and spectra of flaring and flickering events that occur over a wide range of astrophysical systems. We compare these results to data taken from the cataclysmic variable SS Cygni and also from SN 1987A, deriving physical parameters for the material involved. Fits to the data indicate a nearly time-independent photospheric temperature arising from the strong temperature dependence of opacity when hydrogen is partially ionized.
Abstract:
An extensive off-line evaluation of the Noah/Single Layer Urban Canopy Model (Noah/SLUCM) urban land-surface model is presented, using data from 15 sites to assess (1) the ability of the scheme to reproduce the surface energy balance observed in a range of urban environments, including seasonal changes, and (2) the impact of increasing complexity of input parameter information. Model performance is found to be most dependent on the representation of vegetated surface area cover; refinement of other parameter values leads to smaller improvements. Model biases in net all-wave radiation and trade-offs between turbulent heat fluxes are highlighted using an optimization algorithm. Here we use the Urban Zones to characterize Energy partitioning (UZE) as the basis to assign default SLUCM parameter values. A methodology (FRAISE) to assign sites (or areas) to one of these categories based on surface characteristics is evaluated. Using three urban sites from the Basel Urban Boundary Layer Experiment (BUBBLE) dataset, an independent evaluation of the model performance with the parameter values representative of each class is performed. The scheme copes well with both seasonal changes in the surface characteristics and intra-urban heterogeneities in energy flux partitioning, with RMSE performance comparable to similar state-of-the-art models for all fluxes, sites and seasons. The potential of the methodology for high-resolution atmospheric modelling applications using the Weather Research and Forecasting (WRF) model is highlighted. This analysis supports the recommendations that (1) three classes are appropriate to characterize the urban environment, and (2) the parameter values identified should be adopted as default values in WRF.