961 results for short-range ordering


Relevance:

80.00%

Publisher:

Abstract:

A partial phase diagram is constructed for diblock copolymer melts using lattice-based Monte Carlo simulations. This is done by locating the order-disorder transition (ODT) with the aid of a recently proposed order parameter and identifying the ordered phase over a wide range of copolymer compositions (0.2 ≤ f ≤ 0.8). Consistent with experiments, the disordered phase is found to exhibit direct first-order transitions to each of the ordered morphologies. This includes the spontaneous formation of a perforated-lamellar phase, which presumably forms in place of the gyroid morphology due to finite-size and/or nonequilibrium effects. Also included in our study is a detailed examination of disordered cylinder-forming (f=0.3) diblock copolymers, revealing a substantial degree of pretransitional chain stretching and short-range order that set in well before the ODT, as observed previously in analogous studies on lamellar-forming (f=0.5) molecules. © 2006 American Institute of Physics.
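
A generic way to quantify pretransitional short-range order of the kind described above is through the structure factor S(q) of the monomer positions, whose peak grows as the ODT is approached. The sketch below is a minimal NumPy illustration of that probe, not the order parameter proposed in the paper; the box size, wavevectors, and random configuration are all hypothetical.

```python
import numpy as np

def structure_factor(positions, q_vectors):
    """S(q) = |sum_j exp(i q . r_j)|^2 / N for a set of monomer positions."""
    N = len(positions)
    phases = q_vectors @ positions.T          # (n_q, N) array of q . r_j
    amp = np.exp(1j * phases).sum(axis=1)     # coherent sum over monomers
    return (np.abs(amp) ** 2) / N

# Example: random (disordered) positions in a cubic box of side L,
# probed at a few box-commensurate wavevectors.
rng = np.random.default_rng(0)
L = 32.0
pos = rng.uniform(0, L, size=(1000, 3))
qs = 2 * np.pi / L * np.array([[1, 0, 0], [2, 0, 0], [0, 1, 0]])
print(structure_factor(pos, qs))              # ~1 for an uncorrelated gas
```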

Relevance:

80.00%

Publisher:

Abstract:

The ECMWF full-physics and dry singular vector (SV) packages, using a dry energy norm and a 1-day optimization time, are applied to four high impact European cyclones of recent years that were almost universally badly forecast in the short range. It is shown that these full-physics SVs are much more relevant to severe cyclonic development than those based on dry dynamics plus boundary layer alone. The crucial extra ingredient is the representation of large-scale latent heat release. The severe winter storms all have a long, nearly straight region of high baroclinicity stretching across the Atlantic towards Europe, with a tongue of very high moisture content on its equatorward flank. In each case some of the final-time top SV structures pick out the region of the actual storm. The initial structures were generally located in the mid- to low troposphere. Forecasts based on initial conditions perturbed by moist SVs with opposite signs and various amplitudes show the range of possible 1-day outcomes for reasonable magnitudes of forecast error. In each case one of the perturbation structures gave a forecast very much closer to the actual storm than the control forecast. Deductions are made about the predictability of high-impact extratropical cyclone events. Implications are drawn for the short-range forecast problem and suggestions made for one practicable way to approach short-range ensemble forecasting. Copyright © 2005 Royal Meteorological Society.
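
For orientation, singular vectors are the perturbations that grow fastest over the optimization time as measured in a chosen norm; they can be obtained from an ordinary SVD once the propagator is expressed in the metric of that norm. A minimal sketch under that framing, with a small random matrix M standing in for the tangent-linear propagator and E for the energy-norm metric (neither taken from the ECMWF system):

```python
import numpy as np

def leading_singular_vectors(M, E, k=3):
    """Initial-time singular vectors of propagator M under norm E:
    maximize x^T M^T E M x subject to x^T E x = 1, by working in the
    transformed variable z = E^{1/2} x and taking an ordinary SVD."""
    w, V = np.linalg.eigh(E)                       # E is symmetric pos.-def.
    E_half = V @ np.diag(np.sqrt(w)) @ V.T
    E_ihalf = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    A = E_half @ M @ E_ihalf                       # propagator in the E-metric
    U, s, Vt = np.linalg.svd(A)
    # leading k initial-time SVs mapped back to physical space
    return E_ihalf @ Vt[:k].T, s[:k]

rng = np.random.default_rng(1)
n = 6
M = rng.standard_normal((n, n))                    # toy tangent-linear model
B = rng.standard_normal((n, n))
E = B @ B.T + n * np.eye(n)                        # toy energy-norm metric
vecs, svals = leading_singular_vectors(M, E)
print(svals)                                       # growth factors, descending
```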

Relevance:

80.00%

Publisher:

Abstract:

The influence matrix is used in ordinary least-squares applications for monitoring statistical multiple-regression analyses. Concepts related to the influence matrix provide diagnostics on the influence of individual data on the analysis - the analysis change that would occur by leaving one observation out, and the effective information content (degrees of freedom for signal) in any sub-set of the analysed data. In this paper, the corresponding concepts have been derived in the context of linear statistical data assimilation in numerical weather prediction. An approximate method to compute the diagonal elements of the influence matrix (the self-sensitivities) has been developed for a large-dimension variational data assimilation system (the four-dimensional variational system of the European Centre for Medium-Range Weather Forecasts). Results show that, in the boreal spring 2003 operational system, 15% of the global influence is due to the assimilated observations in any one analysis, and the complementary 85% is the influence of the prior (background) information, a short-range forecast containing information from earlier assimilated observations. About 25% of the observational information is currently provided by surface-based observing systems, and 75% by satellite systems. Low-influence data points usually occur in data-rich areas, while high-influence data points are in data-sparse areas or in dynamically active regions. Background-error correlations also play an important role: high correlation diminishes the observation influence and amplifies the importance of the surrounding real and pseudo observations (prior information in observation space). Incorrect specifications of background and observation-error covariance matrices can be identified, interpreted and better understood by the use of influence-matrix diagnostics for the variety of observation types and observed variables used in the data assimilation system. Copyright © 2004 Royal Meteorological Society
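
In the ordinary least-squares setting that the abstract starts from, the influence matrix is the hat matrix S = X(XᵀX)⁻¹Xᵀ: its diagonal entries are the self-sensitivities, and its trace is the effective number of fitted parameters (the analogue of degrees of freedom for signal). A minimal sketch with synthetic regressors; the variational-assimilation analogue built from the gain matrix is not reproduced here.

```python
import numpy as np

# Hat (influence) matrix for OLS: S = X (X^T X)^{-1} X^T.
# diag(S) gives the self-sensitivities; trace(S) the degrees of freedom
# absorbed by the fit.
rng = np.random.default_rng(2)
n, p = 50, 3                                   # hypothetical sample and model size
X = rng.standard_normal((n, p))
S = X @ np.linalg.inv(X.T @ X) @ X.T
self_sens = np.diag(S)
print("trace(S) =", np.trace(S))               # equals p here
print("max self-sensitivity:", self_sens.max())
```

High self-sensitivity points are those the fit cannot afford to lose, which is the OLS counterpart of the high-influence observations in data-sparse or dynamically active regions described above.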

Relevance:

80.00%

Publisher:

Abstract:

A novel type of tweezer molecule containing electron-rich 2-pyrenyloxy arms has been designed to exploit intramolecular hydrogen bonding in stabilising a preferred conformation for supramolecular complexation to complementary sequences in aromatic copolyimides. This tweezer conformation is demonstrated by single-crystal X-ray analyses of the tweezer molecule itself and of its complex with an aromatic diimide model compound. In terms of its ability to bind selectively to polyimide chains, the new tweezer molecule shows very high sensitivity to sequence effects. Thus, even low concentrations of tweezer relative to diimide units (<2.5 mol%) are sufficient to produce dramatic, sequence-related splittings of the pyromellitimide proton NMR resonances. These induced resonance shifts arise from ring-current shielding of pyromellitimide protons by the pyrenyloxy arms of the tweezer molecule, and the magnitude of such shielding is a function of the tweezer binding constant for any particular monomer sequence. Recognition of both short-range and long-range sequences is observed, the latter arising from cumulative ring-current shielding of diimide protons by tweezer molecules binding at multiple adjacent sites on the copolymer chain.

Relevance:

80.00%

Publisher:

Abstract:

Enantio-specific interactions on intrinsically chiral or chirally modified surfaces can be identified experimentally via comparison of the adsorption geometries of similar nonchiral and chiral molecules. Information about the effects of substrate-related and intermolecular interactions on the adsorption geometry of glycine, the only natural nonchiral amino acid, is therefore important for identifying enantio-specific interactions of larger chiral amino acids. We have studied the long- and short-range adsorption geometry and bonding properties of glycine on the intrinsically chiral Cu{531} surface with low-energy electron diffraction, near-edge X-ray absorption fine structure spectroscopy, X-ray photoelectron spectroscopy, and temperature-programmed desorption. For coverages between 0.15 and 0.33 ML (saturated chemisorbed layer) and temperatures between 300 and 430 K, glycine molecules adsorb in two different azimuthal orientations, which are associated with adsorption sites on the {110} and {311} microfacets of Cu{531}. Both types of adsorption sites allow a triangular footprint with surface bonds through the two oxygen atoms and the nitrogen atom. The occupation of the two adsorption sites is equal for all coverages, which can be explained by pair formation due to similar site-specific adsorption energies and the possibility of forming hydrogen bonds between molecules on adjacent {110} and {311} sites. This is not the case for alanine; this points toward higher site specificity in the case of alanine, which is eventually responsible for the enantiomeric differences observed for the alanine system.

Relevance:

80.00%

Publisher:

Abstract:

A 24-member ensemble of 1-h high-resolution forecasts over the southern United Kingdom is used to study short-range forecast error statistics. The initial conditions are generated from perturbations produced by an ensemble transform Kalman filter. Forecasts from this system are assumed to lie within the bounds of forecast error of an operational forecast system. Although noisy, this system is capable of producing physically reasonable statistics, which are analysed and compared with statistics implied by a variational assimilation system. The variances of temperature errors, for instance, show structures that reflect convective activity. Some variables, notably potential temperature and specific humidity perturbations, have autocorrelation functions that deviate from 3-D isotropy at the convective scale (horizontal scales less than 10 km). Other variables, notably the velocity potential for horizontal divergence perturbations, maintain 3-D isotropy at all scales. Geostrophic and hydrostatic balances are studied by examining correlations between terms in the divergence and vertical momentum equations, respectively. Both balances are found to decay as the horizontal scale decreases. It is estimated that geostrophic balance becomes less important at scales smaller than 75 km, and hydrostatic balance becomes less important at scales smaller than 35 km, although more work is required to validate these findings. The implications of these results for high-resolution data assimilation are discussed.
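
As a schematic of the kind of statistics being estimated, the sketch below computes pointwise error variance and a horizontally averaged autocorrelation function from an ensemble of perturbations. The 24-member toy fields are random walks, not model output, and the grid and lags are hypothetical.

```python
import numpy as np

def ensemble_stats(field):
    """field: (n_members, nx) ensemble of one variable along a horizontal axis.
    Returns the pointwise ensemble variance and the spatially averaged
    autocorrelation as a function of grid separation."""
    pert = field - field.mean(axis=0)              # remove the ensemble mean
    var = pert.var(axis=0, ddof=1)
    nx = field.shape[1]
    corr = []
    for lag in range(nx // 2):
        a = pert[:, : nx - lag].ravel()
        b = pert[:, lag:].ravel()
        corr.append(np.corrcoef(a, b)[0, 1])
    return var, np.array(corr)

rng = np.random.default_rng(3)
members = rng.standard_normal((24, 128)).cumsum(axis=1)  # toy correlated fields
var, corr = ensemble_stats(members)
print(corr[:5])                                          # decays from 1 with lag
```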

Relevance:

80.00%

Publisher:

Abstract:

The dispersion of a point-source release of a passive scalar in a regular array of cubical, urban-like obstacles is investigated by means of direct numerical simulations. The simulations are conducted under conditions of neutral stability and fully rough turbulent flow, at a roughness Reynolds number of Reτ = 500. The Navier–Stokes and scalar equations are integrated assuming a constant-rate release from a point source close to the ground within the array. We focus on short-range dispersion, when most of the material is still within the building canopy. Mean and fluctuating concentrations are computed for three different pressure gradient directions (0°, 30°, 45°). The results agree well with available experimental data measured in a water channel for a flow angle of 0°. Profiles of mean concentration and the three-dimensional structure of the dispersion pattern are compared for the different forcing angles. A number of processes affecting the plume structure are identified and discussed, including: (i) advection or channelling of scalar down ‘streets’, (ii) lateral dispersion by turbulent fluctuations and topological dispersion induced by dividing streamlines around buildings, (iii) skewing of the plume due to flow turning with height, (iv) detrainment by turbulent dispersion or mean recirculation, (v) entrainment and release of scalar in building wakes, giving rise to ‘secondary sources’, and (vi) plume meandering due to unsteady turbulent fluctuations. Finally, results on relative concentration fluctuations are presented and compared with the literature for point-source dispersion over flat terrain and urban arrays.
Keywords: Direct numerical simulation · Dispersion modelling · Urban array
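
The relative concentration fluctuation reported at the end is simply the ratio of the standard deviation to the mean of the instantaneous concentration at a receptor. A trivial sketch with a synthetic, plume-like (lognormal) time series standing in for DNS output:

```python
import numpy as np

def fluctuation_intensity(c):
    """Relative concentration fluctuation i = sigma_c / <c> at a receptor,
    from a time series c(t) of instantaneous concentration."""
    c = np.asarray(c, dtype=float)
    return c.std(ddof=1) / c.mean()

rng = np.random.default_rng(4)
c = rng.lognormal(mean=0.0, sigma=0.8, size=10_000)   # skewed, plume-like series
print(fluctuation_intensity(c))
```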

Relevance:

80.00%

Publisher:

Abstract:

Classical strong-stretching theory (SST) predicts that, as opposing polyelectrolyte brushes are compressed together in a salt-free theta solvent, they contract so as to maintain a finite polymer-free gap, which offers a potential explanation for the ultra-low frictional forces observed in experiments even with the application of large normal forces. However, the SST ignores chain fluctuations, which would tend to close the gap resulting in physical contact and in turn significant friction. In a preceding study, we examined the effect of fluctuations using self-consistent field theory (SCFT) and illustrated that high normal forces can still be applied before the gap is destroyed. We now look at the effect of adding salt. It is found to reduce the long-range interaction between the brushes but has little effect on the short-range part, provided the concentration does not enter the salted-brush regime. Consequently, the maximum normal force between two planar brushes at the point of contact is remarkably unaffected by salt. For the crossed-cylinder geometry commonly used in experiments, however, there is a gradual reduction because in this case the long-range part of the interaction contributes to the maximum normal force.
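
The geometric point in the final sentence can be made concrete with the Derjaguin approximation, which maps the crossed-cylinder force onto the integral of the flat-plate interaction and therefore picks up its long-range tail. The sketch below uses a toy exponentially screened repulsion, not the brush force profiles computed in the paper; all magnitudes are hypothetical.

```python
import numpy as np

def crossed_cylinder_force(f, D, R, h_max=200.0, n=20_000):
    """Derjaguin approximation for equal crossed cylinders of radius R:
    F(D) = 2*pi*R * W(D), with W(D) = integral_D^inf f(h) dh, where f is
    the force per unit area between flat plates."""
    h = np.linspace(D, h_max, n)
    fh = f(h)
    W = np.sum(0.5 * (fh[:-1] + fh[1:]) * np.diff(h))   # trapezoidal rule
    return 2 * np.pi * R * W

# Toy screened repulsion: adding salt shortens the screening length lam,
# which reduces the integrated (long-range) contribution.
for lam in (10.0, 5.0):
    F = crossed_cylinder_force(lambda h: np.exp(-h / lam), D=1.0, R=1000.0)
    print(f"lam = {lam}: F = {F:.1f}")
```

Shortening the screening length reduces the integrated force here even when the short-range part is untouched, mirroring the planar-versus-crossed-cylinder contrast described above.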

Relevance:

80.00%

Publisher:

Abstract:

By eliminating the short-range negative divergence of the Debye–Hückel pair distribution function, but retaining the exponential charge screening known to operate at large interparticle separation, the thermodynamic properties of one-component plasmas of point ions or charged hard spheres can be well represented even in the strong coupling regime. Predicted electrostatic free energies agree to within 5% with simulation data for typical Coulomb interactions up to a factor of 10 times the average kinetic energy. Here, this idea is extended to the general case of a uniform ionic mixture, comprising an arbitrary number of components, embedded in a rigid neutralizing background. The new theory is implemented in two ways: (i) by an unambiguous iterative algorithm that requires numerical methods and breaks the symmetry of cross correlation functions; and (ii) by invoking generalized matrix inverses that maintain symmetry and yield completely analytic solutions, but which are not uniquely determined. The extreme computational simplicity of the theory is attractive when considering applications to complex inhomogeneous fluids of charged particles.
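
To see the divergence being removed: the linearized Debye–Hückel pair distribution dips below zero at small separations when the coupling is strong, whereas an exponential (nonlinear) form keeps the same screened tail but remains non-negative. A sketch in reduced units; the specific regularized form used in the paper is not reproduced here.

```python
import numpy as np

# Linearized Debye-Huckel pair distribution for like charges,
# g_DH(r) = 1 - (Gamma/r) * exp(-r/lambda_D) (reduced units), goes
# negative at small r for strong coupling Gamma. The exponential form
# g(r) = exp(-(Gamma/r) * exp(-r/lambda_D)) keeps the screened tail
# but stays non-negative everywhere.
r = np.linspace(0.05, 10.0, 400)
Gamma, lam = 5.0, 1.0                          # strong-coupling example
g_lin = 1.0 - (Gamma / r) * np.exp(-r / lam)
g_exp = np.exp(-(Gamma / r) * np.exp(-r / lam))
print("min of linearized g(r): ", g_lin.min())   # clearly negative
print("min of exponential g(r):", g_exp.min())   # >= 0
```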

Relevance:

80.00%

Publisher:

Abstract:

We study the feasibility of using the singular vector technique to create initial condition perturbations for short-range ensemble prediction systems (SREPS), focusing on the predictability of severe local storms and in particular deep convection. For this purpose a new final-time semi-norm based on the convective available potential energy (CAPE) is introduced. We compare singular vectors using the CAPE norm with SVs using the more common total energy (TE) norm for a 2-week summer period in 2007, which includes a case of mesoscale extreme rainfall in the south west of Finland. The CAPE singular vectors perturb the CAPE field by increasing the specific humidity and temperature of the parcel and increasing the lapse rate above the parcel in the lower troposphere, consistent with physical considerations. The CAPE-SVs are situated in the lower troposphere. This is in contrast to TE-SVs with short optimization times, which predominantly remain in the upper troposphere. By examining the time evolution of the CAPE singular values we observe that the convective event in the south west of Finland is clearly associated with high CAPE singular values.
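
For orientation, CAPE is the vertical integral of positive parcel buoyancy, which is why a CAPE-based norm responds to parcel temperature and humidity perturbations and to the lapse rate above the parcel. A schematic positive-area integral over toy virtual-temperature profiles follows; the ECMWF semi-norm construction is considerably more involved, and all profiles below are hypothetical.

```python
import numpy as np

def cape(z, tv_parcel, tv_env, g=9.81):
    """CAPE as the vertical integral of positive parcel buoyancy:
    CAPE = integral of g * (Tv_parcel - Tv_env) / Tv_env dz over the
    levels where the parcel is warmer than the environment."""
    buoy = g * (tv_parcel - tv_env) / tv_env
    buoy = np.clip(buoy, 0.0, None)            # keep the positive area only
    return np.sum(0.5 * (buoy[:-1] + buoy[1:]) * np.diff(z))

z = np.linspace(0.0, 12_000.0, 121)                            # height, m
tv_env = 300.0 - 6.5e-3 * z                                    # toy environment
tv_parcel = tv_env + 2.0 * np.exp(-((z - 5000) / 3000) ** 2)   # warm parcel layer
print("CAPE ~", round(cape(z, tv_parcel, tv_env)), "J/kg")
```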

Relevance:

80.00%

Publisher:

Abstract:

With many operational centers moving toward order 1-km-gridlength models for routine weather forecasting, this paper presents a systematic investigation of the properties of high-resolution versions of the Met Office Unified Model for short-range forecasting of convective rainfall events. The authors describe a suite of configurations of the Met Office Unified Model running with grid lengths of 12, 4, and 1 km and analyze results from these models for a number of convective cases from the summers of 2003, 2004, and 2005. The analysis includes subjective evaluation of the rainfall fields and comparisons of rainfall amounts, initiation, cell statistics, and a scale-selective verification technique. It is shown that the 4- and 1-km-gridlength models often give more realistic-looking precipitation fields because convection is represented explicitly rather than parameterized. However, the 4-km model representation suffers from large convective cells and delayed initiation because the grid length is too long to correctly reproduce the convection explicitly. These problems are not as evident in the 1-km model, although it does suffer from producing too many small cells in some situations. Both the 4- and 1-km models suffer from poor representation at the start of the forecast in the period when the high-resolution detail is spinning up from the lower-resolution (12 km) starting data used. A scale-selective precipitation verification technique implies that for later times in the forecasts (after the spinup period) the 1-km model performs better than the 12- and 4-km models for lower rainfall thresholds. For higher thresholds the 4-km model scores almost as well as the 1-km model, and both do better than the 12-km model.
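
The abstract does not name the scale-selective verification technique; a widely used choice for exactly this task is the fractions skill score (FSS), which compares neighbourhood exceedance fractions of forecast and observed rain at increasing spatial scales. The sketch below is written under that assumption, with synthetic rain fields and a displaced "forecast".

```python
import numpy as np

def fractions(binary, n):
    """Mean of a binary field over (2n+1)x(2n+1) neighbourhoods (edge-padded)."""
    padded = np.pad(binary.astype(float), n, mode="edge")
    out = np.zeros(binary.shape, dtype=float)
    k = 2 * n + 1
    for i in range(binary.shape[0]):
        for j in range(binary.shape[1]):
            out[i, j] = padded[i : i + k, j : j + k].mean()
    return out

def fss(forecast, observed, threshold, n):
    """Fractions skill score: 1 - MSE(fractions) / reference MSE."""
    f = fractions(forecast >= threshold, n)
    o = fractions(observed >= threshold, n)
    mse = np.mean((f - o) ** 2)
    ref = np.mean(f ** 2) + np.mean(o ** 2)
    return 1.0 - mse / ref if ref > 0 else np.nan

rng = np.random.default_rng(5)
obs = rng.gamma(2.0, 1.0, size=(64, 64))      # toy rain field
fcst = np.roll(obs, 3, axis=1)                # spatially displaced forecast
print([round(fss(fcst, obs, 2.0, n), 3) for n in (0, 2, 8)])
```

The score rises with neighbourhood size for a displaced but otherwise correct field, which is the behaviour that makes comparisons across grid lengths and thresholds meaningful.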

Relevance:

80.00%

Publisher:

Abstract:

Progress in functional neuroimaging of the brain increasingly relies on the integration of data from complementary imaging modalities in order to improve spatiotemporal resolution and interpretability. However, the usefulness of merely statistical combinations is limited, since neural signal sources differ between modalities and are related non-trivially. We demonstrate here that a mean field model of brain activity can simultaneously predict EEG and fMRI BOLD with proper signal generation and expression. Simulations are shown using a realistic head model based on structural MRI, which includes both dense short-range background connectivity and long-range specific connectivity between brain regions. The distribution of modeled neural masses is comparable to the spatial resolution of fMRI BOLD, and the temporal resolution of the modeled dynamics, importantly including activity conduction, matches the fastest known EEG phenomena. The creation of a cortical mean field model with anatomically sound geometry, extensive connectivity, and proper signal expression is an important first step towards the model-based integration of multimodal neuroimages.
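
As a cartoon of the modelling idea only (emphatically not the authors' mean field model): one rate unit per region coupled through a connectivity matrix, with the fast summed activity standing in for an EEG-like readout and a sluggish low-pass of the same activity standing in for a BOLD-like signal. All parameters and the two-region connectivity below are hypothetical.

```python
import numpy as np

def simulate(C, T=2000, dt=1e-3, tau=0.01, tau_bold=2.0):
    """Toy mean-field sketch: one rate unit per region, coupled via C.
    Fast dynamics give an EEG-like trace; a slow low-pass of the same
    activity gives a BOLD-like readout."""
    n = C.shape[0]
    r = np.zeros(n)                                # population rates
    bold = np.zeros(n)                             # sluggish haemodynamic proxy
    rng = np.random.default_rng(6)
    eeg_trace = []
    for _ in range(T):
        inp = C @ r + 0.1 * rng.standard_normal(n) # coupled, noisy input
        r += dt / tau * (-r + np.tanh(inp))        # fast rate dynamics
        bold += dt / tau_bold * (-bold + r)        # slow low-pass of activity
        eeg_trace.append(r.sum())                  # crude summed readout
    return np.array(eeg_trace), bold

C = np.array([[0.0, 0.5], [0.5, 0.0]])             # two coupled toy regions
eeg, bold = simulate(C)
print(eeg[-5:], bold)
```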

Relevance:

80.00%

Publisher:

Abstract:

The very first numerical models, developed more than 20 years ago, were drastic simplifications of the real atmosphere and were mostly restricted to describing adiabatic processes. For predictions of a day or two of the mid-tropospheric flow these models often gave reasonable results, but the results deteriorated quickly when the prediction was extended further in time. The prediction of the surface flow was unsatisfactory even for short predictions. It was evident that both the energy-generating processes and the dissipative processes have to be included in numerical models in order to predict the weather patterns in the lower part of the atmosphere and to predict the atmosphere in general beyond a day or two. Present-day computers make it possible to attack the weather forecasting problem in a more comprehensive and complete way, and substantial efforts have been made during the last decade in particular to incorporate the non-adiabatic processes in numerical prediction models. The physics of radiative transfer, condensation of moisture, turbulent transfer of heat, momentum and moisture, and the dissipation of kinetic energy are the most important processes associated with the formation of energy sources and sinks in the atmosphere, and these have to be incorporated in numerical prediction models extending over more than a few days. The mechanisms of these processes are mainly related to small-scale disturbances in space and time, or even to molecular processes. It is therefore one of the basic characteristics of numerical models that these small-scale disturbances cannot be included in an explicit way. One reason for this is the discretization of the model's atmosphere by a finite-difference grid or the use of a Galerkin or spectral function representation. The second reason why we cannot explicitly introduce these processes into a numerical model is that some physical processes necessary to describe them (such as local buoyancy) are a priori eliminated by the constraints of hydrostatic adjustment. Even if this physical constraint can be relaxed by making the models non-hydrostatic, the scale problem is virtually impossible to solve, and for the foreseeable future we have to try to incorporate the ensemble or gross effect of these physical processes on the large-scale synoptic flow. The formulation of this ensemble effect in terms of grid-scale variables (the parameters of the large-scale flow) is called 'parameterization'. For short-range prediction of the synoptic flow at middle and high latitudes, very simple parameterization has proven rather successful.

Relevance:

80.00%

Publisher:

Abstract:

As laid out in its convention, ECMWF has eight different objectives. One of the major objectives is the preparation, on a regular basis, of the data necessary for medium-range weather forecasts. The interpretation of this item is that the Centre will make forecasts once a day for a prediction period of up to 10 days. It is also evident that the Centre should not carry out any real weather forecasting but merely disseminate to the Member Countries the basic forecast parameters with an appropriate resolution in space and time. It follows from this that the forecasting system at the Centre must, from the operational point of view, be functionally integrated with the weather services of the Member Countries. The operational interface between ECMWF and the Member Countries must be properly specified in order to give both systems reasonable flexibility. The problem of making numerical atmospheric predictions for periods beyond 4-5 days differs substantially from 2-3 day forecasting. From the physical point of view we can define a medium-range forecast as a forecast in which the initial disturbances have lost their individual structure. However, we are still interested in predicting the atmosphere in a similar way as in short-range forecasting, which means that the model must be able to predict the dissipation and decay of the initial phenomena and the creation of new ones. With this definition, medium-range forecasting is indeed very difficult and generally regarded as more difficult than extended forecasting, where we usually predict only time and space mean values. The predictability of atmospheric flow has been studied extensively in recent years in theoretical investigations and by numerical experiments. As has been discussed elsewhere in this publication (see pp 338 and 431), a 10-day forecast is apparently on the fringe of predictability.

Relevance:

80.00%

Publisher:

Abstract:

In the last two decades substantial advances have been made in the understanding of the scientific basis of urban climates. These are reviewed here with attention to sustainability of cities, applications that use climate information, and scientific understanding in relation to measurements and modelling. Consideration is given from street (micro) scale to neighbourhood (local) to city and region (meso) scale. Those areas where improvements are needed in the next decade to ensure more sustainable cities are identified. High-priority recommendations are made in the following six strategic areas: observations, data, understanding, modelling, tools and education. These include the need for more operational urban measurement stations and networks; for an international data archive to aid translation of research findings into design tools, along with guidelines for different climate zones and land uses; to develop methods to analyse atmospheric data measured above complex urban surfaces; to improve short-range, high-resolution numerical prediction of weather, air quality and chemical dispersion through improved modelling of the biogeophysical features of the urban land surface; to improve education about urban meteorology; and to encourage communication across scientific disciplines at a range of spatial and temporal scales.