44 results for spin-dependent short-range interaction
Abstract:
An analysis of diabatic heating and moistening processes from 12-36 hour lead time forecasts from 12 Global Circulation Models is presented as part of the "Vertical structure and physical processes of the Madden-Julian Oscillation (MJO)" project. A lead time of 12-36 hours is chosen to constrain the large scale dynamics and thermodynamics to be close to observations while avoiding being too close to the initial spin-up for the models as they adjust to being driven from the YOTC analysis. A comparison of the vertical velocity and rainfall with the observations and YOTC analysis suggests that the phases of convection associated with the MJO are constrained in most models at this lead time, although the rainfall in the suppressed phase is typically overestimated. Although the large scale dynamics is reasonably constrained, moistening and heating profiles have large inter-model spread. In particular, there are large spreads in convective heating and moistening at mid-levels during the transition to active convection. Radiative heating and cloud parameters have the largest relative spread across models at upper levels during the active phase. A detailed analysis of time step behaviour shows that some models exhibit strong intermittency in rainfall, and that the relationship between precipitation and dynamics differs between models. The wealth of model outputs archived during this project is a very valuable resource for model developers beyond the study of the MJO. In addition, the findings of this study can inform the design of process model experiments, and inform the priorities for field experiments and future observing systems.
Abstract:
Four perfluorocarbon tracer dispersion experiments were carried out in central London, United Kingdom in 2004. These experiments were supplementary to the dispersion of air pollution and penetration into the local environment (DAPPLE) campaign and consisted of ground level releases, roof level releases and mobile releases; the latter are believed to be the first such experiments to be undertaken. A detailed description of the experiments including release, sampling, analysis and wind observations is given. The characteristics of dispersion from the fixed and mobile sources are discussed and contrasted, in particular, the decay in concentration levels away from the source location and the additional variability that results from the non-uniformity of vehicle speed. Copyright © 2009 Royal Meteorological Society
Abstract:
At the end of the 20th century, we can look back on a spectacular development of numerical weather prediction, which has, practically uninterrupted, been going on since the middle of the century. High-resolution predictions for more than a week ahead for any part of the globe are now routinely produced and anyone with an Internet connection can access many of these forecasts for anywhere in the world. Extended predictions for several seasons ahead are also being done — the latest El Niño event in 1997/1998 is an example of such a successful prediction. The great achievement is due to a number of factors including the progress in computational technology and the establishment of global observing systems, combined with a systematic research program with an overall strategy towards building comprehensive prediction systems for climate and weather. In this article, I will discuss the different evolutionary steps in this development and the way new scientific ideas have contributed to exploiting the computing power efficiently and to using observations from new types of observing systems. Weather prediction is not an exact science due to unavoidable errors in initial data and in the models. To quantify the reliability of a forecast is therefore essential, and probably more so the longer the forecasts are. Ensemble prediction is thus a new and important concept in weather and climate prediction, which I believe will become a routine aspect of weather prediction in the future. The limit between weather and climate prediction is becoming more and more diffuse and in the final part of this article I will outline the way I think development may proceed in the future.
Abstract:
For many networks in nature, science and technology, it is possible to order the nodes so that most links are short-range, connecting near-neighbours, and relatively few long-range links, or shortcuts, are present. Given a network as a set of observed links (interactions), the task of finding an ordering of the nodes that reveals such a range-dependent structure is closely related to some sparse matrix reordering problems arising in scientific computation. The spectral, or Fiedler vector, approach for sparse matrix reordering has successfully been applied to biological data sets, revealing useful structures and subpatterns. In this work we argue that a periodic analogue of the standard reordering task is also highly relevant. Here, rather than encouraging nonzeros only to lie close to the diagonal of a suitably ordered adjacency matrix, we also allow them to inhabit the off-diagonal corners. Indeed, for the classic small-world model of Watts & Strogatz (1998, Collective dynamics of ‘small-world’ networks. Nature, 393, 440–442) this type of periodic structure is inherent. We therefore devise and test a new spectral algorithm for periodic reordering. By generalizing the range-dependent random graph class of Grindrod (2002, Range-dependent random graphs and their application to modeling large small-world proteome datasets. Phys. Rev. E, 66, 066702-1–066702-7) to the periodic case, we can also construct a computable likelihood ratio that suggests whether a given network is inherently linear or periodic. Tests on synthetic data show that the new algorithm can detect periodic structure, even in the presence of noise. Further experiments on real biological data sets then show that some networks are better regarded as periodic than linear. Hence, we find both qualitative (reordered network plots) and quantitative (likelihood ratios) evidence of periodicity in biological networks.
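The standard (linear) spectral reordering that this abstract generalises can be sketched in a few lines: order the nodes by the Fiedler vector, i.e. the eigenvector of the second-smallest eigenvalue of the graph Laplacian. The sketch below is an illustrative numpy-only version of that classical algorithm, not the authors' periodic variant; all names are mine.

```python
import numpy as np

def fiedler_ordering(A):
    """Order nodes by the Fiedler vector of the graph Laplacian.

    A : symmetric 0/1 adjacency matrix (numpy array).
    Returns a permutation that tends to gather nonzeros near the diagonal.
    """
    L = np.diag(A.sum(axis=1)) - A   # combinatorial Laplacian
    vals, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    fiedler = vecs[:, 1]             # eigenvector of 2nd-smallest eigenvalue
    return np.argsort(fiedler)

def bandwidth(A):
    """Largest |i - j| over nonzero entries: a simple 'range' measure."""
    i, j = np.nonzero(A)
    return int(np.max(np.abs(i - j))) if len(i) else 0

# Demo: a path graph (purely short-range) scrambled by a random permutation.
n = 12
A = np.zeros((n, n))
for k in range(n - 1):
    A[k, k + 1] = A[k + 1, k] = 1.0
rng = np.random.default_rng(0)
p = rng.permutation(n)
A_scrambled = A[np.ix_(p, p)]
q = fiedler_ordering(A_scrambled)
A_reordered = A_scrambled[np.ix_(q, q)]
```

For a path graph the Fiedler vector is monotone along the path, so the ordering recovers the band structure (up to reversal); a periodic analogue would instead allow nonzeros in the off-diagonal corners.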
Abstract:
Classical strong-stretching theory (SST) predicts that, as opposing polyelectrolyte brushes are compressed together in a salt-free theta solvent, they contract so as to maintain a finite polymer-free gap, which offers a potential explanation for the ultra-low frictional forces observed in experiments even with the application of large normal forces. However, the SST ignores chain fluctuations, which would tend to close the gap resulting in physical contact and in turn significant friction. In a preceding study, we examined the effect of fluctuations using self-consistent field theory (SCFT) and illustrated that high normal forces can still be applied before the gap is destroyed. We now look at the effect of adding salt. It is found to reduce the long-range interaction between the brushes but has little effect on the short-range part, provided the concentration does not enter the salted-brush regime. Consequently, the maximum normal force between two planar brushes at the point of contact is remarkably unaffected by salt. For the crossed-cylinder geometry commonly used in experiments, however, there is a gradual reduction because in this case the long-range part of the interaction contributes to the maximum normal force.
Abstract:
The dinuclear complex [{Ru(CN)4}2(μ-bppz)]4− shows a strongly solvent-dependent metal–metal electronic interaction which allows the mixed-valence state to be switched from class 2 to class 3 by changing solvent from water to CH2Cl2. In CH2Cl2 the separation between the successive Ru(II)/Ru(III) redox couples is 350 mV and the IVCT band (from the UV/Vis/NIR spectroelectrochemistry) is characteristic of a borderline class II/III or class III mixed-valence state. In water, the redox separation is only 110 mV and the much broader IVCT transition is characteristic of a class II mixed-valence state. This is consistent with the observation that raising and lowering the energy of the d(π) orbitals in CH2Cl2 or water, respectively, will decrease or increase the energy gap to the LUMO of the bppz bridging ligand, which provides the delocalisation pathway via electron-transfer. IR spectroelectrochemistry could only be carried out successfully in CH2Cl2 and revealed class III mixed-valence behaviour on the fast IR timescale. In contrast to this, time-resolved IR spectroscopy showed that the MLCT excited state, which is formulated as RuIII(bppz˙−)RuII and can therefore be considered as a mixed-valence Ru(II)/Ru(III) complex with an intermediate bridging radical anion ligand, is localised on the IR timescale with spectroscopically distinct Ru(II) and Ru(III) termini. This is because the necessary electron-transfer via the bppz ligand is more difficult because of the additional electron on bppz˙−, which raises in energy the orbital through which electron exchange occurs. DFT calculations reproduce the electronic spectra of the complex in all three Ru(II)/Ru(II), Ru(II)/Ru(III) and Ru(III)/Ru(III) oxidation states in both water and CH2Cl2 well, as long as an explicit allowance is made for the presence of water molecules hydrogen-bonded to the cyanides in the model used. They also reproduce the excited-state IR spectra of both [Ru(CN)4(μ-bppz)]2– and [{Ru(CN)4}2(μ-bppz)]4− very well in both solvents. The reorganization of the water solvent shell indicates a possible dynamical reason for the longer lifetime of the triplet state in water compared to CH2Cl2.
Abstract:
The nuclear time-dependent Hartree-Fock model formulated in three-dimensional space, based on the full standard Skyrme energy density functional complemented with the tensor force, is presented. Full self-consistency is achieved by the model. The application to the isovector giant dipole resonance is discussed in the linear limit, ranging from spherical nuclei (16O and 120Sn) to systems displaying axial or triaxial deformation (24Mg, 28Si, 178Os, 190W and 238U). Particular attention is paid to the spin-dependent terms from the central sector of the functional, recently included together with the tensor. They turn out to be capable of producing a qualitative change in the strength distribution in this channel. The effect on the deformation properties is also discussed. The quantitative effects on the linear response are small and, overall, the giant dipole energy remains unaffected. Calculations are compared to predictions from the (quasi)-particle random-phase approximation and experimental data where available, finding good agreement.
Abstract:
ERA-40 is a re-analysis of meteorological observations from September 1957 to August 2002 produced by the European Centre for Medium-Range Weather Forecasts (ECMWF) in collaboration with many institutions. The observing system changed considerably over this re-analysis period, with assimilable data provided by a succession of satellite-borne instruments from the 1970s onwards, supplemented by increasing numbers of observations from aircraft, ocean-buoys and other surface platforms, but with a declining number of radiosonde ascents since the late 1980s. The observations used in ERA-40 were accumulated from many sources. The first part of this paper describes the data acquisition and the principal changes in data type and coverage over the period. It also describes the data assimilation system used for ERA-40. This benefited from many of the changes introduced into operational forecasting since the mid-1990s, when the systems used for the 15-year ECMWF re-analysis (ERA-15) and the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) re-analysis were implemented. Several of the improvements are discussed. General aspects of the production of the analyses are also summarized. A number of results indicative of the overall performance of the data assimilation system, and implicitly of the observing system, are presented and discussed. The comparison of background (short-range) forecasts and analyses with observations, the consistency of the global mass budget, the magnitude of differences between analysis and background fields and the accuracy of medium-range forecasts run from the ERA-40 analyses are illustrated. Several results demonstrate the marked improvement that was made to the observing system for the southern hemisphere in the 1970s, particularly towards the end of the decade. 
In contrast, the synoptic quality of the analysis for the northern hemisphere is sufficient to provide forecasts that remain skilful well into the medium range for all years. Two particular problems are also examined: excessive precipitation over tropical oceans and a too strong Brewer-Dobson circulation, both of which are pronounced in later years. Several other aspects of the quality of the re-analyses revealed by monitoring and validation studies are summarized. Expectations that the second-generation ERA-40 re-analysis would provide products that are better than those from the first-generation ERA-15 and NCEP/NCAR re-analyses are found to have been met in most cases. © Royal Meteorological Society, 2005. The contributions of N. A. Rayner and R. W. Saunders are Crown copyright.
Abstract:
The elucidation of spatial variation in the landscape can indicate potential wildlife habitats or breeding sites for vectors, such as ticks or mosquitoes, which cause a range of diseases. Information from remotely sensed data could aid the delineation of vegetation distribution on the ground in areas where local knowledge is limited. The data from digital images are often difficult to interpret because of pixel-to-pixel variation, that is, noise, and complex variation at more than one spatial scale. Landsat Enhanced Thematic Mapper Plus (ETM+) and Satellite Pour l'Observation de la Terre (SPOT) image data were analyzed for an area close to Douna in Mali, West Africa. The variograms of the normalized difference vegetation index (NDVI) from both types of image data were nested. The parameters of the nested variogram function from the Landsat ETM+ data were used to design the sampling for a ground survey of soil and vegetation data. Variograms of the soil and vegetation data showed that their variation was anisotropic and their scales of variation were similar to those of NDVI from the SPOT data. The short- and long-range components of variation in the SPOT data were filtered out separately by factorial kriging. The map of the short-range component appears to represent the patterns of vegetation and associated shallow slopes and drainage channels of the tiger bush system. The map of the long-range component also appeared to relate to broader patterns in the tiger bush and to gentle undulations in the topography. The results suggest that the types of image data analyzed in this study could be used to identify areas with more moisture in semiarid regions that could support wildlife and also be potential vector breeding sites.
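The variograms this analysis rests on start from the empirical semivariance, gamma(h) = mean of (z_i − z_j)^2 / 2 over point pairs separated by approximately lag h. A minimal numpy sketch, for illustration only: the fixed lag list and tolerance here, and the omission of directional (anisotropic) binning and the nested-model fitting used in the paper, are my simplifications.

```python
import numpy as np

def semivariogram(coords, values, lags, tol):
    """Empirical semivariance gamma(h) = mean of 0.5*(z_i - z_j)^2 over
    pairs of points whose separation distance lies within tol of each lag."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    gammas = []
    for h in lags:
        # each unordered pair appears twice in the mask, which leaves the mean unchanged
        mask = (np.abs(d - h) <= tol) & (d > 0)
        gammas.append(sq[mask].mean() if mask.any() else np.nan)
    return np.array(gammas)

# Demo: points on a line with a linear trend z = x, so gamma(h) = 0.5*h**2.
coords = np.arange(10, dtype=float)[:, None]
values = np.arange(10, dtype=float)
g = semivariogram(coords, values, [1.0, 2.0], 0.1)
```

A nested variogram, as found for the NDVI data, would show two distinct ranges in the fitted model rather than this single smooth trend.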
Abstract:
A partial phase diagram is constructed for diblock copolymer melts using lattice-based Monte Carlo simulations. This is done by locating the order-disorder transition (ODT) with the aid of a recently proposed order parameter and identifying the ordered phase over a wide range of copolymer compositions (0.2 <= f <= 0.8). Consistent with experiments, the disordered phase is found to exhibit direct first-order transitions to each of the ordered morphologies. This includes the spontaneous formation of a perforated-lamellar phase, which presumably forms in place of the gyroid morphology due to finite-size and/or nonequilibrium effects. Also included in our study is a detailed examination of disordered cylinder-forming (f=0.3) diblock copolymers, revealing a substantial degree of pretransitional chain stretching and short-range order that set in well before the ODT, as observed previously in analogous studies on lamellar-forming (f=0.5) molecules. (c) 2006 American Institute of Physics.
Abstract:
The ECMWF full-physics and dry singular vector (SV) packages, using a dry energy norm and a 1-day optimization time, are applied to four high impact European cyclones of recent years that were almost universally badly forecast in the short range. It is shown that these full-physics SVs are much more relevant to severe cyclonic development than those based on dry dynamics plus boundary layer alone. The crucial extra ingredient is the representation of large-scale latent heat release. The severe winter storms all have a long, nearly straight region of high baroclinicity stretching across the Atlantic towards Europe, with a tongue of very high moisture content on its equatorward flank. In each case some of the final-time top SV structures pick out the region of the actual storm. The initial structures were generally located in the mid- to low troposphere. Forecasts based on initial conditions perturbed by moist SVs with opposite signs and various amplitudes show the range of possible 1-day outcomes for reasonable magnitudes of forecast error. In each case one of the perturbation structures gave a forecast very much closer to the actual storm than the control forecast. Deductions are made about the predictability of high-impact extratropical cyclone events. Implications are drawn for the short-range forecast problem and suggestions made for one practicable way to approach short-range ensemble forecasting. Copyright © 2005 Royal Meteorological Society.
Abstract:
The influence matrix is used in ordinary least-squares applications for monitoring statistical multiple-regression analyses. Concepts related to the influence matrix provide diagnostics on the influence of individual data on the analysis - the analysis change that would occur by leaving one observation out, and the effective information content (degrees of freedom for signal) in any sub-set of the analysed data. In this paper, the corresponding concepts have been derived in the context of linear statistical data assimilation in numerical weather prediction. An approximate method to compute the diagonal elements of the influence matrix (the self-sensitivities) has been developed for a large-dimension variational data assimilation system (the four-dimensional variational system of the European Centre for Medium-Range Weather Forecasts). Results show that, in the boreal spring 2003 operational system, 15% of the global influence is due to the assimilated observations in any one analysis, and the complementary 85% is the influence of the prior (background) information, a short-range forecast containing information from earlier assimilated observations. About 25% of the observational information is currently provided by surface-based observing systems, and 75% by satellite systems. Low-influence data points usually occur in data-rich areas, while high-influence data points are in data-sparse areas or in dynamically active regions. Background-error correlations also play an important role: high correlation diminishes the observation influence and amplifies the importance of the surrounding real and pseudo observations (prior information in observation space). Incorrect specifications of background and observation-error covariance matrices can be identified, interpreted and better understood by the use of influence-matrix diagnostics for the variety of observation types and observed variables used in the data assimilation system. 
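In the ordinary least-squares setting this abstract generalises from, the influence (hat) matrix and the diagnostics it describes are directly computable: H = X(XᵀX)⁻¹Xᵀ, diag(H) gives the self-sensitivities (leverages), and trace(H) gives the degrees of freedom for signal. A minimal sketch assuming a small dense design matrix X (my own toy example; the operational 4D-Var system in the paper computes these quantities only approximately, since its matrices are far too large to form):

```python
import numpy as np

def influence_diagnostics(X, y):
    """OLS influence matrix H = X (X^T X)^{-1} X^T and derived diagnostics.

    diag(H) : self-sensitivities (leverage of each observation on its own fit)
    trace(H): effective number of fitted parameters (degrees of freedom for signal)
    """
    H = X @ np.linalg.solve(X.T @ X, X.T)
    leverages = np.diag(H)
    dof_signal = np.trace(H)
    fitted = H @ y          # fitted values are a linear map of the data
    return leverages, dof_signal, fitted

# Demo: simple linear regression, 5 observations, 2 parameters.
X = np.column_stack([np.ones(5), np.arange(5.0)])
y = np.array([1.0, 2.0, 2.0, 3.0, 5.0])
lev, dof, fit = influence_diagnostics(X, y)
```

Each leverage lies in [0, 1] and they sum to the number of parameters; the data-assimilation analogue splits this trace between observation influence and background influence (the 15%/85% partition reported above).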
Copyright © 2004 Royal Meteorological Society
Abstract:
A novel type of tweezer molecule containing electron-rich 2-pyrenyloxy arms has been designed to exploit intramolecular hydrogen bonding in stabilising a preferred conformation for supramolecular complexation to complementary sequences in aromatic copolyimides. This tweezer-conformation is demonstrated by single-crystal X-ray analyses of the tweezer molecule itself and of its complex with an aromatic diimide model-compound. In terms of its ability to bind selectively to polyimide chains, the new tweezer molecule shows very high sensitivity to sequence effects. Thus, even low concentrations of tweezer relative to diimide units (<2.5 mol%) are sufficient to produce dramatic, sequence-related splittings of the pyromellitimide proton NMR resonances. These induced resonance-shifts arise from ring-current shielding of pyromellitimide protons by the pyrenyloxy arms of the tweezer-molecule, and the magnitude of such shielding is a function of the tweezer-binding constant for any particular monomer sequence. Recognition of both short-range and long-range sequences is observed, the latter arising from cumulative ring-current shielding of diimide protons by tweezer molecules binding at multiple adjacent sites on the copolymer chain.
Apodisation, denoising and system identification techniques for THz transients in the wavelet domain
Abstract:
This work describes the use of a quadratic programming optimization procedure for designing asymmetric apodization windows to de-noise THz transient interferograms and compares these results to those obtained when wavelet signal processing algorithms are adopted. A systems identification technique in the wavelet domain is also proposed for the estimation of the complex insertion loss function. The proposed techniques can enhance the frequency dependent dynamic range of an experiment and should be of particular interest to the THz imaging and tomography community. Future advances in THz sources and detectors are likely to increase the signal-to-noise ratio of the recorded THz transients and high quality apodization techniques will become more important, and may set the limit on the achievable accuracy of the deduced spectrum.
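As a point of reference for the wavelet-denoising comparison above, generic soft-thresholding of detail coefficients can be sketched with an orthonormal Haar transform. This is a textbook illustration in plain numpy, not the paper's apodization design or system-identification procedure; the threshold and level count are arbitrary choices.

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar transform (len(x) must be even)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def haar_inverse(a, d):
    """Invert one Haar level exactly."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise(x, threshold, levels=3):
    """Soft-threshold the detail coefficients at each level, keep approximations."""
    details = []
    a = x
    for _ in range(levels):
        a, d = haar_step(a)
        details.append(np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0))
    for d in reversed(details):
        a = haar_inverse(a, d)
    return a

# Demo: a noisy step signal of length 8 (3 Haar levels).
x = np.array([4.0, 4.2, 3.9, 4.1, 8.0, 7.9, 8.2, 8.1])
```

Because the transform is orthonormal, a zero threshold reconstructs the input exactly, and any positive threshold can only shrink the signal's energy; the trade-off against the frequency-dependent dynamic range is what the comparison with apodization windows probes.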
Abstract:
Enantio-specific interactions on intrinsically chiral or chirally modified surfaces can be identified experimentally via comparison of the adsorption geometries of similar nonchiral and chiral molecules. Information about the effects of substrate-related and intermolecular interactions on the adsorption geometry of glycine, the only natural nonchiral amino acid, is therefore important for identifying enantio-specific interactions of larger chiral amino acids. We have studied the long- and short-range adsorption geometry and bonding properties of glycine on the intrinsically chiral Cu{531} surface with low-energy electron diffraction, near-edge X-ray absorption fine structure spectroscopy, X-ray photoelectron spectroscopy, and temperature-programmed desorption. For coverages between 0.15 and 0.33 ML (saturated chemisorbed layer) and temperatures between 300 and 430 K, glycine molecules adsorb in two different azimuthal orientations, which are associated with adsorption sites on the {110} and {311} microfacets of Cu{531}. Both types of adsorption sites allow a triangular footprint with surface bonds through the two oxygen atoms and the nitrogen atom. The occupation of the two adsorption sites is equal for all coverages, which can be explained by pair formation due to similar site-specific adsorption energies and the possibility of forming hydrogen bonds between molecules on adjacent {110} and {311} sites. This is not the case for alanine and points toward higher site specificity in the case of alanine, which is eventually responsible for the enantiomeric differences observed for the alanine system.