936 results for Short range order correlations
Abstract:
The influence matrix is used in ordinary least-squares applications for monitoring statistical multiple-regression analyses. Concepts related to the influence matrix provide diagnostics on the influence of individual data on the analysis - the analysis change that would occur by leaving one observation out, and the effective information content (degrees of freedom for signal) in any subset of the analysed data. In this paper, the corresponding concepts have been derived in the context of linear statistical data assimilation in numerical weather prediction. An approximate method to compute the diagonal elements of the influence matrix (the self-sensitivities) has been developed for a large-dimension variational data assimilation system (the four-dimensional variational system of the European Centre for Medium-Range Weather Forecasts). Results show that, in the boreal spring 2003 operational system, 15% of the global influence is due to the assimilated observations in any one analysis, and the complementary 85% is the influence of the prior (background) information, a short-range forecast containing information from earlier assimilated observations. About 25% of the observational information is currently provided by surface-based observing systems, and 75% by satellite systems. Low-influence data points usually occur in data-rich areas, while high-influence data points are in data-sparse areas or in dynamically active regions. Background-error correlations also play an important role: high correlation diminishes the observation influence and amplifies the importance of the surrounding real and pseudo observations (prior information in observation space). Incorrect specifications of background and observation-error covariance matrices can be identified, interpreted and better understood by the use of influence-matrix diagnostics for the variety of observation types and observed variables used in the data assimilation system.
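The influence (hat) matrix invoked here has a simple ordinary-least-squares analogue, which the abstract itself refers to. A minimal numerical sketch is given below; it illustrates the OLS analogy only, not the ECMWF 4D-Var computation, and the array sizes and variable names are hypothetical.

```python
import numpy as np

# Minimal OLS illustration of the influence (hat) matrix S = X (X^T X)^{-1} X^T.
# Its diagonal entries are the self-sensitivities; its trace is the number of
# effective degrees of freedom for signal absorbed by the fit.
rng = np.random.default_rng(0)
n, p = 50, 3                              # observations, predictors (illustrative sizes)
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=n)

S = X @ np.linalg.solve(X.T @ X, X.T)     # influence (hat) matrix
self_sensitivities = np.diag(S)           # influence of each datum on its own fit
dfs = np.trace(S)                         # degrees of freedom for signal (= p here)

# Leave-one-out connection: the change in the fitted value at point i when
# observation i is withheld equals r_i * S_ii / (1 - S_ii), with r_i the residual.
residuals = y - S @ y
loo_change = residuals * self_sensitivities / (1.0 - self_sensitivities)
print(dfs, self_sensitivities[:5], loo_change[:5])
```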
Abstract:
Two different ways of performing low-energy electron diffraction (LEED) structure determinations for the p(2 x 2) structure of oxygen on Ni{111} are compared: a conventional LEED-IV structure analysis using integer- and fractional-order IV-curves collected at normal incidence, and an analysis using only integer-order IV-curves collected at three different angles of incidence. A clear discrimination between different adsorption sites can be achieved by the latter approach as well as by the first, and the best-fit structures of both analyses agree within each other's error bars (all less than 0.1 angstrom). The conventional analysis is more sensitive to the adsorbate coordinates and lateral parameters of the substrate atoms, whereas the integer-order-based analysis is more sensitive to the vertical coordinates of substrate atoms. Adsorbate-related contributions to the intensities of integer-order diffraction spots are independent of the state of long-range order in the adsorbate layer. These results show, therefore, that for lattice-gas disordered adsorbate layers, for which only integer-order spots are observed, similar accuracy and reliability can be achieved as for ordered adsorbate layers, provided the data set is large enough.
Abstract:
A 24-member ensemble of 1-h high-resolution forecasts over the southern United Kingdom is used to study short-range forecast error statistics. The initial conditions are derived from perturbations generated by an ensemble transform Kalman filter. Forecasts from this system are assumed to lie within the bounds of forecast error of an operational forecast system. Although noisy, this system is capable of producing physically reasonable statistics, which are analysed and compared to statistics implied by a variational assimilation system. The variances of temperature errors, for instance, show structures that reflect convective activity. Some variables, notably potential temperature and specific humidity perturbations, have autocorrelation functions that deviate from 3-D isotropy at the convective scale (horizontal scales less than 10 km). Other variables, notably the velocity potential for horizontal divergence perturbations, maintain 3-D isotropy at all scales. Geostrophic and hydrostatic balances are studied by examining correlations between terms in the divergence and vertical momentum equations respectively. Both balances are found to decay as the horizontal scale decreases. It is estimated that geostrophic balance becomes less important at scales smaller than 75 km, and hydrostatic balance becomes less important at scales smaller than 35 km, although more work is required to validate these findings. The implications of these results for high-resolution data assimilation are discussed.
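To make the kind of diagnostic described above concrete, the sketch below computes ensemble error variances and a horizontal autocorrelation function from a set of perturbations. It is an illustrative stand-in, not the study's code; the grid size, spacing, and synthetic perturbations are assumptions.

```python
import numpy as np

# Illustrative sketch: ensemble error variance and a horizontal autocorrelation
# function computed from an ensemble of perturbations on a 1-D grid.
n_members, nx = 24, 256            # ensemble size and number of grid points (assumed)
dx_km = 1.5                        # hypothetical grid spacing
rng = np.random.default_rng(1)
perts = rng.normal(size=(n_members, nx)).cumsum(axis=1)  # stand-in correlated perturbations
perts -= perts.mean(axis=0)        # remove the ensemble mean

variance = perts.var(axis=0, ddof=1)           # error variance at each grid point

def autocorrelation(p, max_lag):
    """Mean horizontal autocorrelation over members and grid points."""
    corr = []
    for lag in range(max_lag):
        a, b = p[:, : nx - lag], p[:, lag:]
        num = np.mean(a * b)
        den = np.sqrt(np.mean(a * a) * np.mean(b * b))
        corr.append(num / den)
    return np.array(corr)

acf = autocorrelation(perts, max_lag=40)
print(variance.mean(), acf[:5])    # a lag of k corresponds to a separation of k * dx_km
```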
Abstract:
With many operational centers moving toward order 1-km-gridlength models for routine weather forecasting, this paper presents a systematic investigation of the properties of high-resolution versions of the Met Office Unified Model for short-range forecasting of convective rainfall events. The authors describe a suite of configurations of the Met Office Unified Model running with grid lengths of 12, 4, and 1 km and analyze results from these models for a number of convective cases from the summers of 2003, 2004, and 2005. The analysis includes subjective evaluation of the rainfall fields and comparisons of rainfall amounts, initiation, cell statistics, and a scale-selective verification technique. It is shown that the 4- and 1-km-gridlength models often give more realistic-looking precipitation fields because convection is represented explicitly rather than parameterized. However, the 4-km model representation suffers from large convective cells and delayed initiation because the grid length is too long to correctly reproduce the convection explicitly. These problems are not as evident in the 1-km model, although it does produce too many small cells in some situations. Both the 4- and 1-km models suffer from poor representation at the start of the forecast, in the period when the high-resolution detail is spinning up from the lower-resolution (12 km) starting data. A scale-selective precipitation verification technique implies that for later times in the forecasts (after the spinup period) the 1-km model performs better than the 12- and 4-km models for lower rainfall thresholds. For higher thresholds the 4-km model scores almost as well as the 1-km model, and both do better than the 12-km model.
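The abstract does not name its scale-selective verification technique. One widely used neighbourhood-based score of this kind is the fractions skill score; the sketch below shows how such a score can be computed for a single threshold and window size on synthetic fields (the data and parameter values are illustrative assumptions, not results from the paper).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fractions_skill_score(forecast, observed, threshold, window):
    """Neighbourhood (fractions) skill score for one threshold and window size.
    1 means the exceedance fractions match perfectly; 0 means no skill."""
    f = uniform_filter((forecast >= threshold).astype(float), size=window, mode="constant")
    o = uniform_filter((observed >= threshold).astype(float), size=window, mode="constant")
    mse = np.mean((f - o) ** 2)
    mse_ref = np.mean(f ** 2) + np.mean(o ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

# Synthetic rainfall fields (mm/h) just to exercise the function.
rng = np.random.default_rng(2)
obs = rng.gamma(shape=0.5, scale=2.0, size=(200, 200))
fcst = np.roll(obs, shift=5, axis=1)           # a displaced "forecast"
for window in (1, 5, 25):                      # larger windows = coarser scales
    print(window, fractions_skill_score(fcst, obs, threshold=4.0, window=window))
```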
Abstract:
Progress in functional neuroimaging of the brain increasingly relies on the integration of data from complementary imaging modalities in order to improve spatiotemporal resolution and interpretability. However, the usefulness of merely statistical combinations is limited, since neural signal sources differ between modalities and are related non-trivially. We demonstrate here that a mean field model of brain activity can simultaneously predict EEG and fMRI BOLD with proper signal generation and expression. Simulations are shown using a realistic head model based on structural MRI, which includes both dense short-range background connectivity and long-range specific connectivity between brain regions. The distribution of modeled neural masses is comparable to the spatial resolution of fMRI BOLD, and the temporal resolution of the modeled dynamics, importantly including activity conduction, matches the fastest known EEG phenomena. The creation of a cortical mean field model with anatomically sound geometry, extensive connectivity, and proper signal expression is an important first step towards the model-based integration of multimodal neuroimages.
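To give a flavour of the signal pathway being described (fast regional dynamics driven by short- and long-range connectivity, an EEG-like readout of the fast states, a BOLD-like readout through slow haemodynamics), here is a deliberately crude toy sketch. It is not the authors' mean field model; the connectivity, time constants, lead-field mixture, and haemodynamic kernel are all arbitrary assumptions.

```python
import numpy as np

# Toy illustration only: leaky rate dynamics per region, an EEG-like proxy from
# a linear mixture of fast states, and a BOLD-like proxy from a slow convolution.
rng = np.random.default_rng(3)
n_regions, dt, n_steps = 8, 1e-3, 4000           # 4 s of activity at 1 ms steps (assumed)
W = rng.uniform(0.0, 0.4, size=(n_regions, n_regions))   # stand-in connectivity
np.fill_diagonal(W, 0.0)

tau = 0.01                                        # fast time constant in seconds (assumed)
x = np.zeros(n_regions)
trace = np.empty((n_steps, n_regions))
for t in range(n_steps):
    drive = np.tanh(W @ x) + 0.5 * rng.normal(size=n_regions)
    x = x + dt * (-x + drive) / tau               # leaky rate dynamics
    trace[t] = x

# EEG-like proxy: arbitrary linear mixture of the fast states (a crude stand-in
# for a lead field derived from a structural head model).
eeg_proxy = trace @ rng.uniform(-1.0, 1.0, size=n_regions)

# BOLD-like proxy: rectified activity convolved with a seconds-scale kernel
# standing in for a haemodynamic response function.
t_kernel = np.arange(0.0, 8.0, dt)
hrf = (t_kernel / 2.0) * np.exp(-t_kernel / 2.0)
bold_proxy = np.stack([np.convolve(np.maximum(trace[:, i], 0.0), hrf)[:n_steps]
                       for i in range(n_regions)], axis=1)
print(eeg_proxy.shape, bold_proxy.shape)          # (4000,) and (4000, 8)
```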
Abstract:
The very first numerical models, which were developed more than 20 years ago, were drastic simplifications of the real atmosphere and were mostly restricted to describing adiabatic processes. For predictions of a day or two of the mid-tropospheric flow these models often gave reasonable results, but the results deteriorated quickly when the prediction was extended further in time. The prediction of the surface flow was unsatisfactory even for short predictions. It was evident that both the energy-generating processes and the dissipative processes have to be included in numerical models in order to predict the weather patterns in the lower part of the atmosphere and to predict the atmosphere in general beyond a day or two. Present-day computers make it possible to attack the weather forecasting problem in a more comprehensive and complete way, and substantial efforts have been made during the last decade in particular to incorporate the non-adiabatic processes in numerical prediction models. The physics of radiative transfer, condensation of moisture, turbulent transfer of heat, momentum and moisture, and the dissipation of kinetic energy are the most important processes associated with the formation of energy sources and sinks in the atmosphere, and these have to be incorporated in numerical prediction models extended over more than a few days. The mechanisms of these processes are mainly related to small-scale disturbances in space and time, or even molecular processes. It is therefore one of the basic characteristics of numerical models that these small-scale disturbances cannot be included in an explicit way. The first reason for this is the discretization of the model's atmosphere by a finite difference grid or the use of a Galerkin or spectral function representation. The second reason why these processes cannot be introduced explicitly into a numerical model is that some physical processes necessary to describe them (such as local buoyancy) are a priori eliminated by the constraints of hydrostatic adjustment. Even if this physical constraint can be relaxed by making the models non-hydrostatic, the scale problem is virtually impossible to solve, and for the foreseeable future we have to try to incorporate the ensemble or gross effect of these physical processes on the large-scale synoptic flow. The formulation of this ensemble effect in terms of grid-scale variables (the parameters of the large-scale flow) is called 'parameterization'. For short-range prediction of the synoptic flow at middle and high latitudes, very simple parameterization has proven to be rather successful.
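As a concrete illustration of what 'parameterization' means here (a textbook example, not taken from this abstract), the sub-grid turbulent sensible heat flux at the surface is commonly expressed entirely in grid-scale variables through a bulk formula:

```latex
% Illustrative bulk parameterization of the surface sensible heat flux, written
% only in grid-scale (resolved) variables; C_H is an empirical exchange
% coefficient. A standard textbook example, not quoted from the abstract.
H_s \;=\; \rho \, c_p \, C_H \, |\mathbf{V}| \,\bigl(T_s - T_a\bigr)
```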
Abstract:
As laid out in its convention, there are eight different objectives for ECMWF. One of the major objectives will consist of the preparation, on a regular basis, of the data necessary for medium-range weather forecasts. The interpretation of this item is that the Centre will make forecasts once a day for a prediction period of up to 10 days. It is also evident that the Centre should not carry out any real weather forecasting but merely disseminate to the member countries the basic forecasting parameters with an appropriate resolution in space and time. It follows from this that the forecasting system at the Centre must, from the operational point of view, be functionally integrated with the Weather Services of the Member Countries. The operational interface between ECMWF and the Member Countries must be properly specified in order to obtain reasonable flexibility for both systems. The problem of making numerical atmospheric predictions for periods beyond 4-5 days differs substantially from that of 2-3 day forecasting. From the physical point of view, we can define a medium-range forecast as a forecast in which the initial disturbances have lost their individual structure. However, we are still interested in predicting the atmosphere in a similar way as in short-range forecasting, which means that the model must be able to predict the dissipation and decay of the initial phenomena and the creation of new ones. With this definition, medium-range forecasting is indeed very difficult and generally regarded as more difficult than extended forecasts, where we usually only predict time and space mean values. The predictability of atmospheric flow has been studied extensively during recent years in theoretical investigations and by numerical experiments. As has been discussed elsewhere in this publication (see pp 338 and 431), a 10-day forecast is apparently on the fringe of predictability.
Abstract:
With the fast development of wireless communications, ZigBee and semiconductor devices, home automation networks have recently become very popular. Since typical consumer products deployed in home automation networks are often powered by tiny and limited batteries, one of the most challenging research issues concerns energy reduction and the balancing of energy consumption across the network in order to prolong the home network lifetime for consumer devices. The introduction of clustering and sink mobility techniques into home automation networks has been shown to be an efficient way to improve network performance and has received significant research attention. Taking inspiration from nature, this paper proposes an Ant Colony Optimization (ACO) based clustering algorithm specifically with mobile sink support for home automation networks. In this work, the network is divided into several clusters and cluster heads are selected within each cluster. Then, a mobile sink communicates with each cluster head to collect data directly through short-range communications. The ACO algorithm has been utilized in this work in order to find the optimal mobility trajectory for the mobile sink. Extensive simulation results from this research show that, in terms of energy consumption and network lifetime, the proposed algorithm with mobile sinks significantly improves home network performance as compared to other routing algorithms currently deployed for home automation networks.
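The abstract does not give the details of its ACO formulation. The sketch below shows a generic ant-colony search for a short closed sink trajectory over cluster-head positions (effectively a travelling-salesman tour); it is illustrative only, not the paper's algorithm, and all coordinates and parameter values are assumptions.

```python
import numpy as np

# Generic ACO sketch: find a short closed trajectory visiting all cluster heads.
rng = np.random.default_rng(4)
heads = rng.uniform(0, 100, size=(10, 2))               # cluster-head coordinates (m, assumed)
n = len(heads)
dist = np.linalg.norm(heads[:, None] - heads[None, :], axis=-1)
np.fill_diagonal(dist, np.inf)

alpha, beta, rho, n_ants, n_iters = 1.0, 2.0, 0.5, 20, 100   # illustrative parameters
tau = np.ones((n, n))                                    # pheromone trails
eta = 1.0 / dist                                         # heuristic visibility

def tour_length(tour):
    return sum(dist[tour[i], tour[(i + 1) % n]] for i in range(n))

best_tour, best_len = None, np.inf
for _ in range(n_iters):
    tours = []
    for _ in range(n_ants):
        tour = [int(rng.integers(n))]
        while len(tour) < n:
            i = tour[-1]
            mask = np.ones(n, dtype=bool)
            mask[tour] = False
            weights = (tau[i, mask] ** alpha) * (eta[i, mask] ** beta)
            nxt = rng.choice(np.flatnonzero(mask), p=weights / weights.sum())
            tour.append(int(nxt))
        tours.append(tour)
    tau *= (1.0 - rho)                                   # pheromone evaporation
    for tour in tours:
        L = tour_length(tour)
        if L < best_len:
            best_tour, best_len = tour, L
        for i in range(n):                               # pheromone deposit on used edges
            a, b = tour[i], tour[(i + 1) % n]
            tau[a, b] += 1.0 / L
            tau[b, a] += 1.0 / L

print(best_len, best_tour)                               # sink visiting order and tour length
```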
Abstract:
We consider random generalizations of a quantum model of infinite range introduced by Emch and Radin. The generalizations allow a neat extension from the class l^1 of absolutely summable lattice potentials to the optimal class l^2 of square-summable potentials first considered by Khanin and Sinai and generalised by van Enter and van Hemmen. The approach to equilibrium in the case of a Gaussian distribution is proved to be faster than for a Bernoulli distribution for both short-range and long-range lattice potentials. While exponential decay to equilibrium is excluded in the nonrandom l^1 case, it is proved to occur for both short- and long-range potentials for Gaussian distributions, and for potentials of class l^2 in the Bernoulli case. Open problems are discussed.
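For clarity, the two summability classes referred to above can be written out explicitly (standard definitions; the symbol J(n) for the coupling at lattice separation n is our notation, not the paper's):

```latex
% Standard summability conditions on the lattice couplings J(n);
% the notation is ours, chosen only to state the two classes.
\ell^{1}:\quad \sum_{n} |J(n)| < \infty ,
\qquad
\ell^{2}:\quad \sum_{n} J(n)^{2} < \infty .
```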
Abstract:
The bonding properties of cations in phosphate glasses determine many short- and medium-range structural features in the glass network, hence influencing bulk properties. In this work, Pb-Al metaphosphate glasses (1-x)Pb(PO3)2 · xAl(PO3)3 with 0 <= x <= 1 were analyzed to determine the effect of the substitution of Pb by Al on the glass structure in the metaphosphate composition. The glass transition temperature and density were measured as a function of the Al concentration. The vibrational and structural properties were probed by Raman spectroscopy and nuclear magnetic resonance of 31P, 27Al, and 207Pb. Aluminum incorporates homogeneously in the glass, creating a stiffer and less packed network. The average coordination number for Al decreases from 5.9 to 5.0 as x increases from 0.1 to 1, indicating more covalent Al-O bonds. The coordination number of Pb in these glasses is greater than 8, showing an increasing ionic behavior for compositions richer in Al. A quantitative analysis of the phosphate speciation shows definite trends in the bonding of AlO_n groups and phosphate tetrahedra. In glasses with x < 0.48, phosphate groups share preferentially only one nonbridging O corner with an AlO_n coordination polyhedron. For x > 0.48 more than one nonbridging O can be linked to AlO_n polyhedra. There is no corner sharing of O between AlO_n and PbO_n polyhedra nor between AlO_n polyhedra themselves throughout the compositional range. The PbO_n coordination polyhedra show considerable nonbridging O sharing, with each O participating in the coordination sphere of at least two Pb. The bonding preferences determined for Al are consistent with the behavior observed in Na-Al and Ca-Al metaphosphates, indicating this may be a general behavior for ternary phosphate glasses.
Abstract:
We investigate the D̄N interaction at low energies using a meson exchange model supplemented with a short-distance contribution from one-gluon exchange. The model is developed in close analogy to the meson-exchange KN interaction of the Jülich group utilizing SU(4) symmetry constraints. The main ingredients of the interaction are provided by vector meson (ρ, ω) exchange and higher-order box diagrams involving D̄*N, D̄Δ, and D̄*Δ intermediate states. The short-range part is assumed to receive additional contributions from genuine quark-gluon processes. The predicted cross-sections for D̄N for excess energies up to 150 MeV are of the same order of magnitude as those for KN but with average values of around 20 mb, roughly a factor of two larger than for the latter system. It is found that the ω-exchange plays a very important role. Its interference pattern with the ρ-exchange, which is basically fixed by the assumed SU(4) symmetry, clearly determines the qualitative features of the D̄N interaction - very similar to what happens also for the KN system.
Abstract:
Nuclear matter calculations with realistic nucleon-nucleon potentials present a general scaling between the nucleon-nucleus binding energy, the corresponding saturation density, and the triton binding energy. The Thomas-Efimov three-body effect implies correlations among low-energy few-body and many-body observables. It is also well known that, by varying the short-range repulsion while keeping the two-nucleon information (deuteron and scattering) fixed, the four-nucleon and three-nucleon binding energies lie on a very narrow band known as the Tjon line. By looking for a universal scaling function connecting the proper scales of the few-body system with those of the many-body system, we suggest that the general nucleus-nucleon scaling mechanism is a manifestation of a universal few-body effect.
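The Tjon line mentioned above is, schematically, an approximately linear correlation between the four- and three-nucleon binding energies obtained as the short-range repulsion is varied with the two-nucleon data held fixed; the form is sketched below with model-dependent coefficients a and b (no numerical values are implied):

```latex
% Schematic form of the Tjon-line correlation; a and b are model-dependent
% constants, not values quoted in the abstract.
B_{4}\bigl({}^{4}\mathrm{He}\bigr) \;\approx\; a\, B_{3}\bigl({}^{3}\mathrm{H}\bigr) \;+\; b
```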
Abstract:
Many-body systems of composite hadrons are characterized by processes that involve the simultaneous presence of hadrons and their constituents. We briefly review several methods that have been devised to study such systems and present a novel method that is based on the ideas of mapping between physical and ideal Fock spaces. The method, known as the Fock-Tani representation, was invented years ago in the context of atomic physics problems and was recently extended to hadronic physics. Starting with the Fock-space representation of single-hadron states, a change of representation is implemented by a unitary transformation such that composites are redescribed by elementary Bose and Fermi field operators in an extended Fock space. When the unitary transformation is applied to the microscopic quark Hamiltonian, effective, Hermitian Hamiltonians with a clear physical interpretation are obtained. The use of the method in connection with the linked-cluster formalism to describe short-range correlations and quark deconfinement effects in nuclear matter is discussed. As an application of the method, an effective nucleon-nucleon interaction is derived from a constituent quark model and used to obtain the equation of state of nuclear matter in the Hartree-Fock approximation.
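Schematically, and in our own notation rather than the paper's, the change of representation described here maps a composite-hadron creation operator B_alpha^dagger (built from quark operators) onto an elementary "ideal" operator b_alpha^dagger acting in an enlarged Fock space, and transforms the microscopic quark Hamiltonian accordingly:

```latex
% Schematic statement of the Fock-Tani change of representation; the notation
% is ours and details (signs, phases, orthogonalization) vary between papers.
U^{-1}\, B_{\alpha}^{\dagger}\,|0\rangle \;=\; b_{\alpha}^{\dagger}\,|0\rangle ,
\qquad
H_{\mathrm{FT}} \;=\; U^{-1}\, H\, U ,
```

where U is the unitary transformation mentioned in the abstract and H_FT is the resulting effective, Hermitian Hamiltonian in the extended Fock space.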
Abstract:
Quark-model descriptions of the nucleon-nucleon interaction contain two main ingredients, a quark-exchange mechanism for the short-range repulsion and meson exchanges for the medium- and long-range parts of the interaction. We point out the special role played by higher partial waves, and in particular the 1F3 wave, as a very sensitive probe for the meson-exchange part employed in these interaction models. In particular, we show that the presently available models fail to provide a reasonable description of higher partial waves and indicate the reasons for this shortcoming.