978 results for radial distribution functions


Relevance: 90.00%

Abstract:

A multiscale Molecular Dynamics/Hydrodynamics implementation of the 2D Mercedes Benz (MB or BN2D) [1] water model is developed and investigated. The concept and the governing equations of multiscale coupling together with the results of the two-way coupling implementation are reported. The sensitivity of the multiscale model for obtaining macroscopic and microscopic parameters of the system, such as macroscopic density and velocity fluctuations, radial distribution and velocity autocorrelation functions of MB particles, is evaluated. Critical issues for extending the current model to large systems are discussed.
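The radial distribution function evaluated for the MB particles above has a standard estimator: histogram pair distances under the minimum-image convention and normalize by the ideal-gas expectation. A minimal 2D sketch (generic, not the MB/hydrodynamics coupling itself):

```python
# Minimal sketch: radial distribution function g(r) for a 2D particle set in a
# periodic square box.  Illustrative only; the MB-model specifics are not
# reproduced here.
import numpy as np

def rdf_2d(positions, box, dr=0.1, r_max=None):
    """Histogram pair distances into g(r) for a 2D periodic box."""
    n = len(positions)
    if r_max is None:
        r_max = box / 2.0
    edges = np.arange(0.0, r_max + dr, dr)
    counts = np.zeros(len(edges) - 1)
    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]
        d -= box * np.round(d / box)          # minimum-image convention
        r = np.hypot(d[:, 0], d[:, 1])
        counts += np.histogram(r, bins=edges)[0]
    rho = n / box**2                           # 2D number density
    r_mid = 0.5 * (edges[1:] + edges[:-1])
    shell_area = 2.0 * np.pi * r_mid * dr      # area of each annulus
    ideal = rho * shell_area * n / 2.0         # ideal-gas pair count per bin
    return r_mid, counts / ideal

rng = np.random.default_rng(0)
r_mid, g = rdf_2d(rng.uniform(0, 10, size=(200, 2)), box=10.0)
```

For uncorrelated (Poisson) points, g(r) fluctuates around 1, which is a quick sanity check of the normalization.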

Relevance: 90.00%

Abstract:

Among different classes of ionic liquids (ILs), those with cyano-based anions have been of special interest due to their low viscosity and enhanced solvation ability for a large variety of compounds. Experimental results from this work reveal that the solubility of glucose in some of these ionic liquids may be higher than in water – a well-known solvent with enhanced capacity to dissolve mono- and disaccharides. This raises questions on the ability of cyano groups to establish strong hydrogen bonds with carbohydrates and on the optimal number of cyano groups at the IL anion that maximizes the solubility of glucose. In addition to experimental solubility data, these questions are addressed in this study using a combination of density functional theory (DFT) and molecular dynamics (MD) simulations. Through the calculation of the number of hydrogen bonds, coordination numbers, energies of interaction and radial and spatial distribution functions, it was possible to explain the experimental results and to show that the ability to favorably interact with glucose is driven by the polarity of each IL anion, with the optimal anion being dicyanamide.
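The coordination numbers mentioned above are conventionally obtained by integrating the radial distribution function up to its first minimum, n_c = 4*pi*rho * integral of g(r)*r^2 dr. A sketch with a synthetic single-peak g(r) (not the MD data from the study):

```python
# Sketch: coordination number from a radial distribution function g(r),
# n_c = 4*pi*rho * integral_0^r_cut g(r) r^2 dr, integrated up to the first
# minimum of g(r).  The g(r) here is a toy single-shell profile, not MD data.
import numpy as np

def coordination_number(r, g, rho, r_cut):
    """Trapezoid-rule integral of 4*pi*rho*g(r)*r^2 up to r_cut."""
    mask = r <= r_cut
    rr, gg = r[mask], g[mask]
    integrand = gg * rr**2
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(rr))
    return 4.0 * np.pi * rho * integral

r = np.linspace(0.01, 6.0, 600)
g = 1.0 + 2.0 * np.exp(-((r - 1.0) / 0.2)**2)   # toy first-shell peak at r = 1
n_c = coordination_number(r, g, rho=0.03, r_cut=1.6)
```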

Relevance: 80.00%

Abstract:

In recent years, we have experienced increasing interest in the understanding of the physical properties of collisionless plasmas, mostly because of the large number of astrophysical environments (e.g. the intracluster medium (ICM)) containing magnetic fields that are strong enough to be coupled with the ionized gas and characterized by densities sufficiently low to prevent pressure isotropization with respect to the magnetic line direction. Under these conditions, a new class of kinetic instabilities arises, such as the firehose and mirror instabilities, which have been studied extensively in the literature. Their role in turbulence evolution and the cascade process in the presence of pressure anisotropy, however, is still unclear. In this work, we present the first statistical analysis of turbulence in collisionless plasmas using three-dimensional numerical simulations and solving double-isothermal magnetohydrodynamic equations with the Chew-Goldberger-Low closure (CGL-MHD). We study models with different initial conditions to account for the firehose and mirror instabilities and to obtain different turbulent regimes. We found that subsonic and supersonic CGL-MHD turbulence shows small differences compared to the MHD models in most cases. However, in the regimes of strong kinetic instabilities, the statistics, i.e. the probability distribution functions (PDFs) of density and velocity, are very different. In subsonic models, the instabilities cause an increase in the dispersion of density, while the dispersion of velocity is increased by a large factor in some cases. Moreover, the spectra of density and velocity show increased power at small scales, explained by the high growth rate of the instabilities. Finally, we calculated the structure functions of velocity and density fluctuations in the local reference frame defined by the direction of the magnetic lines.
The results indicate that in some cases the instabilities significantly increase the anisotropy of fluctuations. These results, even though preliminary and restricted to very specific conditions, show that the physical properties of turbulence in collisionless plasmas, such as those found in the ICM, may be very different from what has been widely believed.
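The structure functions computed above have a standard definition, S_p(l) = <|v(x + l) - v(x)|^p>. A generic sketch for a periodic one-dimensional signal (a stand-in for the field-line-aligned increments; the CGL-MHD data are not reproduced):

```python
# Sketch: p-th order structure function S_p(l) = <|v(x + l) - v(x)|^p> for a
# periodic 1D signal, illustrating the statistic named in the abstract.
import numpy as np

def structure_function(v, lag, p=2):
    """p-th order structure function of a periodic 1D signal at a given lag."""
    dv = np.roll(v, -lag) - v
    return np.mean(np.abs(dv)**p)

# Toy periodic "velocity" field: a single sine mode, for which
# S_2(l) = 2 * sin^2(pi * l / N) exactly.
n = 4096
v = np.sin(2.0 * np.pi * np.arange(n) / n)
s2_small = structure_function(v, lag=1, p=2)
s2_large = structure_function(v, lag=64, p=2)
```

Increments grow with separation for a smooth field, so S_2 at lag 64 exceeds S_2 at lag 1, mirroring how anisotropy is probed by comparing increments along and across the local field direction.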

Relevance: 80.00%

Abstract:

Measurements of double-helicity asymmetries in inclusive hadron production in polarized p + p collisions are sensitive to helicity-dependent parton distribution functions, in particular to the gluon helicity distribution, Δg. This study focuses on the extraction of the double-helicity asymmetry in η production (p⃗ + p⃗ → η + X), the η cross section, and the η/π⁰ cross section ratio. The cross section and ratio measurements provide essential input for the extraction of fragmentation functions that are needed to access the helicity-dependent parton distribution functions.
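Double-helicity asymmetries of this kind are commonly extracted from helicity-sorted yields via A_LL = (1/(P1*P2)) * (N_same - R*N_opp)/(N_same + R*N_opp), where R is the relative luminosity. A sketch of this standard estimator; all numbers below are invented for illustration, not PHENIX data:

```python
# Sketch: the standard estimator for a double-helicity asymmetry from
# helicity-sorted yields.  Yields, relative luminosity, and beam
# polarizations below are made-up illustrative values.
def double_helicity_asymmetry(n_same, n_opp, rel_lumi, pol1, pol2):
    """Raw yield asymmetry corrected for relative luminosity and polarization."""
    raw = (n_same - rel_lumi * n_opp) / (n_same + rel_lumi * n_opp)
    return raw / (pol1 * pol2)

a_ll = double_helicity_asymmetry(n_same=10100, n_opp=10000,
                                 rel_lumi=1.0, pol1=0.6, pol2=0.6)
```

The 1/(P1*P2) factor shows why percent-level asymmetries demand well-measured beam polarizations: a raw asymmetry of 0.005 becomes ~0.014 after dividing by two 60% polarizations.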

Relevance: 80.00%

Abstract:

We have performed ab initio molecular dynamics simulations to generate an atomic structure model of amorphous hafnium oxide (a-HfO2) via a melt-and-quench scheme. This structure is analyzed via bond-angle and partial pair distribution functions. These results give a Hf-O average nearest-neighbor distance of 2.2 Å, compared with the bulk value, which ranges from 1.96 to 2.54 Å. We have also investigated the neutral O vacancy and a substitutional Si impurity at various sites, as well as the amorphous phase of Hf(1-x)Si(x)O2 for x = 0.25, 0.375, and 0.5.

Relevance: 80.00%

Abstract:

We report the first measurement of the parity-violating single-spin asymmetries for midrapidity decay positrons and electrons from W+ and W− boson production in longitudinally polarized proton-proton collisions at √s = 500 GeV by the STAR experiment at RHIC. The measured asymmetries, A_L(W+) = −0.27 ± 0.10 (stat.) ± 0.02 (syst.) ± 0.03 (norm.) and A_L(W−) = 0.14 ± 0.19 (stat.) ± 0.02 (syst.) ± 0.01 (norm.), are consistent with theory predictions, which are large and of opposite sign. These predictions are based on polarized quark and antiquark distribution functions constrained by polarized deep-inelastic scattering measurements.

Relevance: 80.00%

Abstract:

Forward-backward multiplicity correlation strengths have been measured with the STAR detector for Au + Au and p + p collisions at √s_NN = 200 GeV. Strong short- and long-range correlations (LRC) are seen in central Au + Au collisions. The magnitude of these correlations decreases with decreasing centrality until only short-range correlations are observed in peripheral Au + Au collisions. Both the dual parton model (DPM) and the color glass condensate (CGC) predict the existence of long-range correlations. In the DPM, the fluctuation in the number of elementary (parton) inelastic collisions produces the LRC. In the CGC, longitudinal color flux tubes generate the LRC. The data are in qualitative agreement with the predictions of the DPM and indicate the presence of multiple parton interactions.
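The correlation strength in such analyses is commonly defined as b = cov(N_f, N_b) / var(N_f). A sketch with synthetic multiplicities in which a shared "source" fluctuation induces the forward-backward correlation (illustrative only, not STAR data):

```python
# Sketch: forward-backward multiplicity correlation strength
# b = cov(N_f, N_b) / var(N_f), the definition commonly used for this
# observable.  Multiplicities below are synthetic.
import numpy as np

def fb_correlation_strength(n_f, n_b):
    n_f = np.asarray(n_f, dtype=float)
    n_b = np.asarray(n_b, dtype=float)
    cov = np.mean(n_f * n_b) - np.mean(n_f) * np.mean(n_b)
    var = np.mean(n_f**2) - np.mean(n_f)**2
    return cov / var

rng = np.random.default_rng(2)
common = rng.poisson(20, size=20000)        # shared "source number" fluctuation
n_f = common + rng.poisson(5, size=20000)   # forward multiplicity
n_b = common + rng.poisson(5, size=20000)   # backward multiplicity
b = fb_correlation_strength(n_f, n_b)
```

In this toy model b should approach var(common)/(var(common) + 5) = 20/25 = 0.8, illustrating how fluctuations in a shared number of sources (the DPM mechanism in the abstract) generate a long-range correlation.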

Relevance: 80.00%

Abstract:

We investigate a conjecture on the cover times of planar graphs by means of large Monte Carlo simulations. The conjecture states that the cover time τ(G_N) of a planar graph G_N of N vertices and maximal degree d is bounded from below by τ(G_N) ≥ C(d) N (ln N)², with C(d) = (d/(4π)) tan(π/d), and with equality holding for some geometries. We tested this conjecture on the regular honeycomb (d = 3), regular square (d = 4), regular elongated triangular (d = 5), and regular triangular (d = 6) lattices, as well as on the nonregular Union Jack lattice (d_min = 4, d_max = 8). Indeed, the Monte Carlo data suggest that the rigorous lower bound may hold as an equality for most of these lattices, with an interesting issue in the case of the Union Jack lattice. The data for the honeycomb lattice, however, violate the bound with the conjectured constant. The empirical probability distribution function of the cover time for the square lattice is also briefly presented, since very little is known about cover-time probability distribution functions in general.
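The conjectured constant and bound above are straightforward to evaluate; for instance, the square lattice (d = 4) gives C(4) = 1/π:

```python
# Sketch: the conjectured constant C(d) = (d / (4*pi)) * tan(pi/d) and the
# corresponding lower bound C(d) * N * (ln N)^2 quoted in the abstract.
import math

def cover_time_constant(d):
    return (d / (4.0 * math.pi)) * math.tan(math.pi / d)

def cover_time_lower_bound(n_vertices, d):
    return cover_time_constant(d) * n_vertices * math.log(n_vertices)**2

c_square = cover_time_constant(4)          # square lattice: exactly 1/pi
bound = cover_time_lower_bound(10**4, 4)   # bound for a 10^4-vertex lattice
```

Note that C(d) decreases with d over the lattices tested (C(3) > C(4) > C(6)), so the honeycomb lattice carries the largest conjectured constant, which is the case the Monte Carlo data found to violate the bound.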

Relevance: 80.00%

Abstract:

Aims. Given that in most cases only thermal pressure is taken into account in the hydrostatic equilibrium equation used to estimate galaxy cluster mass, the main purpose of this paper is to consider the contribution of all three non-thermal components to total mass measurements. The non-thermal pressure is composed of cosmic-ray, turbulent, and magnetic pressures. Methods. To estimate the thermal pressure we used public XMM-Newton archival data of five Abell clusters to derive temperature and density profiles. To describe the magnetic pressure, we assume a radial distribution for the magnetic field, B(r) ∝ ρ_g^α. To seek generality we assume α within the range of 0.5 to 0.9, as indicated by observations and numerical simulations. Turbulent motions and bulk velocities add a turbulent pressure, which is considered using an estimate from numerical simulations. For this component, we assume an isotropic pressure, P_turb = (1/3) ρ_g (σ_r² + σ_t²). We also consider the contribution of cosmic-ray pressure, P_cr ∝ r^(−0.5). Thus, besides the (thermal) gas pressure, we include these three non-thermal components in the magnetohydrostatic equilibrium equation and compare the total mass estimates with the values obtained without them. Results. A consistent description of the non-thermal components could yield a variation in mass estimates that extends from 10% to ~30%. We verified that in the inner parts of cool-core clusters the cosmic-ray component is comparable to the magnetic pressure, while in non-cool-core clusters the cosmic-ray component is dominant. For cool-core clusters the magnetic pressure is the dominant component, contributing more than 50% of the total mass variation due to non-thermal pressure components. However, for non-cool-core clusters, the major influence comes from the cosmic-ray pressure, which accounts for more than 80% of the total mass variation due to non-thermal pressure effects.
For our sample, the maximum influence of the turbulent component to the total mass variation can be almost 20%. Although all of the assumptions agree with previous works, it is important to notice that our results rely on the specific parametrization adopted in this work. We show that this analysis can be regarded as a starting point for a more detailed and refined exploration of the influence of non-thermal pressure in the intra-cluster medium (ICM).
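The mechanism behind the quoted 10-30% mass variation can be sketched with the spherically symmetric hydrostatic mass estimator M(<r) = -r²/(G ρ_g) dP_tot/dr: adding a non-thermal pressure that tracks the thermal one rescales the inferred mass by the same fraction. The profiles below are toy power laws for illustration, not the XMM-Newton-derived ones:

```python
# Sketch: hydrostatic mass M(<r) = -r^2 / (G * rho_g) * dP_tot/dr, with
# P_tot = thermal + non-thermal pressure.  Toy power-law profiles only.
import numpy as np

G = 6.674e-11                              # gravitational constant, SI units

def hydrostatic_mass(r, rho_g, p_total):
    """Enclosed mass profile from a pressure and gas-density profile."""
    dp_dr = np.gradient(p_total, r)
    return -r**2 / (G * rho_g) * dp_dr

r = np.linspace(1e21, 3e22, 500)           # radii in m (roughly 30 kpc - 1 Mpc)
rho_g = 1e-23 * (r / r[0])**-1.5           # toy gas density profile (kg/m^3)
p_th = 1e-10 * (r / r[0])**-2.0            # toy thermal pressure profile (Pa)
p_nt = 0.2 * p_th                          # toy non-thermal share of 20%
m_th = hydrostatic_mass(r, rho_g, p_th)
m_tot = hydrostatic_mass(r, rho_g, p_th + p_nt)
```

Because the toy non-thermal term is proportional to the thermal one, the total-pressure mass is exactly 20% larger everywhere; with the paper's distinct radial shapes for each component, the correction instead varies with radius.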

Relevance: 80.00%

Abstract:

Objective: We carry out a systematic assessment of a suite of kernel-based learning machines while coping with the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. Methods and materials: The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of the criteria of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. Results: We first quantitatively assess the impact of the choice of the wavelet basis on the quality of the features extracted. Four wavelet basis functions were considered in this study. Then, we provide the average accuracy values (estimated via cross-validation) delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, whereby one can visually inspect their levels of sensitivity to the type of feature and to the kernel function/parameter value.
Conclusions: Overall, the results show that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value, as well as the choice of the feature extractor, are critical decisions, although the choice of the wavelet family seems not to be so relevant. Also, the statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile emerged across all types of machines, involving some regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality). (C) 2011 Elsevier B.V. All rights reserved.
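The two kernel functions named above can be sketched directly; conventions for the exponential kernel's denominator vary between references, so the form below is one common choice, not necessarily the study's exact parametrization:

```python
# Sketch of the two kernels named in the abstract:
#   Gaussian RBF:    k(x, y) = exp(-||x - y||^2 / (2*sigma^2))
#   Exponential RBF: k(x, y) = exp(-||x - y||   / (2*sigma^2))
# The exponential kernel's scaling convention varies; this is one common form.
import numpy as np

def gaussian_rbf(x, y, sigma=1.0):
    d2 = np.sum((np.asarray(x, float) - np.asarray(y, float))**2)
    return np.exp(-d2 / (2.0 * sigma**2))

def exponential_rbf(x, y, sigma=1.0):
    d = np.sqrt(np.sum((np.asarray(x, float) - np.asarray(y, float))**2))
    return np.exp(-d / (2.0 * sigma**2))

k_g = gaussian_rbf([0.0, 0.0], [1.0, 0.0], sigma=1.0)     # exp(-0.5)
k_e = exponential_rbf([0.0, 0.0], [1.0, 0.0], sigma=1.0)  # exp(-0.5) at d = 1
```

The kernel parameter sigma is the "radius" swept over 26 values in the study: small sigma makes the machine sensitive to local structure, large sigma smooths it out, which is exactly the stability/sharp-variation profile the conclusions describe.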

Relevance: 80.00%

Abstract:

Rectangular dropshafts, commonly used in sewers and storm water systems, are characterised by significant flow aeration. New detailed air-water flow measurements were conducted in a near-full-scale dropshaft at large discharges. In the shaft pool and outflow channel, the results demonstrated the complexity of different competitive air entrainment mechanisms. Bubble size measurements showed a broad range of entrained bubble sizes. Analysis of streamwise distributions of bubbles further suggested some clustering process in the bubbly flow, although, in the outflow channel, bubble chords were on average smaller than in the shaft pool. A robust hydrophone was tested to measure bubble acoustic spectra and to assess its field application potential. The acoustic results accurately characterised the order of magnitude of entrained bubble sizes, but the transformation from acoustic frequencies to bubble radii did not correctly predict the probability distribution functions of bubble sizes.
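Transformations from acoustic frequency to bubble radius typically rest on the Minnaert resonance, f = (1/(2*pi*a)) * sqrt(3*gamma*p0/rho). A sketch of that textbook relation, offered only as context for the hydrophone result; the paper's actual inversion procedure is not reproduced here:

```python
# Sketch: the Minnaert relation linking a bubble's acoustic resonance
# frequency to its radius, f = (1 / (2*pi*a)) * sqrt(3*gamma*p0 / rho),
# inverted to give radius from frequency.  Standard textbook physics,
# not the paper's specific inversion.
import math

def minnaert_radius(freq_hz, p0=101325.0, rho=998.0, gamma=1.4):
    """Bubble radius (m) whose Minnaert resonance matches freq_hz
    (air bubble in water at atmospheric pressure by default)."""
    return math.sqrt(3.0 * gamma * p0 / rho) / (2.0 * math.pi * freq_hz)

a = minnaert_radius(3260.0)   # roughly a 1 mm air bubble in water
```

The relation gets the order of magnitude right (kHz frequencies map to millimetre bubbles), which matches the abstract's finding, while the full size distribution requires more than this single-bubble resonance picture.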

Relevance: 80.00%

Abstract:

The Direct Simulation Monte Carlo (DSMC) method is used to simulate the flow of rarefied gases. In the Macroscopic Chemistry Method (MCM) for DSMC, chemical reaction rates calculated from local macroscopic flow properties are enforced in each cell. Unlike the standard total collision energy (TCE) chemistry model for DSMC, the new method is not restricted to an Arrhenius form of the reaction rate coefficient, nor is it restricted to a collision cross-section which yields a simple power-law viscosity. For reaction rates of interest in aerospace applications, chemically reacting collisions are generally infrequent events and, as such, local equilibrium conditions are established before a significant number of chemical reactions occur. Hence, the reaction rates which have been used in MCM have been calculated from the reaction rate data which are expected to be correct only for conditions of thermal equilibrium. Here we consider artificially high reaction rates so that the fraction of reacting collisions is not small and propose a simple method of estimating the rates of chemical reactions which can be used in the Macroscopic Chemistry Method in both equilibrium and non-equilibrium conditions. Two tests are presented: (1) The dissociation rates under conditions of thermal non-equilibrium are determined from a zero-dimensional Monte-Carlo sampling procedure which simulates ‘intra-modal’ non-equilibrium; that is, equilibrium distributions in each of the translational, rotational and vibrational modes but with different temperatures for each mode; (2) The 2-D hypersonic flow of molecular oxygen over a vertical plate at Mach 30 is calculated. In both cases the new method produces results in close agreement with those given by the standard TCE model in the same highly nonequilibrium conditions. 
We conclude that the general method of estimating the non-equilibrium reaction rate is a simple means by which information contained within non-equilibrium distribution functions predicted by the DSMC method can be included in the Macroscopic Chemistry Method.
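The macroscopic-chemistry idea above can be sketched as: in each cell, compute an expected number of reaction events from a macroscopic rate coefficient and the local state, then enforce that many events. An Arrhenius form is used below purely for illustration; the abstract's point is precisely that MCM is not limited to this form, and all constants are placeholders:

```python
# Sketch of the Macroscopic Chemistry Method idea: reaction events per cell
# per timestep computed from a macroscopic rate coefficient.  The Arrhenius
# form and all constants are illustrative placeholders only.
import math

def arrhenius(temperature, a=1.0e-16, eta=-0.5, e_act=8.0e-19,
              k_boltz=1.380649e-23):
    """Illustrative rate coefficient k(T) = a * T^eta * exp(-E_a / (k_B * T))."""
    return a * temperature**eta * math.exp(-e_act / (k_boltz * temperature))

def expected_reactions(number_density, temperature, cell_volume, dt):
    """Expected binary-reaction events in one cell over one timestep
    (like-species pairs, hence the factor 1/2)."""
    rate = arrhenius(temperature)
    return 0.5 * rate * number_density**2 * cell_volume * dt

n_events = expected_reactions(1e21, 10000.0, 1e-9, 1e-7)
```

Because the rate is evaluated from macroscopic cell properties rather than per-collision acceptance tests, any tabulated or fitted k(T) (equilibrium or, as proposed above, non-equilibrium) can be substituted for the Arrhenius placeholder without touching the collision routines.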

Relevance: 80.00%

Abstract:

1. Although population viability analysis (PVA) is widely employed, forecasts from PVA models are rarely tested. This study in a fragmented forest in southern Australia contrasted field data on patch occupancy and abundance for the arboreal marsupial greater glider Petauroides volans with predictions from a generic spatially explicit PVA model. This work represents one of the first landscape-scale tests of its type. 2. Initially we contrasted field data from a set of eucalypt forest patches totalling 437 ha with a naive null model in which forecasts of patch occupancy were made, assuming no fragmentation effects and based simply on remnant area and measured densities derived from nearby unfragmented forest. The naive null model predicted an average total of approximately 170 greater gliders, considerably greater than the true count (n = 81). 3. Congruence was examined between field data and predictions from PVA under several metapopulation modelling scenarios. The metapopulation models performed better than the naive null model. Logistic regression showed highly significant positive relationships between predicted and actual patch occupancy for the four scenarios (P = 0.001-0.006). When the model-derived probability of patch occupancy was high (0.50-0.75, 0.75-1.00), there was greater congruence between actual patch occupancy and the predicted probability of occupancy. 4. For many patches, probability distribution functions indicated that model predictions for animal abundance in a given patch were not outside those expected by chance. However, for some patches the model either substantially over-predicted or under-predicted actual abundance. Some important processes, such as inter-patch dispersal, that influence the distribution and abundance of the greater glider may not have been adequately modelled. 5. Additional landscape-scale tests of PVA models, on a wider range of species, are required to assess further predictions made using these tools. 
This will help determine those taxa for which predictions are and are not accurate and give insights for improving models for applied conservation management.
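The "naive null model" in point 2 is simply density-from-unfragmented-forest times patch area, summed over patches. A sketch with illustrative stand-in values; the patch areas are invented but chosen to total 437 ha as in the abstract, and the density of 0.39 gliders/ha is a made-up figure consistent with the quoted prediction of roughly 170 animals:

```python
# Sketch of the naive null model: predicted abundance = unfragmented-forest
# density x patch area, summed over patches.  Areas and density below are
# illustrative stand-ins, not the study's data.
def naive_null_prediction(patch_areas_ha, density_per_ha):
    return sum(area * density_per_ha for area in patch_areas_ha)

patches = [120.0, 90.0, 75.0, 60.0, 45.0, 30.0, 17.0]   # sums to 437 ha
predicted = naive_null_prediction(patches, density_per_ha=0.39)
```

The gap between such a prediction (~170) and the observed count (81) is what motivates the metapopulation models, which add fragmentation effects the null model ignores.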

Relevance: 80.00%

Abstract:

Survival and development time from egg to adult emergence of the diamondback moth, Plutella xylostella (L.), were determined at 19 constant and 14 alternating temperature regimes from 4 to 40°C. Plutella xylostella developed successfully from egg to adult emergence at constant temperatures from 8 to 32°C. At temperatures from 4 to 6°C or from 34 to 40°C, partial or complete development of individual stages or instars was possible, with third and fourth instars having the widest temperature limits. The insect developed successfully from egg to adult emergence under alternating regimes including temperatures as low as 4°C or as high as 38°C. The degree-day model, the logistic equation, and the Wang model were used to describe the relationships between temperature and development rate at both constant and alternating temperatures. The degree-day model described the relationships well from 10 to 30°C. The logistic equation and the Wang model fit the data well at temperatures up to 32°C. Under alternating regimes, all three models gave good simulations of development in the mid-temperature range, but only the logistic equation gave close simulations in the low temperature range, and none gave close or consistent simulations in the high temperature range. The distribution of development time was described satisfactorily by a Weibull function. These rate and time distribution functions provide tools for simulating population development of P. xylostella over a wide range of temperature conditions.
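The degree-day model mentioned above is the linear development-rate model rate(T) = (T - T_base) / K above a base temperature, with K the thermal constant in degree-days; development time is the reciprocal of the rate. The base temperature and thermal constant below are placeholders, not the fitted values from the study:

```python
# Sketch: the degree-day (linear) development-rate model,
#   rate(T) = (T - T_base) / K   for T > T_base, else 0,
# where K is the thermal constant in degree-days.  T_base and K are
# illustrative placeholders, not the study's fitted values.
def degree_day_rate(temp_c, t_base=7.0, thermal_constant=150.0):
    """Development rate (1/days); zero at or below the base temperature."""
    return max(0.0, (temp_c - t_base) / thermal_constant)

def development_days(temp_c, t_base=7.0, thermal_constant=150.0):
    rate = degree_day_rate(temp_c, t_base, thermal_constant)
    return float('inf') if rate == 0.0 else 1.0 / rate

days_at_22 = development_days(22.0)   # 150 / (22 - 7) = 10 days
```

The linearity is also why the model fails outside the mid-temperature range, as reported above: real development rates flatten and then drop near the upper limit, which the logistic and Wang models capture instead.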

Relevance: 80.00%

Abstract:

This work aims to evaluate the performance of the MECID (Boundary Element Method with Direct Interpolation) in solving the integral term corresponding to inertia in the Helmholtz equation, thereby allowing the eigenvalue problem to be modeled and the natural frequencies to be computed, comparing it with the results obtained by the MEF (Finite Element Method) under the classical Galerkin formulation. First, some problems governed by the Poisson equation are addressed, initiating the performance comparison between the numerical methods considered here. The problems solved apply to different and important areas of engineering, such as heat transfer, electromagnetism, and particular elastic problems. In numerical terms, the difficulties of accurately approximating more complex distributions of loads, sources, or sinks in the interior of the domain are well known for any boundary technique. Nevertheless, this work shows that, despite such difficulties, the performance of the Boundary Element Method is superior, both in computing the basic variable and its derivative. To this end, two-dimensional problems are solved concerning elastic membranes, stresses in bars under self-weight, and the determination of natural frequencies in acoustic problems on closed domains, among others presented, using meshes with different degrees of refinement, with linear elements and radial basis functions for the MECID, and first-degree polynomial interpolation basis functions for the MEF. Performance curves are generated by computing the mean percentage error for each mesh, demonstrating the convergence and accuracy of each method. The results are also compared with analytical solutions, when available, for each example solved in this work.
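The radial-basis-function interpolation that MECID relies on for the domain (inertia) term can be sketched in its simplest form: fit coefficients so that a sum of radial kernels matches the function at scattered nodes. The 1D example below uses the linear kernel phi(r) = r and is illustrative only, not the MECID formulation itself:

```python
# Sketch: radial-basis-function interpolation of a domain term, the kind of
# approximation MECID relies on.  Linear kernel phi(r) = r in 1D; this is a
# generic illustration, not the MECID formulation.
import numpy as np

def rbf_fit(x_nodes, f_nodes):
    """Solve A alpha = f with A_ij = phi(|x_i - x_j|) and phi(r) = r."""
    a = np.abs(x_nodes[:, None] - x_nodes[None, :])
    return np.linalg.solve(a, f_nodes)

def rbf_eval(x, x_nodes, alpha):
    """Evaluate sum_j alpha_j * phi(|x - x_j|) at the points x."""
    return np.abs(x[:, None] - x_nodes[None, :]) @ alpha

x_nodes = np.linspace(0.0, 1.0, 15)
f_nodes = np.sin(np.pi * x_nodes)              # toy "source" distribution
alpha = rbf_fit(x_nodes, f_nodes)
x_test = np.linspace(0.05, 0.95, 50)
err = np.max(np.abs(rbf_eval(x_test, x_nodes, alpha) - np.sin(np.pi * x_test)))
```

With phi(r) = r in 1D the interpolant is piecewise linear through the nodes, so refining the node set drives the error down, which is the convergence behaviour the performance curves above track for each mesh.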