57 results for Lattice Statistics


Relevance: 20.00%

Publisher:

Abstract:

A lattice Boltzmann method for simulating viscous flow in large distensible blood vessels is presented by introducing a boundary condition for elastic, moving boundaries. Mass conservation at the boundary is tested in detail. Viscous flow in elastic vessels is simulated with a pressure-radius relationship similar to that of pulmonary blood vessels. The numerical results for steady flow agree with the analytical prediction to very high accuracy, and the simulation results for pulsatile flow are comparable with experimentally observed aortic flows. The model is expected to find many applications in studying blood flow in large distensible arteries, especially those affected by atherosclerosis, stenosis, aneurysm, etc.
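The pressure-radius ("tube law") coupling that drives the moving boundary can be illustrated with a minimal sketch; the paper's actual relation and constants are not reproduced here, so the linear form and the values of `R0`, `p0`, and the compliance `alpha` below are assumptions for illustration only.

```python
# Hypothetical linear tube law: the wall radius responds to the local
# transmural pressure. R0, p0 and alpha are illustrative values, not the
# paper's calibrated pulmonary-vessel parameters.

def vessel_radius(p, R0=1.0, p0=0.0, alpha=0.02):
    """Linearly distensible vessel: radius grows with transmural pressure."""
    return R0 * (1.0 + alpha * (p - p0))

# At each lattice Boltzmann step, the local pressure (obtained from the
# density) would reposition the elastic wall via a relation of this kind:
radii = [vessel_radius(p) for p in (0.0, 5.0, 10.0)]
```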


The multicomponent nonideal-gas lattice Boltzmann model of Shan and Chen (S-C) is used to study immiscible displacement in a sinusoidal tube. The movement of the interface and of the contact point (contact line in three dimensions) is studied. Owing to the roughness of the boundary, the contact point exhibits "stick-slip" motion, and this "stick-slip" effect weakens as the speed of the interface increases. For nonwetting fluids the interface is almost perpendicular to the boundaries most of the time, although its shape differs considerably at different positions along the tube. Where the tube narrows, the interface develops into complex curves rather than remaining a simple meniscus. The velocity is found to vary considerably between neighboring nodes close to the contact point, consistent with the experimental observation that the velocity is multi-valued on the contact line. Finally, the effect of three boundary conditions is discussed. The average speed differs among the boundary conditions: the simple bounce-back rule makes the contact point move fastest, and both the simple bounce-back and no-slip bounce-back rules are more sensitive to boundary roughness than the half-way bounce-back rule. The simulation results suggest that the S-C model may be a promising tool for simulating the displacement of two immiscible fluids in complex geometries.
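The ingredient that produces interfaces in the S-C model is the pseudopotential interaction force. A minimal one-dimensional sketch is given below, assuming the conventional exponential form of the pseudopotential and an illustrative coupling strength `G`; the paper's actual parameters are not stated here.

```python
import numpy as np

# Minimal 1D sketch of the Shan-Chen pseudopotential interaction force,
# the term that generates interface tension in the S-C model. G, rho0 and
# the step-profile densities are illustrative assumptions.

def shan_chen_force(rho, G=-1.2, rho0=1.0):
    psi = rho0 * (1.0 - np.exp(-rho / rho0))   # conventional pseudopotential
    # nearest neighbours on a periodic 1D lattice, weight 1/2 each direction
    return -G * psi * 0.5 * (np.roll(psi, -1) - np.roll(psi, 1))

rho = np.where(np.arange(32) < 16, 2.0, 0.1)   # step "interface" at node 15/16
F = shan_chen_force(rho)
# the force is non-zero only near the (periodic) interfaces
```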


The lattice dynamics method is used to study the stability of the chain structures formed in electrorheological (ER) fluids. The appearance of soft modes in the phonon dispersion of these structures indicates that the chains tend to distort and aggregate into thicker columns under the electrostatic attractive forces and thermally generated forces between them. The results show that the stability of the chains depends on their width and on the separation between them, and that complete chains are more stable than chains with defects. These results help elucidate the densification of the chains during the structuring process of ER fluids in the quiescent state.
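The soft-mode criterion can be illustrated on the simplest possible analogue, a 1D harmonic chain, where the squared phonon frequency going negative at some wavevector signals an instability; the chain itself and the values of `K`, `m`, and `a` are purely illustrative stand-ins for the ER chain structures.

```python
import numpy as np

# Illustrative 1D-chain analogue of the soft-mode test: the dispersion is
# w^2(q) = (4K/m) sin^2(qa/2), so a negative effective spring constant K
# gives w^2 < 0 (a soft mode) and hence an unstable structure.
# K, m and a are assumed for illustration.

def omega_sq(q, K, m=1.0, a=1.0):
    return (4.0 * K / m) * np.sin(q * a / 2.0) ** 2

q = np.linspace(-np.pi, np.pi, 201)            # first Brillouin zone
stable = bool(np.all(omega_sq(q, K=1.0) >= 0.0))   # all w^2 >= 0: stable
soft = bool(np.any(omega_sq(q, K=-0.2) < 0.0))     # some w^2 < 0: soft mode
```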


A lattice Boltzmann model able to simulate viscous fluid systems with elastic, movable boundaries is proposed. By introducing a virtual distribution function at the boundary, Galilean invariance is recovered for the full system. As an example application, the flow in elastic vessels is simulated with a pressure-radius relationship similar to that of pulmonary blood vessels. The numerical results for steady flow are in good agreement with the analytical prediction, while the simulation results for pulsatile flow agree qualitatively with experimental observations of aortic flows. The approach has potential application to complex fluid systems such as suspensions as well as arterial blood flow.


With a rapidly increasing fraction of electricity generation being sourced from wind, extreme wind power generation events, such as prolonged periods of low (or high) generation and ramps in generation, are a growing concern for the efficient and secure operation of national power systems. As extreme events occur infrequently, long and reliable meteorological records are required to estimate their characteristics accurately. Recent publications have begun to investigate the use of global meteorological "reanalysis" data sets for power system applications, many of which focus on long-term average statistics such as monthly-mean generation. Here we demonstrate that reanalysis data can also be used to estimate the frequency of relatively short-lived extreme events (including ramping on sub-daily time scales). Verification against 328 surface observation stations across the United Kingdom suggests that near-surface wind variability over spatiotemporal scales greater than around 300 km and 6 h can be faithfully reproduced using reanalysis, with no need for costly dynamical downscaling. A case study is presented in which a state-of-the-art, 33-year reanalysis data set (MERRA, from NASA-GMAO) is used to construct an hourly time series of nationally-aggregated wind power generation in Great Britain (GB), assuming a fixed, modern distribution of wind farms. The resultant generation estimates are highly correlated with recorded data from National Grid in the recent period, both for instantaneous hourly values and for variability over time intervals greater than around 6 h. This 33-year time series is then used to quantify the frequency with which different extreme GB-wide wind power generation events occur, as well as their seasonal and inter-annual variability.
Several novel insights into the nature of extreme wind power generation events are described, including (i) that the number of prolonged low- or high-generation events is well approximated by a Poisson-like random process, and (ii) that, whilst in general there is large seasonal variability, the magnitude of the most extreme ramps is similar in both summer and winter. An up-to-date version of the GB case study data, as well as the underlying model, is freely available for download from our website: http://www.met.reading.ac.uk/~energymet/data/Cannon2014/.
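Counting prolonged low-generation events of the kind described above amounts to a run-length scan over an hourly capacity-factor series; the 0.1 threshold and 12 h minimum duration below are illustrative choices, not the paper's definitions.

```python
import numpy as np

# Sketch: an "event" is a run of at least min_hours consecutive hours with
# capacity factor below threshold. Threshold and duration are assumptions
# for illustration, not the study's event definitions.

def count_low_events(cf, threshold=0.1, min_hours=12):
    low = np.asarray(cf) < threshold
    # pad with False so every run has a detectable start and end
    edges = np.diff(np.concatenate(([False], low, [False])).astype(int))
    starts = np.where(edges == 1)[0]
    ends = np.where(edges == -1)[0]
    return int(np.sum((ends - starts) >= min_hours))

cf = np.ones(100)
cf[10:30] = 0.05    # one 20 h low-generation event
cf[50:55] = 0.05    # a 5 h dip, too short to count
print(count_low_events(cf))   # -> 1
```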


A reply to the comment by S. Romano [Phys. Rev. E 2015] on our previous paper is provided.


To improve the quantity and impact of observations used in data assimilation, it is necessary to take into account the full, potentially correlated, observation error statistics. A number of methods for estimating correlated observation errors exist, but a popular one is a diagnostic that makes use of statistical averages of observation-minus-background and observation-minus-analysis residuals. The accuracy of the results it yields is unknown, as the diagnostic is sensitive to the difference between the exact background and observation error covariances and those chosen for use within the assimilation. It has often been stated in the literature that the results of this diagnostic are only valid when the background and observation error correlation length scales are well separated. Here we develop new theory relating to the diagnostic. For observations on a 1D periodic domain we are able to show the effect of changes in the assumed error statistics used in the assimilation on the estimated observation error covariance matrix. We also provide bounds for the estimated observation error variance and for the eigenvalues of the estimated observation error correlation matrix. We demonstrate that it is still possible to obtain useful results from the diagnostic when the background and observation error length scales are similar. In general, our results suggest that when correlated observation errors are treated as uncorrelated in the assimilation, the diagnostic will underestimate the correlation length scale. We support our theoretical results with simple illustrative examples. These results have potential use for interpreting the covariances estimated using an operational system.
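The residual-based diagnostic can be sketched in the idealized case where observations are direct (H = I) and the assimilation happens to use the exact background and observation covariances, in which case the average of the residual products recovers R; the matrices B and R and the sample size below are assumed purely for illustration.

```python
import numpy as np

# Toy demonstration of the residual diagnostic: with H = I and the correct
# B and R used in the analysis, E[d_a d_b^T] equals R. B, R and m are
# illustrative assumptions.

rng = np.random.default_rng(1)
n, m = 3, 400000
B = 2.0 * np.eye(n)                      # assumed background error covariance
R = np.array([[1.0, 0.5, 0.0],           # assumed correlated observation errors
              [0.5, 1.0, 0.5],
              [0.0, 0.5, 1.0]])

xb_err = np.linalg.cholesky(B) @ rng.standard_normal((n, m))
ob_err = np.linalg.cholesky(R) @ rng.standard_normal((n, m))

d_b = ob_err - xb_err                    # observation-minus-background residuals
K = B @ np.linalg.inv(B + R)             # optimal gain with the correct B and R
d_a = d_b - K @ d_b                      # observation-minus-analysis residuals

R_est = (d_a @ d_b.T) / m                # the diagnostic: sample mean of d_a d_b^T
```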


Although sunspot-number series have existed since the mid-19th century, they are still the subject of intense debate, with the largest uncertainty being related to the "calibration" of the visual acuity of individual observers in the past. Daisy-chain regression methods have been applied to inter-calibrate the observers, which may lead to significant bias and error accumulation. Here we present a novel method to calibrate the visual acuity of the key observers against the reference data set of Royal Greenwich Observatory sunspot groups for the period 1900-1976, using the statistics of the active-day fraction. For each observer we independently evaluate an observational threshold [S_S], defined such that the observer is assumed to miss all groups with an area smaller than S_S and to report all groups larger than S_S. Next, using a Monte Carlo method, we construct from the reference data set a correction matrix for each observer. The correction matrices are significantly non-linear and cannot be approximated by a linear regression or proportionality; we emphasize that corrections based on a linear proportionality between annually averaged data lead to serious biases and distortions of the data. The correction matrices are applied to the original sunspot group records for each day, and finally the composite corrected series is produced for the period since 1748. The corrected series displays secular minima around 1800 (the Dalton minimum) and 1900 (the Gleissberg minimum), as well as the Modern grand maximum of activity in the second half of the 20th century. The uniqueness of the grand maximum is confirmed for the last 250 years. It is shown that the adoption of a linear relationship between the data of Wolf and Wolfer results in grossly inflated group numbers in the 18th and 19th centuries in some reconstructions.
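The observational-threshold idea can be sketched directly: an observer with acuity threshold S_S reports only groups whose area exceeds S_S, which lowers both the daily group counts and the active-day fraction. The group areas and the threshold value below are illustrative numbers, not the paper's calibrated values.

```python
# Sketch of the acuity-threshold model: the observer records only groups
# with area >= S_S. The areas and S_S = 20 are illustrative, not the
# paper's calibrated thresholds.

def observed_groups(daily_group_areas, S_S):
    """Number of groups the observer reports on each day."""
    return [sum(a >= S_S for a in areas) for areas in daily_group_areas]

def active_day_fraction(counts):
    """Fraction of observed days with at least one reported group."""
    return sum(c > 0 for c in counts) / len(counts)

days = [[5, 40, 120], [8], [], [60, 15]]    # hypothetical group areas per day
counts = observed_groups(days, S_S=20)       # -> [2, 0, 0, 1]
print(active_day_fraction(counts))           # -> 0.5
```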


With the development of convection-permitting numerical weather prediction, the efficient use of high-resolution observations in data assimilation is becoming increasingly important. The operational assimilation of such observations, for example Doppler radar radial winds, is now common, though to avoid violating the assumption of uncorrelated observation errors the observation density is severely reduced. Improving the quantity of observations used, and the impact they have on the forecast, will require the introduction of the full, potentially correlated, error statistics. In this work, observation error statistics are calculated for the Doppler radar radial winds that are assimilated into the Met Office high-resolution UK model, using a diagnostic that makes use of statistical averages of observation-minus-background and observation-minus-analysis residuals. This is the first in-depth study using the diagnostic to estimate both horizontal and along-beam correlated observation errors. It is found that the Doppler radar radial wind error standard deviations are similar to those used operationally and increase with observation height. Surprisingly, the estimated observation error correlation length scales are longer than the operational thinning distance; they depend both on the height of the observation and on its distance from the radar. Further tests show that the long correlations cannot be attributed to the use of superobservations or to the background error covariance matrix used in the assimilation. The large horizontal correlation length scales are, however, in part a result of using a simplified observation operator.
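Estimating how an error correlation falls off with horizontal separation, as done here for the radar winds, can be sketched by binning sample-covariance entries by inter-observation distance; the exponential "true" correlation and the 150 km length scale below are assumed purely for illustration.

```python
import numpy as np

# Sketch: draw correlated errors at 31 positions along a line, form the
# sample correlation matrix, and average its entries in 20 km separation
# bins. The exponential correlation with L = 150 km is an assumption.

rng = np.random.default_rng(2)
x = np.linspace(0.0, 600.0, 31)                     # observation positions (km)
L = 150.0                                           # assumed true length scale
C = np.exp(-np.abs(x[:, None] - x[None, :]) / L)    # true error correlation
e = np.linalg.cholesky(C) @ rng.standard_normal((31, 50000))

S = (e @ e.T) / e.shape[1]                  # sample correlation matrix
d = np.abs(x[:, None] - x[None, :])         # pairwise separations (km)
bins = np.arange(0.0, 620.0, 20.0)
idx = np.digitize(d.ravel(), bins)
mean_corr = [S.ravel()[idx == k].mean() for k in (1, 6)]
# mean_corr[0]: zero-separation pairs, close to 1
# mean_corr[1]: ~100 km pairs, decayed toward exp(-100/150)
```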