940 results for MEAN-FIELD SIMULATIONS
Abstract:
We review the use of neural field models for modelling the brain at the large scales necessary for interpreting EEG, fMRI, MEG and optical imaging data. Although limited to coarse-grained or mean-field activity, neural field models provide a framework for unifying data from different imaging modalities. Starting with a description of neural mass models, we build up to spatially extended cortical models of layered two-dimensional sheets with long-range axonal connections mediating synaptic interactions. Reformulations of the fundamental non-local mathematical model in terms of more familiar local differential (brain-wave) equations are described. Techniques for the analysis of such models, including how to determine the onset of spatio-temporal pattern-forming instabilities, are reviewed. Extensions of the basic formalism to treat refractoriness, adaptive feedback and inhomogeneous connectivity are described, along with open challenges for the development of multi-scale models that can integrate macroscopic models at large spatial scales with models at the microscopic scale.
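For concreteness (standard notation, not necessarily the review's exact symbols), the canonical non-local neural field model of Amari type, for activity u(x,t) on a cortical sheet, reads:

```latex
\frac{\partial u(\mathbf{x},t)}{\partial t}
  = -u(\mathbf{x},t)
  + \int_{\Omega} w(\mathbf{x}-\mathbf{x}')\, f\!\bigl(u(\mathbf{x}',t)\bigr)\, d\mathbf{x}'
```

Here w is the synaptic connectivity kernel and f a sigmoidal firing-rate function; the local "brain wave" PDE reformulations mentioned above arise when w takes a suitable (e.g. exponentially decaying) form.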
Abstract:
In this thesis we present a mathematical formulation of the interaction between microorganisms, such as bacteria or amoebae, and chemicals often produced by the organisms themselves. This interaction is called chemotaxis and leads to cellular aggregation. We derive several models to describe chemotaxis. The first is the pioneering Keller-Segel parabolic-parabolic model, which we derive within two different frameworks: a macroscopic perspective and a microscopic perspective, in which we start from a stochastic differential equation and perform a mean-field approximation. This parabolic model may be generalized by introducing a degenerate diffusion coefficient that depends on the density itself via a power law. We then derive a model for chemotaxis based on Cattaneo's law of heat propagation with finite speed, which is a hyperbolic model. The last model proposed here is a hydrodynamic model, which takes into account the inertia of the system through a friction force. In the limit of strong friction the model reduces to the parabolic model, whereas in the limit of weak friction we recover a hyperbolic model. Finally, we analyze the instability condition that leads to aggregation and describe the different kinds of aggregates we may obtain: the parabolic models lead to clusters or peaks, whereas the hyperbolic models lead to the formation of network patterns or filaments. Moreover, we discuss the analogy between bacterial colonies and self-gravitating systems by comparing the chemotactic collapse with the gravitational collapse (Jeans instability).
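For reference, a minimal statement of the parabolic-parabolic Keller-Segel system discussed above (standard notation, not the thesis's exact symbols):

```latex
\begin{aligned}
\partial_t \rho &= \nabla\cdot\bigl(D\,\nabla\rho - \chi\,\rho\,\nabla c\bigr),\\
\partial_t c   &= D_c\,\Delta c + \alpha\,\rho - \beta\,c,
\end{aligned}
```

with ρ the cell density, c the chemoattractant concentration, χ the chemotactic sensitivity, and α, β the production and degradation rates of the chemical. The degenerate-diffusion generalization replaces the constant D by a power-law function of ρ.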
Abstract:
The iPlan treatment planning system uses a pencil beam algorithm, with density corrections, to predict the doses delivered by very small (stereotactic) radiotherapy fields. This study tests the accuracy of dose predictions made by iPlan for small-field treatments delivered to a planar solid water phantom and to heterogeneous human tissue using the BrainLAB m3 micro-multileaf collimator.
Abstract:
Introduction: The total scatter factor (or output factor) in megavoltage photon dosimetry is a measure of relative dose relating a given field size to a reference field size. The use of solid phantoms is well established for output factor measurements; however, to date these phantoms have not been tested with small fields. In this work, we evaluate the water equivalency of a number of solid phantoms for small-field output factor measurements using the EGSnrc Monte Carlo code. Methods: The following small square field sizes were simulated using BEAMnrc: 5, 6, 7, 8, 10 and 30 mm. Each simulated phantom geometry was created in DOSXYZnrc and consisted of a silicon diode (1.5 mm in length and width, 0.5 mm in depth) submersed in the phantom at a depth of 5 g/cm2. The source-to-detector distance was 100 cm for all simulations. The dose was scored in a single voxel at the location of the diode. Interaction probabilities and radiation transport parameters for each material were created using custom PEGS4 files. Results: The output factors computed in the solid phantoms are compared with those computed in a water phantom in Fig. 1. The statistical uncertainty in each point was less than or equal to 0.4%. The results in Fig. 1 show that the density of the phantoms affected the output factor results, with higher-density materials (such as PMMA) resulting in higher output factors. It was also found that scaling the depth for equivalent path length had a negligible effect on the output factor results at these field sizes. Discussion and conclusions: Electron stopping power and photon mass energy absorption change minimally with small field size [1]. It can also be seen from Fig. 1 that the difference from water decreases with increasing field size. Therefore, the most likely cause of the observed discrepancies in output factors is differing electron disequilibrium as a function of phantom density.
When measuring small field output factors in a solid phantom, it is important that the density is very close to that of water.
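As a minimal sketch of the quantity being computed (illustrative function and variable names; the study's actual doses came from EGSnrc tallies), an output factor is the ratio of the dose at a given field size to the dose at the reference field size, with the statistical uncertainty of the ratio obtained by adding the relative uncertainties in quadrature:

```python
import math

def output_factor(dose_field, dose_ref, sigma_field, sigma_ref):
    """Total scatter (output) factor: dose for a given field size
    relative to the dose for the reference field size."""
    of = dose_field / dose_ref
    # relative uncertainties of a ratio add in quadrature
    rel = math.sqrt((sigma_field / dose_field) ** 2 + (sigma_ref / dose_ref) ** 2)
    return of, of * rel

# e.g. a hypothetical small-field dose against the 30 mm reference field:
of, sigma = output_factor(0.92, 1.00, 0.003, 0.003)
```

The quadrature rule is why the sub-0.4% per-point uncertainties quoted above translate into comparably small uncertainties on the output factors themselves.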
Abstract:
Background: Plotless density estimators are those that are based on distance measures, rather than counts per unit area (quadrats or plots), to estimate the density of some usually stationary event, e.g. burrow openings or damage to plant stems. These estimators typically use distance measures between events, and from random points to events, to derive an estimate of density. The error and bias of these estimators for the various spatial patterns found in nature have previously been examined using simulated populations only. In this study we investigated eight plotless density estimators to determine which were robust across a wide range of data sets from fully mapped field sites. These covered a wide range of situations, including animal damage to rice and corn, nest locations, active rodent burrows and the distribution of plants. Monte Carlo simulations were applied to sample the data sets, and in all cases the error of the estimate (measured as relative root mean square error) was reduced with increasing sample size. The method of calculation and ease of use in the field were also used to judge the usefulness of each estimator. Estimators were evaluated in their original published forms, although the variable area transect (VAT) and ordered distance methods have been the subjects of optimization studies. Results: An estimator that was a compound of three basic distance estimators was found to be robust across all spatial patterns for sample sizes of 25 or greater. The same field methodology can be used either with the basic distance formula or with the formula used in the Kendall-Moran estimator, in which case a reduction in error may be gained for sample sizes less than 25; however, there is no improvement for larger sample sizes. The variable area transect (VAT) method performed moderately well, is easy to use in the field, and its calculations are easy to undertake.
Conclusion: Plotless density estimators can provide an estimate of density in situations where it would not be practical to lay out a plot or quadrat, and can in many cases reduce the workload in the field.
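To illustrate the flavour of the basic distance estimators evaluated in such studies, here is a sketch of a Pollard-type ordered-distance estimator (a simplified illustration from the general literature, not the compound estimator the study recommends):

```python
import math

def ordered_distance_density(distances, g=1):
    """Density estimate from point-to-g-th-nearest-event distances,
    one distance per random sample point (Pollard-type form):
    density = (g*n - 1) / (pi * sum of squared distances)."""
    n = len(distances)
    if g * n <= 1:
        raise ValueError("need more than one point-to-event distance")
    return (g * n - 1) / (math.pi * sum(r * r for r in distances))

# e.g. ten random sample points, each with its nearest event 1 m away:
density = ordered_distance_density([1.0] * 10)  # events per square metre
```

Each squared distance estimates the empty area around a sample point, which is why clumped or regular spatial patterns bias the simpler estimators and motivate the compound form found to be robust above.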
Abstract:
The safety of an in-service brick arch railway bridge is assessed through field testing and finite-element analysis. Different loading test train configurations have been used in the field testing. The response of the bridge in terms of displacements, strains, and accelerations is measured under the ambient and design train traffic loading conditions. Nonlinear fracture mechanics-based finite-element analyses are performed to assess the margin of safety. A parametric study is done to study the effects of tensile strength on the progress of cracking in the arch. Furthermore, a stability analysis to assess collapse of the arch caused by lateral movement at the springing of one of the abutments that is elastically supported is carried out. The margin of safety with respect to cracking and stability failure is computed. Conclusions are drawn with some remarks on the state of the bridge within the framework of the information available and inferred information. DOI: 10.1061/(ASCE)BE.1943-5592.0000338. (C) 2013 American Society of Civil Engineers.
Abstract:
In several chemical and space industries, small bubbles are desired for efficient interaction between the liquid and gas phases. In the present study, we show that a non-uniform electric field with appropriate electrode configurations can reduce the volume of the bubbles forming at submerged needles by up to three orders of magnitude. We show that localized high electric stresses at the base of the bubbles result in slipping of the contact line on the inner surface of the needle, and subsequent bubble formation occurs with the contact line inside the needle. We also show that for bubble formation in the presence of a highly non-uniform electric field, due to the high detachment frequency, the bubbles undergo multiple coalescences, which increases the apparent volume of the detached bubbles. (C) 2013 AIP Publishing LLC.
Abstract:
The work in this paper forms part of a project on the use of large eddy simulation (LES) for broadband rotor-stator interaction noise prediction. Here we focus on LES of the flow field near a fan blade trailing edge. The first part of the paper aims to evaluate the suitability of LES for predicting the near-field velocity field for a blunt NACA-0012 airfoil at moderate Reynolds numbers (2×10^5 and 4×10^5). Preliminary computations of turbulent mean and root-mean-square velocities, as well as energy spectra at the trailing edge, are compared with those from a recent experiment [1]. The second part of the paper describes preliminary progress on an LES calculation of the fan wakes on a fan rig [2]. The CFD code uses a mixed-element unstructured mesh with a median dual control volume. A wall-adapting local eddy-viscosity sub-grid scale model is employed. A very small amount of numerical dissipation is added in the numerical scheme to keep the compressible solver stable. Further results for the fan turbulent mean and RMS velocity, and especially the aeroacoustic field, will be presented at a later stage. Copyright © 2008 by Qinling Li, Nigel Peake & Mark Savill.
Abstract:
Bycatch, or the incidental catch of nontarget organisms during fishing operations, is a major issue in U.S. shrimp trawl fisheries. Because bycatch is typically discarded at sea, total bycatch is usually estimated by extrapolating from an observed bycatch sample to the entire fleet with either mean-per-unit or ratio estimators. Using both field observations of commercial shrimp trawlers and computer simulations, I compared five methods for generating bycatch estimates that were used in past studies, a mean-per-unit estimator and four forms of the ratio estimator: 1) the mean fish catch per unit of effort, where unit effort was a proxy for sample size; 2) the mean of the individual fish-to-shrimp ratios; 3) the ratio of mean fish catch to mean shrimp catch; 4) the mean of the ratios of fish catch per time fished (a variable measure of effort); and 5) the ratio of mean fish catch to mean time fished. For field data, the different methods used to estimate bycatch of Atlantic croaker, spot, and weakfish yielded extremely different results, with no discernible pattern in the estimates by method, geographic region, or species. Simulated fishing fleets were used to compare bycatch estimated by the five methods with “actual” (simulated) bycatch. Simulations were conducted by using both normal and delta-lognormal distributions of fish and shrimp and employed a range of values for several parameters, including mean catches of fish and shrimp, variability in the catches of fish and shrimp, variability in fishing effort, number of observations, and correlations between fish and shrimp catches. Results indicated that only the mean-per-unit estimators provided statistically unbiased estimates, while all other methods overestimated bycatch. The mean of the individual fish-to-shrimp ratios, the method used in the South Atlantic Bight before the 1990s, gave the most biased estimates.
Because of the statistically significant two- and three-way interactions among parameters, it is unlikely that estimates generated by one method can be converted or corrected to estimates made by another method; therefore, bycatch estimates obtained with different methods should not be compared directly.
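The five per-tow estimators compared above can be written down compactly (a sketch with hypothetical variable names: per-tow fish catch, shrimp catch, and time fished):

```python
def bycatch_estimators(fish, shrimp, time_fished):
    """The five per-tow estimators compared in the study: a mean-per-unit
    estimator (1) and four forms of the ratio estimator (2-5)."""
    mean = lambda xs: sum(xs) / len(xs)
    return {
        # 1) mean fish catch per unit of effort (the tow as proxy unit)
        "mean_per_unit": mean(fish),
        # 2) mean of the individual fish-to-shrimp ratios
        "mean_fish_shrimp_ratios": mean([f / s for f, s in zip(fish, shrimp)]),
        # 3) ratio of mean fish catch to mean shrimp catch
        "ratio_of_means_shrimp": mean(fish) / mean(shrimp),
        # 4) mean of the ratios of fish catch per time fished
        "mean_fish_time_ratios": mean([f / t for f, t in zip(fish, time_fished)]),
        # 5) ratio of mean fish catch to mean time fished
        "ratio_of_means_time": mean(fish) / mean(time_fished),
    }

est = bycatch_estimators(fish=[2.0, 4.0], shrimp=[1.0, 2.0], time_fished=[1.0, 4.0])
```

Even this toy two-tow example shows how methods 4 and 5 diverge when effort varies between tows, which is the mechanism behind the ratio estimators' bias reported above.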
Abstract:
Large eddy simulation (LES) and a novel k-l based hybrid LES/RANS approach have been applied to simulate a conjugate heat transfer problem involving flow over a matrix of surface-mounted cubes. In order to assess the capability and reliability of the newly developed k-l based hybrid LES/RANS, numerical results are compared with new LES and existing RANS results. Comparisons include mean velocity profiles, Reynolds stresses and conjugate heat transfer. Besides serving hybrid LES/RANS validation purposes, the LES results are used to gain insight into the complex flow physics and heat transfer mechanisms. Numerical simulations show that the hybrid LES/RANS approach is effective. Mean and instantaneous fluid temperatures adjacent to the cube surface are found to correlate strongly with the flow structure. Although the LES captures more of the mean velocity field's complexity, the time-averaged wake temperature fields are found to be broadly similar for the LES and hybrid LES/RANS. Copyright © 2005 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.
Abstract:
This paper deals with the experimental evaluation of a flow analysis system based on the integration of an under-resolved Navier-Stokes simulation with experimental measurements through a feedback mechanism (referred to as Measurement-Integrated, or MI, simulation), applied to the case of a planar turbulent co-flowing jet. The experiments are performed with an inner-to-outer-jet velocity ratio of around 2 and a Reynolds number, based on the inner-jet height, of about 10000. The measurement system is a high-speed PIV, which provides time-resolved data of the flow field on a field of view extending to 20 jet heights downstream of the jet outlet. The experimental data can thus be used both to provide the feedback data for the simulations and to validate the MI-simulations over a wide region. The effect of reduced data rate and spatial extent of the feedback (i.e. measurements are not available at each simulation time step or discretization point) was investigated. At first, simulations were run with full information in order to obtain an upper limit on MI-simulation performance. The results show the potential of this methodology to reproduce first- and second-order statistics of the turbulent flow with good accuracy. Then, to deal with the reduced data, different feedback strategies were tested. It was found that for a small data-rate reduction the results are essentially equivalent to the case of full-information feedback, but as the feedback data rate is reduced further the error increases and tends to be localized in regions of high turbulent activity. Moreover, it is found that the spatial distribution of the error looks qualitatively different for different feedback strategies. Feedback gain distributions calculated by optimal control theory are presented and proposed as a means of making it possible to perform MI-simulations based on localized measurements only.
So far, however, we have not been able to achieve a low error between measurements and simulations by using these gain distributions.
Abstract:
Chaplin, W. J.; Dumbill, A. M.; Elsworth, Y.; Isaak, G. R.; McLeod, C. P.; Miller, B. A.; New, R.; Pintér, B., Studies of the solar mean magnetic field with the Birmingham Solar-Oscillations Network (BiSON), Monthly Notices of the Royal Astronomical Society, Volume 343, Issue 3, pp. 813-818.
Abstract:
This work is concerned with the development of a numerical scheme capable of producing accurate simulations of sound propagation in the presence of a mean flow field. The method is based on the concept of variable decomposition, which leads to two separate sets of equations. These equations are the linearised Euler equations and the Reynolds-averaged Navier–Stokes equations. This paper concentrates on the development of numerical schemes for the linearised Euler equations that leads to a computational aeroacoustics (CAA) code. The resulting CAA code is a non-diffusive, time- and space-staggered finite volume code for the acoustic perturbation, and it is validated against analytic results for pure 1D sound propagation and 2D benchmark problems involving sound scattering from a cylindrical obstacle. Predictions are also given for the case of prescribed source sound propagation in a laminar boundary layer as an illustration of the effects of mean convection. Copyright © 1999 John Wiley & Sons, Ltd.
Abstract:
Two counterpropagating cool and equally dense electron beams are modeled with particle-in-cell simulations. The electron beam filamentation instability is examined in one spatial dimension, which is an approximation for a quasiplanar filament boundary. It is confirmed that the force on the electrons imposed by the electrostatic field, which develops during the nonlinear stage of the instability, oscillates around a mean value that equals the magnetic pressure gradient force. The forces acting on the electrons due to the electrostatic and the magnetic field have a similar strength. The electrostatic field reduces the confining force close to the stable equilibrium of each filament and increases it farther away, limiting the peak density. The confining time-averaged total potential permits an overlap of current filaments with an opposite flow direction.
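Schematically, the balance reported above can be stated as follows (standard notation, not necessarily the paper's exact symbols): the time-averaged electrostatic force per electron oscillates around the magnetic pressure gradient force,

```latex
\bigl\langle -e\,E_x \bigr\rangle_t \;\approx\;
-\,\frac{1}{n_e}\,\frac{\partial}{\partial x}\!\left(\frac{B^2}{2\mu_0}\right),
```

with E_x the electrostatic field across the quasiplanar filament boundary, B the filamentation-generated magnetic field, and n_e the electron density.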