86 results for Averaging


Relevance: 10.00%

Abstract:

Brain activity can be measured non-invasively with functional imaging techniques. Each pixel in such an image represents a neural mass of about 10^5 to 10^7 neurons. Mean field models (MFMs) approximate their activity by averaging out neural variability while retaining salient underlying features, like neurotransmitter kinetics. However, MFMs incorporating the regional variability, realistic geometry and connectivity of cortex have so far appeared intractable. This lack of biological realism has led to a focus on gross temporal features of the EEG. We address these impediments and showcase a "proof of principle" forward prediction of co-registered EEG/fMRI for a full-size human cortex in a realistic head model with anatomical connectivity (see figure 1). MFMs usually assume homogeneous neural masses, isotropic long-range connectivity and simplistic signal expression to allow rapid computation with partial differential equations. But these approximations are insufficient, in particular for the high spatial resolution obtained with fMRI, since different cortical areas vary in their architectonic and dynamical properties, have complex connectivity, and can contribute non-trivially to the measured signal. Our code instead supports local variation of model parameters and freely chosen connectivity for many thousands of triangulation nodes spanning a cortical surface extracted from structural MRI. This allows the introduction of realistic anatomical and physiological parameters for cortical areas and their connectivity, including both intra- and inter-area connections. Proper cortical folding and conduction through a realistic head model are then added to obtain accurate signal expression for comparison to experimental data. To showcase the synergy of these computational developments, we simultaneously predict EEG and fMRI BOLD responses by adding an established model for neurovascular coupling and convolving with "Balloon-Windkessel" hemodynamics. We also incorporate regional connectivity extracted from the CoCoMac database [1]. Importantly, these extensions can easily be adapted in the light of future insights and data. Furthermore, while our own simulation is based on one specific MFM [2], the computational framework is general and can be applied to models favored by the user. Finally, we provide a brief outlook on improving the integration of multi-modal imaging data through iterative fits of a single underlying MFM in this realistic simulation framework.
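The final step mentioned above, convolving neural activity with "Balloon-Windkessel" hemodynamics, can be illustrated with a minimal sketch. The parameter values follow the commonly cited Friston et al. (2000) defaults; this is a generic illustration under those assumptions, not the authors' code.

```python
# Minimal Balloon-Windkessel sketch mapping a neural activity time
# series z(t) to a BOLD signal. Parameter values are the widely used
# Friston et al. (2000) defaults (an assumption, not this paper's).
import numpy as np

def balloon_windkessel(z, dt=0.01, kappa=0.65, gamma=0.41,
                       tau=0.98, alpha=0.32, E0=0.34, V0=0.02):
    s, f, v, q = 0.0, 1.0, 1.0, 1.0   # signal, inflow, volume, dHb
    k1, k2, k3 = 7.0 * E0, 2.0, 2.0 * E0 - 0.2
    bold = np.empty(len(z))
    for i, zi in enumerate(z):
        E = 1.0 - (1.0 - E0) ** (1.0 / f)          # oxygen extraction
        ds = zi - kappa * s - gamma * (f - 1.0)    # vasodilatory signal
        df = s                                     # blood inflow
        dv = (f - v ** (1.0 / alpha)) / tau        # balloon volume
        dq = (f * E / E0 - v ** (1.0 / alpha) * q / v) / tau
        s, f, v, q = s + dt * ds, f + dt * df, v + dt * dv, q + dt * dq
        bold[i] = V0 * (k1 * (1 - q) + k2 * (1 - q / v) + k3 * (1 - v))
    return bold

# Example: BOLD response to a brief burst of neural activity.
z = np.zeros(6000); z[100:200] = 1.0   # 1 s stimulus at dt = 0.01 s
print(balloon_windkessel(z)[::500])
```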

Relevance: 10.00%

Abstract:

A great explanatory gap lies between the molecular pharmacology of psychoactive agents and the neurophysiological changes they induce, as recorded by neuroimaging modalities. Causally relating the cellular actions of psychoactive compounds to their influence on population activity is experimentally challenging. Recent developments in the dynamical modelling of neural tissue have attempted to span this explanatory gap between microscopic targets and their macroscopic neurophysiological effects via a range of biologically plausible dynamical models of cortical tissue. Such theoretical models allow exploration of neural dynamics, in particular their modification by drug action. The ability to theoretically bridge scales is due to a biologically plausible averaging of cortical tissue properties. In the resulting macroscopic neural field, individual neurons need not be explicitly represented (as in neural networks). The following paper aims to provide a non-technical introduction to the mean field population modelling of drug action and its recent successes in modelling anaesthesia.

Relevance: 10.00%

Abstract:

Disturbances of arbitrary amplitude are superposed on a basic flow which is assumed to be steady and either (a) two-dimensional, homogeneous, and incompressible (rotating or non-rotating) or (b) stably stratified and quasi-geostrophic. Flow over shallow topography is allowed in either case. The basic flow, as well as the disturbance, is assumed to be subject neither to external forcing nor to dissipative processes like viscosity. An exact, local ‘wave-activity conservation theorem’ is derived in which the density A and flux F are second-order ‘wave properties’ or ‘disturbance properties’, meaning that they are O(a²) in magnitude as disturbance amplitude a → 0, and that they are evaluable correct to O(a²) from linear theory, to O(a³) from second-order theory, and so on to higher orders in a. For a disturbance in the form of a single, slowly varying, non-stationary Rossby wavetrain, F̄/Ā reduces approximately to the Rossby-wave group velocity, where the overbar denotes an appropriate averaging operator. F and A have the formal appearance of Eulerian quantities, but generally involve a multivalued function the correct branch of which requires a certain amount of Lagrangian information for its determination. It is shown that, in a certain sense, the construction of conservable, quasi-Eulerian wave properties like A is unique and that the multivaluedness is inescapable in general. The connection with the concepts of pseudoenergy (quasi-energy), pseudomomentum (quasi-momentum), and ‘Eliassen-Palm wave activity’ is noted. The relationship of this and similar conservation theorems to dynamical fundamentals and to Arnol'd's nonlinear stability theorems is discussed in the light of recent advances in Hamiltonian dynamics. These show where such conservation theorems come from and how to construct them in other cases. An elementary proof of the Hamiltonian structure of two-dimensional Eulerian vortex dynamics is put on record, with explicit attention to the boundary conditions. The connection between Arnol'd's second stability theorem and the suppression of shear and self-tuning resonant instabilities by boundary constraints is discussed, and a finite-amplitude counterpart to Rayleigh's inflection-point theorem is noted.
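The structure of such a theorem can be summarised schematically. The notation below is a generic rendering of a local conservation law and of the group-velocity property stated above, not the paper's exact expressions.

```latex
% Schematic form of a local wave-activity conservation theorem: the
% density A and flux F are O(a^2) wave properties satisfying, exactly,
\[
  \frac{\partial A}{\partial t} + \nabla \cdot \mathbf{F} = 0 ,
\]
% while for a single, slowly varying Rossby wavetrain the averaged
% flux and density are related through the group velocity:
\[
  \overline{\mathbf{F}} \;\approx\; \mathbf{c}_g \, \overline{A}.
\]
```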

Relevance: 10.00%

Abstract:

Optimal estimation (OE) is applied as a technique for retrieving sea surface temperature (SST) from thermal imagery obtained by the Spinning Enhanced Visible and Infra-Red Imager (SEVIRI) on Meteosat 9. OE requires simulation of observations as part of the retrieval process, and this is done here using numerical weather prediction fields and a fast radiative transfer model. Bias correction of the simulated brightness temperatures (BTs) is found to be a necessary step before retrieval, and is achieved by filtered averaging of simulations minus observations over a time period of 20 days and a spatial scale of 2.5° in latitude and longitude. Throughout this study, BT observations are clear-sky averages over cells of size 0.5° in latitude and longitude. Results for the OE SST are compared to results using a traditional non-linear retrieval algorithm (“NLSST”), both validated against a set of 30,108 night-time matches with drifting buoy observations. For the OE SST the mean difference with respect to drifter SSTs is −0.01 K and the standard deviation is 0.47 K, compared to −0.38 K and 0.70 K respectively for the NLSST algorithm. Perhaps more importantly, systematic biases in NLSST with respect to geographical location, atmospheric water vapour and satellite zenith angle are greatly reduced for the OE SST. However, the OE SST is calculated to have a lower sensitivity of retrieved SST to true SST variations than the NLSST. This feature would be a disadvantage for observing SST fronts and diurnal variability, and raises questions as to how best to exploit OE techniques at SEVIRI's full spatial resolution.
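The core OE update can be sketched compactly in the standard Rodgers form. The state vector, Jacobian and covariances below are toy assumptions, not SEVIRI values; the diagonal of the averaging kernel is the "sensitivity of retrieved SST to true SST variations" discussed above.

```python
# Sketch of a single linear optimal-estimation retrieval step
# (Rodgers-style maximum a posteriori update). All numbers are
# illustrative assumptions, not the paper's configuration.
import numpy as np

def oe_retrieve(y, y_sim, K, x_a, S_a, S_e):
    """One OE update: x_hat = x_a + G (y - F(x_a))."""
    G = S_a @ K.T @ np.linalg.inv(K @ S_a @ K.T + S_e)  # gain matrix
    x_hat = x_a + G @ (y - y_sim)
    A = G @ K          # averaging kernel: sensitivity of x_hat to truth
    return x_hat, A

# Toy example: retrieve [SST, total column water vapour] from three
# brightness-temperature (BT) channels.
x_a = np.array([290.0, 30.0])            # prior state (K, kg m-2)
S_a = np.diag([2.0**2, 10.0**2])         # prior covariance
K = np.array([[0.9, -0.05],              # Jacobian dBT/dx (assumed)
              [0.8, -0.10],
              [0.5, -0.20]])
S_e = np.diag([0.1**2] * 3)              # obs + forward-model error
y_sim = K @ x_a                          # simulated BTs (toy forward model)
y = y_sim + np.array([0.3, 0.2, -0.1])   # "observed" BTs
x_hat, A = oe_retrieve(y, y_sim, K, x_a, S_a, S_e)
print(x_hat, A[0, 0])   # A[0,0]: sensitivity of retrieved SST to true SST
```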

Relevance: 10.00%

Abstract:

Atmospheric profiles of cosmic rays and radioactivity can be obtained using adapted meteorological radiosondes, for which Geiger tubes remain widely used detectors. Simultaneous triggering of two tubes provides an indication of energetic events. However, since only small-volume detectors can be carried, the event rate is low, and because of the rapid balloon ascent this cannot be circumvented by long averaging periods. To derive count rates at low altitudes, a microcontroller is used to determine the inter-event time. This yields estimates of the coincidence rate below 5 km, where the coincidence rate is too small to determine solely by event counting.
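The inter-event-time idea can be sketched as follows: for Poisson-distributed events the maximum-likelihood rate from n intervals is n divided by their sum, so a handful of events suffices. The numbers below are illustrative, not the flight firmware.

```python
# Sketch of rate estimation from inter-event times, as an alternative
# to counting events in a fixed window. For Poisson events the
# maximum-likelihood rate from intervals t_1..t_n is n / sum(t_i).
import numpy as np

rng = np.random.default_rng(0)
true_rate = 0.02                     # coincidences per second (rare)
intervals = rng.exponential(1.0 / true_rate, size=5)   # few events seen

rate_hat = len(intervals) / intervals.sum()  # events / elapsed time
print(f"estimated rate: {rate_hat:.4f} s^-1 from {len(intervals)} intervals")
# Even a few intervals give a usable estimate, whereas a fixed counting
# window would typically contain one event or none during a rapid ascent.
```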

Relevance: 10.00%

Abstract:

Historic geomagnetic activity observations have been used to reveal centennial variations in the open solar flux and the near-Earth heliospheric conditions (the interplanetary magnetic field and the solar wind speed). The various methods are in very good agreement for the past 135 years when there were sufficient reliable magnetic observatories in operation to eliminate problems due to site-specific errors and calibration drifts. This review underlines the physical principles that allow these reconstructions to be made, as well as the details of the various algorithms employed and the results obtained. Discussion is included of: the importance of the averaging timescale; the key differences between “range” and “interdiurnal variability” geomagnetic data; the need to distinguish source field sector structure from heliospherically-imposed field structure; the importance of ensuring that regressions used are statistically robust; and uncertainty analysis. The reconstructions are exceedingly useful as they provide calibration between the in-situ spacecraft measurements from the past five decades and the millennial records of heliospheric behaviour deduced from measured abundances of cosmogenic radionuclides found in terrestrial reservoirs. Continuity of open solar flux, using sunspot number to quantify the emergence rate, is the basis of a number of models that have been very successful in reproducing the variation derived from geomagnetic activity. These models allow us to extend the reconstructions back to before the development of the magnetometer and to cover the Maunder minimum. Allied to the radionuclide data, the models are revealing much about how the Sun and heliosphere behaved outside of grand solar maxima and are providing a means of predicting how solar activity is likely to evolve now that the recent grand maximum (that had prevailed throughout the space age) has come to an end.
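The continuity modelling mentioned in the final sentences has the schematic form below; the specific source and loss terms are modelling choices that differ between published implementations.

```latex
% Schematic continuity model for the open solar flux F_S: the sunspot
% number R sets the emergence (source) rate S, and open flux is lost
% at a rate L. The functional forms shown are generic assumptions,
% not the unique choice made in the reviewed literature.
\[
  \frac{\mathrm{d}F_S}{\mathrm{d}t} = S(R) - L(F_S),
\]
% e.g. with S(R) proportional to R and a linear loss L = F_S / \tau .
```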

Relevance: 10.00%

Abstract:

Tests of the new Rossby wave theories that have been developed over the past decade to account for discrepancies between theoretical wave speeds and those observed by satellite altimeters have focused primarily on the surface signature of such waves. It appears, however, that the surface signature of the waves acts only as a rather weak constraint, and that information on the vertical structure of the waves is required to better discriminate between competing theories. Due to the lack of 3-D observations, this paper uses high-resolution model data to construct realistic vertical structures of Rossby waves and compares these to structures predicted by theory. The meridional velocity of a section at 24° S in the Atlantic Ocean is pre-processed using the Radon transform to select the dominant westward signal. Normalized profiles are then constructed using three complementary methods based respectively on: (1) averaging vertical profiles of velocity, (2) diagnosing the amplitude of the Radon transform of the westward propagating signal at different depths, and (3) EOF analysis. These profiles are compared to profiles calculated using four different Rossby wave theories: standard linear theory (SLT), SLT plus mean flow, SLT plus topographic effects, and theory including both mean flow and topographic effects. Our results support the classical theoretical assumption that westward propagating signals have a well-defined vertical modal structure associated with a phase speed independent of depth, in contrast with the conclusions of a recent study using the same model but for different locations in the North Atlantic. The model structures are in general surface-intensified, with a sign reversal at depth in some regions, notably occurring at shallower depths in the East Atlantic. SLT provides a good fit to the model structures in the top 300 m, but grossly overestimates the sign reversal at depth. The addition of mean flow slightly improves the latter issue, but the resulting structure is too surface-intensified. SLT plus topography rectifies the overestimation of the sign reversal, but overestimates the amplitude of the structure for much of the layer above the sign reversal. Combining the effects of mean flow and topography provides the best fit to the mean model profiles, although small errors at the surface and mid-depths are carried over from the individual effects of mean flow and topography respectively. Across the section the best-fitting theory varies between SLT plus topography and topography with mean flow, with, in general, SLT plus topography performing better in the east, where the sign reversal is less pronounced. None of the theories could accurately reproduce the deeper sign reversals in the west. All theories performed badly at the boundaries. The generalization of this method to other latitudes, oceans, models and baroclinic modes would provide greater insight into the variability of the ocean, while better observational data would allow verification of the model findings.
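The Radon-transform pre-processing can be sketched in physical coordinates: project a longitude-time (Hovmöller) section along lines of constant x − ct and pick the speed that maximises the projected variance. The synthetic wave and grid below are illustrative assumptions, not the model data.

```python
# Sketch of the Radon-transform step: a slant stack over candidate
# phase speeds of a longitude-time section; the dominant westward
# signal is the speed maximising the projected variance.
import numpy as np

nx, nt = 128, 200
L = 4.0e6                                    # domain width (m), assumed
x = np.linspace(0.0, L, nx, endpoint=False)
t = np.linspace(0.0, 2.0e7, nt)              # seconds
c_true = -0.05                               # westward phase speed (m/s)
hov = np.cos(2 * np.pi * (x[None, :] - c_true * t[:, None]) / 1.0e6)

def projected_variance(hov, x, t, c):
    """Average the section along lines x - c*t = const, then take the
    variance of the mean profile (large when c matches the wave)."""
    aligned = np.array([np.interp(x + c * ti, x, row, period=L)
                        for ti, row in zip(t, hov)])
    return aligned.mean(axis=0).var()

speeds = np.linspace(-0.15, 0.15, 121)
c_hat = speeds[np.argmax([projected_variance(hov, x, t, c) for c in speeds])]
print(f"dominant phase speed: {c_hat:.3f} m/s (true {c_true} m/s)")
```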

Relevance: 10.00%

Abstract:

Atmospheric aerosols are now actively studied, in particular because of their radiative and climate impacts. Estimates of the direct aerosol radiative perturbation, caused by extinction of incident solar radiation, usually rely on radiative transfer codes and involve simplifying hypotheses. This paper addresses two approximations which are widely used for the sake of simplicity and to limit the computational cost of the calculations. Firstly, it is shown that using a Lambertian albedo instead of the more rigorous bidirectional reflectance distribution function (BRDF) to model the ocean surface radiative properties leads to large relative errors in the instantaneous aerosol radiative perturbation. When averaging over the day, these errors cancel out to acceptable levels of less than 3% (except in the northern hemisphere winter). The second aim of this study is to address aerosol non-sphericity effects. Comparing an experimental phase function with an equivalent Mie-calculated phase function, we found acceptable relative errors if the aerosol radiative perturbation calculated for a given optical thickness is daily averaged. However, retrieval of the optical thickness of non-spherical aerosols assuming spherical particles can lead to significant errors, owing to significant differences between the spherical and non-spherical phase functions. Discrepancies in aerosol radiative perturbation between the spherical and non-spherical cases are sometimes reduced and sometimes enhanced if the aerosol optical thickness for the spherical case is adjusted to fit the simulated radiance of the non-spherical case.
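The daily averaging underlying the error cancellation can be sketched as follows; the instantaneous perturbation function below is a stand-in for the radiative-transfer calculation, and the latitude and declination are arbitrary.

```python
# Sketch of daily averaging of an instantaneous radiative perturbation:
# sample the cosine of the solar zenith angle over 24 h and average.
import numpy as np

def daily_mean(perturbation, lat_deg=20.0, decl_deg=10.0, n=288):
    """24 h mean of a quantity given as a function of mu0 = cos(SZA),
    taken as zero when the sun is below the horizon."""
    lat, dec = np.deg2rad(lat_deg), np.deg2rad(decl_deg)
    h = np.linspace(-np.pi, np.pi, n)          # hour angle over 24 h
    mu0 = np.sin(lat) * np.sin(dec) + np.cos(lat) * np.cos(dec) * np.cos(h)
    mu0 = np.clip(mu0, 0.0, None)              # night: no shortwave
    return perturbation(mu0).mean()

# Stand-in instantaneous perturbation (W m-2), not a real RT code:
dF = lambda mu0: -20.0 * mu0
print(f"daily-mean perturbation: {daily_mean(dF):.2f} W m-2")
```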

Relevance: 10.00%

Abstract:

This paper presents measurements of the vertical distribution of aerosol extinction coefficient over West Africa during the Dust and Biomass-burning Aerosol Experiment (DABEX)/African Monsoon Multidisciplinary Analysis dry season Special Observing Period Zero (AMMA-SOP0). In situ aircraft measurements from the UK FAAM aircraft have been compared with two ground-based lidars (POLIS and ARM MPL) and an airborne lidar on an ultralight aircraft. In general, mineral dust was observed at low altitudes (up to 2 km), and a mixture of biomass burning aerosol and dust was observed at altitudes of 2–5 km. The study exposes difficulties associated with spatial and temporal variability when intercomparing aircraft and ground measurements. Averaging over many profiles provided a better means of assessing consistent errors and biases associated with in situ sampling instruments and retrievals of lidar ratios. Shortwave radiative transfer calculations and a 3-year simulation with the HadGEM2-A climate model show that the radiative effect of biomass burning aerosol was somewhat sensitive to the vertical distribution of aerosol. In particular, when the observed low-level dust layer was included in the model, the absorption of solar radiation by the biomass burning aerosols increased by 10%. We conclude that this absorption enhancement was caused by the dust reflecting solar radiation up into the biomass burning aerosol layer. This result illustrates that the radiative forcing of anthropogenic absorbing aerosol can be sensitive to the presence of natural aerosol species.
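The multi-profile averaging can be sketched as interpolation onto a common altitude grid followed by a mean; the toy profiles below mimic the low-level dust and elevated biomass-burning layers described, and are not DABEX data.

```python
# Sketch of profile averaging: interpolate individual extinction
# profiles onto a common altitude grid and average, suppressing the
# sampling variability that confounds one-to-one comparisons.
import numpy as np

z_grid = np.arange(0.0, 6000.0, 100.0)      # common altitude grid (m)

def mean_profile(profiles):
    """profiles: list of (altitude, extinction) arrays, one per run."""
    interped = [np.interp(z_grid, z, ext, left=np.nan, right=np.nan)
                for z, ext in profiles]
    return np.nanmean(np.array(interped), axis=0)

# Two toy profiles: dust below ~2 km, biomass-burning layer at 2-5 km.
z1 = np.linspace(0, 5500, 40)
z2 = np.linspace(200, 6000, 55)
ext = lambda z: (0.15 * np.exp(-z / 800.0)
                 + 0.08 * np.exp(-((z - 3500.0) / 900.0) ** 2))
profiles = [(z1, ext(z1) * 1.1), (z2, ext(z2) * 0.9)]
print(mean_profile(profiles)[:5])
```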

Relevance: 10.00%

Abstract:

Urbanization-related alterations to the surface energy balance impact urban warming (‘heat islands’), the growth of the boundary layer, and many other biophysical processes. Traditionally, in situ heat flux measures have been used to quantify such processes, but these typically represent only a small local-scale area within the heterogeneous urban environment. For this reason, remote sensing approaches are very attractive for elucidating more spatially representative information. Here we use hyperspectral imagery from a new airborne sensor, the Operative Modular Imaging Spectrometer (OMIS), along with a survey map and meteorological data, to derive the land cover information and surface parameters required to map spatial variations in turbulent sensible heat flux (QH). The results from two spatially-explicit flux retrieval methods which use contrasting approaches and, to a large degree, different input data are compared for a central urban area of Shanghai, China: (1) the Local-scale Urban Meteorological Parameterization Scheme (LUMPS) and (2) an Aerodynamic Resistance Method (ARM). Sensible heat fluxes are determined at the full 6 m spatial resolution of the OMIS sensor, and at lower resolutions via pixel aggregation and spatial averaging. At the 6 m spatial resolution, the sensible heat flux of rooftop-dominated pixels exceeds that of roads, water and vegetated areas, with values peaking at ∼350 W m−2, whilst the storage heat flux is greatest for road-dominated pixels (peaking at around 420 W m−2). We investigate the use of both OMIS-derived land surface temperatures made using a Temperature–Emissivity Separation (TES) approach, and land surface temperatures estimated from air temperature measures. Sensible heat flux differences from the two approaches over the entire 2 × 2 km study area are less than 30 W m−2, suggesting that methods employing either strategy may be practical when operated using low spatial resolution (e.g. 1 km) data. Due to the differing methodologies, direct comparisons between results obtained with the LUMPS and ARM methods are most sensibly made at reduced spatial scales. At 30 m spatial resolution, both approaches produce similar results, with the smallest difference being less than 15 W m−2 in mean QH averaged over the entire study area. This is encouraging given the differing architecture and data requirements of the LUMPS and ARM methods. Furthermore, in terms of mean study-area QH, the results obtained by averaging the original 6 m spatial resolution LUMPS-derived QH values to 30 and 90 m spatial resolution are within ∼5 W m−2 of those derived from averaging the original surface parameter maps prior to input into LUMPS, suggesting that the use of much lower spatial resolution spaceborne imagery, for example from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), is likely to be a practical solution for heat flux determination in urban areas.
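The pixel aggregation used to move from 6 m to 30 m and 90 m resolution amounts to block averaging of the derived flux maps, as in the sketch below with a synthetic QH field (the order of operations contrasted in the text is aggregating the fluxes versus averaging the surface parameters first).

```python
# Sketch of pixel aggregation: block-average a 6 m Q_H map to 30 m
# (5x5 blocks) and then to 90 m. The flux field is synthetic.
import numpy as np

def block_average(field, factor):
    """Average non-overlapping factor x factor blocks of a 2-D field
    (dimensions must be divisible by factor)."""
    ny, nx = field.shape
    return field.reshape(ny // factor, factor,
                         nx // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(1)
qh_6m = 150.0 + 80.0 * rng.random((300, 300))  # toy Q_H at 6 m (W m-2)
qh_30m = block_average(qh_6m, 5)               # 6 m  -> 30 m
qh_90m = block_average(qh_30m, 3)              # 30 m -> 90 m
print(qh_6m.mean(), qh_30m.mean(), qh_90m.mean())  # means are preserved
```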

Relevance: 10.00%

Abstract:

Vertical divergence of CO2 fluxes is observed over two Midwestern AmeriFlux forest sites. The differences in ensemble averaged hourly CO2 fluxes measured at two heights above canopy are relatively small (0.2–0.5 μmol m−2 s−1), but they are the major contributors to differences (76–256 g C m−2 or 41.8–50.6%) in estimated annual net ecosystem exchange (NEE) in 2001. A friction velocity criterion is used in these estimates but mean flow advection is not accounted for. This study examines the effects of coordinate rotation, averaging time period, sampling frequency and co-spectral correction on CO2 fluxes measured at a single height, and on vertical flux differences measured between two heights. Both the offset in measured vertical velocity and the downflow/upflow caused by supporting tower structures in upwind directions lead to systematic over- or under-estimates of fluxes measured at a single height. An offset of 1 cm s−1 and an upflow/downflow of 1° lead to 1% and 5.6% differences in momentum fluxes and nighttime sensible heat and CO2 fluxes, respectively, but only 0.5% and 2.8% differences in daytime sensible heat and CO2 fluxes. The sign and magnitude of both offset and upflow/downflow angle vary between sonic anemometers at two measurement heights. This introduces a systematic and large bias in vertical flux differences if these effects are not corrected in the coordinate rotation. A 1 h averaging time period is shown to be appropriate for the two sites. In the daytime, the absolute magnitudes of co-spectra decrease with height in the natural frequencies of 0.02–0.1 Hz but increase in the lower frequencies (<0.01 Hz). Thus, air motions in these two frequency ranges counteract each other in determining vertical flux differences, whose magnitude and sign vary with averaging time period. At night, co-spectral densities of CO2 are more positive at the higher levels of both sites in the frequency range of 0.03–0.4 Hz and this vertical increase is also shown at most frequencies lower than 0.03 Hz. Differences in co-spectral corrections at the two heights lead to a positive shift in vertical CO2 flux differences throughout the day at both sites. At night, the vertical CO2 flux differences between two measurement heights are 20–30% and 40–60% of co-spectral corrected CO2 fluxes measured at the lower levels of the two sites, respectively. Vertical differences of CO2 flux are relatively small in the daytime. Vertical differences in estimated mean vertical advection of CO2 between the two measurement heights generally do not improve the closure of the 1D (vertical) CO2 budget in the air layer between the two measurement heights. This may imply the significance of horizontal advection. However, a reliable assessment of mean advection contributions in annual NEE estimate at these two AmeriFlux sites is currently an unsolved problem.
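A standard way to handle the vertical-velocity offsets and tilt effects described above is the double coordinate rotation sketched below. This is the generic two-step rotation used in eddy-covariance processing, not necessarily the exact procedure applied at these sites.

```python
# Sketch of the double coordinate rotation for sonic anemometer data:
# rotate so the mean crosswind and mean vertical velocity vanish.
import numpy as np

def double_rotation(u, v, w):
    """Rotate wind components so that mean(v) = mean(w) = 0."""
    theta = np.arctan2(np.mean(v), np.mean(u))   # first: into mean wind
    u1 = u * np.cos(theta) + v * np.sin(theta)
    v1 = -u * np.sin(theta) + v * np.cos(theta)
    phi = np.arctan2(np.mean(w), np.mean(u1))    # second: level mean w
    u2 = u1 * np.cos(phi) + w * np.sin(phi)
    w2 = -u1 * np.sin(phi) + w * np.cos(phi)
    return u2, v1, w2

rng = np.random.default_rng(2)
n = 36000                                # 1 h of 10 Hz data
u = 3.0 + rng.normal(0, 0.6, n)
v = 0.5 + rng.normal(0, 0.5, n)
w = 0.05 + rng.normal(0, 0.3, n)         # offset/tilt-like mean in w
u2, v1, w2 = double_rotation(u, v, w)
print(np.mean(v1), np.mean(w2))          # ~0 after rotation
```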

Relevance: 10.00%

Abstract:

In polar oceans, seawater freezes to form a layer of sea ice of several metres thickness that can cover up to 8% of the Earth’s surface. The modelled sea ice cover state is described by thickness and orientational distribution of interlocking, anisotropic diamond-shaped ice floes delineated by slip lines, as supported by observation. The purpose of this study is to develop a set of equations describing the mean-field sea ice stresses that result from interactions between the ice floes and the evolution of the ice floe orientation, which are simple enough to be incorporated into a climate model. The sea ice stress caused by a deformation of the ice cover is determined by employing an existing kinematic model of ice floe motion, which enables us to calculate the forces acting on the ice floes due to crushing into and sliding past each other, and then by averaging over all possible floe orientations. We describe the orientational floe distribution with a structure tensor and propose an evolution equation for this tensor that accounts for rigid body rotation of the floes, their apparent re-orientation due to new slip line formation, and change of shape of the floes due to freezing and melting. The form of the evolution equation proposed is motivated by laboratory observations of sea ice failure under controlled conditions. Finally, we present simulations of the evolution of sea ice stress and floe orientation for several imposed flow types. Although evidence to test the simulations against is lacking, the simulations seem physically reasonable.
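The structure-tensor description can be summarised schematically as below. The definition is the standard second moment of the orientational distribution; the evolution equation shown is a generic rotation-plus-source form, not the paper's specific proposal.

```latex
% Schematic structure tensor for the orientational floe distribution
% rho(n), with n a unit vector normal to a slip line:
\[
  \mathbf{A} = \int \rho(\mathbf{n})\, \mathbf{n}\otimes\mathbf{n}\,
  \mathrm{d}\mathbf{n}, \qquad \operatorname{tr}\mathbf{A} = 1,
\]
% and a generic evolution equation combining rigid-body rotation with
% the flow spin tensor W plus re-orientation and freeze/melt sources:
\[
  \frac{\mathrm{D}\mathbf{A}}{\mathrm{D}t}
  = \mathbf{W}\mathbf{A} - \mathbf{A}\mathbf{W}
  + \text{(re-orientation and freezing/melting terms)}.
\]
```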

Relevance: 10.00%

Abstract:

We consider forecasting using a combination, when no model coincides with a non-constant data generation process (DGP). Practical experience suggests that combining forecasts adds value, and can even dominate the best individual device. We show why this can occur when forecasting models are differentially mis-specified, and is likely to occur when the DGP is subject to location shifts. Moreover, averaging may then dominate over estimated weights in the combination. Finally, it cannot be proved that only non-encompassed devices should be retained in the combination. Empirical and Monte Carlo illustrations confirm the analysis.
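The two combination schemes being compared, equal-weight averaging versus estimated weights, can be sketched as follows. The location-shifting DGP is a toy stand-in; which scheme dominates depends on the mis-specification and the shifts, which is the point of the analysis above.

```python
# Sketch contrasting a simple forecast average with OLS-estimated
# combination weights, under a DGP with a location shift mid-sample.
import numpy as np

rng = np.random.default_rng(3)
n = 400
y = np.where(np.arange(n) < 200, 0.0, 2.0) + rng.normal(0, 1, n)
f1 = y + rng.normal(0.3, 1.0, n)     # two differentially biased,
f2 = y + rng.normal(-0.4, 1.2, n)    # mis-specified forecasts

avg = 0.5 * (f1 + f2)                # equal-weight combination

# Weights estimated by OLS on the first half, used out of sample:
X = np.column_stack([np.ones(200), f1[:200], f2[:200]])
beta, *_ = np.linalg.lstsq(X, y[:200], rcond=None)
fitted = beta[0] + beta[1] * f1[200:] + beta[2] * f2[200:]

mse = lambda e: np.mean(e ** 2)
print("equal-weight MSE:", mse(y[200:] - avg[200:]))
print("OLS-weight   MSE:", mse(y[200:] - fitted))
```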

Relevance: 10.00%

Abstract:

The political economy literature on agriculture emphasizes influence over political outcomes via lobbying conduits in general, political action committee contributions in particular, and the pervasive view that political preferences with respect to agricultural issues are inherently geographic. In this context, ‘interdependence’ in Congressional vote behaviour manifests itself in two dimensions: one is the intensity with which neighboring vote propensities influence one another, and the second is the geographic extent of voter influence. We estimate these facets of dependence with data on a Congressional vote on the 2001 Farm Bill, using routine Markov chain Monte Carlo procedures and, in particular, Bayesian model averaging. In so doing, we develop a novel procedure to examine both the reliability and the consequences of different model representations for measuring both the ‘scale’ and the ‘scope’ of spatial (geographic) co-relations in voting behaviour.
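Bayesian model averaging can be sketched with BIC-approximated posterior model probabilities. The candidate models and data below are toy stand-ins for the paper's competing spatial specifications, which were handled with MCMC rather than this closed-form shortcut.

```python
# Sketch of Bayesian model averaging: weight candidate models by
# posterior probabilities approximated via BIC. Data are synthetic.
import numpy as np

rng = np.random.default_rng(4)
n = 200
x = rng.normal(size=n)
y = 1.0 + 0.8 * x + rng.normal(0, 1, n)

def bic(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + X.shape[1] * np.log(n)

models = {
    "intercept only": np.ones((n, 1)),
    "intercept + x":  np.column_stack([np.ones(n), x]),
    "quadratic":      np.column_stack([np.ones(n), x, x ** 2]),
}
b = np.array([bic(X, y) for X in models.values()])

# Posterior model probabilities via the BIC approximation:
w = np.exp(-0.5 * (b - b.min()))
w /= w.sum()
for name, wi in zip(models, w):
    print(f"{name:15s} posterior weight {wi:.3f}")
```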

Relevance: 10.00%

Abstract:

Earth system models (ESMs) are increasing in complexity by incorporating more processes than their predecessors, making them potentially important tools for studying the evolution of climate and associated biogeochemical cycles. However, their coupled behaviour has only recently been examined in any detail, and has yielded a very wide range of outcomes. For example, coupled climate–carbon cycle models that represent land-use change simulate total land carbon stores at 2100 that vary by as much as 600 Pg C, given the same emissions scenario. This large uncertainty is associated with differences in how key processes are simulated in different models, and illustrates the necessity of determining which models are most realistic using rigorous methods of model evaluation. Here we assess the state-of-the-art in evaluation of ESMs, with a particular emphasis on the simulation of the carbon cycle and associated biospheric processes. We examine some of the new advances and remaining uncertainties relating to (i) modern and palaeodata and (ii) metrics for evaluation. We note that the practice of averaging results from many models is unreliable and no substitute for proper evaluation of individual models. We discuss a range of strategies, such as the inclusion of pre-calibration, combined process- and system-level evaluation, and the use of emergent constraints, that can contribute to the development of more robust evaluation schemes. An increasingly data-rich environment offers more opportunities for model evaluation, but also presents a challenge. Improved knowledge of data uncertainties is still necessary to move the field of ESM evaluation away from a "beauty contest" towards the development of useful constraints on model outcomes.