76 results for deduced optical model parameters


Relevance: 100.00%

Publisher:

Abstract:

Mean field models (MFMs) of cortical tissue incorporate salient, average features of neural masses in order to model activity at the population level, thereby linking microscopic physiology to macroscopic observations, e.g., with the electroencephalogram (EEG). A common aspect of MFM descriptions is a high-dimensional parameter space capturing the neurobiological attributes deemed relevant to the brain dynamics of interest. We study the physiological parameter space of an MFM of electrocortical activity and discover robust correlations between physiological attributes of the model cortex and its dynamical features. These correlations are revealed by the study of bifurcation plots, which show that the model responses to changes in inhibition belong to two archetypal categories, or "families". After investigating and characterizing the families in depth, we discuss their essential differences in terms of four important aspects: power responses with respect to the modeled action of anesthetics; reaction to exogenous stimuli such as thalamic input; distributions of model parameters; and oscillatory repertoires when inhibition is enhanced. Furthermore, while the complexity of sustained periodic orbits differs significantly between families, we show how metamorphoses between the families can be brought about by exogenous stimuli. We thus unveil links between measurable physiological attributes of the brain and dynamical patterns that are not accessible by linear methods; they emerge only when the nonlinear structure of parameter space is partitioned according to bifurcation responses. We call this general method "metabifurcation analysis". The partitioning cannot be achieved by investigating only a small number of parameter sets; it is instead the result of an automated bifurcation analysis of a representative sample of 73,454 physiologically admissible parameter sets. Our approach generalizes straightforwardly and is well suited to probing the dynamics of other models with large and complex parameter spaces.
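The idea of classifying parameter sets by their bifurcation response can be illustrated with a toy sketch. The logistic map stands in here for the cortical model (the actual analysis uses continuation of the MFM equations, not a one-dimensional map), and `attractor_size` and `bifurcation_profile` are hypothetical helper names:

```python
import numpy as np

def attractor_size(r, n_transient=500, n_sample=200, x0=0.5, tol=1e-3):
    """Number of distinct long-run states of the logistic map x -> r*x*(1-x),
    a toy stand-in for the 'bifurcation response' of one parameter value."""
    x = x0
    for _ in range(n_transient):          # discard the transient
        x = r * x * (1.0 - x)
    seen = []
    for _ in range(n_sample):             # collect the attractor states
        x = r * x * (1.0 - x)
        if not any(abs(x - s) < tol for s in seen):
            seen.append(x)
    return len(seen)

def bifurcation_profile(rs):
    """Attractor size along a parameter sweep: a 1-D analogue of an automated
    bifurcation plot, one profile per sampled parameter set."""
    return [attractor_size(r) for r in rs]
```

Profiles computed this way over a sample of parameter sets could then be clustered into response "families", which is the spirit of the metabifurcation analysis described above.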


Neurovascular coupling in response to stimulation of the rat barrel cortex was investigated using concurrent multichannel electrophysiology and laser Doppler flowmetry. The data were used to build a linear dynamic model relating neural activity to blood flow. Local field potential time series were subjected to current source density analysis, and the time series of a layer IV sink of the barrel cortex was used as the input to the model. The model output was the time series of the changes in regional cerebral blood flow (CBF). We show that this model can provide an excellent fit to the CBF responses for stimulus durations of up to 16 s. The structure of the model consisted of two coupled components representing vascular dilation and constriction. The complex temporal characteristics of the CBF time series were reproduced by the relatively simple balance of these two components. We show that the impulse response obtained under the 16-s duration stimulation condition generalised to provide a good prediction of the data from the shorter duration stimulation conditions. Furthermore, by optimising three of the nine model parameters, the variability in the data can be well accounted for over a wide range of stimulus conditions. By establishing linearity, classic system analysis methods can be used to generate and explore a range of equivalent model structures (e.g., feed-forward or feedback) to guide the experimental investigation of the control of vascular dilation and constriction following stimulation.
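The linearity established above can be sketched as a convolution model. The kernel below is a purely illustrative difference of a fast "dilatory" exponential and a slower "constrictive" one; the time constants and gains are assumptions, not the fitted parameters of the study:

```python
import numpy as np

def impulse_response(t, tau_d=1.0, tau_c=3.0, a_d=1.0, a_c=0.4):
    """Kernel from two coupled components: a fast dilatory exponential minus
    a slower constrictive one (illustrative constants, not fitted values)."""
    return a_d * np.exp(-t / tau_d) - a_c * np.exp(-t / tau_c)

def predict_cbf(neural_input, dt=0.1, t_max=20.0):
    """Linear prediction: convolve the neural time series with the kernel."""
    t = np.arange(0.0, t_max, dt)
    h = impulse_response(t)
    return np.convolve(neural_input, h)[: len(neural_input)] * dt
```

Because the model is linear, the response to a long stimulus is the superposition of shifted impulse responses, which is exactly why an impulse response fitted at 16 s can generalise to shorter stimuli.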


We present a dynamic causal model that can explain context-dependent changes in neural responses, in the rat barrel cortex, to an electrical whisker stimulation at different frequencies. Neural responses were measured in terms of local field potentials. These were converted into current source density (CSD) data, and the time series of the CSD sink was extracted to provide a time series response train. The model structure consists of three layers (approximating the responses from the brain stem to the thalamus and then the barrel cortex), and the latter two layers contain nonlinearly coupled modules of linear second-order dynamic systems. The interaction of these modules forms a nonlinear regulatory system that determines the temporal structure of the neural response amplitude for the thalamic and cortical layers. The model is based on the measured population dynamics of neurons rather than the dynamics of a single neuron and was evaluated against CSD data from experiments with varying stimulation frequency (1–40 Hz), random pulse trains, and awake and anesthetized animals. The model parameters obtained by optimization for different physiological conditions (anesthetized or awake) were significantly different. Following Friston, Mechelli, Turner, and Price (2000), this work is part of a formal mathematical system currently being developed (Zheng et al., 2005) that links stimulation to the blood oxygen level dependent (BOLD) functional magnetic resonance imaging (fMRI) signal through neural activity and hemodynamic variables. The importance of the model described here is that it can be used to invert the hemodynamic measurements of changes in blood flow to estimate the underlying neural activity.
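A minimal sketch of one of the linear second-order modules from which the layers are built (the natural frequency, damping and gain below are illustrative assumptions, and the nonlinear inter-module coupling is omitted):

```python
import numpy as np

def second_order_response(u, dt, wn=8.0, zeta=0.4, gain=1.0):
    """Semi-implicit Euler simulation of one linear second-order module:
    y'' + 2*zeta*wn*y' + wn^2*y = gain*wn^2*u(t)."""
    y, v = 0.0, 0.0
    out = np.empty(len(u))
    for i, ui in enumerate(u):
        a = gain * wn ** 2 * ui - 2.0 * zeta * wn * v - wn ** 2 * y
        v += dt * a          # update velocity from acceleration
        y += dt * v          # update output from new velocity
        out[i] = y
    return out
```

With underdamping (zeta < 1) a step input produces the overshoot-and-settle shape typical of such modules; nonlinear coupling between several of them shapes the response-amplitude envelope across stimulation frequencies.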


We propose a new sparse model construction method aimed at maximizing a model's generalisation capability for a large class of linear-in-the-parameters models. The coordinate descent optimization algorithm is employed with a modified l1-penalized least squares cost function in order to estimate a single parameter and its regularization parameter simultaneously, based on the leave-one-out mean square error (LOOMSE). Our original contribution is to derive a closed form for the optimal LOOMSE regularization parameter of a single-term model, for which we show that the LOOMSE can be computed analytically without actually splitting the data set, leading to a very simple parameter estimation method. We then integrate the new results within the coordinate descent optimization algorithm to update model parameters one at a time for linear-in-the-parameters models. Consequently, a fully automated procedure is achieved without recourse to a separate validation data set for iterative model evaluation. Illustrative examples are included to demonstrate the effectiveness of the new approaches.
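The key idea, that the leave-one-out error of a linear-in-the-parameters model has a closed form requiring no data splitting, can be sketched for the ridge-regularized case via the classical PRESS residuals (the paper itself works with a modified l1-penalized cost, which this sketch does not reproduce):

```python
import numpy as np

def loomse(X, y, lam=0.0):
    """Leave-one-out MSE for (ridge-regularized) linear-in-the-parameters
    least squares, in closed form via the hat matrix -- no data splitting."""
    n, p = X.shape
    A = X.T @ X + lam * np.eye(p)
    H = X @ np.linalg.solve(A, X.T)      # hat (smoother) matrix
    e = y - H @ y                        # ordinary residuals
    press = e / (1.0 - np.diag(H))       # exact LOO (PRESS) residuals
    return np.mean(press ** 2)
```

For ordinary least squares the PRESS residuals equal the true leave-one-out residuals exactly, so `loomse` matches a brute-force n-fold computation at a fraction of the cost.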


Decadal hindcast simulations of Arctic Ocean sea ice thickness made by a modern dynamic-thermodynamic sea ice model and forced independently by both the ERA-40 and NCEP/NCAR reanalysis data sets are compared for the first time. Using comprehensive data sets of observations made between 1979 and 2001 of sea ice thickness, draft, extent, and speeds, we find that it is possible to tune model parameters to give satisfactory agreement with observed data, thereby highlighting the skill of modern sea ice models, though the parameter values chosen differ according to the model forcing used. We find a consistent decreasing trend in Arctic Ocean sea ice thickness since 1979, and a steady decline in the Eastern Arctic Ocean over the full 40-year period of comparison that accelerated after 1980, but the predictions of Western Arctic Ocean sea ice thickness between 1962 and 1980 differ substantially. The origins of the differing thickness trends and variability were traced not to parameter differences but to differences in the forcing fields applied, and in how they are applied. It is argued that uncertainty, differences and errors in sea ice model forcing sets complicate the use of models to determine the exact causes of the recently reported decline in Arctic sea ice thickness, but help in the determination of robust features if the models are tuned appropriately against observations.


A basic data requirement of a river flood inundation model is a Digital Terrain Model (DTM) of the reach being studied. The scale at which modeling is required determines the accuracy required of the DTM. For modeling floods in urban areas, a high resolution DTM such as that produced by airborne LiDAR (Light Detection And Ranging) is most useful, and large parts of many developed countries have now been mapped using LiDAR. In remoter areas, it is possible to model flooding on a larger scale using a lower resolution DTM, and in the near future the DTM of choice is likely to be that derived from the TanDEM-X Digital Elevation Model (DEM). A variable-resolution global DTM obtained by combining existing high and low resolution data sets would be useful for modeling flood water dynamics globally, at high resolution wherever possible and at lower resolution over larger rivers in remote areas. A further important data resource used in flood modeling is the flood extent, commonly derived from Synthetic Aperture Radar (SAR) images. Flood extents become more useful if they are intersected with the DTM, allowing water level observations (WLOs) at the flood boundary to be estimated at various points along the river reach. To illustrate the utility of such a global DTM, two examples of recent research involving WLOs at opposite ends of the spatial scale are discussed. The first requires high resolution spatial data, and involves the assimilation of WLOs from a real sequence of high resolution SAR images into a flood model to update the model state with observations over time, and to estimate river discharge and model parameters, including river bathymetry and friction. The results indicate the feasibility of such an Earth Observation-based flood forecasting system. The second example is at a larger scale, and uses SAR-derived WLOs to improve the lower-resolution TanDEM-X DEM in the area covered by the flood extents. The resulting reduction in random height error is significant.


SHIMMER (Soil biogeocHemIcal Model for Microbial Ecosystem Response) is a new numerical modelling framework designed to simulate microbial dynamics and biogeochemical cycling during initial ecosystem development in glacier forefield soils. It is, however, also transferable to other extreme ecosystem types (such as desert soils or the surface of glaciers). The rationale for model development arises from decades of empirical observations in glacier forefields, and enables a quantitative, process-focussed approach. Here, we provide a detailed description of SHIMMER, test its performance in two case study forefields, the Damma Glacier (Switzerland) and the Athabasca Glacier (Canada), and analyse its sensitivity to identify the most sensitive and least constrained model parameters. Results show that the accumulation of microbial biomass is highly dependent on variation in microbial growth and death rate constants, Q10 values, the active fraction of microbial biomass and the reactivity of organic matter. The model correctly predicts the rapid accumulation of microbial biomass observed during the initial stages of succession in the forefields of both case study systems. Primary production is responsible for the initial build-up of labile substrate that subsequently supports heterotrophic growth. However, allochthonous contributions of organic matter, and nitrogen fixation, are important in sustaining this productivity. The development and application of SHIMMER also highlights aspects of these systems that require further empirical research: quantifying nutrient budgets and biogeochemical rates, exploring seasonality, and characterising microbial growth and cell death. This will lead to increased understanding of how glacier forefields contribute to global biogeochemical cycling and climate under future ice retreat.
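The role of the growth and death rate constants and the Q10 values can be illustrated with a minimal biomass balance (the rate constants below are illustrative assumptions, not SHIMMER's calibrated values):

```python
def q10_rate(k_ref, T, T_ref=10.0, Q10=2.0):
    """Temperature-adjust a rate constant with the Q10 rule: the rate
    multiplies by Q10 for every 10 degree rise above T_ref."""
    return k_ref * Q10 ** ((T - T_ref) / 10.0)

def simulate_biomass(B0, T, mu_ref=0.1, death=0.02, days=100, dt=1.0):
    """Euler integration of dB/dt = (mu(T) - death) * B, a minimal
    growth/death balance for the microbial biomass pool."""
    B, traj = B0, [B0]
    for _ in range(int(days / dt)):
        B += dt * (q10_rate(mu_ref, T) - death) * B
        traj.append(B)
    return traj
```

Even in this stripped-down balance, small changes to the growth constant, the death constant or Q10 change the trajectory exponentially over time, which is consistent with the sensitivity result reported above.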


A traditional method of validating the performance of a flood model, when remotely sensed data of the flood extent are available, is to compare the predicted flood extent to that observed. The performance measure employed often uses areal pattern-matching to assess the degree to which the two extents overlap. Recently, remote sensing of flood extents using synthetic aperture radar (SAR) and airborne scanning laser altimetry (LiDAR) has made the synoptic measurement of water surface elevations along flood waterlines more straightforward, and this has emphasised the possibility of using alternative performance measures based on height. This paper considers the advantages that can accrue from using a performance measure based on waterline elevations rather than one based on areal patterns of wet and dry pixels. The two measures were compared for their ability to estimate flood inundation uncertainty maps from a set of model runs carried out to span the acceptable model parameter range in a GLUE-based analysis. A 1 in 5-year flood on the Thames in 1992 was used as a test event. As is typical for UK floods, only a single SAR image of observed flood extent was available for model calibration and validation. A simple implementation of a two-dimensional flood model (LISFLOOD-FP) was used to generate model flood extents for comparison with that observed. The performance measure based on height differences of corresponding points along the observed and modelled waterlines was found to be significantly more sensitive to the channel friction parameter than the measure based on areal patterns of flood extent. The former was able to restrict the parameter range of acceptable model runs and hence reduce the number of runs necessary to generate an inundation uncertainty map. As a result, there was less uncertainty in the final flood risk map. The uncertainty analysis included the effects of uncertainties in the observed flood extent as well as in model parameters. The height-based measure was found to be more sensitive when increased heighting accuracy was achieved by requiring that observed waterline heights varied slowly along the reach. The technique allows for the decomposition of the reach into sections, with different effective channel friction parameters used in different sections, which in this case resulted in lower r.m.s. height differences between observed and modelled waterlines than those achieved by runs using a single friction parameter for the whole reach. However, a validation of the modelled inundation uncertainty using the calibration event showed a significant difference between the uncertainty map and the observed flood extent. While this was true for both measures, the difference was especially significant for the height-based one. This is likely to be due to the conceptually simple flood inundation model and the coarse application resolution employed in this case. The increased sensitivity of the height-based measure may lead to an increased onus being placed on the model developer in the production of a valid model.
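The two performance measures compared above can be sketched as follows; the areal measure is taken here as a simple intersection-over-union of wet pixels, one of several variants in use:

```python
import numpy as np

def areal_fit(observed_wet, modelled_wet):
    """Areal pattern measure: overlap of wet pixels (intersection / union)."""
    inter = np.logical_and(observed_wet, modelled_wet).sum()
    union = np.logical_or(observed_wet, modelled_wet).sum()
    return inter / union

def waterline_rmse(obs_heights, mod_heights):
    """Height-based measure: r.m.s. difference between corresponding
    observed and modelled waterline elevations."""
    d = np.asarray(obs_heights, float) - np.asarray(mod_heights, float)
    return float(np.sqrt(np.mean(d ** 2)))
```

The areal score saturates once the extents largely overlap, whereas the height measure keeps discriminating between runs, which is the greater sensitivity to channel friction reported above.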


Models developed to identify the rates and origins of nutrient export from land to stream require an accurate assessment of the nutrient load present in the water body in order to calibrate model parameters and structure. These data are rarely available at a representative scale and in an appropriate chemical form except in research catchments. Observational errors associated with nutrient load estimates based on these data lead to a high degree of uncertainty in modelling and nutrient budgeting studies. Here, daily paired instantaneous P and flow data for 17 UK research catchments covering a total of 39 water years (WY) have been used to explore the nature and extent of the observational error associated with nutrient flux estimates based on partial fractions and infrequent sampling. The daily records were artificially decimated to create 7 stratified sampling records, 7 weekly records, and 30 monthly records from each WY and catchment. These were used to evaluate the impact of sampling frequency on load estimate uncertainty. The analysis underlines the high uncertainty of load estimates based on monthly data and individual P fractions rather than total P. Catchments with a high baseflow index and/or low population density were found to return a lower RMSE on load estimates when sampled infrequently than those with a low baseflow index and high population density. Catchment size was not shown to be important, though a limitation of this study is that daily records may fail to capture the full range of P export behaviour in smaller catchments with flashy hydrographs, leading to an underestimate of uncertainty in load estimates for such catchments. Further analysis of sub-daily records is needed to investigate this fully. Here, recommendations are given on load estimation methodologies for different catchment types sampled at different frequencies, and on the ways in which this analysis can be used to identify observational error and uncertainty for model calibration and nutrient budgeting studies.
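The effect of decimating a daily record on a load estimate can be sketched with a flow-weighted (ratio) estimator, one common choice among the load estimation methods evaluated in studies of this kind:

```python
import numpy as np

def annual_load(conc, flow, dt_days=1.0):
    """Reference load from the complete record: summed instantaneous flux."""
    return float(np.sum(np.asarray(conc) * np.asarray(flow)) * dt_days)

def decimated_load(conc, flow, every_n, dt_days=1.0):
    """Flow-weighted load estimate from an every-n-days subsample: the
    flow-weighted mean concentration scaled by the full flow record."""
    c = np.asarray(conc)[::every_n]
    q = np.asarray(flow)[::every_n]
    mean_conc = np.sum(c * q) / np.sum(q)        # flow-weighted mean conc
    return float(mean_conc * np.sum(flow) * dt_days)
```

Comparing `decimated_load` against `annual_load` over many synthetic decimations is the kind of RMSE experiment the abstract describes; for flashy, high-population catchments the subsampled estimate diverges most.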


It has been generally accepted that the method of moments (MoM) variogram, which has been widely applied in soil science, requires about 100 sites at an appropriate interval apart to describe the variation adequately. This sample size is often larger than can be afforded for soil surveys of agricultural fields or contaminated sites. Furthermore, it might be a much larger sample size than is needed where the scale of variation is large. A possible alternative in such situations is the residual maximum likelihood (REML) variogram, because fewer data appear to be required. The REML method is parametric and is considered reliable where there is trend in the data, because it is based on generalized increments that filter trend out, and only the covariance parameters are estimated. Previous research has suggested that fewer data are needed to compute a reliable variogram using a maximum likelihood approach such as REML; however, the results can vary according to the nature of the spatial variation. There remain issues to examine: how many fewer data can be used, how should the sampling sites be distributed over the site of interest, and how do different degrees of spatial variation affect the data requirements? The soil of four field sites of different size, physiography, parent material and soil type was sampled intensively, and MoM and REML variograms were calculated for clay content. The data were then sub-sampled to give different sample sizes and distributions of sites, and the variograms were computed again. The model parameters for the sets of variograms for each site were used for cross-validation. Predictions based on REML variograms were generally more accurate than those from MoM variograms with fewer than 100 sampling sites. A sample size of around 50 sites at an appropriate distance apart, possibly determined from variograms of ancillary data, appears adequate to compute REML variograms for kriging soil properties for precision agriculture and contaminated sites.
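The method-of-moments estimator, the baseline against which REML is compared, can be sketched for a regularly spaced one-dimensional transect:

```python
import numpy as np

def mom_variogram(z, max_lag):
    """Method-of-moments (Matheron) experimental variogram for a regularly
    spaced 1-D transect: gamma(h) = half the mean squared difference of all
    pairs of values separated by lag h."""
    z = np.asarray(z, dtype=float)
    return np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2)
                     for h in range(1, max_lag + 1)])
```

Because each lag estimate is an average over the available pairs, small samples give noisy estimates, which is why the MoM variogram traditionally demands on the order of 100 sites.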


Matheron's usual variogram estimator can result in unreliable variograms when data are strongly asymmetric or skewed. Asymmetry in a distribution can arise from a long tail of values in the underlying process or from outliers that belong to another population that contaminates the primary process. This paper examines the effects of underlying asymmetry on the variogram and on the accuracy of prediction; a companion paper examines the effects arising from outliers. Standard geostatistical texts suggest ways of dealing with underlying asymmetry; however, these are based on informed intuition rather than detailed investigation. To determine whether the methods generally used to deal with underlying asymmetry are appropriate, the effects of different coefficients of skewness on the shape of the experimental variogram and on the model parameters were investigated. Simulated annealing was used to create normally distributed random fields of different size from variograms with different nugget:sill ratios. These data were then modified to give different degrees of asymmetry, and the experimental variogram was computed in each case. The effects of standard data transformations on the form of the variogram were also investigated. Cross-validation was used to assess quantitatively the performance of the different variogram models for kriging. The results showed that the shape of the variogram was affected by the degree of asymmetry, and that the effect increased as the size of the data set decreased. Transformations of the data were more effective in reducing the skewness coefficient in the larger sets of data. Cross-validation confirmed that variogram models from transformed data were more suitable for kriging than were those from the raw asymmetric data. The results of this study have implications for 'standard best practice' in dealing with asymmetry in data for geostatistical analyses.
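The skewness coefficient that drives the investigation, and the effect of a standard log transformation on it, can be sketched as follows (the lognormal sample is illustrative):

```python
import numpy as np

def skewness(x):
    """Moment coefficient of skewness of a sample."""
    x = np.asarray(x, dtype=float)
    return float(np.mean((x - x.mean()) ** 3) / x.std() ** 3)
```

A long-tailed (e.g. lognormal) sample has a large positive skewness coefficient; taking logs recovers a near-symmetric distribution, which is the standard transformation whose effect on the variogram the study quantifies.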


Structure is an important physical feature of the soil that is associated with water movement, the soil atmosphere, microorganism activity and nutrient uptake. A soil without any obvious organisation of its components is known as apedal, and this state can have marked effects on several soil processes. Accurate maps of topsoil and subsoil structure are desirable for a wide range of models that aim to predict erosion, solute transport, or the flow of water through the soil. Such maps would also be useful to precision farmers when deciding how to apply nutrients and pesticides in a site-specific way, and to target subsoiling and soil structure stabilization procedures. Typically, soil structure is inferred from bulk density or penetrometer resistance measurements, and more recently from soil resistivity and conductivity surveys. Measurement of the former is both time-consuming and costly, whereas observations by the latter methods can be made automatically and swiftly using a vehicle-mounted penetrometer or resistivity and conductivity sensors. The results of each of these methods, however, are affected by other soil properties, in particular moisture content at the time of sampling, texture, and the presence of stones. Traditional methods of observing soil structure identify the type of ped and its degree of development. Methods of ranking such observations from good to poor for different soil textures have been developed. Indicator variograms can be computed for each category or rank of structure, and these can be summed to give the sum of indicator variograms (SIV). Observations of the topsoil and subsoil structure were made at four field sites where the soil had developed on different parent materials. The observations were ranked by four methods, and the indicator variograms and their sums were computed and modelled for each method of ranking. The individual indicators were then kriged with the parameters of the appropriate indicator variogram model to map the probability of encountering soil with the structure represented by that indicator. The model parameters of the SIVs for each ranking system were used with the data to krige the soil structure classes, and the results are compared with those for the individual indicators. The relations between maps of soil structure and selected wavebands from aerial photographs are examined as a basis for planning surveys of soil structure.


The vibrational structure of C–H stretching states in gas-phase cyclobutene was studied using FTIR spectroscopy in the range 700–9000 cm−1. The structure was modelled using two effective vibrational Hamiltonians, one for each type of C–H bond present, consisting of local mode basis functions subject to coupling with symmetrically equivalent bonds and to Fermi resonances with suitable low-frequency vibrations. Best-fit model parameters were determined using least-squares routines, and the model predictions are compared to the observed band positions and intensities. Some discussion is given of the relevance of the observed couplings to intramolecular vibrational redistribution (IVR), which results in the observation of statistical behaviour in cyclobutene isomerization induced by excitation of C–H stretching overtones in the visible region.
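The essence of a Fermi resonance in such effective Hamiltonians can be sketched with a two-level model: coupled near-degenerate levels repel each other and share intensity. The energies and coupling below are illustrative values, not the best-fit parameters of the study:

```python
import numpy as np

def fermi_dyad(e_bright, e_dark, W):
    """Eigenvalues of a minimal two-level Fermi-resonance Hamiltonian:
    a C-H stretch basis state coupled to a near-degenerate combination
    level by an off-diagonal coupling W (all quantities in cm^-1)."""
    H = np.array([[e_bright, W],
                  [W,        e_dark]])
    return np.linalg.eigvalsh(H)     # ascending eigenvalues
```

In the degenerate limit the splitting between the two observed bands is exactly 2W, so measured band positions constrain the coupling constants in the least-squares fit.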


This note considers variance estimation for population size estimators based on capture–recapture experiments. Whereas a diversity of estimators of the population size has been suggested, the question of estimating the associated variances is less frequently addressed. This note points out that the technique of conditioning can be applied successfully here, which also allows us to identify the sources of variation: the variance due to estimation of the model parameters and the binomial variance due to sampling n units from a population of size N. The technique is applied to estimators typically used in capture–recapture experiments in continuous time, including the estimators of Zelterman and Chao, and improves upon previously used variance estimators. In addition, knowledge of the variances associated with the estimators of Zelterman and Chao allows the suggestion of a new estimator as the weighted sum of the two. The decomposition of the variance into the two sources also allows a new understanding of how resampling techniques such as the bootstrap could be used appropriately. Finally, the sample size question for capture–recapture experiments is addressed. Since the variance of population size estimators increases with the sample size, it is suggested that relative measures such as the observed-to-hidden ratio or the completeness-of-identification proportion be used when approaching the question of sample size choice.
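Chao's estimator, one of the two discussed, can be sketched from the frequency-of-frequencies summary of a capture–recapture experiment (the variance decomposition itself is not reproduced here):

```python
def chao_estimate(f1, f2, n_observed):
    """Chao's lower-bound estimator of total population size:
    f1 = units captured exactly once, f2 = units captured exactly twice,
    n_observed = number of distinct units seen at all."""
    return n_observed + f1 ** 2 / (2.0 * f2)
```

The hidden part of the population is estimated from the singletons and doubletons alone; conditioning on the observed counts then separates the parameter-estimation variance from the binomial sampling variance described above.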


Estimates of the response of crops to climate change rarely quantify the uncertainty inherent in the simulation of both climate and crops. We present a crop simulation ensemble for a location in India, perturbing the response of both crop and climate under both baseline (12 720 simulations) and doubled-CO2 (171 720 simulations) climates. Some simulations used parameter values representing genotypic adaptation to mean temperature change. Firstly, observed and simulated yields in the baseline climate were compared. Secondly, the response of yield to changes in mean temperature was examined and compared to that found in the literature. No consistent response to temperature change was found across studies. Thirdly, the relative contribution of uncertainty in crop and climate simulation to the total uncertainty in projected yield changes was examined. In simulations without genotypic adaptation, most of the uncertainty came from the climate model parameters. Comparison with the simulations with genotypic adaptation and with a previous study suggested that the relatively low crop parameter uncertainty derives from the observational constraints on the crop parameters used in this study. Fourthly, the simulations were used, together with an observed dataset and a simple analysis of crop cardinal temperatures and thermal time, to estimate the potential for adaptation using existing cultivars. The results suggest that the germplasm for complete adaptation of groundnut cultivation in western India to a doubled-CO2 environment may not exist. In conjunction with analyses of germplasm and local management