906 results for Classical measurement error model
Abstract:
We have developed a model of the local field potential (LFP) based on the conservation of charge, the independence principle of ionic flows and the classical Hodgkin–Huxley (HH) type intracellular model of synaptic activity. Simulations of the HH intracellular model provided insights into the nonlinear relationship between the balance of synaptic conductances and that of post-synaptic currents: the latter depends not only on the former, but also on the temporal lag between the excitatory and inhibitory conductances and on the strength of the afferent signal. The proposed LFP model provides a method for decomposing LFP recordings near the soma of layer IV pyramidal neurons in the barrel cortex of anaesthetised rats into two highly correlated components with opposite polarity. The temporal dynamics and the proportional balance of the two components are comparable to the excitatory and inhibitory post-synaptic currents computed from the HH model. This suggests that the two components of the LFP reflect the underlying excitatory and inhibitory post-synaptic currents of the local neural population. We further used the model to decompose a sequence of evoked LFP responses under repetitive electrical stimulation (5 Hz) of the whisker pad. We found that as neural responses adapted, the excitatory and inhibitory components also adapted proportionately, while the temporal lag between the onsets of the two components increased during frequency adaptation. Our results demonstrated that the balance between neural excitation and inhibition can be investigated using extracellular recordings. Extension of the model to incorporate multiple compartments should allow surface electroencephalography (EEG) recordings to be interpreted more quantitatively in terms of components reflecting the excitatory, inhibitory and passive ionic current flows generated by local neural populations.
Abstract:
Traditionally, functional magnetic resonance imaging (fMRI) has been used to map activity in the human brain by measuring increases in the Blood Oxygenation Level Dependent (BOLD) signal. Positive BOLD fMRI signal changes are often accompanied by sustained negative signal changes. Previous studies investigating the neurovascular coupling mechanisms of the negative BOLD phenomenon have used concurrent 2D-optical imaging spectroscopy (2D-OIS) and electrophysiology (Boorman et al., 2010). These experiments suggested that the negative BOLD signal in response to whisker stimulation was a result of an increase in deoxy-haemoglobin and reduced multi-unit activity in the deep cortical layers. However, Boorman et al. (2010) did not measure the BOLD and haemodynamic responses concurrently and so could not quantitatively compare either the spatial maps or the 2D-OIS and fMRI time series directly. Furthermore, their study utilised a homogeneous tissue model, which is predominantly sensitive to haemodynamic changes in more superficial layers. Here we test whether the 2D-OIS technique is appropriate for studies of negative BOLD. We used concurrent fMRI and 2D-OIS at 7 Tesla to investigate the haemodynamics underlying the negative BOLD response. We investigated whether optical methods could be used to accurately map and measure the negative BOLD phenomenon by using 2D-OIS haemodynamic data to derive predictions from a biophysical model of BOLD signal changes. We showed that, despite the deep cortical origin of the negative BOLD response, if an appropriate heterogeneous tissue model is used in the spectroscopic analysis then 2D-OIS can be used to investigate the negative BOLD phenomenon.
Abstract:
Modern neuroimaging techniques rely on neurovascular coupling to show regions of increased brain activation. However, little is known of the neurovascular coupling relationships that exist for inhibitory signals. To address this issue directly we developed a preparation to investigate the signal sources of one of these proposed inhibitory neurovascular signals, the negative blood oxygen level-dependent (BOLD) response (NBR), in rat somatosensory cortex. We found a reliable NBR measured in rat somatosensory cortex in response to unilateral electrical whisker stimulation, which was located in deeper cortical layers relative to the positive BOLD response. Separate optical measurements (two-dimensional optical imaging spectroscopy and laser Doppler flowmetry) revealed that the NBR was a result of decreased blood volume and flow and increased levels of deoxyhemoglobin. Neural activity in the NBR region, measured by multichannel electrodes, varied considerably as a function of cortical depth. There was a decrease in neuronal activity in deep cortical laminae. After cessation of whisker stimulation there was a large increase in neural activity above baseline. Both the decrease in neuronal activity and increase above baseline after stimulation cessation correlated well with the simultaneous measurement of blood flow suggesting that the NBR is related to decreases in neural activity in deep cortical layers. Interestingly, the magnitude of the neural decrease was largest in regions showing stimulus-evoked positive BOLD responses. Since a similar type of neural suppression in surround regions was associated with a negative BOLD signal, the increased levels of suppression in positive BOLD regions could importantly moderate the size of the observed BOLD response.
Abstract:
We propose a new sparse model construction method aimed at maximizing a model's generalisation capability for a large class of linear-in-the-parameters models. The coordinate descent optimization algorithm is employed with a modified l1-penalized least squares cost function in order to estimate a single parameter and its regularization parameter simultaneously, based on the leave-one-out mean square error (LOOMSE). Our original contribution is to derive a closed form for the optimal LOOMSE regularization parameter of a single-term model, for which we show that the LOOMSE can be computed analytically without actually splitting the data set, leading to a very simple parameter estimation method. We then integrate these new results within the coordinate descent optimization algorithm to update model parameters one at a time for linear-in-the-parameters models. Consequently, a fully automated procedure is achieved without resorting to any separate validation data set for iterative model evaluation. Illustrative examples are included to demonstrate the effectiveness of the new approaches.
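The paper's derivation concerns a modified l1 penalty; purely to illustrate the central idea of evaluating the LOOMSE analytically without splitting the data, the sketch below uses a ridge-style (l2) shrinkage of a single term, for which the leverage identity gives exact leave-one-out residuals (function names and the synthetic data are illustrative, not the paper's formulation).

```python
import numpy as np

def loomse_single_term(x, y, lam):
    """Leave-one-out MSE for a single-term shrinkage fit y ~ w*x, computed
    analytically via the leverage identity -- no explicit data splitting."""
    sxx = x @ x
    w = (x @ y) / (sxx + lam)        # shrunken single-parameter estimate
    resid = y - w * x                # in-sample residuals
    h = x**2 / (sxx + lam)           # per-observation leverage
    return np.mean((resid / (1.0 - h)) ** 2)

# Choose the regularisation parameter that minimises LOOMSE (synthetic data).
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 0.8 * x + 0.3 * rng.normal(size=200)
best_lam = min(np.logspace(-4, 2, 50), key=lambda lam: loomse_single_term(x, y, lam))
```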
Abstract:
Geomagnetic activity has long been known to exhibit approximately 27 day periodicity, resulting from solar wind structures repeating each solar rotation. Thus a very simple near-Earth solar wind forecast is 27 day persistence, wherein the near-Earth solar wind conditions today are assumed to be identical to those 27 days previously. Effective use of such a persistence model as a forecast tool, however, requires its performance and uncertainty to be fully characterized. The first half of this study determines which solar wind parameters can be reliably forecast by persistence and how the forecast skill varies with the solar cycle. The second half of the study shows how persistence can provide a useful benchmark for more sophisticated forecast schemes, namely physics-based numerical models. Point-by-point assessment methods, such as correlation and mean-square error, find persistence skill comparable to numerical models during solar minimum, despite the 27 day lead time of persistence forecasts, versus 2–5 days for numerical schemes. At solar maximum, however, the dynamic nature of the corona means 27 day persistence is no longer a good approximation and skill scores suggest persistence is out-performed by numerical models for almost all solar wind parameters. But point-by-point assessment techniques are not always a reliable indicator of usefulness as a forecast tool. An event-based assessment method, which focusses on key solar wind structures, finds persistence to be the most valuable forecast throughout the solar cycle. This reiterates the fact that the means of assessing the "best" forecast model must be specifically tailored to its intended use.
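As a concrete illustration, a minimal sketch of the 27 day persistence forecast and an MSE-based skill score against a reference forecast; the synthetic solar wind speed series and all parameter values are purely illustrative.

```python
import numpy as np

def persistence_forecast(series, lag=27):
    """27 day persistence on a daily series: the forecast for day t is the
    value observed on day t - lag. Returns aligned (forecast, observed)."""
    return series[:-lag], series[lag:]

def mse_skill(forecast, observed, reference):
    """Skill score relative to a reference forecast: 1 is perfect, 0 means no
    better than the reference, negative means worse."""
    mse_f = np.mean((forecast - observed) ** 2)
    mse_r = np.mean((reference - observed) ** 2)
    return 1.0 - mse_f / mse_r

# Illustrative daily solar wind speed record (km/s) with a 27 day recurrence.
rng = np.random.default_rng(1)
days = np.arange(1000)
speed = 400 + 50 * np.sin(2 * np.pi * days / 27) + 20 * rng.normal(size=days.size)
fc, obs = persistence_forecast(speed)
print(np.corrcoef(fc, obs)[0, 1])   # point-by-point correlation of the forecast
```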
Abstract:
Tests of the new Rossby wave theories that have been developed over the past decade to account for discrepancies between theoretical wave speeds and those observed by satellite altimeters have focused primarily on the surface signature of such waves. It appears, however, that the surface signature of the waves acts only as a rather weak constraint, and that information on the vertical structure of the waves is required to better discriminate between competing theories. Due to the lack of 3-D observations, this paper uses high-resolution model data to construct realistic vertical structures of Rossby waves and compares these to structures predicted by theory. The meridional velocity of a section at 24° S in the Atlantic Ocean is pre-processed using the Radon transform to select the dominant westward signal. Normalized profiles are then constructed using three complementary methods based respectively on: (1) averaging vertical profiles of velocity, (2) diagnosing the amplitude of the Radon transform of the westward propagating signal at different depths, and (3) EOF analysis. These profiles are compared to profiles calculated using four different Rossby wave theories: standard linear theory (SLT), SLT plus mean flow, SLT plus topographic effects, and theory including mean flow and topographic effects. Our results support the classical theoretical assumption that westward propagating signals have a well-defined vertical modal structure associated with a phase speed independent of depth, in contrast with the conclusions of a recent study using the same model but for different locations in the North Atlantic. The model structures are in general surface intensified, with a sign reversal at depth in some regions, notably occurring at shallower depths in the East Atlantic. SLT provides a good fit to the model structures in the top 300 m, but grossly overestimates the sign reversal at depth. The addition of mean flow slightly reduces this overestimate, but yields a structure that is too surface intensified. SLT plus topography rectifies the overestimation of the sign reversal, but overestimates the amplitude of the structure for much of the layer above the sign reversal. Combining the effects of mean flow and topography provided the best fit for the mean model profiles, although small errors at the surface and mid-depths are carried over from the individual effects of mean flow and topography respectively. Across the section the best-fitting theory varies between SLT plus topography and topography with mean flow, with, in general, SLT plus topography performing better in the east where the sign reversal is less pronounced. None of the theories could accurately reproduce the deeper sign reversals in the west. All theories performed badly at the boundaries. The generalization of this method to other latitudes, oceans, models and baroclinic modes would provide greater insight into the variability in the ocean, while better observational data would allow verification of the model findings.
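As an indicative sketch of method (3), the leading EOF of a depth-time section (after the westward-propagating signal has been isolated, e.g. via the Radon transform) gives one estimate of the dominant vertical structure; the array shape and the surface normalisation are illustrative assumptions.

```python
import numpy as np

def leading_vertical_mode(v):
    """v: meridional velocity of the westward-propagating signal, shaped
    (n_depths, n_times). Returns the leading EOF over depth, normalised to
    unity at the shallowest level."""
    anomalies = v - v.mean(axis=1, keepdims=True)   # remove the time mean
    modes, _, _ = np.linalg.svd(anomalies, full_matrices=False)
    mode = modes[:, 0]                              # dominant spatial pattern
    return mode / mode[0]                           # normalise at the surface
```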
Abstract:
A mathematical model incorporating many of the important processes at work in the crystallization of emulsions is presented. The model describes nucleation within the discontinuous domain of an emulsion, precipitation in the continuous domain, transport of monomers between the two domains, and formation and subsequent growth of crystals in both domains. The model is formulated as an autonomous system of nonlinear, coupled ordinary differential equations. The description of nucleation and precipitation is based upon the Becker–Döring equations of classical nucleation theory. A particular feature of the model is that the number of particles of all species present is explicitly conserved; this differs from work that employs Arrhenius descriptions of nucleation rate. Since the model includes many physical effects, it is analyzed in stages so that the role of each process may be understood. When precipitation occurs in the continuous domain, the concentration of monomers falls below the equilibrium concentration at the surface of the drops of the discontinuous domain. This leads to a transport of monomers from the drops into the continuous domain that are then incorporated into crystals and nuclei. Since the formation of crystals is irreversible and their subsequent growth inevitable, crystals forming in the continuous domain effectively act as a sink for monomers “sucking” monomers from the drops. In this case, numerical calculations are presented which are consistent with experimental observations. In the case in which critical crystal formation does not occur, the stationary solution is found and a linear stability analysis is performed. Bifurcation diagrams describing the loci of stationary solutions, which may be multiple, are numerically calculated.
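For reference, the Becker–Döring equations in their standard single-domain form (the paper extends them to the two-domain emulsion setting) read:

```latex
\frac{\mathrm{d}c_r}{\mathrm{d}t} = J_{r-1} - J_r \quad (r \ge 2), \qquad
J_r = a_r c_1 c_r - b_{r+1} c_{r+1}, \qquad
\frac{\mathrm{d}c_1}{\mathrm{d}t} = -J_1 - \sum_{r \ge 1} J_r ,
```

where c_r is the concentration of r-mers and a_r, b_r are the aggregation and fragmentation coefficients; this system conserves the total particle density \(\sum_{r \ge 1} r\,c_r\), consistent with the model's explicit conservation of particle numbers.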
Abstract:
As the calibration and evaluation of flood inundation models are a prerequisite for their successful application, there is a clear need to ensure that the performance measures that quantify how well models match the available observations are fit for purpose. This paper evaluates the binary pattern performance measures that are frequently used to compare flood inundation models with observations of flood extent. This evaluation considers whether these measures are able to calibrate and evaluate model predictions in a credible and consistent way, i.e. identifying the underlying model behaviour for a number of different purposes such as comparing models of floods of different magnitudes or on different catchments. Through theoretical examples, it is shown that the binary pattern measures are not consistent for floods of different sizes, such that for the same vertical error in water level, a model of a flood of large magnitude appears to perform better than a model of a smaller magnitude flood. Further, the commonly used Critical Success Index (usually referred to as F²) is biased in favour of overprediction of the flood extent, and is also biased towards correctly predicting areas of the domain with smaller topographic gradients. Consequently, it is recommended that future studies consider carefully the implications of reporting conclusions using these performance measures. Additionally, future research should consider whether a more robust and consistent analysis could be achieved by using elevation comparison methods instead.
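For clarity, a minimal sketch of how this binary pattern measure is computed from modelled and observed flood-extent rasters (boolean arrays over the model domain; names are illustrative):

```python
import numpy as np

def critical_success_index(modelled_wet, observed_wet):
    """F = A / (A + B + C), where A = cells wet in both model and observation,
    B = cells wet in the model only (overprediction) and C = cells wet in the
    observation only (underprediction)."""
    m = np.asarray(modelled_wet, dtype=bool)
    o = np.asarray(observed_wet, dtype=bool)
    a = np.sum(m & o)
    b = np.sum(m & ~o)
    c = np.sum(~m & o)
    return a / float(a + b + c)
```

The overprediction bias can be seen directly from this formula: wrongly flooding Δ extra cells while capturing the whole observed extent gives F = A/(A+Δ), whereas missing Δ observed-wet cells gives F = (A−Δ)/A, which is always smaller for the same Δ.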
Abstract:
We investigate the initialization of Northern-hemisphere sea ice in the global climate model ECHAM5/MPI-OM by assimilating sea-ice concentration data. The analysis updates for concentration are given by Newtonian relaxation, and we discuss different ways of specifying the analysis updates for mean thickness. Because the conservation of mean ice thickness or actual ice thickness in the analysis updates leads to poor assimilation performance, we introduce a proportional dependence between concentration and mean thickness analysis updates. Assimilation with these proportional mean-thickness analysis updates significantly reduces assimilation error both in identical-twin experiments and when assimilating sea-ice observations, reducing the concentration error by a factor of four to six, and the thickness error by a factor of two. To understand the physical aspects of assimilation errors, we construct a simple prognostic model of the sea-ice thermodynamics, and analyse its response to the assimilation. We find that the strong dependence of thermodynamic ice growth on ice concentration necessitates an adjustment of mean ice thickness in the analysis update. To understand the statistical aspects of assimilation errors, we study the model background error covariance between ice concentration and ice thickness. We find that the spatial structure of covariances is best represented by the proportional mean-thickness analysis updates. Both physical and statistical evidence supports the experimental finding that proportional mean-thickness updates are superior to the other two methods considered and enable us to assimilate sea ice in a global climate model using simple Newtonian relaxation.
Abstract:
We investigate the initialisation of Northern Hemisphere sea ice in the global climate model ECHAM5/MPI-OM by assimilating sea-ice concentration data. The analysis updates for concentration are given by Newtonian relaxation, and we discuss different ways of specifying the analysis updates for mean thickness. Because the conservation of mean ice thickness or actual ice thickness in the analysis updates leads to poor assimilation performance, we introduce a proportional dependence between concentration and mean thickness analysis updates. Assimilation with these proportional mean-thickness analysis updates leads to good assimilation performance for sea-ice concentration and thickness, both in identical-twin experiments and when assimilating sea-ice observations. The simulation of other Arctic surface fields in the coupled model is, however, not significantly improved by the assimilation. To understand the physical aspects of assimilation errors, we construct a simple prognostic model of the sea-ice thermodynamics, and analyse its response to the assimilation. We find that an adjustment of mean ice thickness in the analysis update is essential to arrive at plausible state estimates. To understand the statistical aspects of assimilation errors, we study the model background error covariance between ice concentration and ice thickness. We find that the spatial structure of covariances is best represented by the proportional mean-thickness analysis updates. Both physical and statistical evidence supports the experimental finding that assimilation with proportional mean-thickness updates outperforms the other two methods considered. The method described here is very simple to implement, and gives results that are sufficiently good to be used for initialising sea ice in a global climate model for seasonal to decadal predictions.
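A minimal sketch of one analysis step, assuming Newtonian relaxation of concentration and a mean-thickness increment proportional to the concentration increment; the proportionality factor r and the relaxation timescale are hypothetical placeholders, not the specification derived in the paper.

```python
import numpy as np

def analysis_step(conc, mean_thick, conc_obs, tau=5.0, r=1.0):
    """One assimilation update on model grid arrays.

    conc, mean_thick : modelled ice concentration and mean thickness
    conc_obs         : observed ice concentration
    tau              : Newtonian relaxation timescale (analysis steps)
    r                : proportionality factor linking the mean-thickness
                       increment to the concentration increment (placeholder)
    """
    d_conc = (conc_obs - conc) / tau      # Newtonian relaxation increment
    d_thick = r * d_conc                  # proportional thickness increment
    return (np.clip(conc + d_conc, 0.0, 1.0),
            np.maximum(mean_thick + d_thick, 0.0))
```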
Abstract:
This paper combines and generalizes a number of recent time series models of daily exchange rate series by using a SETAR model which also allows the variance equation of a GARCH specification for the error terms to be drawn from more than one regime. An application of the model to the French Franc/Deutschmark exchange rate demonstrates that out-of-sample forecasts of exchange rate volatility are also improved when the restriction that the data are drawn from a single regime is removed. This result highlights the importance of considering both types of regime shift (i.e. thresholds in variance as well as in mean) when analysing financial time series.
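To make the model class concrete, a small simulation sketch of a two-regime SETAR(1) process whose GARCH(1,1) error variance also switches with the regime; all parameter values are hypothetical.

```python
import numpy as np

def simulate_setar_garch(n, threshold=0.0, delay=1,
                         phi=(0.3, -0.2),      # AR(1) coefficient per regime
                         omega=(0.01, 0.05),   # GARCH intercept per regime
                         alpha=(0.05, 0.15),   # ARCH coefficient per regime
                         beta=(0.90, 0.80),    # GARCH coefficient per regime
                         seed=0):
    """Simulate returns y_t whose regime is set by y_{t-delay} relative to the
    threshold; both the mean and the variance equations switch with the regime."""
    rng = np.random.default_rng(seed)
    y = np.zeros(n)
    h = np.full(n, 0.02)       # conditional variance
    eps = np.zeros(n)          # innovations
    for t in range(1, n):
        k = 0 if y[t - delay] <= threshold else 1
        h[t] = omega[k] + alpha[k] * eps[t - 1] ** 2 + beta[k] * h[t - 1]
        eps[t] = np.sqrt(h[t]) * rng.standard_normal()
        y[t] = phi[k] * y[t - 1] + eps[t]
    return y, h
```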
Abstract:
Interest in Enterprise Architecture (EA) has been increasing during the last few years. EA has been found to be crucial for business survival, and thus the success of EA implementation is also crucial. The current literature offers no tool for measuring the success of EA implementation. In this paper, a tentative model for measuring success is presented and empirically validated in the EA context. The results show that the success of EA implementation can be measured indirectly by measuring the achievement of the objectives set for the implementation. The results also imply that achieving individuals' objectives does not necessarily mean that the organisation's objectives are achieved. The presented Success Measurement Model can be used as a basis for developing measurement metrics.
Abstract:
We report on the first real-time ionospheric predictions network and its capabilities to ingest a global database and forecast F-layer characteristics and "in situ" electron densities along the track of an orbiting spacecraft. A global network of ionosonde stations reported around-the-clock observations of F-region heights and densities, and an on-line library of models provided forecasting capabilities. Each model was tested against the incoming data; relative accuracies were intercompared to determine the best overall fit to the prevailing conditions; and the best-fit model was used to predict ionospheric conditions on an orbit-to-orbit basis for the 12-hour period following a twice-daily model test and validation procedure. It was found that the best-fit model often provided averaged (i.e., climatologically-based) accuracies better than 5% in predicting the heights and critical frequencies of the F-region peaks in the latitudinal domain of the TSS-1R flight path. There was a sharp contrast, however, in model-measurement comparisons involving predictions of actual, unaveraged, along-track densities at the 295 km orbital altitude of TSS-1R. In this case, extrema in the first-principles models varied by as much as an order of magnitude in density predictions, and the best-fit models were found to disagree with the "in situ" observations of Ne by as much as 140%. The discrepancies are interpreted as a manifestation of difficulties in accurately and self-consistently modeling the external controls of solar and magnetospheric inputs and the spatial and temporal variabilities in electric fields, thermospheric winds, plasmaspheric fluxes, and chemistry.
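A minimal sketch of the twice-daily test-and-select step: each candidate model in the on-line library is scored against the incoming ionosonde data and the best-fit model is used for the following 12-hour forecast window (the names and the error metric are illustrative assumptions, not the network's actual implementation).

```python
def select_best_fit_model(models, observations):
    """models: mapping of model name -> prediction function f(station, time).
    observations: iterable of (station, time, measured_value) tuples.
    Returns the name of the model with the smallest mean relative error."""
    def mean_relative_error(predict):
        errs = [abs(predict(st, t) - meas) / abs(meas)
                for st, t, meas in observations]
        return sum(errs) / len(errs)
    return min(models, key=lambda name: mean_relative_error(models[name]))
```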
Abstract:
Single-column models (SCM) are useful test beds for investigating the parameterization schemes of numerical weather prediction and climate models. The usefulness of SCM simulations is limited, however, by the accuracy of the best estimate large-scale observations prescribed. Errors in estimating the observations will result in uncertainty in the modeled simulations. One method to address this uncertainty is to simulate an ensemble whose members span the observational uncertainty. This study first derives an ensemble of large-scale data for the Tropical Warm Pool International Cloud Experiment (TWP-ICE) based on an estimate of a possible source of error in the best estimate product. These data are then used to carry out simulations with 11 SCM and two cloud-resolving models (CRM). Best estimate simulations are also performed. All models show that moisture-related variables are close to observations and there are limited differences between the best estimate and ensemble mean values. The models, however, show different sensitivities to changes in the forcing, particularly when weakly forced. The ensemble simulations highlight important differences in the surface evaporation term of the moisture budget between the SCM and CRM. Differences are also apparent between the models in the ensemble mean vertical structure of cloud variables, while for each model, cloud properties are relatively insensitive to forcing. The ensemble is further used to investigate cloud variables and precipitation and identifies differences between CRM and SCM, particularly for relationships involving ice. This study highlights the additional analysis that can be performed using ensemble simulations and hence enables a more complete model investigation compared to using the more traditional single best estimate simulation only.
Abstract:
Current feed evaluation systems for ruminants are too imprecise to describe diets in terms of their acidosis risk. The dynamic mechanistic model described herein arises from the integration of a lactic acid (La) metabolism module into an extant model of whole-rumen function. The model was evaluated using published data from cows and sheep fed a range of diets or infused with various doses of La. The model performed well in simulating peak rumen La concentrations (coefficient of determination = 0.96; root mean square prediction error = 16.96% of observed mean), although the sampling frequency of the published data prevented a comprehensive comparison of the predicted time to peak La accumulation. The model showed a tendency for increased La accumulation following feeding of diets rich in nonstructural carbohydrates, although less-soluble starch sources such as corn tended to limit rumen La concentration. Simulated La absorption from the rumen remained low throughout the feeding cycle. The competition between bacteria and protozoa for rumen La suggests a variable contribution of protozoa to total La utilization. However, the model was unable to simulate the effects of defaunation on rumen La metabolism, indicating a need for a more detailed description of protozoal metabolism. The model could form the basis of a feed evaluation system with regard to rumen La metabolism.
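For reference, a small sketch of the two evaluation statistics quoted above, taking the coefficient of determination as the squared observed-predicted correlation (one common convention) and expressing the prediction error as a percentage of the observed mean:

```python
import numpy as np

def evaluation_stats(observed, predicted):
    """Return (coefficient of determination, RMSPE as % of the observed mean)."""
    obs = np.asarray(observed, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    r2 = np.corrcoef(obs, pred)[0, 1] ** 2
    rmspe_pct = 100.0 * np.sqrt(np.mean((obs - pred) ** 2)) / obs.mean()
    return r2, rmspe_pct
```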