151 results for Classical measurement error model


Relevance: 30.00%

Abstract:

This dissertation deals with aspects of sequential data assimilation (in particular ensemble Kalman filtering) and numerical weather forecasting. In the first part, the recently formulated Ensemble Kalman-Bucy filter (EnKBF) is revisited. It is shown that the previously used numerical integration scheme fails when the magnitude of the background error covariance grows beyond that of the observational error covariance within the forecast window. We therefore present a suitable integration scheme that handles the stiffening of the differential equations involved and incurs no further computational expense. Moreover, a transform-based alternative to the EnKBF is developed: under this scheme, the operations are performed in ensemble space rather than in state space, and the advantages of this formulation are explained. For the first time, the EnKBF is implemented in an atmospheric model. The second part of this work deals with ensemble clustering, a phenomenon that arises when performing data assimilation with deterministic ensemble square root filters (EnSRFs) in highly nonlinear forecast models; namely, an M-member ensemble detaches into an outlier and a cluster of M−1 members. Previous works may suggest that this issue represents a failure of EnSRFs; this work dispels that notion. It is shown that ensemble clustering can also be reversed by nonlinear processes, in particular the alternation between nonlinear expansion and compression of the ensemble in different regions of the attractor. Some EnSRFs that use random rotations have been developed to overcome this issue; these formulations are analyzed and their advantages and disadvantages with respect to common EnSRFs are discussed. The third and last part contains the implementation of the Robert-Asselin-Williams (RAW) filter in an atmospheric model. The RAW filter is an improvement to the widely used Robert-Asselin filter that successfully suppresses spurious computational waves while avoiding distortion of the mean value of the function. Using statistical significance tests at both the local and field level, it is shown that the climatology of the SPEEDY model is not modified by the changed time stepping scheme; hence, no retuning of the parameterizations is required. It is found that the accuracy of medium-term forecasts is increased by using the RAW filter.
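
The RAW time filter itself is compact enough to sketch. Below is a minimal Python illustration of leapfrog time stepping with the RAW filter, in which the usual Robert-Asselin displacement is split between the current and the new time level; the test equation, step size and parameter values are illustrative assumptions, not taken from the dissertation (which applies the filter inside the SPEEDY model).

```python
import numpy as np

def leapfrog_raw(f, x0, dt, nsteps, nu=0.2, alpha=0.53):
    """Leapfrog integration with the Robert-Asselin-Williams (RAW) filter.

    alpha = 1.0 recovers the classical Robert-Asselin filter; values
    slightly above 0.5 (Williams suggests ~0.53) damp the spurious
    computational mode of the leapfrog scheme.
    """
    x_prev = x0                       # filtered state at level n-1
    x_curr = x0 + dt * f(x0)          # Euler start-up step
    out = [x_prev, x_curr]
    for _ in range(nsteps - 1):
        x_next = x_prev + 2.0 * dt * f(x_curr)           # leapfrog step
        d = 0.5 * nu * (x_prev - 2.0 * x_curr + x_next)  # filter displacement
        x_curr_f = x_curr + alpha * d          # filter the current level ...
        x_next = x_next + (alpha - 1.0) * d    # ... and counter-adjust the new one
        x_prev, x_curr = x_curr_f, x_next
        out.append(x_curr)
    return np.array(out)

# Example: the oscillation equation dx/dt = i*omega*x in complex form.
trajectory = leapfrog_raw(lambda x: 1j * 1.0 * x, 1.0 + 0.0j, dt=0.2, nsteps=500)
```

For alpha = 0.5 the two adjustments cancel and the three-time-level mean is conserved exactly, which is the non-distortion property the abstract refers to; the recommended 0.53 trades a little of that conservation for numerical stability.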

Relevance: 30.00%

Abstract:

Data assimilation refers to the problem of finding trajectories of a prescribed dynamical model in such a way that the output of the model (usually some function of the model states) follows a given time series of observations. Typically, though, these two requirements cannot both be met at the same time: tracking the observations is not possible without the trajectory deviating from the proposed model equations, while adherence to the model requires deviations from the observations. Thus, data assimilation faces a trade-off. In this contribution, the sensitivity of the data assimilation with respect to perturbations in the observations is identified as the parameter which controls the trade-off. A relation between the sensitivity and the out-of-sample error is established, which allows the latter to be calculated under operational conditions. A minimum out-of-sample error is proposed as a criterion for setting an appropriate sensitivity and settling the discussed trade-off. Two approaches to data assimilation are considered, namely variational data assimilation and Newtonian nudging, also known as synchronization. Numerical examples demonstrate the feasibility of the approach.
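
Of the two approaches considered, Newtonian nudging is the simpler to illustrate: the model equations are augmented with a relaxation term proportional to the innovation. The following is a minimal sketch under assumptions the abstract does not specify (a Lorenz '63 model, only the first component observed, forward-Euler integration); the nudging gain plays the role of the sensitivity parameter that controls the trade-off between tracking the observations and adhering to the model.

```python
import numpy as np

def lorenz63(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz '63 system."""
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def nudge(x0, obs, dt, gain):
    """Newtonian nudging towards observations of the first component.

    Large `gain` tracks the observations closely (high sensitivity);
    gain = 0 recovers the free model run.
    """
    x = x0.copy()
    traj = []
    for y in obs:
        innovation = y - x[0]                 # model-data mismatch
        forcing = gain * np.array([innovation, 0.0, 0.0])
        x = x + dt * (lorenz63(x) + forcing)  # forward-Euler step
        traj.append(x.copy())
    return np.array(traj)
```

Selecting the gain by minimising an out-of-sample error estimate, as the abstract proposes, would amount to wrapping this integration in a loop over candidate gains.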

Relevance: 30.00%

Abstract:

The cold equatorial SST bias in the tropical Pacific that persists in many coupled AOGCMs severely impacts the fidelity of the simulated climate and its variability in this key region, such as the ENSO phenomenon. Classical bias analysis in these models usually concentrates on the multi-decadal to centennial time series needed to obtain statistically robust features. Yet this strategy cannot fully explain how the model errors were generated in the first place. Here, we use seasonal re-forecasts (hindcasts) to trace back the origin of this cold bias. As such hindcasts are initialized close to observations, the transient drift leading to the cold bias can be analyzed to distinguish pre-existing errors from errors responding to initial ones. A time sequence of the processes involved in the advent of the final mean-state errors can then be proposed. We apply this strategy to the ENSEMBLES-FP6 project multi-model hindcasts of recent decades. Four of the five AOGCMs develop a persistent equatorial cold tongue bias within a few months. The associated systematic errors are first assessed separately for the warm and cold ENSO phases. We find that the models are able to reproduce either El Niño or La Niña close to observations, but not both. ENSO composites then show that the spurious equatorial cooling is largest in El Niño years for the February and August start dates. For these events, and at this time of the year, zonal wind errors in the equatorial Pacific are present from the beginning of the simulation and are hypothesized to be at the origin of the equatorial cold bias, generating too-strong upwelling conditions. The systematic underestimation of the mixed layer depth in several models can further amplify the growth of the SST bias. The seminal role of these zonal wind errors is demonstrated by carrying out ocean-only experiments forced by the AOGCMs' daily 10 m winds. In a case study, we show that for several models this forcing alone is sufficient to reproduce the main SST error patterns seen after one month in the AOGCM hindcasts.

Relevance: 30.00%

Abstract:

The ground-based Atmospheric Radiation Measurement Program (ARM) and the NASA Aerosol Robotic Network (AERONET) routinely monitor clouds using zenith radiances at visible and near-infrared wavelengths. Using the transmittance calculated from such measurements, we have developed a new retrieval method for cloud effective droplet size and conducted extensive tests for non-precipitating liquid water clouds. The underlying principle is to combine a liquid-water-absorbing wavelength (i.e., 1640 nm) with a non-water-absorbing wavelength to acquire information on cloud droplet size and optical depth. For simulated stratocumulus clouds with liquid water path less than 300 g m−2 and a horizontal resolution of 201 m, the retrieval method underestimates the mean effective radius by 0.8 μm, with a root-mean-squared error of 1.7 μm and a relative deviation of 13%. For actual observations with a liquid water path less than 450 g m−2 at the ARM Oklahoma site during 2007–2008, our 1.5-min-averaged retrievals are generally larger, by around 1 μm, than those from combined ground-based cloud radar and microwave radiometer at a 5-min temporal resolution. We also compared our retrievals to those from combined shortwave flux and microwave observations for relatively homogeneous clouds, showing that the bias between these two retrieval sets is negligible, but the error of 2.6 μm and the relative deviation of 22% are larger than those found in our simulation case. Finally, the transmittance-based cloud effective droplet radii agree with satellite observations to better than 11%, with a negative bias of 1 μm. Overall, the retrieval method provides reasonable cloud effective radius estimates, which can enhance the cloud products of both ARM and AERONET.
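
The two-wavelength principle lends itself to a lookup-table inversion: transmittances are precomputed over a grid of optical depth and effective radius, and an observed pair is mapped back to the nearest grid point. The sketch below is only structural; the `forward` function is a crude invented stand-in for the radiative-transfer calculations on which the real retrieval rests, and all coefficients are arbitrary.

```python
import numpy as np

def forward(tau, r_e):
    """Invented stand-in forward model: transmittance at a non-absorbing
    (visible) and a liquid-water-absorbing (1640 nm) wavelength as a
    function of cloud optical depth tau and effective radius r_e (um)."""
    t_vis = 1.0 / (1.0 + 0.15 * tau)                    # size-independent
    t_nir = t_vis * np.exp(-0.02 * r_e * np.sqrt(tau))  # size-dependent
    return t_vis, t_nir

# Precomputed lookup table over the retrieval grid.
taus = np.linspace(1.0, 100.0, 200)
radii = np.linspace(2.0, 30.0, 120)
TAU, RE = np.meshgrid(taus, radii)
T_VIS, T_NIR = forward(TAU, RE)

def retrieve(t_vis_obs, t_nir_obs):
    """Nearest-neighbour inversion of the two-wavelength table."""
    cost = (T_VIS - t_vis_obs) ** 2 + (T_NIR - t_nir_obs) ** 2
    i, j = np.unravel_index(np.argmin(cost), cost.shape)
    return TAU[i, j], RE[i, j]

tau_hat, re_hat = retrieve(*forward(20.0, 10.0))  # recovers (20, 10) up to grid spacing
```

The non-absorbing channel pins down the optical depth, and the absorbing channel then separates droplet size, which is why the two wavelengths are combined.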

Relevance: 30.00%

Abstract:

We investigate the error dynamics for cycled data assimilation systems, in which the inverse problem of state determination is solved at times t_k, k = 1, 2, 3, ..., with a first guess given by the state propagated via a dynamical system model from time t_{k−1} to time t_k. In particular, for nonlinear dynamical systems that are Lipschitz continuous with respect to their initial states, we provide deterministic estimates for the development of the error ||e_k|| := ||x_k^(a) − x_k^(t)|| between the estimated state x^(a) and the true state x^(t) over time. Clearly, an observation error of size δ > 0 leads to an estimation error in every assimilation step. These errors can accumulate if they are not (a) controlled in the reconstruction and (b) damped by the dynamical system under consideration. A data assimilation method is called stable if the error in the estimate is bounded in time by some constant C. The key task of this work is to provide estimates for the error ||e_k||, depending on the size δ of the observation error, the reconstruction operator R_α, the observation operator H, and the Lipschitz constants K^(1) and K^(2) of the lower and higher modes, which control the damping behaviour of the dynamics. We show that systems can be stabilized by choosing α sufficiently small, but the bound C will then depend on the data error δ in the form c||R_α||δ for some constant c. Since ||R_α|| → ∞ as α → 0, this constant might be large. Numerical examples of this behaviour in the nonlinear case are provided using a (low-dimensional) Lorenz '63 system.
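
The shape of the stability bound can be reconstructed schematically. The following sketch makes simplifying assumptions not stated in the abstract: a linear observation operator H, an analysis of the form x_k^(a) = x_k^(b) + R_α(y_k − H x_k^(b)), and a single Lipschitz constant K for the model propagation.

```latex
% One assimilation cycle: propagate, then analyse. The observation error
% contributes at most \|R_\alpha\|\,\delta per cycle:
\[
  \|e_{k+1}\| \;\le\; \underbrace{K \,\| I - R_\alpha H \|}_{=:\,q}\; \|e_k\|
  \;+\; \|R_\alpha\|\,\delta .
\]
% If q < 1, i.e. the reconstruction controls what the dynamics do not damp,
% iterating the recursion and summing the geometric series gives a
% time-uniform bound of the form quoted in the abstract:
\[
  \limsup_{k \to \infty} \|e_k\|
  \;\le\; \frac{\|R_\alpha\|\,\delta}{1 - q}
  \;=\; c \,\|R_\alpha\|\, \delta .
\]
```

The tension described in the abstract is visible here: decreasing α strengthens the contraction factor q but inflates ||R_α||, so stabilization is bought at the price of a larger constant in front of δ.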

Relevance: 30.00%

Abstract:

Red tape is undesirable because it impedes business growth. Relief from the administrative burdens that businesses face due to legislation can benefit the whole economy, especially at times of recession. Recent governmental initiatives aimed at reducing administrative burdens have met with some success, but also with failures. This article compares three national initiatives, in the Netherlands, the UK and Italy, aimed at cutting red tape by using the Standard Cost Model. The findings highlight the factors affecting the outcomes of measurement and reduction plans, and ways to improve the Standard Cost Model methodology.
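
For readers unfamiliar with it, the Standard Cost Model prices an administrative burden as price times quantity: the cost of performing an information obligation once, multiplied by how often it is performed across the affected population. A minimal sketch (the function name and the example figures are invented for illustration):

```python
def scm_burden(tariff_per_hour, hours_per_task, businesses, frequency_per_year):
    """Core Standard Cost Model estimate for one information obligation.

    Price P = tariff x time; Quantity Q = population x frequency;
    annual burden = P x Q. A full SCM study sums this over all
    information obligations imposed by a piece of legislation.
    """
    price = tariff_per_hour * hours_per_task
    quantity = businesses * frequency_per_year
    return price * quantity

# e.g. a filing taking 2 h at EUR 35/h, for 10,000 firms, twice a year:
print(scm_burden(35.0, 2.0, 10_000, 2))  # -> 1,400,000 EUR per year
```

Differences in how the tariff, the time and the population are estimated are the kind of factor the cross-country comparison in the article examines.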

Relevance: 30.00%

Abstract:

In order to validate the reported precision of space‐based atmospheric composition measurements, validation studies often focus on measurements in the tropical stratosphere, where natural variability is weak. The scatter in tropical measurements can then be used as an upper limit on single‐profile measurement precision. Here we introduce a method of quantifying the scatter of tropical measurements which aims to minimize the effects of short‐term atmospheric variability while maintaining large enough sample sizes that the results can be taken as representative of the full data set. We apply this technique to measurements of O3, HNO3, CO, H2O, NO, NO2, N2O, CH4, CCl2F2, and CCl3F produced by the Atmospheric Chemistry Experiment–Fourier Transform Spectrometer (ACE‐FTS). Tropical scatter in the ACE‐FTS retrievals is found to be consistent with the reported random errors (RREs) for H2O and CO at altitudes above 20 km, validating the RREs for these measurements. Tropical scatter in measurements of NO, NO2, CCl2F2, and CCl3F is roughly consistent with the RREs as long as the effect of outliers in the data set is reduced through the use of robust statistics. The scatter in measurements of O3, HNO3, CH4, and N2O in the stratosphere, while larger than the RREs, is shown to be consistent with the variability simulated in the Canadian Middle Atmosphere Model. This result implies that, for these species, stratospheric measurement scatter is dominated by natural variability, not random error, which provides added confidence in the scientific value of single‐profile measurements.
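
The robust-statistics step mentioned for NO, NO2 and the CFCs can be made concrete. A minimal sketch, assuming a scatter estimate based on the median absolute deviation (the abstract does not say which robust estimator was used):

```python
import numpy as np

def robust_scatter(values):
    """MAD-based estimate of the 1-sigma scatter.

    The median absolute deviation, scaled by 1.4826 so that it matches
    the standard deviation for Gaussian data, is largely insensitive to
    the outliers that inflate the ordinary sample standard deviation.
    """
    med = np.median(values)
    return 1.4826 * np.median(np.abs(values - med))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 500)
x[:5] = 50.0                           # a few gross outliers
print(np.std(x), robust_scatter(x))    # ~5.2 vs ~1.0
```

Comparing such a scatter estimate against the reported random errors, profile by profile and altitude by altitude, is the essence of the validation described above.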

Relevance: 30.00%

Abstract:

Current methods and techniques used in designing organisational performance measurement systems do not consider the multiple aspects of business processes or the semantics of data generated during the lifecycle of a product. In this paper, we propose a design model for organisational performance measurement systems that is based on the semantics of an organisation, its business processes and the product lifecycle. Organisational performance measurement is examined from both academic and practice perspectives. This multi-discipline approach is used as a research tool to explore the weaknesses of current models used to design organisational performance measurement systems, which helped to identify the gaps in research and practice concerning the issues and challenges in designing information systems for measuring the performance of an organisation. The knowledge sources investigated include ongoing and completed research project reports, scientific and management literature, and practitioners' magazines.

Relevance: 30.00%

Abstract:

The behavior of the ensemble Kalman filter (EnKF) is examined in the context of a model that exhibits a nonlinear chaotic (slow) vortical mode coupled to a linear (fast) gravity wave of a given amplitude and frequency. It is shown that accurate recovery of both modes is enhanced when covariances between fast and slow normal-mode variables (which reflect the slaving relations inherent in balanced dynamics) are modeled correctly. More ensemble members are needed to recover the fast, linear gravity wave than the slow, vortical motion. Although the EnKF tends to diverge in the analysis of the gravity wave, the filter divergence is stable and does not lead to a great loss of accuracy. Consequently, provided the ensemble is large enough and observations are made that reflect both time scales, the EnKF is able to recover both time scales more accurately than optimal interpolation (OI), which uses a static error covariance matrix. For OI it is also found to be problematic to observe the state at a frequency that is a subharmonic of the gravity wave frequency, a problem that is in part overcome by the EnKF. However, error in the modeled gravity wave parameters can be detrimental to the performance of the EnKF and remove its implied advantages, suggesting that a modified algorithm or a method for accounting for model error is needed.
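
The analysis step at the heart of this comparison can be sketched compactly. Below is a generic stochastic (perturbed-observation) EnKF update in Python; it is a textbook formulation rather than the paper's specific configuration, and all array shapes are illustrative. The cross-covariances between fast and slow variables that the abstract emphasizes enter through the sample covariance Pf.

```python
import numpy as np

def enkf_analysis(ensemble, y_obs, H, R, rng):
    """Stochastic EnKF analysis step with perturbed observations.

    ensemble : (n_state, n_members) forecast ensemble
    y_obs    : (n_obs,) observation vector
    H        : (n_obs, n_state) linear observation operator
    R        : (n_obs, n_obs) observation error covariance
    """
    n_state, m = ensemble.shape
    x_mean = ensemble.mean(axis=1, keepdims=True)
    Xp = (ensemble - x_mean) / np.sqrt(m - 1)       # normalized perturbations
    Pf = Xp @ Xp.T                                  # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)  # Kalman gain
    y_pert = y_obs[:, None] + rng.multivariate_normal(
        np.zeros(len(y_obs)), R, size=m).T          # perturbed observations
    return ensemble + K @ (y_pert - H @ ensemble)

rng = np.random.default_rng(0)
ens = rng.normal(size=(3, 20))           # e.g. one slow and two fast variables
H = np.array([[1.0, 0.0, 0.0]])          # observe the first variable only
ens_a = enkf_analysis(ens, np.array([0.5]), H, np.array([[0.1]]), rng)
```

With too few members, Pf misrepresents the fast-slow cross-covariances, which is one way the recovery of the gravity wave can degrade, consistent with the finding that the fast mode needs a larger ensemble.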

Relevance: 30.00%

Abstract:

We develop a new sparse kernel density estimator using a forward constrained regression framework, within which the non-negativity and summing-to-unity constraints on the mixing weights can easily be satisfied. Our main contribution is to derive a recursive algorithm that selects significant kernels one at a time, based on the minimum integrated square error (MISE) criterion for both the selection of kernels and the estimation of mixing weights. The proposed approach is simple to implement and the associated computational cost is very low. Specifically, the complexity of our algorithm is of the order of the number of training data, N, which is much lower than the order-N² cost of the best existing sparse kernel density estimators. Numerical examples are employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with accuracy comparable to that of the classical Parzen window estimate and other existing sparse kernel density estimators.
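
The flavour of the forward selection can be conveyed with a deliberately simplified sketch. The stand-in criterion below, a least-squares fit on a grid to the full Parzen window estimate, replaces the paper's MISE criterion, and the brute-force candidate search ignores the recursions that make the actual algorithm O(N); everything here is illustrative only.

```python
import numpy as np

def gauss(x, c, h):
    """Gaussian kernel of width h centred at c."""
    return np.exp(-0.5 * ((x - c) / h) ** 2) / (h * np.sqrt(2.0 * np.pi))

def sparse_kde(data, h, n_kernels, grid):
    """Greedy forward selection of kernels for a sparse density estimate."""
    parzen = np.mean([gauss(grid, c, h) for c in data], axis=0)
    chosen, weights = [], None
    for _ in range(n_kernels):
        best_c, best_w, best_err = None, None, np.inf
        for c in data:                              # candidate kernel centres
            if c in chosen:
                continue
            Phi = np.array([gauss(grid, cc, h) for cc in chosen + [c]]).T
            w, *_ = np.linalg.lstsq(Phi, parzen, rcond=None)
            w = np.clip(w, 0.0, None)               # non-negativity constraint
            if w.sum() == 0.0:
                continue
            w = w / w.sum()                         # summing-to-unity constraint
            err = np.sum((Phi @ w - parzen) ** 2)
            if err < best_err:
                best_c, best_w, best_err = c, w, err
        chosen.append(best_c)
        weights = best_w
    return np.array(chosen), weights

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2, 0.5, 100), rng.normal(2, 1.0, 100)])
centres, w = sparse_kde(data, h=0.5, n_kernels=6, grid=np.linspace(-6, 6, 400))
```

Six kernels standing in for two hundred is the kind of sparsity the abstract is after; the contribution of the paper lies in achieving the selection at O(N) cost under the proper MISE criterion, which this sketch does not attempt.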

Relevance: 30.00%

Abstract:

In mid-March 2005, the northern lower-stratospheric polar vortex experienced a severe stretching episode, bringing a large polar filament far south of Alaska toward Hawaii. This meridional intrusion of rare extent, coinciding with the polar vortex final warming and breakdown, was followed by a zonal stretching in the wake of the easterly propagating subtropical main flow. This caused polar air to remain over Hawaii for several days before diluting into the subtropics. After being successfully forecast to pass over Hawaii by the high-resolution potential vorticity advection model Modèle Isentrope du transport Méso-échelle de l'Ozone Stratosphérique par Advection (MIMOSA), the filament was observed on isentropic surfaces between 415 K and 455 K (17–20 km) by the Jet Propulsion Laboratory stratospheric ozone lidar at Mauna Loa Observatory, Hawaii, between 16 and 19 March 2005. It materialized as a thin layer of enhanced ozone peaking at 1.6 ppmv in a region where climatological values usually average 1.0 ppmv. These values were compared to those obtained by the three-dimensional chemistry-transport model MIMOSA-CHIM. Agreement between lidar and model was excellent, particularly in the similar appearance of the ozone peak near 435 K (18.5 km) on 16 March and the persistence of this layer at higher isentropic levels for the following three days. Passive ozone, also modeled by MIMOSA-CHIM, was at about 3–4 ppmv inside the filament while above Hawaii. A detailed history of the modeled chemistry inside the filament suggests that the air mass was still polar ozone-depleted when passing over Hawaii. The filament quickly separated from the main vortex after its Hawaiian overpass. It never reconnected and, in less than 10 days, dispersed entirely in the subtropics.

Relevance: 30.00%

Abstract:

Remote sensing observations often have correlated errors, but the correlations are typically ignored in data assimilation for numerical weather prediction. The assumption of zero correlations is often used with data thinning methods, resulting in a loss of information. As operational centres move towards higher-resolution forecasting, there is a requirement to retain data providing detail on appropriate scales. Thus an alternative approach to dealing with observation error correlations is needed. In this article, we consider several approaches to approximating observation error correlation matrices: diagonal approximations, eigendecomposition approximations and Markov matrices. These approximations are applied in incremental variational assimilation experiments with a 1-D shallow water model using synthetic observations. Our experiments quantify analysis accuracy in comparison with a reference or ‘truth’ trajectory, as well as with analyses using the ‘true’ observation error covariance matrix. We show that it is often better to include an approximate correlation structure in the observation error covariance matrix than to incorrectly assume error independence. Furthermore, by choosing a suitable matrix approximation, it is feasible and computationally cheap to include error correlation structure in a variational data assimilation algorithm.
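
The three families of approximations can be made concrete. A minimal sketch, assuming a first-order autoregressive (Markov) correlation structure C_ij = rho^|i-j| as the "true" matrix; flooring the trailing eigenvalues at the smallest retained one is a common way to truncate an eigendecomposition while keeping the matrix well conditioned, though not necessarily the exact construction used in the article.

```python
import numpy as np

def markov_corr(n, rho):
    """Markov correlation matrix: C_ij = rho^|i-j|."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def diagonal_approx(C):
    """Diagonal approximation: all off-diagonal correlations discarded."""
    return np.diag(np.diag(C))

def eigen_approx(C, k):
    """Keep the k leading eigenpairs, floor the rest for conditioning."""
    vals, vecs = np.linalg.eigh(C)   # eigenvalues in ascending order
    vals = vals.copy()
    vals[:-k] = vals[-k]             # floor at smallest retained eigenvalue
    return (vecs * vals) @ vecs.T

C_true = markov_corr(50, 0.7)
for name, approx in [("diagonal", diagonal_approx(C_true)),
                     ("eigen, k=10", eigen_approx(C_true, 10))]:
    print(name, np.linalg.norm(approx - C_true, "fro"))
```

The Frobenius distance printed here is only a crude gauge of approximation quality; the experiments in the article instead measure the effect on analysis accuracy, which is the quantity that matters for assimilation.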

Relevance: 30.00%

Abstract:

During a series of eight measurement campaigns within the SPURT project (2001–2003), vertical profiles of CO and O3 were obtained at subtropical, middle and high latitudes over western Europe, covering the troposphere and lowermost stratosphere up to ~14 km altitude during all seasons. The seasonal and latitudinal variations of the measured trace gas profiles are compared to simulations with the chemical transport model MATCH. In the troposphere, reasonable agreement between observations and model predictions is achieved for CO and O3, in particular at subtropical and mid-latitudes, while the model overestimates CO and underestimates O3 in the lowermost stratosphere, particularly at high latitudes, indicating too-strong simulated bi-directional exchange across the tropopause. Using tagged tracers in the model, long-range transport of Asian air masses is identified as the dominant source of CO pollution over Europe in the free troposphere.

Relevance: 30.00%

Abstract:

We have developed a model of the local field potential (LFP) based on the conservation of charge, the independence principle of ionic flows, and a classical Hodgkin–Huxley (HH) type intracellular model of synaptic activity. Simulations of the HH intracellular model provided insights into the nonlinear relationship between the balance of synaptic conductances and that of post-synaptic currents: the latter depends not only on the former, but also on the temporal lag between the excitatory and inhibitory conductances and on the strength of the afferent signal. The proposed LFP model provides a method for decomposing LFP recordings near the soma of layer IV pyramidal neurons in the barrel cortex of anaesthetised rats into two highly correlated components with opposite polarity. The temporal dynamics and the proportional balance of the two components are comparable to the excitatory and inhibitory post-synaptic currents computed from the HH model. This suggests that the two components of the LFP reflect the underlying excitatory and inhibitory post-synaptic currents of the local neural population. We further used the model to decompose a sequence of evoked LFP responses under repetitive electrical stimulation (5 Hz) of the whisker pad. We found that as the neural responses adapted, the excitatory and inhibitory components also adapted proportionately, while the temporal lag between the onsets of the two components increased during frequency adaptation. Our results demonstrate that the balance between neural excitation and inhibition can be investigated using extracellular recordings. Extending the model to incorporate multiple compartments should allow more quantitative interpretation of surface electroencephalography (EEG) recordings in terms of components reflecting the excitatory, inhibitory and passive ionic current flows generated by local neural populations.
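
The conductance-to-current relationship underlying the decomposition can be sketched directly. A minimal illustration assuming alpha-function conductance time courses and a fixed membrane potential, both simplifications of the full HH-type model used in the paper; all parameter values are invented.

```python
import numpy as np

def alpha_conductance(t, onset, g_max, tau):
    """Alpha-function synaptic conductance time course (nS)."""
    s = np.clip(t - onset, 0.0, None)
    return g_max * (s / tau) * np.exp(1.0 - s / tau)

t = np.arange(0.0, 100.0, 0.1)     # time in ms
V, E_e, E_i = -60.0, 0.0, -75.0    # holding and reversal potentials (mV)
lag = 2.0                          # inhibition lags excitation (ms)

g_e = alpha_conductance(t, 10.0, 5.0, 2.0)        # excitatory conductance
g_i = alpha_conductance(t, 10.0 + lag, 8.0, 5.0)  # inhibitory conductance

I_e = g_e * (V - E_e)   # excitatory post-synaptic current (inward, negative)
I_i = g_i * (V - E_i)   # inhibitory post-synaptic current (outward, positive)
net = I_e + I_i         # net current: depends on balance, lag and strength
```

In the full model the membrane potential evolves, which is what makes the mapping from conductance balance to current balance nonlinear; this fixed-potential sketch only shows how the two currents derive from conductances and reversal potentials, and how the lag shapes the net time course.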

Relevance: 30.00%

Abstract:

Traditionally, functional magnetic resonance imaging (fMRI) has been used to map activity in the human brain by measuring increases in the Blood Oxygenation Level Dependent (BOLD) signal. Positive BOLD fMRI signal changes are often accompanied by sustained negative signal changes. Previous studies investigating the neurovascular coupling mechanisms of the negative BOLD phenomenon have used concurrent two-dimensional optical imaging spectroscopy (2D-OIS) and electrophysiology (Boorman et al., 2010). These experiments suggested that the negative BOLD signal in response to whisker stimulation results from an increase in deoxy-haemoglobin and reduced multi-unit activity in the deep cortical layers. However, Boorman et al. (2010) did not measure the BOLD and haemodynamic responses concurrently, and so could not quantitatively compare either the spatial maps or the 2D-OIS and fMRI time series directly. Furthermore, their study utilised a homogeneous tissue model, which is predominantly sensitive to haemodynamic changes in the more superficial layers. Here we test whether the 2D-OIS technique is appropriate for studies of negative BOLD. We used concurrent fMRI and 2D-OIS to investigate the haemodynamics underlying the negative BOLD response at 7 Tesla, and examined whether optical methods can accurately map and measure the negative BOLD phenomenon by using 2D-OIS haemodynamic data to derive predictions from a biophysical model of BOLD signal changes. We show that, despite the deep cortical origin of the negative BOLD response, 2D-OIS can be used to investigate the negative BOLD phenomenon provided an appropriate heterogeneous tissue model is used in the spectroscopic analysis.