146 results for "Error gravity"


Relevance: 20.00%

Abstract:

A version of the Canadian Middle Atmosphere Model (CMAM) that is nudged toward reanalysis data up to 1 hPa is used to examine the impacts of parameterized orographic and non-orographic gravity wave drag (OGWD and NGWD) on the zonal-mean circulation of the mesosphere during the extended northern winters of 2006 and 2009 when there were two large stratospheric sudden warmings. The simulations are compared to Aura Microwave Limb Sounder (MLS) observations of mesospheric temperature, carbon monoxide (CO) and derived zonal winds. The control simulation, which uses both OGWD and NGWD, is shown to be in good agreement with MLS. The impacts of OGWD and NGWD are assessed using simulations in which those sources of wave drag are removed. In the absence of OGWD the mesospheric zonal winds in the months preceding the warmings are too strong, causing increased mesospheric NGWD, which drives excessive downwelling, resulting in overly large lower mesospheric values of CO prior to the warming. NGWD is found to be most important following the warmings when the underlying westerlies are too weak to allow much vertical propagation of the orographic gravity waves to the mesosphere. NGWD is primarily responsible for driving the circulation that results in the descent of CO from the thermosphere following the warmings. Zonal mean mesospheric winds and temperatures in all simulations are shown to be strongly constrained by (i.e. slaved to) the stratosphere. Finally, it is demonstrated that the responses to OGWD and NGWD are non-additive due to their dependence and influence on the background winds and temperatures.

Relevance: 20.00%

Abstract:

As low carbon technologies become more pervasive, distribution network operators are looking to support the expected changes in the demands on the low voltage networks through the smarter control of storage devices. Accurate forecasts of demand at the single household level, or of small aggregations of households, can improve the peak demand reduction brought about through such devices by helping to plan the appropriate charging and discharging cycles. However, before such methods can be developed, validation measures are required which can assess the accuracy and usefulness of forecasts of volatile and noisy household-level demand. In this paper we introduce a new forecast verification error measure that reduces the so-called “double penalty” effect, incurred by forecasts whose features are displaced in space or time, compared to traditional pointwise metrics such as Mean Absolute Error and p-norms in general. The measure that we propose is based on finding a restricted permutation of the original forecast that minimises the pointwise error according to a given metric. We illustrate the advantages of our error measure using half-hourly domestic household electrical energy usage data recorded by smart meters, and discuss the effect of the permutation restriction.
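A minimal sketch of such a permutation-restricted error measure, cast as an assignment problem: the window width `w`, the function name and the use of the Hungarian algorithm are illustrative assumptions on our part, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def adjusted_mae(forecast, actual, w=2):
    """MAE after matching each observation to a forecast value displaced
    by at most w time steps (a restricted permutation of the forecast)."""
    n = len(actual)
    cost = np.abs(actual[:, None] - forecast[None, :])  # pointwise |a_i - f_j|
    i, j = np.indices((n, n))
    cost[np.abs(i - j) > w] = 1e12  # forbid displacements beyond the window
    rows, cols = linear_sum_assignment(cost)  # optimal restricted permutation
    return cost[rows, cols].mean()

# A peak forecast one slot early is penalised twice by plain MAE
# (a miss plus a false alarm) but not by the adjusted measure.
actual = np.array([0., 0., 5., 0., 0.])
forecast = np.array([0., 5., 0., 0., 0.])
print(np.abs(forecast - actual).mean())     # plain MAE: 2.0
print(adjusted_mae(forecast, actual, w=1))  # adjusted MAE: 0.0
```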

Relevance: 20.00%

Abstract:

Background: Expression microarrays are increasingly used to obtain large-scale transcriptomic information on a wide range of biological samples. Nevertheless, there is still much debate on the best ways to process data, to design experiments and to analyse the output. Furthermore, many of the more sophisticated mathematical approaches to data analysis in the literature remain inaccessible to much of the biological research community. In this study we examine ways of extracting and analysing a large data set obtained using the Agilent long oligonucleotide transcriptomics platform, applied to a set of human macrophage and dendritic cell samples. Results: We describe and validate a series of data extraction, transformation and normalisation steps which are implemented via a new R function. Analysis of replicate normalised reference data demonstrates that intra-array variability is small (only around 2% of the mean log signal), while inter-array variability from replicate array measurements has a standard deviation (SD) of around 0.5 log2 units (6% of the mean). The common practice of working with ratios of Cy5/Cy3 signal offers little further improvement in terms of reducing error. Comparison to expression data obtained using Arabidopsis samples demonstrates that the large number of genes in each sample showing a low level of transcription reflects the real complexity of the cellular transcriptome. Multidimensional scaling is used to show that the processed data identify an underlying structure which reflects some of the key biological variables defining the data set. This structure is robust, allowing reliable comparison of samples collected over a number of years by a variety of operators. Conclusions: This study outlines a robust and easily implemented pipeline for extracting, transforming, normalising and visualising transcriptomic array data from the Agilent expression platform. The analysis is used to obtain quantitative estimates of the SD arising from experimental (non-biological) intra- and inter-array variability, and a lower threshold for determining whether an individual gene is expressed. The study provides a reliable basis for further, more extensive studies of the systems biology of eukaryotic cells.
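As a rough illustration of the variability estimates quoted above (the actual pipeline is implemented as an R function; the synthetic data layout below is our assumption), with `log2_signal` holding normalised log2 intensities for replicate arrays:

```python
import numpy as np

rng = np.random.default_rng(0)
probe_mean = rng.normal(8.0, 2.0, (1000, 1))                # gene-to-gene spread
log2_signal = probe_mean + rng.normal(0.0, 0.5, (1000, 4))  # 4 replicate arrays

# Inter-array variability: per-probe SD across replicate arrays, summarised
# over probes (the abstract reports ~0.5 log2 units, ~6% of the mean).
interarray_sd = np.std(log2_signal, axis=1, ddof=1).mean()
mean_signal = log2_signal.mean()
print(f"inter-array SD: {interarray_sd:.2f} log2 units "
      f"({100 * interarray_sd / mean_signal:.1f}% of mean)")

# One possible expression threshold: call a gene expressed when its mean
# log2 signal exceeds a background estimate by 2 inter-array SDs
# (the factor 2 and the 5% background quantile are assumptions).
background = np.quantile(log2_signal, 0.05)
expressed = log2_signal.mean(axis=1) > background + 2 * interarray_sd
print(f"{expressed.mean():.0%} of probes called expressed")
```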

Relevance: 20.00%

Abstract:

The optimal utilisation of hyper-spectral satellite observations in numerical weather prediction is often inhibited by incorrectly assuming independent interchannel observation errors. However, in order to represent these observation-error covariance structures, an accurate knowledge of the true variances and correlations is needed. This structure is likely to vary with observation type and assimilation system. The work in this article presents the initial results for the estimation of IASI interchannel observation-error correlations when the data are processed in the Met Office one-dimensional (1D-Var) and four-dimensional (4D-Var) variational assimilation systems. The method used to calculate the observation errors is a post-analysis diagnostic which utilises the background and analysis departures from the two systems. The results show significant differences in the source and structure of the observation errors when processed in the two different assimilation systems, but also highlight some common features. When the observations are processed in 1D-Var, the diagnosed error variances are approximately half the size of the error variances used in the current operational system and are very close in size to the instrument noise, suggesting that this is the main source of error. The errors contain no consistent correlations, with the exception of a handful of spectrally close channels. When the observations are processed in 4D-Var, we again find that the observation errors are being overestimated operationally, but the overestimation is significantly larger for many channels. In contrast to 1D-Var, the diagnosed error variances are often larger than the instrument noise in 4D-Var. It is postulated that horizontal errors of representation, not seen in 1D-Var, are a significant contributor to the overall error here. Finally, observation errors diagnosed from 4D-Var are found to contain strong, consistent correlation structures for channels sensitive to water vapour and surface properties.
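The departure-based diagnostic described above is commonly implemented as the estimate of Desroziers et al. (2005), R ≈ E[d_a d_bᵀ], built from paired background and analysis departures in observation space. A minimal sketch, assuming the departures are available as arrays (the Met Office implementation may differ in detail):

```python
import numpy as np

def diagnosed_R(d_b, d_a):
    """Desroziers-style estimate R ~ E[d_a d_b^T].

    d_b, d_a: background and analysis departures (observation minus model
    equivalent), shape (n_samples, n_channels) -- assumed inputs.
    """
    d_b = d_b - d_b.mean(axis=0)
    d_a = d_a - d_a.mean(axis=0)
    R = d_a.T @ d_b / d_b.shape[0]
    return 0.5 * (R + R.T)  # the raw estimate is not symmetric; symmetrise

def to_correlation(R):
    """Interchannel correlation matrix from the diagnosed covariance."""
    sd = np.sqrt(np.diag(R))
    return R / np.outer(sd, sd)
```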

Relevance: 20.00%

Abstract:

The observation-error covariance matrix used in data assimilation contains contributions from instrument errors, representativity errors and errors introduced by the approximated observation operator. Forward model errors arise when the observation operator does not correctly model the observations or when observations can resolve spatial scales that the model cannot. Previous work to estimate the observation-error covariance matrix for particular observing instruments has shown that it contains significant correlations. In particular, correlations for humidity data are more significant than those for temperature. However, it is not known what proportion of these correlations can be attributed to the representativity errors. In this article we apply an existing method for calculating representativity error, previously applied to an idealised system, to NWP data. We calculate horizontal errors of representativity for temperature and humidity using data from the Met Office high-resolution UK variable resolution model. Our results show that errors of representativity are correlated and more significant for specific humidity than temperature. We also find that representativity error varies with height. This suggests that the assimilation scheme may be improved if these errors are explicitly included in a data assimilation scheme. This article is published with the permission of the Controller of HMSO and the Queen's Printer for Scotland.
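As a generic illustration only (not necessarily the method applied in the paper), representativity error is often estimated by treating a high-resolution field as truth and differencing observation equivalents computed from the full and the resolution-degraded fields:

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def representativity_samples(hires_field, coarsening=8, obs_ix=None):
    """Samples of H(x_truth) - H(x_coarse) along one model row.

    hires_field: 1-D array of a high-resolution field (e.g. specific
    humidity along a latitude row); coarsening: ratio of the two grids.
    """
    coarse = uniform_filter1d(hires_field, size=coarsening)  # degrade resolution
    if obs_ix is None:
        obs_ix = np.arange(0, hires_field.size, coarsening)  # observation sites
    return hires_field[obs_ix] - coarse[obs_ix]

# Covariances/correlations of representativity error then follow from many
# such samples (e.g. np.cov over rows and times), as in the previous sketch.
```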

Relevance: 20.00%

Abstract:

During the VOCALS campaign, satellite observations showed that travelling gravity wave packets, generated by geostrophic adjustment, resulted in perturbations to marine boundary layer (MBL) clouds over the south-east Pacific Ocean (SEP). Often, these perturbations were reversible: passage of the wave resulted in the clouds becoming brighter (in the wave crest), then darker (in the wave trough), and subsequently recovering their properties after the passage of the wave. However, occasionally the wave packets triggered irreversible changes to the clouds, which transformed from closed mesoscale cellular convection to open form. In this paper we use large eddy simulation (LES) to examine the physical mechanisms that cause this transition. Specifically, we examine whether the clearing of the cloud is due to (i) additional cloud-top entrainment of warm, dry air caused by the wave, or (ii) the additional condensation of liquid water onto the existing drops and the subsequent formation of drizzle. We find that, although the wave does cause additional drizzle formation, this is not the reason for the persistent clearing of the cloud; rather, it is the additional entrainment of warm, dry air into the cloud followed by a reduction in longwave cooling, although this only has a significant effect when the cloud is starting to decouple from the boundary layer. The result in this case is a change from stratocumulus to a more patchy cloud regime. For the simulations presented here, cloud condensation nuclei (CCN) scavenging did not play an important role in the clearing of the cloud. The results have implications for understanding transitions between the different cellular regimes in MBL clouds.

Relevance: 20.00%

Abstract:

This paper considers supply dynamics in the context of the Irish residential market. The analysis, in a multiple error-correction framework, reveals that although developers did respond to disequilibrium in supply, the rate of adjustment was relatively slow. In contrast, however, disequilibrium in demand did not impact upon supply, suggesting that inelastic supply conditions could explain the prolonged nature of the boom in the Irish market. Increased elasticity in the later stages of the boom may have been a contributory factor in the extent of the house price falls observed in recent years.
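To make the error-correction mechanics concrete, here is a minimal two-step (Engle–Granger-style) sketch on synthetic data; the paper's multiple error-correction framework is richer, and every series and name below is invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120
price = np.cumsum(rng.normal(0.5, 1.0, n))    # synthetic house-price index
supply = 0.8 * price + rng.normal(0, 1.0, n)  # synthetic housing completions

# Long-run relationship: supply_t = a + b * price_t + ect_t
long_run = sm.OLS(supply, sm.add_constant(price)).fit()
ect = long_run.resid                          # error-correction term

# Short-run dynamics: change in supply responds to lagged disequilibrium.
d_supply, d_price = np.diff(supply), np.diff(price)
X = sm.add_constant(np.column_stack([ect[:-1], d_price]))
ecm = sm.OLS(d_supply, X).fit()
print(ecm.params)  # the coefficient on ect is the speed of adjustment
```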

Relevance: 20.00%

Abstract:

In this paper, ensembles of forecasts (of up to six hours) are studied from a convection-permitting model with a representation of model error due to unresolved processes. The ensemble prediction system (EPS) used is an experimental convection-permitting version of the UK Met Office’s 24-member Global and Regional Ensemble Prediction System (MOGREPS). The method of representing model error variability, which perturbs parameters within the model’s parameterisation schemes, has been modified, and we investigate the impact of applying this scheme in different ways. These are: a control ensemble where all ensemble members have the same parameter values; an ensemble where the parameters differ between members but are fixed in time; and ensembles where the parameters are updated randomly every 30 or 60 min. The choice of parameters and their ranges of variability have been determined from expert opinion and parameter sensitivity tests. A case of frontal rain over the southern UK has been chosen, which has a multi-banded rainfall structure. The consequences of including model error variability in the case studied are mixed and are summarised as follows. The multiple banding, evident in the radar, is not captured by any single member. However, the single band is positioned in some members where a secondary band is present in the radar. This is found for all ensembles studied. Adding model error variability with parameters fixed in time does increase the ensemble spread for near-surface variables like wind and temperature, but can actually decrease the spread of the rainfall. Perturbing the parameters periodically throughout the forecast does not further increase the spread, and exhibits “jumpiness” in the spread at the times when the parameters are perturbed. Adding model error variability gives an improvement in forecast skill after the first 2–3 h of the forecast for near-surface temperature and relative humidity. For precipitation skill scores, adding model error variability improves the skill in the first 1–2 h of the forecast, but reduces it thereafter. Complementary experiments were performed where the only difference between members was the set of parameter values (i.e. no initial condition variability). The resulting spread was found to be significantly less than the spread from initial condition variability alone.
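A schematic of the three parameter-handling configurations, with placeholder parameter names and ranges (the real choices come from expert opinion and sensitivity tests, as noted above):

```python
import numpy as np

RANGES = {"entrainment_rate": (0.5, 1.5), "ice_fall_speed": (0.7, 1.3)}  # placeholders

def draw_parameters(rng):
    """Draw one random value per parameter from its allowed range."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in RANGES.items()}

def member_parameters(rng, n_steps, mode="control", update_every=None):
    """Yield the parameter set used at each model step for one member.

    mode="control": all members keep default values;
    mode="fixed":   perturbed once, then held constant in time;
    mode="random":  redrawn every update_every steps (e.g. 30 or 60 min).
    """
    params = {k: 1.0 for k in RANGES} if mode == "control" else draw_parameters(rng)
    for step in range(n_steps):
        if mode == "random" and update_every and step % update_every == 0:
            params = draw_parameters(rng)
        yield dict(params)  # the model step itself would consume these
```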

Relevance: 20.00%

Abstract:

Semi-analytical expressions for the momentum flux associated with orographic internal gravity waves, and closed analytical expressions for its divergence, are derived for inviscid, stationary, hydrostatic, directionally-sheared flow over mountains with an elliptical horizontal cross-section. These calculations, obtained using linear theory combined with a third-order WKB approximation, are valid for relatively slowly varying, but otherwise generic, wind profiles, and are given in a form that is straightforward to implement in drag parametrization schemes. When normalized by the surface drag in the absence of shear, a quantity that is calculated routinely in existing drag parametrizations, the momentum flux becomes independent of the detailed shape of the orography. Unlike linear theory in the Ri → ∞ limit, the present calculations account for shear-induced amplification or reduction of the surface drag, and for partial absorption of the wave momentum flux at critical levels. Profiles of the normalized momentum fluxes obtained using this model and a linear numerical model without the WKB approximation are evaluated and compared for two idealized wind profiles with directional shear, for different Richardson numbers (Ri). Agreement is excellent for the first wind profile (where one of the wind components varies linearly) down to Ri = 0.5; for the second wind profile (where the wind turns with height at a constant rate while keeping a constant magnitude), agreement is less satisfactory, but still represents a large improvement over the Ri → ∞ limit. These results are complementary, in the Ri > O(1) parameter range, to Broad’s generalization of the Eliassen–Palm theorem to 3D flow. They should contribute to improving drag parametrizations used in global weather and climate prediction models.
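For reference, the gradient Richardson number used to characterise the shear profiles has the standard definition below (our statement of the usual convention, with Brunt–Väisälä frequency N and wind components U and V); the Ri → ∞ limit recovers the unsheared linear theory.

```latex
\[
  \mathrm{Ri} \;=\;
  \frac{N^{2}}{\left(\partial U/\partial z\right)^{2}
             + \left(\partial V/\partial z\right)^{2}}
\]
```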

Relevance: 20.00%

Abstract:

Numerical climate models constitute the best available tools to tackle the problem of climate prediction. Two assumptions lie at the heart of their suitability: (1) a climate attractor exists, and (2) the numerical climate model's attractor lies on the actual climate attractor, or at least on the projection of the climate attractor on the model's phase space. In this contribution, the Lorenz '63 system is used both as a prototype system and as an imperfect model to investigate the implications of the second assumption. By comparing results drawn from the Lorenz '63 system and from numerical weather and climate models, the implications of using imperfect models for the prediction of weather and climate are discussed. It is shown that the imperfect model's orbit and the system's orbit are essentially different, purely due to model error and not to sensitivity to initial conditions. Furthermore, if a model is a perfect model, then the attractor, reconstructed by sampling a collection of initialised model orbits (forecast orbits), will be invariant to forecast lead time. This conclusion provides an alternative method for the assessment of climate models.
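A minimal sketch of the perfect- versus imperfect-model comparison with Lorenz '63; perturbing the parameter r to create the imperfect model is our illustrative choice, not necessarily the paper's.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz63(t, xyz, sigma=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = xyz
    return [sigma * (y - x), x * (r - z) - y, x * y - b * z]

x0 = [1.0, 1.0, 1.0]
t_eval = np.linspace(0, 20, 2000)

truth = solve_ivp(lorenz63, (0, 20), x0, t_eval=t_eval)  # the "system"
model = solve_ivp(lorenz63, (0, 20), x0, t_eval=t_eval,
                  args=(10.0, 26.0, 8.0 / 3.0))          # imperfect model (r perturbed)

# Identical initial conditions, so the growing gap between the orbits is
# purely model error, not sensitivity to initial conditions.
gap = np.linalg.norm(truth.y - model.y, axis=0)
print(f"orbit separation after 20 time units: {gap[-1]:.1f}")
```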

Relevance: 20.00%

Abstract:

Diabatic processes can alter Rossby wave structure; consequently, errors arising from model processes propagate downstream. However, the chaotic spread of forecasts from initial condition uncertainty makes it difficult to trace root-mean-square forecast errors back to model errors. Here, diagnostics unaffected by phase errors are used, enabling investigation of systematic errors in Rossby waves in winter-season forecasts from three operational centers. Tropopause sharpness adjacent to ridges decreases with forecast lead time. It depends strongly on model resolution, even though models are examined on a common grid. Rossby wave amplitude reduces with lead time up to about five days, consistent with under-representation of diabatic modification and transport of air from the lower troposphere into upper-tropospheric ridges, and with too-weak humidity gradients across the tropopause. However, amplitude also decreases when resolution is decreased. Further work is necessary to isolate the contribution from errors in the representation of diabatic processes.

Relevance: 20.00%

Abstract:

In order to examine metacognitive accuracy (i.e., the relationship between metacognitive judgments and memory performance), researchers often rely on by-participant analysis, where metacognitive accuracy (e.g., resolution, as measured by the gamma coefficient or signal detection measures) is computed for each participant and the computed values are entered into group-level statistical tests such as the t-test. In the current work, we argue that by-participant analysis, regardless of the accuracy measure used, produces a substantial inflation of Type-1 error rates when a random item effect is present. A mixed-effects model is proposed as a way to effectively address the issue, and our simulation studies examining Type-1 error rates indeed showed superior performance of mixed-effects model analysis compared to the conventional by-participant analysis. We also present real-data applications to illustrate further strengths of mixed-effects model analysis. Our findings imply that caution is needed when using by-participant analysis, and we recommend mixed-effects model analysis.
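A sketch of the mixed-effects alternative using statsmodels, with hypothetical column names; note that MixedLM is linear-Gaussian, so for a strictly binary memory outcome a logistic mixed model (e.g. lme4::glmer in R) would be the more standard choice.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_mixed(df: pd.DataFrame):
    """df rows: one (participant, item) trial with columns 'participant',
    'item', 'judgment' (metacognitive rating) and 'correct' (0/1 outcome)."""
    model = smf.mixedlm(
        "correct ~ judgment",                # fixed effect: resolution slope
        data=df,
        groups="participant",                # random participant intercepts
        re_formula="~judgment",              # random judgment slopes
        vc_formula={"item": "0 + C(item)"},  # crossed random item effect
    )
    return model.fit()
```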

Relevance: 20.00%

Abstract:

Over the last decade, due to the Gravity Recovery And Climate Experiment (GRACE) mission and, more recently, the Gravity field and steady-state Ocean Circulation Explorer (GOCE) mission, our ability to measure the ocean’s mean dynamic topography (MDT) from space has improved dramatically. Here we use GOCE to measure surface current speeds in the North Atlantic and compare our results with a range of independent estimates that use drifter data to improve small scales. We find that, with filtering, GOCE can recover 70% of the Gulf Stream strength relative to the best drifter-based estimates. In the subpolar gyre the boundary currents obtained from GOCE are close to the drifter-based estimates. Crucial to this result is careful filtering, which is required to remove small-scale errors, or noise, in the computed surface. We show that our heuristic noise metric, used to determine the degree of filtering, compares well with the quadratic sum of mean sea surface and formal geoid errors obtained from the error variance–covariance matrix associated with the GOCE gravity model. At a resolution of 100 km, the North Atlantic mean GOCE MDT error before filtering is 5 cm, with almost all of this coming from the GOCE gravity model.
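A sketch of the downstream computation: geostrophic surface currents from a gridded MDT, with Gaussian filtering to suppress small-scale geoid noise. The grid layout and filter width are illustrative, and the paper's heuristic noise metric for choosing the degree of filtering is not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

G, OMEGA = 9.81, 7.2921e-5  # gravity, Earth rotation rate

def surface_currents(mdt, lat, dx, dy, sigma=2.0):
    """u = -(g/f) dMDT/dy, v = (g/f) dMDT/dx on a lat-lon grid.

    mdt: 2-D array (rows = latitudes, cols = longitudes), metres;
    lat: 1-D latitudes in degrees; dx, dy: grid spacings in metres.
    """
    eta = gaussian_filter(mdt, sigma)                 # filter geoid noise
    f = 2 * OMEGA * np.sin(np.deg2rad(lat))[:, None]  # Coriolis parameter
    deta_dy, deta_dx = np.gradient(eta, dy, dx)
    u = -(G / f) * deta_dy
    v = (G / f) * deta_dx
    return u, v, np.hypot(u, v)                       # components and speed
```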

Relevance: 20.00%

Abstract:

In probabilistic decision tasks, an expected value (EV) of a choice is calculated and, after the choice has been made, can be updated based on a temporal difference (TD) prediction error between the EV and the reward magnitude (RM) obtained. The EV is computed as the probability of obtaining a reward multiplied by the RM. To understand the contribution of different brain areas to these decision-making processes, functional magnetic resonance imaging activations related to EV versus RM (or outcome) were measured in a probabilistic decision task. Activations in the medial orbitofrontal cortex were correlated with both RM and EV, and a conjunction analysis confirmed that they extend toward the pregenual cingulate cortex. From these representations, TD reward prediction errors could be produced. Activations in areas that receive projections from the orbitofrontal cortex, including the ventral striatum, midbrain, and inferior frontal gyrus, were correlated with the TD error. Activations in the anterior insula were correlated negatively with EV, occurring when low reward outcomes were expected, and also with the uncertainty of the reward, implicating this region in basic and crucial decision-making parameters: low expected outcomes and uncertainty.
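The EV and TD-error quantities described above reduce to a few lines; the learning rate `alpha` is our addition, as the abstract does not specify an update rule beyond the TD error itself.

```python
def expected_value(p_reward: float, magnitude: float) -> float:
    """EV = probability of obtaining the reward x reward magnitude."""
    return p_reward * magnitude

def td_update(ev: float, outcome: float, alpha: float = 0.1):
    """TD prediction error and the updated EV once the outcome is known."""
    delta = outcome - ev  # obtained reward magnitude minus expectation
    return delta, ev + alpha * delta

# Example: a 40% chance of 10 units gives EV = 4; a win yields delta = +6.
ev = expected_value(0.4, 10.0)
delta, new_ev = td_update(ev, outcome=10.0)
print(ev, delta, new_ev)  # 4.0 6.0 4.6
```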