887 results for Bias-corrected average forecast


Relevance:

30.00%

Publisher:

Abstract:

Assimilation of physical variables into coupled physical/biogeochemical models poses considerable difficulties. One problem is that data assimilation can break relationships between physical and biological variables. As a consequence, biological tracers, especially nutrients, are incorrectly displaced in the vertical, resulting in unrealistic biogeochemical fields. To prevent this, we present the idea of applying an increment to the nutrient field within a data-assimilating model to ensure that nutrient-potential density relationships are maintained within a water column during assimilation. After correcting the nutrients, it is assumed that other biological variables rapidly adjust to the corrected nutrient fields. We applied this method to a 17-year run of the 2° NEMO ocean-ice model coupled to the PlankTOM5 ecosystem model. Results were compared with a control with no assimilation, and with a model with physical assimilation but no nutrient increment. In the nutrient-incrementing experiment, phosphate distributions were improved both at high latitudes and at the equator. At midlatitudes, assimilation generated unrealistic advective upwelling of nutrients within the boundary currents, which spread into the subtropical gyres, resulting in more biased nutrient fields. This result was largely unaffected by the nutrient increment and is probably due to boundary currents being poorly resolved in a 2° model. Changes to nutrient distributions fed through into other biological parameters, altering primary production, air-sea CO2 flux, and chlorophyll distributions. These secondary changes were most pronounced in the subtropical gyres and at the equator, which are more nutrient-limited than high latitudes.
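
A minimal sketch of the nutrient-increment idea, assuming the water column is handled as 1-D profile arrays and that the pre-assimilation nutrient-potential density relationship can be inverted by interpolation (names and layout are hypothetical, not the NEMO/PlankTOM5 implementation):

    import numpy as np

    def nutrient_increment(density_pre, nutrient_pre, density_post):
        """Increment that restores the pre-assimilation nutrient-potential
        density relationship after assimilation has updated the density.
        All arguments are 1-D arrays over a single water column."""
        order = np.argsort(density_pre)  # np.interp needs increasing x
        # Nutrient value the pre-assimilation state associated with each
        # post-assimilation density
        nutrient_post = np.interp(density_post,
                                  density_pre[order], nutrient_pre[order])
        return nutrient_post - nutrient_pre  # add this to the nutrient field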

Relevance:

30.00%

Publisher:

Abstract:

In this paper the meteorological processes responsible for transporting tracer during the second ETEX (European Tracer EXperiment) release are determined using the UK Met Office Unified Model (UM). The UM-predicted distribution of tracer is also compared with observations from the ETEX campaign. The dominant meteorological process is a warm conveyor belt, which transports large amounts of tracer away from the surface up to a height of 4 km over a 36 h period. Convection is also an important process, transporting tracer to heights of up to 8 km. Potential sources of error when using an operational numerical weather prediction model to forecast air quality are also investigated; these include model dynamics, model resolution and model physics. In the UM a semi-Lagrangian monotonic advection scheme is used with cubic polynomial interpolation. This can predict unrealistic negative values of tracer, which are subsequently set to zero, and hence results in an overprediction of tracer concentrations. In order to conserve mass in the UM tracer simulations it was necessary to include a flux-corrected transport method. Model resolution can also affect the accuracy of predicted tracer distributions. Low-resolution simulations (50 km grid length) were unable to resolve a change in wind direction observed during ETEX 2; this led to an error in the transport direction and hence an error in tracer distribution. High-resolution simulations (12 km grid length) captured the change in wind direction and hence produced a tracer distribution that compared better with the observations. The representation of convective mixing was found to have a large effect on the vertical transport of tracer: turning off the convective mixing parameterisation in the UM significantly reduced the vertical transport of tracer. Finally, air quality forecasts were found to be sensitive to the timing of synoptic-scale features. An error of only 1 h in the position of the cold front relative to the tracer release location resulted in changes in the predicted tracer concentrations that were of the same order of magnitude as the absolute tracer concentrations.
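
A toy illustration of the overprediction mechanism described above: cubic interpolation undershoots near a sharp tracer edge, and zeroing the negatives adds mass. This stand-in scheme is not the UM's advection code:

    import numpy as np
    from scipy.interpolate import CubicSpline

    x = np.linspace(0.0, 1.0, 101)
    tracer = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)  # sharp-edged puff

    # One semi-Lagrangian-style step: evaluate the field at departure points
    departure = x - 0.013  # constant wind, offset from the grid
    advected = CubicSpline(x, tracer)(departure)

    print("min after advection:", advected.min())  # negative: undershoot
    clipped = np.clip(advected, 0.0, None)         # "set to zero"
    print("mass added by clipping: %.2f%%"
          % (100.0 * (clipped.sum() - advected.sum()) / tracer.sum()))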

Relevance:

30.00%

Publisher:

Abstract:

Given the significance of forecasting in real estate investment decisions, this paper investigates forecast uncertainty and disagreement in real estate market forecasts. Using the Investment Property Forum (IPF) quarterly survey of UK independent real estate forecasters, these forecasts are compared with actual real estate performance to assess a number of forecasting issues in the UK over 1999-2004, including forecast error, bias and consensus. The results suggest that real estate forecasts are biased, less volatile than market returns, and inefficient in that forecast errors tend to persist. The strongest finding is that real estate forecasters display the characteristics associated with a consensus, indicating herding.
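
A sketch of the kinds of checks involved (hypothetical data and function names, not the IPF survey analysis):

    import numpy as np
    from scipy import stats

    def evaluate_forecasts(forecast, actual):
        """Bias, relative volatility and error persistence of one series."""
        error = forecast - actual
        t, p = stats.ttest_1samp(error, 0.0)            # bias: mean error != 0?
        vol_ratio = forecast.std() / actual.std()       # < 1: smoothed forecasts
        rho = np.corrcoef(error[1:], error[:-1])[0, 1]  # persistence => inefficiency
        return {"mean_error": error.mean(), "bias_p": p,
                "volatility_ratio": vol_ratio, "error_autocorr": rho}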

Relevance:

30.00%

Publisher:

Abstract:

This study evaluates the use of European Centre for Medium-Range Weather Forecasts (ECMWF) products in monitoring and forecasting drought conditions during the recent 2010–2011 drought in the Horn of Africa (HoA). The region was affected by a precipitation deficit in both the October–December 2010 and March–May 2011 rainy seasons. These anomalies were captured by the ERA-Interim reanalysis (ERAI), despite its limitations in representing the March–May interannual variability. Soil moisture anomalies in ERAI also identified the onset of the drought early in October 2010, with a persistent drought still present in September 2011. This signal was also evident in normalized difference vegetation index (NDVI) remote sensing data. The precipitation deficit in October–December 2010 was associated with a strong La Niña event. The ECMWF seasonal forecasts for the October–December 2010 season predicted the La Niña event from June 2010 onwards, and a below-average October–December rainfall from July 2010 onwards. The subsequent March–May rainfall anomaly was only captured by the new ECMWF seasonal forecast system in the forecasts starting in March 2011. Our analysis shows that a recent (since 1999) drying in the region during the March–May season is captured by the new ECMWF seasonal forecast system and is consistent with recently published results. The HoA region and its population are highly vulnerable to future droughts; thus global monitoring and forecasting of drought, such as that presented here, will become increasingly important in the future. Copyright © 2012 Royal Meteorological Society
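
As a minimal illustration of the anomaly diagnostics such monitoring relies on (a generic standardized anomaly, not the ECMWF products' actual processing):

    import numpy as np

    def standardized_anomaly(season_totals, year):
        """Seasonal rainfall total for one year, expressed in standard
        deviations from the climatological mean of that season."""
        clim_mean = season_totals.mean()
        clim_std = season_totals.std(ddof=1)
        return (season_totals[year] - clim_mean) / clim_std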

Relevance:

30.00%

Publisher:

Abstract:

In mid-March 2005 the northern lower-stratospheric polar vortex experienced a severe stretching episode, bringing a large polar filament far south of Alaska toward Hawaii. This meridional intrusion of rare extent, coinciding with the polar vortex final warming and breakdown, was followed by a zonal stretching in the wake of the easterly propagating subtropical main flow. This caused polar air to remain over Hawaii for several days before diluting into the subtropics. After being successfully forecast to pass over Hawaii by the high-resolution potential vorticity advection model Modèle Isentrope du transport Méso-échelle de l'Ozone Stratosphérique par Advection (MIMOSA), the filament was observed on isentropic surfaces between 415 K and 455 K (17–20 km) by the Jet Propulsion Laboratory stratospheric ozone lidar at Mauna Loa Observatory, Hawaii, between 16 and 19 March 2005. It appeared as a thin layer of enhanced ozone peaking at 1.6 ppmv in a region where climatological values average 1.0 ppmv. These values were compared to those obtained by the three-dimensional chemistry-transport model MIMOSA-CHIM. Agreement between lidar and model was excellent, particularly in the similar appearance of the ozone peak near 435 K (18.5 km) on 16 March, and the persistence of this layer at higher isentropic levels for the following three days. Passive ozone, also modeled by MIMOSA-CHIM, was at about 3–4 ppmv inside the filament while above Hawaii. A detailed history of the modeled chemistry inside the filament suggests that the air mass was still polar ozone-depleted when passing over Hawaii. The filament quickly separated from the main vortex after its Hawaiian overpass. It never reconnected and, in less than 10 days, dispersed entirely in the subtropics.

Relevance:

30.00%

Publisher:

Abstract:

Using annual observations on industrial production over the last three centuries, and on GDP over a 100-year period, we seek an historical perspective on the forecastability of these UK output measures. The series are dominated by strong upward trends, so we consider various specifications of the trend, including the local linear trend structural time-series model, which allows the level and slope of the trend to vary. Our results are not unduly sensitive to how the trend in the series is modelled: the average sizes of the forecast errors of all models, and the wide span of prediction intervals, attest to a great deal of uncertainty in the economic environment. It appears that, from an historical perspective, the postwar period has been relatively more forecastable.
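
For reference, the local linear trend model mentioned above is conventionally written as

    y_t = \mu_t + \varepsilon_t,            \varepsilon_t \sim N(0, \sigma_\varepsilon^2)
    \mu_{t+1} = \mu_t + \nu_t + \xi_t,      \xi_t \sim N(0, \sigma_\xi^2)
    \nu_{t+1} = \nu_t + \zeta_t,            \zeta_t \sim N(0, \sigma_\zeta^2)

where \mu_t is the level and \nu_t the slope of the trend; setting \sigma_\xi^2 = \sigma_\zeta^2 = 0 recovers a deterministic linear trend.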

Relevance:

30.00%

Publisher:

Abstract:

The evaluation of forecast performance plays a central role both in the interpretation and use of forecast systems and in their development. Different evaluation measures (scores) are available, often quantifying different characteristics of forecast performance. The properties of several proper scores for probabilistic forecast evaluation are contrasted and then used to interpret decadal probability hindcasts of global mean temperature. The Continuous Ranked Probability Score (CRPS), Proper Linear (PL) score, and I. J. Good's logarithmic score (also referred to as Ignorance) are compared; although information from all three may be useful, the logarithmic score has an immediate interpretation and is sensitive to forecast busts. Neither CRPS nor PL is local; this is shown to produce counterintuitive evaluations by CRPS. Benchmark forecasts from empirical models such as Dynamic Climatology place the scores in context. Comparing scores for forecast systems based on physical models (in this case HadCM3, from the CMIP5 decadal archive) against such benchmarks is more informative than comparing forecast systems based on similar physical simulation models with each other. It is shown that a forecast system based on HadCM3 outperforms Dynamic Climatology in decadal global mean temperature hindcasts; Dynamic Climatology previously outperformed a forecast system based upon HadGEM2, and reasons for these results are suggested. Forecasts of aggregate data (5-year means of global mean temperature) are, of course, narrower than forecasts of annual averages due to the suppression of variance; while the average “distance” between the forecasts and a target may be expected to decrease, little if any discernible improvement in probabilistic skill is achieved.
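
For a Gaussian forecast both scores have closed forms, which makes the contrast easy to see; a minimal sketch (the forecast and outcome below are illustrative numbers only):

    import numpy as np
    from scipy import stats

    def ignorance(mu, sigma, outcome):
        """I. J. Good's logarithmic score in bits (lower is better)."""
        return -np.log2(stats.norm.pdf(outcome, mu, sigma))

    def crps_gaussian(mu, sigma, outcome):
        """Closed-form CRPS for a Gaussian forecast (lower is better)."""
        z = (outcome - mu) / sigma
        return sigma * (z * (2 * stats.norm.cdf(z) - 1)
                        + 2 * stats.norm.pdf(z) - 1 / np.sqrt(np.pi))

    # A forecast "bust": the outcome lies far outside the forecast spread.
    print(ignorance(0.2, 0.1, 0.9))      # grows rapidly with the miss
    print(crps_gaussian(0.2, 0.1, 0.9))  # grows only linearly with the miss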

Relevance:

30.00%

Publisher:

Abstract:

The topography of many floodplains in the developed world has now been surveyed with high resolution sensors such as airborne LiDAR (Light Detection and Ranging), giving accurate Digital Elevation Models (DEMs) that facilitate accurate flood inundation modelling. This is not always the case for remote rivers in developing countries. However, the accuracy of DEMs produced for modelling studies on such rivers should be enhanced in the near future by the high resolution TanDEM-X WorldDEM. In a parallel development, increasing use is now being made of flood extents derived from high resolution Synthetic Aperture Radar (SAR) images for calibrating, validating and assimilating observations into flood inundation models in order to improve them. This paper discusses an additional use of SAR flood extents, namely to improve the accuracy of the TanDEM-X DEM in the floodplain covered by the flood extents, thereby permanently improving this DEM for future flood modelling and other studies. The method is based on the fact that for larger rivers the water elevation generally changes only slowly along a reach, so that the boundary of the flood extent (the waterline) can be regarded locally as a quasi-contour. As a result, heights of adjacent pixels along a small section of waterline can be regarded as samples with a common population mean. The height of the central pixel in the section can be replaced with the average of these heights, leading to a more accurate estimate. While this results in a reduction in the height errors along a waterline, the waterline is only a linear feature in a two-dimensional space. However, improvements to the DEM heights between adjacent pairs of waterlines can also be made, because DEM heights enclosed by the higher waterline of a pair must in general be no higher than the corrected heights along the higher waterline, whereas DEM heights not enclosed by the lower waterline must in general be no lower than the corrected heights along the lower waterline. In addition, DEM heights between the higher and lower waterlines can also be assigned smaller errors because of the reduced errors on the corrected waterline heights. The method was tested on a section of the TanDEM-X Intermediate DEM (IDEM) covering an 11 km reach of the Warwickshire Avon, England. Flood extents from four COSMO-SkyMed images were available at various stages of a flood in November 2012, and a LiDAR DEM was available for validation. In the area covered by the flood extents, the original IDEM heights had a mean difference from the corresponding LiDAR heights of 0.5 m with a standard deviation of 2.0 m, while the corrected heights had a mean difference of 0.3 m with standard deviation 1.2 m. These figures show that significant reductions in IDEM height bias and error can be made using the method, with the corrected error being only 60% of the original. Even if only a single SAR image obtained near the peak of the flood was used, the corrected error was only 66% of the original. The method should also be capable of improving the final TanDEM-X DEM and other DEMs, and may also be of use with data from the SWOT (Surface Water and Ocean Topography) satellite.
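
A minimal sketch of the waterline-averaging step, assuming the waterline has already been extracted as an ordered 1-D array of DEM heights (the window length is an arbitrary choice here, not the paper's):

    import numpy as np

    def correct_waterline_heights(heights, window=9):
        """Replace each waterline pixel height by the mean over a short
        section centred on it, treating the section as samples of a common
        water level (the waterline as a local quasi-contour). The error of
        a mean of n samples shrinks roughly as 1/sqrt(n)."""
        half = window // 2
        padded = np.pad(heights, half, mode="edge")
        kernel = np.ones(window) / window
        return np.convolve(padded, kernel, mode="valid")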

Relevance:

30.00%

Publisher:

Abstract:

We developed an analytical method and constrained procedural boundary conditions that enable accurate and precise Zn isotope ratio measurements in urban aerosols. We also demonstrate the potential of this new isotope system for air pollutant source tracing. The procedural blank is around 5 ng, significantly lower than published methods due to a tailored ion chromatographic separation. Accurate mass bias correction using external correction with Cu is limited to a Zn sample content of approximately 50 ng, due to the combined effect of the blank contribution of Cu and Zn from the ion exchange procedure and the need to maintain a Cu/Zn ratio of approximately 1. Mass bias is instead corrected for by applying the common analyte internal standardization approach. Comparison with other mass bias correction methods demonstrates the accuracy of the method. The average precision of δ66Zn determinations in aerosols is around 0.05‰ per atomic mass unit. The method was tested on aerosols collected in São Paulo City, Brazil. The measurements reveal significant variations in δ66Zn_Imperial, ranging between -0.96 and -0.37‰ in coarse and between -1.04 and 0.02‰ in fine particulate matter. This variability suggests that Zn isotopic compositions can distinguish atmospheric sources; the isotopically light signature suggests traffic as the main source. We further report δ66Zn_Imperial data for the standard reference material NIST SRM 2783 (δ66Zn_Imperial = 0.26 ± 0.10‰).
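
For reference, the delta notation used above reports a sample's 66Zn/64Zn ratio relative to a reference standard (here the Imperial standard) in parts per thousand; a minimal sketch:

    def delta66zn(ratio_sample, ratio_standard):
        """delta-66Zn in per mil: relative deviation of the sample's
        66Zn/64Zn ratio from the standard's, times 1000."""
        return (ratio_sample / ratio_standard - 1.0) * 1000.0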

Relevance:

30.00%

Publisher:

Abstract:

Consumers often pay different prices for the same product bought in the same store at the same time. However, the demand estimation literature has ignored this fact, using instead aggregate measures such as the “list” or average price. In this paper we show that this leads to biased price coefficients. Furthermore, we perform simple comparative statics simulation exercises for the logit and random coefficient models. In the “list” price case we find that the bias is larger when discounts are deeper, when the proportion of consumers facing discount prices is higher, and when consumers are so unwilling to buy the product that they purchase almost only when facing a discount. In the average price case we find that the bias is larger when discounts are deeper, when the proportion of consumers with access to discounts is similar to the proportion without access, and when consumers' willingness to buy depends heavily on idiosyncratic shocks. The bias is also less problematic in the average price case in markets with many bargain deals, since the average price is then close to the individual prices. We conclude by proposing ways in which the econometrician can reduce this bias using different information that may be available.
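
A toy Monte Carlo in the spirit of these simulation exercises (a binary logit with hypothetical numbers, not the paper's actual specification), showing how using the list price instead of the transaction price distorts the estimated price coefficient:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    beta, n_per, periods = -1.0, 5000, 20
    paid, listed, buy = [], [], []
    for _ in range(periods):
        list_price = rng.uniform(8.0, 12.0)
        discount = rng.random(n_per) < 0.4             # 40% get a 30% discount
        price = np.where(discount, 0.7 * list_price, list_price)
        # Buy if utility beta*price plus a logistic shock clears a threshold
        choice = beta * price + rng.logistic(size=n_per) > -9.0
        paid.append(price)
        listed.append(np.full(n_per, list_price))
        buy.append(choice.astype(float))
    paid, listed, buy = map(np.concatenate, (paid, listed, buy))

    for label, p in (("transaction price", paid), ("list price", listed)):
        b = sm.Logit(buy, sm.add_constant(p)).fit(disp=0).params[1]
        print(label, round(float(b), 3))  # list price biases the estimate of beta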

Relevance:

30.00%

Publisher:

Abstract:

A main purpose of a mathematical nutrition model (also known as a feeding system) is to provide a mathematical approach for determining the amount and composition of the diet necessary for a certain level of animal productive performance. Feeding systems should therefore be able to predict voluntary feed intake and to partition nutrients into different productive functions and performances. In recent decades, several feeding systems for goats have been developed. The objective of this paper is to compare and evaluate the main goat feeding systems (AFRC, CSIRO, NRC, and SRNS), using data on individual growing goat kids from seven studies conducted in Brazil. The feeding systems were evaluated by regressing the residuals (observed minus predicted) on the predicted values centered on their means. The comparisons showed that these systems differ in their approach for estimating dry matter intake (DMI) and energy requirements for growing goats. The AFRC system was the most accurate for predicting DMI (mean bias = 91 g/d, P < 0.001; linear bias = 0.874). The average ADG accounted for a large part of the bias in the prediction of DMI by the CSIRO, NRC and, especially, the AFRC systems. The CSIRO model gave the most accurate predictions of ADG when observed DMI was used as input (mean bias = 12 g/d, P < 0.001; linear bias = -0.229), while the AFRC was the most accurate when predicted DMI was used (mean bias = 8 g/d, P > 0.1; linear bias = -0.347). © 2011 Elsevier B.V. All rights reserved.
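
A minimal sketch of the evaluation regression described above (residuals regressed on mean-centered predictions); variable names are generic:

    import numpy as np
    import statsmodels.api as sm

    def mean_and_linear_bias(observed, predicted):
        """Intercept = mean bias, slope = linear bias; (0, 0) indicates an
        accurate model over the whole range of predictions."""
        resid = observed - predicted                # observed minus predicted
        centered = predicted - predicted.mean()
        fit = sm.OLS(resid, sm.add_constant(centered)).fit()
        return fit.params[0], fit.params[1]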

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

Graduate program in Genetics and Animal Breeding - FCAV

Relevance:

30.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

30.00%

Publisher:

Abstract:

Background: The World Health Organization estimates that in sub-Saharan Africa about 4 million HIV-infected patients had started antiretroviral therapy (ART) by the end of 2008. Loss of patients to follow-up and care is an important problem for treatment programmes in this region. As mortality among patients lost to follow-up is high compared to patients remaining in care, ART programmes with high rates of loss to follow-up may substantially underestimate the mortality of all patients starting ART.

Methods and Findings: We developed a nomogram to correct mortality estimates for loss to follow-up, based on the fact that the mortality of all patients starting ART in a treatment programme is a weighted average of mortality among patients lost to follow-up and patients remaining in care. The nomogram gives a correction factor based on the percentage of patients lost to follow-up at a given point in time and the estimated ratio of mortality between patients lost and not lost to follow-up. The mortality observed among patients retained in care is then multiplied by the correction factor to obtain an estimate of programme-level mortality that takes all deaths into account. A web calculator directly calculates the corrected, programme-level mortality with 95% confidence intervals (CIs). We applied the method to 11 ART programmes in sub-Saharan Africa. Patients retained in care had a mortality at 1 year of 1.4% to 12.0%; loss to follow-up ranged from 2.8% to 28.7%; and the correction factor from 1.2 to 8.0. The absolute difference between uncorrected and corrected mortality at 1 year ranged from 1.6% to 9.8%, and was above 5% in four programmes. The largest difference in mortality was in a programme with 28.7% of patients lost to follow-up at 1 year.

Conclusions: The amount of bias in mortality estimates can be large in ART programmes with substantial loss to follow-up. Programmes should routinely report mortality among patients retained in care and the proportion of patients lost. A simple nomogram can then be used to estimate mortality among all patients who started ART, for a range of plausible mortality rates among patients lost to follow-up.
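
The weighted average described above fixes the form of the correction. With a fraction L of patients lost to follow-up, mortality M_obs observed among patients retained in care, and an estimated mortality ratio R between lost and retained patients,

    M_{\text{all}} = (1 - L)\,M_{\text{obs}} + L\,R\,M_{\text{obs}} = M_{\text{obs}}\,[(1 - L) + L\,R]

so the correction factor is (1 - L) + L R. As a purely illustrative example, L = 0.25 and R = 5 give a factor of 0.75 + 1.25 = 2.0, doubling the naive estimate.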