985 results for FORECAST COMBINATION


Relevance: 20.00%

Abstract:

The evaluation of forecast performance plays a central role both in the interpretation and use of forecast systems and in their development. Different evaluation measures (scores) are available, often quantifying different characteristics of forecast performance. The properties of several proper scores for probabilistic forecast evaluation are contrasted and then used to interpret decadal probability hindcasts of global mean temperature. The Continuous Ranked Probability Score (CRPS), Proper Linear (PL) score, and I. J. Good's logarithmic score (also referred to as Ignorance) are compared; although information from all three may be useful, the logarithmic score has an immediate interpretation and remains sensitive to forecast busts. Neither CRPS nor PL is local; this is shown to produce counterintuitive evaluations under CRPS. Benchmark forecasts from empirical models like Dynamic Climatology place the scores in context. Comparing scores for forecast systems based on physical models (in this case HadCM3, from the CMIP5 decadal archive) against such benchmarks is more informative than internal comparisons among systems based on similar physical simulation models. It is shown that a forecast system based on HadCM3 outperforms Dynamic Climatology in decadal global mean temperature hindcasts; Dynamic Climatology previously outperformed a forecast system based upon HadGEM2, and reasons for these results are suggested. Forecasts of aggregate data (5-year means of global mean temperature) are, of course, narrower than forecasts of annual averages due to the suppression of variance; while the average "distance" between the forecasts and a target may be expected to decrease, little if any discernible improvement in probabilistic skill is achieved.
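
To make the contrast between a local and a non-local score concrete, the sketch below evaluates a Gaussian forecast density with both the logarithmic (Ignorance) score and the closed-form Gaussian CRPS; the forecast parameters and verifying observations are invented for illustration, not taken from the hindcasts above.

```python
# Hypothetical Gaussian forecast scored with Ignorance and CRPS.
import numpy as np
from scipy.stats import norm

def ignorance(mu, sigma, y):
    """I. J. Good's logarithmic score, -log2 p(y); lower is better."""
    return -np.log2(norm.pdf(y, loc=mu, scale=sigma))

def crps_gaussian(mu, sigma, y):
    """Closed-form CRPS for a Gaussian forecast density."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z)
                    - 1 / np.sqrt(np.pi))

mu, sigma = 0.3, 0.1              # made-up forecast: temperature anomaly (K)
for y in (0.32, 0.9):             # a near hit and a forecast "bust"
    print(f"y={y}: Ignorance={ignorance(mu, sigma, y):6.2f} bits, "
          f"CRPS={crps_gaussian(mu, sigma, y):5.3f}")
```

For the bust, the Ignorance score grows roughly quadratically in the standardized error while CRPS grows only linearly, which is one way to see the sensitivity to busts noted above.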

Relevance: 20.00%

Abstract:

We consider the extent to which long-horizon survey forecasts of consumption, investment, and output growth are consistent with theory-based steady-state values, and whether imposing these restrictions on long-horizon forecasts enhances their accuracy. The restrictions we impose are consistent with a two-sector model in which the variables grow at different rates in steady state. The restrictions are imposed by exponential tilting of simple auxiliary forecast densities. We show that imposing the consumption-output restriction yields modest improvements in the long-horizon output growth forecasts, and larger improvements in the forecasts of the cointegrating combination of consumption and output: the transformation of the data on which accuracy is assessed plays an important role.
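
As a minimal sketch of the tilting step (our illustration; the density, grid, and 3% steady-state target are invented, not the paper's): reweight a discretized auxiliary forecast density by exp(lambda * x) and solve for the lambda that makes the tilted mean match the theory-implied value.

```python
# Exponential tilting of a discrete density to hit a target mean.
import numpy as np
from scipy.optimize import brentq

x = np.linspace(-2.0, 8.0, 201)             # output growth grid, % per year
p = np.exp(-0.5 * ((x - 2.0) / 1.5) ** 2)   # auxiliary density, mean ~2%
p /= p.sum()

def tilted(lam):
    w = p * np.exp(lam * x)                 # minimum-KL reweighting
    return w / w.sum()

target = 3.0                                # hypothetical steady-state growth
lam = brentq(lambda l: tilted(l) @ x - target, -5.0, 5.0)
q = tilted(lam)
print(f"lambda = {lam:.3f}, tilted mean = {q @ x:.3f}")
```

Among all densities with the required mean, the tilted q is the one closest to p in Kullback-Leibler divergence, which is why tilting is a natural way to impose such restrictions.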

Relevance: 20.00%

Abstract:

Uncertainty of Arctic seasonal to interannual predictions arising from model errors and initial-state uncertainty has been widely discussed in the literature, whereas the irreducible forecast uncertainty (IFU) arising from the chaoticity of the climate system has received less attention. However, IFU provides important insights into the mechanisms through which predictability is lost, and hence can inform prioritization of model development and observation deployment. Here, we characterize how internal oceanic and surface atmospheric heat fluxes contribute to the IFU of Arctic sea ice and upper-ocean heat content in an Earth system model by analyzing a set of idealized ensemble prediction experiments. We find that atmospheric and oceanic heat fluxes are often equally important in driving unpredictable Arctic-wide changes in sea ice and surface water temperatures, and hence contribute equally to IFU. Atmospheric surface heat flux tends to dominate Arctic-wide changes for lead times of up to a year, whereas oceanic heat flux tends to dominate regionally and on interannual time scales. In general, there is a strong negative covariance between surface heat flux and ocean vertical heat flux at depth, and anomalies of lateral ocean heat transport are wind-driven, which suggests that the unpredictable oceanic heat flux variability is mainly forced by the atmosphere. These results are qualitatively robust across different initial states, but substantial variations in the amplitude of IFU exist. We conclude that both atmospheric variability and the initial state of the upper ocean are key ingredients for predictions of Arctic surface climate on seasonal to interannual time scales.

Relevance: 20.00%

Abstract:

There has been a great deal of recent interest in producing weather forecasts on the 2–6 week sub-seasonal timescale, which bridges the gap between medium-range (0–10 day) and seasonal (3–6 month) forecasts. While much of this interest is focused on the potential applications of skilful forecasts on the sub-seasonal range, understanding the potential sources of sub-seasonal forecast skill is a challenging and interesting problem, particularly because of the likely state-dependence of this skill (Hudson et al. 2011). One such potential source of state-dependent skill for the Northern Hemisphere in winter is the occurrence of stratospheric sudden warming (SSW) events (Sigmond et al. 2013). Here we show, by analysing a set of sub-seasonal hindcasts, that there is enhanced predictability of the surface circulation not only when the stratospheric vortex is anomalously weak following SSWs but also when the vortex is extremely strong. Sub-seasonal forecasts initialized during strong vortex events successfully capture the associated surface temperature and circulation anomalies. This results in enhanced Northern Annular Mode forecast skill compared to forecasts initialized when the stratospheric state is close to climatology. We demonstrate that the enhancement of skill for forecasts initialized during strong vortex conditions is comparable to that achieved for forecasts initialized during weak events. This result indicates that additional confidence can be placed in sub-seasonal forecasts whenever the stratospheric polar vortex is significantly disturbed from its normal state.

Relevance: 20.00%

Abstract:

The collective representation within global models of aerosol, cloud, precipitation, and their radiative properties remains unsatisfactory. They constitute the largest source of uncertainty in predictions of climatic change and hamper the ability of numerical weather prediction models to forecast high-impact weather events. The joint European Space Agency (ESA)–Japan Aerospace Exploration Agency (JAXA) Earth Clouds, Aerosol and Radiation Explorer (EarthCARE) satellite mission, scheduled for launch in 2018, will help to resolve these weaknesses by providing global profiles of cloud, aerosol, precipitation, and associated radiative properties inferred from a combination of measurements made by its collocated active and passive sensors. EarthCARE will improve our understanding of cloud and aerosol processes by extending the invaluable dataset acquired by the A-Train satellites CloudSat, the Cloud–Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO), and Aqua. Specifically, EarthCARE's cloud profiling radar, with 7 dB more sensitivity than CloudSat's, will detect more thin clouds, and its Doppler capability will provide novel information on convection and on the fall speeds of precipitating ice particles and raindrops. EarthCARE's 355-nm high-spectral-resolution lidar will measure cloud and aerosol extinction and optical depth directly and accurately. Combining this with backscatter and polarization information should lead to an unprecedented ability to identify aerosol type. The multispectral imager will provide context for, and the ability to construct, the cloud and aerosol distribution in 3D domains around the narrow retrieved 2D cross section. The consistency of the retrievals will be assessed to within a target of ±10 W m⁻² on the (10 km)² scale by comparing the multiview broadband radiometer observations to the top-of-atmosphere fluxes estimated by 3D radiative transfer models acting on the retrieved 3D domains.

Relevance: 20.00%

Abstract:

Using an international, multi-model suite of historical forecasts from the World Climate Research Programme (WCRP) Climate-system Historical Forecast Project (CHFP), we compare the seasonal prediction skill in boreal wintertime between models that resolve the stratosphere and its dynamics ("high-top") and models that do not ("low-top"). We evaluate hindcasts initialized in November and examine the model biases in the stratosphere and how they relate to boreal wintertime (Dec–Mar) seasonal forecast skill. We are unable to detect more skill in the high-top ensemble mean than in the low-top ensemble mean in forecasting the wintertime North Atlantic Oscillation, but model performance varies widely. Increasing the ensemble size clearly increases the skill for a given model. We then examine two major processes involving stratosphere–troposphere interactions (the El Niño–Southern Oscillation, ENSO, and the Quasi-Biennial Oscillation, QBO) and how they relate to predictive skill on intra-seasonal to seasonal timescales, particularly over the North Atlantic and Eurasian regions. High-top models tend to have a more realistic stratospheric response to El Niño and the QBO than low-top models. Enhanced conditional wintertime skill over high latitudes and the North Atlantic region during winters with El Niño conditions suggests a possible role for a stratospheric pathway.
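
The ensemble-size effect mentioned above has a simple statistical core: averaging members damps the unpredictable noise around a common predictable signal. The toy sketch below (entirely synthetic data, no CHFP output) subsamples members of a fake hindcast ensemble and shows the correlation skill of the ensemble mean rising with ensemble size.

```python
# Synthetic illustration of ensemble-mean skill growing with ensemble size.
import numpy as np

rng = np.random.default_rng(0)
nyears, nmem = 30, 24
signal = rng.standard_normal(nyears)                     # predictable part
obs = signal + rng.standard_normal(nyears)               # fake observed index
members = signal + rng.standard_normal((nmem, nyears))   # fake hindcast members

for k in (1, 4, 12, 24):
    r = np.corrcoef(members[:k].mean(axis=0), obs)[0, 1]
    print(f"{k:2d} members: ensemble-mean correlation skill r = {r:.2f}")
```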

Relevance: 20.00%

Abstract:

In this paper, we develop a novel constrained recursive least squares algorithm for adaptively combining a set of given multiple models. With data available in an online fashion, the linear combination coefficients of the submodels are adapted via the proposed algorithm. We propose to minimize the mean square error with a forgetting factor and to apply a sum-to-one constraint to the combination parameters. Moreover, an l1-norm constraint is also applied to the combination parameters, with the aim of achieving sparsity across the multiple models so that only a subset of models is selected into the final model. A weighted l2-norm is then applied as an approximation to the l1-norm term. As such, at each time step a closed-form solution for the model combination parameters is available. The contribution of this paper is to derive the proposed constrained recursive least squares algorithm, which is made computationally efficient by exploiting matrix theory. The effectiveness of the approach is demonstrated using both simulated and real time-series examples.
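
The sketch below shows the main ingredients under stated assumptions: exponentially forgotten least squares, the sum-to-one constraint handled through a small KKT system, and a weighted l2 penalty standing in for the l1 term. It is not the paper's derivation (which uses matrix-theoretic recursions for efficiency; here the KKT system is simply re-solved at each step), and all data are simulated.

```python
# Forecast combination by forgetting-factor least squares with a
# sum-to-one constraint and a weighted-l2 proxy for the l1 penalty.
import numpy as np

def combine_online(F, y, beta=0.98, gamma=0.01, eps=1e-6):
    """F: (T, m) submodel forecasts; y: (T,) targets. Returns final weights."""
    T, m = F.shape
    R = 1e-3 * np.eye(m)                   # discounted Gram matrix
    r = np.zeros(m)
    w = np.full(m, 1.0 / m)
    ones = np.ones(m)
    for t in range(T):
        f = F[t]
        R = beta * R + np.outer(f, f)      # forgetting-factor updates
        r = beta * r + y[t] * f
        D = np.diag(gamma / (np.abs(w) + eps))    # weighted l2 ~ l1 proxy
        # KKT system for: min w'(R+D)w - 2 r'w  subject to  1'w = 1
        K = np.block([[R + D, ones[:, None]],
                      [ones[None, :], np.zeros((1, 1))]])
        w = np.linalg.solve(K, np.append(r, 1.0))[:m]
    return w

rng = np.random.default_rng(1)
y = np.sin(np.arange(200) / 10.0)
F = np.column_stack([y + 0.1 * rng.standard_normal(200),   # good submodel
                     y + 0.5 * rng.standard_normal(200),   # noisier submodel
                     rng.standard_normal(200)])            # irrelevant one
print(combine_online(F, y).round(3))       # weight piles on the good submodel
```

The reweighting D is the standard iteratively-reweighted trick: penalizing w_j with gamma / (|w_j| + eps) times w_j squared behaves like gamma * |w_j| near the previous iterate, driving small weights toward zero.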

Relevance: 20.00%

Abstract:

Probabilistic hydro-meteorological forecasts have, over the last decades, been used more frequently to communicate forecast uncertainty. This uncertainty information cuts both ways: it is both an added value and a challenge for the forecaster and the user of the forecasts. Many authors have demonstrated the added (economic) value of probabilistic over deterministic forecasts across the water sector (e.g. flood protection, hydroelectric power management, and navigation). However, the richness of the information is also a source of challenges for operational use, due in part to the difficulty of transforming the probability of occurrence of an event into a binary decision. This paper presents the results of a risk-based decision-making game on the topic of flood mitigation, called "How much are you prepared to pay for a forecast?". The game was played at several workshops in 2015, attended by operational forecasters and academics working in the field of hydrometeorology. The aim of the game was to better understand the role of probabilistic forecasts in decision-making processes and the value decision-makers perceive in them. Based on the participants' willingness to pay for a forecast, the results of the game show that the value (or usefulness) of a forecast depends on several factors, including the way users perceive the quality of their forecasts and link it to the perception of their own performance as decision-makers.
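
The classic way to turn an event probability into a binary decision is the cost-loss rule: protect whenever the forecast probability exceeds the ratio of the protection cost C to the avoidable loss L. The numbers below are invented for illustration and are not the game's actual payoff table.

```python
# Cost-loss decision rule: protect iff p > C / L.
C, L = 20_000.0, 100_000.0        # hypothetical protection cost, avoided loss

def expected_expense(p, protect):
    """Expected cost of one decision given flood probability p."""
    return C if protect else p * L

for p in (0.05, 0.2, 0.5):
    act = p > C / L               # threshold C/L = 0.2
    print(f"p={p:.2f}: protect={act}, "
          f"expected expense = {expected_expense(p, act):,.0f}")
```

At p = C/L the two options cost the same in expectation, which is exactly where real decision-makers in such games tend to diverge from the "rational" rule.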

Relevance: 20.00%

Abstract:

The therapeutic efficacy of amphotericin B and voriconazole, alone and in combination with one another, was evaluated in immunodeficient mice (BALB/c-SCID) infected with a fluconazole-resistant strain of Cryptococcus neoformans var. grubii. The animals were infected intravenously with 3 × 10⁵ cells and treated intraperitoneally with amphotericin B (1.5 mg/kg/day) in combination with voriconazole (40 mg/kg/day). Treatment began 1 day after inoculation and continued for 7 or 15 days post-inoculation. The treatments were evaluated by survival curves and yeast quantification (CFU counts) in brain and lung tissues. Treatment for 15 days significantly prolonged the survival of the animals compared to the control groups. Our results indicated that amphotericin B was effective in ensuring the longest survival of infected animals, but these animals still harbored the highest CFU counts of C. neoformans in lungs and brain at the end of the experiment. Voriconazole was not as effective alone, but in combination with amphotericin B it prolonged survival for the second-longest period and provided the lowest colonization of the target organs by the fungus. None of the treatments completely eradicated the fungus from the lungs and brain of the mice by the end of the experiment.

Relevance: 20.00%

Abstract:

Many real problems involve the classification of data into categories or classes. Given a data set whose classes are known, Machine Learning algorithms can be employed to induce a classifier able to predict the class of new data from the same domain, performing the desired discrimination. Some learning techniques are originally conceived for the solution of problems with only two classes, also named binary classification problems. However, many problems require the discrimination of examples into more than two categories or classes. This paper presents a survey of the main strategies for generalizing binary classifiers to problems with more than two classes, known as multiclass classification problems. The focus is on strategies that decompose the original multiclass problem into multiple binary subtasks, whose outputs are combined to obtain the final prediction.
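
As a concrete instance of the decomposition strategies such surveys cover, the sketch below implements one-vs-rest: one binary scorer per class, with the largest margin winning. It assumes scikit-learn is available for the base binary learner (the library also ships its own OneVsRestClassifier), and the synthetic blobs are invented.

```python
# One-vs-rest decomposition of a multiclass problem into binary subtasks.
import numpy as np
from sklearn.linear_model import LogisticRegression

class OneVsRest:
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        # one binary classifier per class: "class c" vs "everything else"
        self.models_ = [LogisticRegression().fit(X, (y == c).astype(int))
                        for c in self.classes_]
        return self

    def predict(self, X):
        # combine binary outputs: the class with the largest margin wins
        scores = np.column_stack([m.decision_function(X)
                                  for m in self.models_])
        return self.classes_[scores.argmax(axis=1)]

rng = np.random.default_rng(0)
centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
y = np.repeat([0, 1, 2], 100)
X = centers[y] + rng.standard_normal((300, 2))
print("accuracy:", (OneVsRest().fit(X, y).predict(X) == y).mean())
```

One-vs-one decomposition works analogously, training one classifier per pair of classes and combining the outputs by voting.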

Relevance: 20.00%

Abstract:

Photodynamic therapy, used mainly for cancer treatment and microorganism inactivation, is based on the production of reactive oxygen species by light irradiation of a sensitizer. Hematoporphyrin derivatives such as Photofrin® (PF), Photogem® (PG), and Photosan® (PS), and chlorin-e6 derivatives such as Photodithazine® (PZ), have suitable sensitizing properties. The present study provides a way to make a fast preliminary evaluation of photosensitizer efficacy by a combination of techniques: (a) use of bovine serum albumin and uric acid as chemical dosimeters; (b) photo-hemolysis of red blood cells, used as a cell membrane interaction model; and (c) octanol/phosphate buffer partition to assess the relative lipophilicity of the compounds. The results suggest the photodynamic efficiency ranking PZ > PG ≥ PF > PS. These results agree with the cytotoxicity of the photosensitizers as well as with the chromatographic separation of the HpDs, both performed in our group, showing that the more lipophilic the dye, the more acute the damage to the RBC membrane and the oxidation of the indole group, which is immersed in the hydrophobic region of albumin.

Relevance: 20.00%

Abstract:

In this paper we present a novel approach for multispectral image contextual classification by combining iterative combinatorial optimization algorithms. The pixel-wise decision rule is defined using a Bayesian approach to combine two MRF models: a Gaussian Markov Random Field (GMRF) for the observations (likelihood) and a Potts model for the a priori knowledge, to regularize the solution in the presence of noisy data. Hence, the classification problem is stated within a Maximum a Posteriori (MAP) framework. In order to approximate the MAP solution, we apply several combinatorial optimization methods using multiple simultaneous initializations, making the solution less sensitive to the initial conditions and reducing both computational cost and time in comparison to Simulated Annealing, which is often infeasible in many real image processing applications. The Markov Random Field model parameters are estimated by a Maximum Pseudo-Likelihood (MPL) approach, avoiding manual adjustment of the regularization parameters. Asymptotic evaluations assess the accuracy of the proposed parameter estimation procedure. To test and evaluate the proposed classification method, we adopt metrics for quantitative performance assessment (Cohen's kappa coefficient), allowing a robust and accurate statistical analysis. The obtained results clearly show that combining sub-optimal contextual algorithms significantly improves the classification performance, indicating the effectiveness of the proposed methodology.
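
To fix ideas, here is a deliberately stripped-down sketch of MAP-style contextual classification: per-class Gaussian likelihoods (a simplification of the GMRF observation model above) regularized by a Potts prior and optimized with Iterated Conditional Modes (ICM), one of the sub-optimal combinatorial algorithms such schemes combine. The image, class means, and parameters are all synthetic.

```python
# ICM for a MAP labeling with Gaussian likelihoods and a Potts prior.
import numpy as np

def icm(img, means, sigma, beta=1.5, sweeps=5):
    labels = np.abs(img[..., None] - means).argmin(-1)  # ML initialization
    H, W = img.shape
    for _ in range(sweeps):
        for i in range(H):
            for j in range(W):
                nbrs = [labels[a, b] for a, b in
                        ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= a < H and 0 <= b < W]
                # posterior energy = data misfit + Potts disagreement penalty
                e = ((img[i, j] - means) ** 2 / (2 * sigma ** 2)
                     + beta * np.array([sum(n != c for n in nbrs)
                                        for c in range(len(means))]))
                labels[i, j] = int(e.argmin())
    return labels

rng = np.random.default_rng(0)
truth = (np.add.outer(np.arange(32), np.arange(32)) > 32).astype(int)
img = np.array([0.0, 1.0])[truth] + 0.6 * rng.standard_normal((32, 32))
means = np.array([0.0, 1.0])
ml = np.abs(img[..., None] - means).argmin(-1)
print("ML errors:", int((ml != truth).sum()),
      "| ICM errors:", int((icm(img, means, 0.6) != truth).sum()))
```

Running several such optimizers from multiple initializations and keeping the lowest-energy labeling is the combination step the abstract refers to.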

Relevance: 20.00%

Abstract:

The fragmentation mechanisms of singlet oxygen [O₂(¹Δg)]-derived oxidation products of tryptophan (W) were analyzed using collision-induced dissociation coupled with ¹⁸O-isotopic labeling experiments and accurate mass measurements. The five identified oxidation products, namely two isomeric alcohols (trans- and cis-WOH), two isomeric hydroperoxides (trans- and cis-WOOH), and N-formylkynurenine (FMK), were shown to share some common fragment ions and losses of small neutral molecules. Conversely, each oxidation product has its own fragmentation mechanism and intermediates, which were confirmed by ¹⁸O-labeling studies. The isomeric WOHs lost mainly H₂O + CO, while the WOOHs showed preferential elimination of C₂H₅NO₃ by two distinct mechanisms. Differences in the spatial arrangement of the two isomeric WOHs led to differences in the intensities of the fragment ions; the same behavior was also found for trans- and cis-WOOH. FMK was shown to dissociate by a diverse range of mechanisms, with the loss of ammonia the most favored route. MS/MS analyses, ¹⁸O-labeling, and H₂¹⁸O experiments demonstrated the ability of FMK to exchange its oxygen atoms with water. Moreover, this approach also revealed that the carbonyl group has a more pronounced oxygen-exchange ability than the formyl group. The understanding of the fragmentation mechanisms involved in O₂(¹Δg)-mediated oxidation of W provides a useful step toward the structural characterization of oxidized peptides and proteins. (J Am Soc Mass Spectrom 2009, 20, 188-197)

Relevance: 20.00%

Abstract:

Polynorbornene with high molecular weight was obtained via ring-opening metathesis polymerization (ROMP) using catalysts derived from complexes of the [RuCl₂(PPh₂Bz)₂L] type (1, L = PPh₂Bz; 2, L = piperidine) in the presence of ethyl diazoacetate in CHCl₃. With 1, the polymer precipitated within a few minutes at 50 °C in ca. 50% yield ([NBE]/[Ru] = 5000). With 2, yields above 90% are obtained after either 30 min at 25 °C or 5 min at 50 °C, and a quantitative yield after 30 min at 50 °C. The yield and PDI values are sensitive to the [NBE]/[Ru] ratio. The reaction of 1 with either isonicotinamide or nicotinamide produces six-coordinate complexes of the [RuCl₂(PPh₂Bz)₂(L)₂] type, which are almost inactive, producing only small amounts of polymer at 50 °C after 30 min. Thus, we concluded that the novel complexes show very distinct reactivities for the ROMP of NBE. This has been rationalized in terms of a combination of synergistic effects of the phosphine and amine ancillary ligands.

Relevance: 20.00%

Abstract:

Pulp and paper production is a very energy-intensive industry sector, and both Sweden and the U.S. are major pulp and paper producers. This report examines the energy use and CO2 emissions connected with the pulp and paper industry in the two countries from a lifecycle perspective.

New technologies make it possible to increase electricity production in the integrated pulp and paper mill through black liquor gasification and a combined cycle (BLGCC). That way, the mill can produce excess electricity, which can be sold and replace electricity produced in power plants. In this process, the by-products formed in pulp-making are used as fuel to produce electricity. In today's pulp and paper mills, the technology for generating energy from these by-products in a Tomlinson boiler is not as efficient as the BLGCC technology. Scenarios have been designed to investigate the results of using the BLGCC technique in a lifecycle analysis. Two scenarios are represented by a 1994 mill in the U.S. and a 1994 mill in Sweden, based on the average energy intensity of pulp and paper mills as operating in 1994 in each country. The two other scenarios are constituted by a "reference mill" in the U.S. and Sweden using state-of-the-art technology. We investigate the impact of varying recycling rates on total energy use and CO2 emissions from the production of printing and writing paper. To economize on wood and thereby save trees, the wood freed up by recycling can be used in a biomass integrated gasification combined cycle (BIGCC) power station; this produces extra electricity with a lower CO2 intensity than electricity generated by, for example, coal-fired power plants.

The lifecycle analysis in this thesis also includes waste treatment in the paper lifecycle. Both Sweden and the U.S. recycle paper, yet much paper waste remains and becomes part of each country's municipal solid waste (MSW). Much of the MSW is landfilled, but part of it is incinerated to generate electricity. The thesis designs dedicated scenarios for the use of MSW in the lifecycle analysis.

The report studies and compares the two countries and two different BLGCC efficiencies across four scenarios. This gives a broad survey and points to the parameters that deserve particular attention when making assumptions in a lifecycle analysis. The report shows that three key parameters must be carefully considered in an energy and CO2-emission lifecycle analysis of wood use in U.S. and Swedish pulp and paper mills: first, the energy efficiency of the mill; second, the efficiency of the BLGCC; and third, the CO2 intensity of the electricity displaced by BIGCC- or BLGCC-generated electricity. It also shows that, with current technology, it is possible to produce CO2-free paper with a waste-paper share of up to 30%. The thesis discusses the system boundaries and the assumptions; further and more detailed research, including among other things the system boundaries and forestry, is recommended for more specific answers.
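
The displacement logic behind the third key parameter can be made concrete with back-of-envelope arithmetic: net CO2 per tonne of paper is the process emissions minus a credit for the BLGCC electricity that replaces grid power, so the CO2 intensity of the displaced electricity sets the size of the credit. Every number below is invented for illustration, not taken from the thesis's scenarios.

```python
# Toy CO2 balance: process emissions minus the electricity-displacement credit.
def net_co2(process_kg=300.0, export_mwh=1.2, grid_kg_per_mwh=700.0):
    """kg CO2 per tonne of paper after crediting displaced grid electricity."""
    return process_kg - export_mwh * grid_kg_per_mwh

# Coal-heavy, mixed, and hydro-rich grids: the same mill looks very different.
for grid in (700.0, 400.0, 40.0):
    print(f"grid intensity {grid:5.0f} kg/MWh -> "
          f"net {net_co2(grid_kg_per_mwh=grid):7.1f} kg CO2/t paper")
```

This is one reason the same mill technology can look CO2-negative against a coal-heavy U.S. grid but roughly neutral against Sweden's hydro-rich grid.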