871 results for "Prediction model"


Relevance:

30.00%

Publisher:

Abstract:

The predictability of high-impact weather events on multiple time scales is a crucial issue in both scientific and socio-economic terms. In this study, a statistical-dynamical downscaling (SDD) approach is applied to an ensemble of decadal hindcasts obtained with the Max-Planck-Institute Earth System Model (MPI-ESM) to estimate the decadal predictability of peak wind speeds (as a proxy for gusts) over Europe. Yearly initialized decadal ensemble simulations with ten members are investigated for the period 1979–2005. The SDD approach is trained with COSMO-CLM regional climate model simulations and ERA-Interim reanalysis data and applied to the MPI-ESM hindcasts. The simulations for the period 1990–1993, which was characterized by several windstorm clusters, are analyzed in detail. The anomalies of the 95% peak wind quantile of the MPI-ESM hindcasts are in line with the positive anomalies in reanalysis data for this period. To evaluate both the skill of the decadal predictability system and the added value of the downscaling approach, quantile verification skill scores are calculated for both the MPI-ESM large-scale wind speeds and the SDD-simulated regional peak winds. Skill scores are predominantly positive for the decadal predictability system, with the highest values for short lead times and for (peak) wind speeds equal to or above the 75% quantile. This provides evidence that the analyzed hindcasts and the downscaling technique are suitable for estimating wind and peak wind speeds over Central Europe on decadal time scales. The skill scores for SDD-simulated peak winds are slightly lower than those for large-scale wind speeds. This behavior can be largely attributed to the fact that peak winds are a proxy for gusts and thus have higher variability than wind speeds.
The introduced cost-efficient downscaling technique has the advantage of estimating not only wind speeds but also peak winds, and can easily be applied to large ensemble datasets like operational decadal prediction systems.
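As a rough illustration of the kind of quantile verification used above, a pinball-loss-based quantile skill score against a climatological reference can be sketched as follows. This is a minimal sketch assuming NumPy arrays of forecasts and observations; the function names and the choice of reference are illustrative, not taken from the study.

```python
import numpy as np

def quantile_score(forecasts, observations, tau):
    """Mean pinball (quantile) loss of quantile forecasts at probability level tau."""
    diff = observations - forecasts
    return float(np.mean(np.where(diff >= 0, tau * diff, (tau - 1.0) * diff)))

def quantile_skill_score(forecasts, observations, reference, tau):
    """Skill relative to a reference forecast (e.g. the climatological quantile):
    1 is perfect, 0 matches the reference, negative is worse than the reference."""
    qs_fc = quantile_score(forecasts, observations, tau)
    qs_ref = quantile_score(reference, observations, tau)
    return 1.0 - qs_fc / qs_ref
```

A positive skill score then indicates that the downscaled quantile forecast beats the climatological reference, which is the sense in which the abstract reports "predominantly positive" scores.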

Preparing for episodes with risks of anomalous weather a month to a year ahead is an important challenge for governments, non-governmental organisations, and private companies, and depends on the availability of reliable forecasts. The majority of operational seasonal forecasts are made using process-based dynamical models, which are complex, computationally challenging, and prone to biases. Empirical forecast approaches, built on statistical models that represent physical processes, offer an alternative to dynamical systems and can provide either a benchmark for comparison or independent supplementary forecasts. Here, we present a simple empirical system based on multiple linear regression for producing probabilistic forecasts of seasonal surface air temperature and precipitation across the globe. The global CO2-equivalent concentration is taken as the primary predictor; subsequent predictors, including large-scale modes of variability in the climate system and local-scale information, are selected on the basis of their physical relationship with the predictand. The focus given to the climate change signal as a source of skill, and the probabilistic nature of the forecasts produced, constitute a novel approach to global empirical prediction. Hindcasts for the period 1961–2013 are validated against observations using deterministic (correlation of seasonal means) and probabilistic (continuous ranked probability skill score) metrics. Good skill is found in many regions, particularly for surface air temperature, and most notably in much of Europe during the spring and summer seasons. For precipitation, skill is generally limited to regions with known El Niño–Southern Oscillation (ENSO) teleconnections. The system is used in a quasi-operational framework to generate empirical seasonal forecasts on a monthly basis.
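The regression-plus-probabilistic-verification pipeline described above can be sketched in outline: fit a multiple linear regression, treat the residual spread as the width of a Gaussian forecast distribution, and score it with the closed-form CRPS for a Gaussian. This is a minimal illustration under those assumptions; the function names and the Gaussian-forecast choice are illustrative, not the paper's exact system.

```python
from math import erf, exp, pi, sqrt

import numpy as np

def fit_mlr(X, y):
    """Least-squares multiple linear regression with an intercept.
    Returns the coefficients and the residual standard deviation,
    which serves here as the spread of the Gaussian forecast."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    dof = max(len(y) - A.shape[1], 1)
    sigma = sqrt(float(resid @ resid) / dof)
    return beta, sigma

def crps_gaussian(mu, sigma, x):
    """Closed-form CRPS of a Gaussian forecast N(mu, sigma^2) against observation x."""
    z = (x - mu) / sigma
    cdf = 0.5 * (1.0 + erf(z / sqrt(2.0)))
    pdf = exp(-0.5 * z * z) / sqrt(2.0 * pi)
    return sigma * (z * (2.0 * cdf - 1.0) + 2.0 * pdf - 1.0 / sqrt(pi))
```

Averaging the CRPS over a hindcast period and comparing it against a climatological reference yields the skill score form reported in the abstract.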

Elucidating the biological and biochemical roles of proteins, and subsequently determining their interacting partners, can be difficult and time consuming using in vitro and/or in vivo methods, and consequently the majority of newly sequenced proteins will have unknown structures and functions. However, in silico methods for predicting protein–ligand binding sites and protein biochemical functions offer an alternative practical solution. The characterisation of protein–ligand binding sites is essential for investigating new functional roles, which can impact the major biological research spheres of health, food, and energy security. In this review we discuss the role in silico methods play in 3D modelling of protein–ligand binding sites, along with their role in predicting biochemical functionality. In addition, we describe in detail some of the key alternative in silico prediction approaches that are available, as well as discussing the Critical Assessment of Techniques for Protein Structure Prediction (CASP) and the Continuous Automated Model EvaluatiOn (CAMEO) projects, and their impact on developments in the field. Furthermore, we discuss the importance of protein function prediction methods for tackling 21st century problems.

Ocean prediction systems are now able to analyse and predict temperature, salinity and velocity structures within the ocean by assimilating measurements of the ocean's temperature and salinity into physically based ocean models. Data assimilation combines current estimates of state variables, such as temperature and salinity, from a computational model with measurements of the ocean and atmosphere in order to improve forecasts and reduce uncertainty in the forecast accuracy. Data assimilation generally works well with ocean models away from the equator but has been found to induce vigorous and unrealistic overturning circulations near the equator. A pressure correction method was developed at the University of Reading and the Met Office to control these circulations using ideas from control theory and an understanding of equatorial dynamics. The method has been used for the last 10 years in seasonal forecasting and ocean prediction systems at the Met Office and the European Centre for Medium-Range Weather Forecasts (ECMWF). It has been an important element in recent reanalyses of the ocean heat uptake that mitigates climate change.
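The core idea of data assimilation, blending a model background with an observation in proportion to their error variances, can be sketched for a single scalar state variable. This is a deliberately simplified, hypothetical illustration of the general principle, not the pressure correction method described above.

```python
def assimilate(background, obs, var_b, var_o):
    """Variance-weighted blend of a model background and an observation
    (scalar optimal interpolation / Kalman-style update).

    background, obs : prior model estimate and measurement of the same quantity
    var_b, var_o    : their respective error variances"""
    gain = var_b / (var_b + var_o)            # weight given to the observation
    analysis = background + gain * (obs - background)
    var_a = (1.0 - gain) * var_b              # analysis error variance (reduced)
    return analysis, var_a
```

The analysis always lies between background and observation, and its error variance is smaller than either input's, which is the sense in which assimilation "reduces uncertainty".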

With the development of convection-permitting numerical weather prediction, the efficient use of high-resolution observations in data assimilation is becoming increasingly important. The operational assimilation of such observations, for example Doppler radar radial winds, is now common, though to avoid violating the assumption of uncorrelated observation errors the observation density is severely reduced. Improving the quantity of observations used, and the impact they have on the forecast, will require the introduction of the full, potentially correlated, error statistics. In this work, observation error statistics are calculated for the Doppler radar radial winds that are assimilated into the Met Office high-resolution UK model, using a diagnostic that makes use of statistical averages of observation-minus-background and observation-minus-analysis residuals. This is the first in-depth study using the diagnostic to estimate both horizontal and along-beam correlated observation errors. It is found that the Doppler radar radial wind error standard deviations are similar to those used operationally and increase with observation height. Surprisingly, the estimated observation error correlation length scales are longer than the operational thinning distance. They depend on both the height of the observation and its distance from the radar. Further tests show that the long correlations cannot be attributed to the use of superobservations or to the background error covariance matrix used in the assimilation. The large horizontal correlation length scales are, however, partly a result of using a simplified observation operator.
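The residual-based diagnostic described above can be sketched compactly. Assuming it is the Desroziers-type estimate, in which the observation-error covariance is approximated by the sample average of the outer product of observation-minus-analysis (O-A) and observation-minus-background (O-B) residuals, a minimal version (not the operational implementation) looks like:

```python
import numpy as np

def desroziers_R(d_ob, d_oa):
    """Estimate the observation-error covariance matrix R from paired samples of
    O-B residuals (d_ob) and O-A residuals (d_oa), each of shape
    (n_samples, n_obs): R ~ < d_oa d_ob^T > averaged over samples."""
    n_samples = d_ob.shape[0]
    return d_oa.T @ d_ob / n_samples
```

The diagonal of the estimate gives the error variances (hence standard deviations), while the decay of off-diagonal entries with observation separation gives the correlation length scales discussed in the abstract.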

Identifying predictability and its sources for the western North Pacific (WNP) summer climate under non-stationary teleconnections during recent decades benefits further improvement of long-range prediction for the WNP and East Asian summers. In the past few decades, pronounced increases in summer sea surface temperature (SST) and the associated interannual variability have been observed over the tropical Indian Ocean and eastern Pacific around the late 1970s, and over the Maritime Continent and western–central Pacific around the early 1990s. These increases are associated with significant enhancements of the interannual variability of the lower-tropospheric wind over the WNP. In this study, we further assess interdecadal changes in the seasonal prediction of WNP summer anomalies, using May-start retrospective forecasts from the ENSEMBLES multi-model project for the period 1960–2005. It is found that prediction of the WNP summer anomalies exhibits an interdecadal shift, with higher prediction skill since the late 1970s, particularly after the early 1990s. Improvements in the prediction skill for SSTs after the late 1970s are mainly found around the tropical Indian Ocean and the WNP. The better prediction of the WNP after the late 1970s may arise mainly from the improved SST prediction around the tropical eastern Indian Ocean. The close teleconnection between the tropical eastern Indian Ocean and WNP summer variability holds both in the model predictions and in observations. After the early 1990s, on the other hand, the improvements are detected mainly around the South China Sea and the Philippines for the lower-tropospheric zonal wind and precipitation anomalies, associated with a better description of the SST anomalies around the Maritime Continent. A dipole SST pattern over the Maritime Continent and the central equatorial Pacific Ocean is closely related to the WNP summer anomalies after the early 1990s.
This teleconnection mode is highly predictable and is realistically reproduced by the models, providing more predictable signals for the WNP summer climate after the early 1990s.

The polynyas of the Laptev Sea are regions of particular interest due to the strong formation of Arctic sea ice. In order to simulate the polynya dynamics and to quantify ice production, we apply the Finite Element Sea-Ice Ocean Model (FESOM). In previous simulations FESOM has been forced with daily atmospheric NCEP (National Centers for Environmental Prediction) reanalysis 1 data. For the periods 1 April to 9 May 2008 and 1 January to 8 February 2009 we examine the impact of different forcing data: daily and 6-hourly NCEP reanalysis 1 (1.875° x 1.875°), 6-hourly NCEP reanalysis 2 (1.875° x 1.875°), 6-hourly analyses from the GME (Global Model of the German Weather Service) (0.5° x 0.5°), and high-resolution hourly COSMO (Consortium for Small-Scale Modeling) data (5 km x 5 km). In all FESOM simulations, except for those with 6-hourly and daily NCEP 1 data, the openings and closings of polynyas are simulated in general agreement with satellite products. Over the fast-ice area the wind fields of all atmospheric data sets are similar and close to in situ measurements. Over the polynya areas, however, there are strong differences between the forcing data with respect to air temperature and turbulent heat flux. These differences have a strong impact on sea-ice production rates. Depending on the forcing fields, polynya ice production ranges from 1.4 km³ to 7.8 km³ during 1 April to 9 May 2008 and from 25.7 km³ to 66.2 km³ during 1 January to 8 February 2009. Therefore, atmospheric forcing data with high spatial and temporal resolution that account for the presence of the polynyas are needed to reduce the uncertainty in quantifying ice production in polynyas.

Seasonal forecast skill of the basinwide and regional tropical cyclone (TC) activity in an experimental coupled prediction system based on the ECMWF System 4 is assessed. As part of a collaboration between the Center for Ocean–Land–Atmosphere Studies (COLA) and the ECMWF called Project Minerva, the system is integrated at the atmospheric horizontal spectral resolutions of T319, T639, and T1279. Seven-month hindcasts starting from 1 May for the years 1980–2011 are produced at all three resolutions with at least 15 ensemble members. The Minerva system demonstrates statistically significant skill for retrospective forecasts of TC frequency and accumulated cyclone energy (ACE) in the North Atlantic (NA), eastern North Pacific (EP), and western North Pacific. While the highest scores overall are achieved in the North Pacific, the skill in the NA appears to be limited by an overly strong influence of the tropical Pacific variability. Higher model resolution improves skill scores for the ACE and, to a lesser extent, the TC frequency, even though the influence of large-scale climate variations on these TC activity measures is largely independent of resolution changes. The biggest gain occurs in transition from T319 to T639. Significant skill in regional TC forecasts is achieved over broad areas of the Northern Hemisphere. The highest-resolution hindcasts exhibit additional locations with skill in the NA and EP, including land-adjacent areas. The feasibility of regional intensity forecasts is assessed. In the presence of the coupled model biases, the benefits of high resolution for seasonal TC forecasting may be underestimated.
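Accumulated cyclone energy, one of the two activity measures verified above, has a simple standard definition: 10⁻⁴ times the sum of squared 6-hourly maximum sustained wind speeds (in knots). A minimal sketch follows; the 35 kt (tropical-storm strength) cutoff is the usual convention, assumed here rather than stated in the abstract.

```python
def accumulated_cyclone_energy(vmax_knots):
    """ACE in its customary units (1e4 kt^2): 1e-4 * sum of squared 6-hourly
    maximum sustained winds (knots), counting only records at or above
    tropical-storm strength (taken here as 35 kt, the usual convention)."""
    return 1e-4 * sum(v * v for v in vmax_knots if v >= 35.0)
```

Because the wind is squared, ACE weights intense, long-lived storms far more heavily than storm counts do, which is why the two measures can show different sensitivity to model resolution.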

This paper describes the development and basic evaluation of decadal predictions produced using the HiGEM coupled climate model. HiGEM is a higher-resolution version of the HadGEM1 Met Office Unified Model. The horizontal resolution in HiGEM has been increased to 1.25° × 0.83° in longitude and latitude for the atmosphere, and 1/3° × 1/3° globally for the ocean. The HiGEM decadal predictions are initialised using an anomaly assimilation scheme that relaxes anomalies of ocean temperature and salinity to observed anomalies. 10-year hindcasts are produced for 10 start dates (1960, 1965, ..., 2000, 2005). To determine the relative contributions to prediction skill from initial conditions and external forcing, the HiGEM decadal predictions are compared to uninitialised HiGEM transient experiments. The HiGEM decadal predictions have substantial skill for predictions of annual mean surface air temperature and 100 m upper-ocean temperature. For lead times up to 10 years, anomaly correlations (ACC) over large areas of the North Atlantic Ocean, the western Pacific Ocean and the Indian Ocean exceed values of 0.6. Initialisation of the HiGEM decadal predictions significantly increases skill over regions of the Atlantic Ocean, the Maritime Continent and regions of the subtropical North and South Pacific Ocean. In particular, HiGEM produces skillful predictions of the North Atlantic subpolar gyre for up to 4 years lead time (with ACC > 0.7), significantly higher than in the uninitialised HiGEM transient experiments.
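The anomaly correlation (ACC) used to quantify skill above is simply the Pearson correlation of forecast and observed anomalies about a common climatology. A minimal NumPy sketch (names illustrative):

```python
import numpy as np

def anomaly_correlation(forecast, observed, climatology):
    """Centred anomaly correlation: Pearson correlation of forecast and
    observed anomalies, both taken relative to the same climatology."""
    fa = forecast - climatology
    oa = observed - climatology
    return float(np.sum(fa * oa) / np.sqrt(np.sum(fa ** 2) * np.sum(oa ** 2)))
```

An ACC of 1 means the forecast anomalies vary in perfect proportion to the observed ones; values above roughly 0.6, as quoted in the abstract, are commonly taken to indicate useful skill.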

The Plant–Craig stochastic convection parameterization (version 2.0) is implemented in the Met Office Regional Ensemble Prediction System (MOGREPS-R) and is assessed in comparison with the standard convection scheme, whose only stochastic element is random parameter variation. A set of 34 ensemble forecasts, each with 24 members, is considered over the month of July 2009. Deterministic and probabilistic measures of the precipitation forecasts are assessed. The Plant–Craig parameterization is found to improve probabilistic forecast measures, particularly for lower precipitation thresholds. The impact on deterministic forecasts at the grid scale is neutral, although the Plant–Craig scheme does deliver improvements when forecasts are made over larger areas. The improvements found are greater in conditions of relatively weak synoptic forcing, for which convective precipitation is likely to be less predictable.

In the present work, a group contribution method is proposed for the estimation of the viscosity of fatty compounds and biodiesel esters as a function of temperature. The databank used for regression of the group contribution parameters (1070 values for 65 types of substances) included fatty compounds such as fatty acids, methyl and ethyl esters, alcohols, tri- and diacylglycerols, and glycerol. The inclusion of new experimental data for fatty esters, a partial acylglycerol, and glycerol allowed a further refinement in the performance of this methodology in comparison to a prior group contribution equation (Ceriani, R.; Goncalves, C. B.; Rabelo, J.; Caruso, M.; Cunha, A. C. C.; Cavaleri, F. W.; Batista, E. A. C.; Meirelles, A. J. A. Group contribution model for predicting viscosity of fatty compounds. J. Chem. Eng. Data 2007, 52, 965-972) for all classes of fatty compounds. In addition, the influence of small concentrations of partial acylglycerols, intermediate compounds in the transesterification reaction, on the viscosity of biodiesels was also investigated.
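A group contribution method of this kind expresses a property as a sum of contributions from the functional groups making up a molecule. The sketch below uses a deliberately simplified temperature dependence, ln(eta) = sum over groups of N_k (A_k + B_k/T); the functional form and all parameter values here are illustrative placeholders, not the fitted equation or parameters of the paper.

```python
from math import exp

def ln_viscosity(groups, params, temperature_K):
    """Generic group-contribution estimate of ln(viscosity).

    groups : dict mapping group name -> count N_k in the molecule
    params : dict mapping group name -> (A_k, B_k) contribution parameters
             (hypothetical values, for illustration only)"""
    return sum(n * (params[g][0] + params[g][1] / temperature_K)
               for g, n in groups.items())

def viscosity(groups, params, temperature_K):
    """Viscosity itself, recovered from the logarithmic group sum."""
    return exp(ln_viscosity(groups, params, temperature_K))
```

With positive B_k parameters the predicted viscosity falls as temperature rises, reproducing the qualitative temperature dependence the method is fitted to capture.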

A detailed climatology of cyclogenesis over the Southern Atlantic Ocean (SAO) from 1990 to 1999, and how it is simulated by the RegCM3 (Regional Climate Model version 3), is presented here. The simulation used the National Centers for Environmental Prediction-Department of Energy (NCEP/DOE) reanalysis as initial and boundary conditions. The cyclones were identified with an automatic scheme that searches for cyclonic relative vorticity (ζ₁₀) obtained from the 10-m height wind field. All systems with ζ₁₀ ≤ −1.5 × 10⁻⁵ s⁻¹ and a lifetime of 24 h or longer were considered in the climatology. Over the SAO, 2,760 and 2,787 cyclogeneses were detected in 10 years in the simulation and the NCEP reanalysis, respectively, with annual means of 276.0 ± 11.2 and 278.7 ± 11.1. This result suggests that the RegCM3 has good skill in simulating the cyclogenesis climatology. However, the larger model underestimations (−9.8%) are found for the initially stronger systems (ζ₁₀ ≤ −2.5 × 10⁻⁵ s⁻¹). It was noted that over the SAO the annual cycle of cyclogenesis depends on the initial intensity. Considering systems initiated with ζ₁₀ ≤ −1.5 × 10⁻⁵ s⁻¹, the annual cycle is not well defined and the highest frequency occurs in autumn (summer) in the NCEP (RegCM3) data. The stronger systems (ζ₁₀ ≤ −2.5 × 10⁻⁵ s⁻¹) have a well-characterized high frequency of cyclogenesis during winter in both NCEP and RegCM3. This work confirms the existence of three cyclogenetic regions in the western sector of the SAO, near the east coast of South America, and shows that RegCM3 is able to reproduce the main features of these cyclogenetic areas.
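The automatic identification step described above, a vorticity threshold plus a minimum lifetime, can be sketched as a simple filter over candidate tracks. This is an illustrative reconstruction, assuming each track is a series of ζ₁₀ values at 6-hourly intervals and that the threshold is applied at genesis; the actual scheme may differ in detail.

```python
def detect_cyclones(tracks, vort_thresh=-1.5e-5, min_life_h=24, step_h=6):
    """Filter candidate cyclone tracks: keep those whose genesis relative
    vorticity (zeta_10, negative for Southern Hemisphere cyclonic rotation)
    is at or below the threshold and whose lifetime meets the minimum.

    Each track is a list of zeta_10 values sampled every `step_h` hours."""
    kept = []
    for track in tracks:
        lifetime_h = (len(track) - 1) * step_h   # span from first to last record
        if lifetime_h >= min_life_h and track[0] <= vort_thresh:
            kept.append(track)
    return kept
```

Lowering `vort_thresh` to −2.5 × 10⁻⁵ s⁻¹ selects the "initially stronger" subset analysed separately in the abstract.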

Regional Climate Model version 3 (RegCM3) simulations of 17 summers (1988-2004) over the part of South America south of 5°S were evaluated to identify model systematic errors. Model results were compared to different rainfall data sets (Climate Research Unit (CRU), Climate Prediction Center (CPC), Global Precipitation Climatology Project (GPCP), and National Centers for Environmental Prediction (NCEP) reanalysis), including the five-summer mean (1998-2002) precipitation diurnal cycle observed by the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR). In spite of regional differences, the RegCM3 simulates the main observed aspects of summer climatology associated with precipitation (the northwest-southeast band of the South Atlantic Convergence Zone (SACZ)) and air temperature (warmer air in the central part of the continent and colder air in eastern Brazil and the Andes Mountains). At a regional scale, the main RegCM3 failures are the underestimation of precipitation in the northern branch of the SACZ and some unrealistically intense precipitation around the Andes Mountains. However, the RegCM3 seasonal precipitation is closer to the fine-scale analyses (CPC, CRU, and TRMM-PR) than is the NCEP reanalysis, which presents an incorrect north-south orientation of the SACZ and an overestimation of its intensity. The precipitation diurnal cycle observed by TRMM-PR shows pronounced contrasts between the tropics and extratropics and between land and ocean, and most of these features are simulated by RegCM3. The major similarities between simulation and observation, especially in the diurnal cycle phase, are found over the continental tropical and subtropical SACZ regions, which present an afternoon maximum (1500-1800 UTC) and a morning minimum (0900-1200 UTC). More specifically, over the core of the SACZ, the phase and amplitude of the simulated precipitation diurnal cycle are very close to the TRMM-PR observations.
Although there are amplitude differences, the RegCM3 simulates the observed nighttime rainfall in the eastern Andes Mountains, over the Atlantic Ocean, and over northern Argentina. The main simulation deficiencies are found over the Atlantic Ocean and near the Andes Mountains. Over the Atlantic Ocean the convective scheme is not triggered; the rainfall thus arises from the grid-scale scheme and therefore differs from the TRMM-PR observations. Near the Andes, intense (nighttime and daytime) simulated precipitation could be a response to incorrect circulation and topographic uplift. Finally, it is important to note that, unlike the reported bias of most global models, RegCM3 does not trigger moist convection just after sunrise over the southern part of the Amazon.

The evolution of commodity computing led to the possibility of efficiently using interconnected machines to solve computationally intensive tasks that were previously solvable only by expensive supercomputers. This, however, required new methods for process scheduling and distribution that consider network latency, communication cost, heterogeneous environments, and distributed-computing constraints. An efficient distribution of processes over such environments requires an adequate scheduling strategy, as the cost of inefficient process allocation is unacceptably high. Therefore, knowledge and prediction of application behavior are essential for effective scheduling. In this paper, we review the evolution of scheduling approaches, focusing on distributed environments. We also evaluate current approaches for process behavior extraction and prediction, aiming at selecting an adequate technique for online prediction of application execution. Based on this evaluation, we propose a novel model for application behavior prediction that considers the chaotic properties of such behavior and the automatic detection of critical execution points. The proposed model is applied and evaluated for process scheduling in cluster and grid computing environments. The results demonstrate that prediction of process behavior is essential for efficient scheduling in large-scale and heterogeneous distributed environments, outperforming conventional scheduling policies by a factor of 10, and even more in some cases. Furthermore, the proposed approach proves to be efficient for online prediction due to its low computational cost and good precision.

Process scheduling techniques consider the current load situation when allocating computing resources. These techniques rely on approximations, such as averages of communication, processing, and memory access, to improve process scheduling, although processes may present different behaviors during their execution: they may start with high communication requirements and later shift to mostly processing. By discovering how processes behave over time, we believe it is possible to improve resource allocation. This motivates the present paper, which adopts chaos theory concepts and nonlinear prediction techniques to model and predict process behavior. Results confirm that the radial basis function technique provides good predictions at low processing cost, which is essential in a real distributed environment.
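A radial basis function predictor of the kind favoured by these results can be sketched as kernel regression with Gaussian basis functions fitted by least squares. This is a minimal NumPy illustration; the choice of centers, the gamma width, and the plain least-squares fit are generic assumptions, not the paper's exact configuration.

```python
import numpy as np

def _design(X, centers, gamma):
    """Gaussian RBF design matrix: Phi[i, j] = exp(-gamma * ||x_i - c_j||^2)."""
    sq_dists = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    return np.exp(-gamma * sq_dists)

def rbf_fit(X, y, centers, gamma):
    """Least-squares weights for an RBF expansion of y over the given centers."""
    w, *_ = np.linalg.lstsq(_design(X, centers, gamma), y, rcond=None)
    return w

def rbf_predict(X, centers, gamma, w):
    """Evaluate the fitted RBF model at new points X."""
    return _design(X, centers, gamma) @ w
```

For process-behavior prediction, the inputs X would typically be delay-embedded windows of a measured resource-usage series, and the target y the next value of that series; both the fit and the prediction are cheap, matching the low-overhead requirement highlighted above.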