Abstract:
The mechanisms involved in Atlantic meridional overturning circulation (AMOC) decadal variability and predictability over the last 50 years are analysed in the IPSL–CM5A–LR model using historical and initialised simulations. The initialisation procedure only uses nudging towards sea surface temperature anomalies with a physically based restoring coefficient. When compared to two independent AMOC reconstructions, both the historical and nudged ensemble simulations exhibit skill at reproducing AMOC variations from 1977 onwards, and in particular two maxima occurring respectively around 1978 and 1997. We argue that one source of skill is related to the large Mount Agung volcanic eruption starting in 1963, which reset an internal 20-year variability cycle in the North Atlantic in the model. This cycle involves the East Greenland Current intensity, and advection of active tracers along the subpolar gyre, which leads to an AMOC maximum around 15 years after the Mount Agung eruption. The 1997 maximum occurs approximately 20 years after the former one. The nudged simulations better reproduce this second maximum than the historical simulations. This is due to the initialisation of a cooling of the convection sites in the 1980s under the effect of a persistent North Atlantic oscillation (NAO) positive phase, a feature not captured in the historical simulations. Hence we argue that the 20-year cycle excited by the 1963 Mount Agung eruption together with the NAO forcing both contributed to the 1990s AMOC maximum. These results support the existence of a 20-year cycle in the North Atlantic in the observations. Hindcasts following the CMIP5 protocol are launched from a nudged simulation every 5 years for the 1960–2005 period. They exhibit a significant correlation skill score as compared to an independent reconstruction of the AMOC from the 4-year lead-time average.
This encouraging result is accompanied by increased correlation skills in reproducing the observed 2-m air temperature in the bordering regions of the North Atlantic as compared to non-initialised simulations. To a lesser extent, predicted precipitation tends to correlate with the nudged simulation in the tropical Atlantic. We argue that this skill is due to the initialisation and predictability of the AMOC in the present prediction system. The mechanisms evidenced here support the idea of volcanic eruptions as a pacemaker for internal variability of the AMOC. Together with the existence of a 20-year cycle in the North Atlantic, they offer a novel and complementary explanation for the AMOC variations over the last 50 years.
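The SST-anomaly nudging described above amounts to a restoring heat flux converted into a temperature tendency over a mixed layer. A minimal sketch of one such step; the restoring coefficient, mixed-layer depth, and time step below are illustrative assumptions, not the IPSL–CM5A–LR configuration:

```python
def nudge_sst(t_model, t_obs_anom, gamma=-40.0, rho=1025.0, cp=3990.0,
              h_mld=50.0, dt=86400.0):
    """One explicit nudging step of model SST toward an observed anomaly.

    gamma : restoring coefficient in W m-2 K-1 (illustrative value).
    The restoring heat flux is converted to a temperature tendency using
    the heat capacity of a mixed layer of depth h_mld (m).
    """
    q_restore = gamma * (t_model - t_obs_anom)   # restoring heat flux, W m-2
    dT_dt = q_restore / (rho * cp * h_mld)       # temperature tendency, K s-1
    return t_model + dT_dt * dt
```

With a negative gamma, a model SST warmer than the target is cooled toward it each step, which is the sense of the relaxation used in such initialisation procedures.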
Abstract:
A weekly programme of water quality monitoring has been conducted by Slapton Ley Field Centre since 1970. Samples have been collected from the four main streams draining into Slapton Ley, from the Ley itself and from other sites within the catchment. On occasions, more frequent sampling has been undertaken during short-term research projects, usually in relation to nutrient export from the catchment. These water quality data, unparalleled in length for a series of small drainage basins in the British Isles, provide a unique resource for analysis of spatial and temporal variations in stream water quality within an agricultural area. Not surprisingly, given the eutrophic status of the Ley, most attention has focused on the nutrients nitrate and phosphate. A number of approaches to modelling nutrient loss have been attempted, including time series analysis and the application of nutrient export and physically-based models.
Landscape, regional and global estimates of nitrogen flux from land to sea: errors and uncertainties
Abstract:
Regional to global scale modelling of N flux from land to ocean has progressed to date through the development of simple empirical models representing bulk N flux rates from large watersheds, regions, or continents on the basis of a limited selection of model parameters. Watershed scale N flux modelling has developed a range of physically-based approaches, from models where N flux rates are predicted through a physical representation of the processes involved to catchment scale models which provide a simplified representation of true systems behaviour. Generally, these watershed scale models describe within their structure the dominant process controls on N flux at the catchment or watershed scale, and take into account variations in the extent to which these processes control N flux rates as a function of landscape sensitivity to N cycling and export. This paper addresses the nature of the errors and uncertainties inherent in existing regional to global scale models, and the nature of error propagation associated with upscaling from small catchment to regional scale, through a suite of spatial aggregation and conceptual lumping experiments conducted on a validated watershed scale model, the export coefficient model. Results from the analysis support the findings of other researchers developing macroscale models in allied research fields. Conclusions from the study confirm that reliable and accurate regional scale N flux modelling needs to take account of the heterogeneity of landscapes and the impact that this has on N cycling processes within homogeneous landscape units.
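At its core, the export coefficient model named above sums land-use-specific export rates times areas. A minimal sketch; the coefficients and areas below are hypothetical values for illustration, not calibrated ones:

```python
def export_coefficient_nflux(land_use):
    """Total N flux (kg N yr-1) as the sum over land-use classes of
    export coefficient (kg N ha-1 yr-1) times area (ha)."""
    return sum(coeff * area for coeff, area in land_use.values())

# hypothetical catchment: {class: (export coefficient, area)}
catchment = {
    "arable":    (22.0, 1500.0),
    "grassland": (7.0,  800.0),
    "woodland":  (2.5,  300.0),
}
```

Spatial aggregation experiments of the kind described then amount to lumping these classes into coarser units and comparing the resulting flux with the disaggregated sum.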
Abstract:
Global flood hazard maps can be used in the assessment of flood risk in a number of different applications, including (re)insurance and large scale flood preparedness. Such global hazard maps can be generated using large scale physically based models of rainfall-runoff and river routing, when used in conjunction with a number of post-processing methods. In this study, the European Centre for Medium Range Weather Forecasts (ECMWF) land surface model is coupled to ERA-Interim reanalysis meteorological forcing data, and resultant runoff is passed to a river routing algorithm which simulates floodplains and flood flow across the global land area. The global hazard map is based on a 30 yr (1979–2010) simulation period. A Gumbel distribution is fitted to the annual maxima flows to derive a number of flood return periods. The return periods are calculated initially for a 25×25 km grid, which is then reprojected onto a 1×1 km grid to derive maps of higher resolution and estimate flooded fractional area for the individual 25×25 km cells. Several global and regional maps of flood return periods ranging from 2 to 500 yr are presented. The results compare reasonably to a benchmark data set of global flood hazard. The developed methodology can be applied to other datasets on a global or regional scale.
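The return-period step described above can be sketched by fitting a Gumbel (EV1) distribution to the annual maxima and inverting it. The method-of-moments fit and sample data below are illustrative stand-ins for the study's per-grid-cell procedure:

```python
import numpy as np

def gumbel_return_levels(annual_maxima, return_periods):
    """Fit a Gumbel (EV1) distribution to annual maximum flows by the
    method of moments and invert it for given return periods (years)."""
    x = np.asarray(annual_maxima, float)
    beta = np.sqrt(6.0) * x.std(ddof=1) / np.pi   # scale parameter
    mu = x.mean() - 0.5772 * beta                 # location (Euler-Mascheroni constant)
    T = np.asarray(return_periods, float)
    p = 1.0 - 1.0 / T                             # annual non-exceedance probability
    return mu - beta * np.log(-np.log(p))         # Gumbel quantile function
```

Applied per cell to a 30-yr series of simulated annual maximum flows, this yields the flood magnitudes mapped for each return period.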
Abstract:
The goal of the Chemistry-Climate Model Validation (CCMVal) activity is to improve understanding of chemistry-climate models (CCMs) through process-oriented evaluation and to provide reliable projections of stratospheric ozone and its impact on climate. An appreciation of the details of model formulations is essential for understanding how models respond to the changing external forcings of greenhouse gases and ozone-depleting substances, and hence for understanding the ozone and climate forecasts produced by the models participating in this activity. Here we introduce and review the models used for the second round (CCMVal-2) of this intercomparison, regarding the implementation of chemical, transport, radiative, and dynamical processes in these models. In particular, we review the advantages and problems associated with approaches used to model processes of relevance to stratospheric dynamics and chemistry. Furthermore, we state the definitions of the reference simulations performed, and describe the forcing data used in these simulations. We identify some developments in chemistry-climate modeling that make models more physically based or more comprehensive, including the introduction of an interactive ocean, online photolysis, troposphere-stratosphere chemistry, and non-orographic gravity-wave deposition as linked to tropospheric convection. These relatively new developments indicate that stratospheric CCM modeling is becoming more consistent with our physically based understanding of the atmosphere.
Abstract:
The societal need for reliable climate predictions and a proper assessment of their uncertainties is pressing. Uncertainties arise not only from initial conditions and forcing scenarios, but also from model formulation. Here, we identify and document three broad classes of problems, each representing what we regard to be an outstanding challenge in the area of mathematics applied to the climate system. First, there is the problem of the development and evaluation of simple physically based models of the global climate. Second, there is the problem of the development and evaluation of the components of complex models such as general circulation models. Third, there is the problem of the development and evaluation of appropriate statistical frameworks. We discuss these problems in turn, emphasizing the recent progress made by the papers presented in this Theme Issue. Many pressing challenges in climate science require closer collaboration between climate scientists, mathematicians and statisticians. We hope the papers contained in this Theme Issue will act as inspiration for such collaborations and for setting future research directions.
Assessment of the Wind Gust Estimate Method in mesoscale modelling of storm events over West Germany
Abstract:
A physically based gust parameterisation is added to the atmospheric mesoscale model FOOT3DK to estimate wind gusts associated with storms over West Germany. The gust parameterisation follows the Wind Gust Estimate (WGE) method and its functionality is verified in this study. The method assumes that gusts occurring at the surface are induced by turbulent eddies in the planetary boundary layer, deflecting air parcels from higher levels down to the surface under suitable conditions. Model simulations are performed with horizontal resolutions of 20 km and 5 km. Ten historical storm events of different characteristics and intensities are chosen in order to include a wide range of typical storms affecting Central Europe. All simulated storms occurred between 1990 and 1998. The accuracy of the method is assessed objectively by validating the simulated wind gusts against data from 16 synoptic stations by means of “quality parameters”. Concerning these parameters, the temporal and spatial evolution of the simulated gusts is well reproduced. Simulated values for low altitude stations agree particularly well with the measured gusts. For orographically exposed locations, the gust speeds are partly underestimated. The absolute maximum gusts lie in most cases within the bounding interval given by the WGE method. Focussing on individual storms, the performance of the method is better for intense and large storms than for weaker ones. Particularly for weaker storms, the gusts are typically overestimated. The results for the sample of ten storms document that the method is generally applicable with the mesoscale model FOOT3DK for mid-latitude winter storms, even in areas with complex orography.
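The WGE assumption, that a parcel aloft reaches the surface as a gust when local turbulence exceeds the buoyant energy holding it up, can be sketched as a per-level energy comparison. This is a simplified illustration of the idea, not the FOOT3DK implementation, and the profile values in the test are invented:

```python
import numpy as np

def wge_gust(z, wind, tke, theta_v, theta_v_sfc, g=9.81):
    """Surface gust estimate: a parcel at height z is deflectable to the
    surface when local turbulent kinetic energy (J kg-1) exceeds the
    buoyant energy barrier below it; the gust is the largest mean wind
    among deflectable levels."""
    z = np.asarray(z, float)
    wind = np.asarray(wind, float)
    tke = np.asarray(tke, float)
    theta_v = np.asarray(theta_v, float)
    # layer-mean virtual potential temperature excess below each level
    mean_dtheta = np.array([(theta_v[:i + 1] - theta_v_sfc).mean()
                            for i in range(z.size)])
    # buoyant energy (J kg-1) needed to bring the parcel down to the surface
    buoyant_energy = g * z * np.maximum(mean_dtheta, 0.0) / theta_v_sfc
    deflectable = tke >= buoyant_energy
    return float(wind[deflectable].max()) if deflectable.any() else float(wind[0])
```

In stable conditions the buoyancy barrier grows with height, so only near-surface levels contribute; in well-mixed storm conditions stronger winds aloft become reachable, which is why the method performs best for intense storms.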
Abstract:
Quantitative simulations of the global-scale benefits of climate change mitigation are presented, using a harmonised, self-consistent approach based on a single set of climate change scenarios. The approach draws on a synthesis of output from both physically-based and economics-based models, and incorporates uncertainty analyses. Previous studies have projected global and regional climate change and its impacts over the 21st century but have generally focused on analysis of business-as-usual scenarios, with no explicit mitigation policy included. This study finds that both the economics-based and physically-based models indicate that early, stringent mitigation would avoid a large proportion of the impacts of climate change projected for the 2080s. However, it also shows that not all the impacts can now be avoided, so that adaptation would also therefore be needed to avoid some of the potential damage. Delay in mitigation substantially reduces the percentage of impacts that can be avoided, providing strong new quantitative evidence for the need for stringent and prompt global mitigation action on greenhouse gas emissions, combined with effective adaptation, if large, widespread climate change impacts are to be avoided. Energy technology models suggest that such stringent and prompt mitigation action is technologically feasible, although the estimated costs vary depending on the specific modelling approach and assumptions.
Abstract:
Numerical Weather Prediction (NWP) fields are used to assist the detection of cloud in satellite imagery. Simulated observations based on NWP are used within a framework based on Bayes' theorem to calculate a physically-based probability of each pixel within an imaged scene being clear or cloudy. Different thresholds can be set on the probabilities to create application-specific cloud masks. Here, this is done over both land and ocean using night-time (infrared) imagery. We use a validation dataset of difficult cloud detection targets for the Spinning Enhanced Visible and Infrared Imager (SEVIRI), achieving true skill scores of 87% and 48% for ocean and land, respectively, using the Bayesian technique, compared to 74% and 39%, respectively, for the threshold-based techniques associated with the validation dataset.
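The per-pixel Bayesian calculation combines an NWP-simulated clear-sky observation with a prior via Bayes' theorem. A minimal sketch assuming a Gaussian clear-sky error model and a flat cloudy likelihood; these distributional choices and the numbers are illustrative, not the operational SEVIRI scheme:

```python
import math

def clear_probability(obs_bt, sim_clear_bt, sigma, p_clear=0.7):
    """Posterior probability that a pixel is clear, given an observed
    brightness temperature obs_bt (K) and the NWP-simulated clear-sky
    value sim_clear_bt with Gaussian error sigma."""
    # likelihood of the observation if the pixel is clear
    l_clear = (math.exp(-0.5 * ((obs_bt - sim_clear_bt) / sigma) ** 2)
               / (sigma * math.sqrt(2.0 * math.pi)))
    # crude cloudy likelihood: cloud tops span a wide range of brightness
    # temperatures, approximated here as uniform over a 100 K window
    l_cloudy = 1.0 / 100.0
    num = l_clear * p_clear
    return num / (num + l_cloudy * (1.0 - p_clear))
```

Thresholding this posterior at different levels then yields the application-specific cloud masks mentioned in the abstract.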
Abstract:
Numerical Weather Prediction (NWP) fields are used to assist the detection of cloud in satellite imagery. Simulated observations based on NWP are used within a framework based on Bayes' theorem to calculate a physically-based probability of each pixel within an imaged scene being clear or cloudy. Different thresholds can be set on the probabilities to create application-specific cloud masks. Here, the technique is shown to be suitable for daytime applications over land and sea, using visible and near-infrared imagery, in addition to thermal infrared. We use a validation dataset of difficult cloud detection targets for the Spinning Enhanced Visible and Infrared Imager (SEVIRI), achieving true skill scores of 89% and 73% for ocean and land, respectively, using the Bayesian technique, compared to 90% and 70%, respectively, for the threshold-based techniques associated with the validation dataset.
Abstract:
The effect of diurnal variations in sea surface temperature (SST) on the air-sea flux of CO2 over the central Atlantic Ocean and Mediterranean Sea (60° S–60° N, 60° W–45° E) is evaluated for 2005–2006. We use high spatial resolution hourly satellite ocean skin temperature data to determine the diurnal warming (ΔSST). The CO2 flux is then computed using three different temperature fields: a foundation temperature (Tf, measured at a depth where there is no diurnal variation), Tf plus the hourly ΔSST, and Tf plus the monthly average of the ΔSSTs. This is done in conjunction with a physically-based parameterisation for the gas transfer velocity (NOAA-COARE). The differences between the fluxes evaluated for these three different temperature fields quantify the effects of both diurnal warming and diurnal covariations. We find that including diurnal warming increases the CO2 flux out of this region of the Atlantic for 2005–2006 from 9.6 Tg C a−1 to 30.4 Tg C a−1 (hourly ΔSST) and 31.2 Tg C a−1 (monthly average of ΔSST measurements). Diurnal warming in this region therefore has a large impact on the annual net CO2 flux, but diurnal covariations are negligible. However, in this region of the Atlantic the uptake and outgassing of CO2 are approximately balanced over the annual cycle, so although we find diurnal warming has a very large effect here, the Atlantic as a whole is a very strong carbon sink (e.g. −920 Tg C a−1; Takahashi et al., 2002), making this a small contribution to the Atlantic carbon budget.
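The role of diurnal warming in the bulk flux calculation can be illustrated with the standard form F = k·s·(pCO2_water − pCO2_air), adjusting the water-side pCO2 for the skin-temperature excess using the well-known ~4.23 % K⁻¹ sensitivity (Takahashi et al., 1993). Treating k and s as fixed scalars is a deliberate simplification of the NOAA-COARE parameterisation:

```python
def co2_flux(k, solubility, pco2_water, pco2_air, dsst=0.0):
    """Bulk air-sea CO2 flux: gas transfer velocity k times solubility s
    times the air-sea pCO2 difference, with the water-side pCO2 adjusted
    for diurnal warming dsst (K) via the ~4.23 % per K sensitivity.
    Positive flux is out of the ocean."""
    pco2_w = pco2_water * (1.0 + 0.0423) ** dsst   # warm-skin correction
    return k * solubility * (pco2_w - pco2_air)
```

Because the temperature correction is multiplicative, even a few tenths of a kelvin of daytime warming shifts a near-balanced region toward outgassing, which is the effect quantified in the abstract.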
Abstract:
Urbanization, the expansion of built-up areas, is an important yet less-studied aspect of land use/land cover change in climate science. To date, most global climate models used to evaluate effects of land use/land cover change on climate do not include an urban parameterization. Here, the authors describe the formulation and evaluation of a parameterization of urban areas that is incorporated into the Community Land Model, the land surface component of the Community Climate System Model. The model is designed to be simple enough to be compatible with structural and computational constraints of a land surface model coupled to a global climate model yet complex enough to explore physically based processes known to be important in determining urban climatology. The city representation is based upon the “urban canyon” concept, which consists of roofs, sunlit and shaded walls, and canyon floor. The canyon floor is divided into pervious (e.g., residential lawns, parks) and impervious (e.g., roads, parking lots, sidewalks) fractions. Trapping of longwave radiation by canyon surfaces and solar radiation absorption and reflection is determined by accounting for multiple reflections. Separate energy balances and surface temperatures are determined for each canyon facet. A one-dimensional heat conduction equation is solved numerically for a 10-layer column to determine conduction fluxes into and out of canyon surfaces. Model performance is evaluated against measured fluxes and temperatures from two urban sites. Results indicate the model does a reasonable job of simulating the energy balance of cities.
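The heat conduction step described above can be sketched with a simple explicit finite-difference column. Layer thickness, diffusivity, and volumetric heat capacity below are illustrative values, and the CLM urban scheme solves the same equation implicitly rather than explicitly:

```python
import numpy as np

def step_conduction(T, dz, dt, kappa=6.0e-7, rho_c=2.0e6, flux_top=0.0):
    """One explicit step of 1-D heat conduction through a uniform column
    (e.g. a roof or canyon floor), with a prescribed downward heat flux
    at the top surface and zero flux at the bottom.

    T       : layer temperatures (K or degC), e.g. 10 layers
    kappa   : thermal diffusivity (m2 s-1); rho_c: heat capacity (J m-3 K-1)
    """
    T = np.asarray(T, float)
    k = kappa * rho_c                          # thermal conductivity, W m-1 K-1
    q = -k * np.diff(T) / dz                   # interior interface fluxes, W m-2
    q_full = np.concatenate(([flux_top], q, [0.0]))
    dT = -np.diff(q_full) * dt / (rho_c * dz)  # flux divergence per layer
    return T + dT
```

The explicit form requires kappa·dt/dz² ≤ 0.5 for stability, one reason land surface schemes prefer an implicit solve for thin surface layers.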
Abstract:
The Bollène-2002 Experiment was aimed at developing the use of a radar volume-scanning strategy for conducting radar rainfall estimations in the mountainous regions of France. A developmental radar processing system, called Traitements Régionalisés et Adaptatifs de Données Radar pour l’Hydrologie (Regionalized and Adaptive Radar Data Processing for Hydrological Applications), has been built and several algorithms were specifically produced as part of this project. These algorithms include 1) a clutter identification technique based on the pulse-to-pulse variability of reflectivity Z for noncoherent radar, 2) a coupled procedure for determining a rain partition between convective and widespread rainfall R and the associated normalized vertical profiles of reflectivity, and 3) a method for calculating reflectivity at ground level from reflectivities measured aloft. Several radar processing strategies, including nonadaptive, time-adaptive, and space–time-adaptive variants, have been implemented to assess the performance of these new algorithms. Reference rainfall data were derived from a careful analysis of rain gauge datasets furnished by the Cévennes–Vivarais Mediterranean Hydrometeorological Observatory. The assessment criteria for five intense and long-lasting Mediterranean rain events have proven that good quantitative precipitation estimates can be obtained from radar data alone within 100-km range by using well-sited, well-maintained radar systems and sophisticated, physically based data-processing systems. The basic requirements entail performing accurate electronic calibration and stability verification, determining the radar detection domain, achieving efficient clutter elimination, and capturing the vertical structure(s) of reflectivity for the target event. 
Radar performance was shown to depend on type of rainfall, with better results obtained with deep convective rain systems (Nash coefficients of roughly 0.90 for point radar–rain gauge comparisons at the event time step), as opposed to shallow convective and frontal rain systems (Nash coefficients in the 0.6–0.8 range). In comparison with time-adaptive strategies, the space–time-adaptive strategy yields a very significant reduction in the radar–rain gauge bias while the level of scatter remains basically unchanged. Because the Z–R relationships have not been optimized in this study, results are attributed to an improved processing of spatial variations in the vertical profile of reflectivity. The two main recommendations for future work consist of adapting the rain separation method for radar network operations and documenting Z–R relationships conditional on rainfall type.
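The Z–R conversion that the study deliberately left unoptimized is a simple power law between reflectivity and rain rate. A sketch using the classic Marshall–Palmer coefficients, which are illustrative defaults rather than the values appropriate for these Mediterranean events:

```python
def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Convert radar reflectivity (dBZ) to rain rate R (mm h-1) by
    inverting the power law Z = a * R**b, with Z in linear units
    (mm6 m-3)."""
    z_lin = 10.0 ** (dbz / 10.0)        # dBZ -> linear reflectivity
    return (z_lin / a) ** (1.0 / b)     # invert Z = a * R**b
```

Conditioning a and b on rainfall type (deep convective vs. shallow/frontal), as the authors recommend, would amount to selecting different coefficient pairs before applying this inversion.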
Abstract:
As weather and climate models move toward higher resolution, there is growing excitement about potential future improvements in the understanding and prediction of atmospheric convection and its interaction with larger-scale phenomena. A meeting in January 2013 in Dartington, Devon, was convened to address the best way to maximise these improvements, specifically in a UK context but with international relevance. Specific recommendations included increased convective-scale observations, high-resolution virtual laboratories, and a system of parameterization test beds with a range of complexities. The main recommendation was to facilitate the development of physically based convective parameterizations that are scale-aware, non-local, non-equilibrium, and stochastic.
Abstract:
Flash floods pose a significant danger to life and property. Unfortunately, in arid and semiarid environments runoff generation shows a complex non-linear behavior with a strong spatial and temporal non-uniformity. As a result, the predictions made by physically-based simulations in semiarid areas are subject to great uncertainty, and a failure in the predictive behavior of existing models is common. Thus better descriptions of physical processes at the watershed scale need to be incorporated into the hydrological model structures. For example, terrain relief has been systematically considered static in flood modelling at the watershed scale. Here, we show that the integrated effect of small distributed relief variations originated through concurrent hydrological processes within a storm event was significant on the watershed scale hydrograph. We model these observations by introducing dynamic formulations of two relief-related parameters at diverse scales: maximum depression storage, and roughness coefficient in channels. In the final (a posteriori) model structure these parameters are allowed to be either time-constant or time-varying. The case under study is a convective storm in a semiarid Mediterranean watershed with ephemeral channels and high agricultural pressures (the Rambla del Albujón watershed; 556 km²), which showed a complex multi-peak response. First, to obtain quasi-sensible simulations in the (a priori) model with time-constant relief-related parameters, a spatially distributed parameterization was strictly required. Second, a generalized likelihood uncertainty estimation (GLUE) inference applied to the improved model structure, and conditioned on observed nested hydrographs, showed that accounting for dynamic relief-related parameters led to improved simulations.
The discussion is finally broadened by considering the use of the calibrated model both to analyze the sensitivity of the watershed to storm motion and to attempt the flood forecasting of a stratiform event with highly different behavior.
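The GLUE inference used above follows a standard recipe: sample many parameter sets, score each simulation against observations with an informal likelihood, retain the "behavioural" sets above a threshold, and normalise their scores into weights. A minimal sketch; the Nash–Sutcliffe likelihood measure, the threshold, and the toy simulator in the test are illustrative choices, not the study's exact setup:

```python
import numpy as np

def glue(simulator, parameter_sets, observed, threshold=0.5):
    """Generalized likelihood uncertainty estimation (GLUE) sketch.
    Returns the behavioural simulations and their normalised weights."""
    obs = np.asarray(observed, float)
    denom = np.sum((obs - obs.mean()) ** 2)
    sims, scores = [], []
    for theta in parameter_sets:
        sim = np.asarray(simulator(theta), float)
        # Nash-Sutcliffe efficiency as an informal likelihood measure
        nse = 1.0 - np.sum((sim - obs) ** 2) / denom
        if nse > threshold:               # behavioural parameter set
            sims.append(sim)
            scores.append(nse)
    weights = np.array(scores) / np.sum(scores)
    return np.array(sims), weights
```

Weighted quantiles of the retained simulations then give the uncertainty bounds against which the dynamic and static relief-parameter formulations can be compared.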