37 results for Physically-based animation
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
The extent and thickness of the Arctic sea ice cover has decreased dramatically in the past few decades, with minima in sea ice extent in September 2005 and 2007. These minima were not predicted in the IPCC AR4 report, suggesting that the sea ice component of climate models should more realistically represent the processes controlling the sea ice mass balance. One of the processes poorly represented in sea ice models is the formation and evolution of melt ponds. Melt ponds accumulate on the surface of sea ice from snow and sea ice melt, and their presence reduces the albedo of the ice cover, leading to further melt. Toward the end of the melt season, melt ponds cover up to 50% of the sea ice surface. We have developed a melt pond evolution theory. Here, we have incorporated this melt pond theory into the Los Alamos CICE sea ice model, which has required us to include the refreezing of melt ponds. We present results showing that the presence, or otherwise, of a representation of melt ponds has a significant effect on the predicted sea ice thickness and extent. We also present a sensitivity study to uncertainty in the sea ice permeability, the number of thickness categories in the model representation, the meltwater redistribution scheme, and the pond albedo. We conclude with a recommendation that our melt pond scheme be included in sea ice models, and that the number of thickness categories be increased and concentrated at lower thicknesses.
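The albedo feedback described above can be sketched as a simple area-weighted average; the albedo values below are illustrative placeholders, not the tuned CICE parameters (CICE's actual albedo treatment is spectral and state-dependent):

```python
def surface_albedo(pond_fraction, alpha_ice=0.65, alpha_pond=0.25):
    """Area-weighted albedo of a partially ponded sea ice surface.

    alpha_ice and alpha_pond are illustrative values only, not
    parameters from the CICE model.
    """
    if not 0.0 <= pond_fraction <= 1.0:
        raise ValueError("pond_fraction must lie in [0, 1]")
    return (1.0 - pond_fraction) * alpha_ice + pond_fraction * alpha_pond

albedo_bare = surface_albedo(0.0)    # bare ice
albedo_ponded = surface_albedo(0.5)  # late-season 50% pond cover
```

A lower albedo means more absorbed shortwave radiation and further melt, which is the positive feedback that makes the pond representation matter for predicted thickness and extent.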
Abstract:
We propose and demonstrate a fully probabilistic (Bayesian) approach to the detection of cloudy pixels in thermal infrared (TIR) imagery observed from satellite over oceans. Using this approach, we show how to exploit the prior information and the fast forward-modelling capability that are typically available in the operational context to obtain improved cloud detection. The probability of clear sky for each pixel is estimated by applying Bayes' theorem, and we describe how to apply Bayes' theorem to this problem in general terms. Joint probability density functions (PDFs) of the observations in the TIR channels are needed; the PDFs for clear conditions are calculable from forward modelling, and those for cloudy conditions have been obtained empirically. Using analysis fields from numerical weather prediction as prior information, we apply the approach to imagery representative of imagers on polar-orbiting platforms. In comparison with the established cloud-screening scheme, the new technique decreases both the rate of failure to detect cloud contamination and the false-alarm rate by one quarter. The rate of occurrence of cloud-screening-related errors of >1 K in area-averaged sea surface temperatures (SSTs) is reduced by 83%. Copyright © 2005 Royal Meteorological Society.
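The per-pixel calculation reduces to a two-hypothesis application of Bayes' theorem; this sketch assumes the likelihoods and prior are already available (from forward modelling, empirical PDFs, and NWP analysis fields respectively):

```python
def p_clear(obs_likelihood_clear, obs_likelihood_cloudy, prior_clear):
    """Posterior probability of clear sky for one pixel via Bayes' theorem.

    obs_likelihood_clear : p(y | clear), e.g. from fast forward modelling
    obs_likelihood_cloudy: p(y | cloudy), e.g. from an empirical PDF
    prior_clear          : prior probability of clear sky, e.g. from NWP
    """
    prior_cloudy = 1.0 - prior_clear
    evidence = (obs_likelihood_clear * prior_clear
                + obs_likelihood_cloudy * prior_cloudy)
    return obs_likelihood_clear * prior_clear / evidence
```

A pixel would then be flagged as cloud-contaminated when the posterior falls below some screening threshold.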
Abstract:
Remote sensing is the only practicable means to observe snow at large scales. Measurements from passive microwave instruments have been used to derive snow climatology since the late 1970s, but the algorithms used were limited by the computational power of the era. Simplifications such as the assumption of constant snow properties enabled snow mass to be retrieved from the microwave measurements, but large errors arise from those assumptions, which are still used today. A better approach is to perform retrievals within a data assimilation framework, where a physically-based model of the snow properties can be used to produce the best estimate of the snow cover, in conjunction with multi-sensor observations of quantities such as grain size, surface temperature, and microwave radiation. We have extended an existing snow model, SNOBAL, to incorporate mass and energy transfer in the soil and to simulate the growth of the snow grains. An evaluation of this model is presented, and techniques for the development of new retrieval systems are discussed.
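The constant-property retrieval criticised here is essentially a static spectral-difference formula; the coefficient below is an illustrative stand-in for the historical values, not one endorsed by the abstract:

```python
def swe_static(tb_19h, tb_37h, coeff=4.8):
    """Spectral-difference retrieval of snow water equivalent (mm).

    tb_19h, tb_37h: brightness temperatures (K) at 19 and 37 GHz,
    horizontal polarisation. coeff bundles assumed-constant snow
    properties (grain size, density); 4.8 is illustrative. Treating
    it as constant is the simplification that produces large errors.
    """
    return max(coeff * (tb_19h - tb_37h), 0.0)
```

The data assimilation approach replaces the fixed coefficient with a physically-based snow model whose grain size and density evolve in time.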
Abstract:
In this paper, the available potential energy (APE) framework of Winters et al. (J. Fluid Mech., vol. 289, 1995, p. 115) is extended to the fully compressible Navier–Stokes equations, with the aims of clarifying (i) the nature of the energy conversions taking place in turbulent thermally stratified fluids; and (ii) the role of surface buoyancy fluxes in the Munk & Wunsch (Deep-Sea Res., vol. 45, 1998, p. 1977) constraint on the mechanical energy sources of stirring required to maintain diapycnal mixing in the oceans. The new framework reveals that the observed turbulent rate of increase in the background gravitational potential energy GPEr, commonly thought to occur at the expense of the diffusively dissipated APE, actually occurs at the expense of internal energy, as in the laminar case. The APE dissipated by molecular diffusion, on the other hand, is found to be converted into internal energy (IE), similar to the viscously dissipated kinetic energy KE. Turbulent stirring, therefore, does not introduce a new APE/GPEr mechanical-to-mechanical energy conversion, but simply enhances the existing IE/GPEr conversion rate, in addition to enhancing the viscous dissipation and the entropy production rates. This, in turn, implies that molecular diffusion contributes to the dissipation of the available mechanical energy ME = APE + KE, along with viscous dissipation. This result has important implications for the interpretation of the concepts of mixing efficiency γmixing and flux Richardson number Rf, for which new physically based definitions are proposed and contrasted with previous definitions.
The new framework allows for a more rigorous and general re-derivation from first principles of the Munk & Wunsch (1998, hereafter MW98) constraint, also valid for a non-Boussinesq ocean: G(KE) ≈ [(1 − ξRf)/(ξRf)] Wr,forcing = {[1 + (1 − ξ)γmixing]/(ξγmixing)} Wr,forcing, where G(KE) is the work rate done by the mechanical forcing, Wr,forcing is the rate of loss of GPEr due to high-latitude cooling, and ξ is a nonlinearity parameter such that ξ = 1 for a linear equation of state (as considered by MW98), but ξ < 1 otherwise. The most important result is that G(APE), the work rate done by the surface buoyancy fluxes, must be numerically as large as Wr,forcing and, therefore, as important as the mechanical forcing in stirring and driving the oceans. As a consequence, the overall mixing efficiency of the oceans is likely to be larger than the value γmixing = 0.2 presently used, thereby possibly eliminating the apparent shortfall in mechanical stirring energy that results from using γmixing = 0.2 in the above formula.
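The two forms of the constraint are algebraically equivalent under the standard relation γmixing = Rf/(1 − Rf); a minimal numerical check, with all rates in arbitrary units:

```python
def g_ke_rf(rf, xi, w_forcing):
    """G(KE) from the generalised MW98 constraint, flux-Richardson-number form."""
    return (1.0 - xi * rf) / (xi * rf) * w_forcing

def g_ke_gamma(gamma, xi, w_forcing):
    """Same constraint in mixing-efficiency form, with gamma = Rf / (1 - Rf)."""
    return (1.0 + (1.0 - xi) * gamma) / (xi * gamma) * w_forcing

rf = 1.0 / 6.0               # corresponds to gamma_mixing = 0.2
gamma = rf / (1.0 - rf)
linear_eos = g_ke_rf(rf, 1.0, 1.0)   # xi = 1 recovers the linear-EOS MW98 case
nonlinear = g_ke_rf(rf, 0.9, 1.0)    # xi < 1, nonlinear equation of state
```

With ξ = 1 and γmixing = 0.2 the prefactor is 5, i.e. the mechanical forcing must supply five times Wr,forcing; a larger mixing efficiency shrinks that requirement, which is the shortfall argument made in the abstract.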
Abstract:
In this paper we argue that physically based equations should be combined with remote sensing techniques to enable a more theoretically rigorous estimation of area-average soil heat flux, G. A standard physical equation (i.e. the analytical or exact method) for the estimation of G, in combination with a simple, but theoretically derived, equation for soil thermal inertia (Γ), provides the basis for a more transparent and readily interpretable method for the estimation of G, without the requirement for in situ instrumentation. Moreover, such an approach ensures a more universally applicable method than those derived from purely empirical studies (employing vegetation indices and albedo, for example). Hence, a new equation for the estimation of Γ (for homogeneous soils) is discussed in this paper which only requires knowledge of soil type, which is readily obtainable from extant soil databases and surveys, in combination with a coarse estimate of moisture status. This approach can be used to obtain area-averaged estimates of Γ (and thus G, as explained in paper II), which is important for large-scale energy balance studies that employ aircraft or satellite data. Furthermore, this method also relaxes the instrumental demand for studies at the plot and field scale (no requirement for in situ soil temperature sensors, soil heat flux plates and/or thermal conductivity sensors). In addition, this equation can be incorporated in soil-vegetation-atmosphere-transfer models that use the force-restore method to update surface temperatures (such as the well-known ISBA model), to replace the thermal inertia coefficient.
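The thermal inertia at the heart of this method follows from two bulk soil properties; a minimal sketch, with illustrative values for a moist loam rather than anything taken from the soil databases the paper uses:

```python
import math

def thermal_inertia(conductivity, heat_capacity):
    """Soil thermal inertia Gamma = sqrt(lambda * C), in J m^-2 K^-1 s^-1/2.

    conductivity  : thermal conductivity lambda (W m^-1 K^-1)
    heat_capacity : volumetric heat capacity C (J m^-3 K^-1)
    Both depend on soil texture and moisture; the example values used
    below are illustrative, not taken from the paper.
    """
    return math.sqrt(conductivity * heat_capacity)

gamma_loam = thermal_inertia(1.0, 2.5e6)  # hypothetical moist loam
```

Since both λ and C rise with moisture content, Γ can be tabulated per soil type and then adjusted using the coarse moisture estimate the abstract mentions.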
Abstract:
Models of snow processes in areas of possible large-scale change need to be site-independent and physically based. Here, the accumulation and ablation of the seasonal snow cover beneath a fir canopy has been simulated with a new physically based snow-soil-vegetation-atmosphere transfer scheme (Snow-SVAT) called SNOWCAN. The model was formulated by coupling a canopy optical and thermal radiation model to a physically based multilayer snow model. Simple representations of other forest effects were included. These include the reduction of wind speed and hence turbulent transfer beneath the canopy, sublimation of intercepted snow, and deposition of debris on the surface. This paper tests this new modeling approach fully at a fir site within Reynolds Creek Experimental Watershed, Idaho. Model parameters were determined at an open site and subsequently applied to the fir site. SNOWCAN was evaluated using measurements of snow depth, subcanopy solar and thermal radiation, and snowpack profiles of temperature, density, and grain size. Simulations showed good agreement with observations (e.g., fir site snow depth was estimated over the season with r^2 = 0.96), generally to within measurement error. However, the simulated temperature profiles were less accurate after a melt-freeze event, when the temperature discrepancy resulted from underestimation of the rate of liquid water flow and/or the rate of refreeze. This indicates both that the general modeling approach is applicable and that a still more complete representation of liquid water in the snowpack will be important.
Abstract:
A new snow-soil-vegetation-atmosphere transfer (Snow-SVAT) scheme, which simulates the accumulation and ablation of the snow cover beneath a forest canopy, is presented. The model was formulated by coupling a canopy optical and thermal radiation model to a physically-based multi-layer snow model. This canopy radiation model is physically-based yet requires few parameters, so can be used when extensive in-situ field measurements are not available. Other forest effects such as the reduction of wind speed, interception of snow on the canopy and the deposition of litter were incorporated within this combined model, SNOWCAN, which was tested with data taken as part of the Boreal Ecosystem-Atmosphere Study (BOREAS) international collaborative experiment. Snow depths beneath four different canopy types and at an open site were simulated. Agreement between observed and simulated snow depths was generally good, with correlation coefficients ranging between r^2=0.94 and r^2=0.98 for all sites where automatic measurements were available. However, the simulated date of total snowpack ablation generally occurred later than the observed date. A comparison between simulated solar radiation and limited measurements of sub-canopy radiation at one site indicates that the model simulates the sub-canopy downwelling solar radiation early in the season to within measurement uncertainty.
Abstract:
The differential phase (ΦDP) measured by polarimetric radars is recognized to be a very good indicator of the path-integrated attenuation by rain. Moreover, if a linear relationship is assumed between the specific differential phase (KDP) and the specific attenuation (AH) and specific differential attenuation (ADP), then attenuation can easily be corrected. The coefficients of proportionality, γH and γDP, are, however, known to depend in rain upon drop temperature, drop shapes, drop size distribution, and the presence of large drops causing Mie scattering. In this paper, the authors extensively apply a physically based method, often referred to as the “Smyth and Illingworth constraint,” which uses the constraint that the value of the differential reflectivity ZDR on the far side of the storm should be low to retrieve the γDP coefficient. More than 30 convective episodes observed by the French operational C-band polarimetric Trappes radar during two summers (2005 and 2006) are used to document the variability of γDP with respect to the intrinsic three-dimensional characteristics of the attenuating cells. The Smyth and Illingworth constraint could be applied to only 20% of all attenuated rays of the 2-yr dataset, so it cannot be considered the unique solution for attenuation correction in an operational setting, but it is useful for characterizing the properties of the strongly attenuating cells. The range of variation of γDP is shown to be extremely large, with minimal, maximal, and mean values being, respectively, equal to 0.01, 0.11, and 0.025 dB °−1. Coefficient γDP appears to be almost linearly correlated with the horizontal reflectivity (ZH), differential reflectivity (ZDR), specific differential phase (KDP), and correlation coefficient (ρHV) of the attenuating cells. The temperature effect is negligible with respect to that of the microphysical properties of the attenuating cells.
Unusually large values of γDP, above 0.06 dB °−1, often referred to as “hot spots,” are reported for 15%—a nonnegligible figure—of the rays presenting a significant total differential phase shift (ΔϕDP > 30°). The corresponding strongly attenuating cells are shown to have extremely high ZDR (above 4 dB) and ZH (above 55 dBZ), very low ρHV (below 0.94), and high KDP (above 4° km−1). Analysis of 4 yr of observed raindrop spectra does not reproduce such low values of ρHV, suggesting that (wet) ice is likely to be present in the precipitation medium and responsible for the attenuation and high phase shifts. Furthermore, if melting ice is responsible for the high phase shifts, this suggests that KDP may not be uniquely related to rainfall rate but can result from the presence of wet ice. This hypothesis is supported by the analysis of the vertical profiles of horizontal reflectivity and the values of conventional probability of hail indexes.
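Under the linear assumption described above, the reflectivity correction itself is one line: the path-integrated attenuation is the coefficient times the accumulated differential phase. The γH value below is an illustrative C-band number; the abstract's central point is that treating such coefficients as fixed is unsafe in "hot spot" cells:

```python
def correct_attenuation(z_h, delta_phi_dp, gamma_h=0.08):
    """Linear PhiDP-based attenuation correction of reflectivity.

    z_h          : measured horizontal reflectivity (dBZ)
    delta_phi_dp : accumulated differential phase along the path (deg)
    gamma_h      : proportionality coefficient (dB per deg); 0.08 is an
                   illustrative C-band value, not a recommended constant.
    Assumes A_H = gamma_h * K_DP, so the two-way path-integrated
    attenuation is gamma_h * delta_phi_dp.
    """
    return z_h + gamma_h * delta_phi_dp
```

An analogous correction with γDP applies to differential reflectivity; the paper's retrieved γDP range of 0.01 to 0.11 dB °−1 shows how much a single fixed value can err.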
Abstract:
Six land surface models and five global hydrological models participate in a model intercomparison project (WaterMIP), which for the first time compares simulation results of these different classes of models in a consistent way. In this paper the simulation setup is described and aspects of the multi-model global terrestrial water balance are presented. All models were run at 0.5 degree spatial resolution for the global land areas for a 15-year period (1985-1999) using a newly-developed global meteorological dataset. Simulated global terrestrial evapotranspiration, excluding Greenland and Antarctica, ranges from 415 to 586 mm year-1 (60,000 to 85,000 km3 year-1) and simulated runoff ranges from 290 to 457 mm year-1 (42,000 to 66,000 km3 year-1). Both the mean and median runoff fractions for the land surface models are lower than those of the global hydrological models, although the range is wider. Significant simulation differences between land surface and global hydrological models are found to be caused by the snow scheme employed. The physically-based energy balance approach used by land surface models generally results in lower snow water equivalent values than the conceptual degree-day approach used by global hydrological models. Some differences in simulated runoff and evapotranspiration are explained by model parameterizations, although the processes included and parameterizations used are not distinct to either land surface models or global hydrological models. The results show that differences between models are major sources of uncertainty. Climate change impact studies thus need to use not only multiple climate models, but also some other measure of uncertainty (e.g. multiple impact models).
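The conceptual degree-day melt scheme used by the global hydrological models can be sketched in a few lines; the factor below is an illustrative mid-range value, not a parameter from any WaterMIP model:

```python
def degree_day_melt(temp_c, ddf=4.0, t_threshold=0.0):
    """Daily snowmelt (mm water equivalent) from a degree-day scheme.

    temp_c      : daily mean air temperature (degC)
    ddf         : degree-day factor (mm w.e. per degC per day); 4.0 is
                  an illustrative value, not from any WaterMIP model
    t_threshold : temperature above which melt occurs (degC)
    """
    return ddf * max(temp_c - t_threshold, 0.0)
```

Energy-balance schemes instead sum radiative and turbulent fluxes explicitly, which is why the two model classes diverge in simulated snow water equivalent.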
Abstract:
This paper describes a method that employs Earth Observation (EO) data to calculate spatiotemporal estimates of soil heat flux, G, using a physically-based method (the Analytical Method). The method involves a harmonic analysis of land surface temperature (LST) data. It also requires an estimate of near-surface soil thermal inertia; this property depends on soil textural composition and varies as a function of soil moisture content. The EO data needed to drive the model equations, and the ground-based data required to provide verification of the method, were obtained over the Fakara domain within the African Monsoon Multidisciplinary Analysis (AMMA) program. LST estimates (3 km × 3 km, one image every 15 min) were derived from MSG-SEVIRI data. Soil moisture estimates were obtained from ENVISAT-ASAR data, while estimates of leaf area index, LAI, (to calculate the effect of the canopy on G, largely due to radiation extinction) were obtained from SPOT-HRV images. The variation of these variables over the Fakara domain, and implications for values of G derived from them, were discussed. Results showed that this method provides reliable large-scale spatiotemporal estimates of G. Variations in G could largely be explained by the variability in the model input variables. Furthermore, it was shown that this method is relatively insensitive to model parameters related to the vegetation or soil texture. However, the strong sensitivity of thermal inertia to soil moisture content at low values of relative saturation (<0.2) means that in arid or semi-arid climates accurate estimates of surface soil moisture content are of utmost importance, if reliable estimates of G are to be obtained. This method has the potential to improve large-scale evaporation estimates, to aid land surface model prediction and to advance research that aims to explain failure in energy balance closure of meteorological field studies.
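The harmonic step of the Analytical Method can be sketched using the standard form G(t) = Γ Σ A_n √(nω) sin(nωt + φ_n + π/4); the harmonic amplitudes and phases are assumed inputs from a prior Fourier analysis of the LST series, and nothing here is taken from the paper's calibration:

```python
import math

def soil_heat_flux(thermal_inertia, harmonics, omega, t):
    """Analytical-method soil heat flux G(t) from LST harmonics.

    thermal_inertia : Gamma (J m^-2 K^-1 s^-1/2)
    harmonics       : list of (amplitude_K, phase_rad) for n = 1, 2, ...
                      from a Fourier analysis of land surface temperature
    omega           : fundamental (diurnal) frequency (rad s^-1)
    t               : time (s)
    """
    total = 0.0
    for n, (amp, phase) in enumerate(harmonics, start=1):
        total += amp * math.sqrt(n * omega) * math.sin(
            n * omega * t + phase + math.pi / 4)
    return thermal_inertia * total
```

The π/4 phase lead of G over the surface temperature wave is the signature of heat conduction into a semi-infinite soil; thermal inertia enters only as a multiplicative factor, which is why its moisture sensitivity dominates the error budget at low saturation.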
Abstract:
Interest in attributing the risk of damaging weather-related events to anthropogenic climate change is increasing [1]. Yet climate models used to study the attribution problem typically do not resolve the weather systems associated with damaging events [2] such as the UK floods of October and November 2000. Occurring during the wettest autumn in England and Wales since records began in 1766 [3,4], these floods damaged nearly 10,000 properties across that region, disrupted services severely, and caused insured losses estimated at £1.3 billion [5,6]. Although the flooding was deemed a ‘wake-up call’ to the impacts of climate change at the time [7], such claims are typically supported only by general thermodynamic arguments that suggest increased extreme precipitation under global warming, but fail [8,9] to account fully for the complex hydrometeorology [4,10] associated with flooding. Here we present a multi-step, physically based ‘probabilistic event attribution’ framework showing that it is very likely that global anthropogenic greenhouse gas emissions substantially increased the risk of flood occurrence in England and Wales in autumn 2000. Using publicly volunteered distributed computing [11,12], we generate several thousand seasonal-forecast-resolution climate model simulations of autumn 2000 weather, both under realistic conditions, and under conditions as they might have been had these greenhouse gas emissions and the resulting large-scale warming never occurred. Results are fed into a precipitation-runoff model that is used to simulate severe daily river runoff events in England and Wales (proxy indicators of flood events). The precise magnitude of the anthropogenic contribution remains uncertain, but in nine out of ten cases our model results indicate that twentieth-century anthropogenic greenhouse gas emissions increased the risk of floods occurring in England and Wales in autumn 2000 by more than 20%, and in two out of three cases by more than 90%.
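The attribution statistic behind "increased the risk by more than 20%" is a ratio of exceedance probabilities between the two ensembles; the counts below are hypothetical, chosen only to show the structure:

```python
def risk_increase(n_exceed_actual, n_actual, n_exceed_natural, n_natural):
    """Fractional increase in flood risk between two ensembles.

    Compares the probability of exceeding a severe-runoff threshold in
    the 'world as it was' ensemble against a counterfactual ensemble
    without anthropogenic emissions. All counts here are hypothetical.
    """
    p_actual = n_exceed_actual / n_actual
    p_natural = n_exceed_natural / n_natural
    return p_actual / p_natural - 1.0  # 0.2 means risk increased by 20%
```

Repeating this over many plausible counterfactual ensembles yields the distribution of risk increases from which the "nine out of ten cases" and "two out of three cases" statements are read off.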
Abstract:
The mechanisms involved in Atlantic meridional overturning circulation (AMOC) decadal variability and predictability over the last 50 years are analysed in the IPSL–CM5A–LR model using historical and initialised simulations. The initialisation procedure only uses nudging towards sea surface temperature anomalies with a physically based restoring coefficient. When compared to two independent AMOC reconstructions, both the historical and nudged ensemble simulations exhibit skill at reproducing AMOC variations from 1977 onwards, and in particular two maxima occurring respectively around 1978 and 1997. We argue that one source of skill is related to the large Mount Agung volcanic eruption starting in 1963, which reset an internal 20-year variability cycle in the North Atlantic in the model. This cycle involves the East Greenland Current intensity, and advection of active tracers along the subpolar gyre, which leads to an AMOC maximum around 15 years after the Mount Agung eruption. The 1997 maximum occurs approximately 20 years after the former one. The nudged simulations better reproduce this second maximum than the historical simulations. This is due to the initialisation of a cooling of the convection sites in the 1980s under the effect of a persistent North Atlantic oscillation (NAO) positive phase, a feature not captured in the historical simulations. Hence we argue that the 20-year cycle excited by the 1963 Mount Agung eruption together with the NAO forcing both contributed to the 1990s AMOC maximum. These results support the existence of a 20-year cycle in the North Atlantic in the observations. Hindcasts following the CMIP5 protocol are launched from a nudged simulation every 5 years for the 1960–2005 period. They exhibit significant correlation skill score as compared to an independent reconstruction of the AMOC from 4-year lead-time average. 
This encouraging result is accompanied by increased correlation skill in reproducing the observed 2-m air temperature in the regions bordering the North Atlantic, as compared to non-initialised simulations. To a lesser extent, predicted precipitation tends to correlate with the nudged simulation in the tropical Atlantic. We argue that this skill is due to the initialisation and predictability of the AMOC in the present prediction system. The mechanisms evidenced here support the idea of volcanic eruptions as a pacemaker for internal variability of the AMOC. Together with the existence of a 20-year cycle in the North Atlantic, they offer a novel and complementary explanation for the AMOC variations over the last 50 years.
Abstract:
A weekly programme of water quality monitoring has been conducted by Slapton Ley Field Centre since 1970. Samples have been collected for the four main streams draining into Slapton Ley, from the Ley itself and from other sites within the catchment. On occasions, more frequent sampling has been undertaken during short-term research projects, usually in relation to nutrient export from the catchment. These water quality data, unparalleled in length for a series of small drainage basins in the British Isles, provide a unique resource for analysis of spatial and temporal variations in stream water quality within an agricultural area. Not surprisingly, given the eutrophic status of the Ley, most attention has focused on the nutrients nitrate and phosphate. A number of approaches to modelling nutrient loss have been attempted, including time series analysis and the application of nutrient export and physically-based models.
Landscape, regional and global estimates of nitrogen flux from land to sea: errors and uncertainties
Abstract:
Regional to global scale modelling of N flux from land to ocean has progressed to date through the development of simple empirical models representing bulk N flux rates from large watersheds, regions, or continents on the basis of a limited selection of model parameters. Watershed scale N flux modelling has developed a range of physically-based approaches ranging from models where N flux rates are predicted through a physical representation of the processes involved, through to catchment scale models which provide a simplified representation of true systems behaviour. Generally, these watershed scale models describe within their structure the dominant process controls on N flux at the catchment or watershed scale, and take into account variations in the extent to which these processes control N flux rates as a function of landscape sensitivity to N cycling and export. This paper addresses the nature of the errors and uncertainties inherent in existing regional to global scale models, and the nature of error propagation associated with upscaling from small catchment to regional scale through a suite of spatial aggregation and conceptual lumping experiments conducted on a validated watershed scale model, the export coefficient model. Results from the analysis support the findings of other researchers developing macroscale models in allied research fields. Conclusions from the study confirm that reliable and accurate regional scale N flux modelling needs to take account of the heterogeneity of landscapes and the impact that this has on N cycling processes within homogeneous landscape units.
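The export coefficient model named above is, at its core, a lumped sum over land-use sources; the coefficients and areas below are hypothetical, chosen only to show the structure:

```python
def export_coefficient_n_flux(land_uses):
    """Catchment N export (kg yr^-1) as a sum over land-use sources.

    land_uses: iterable of (area_ha, export_coeff_kg_per_ha_yr) pairs.
    The coefficients below are hypothetical, not calibrated values.
    """
    return sum(area * coeff for area, coeff in land_uses)

catchment = [(500.0, 30.0),  # arable
             (300.0, 10.0),  # pasture
             (200.0, 2.0)]   # woodland
```

Spatial aggregation error arises when heterogeneous units like these are lumped into one area with a single averaged coefficient, which is exactly what the paper's upscaling experiments probe.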
Abstract:
Global flood hazard maps can be used in the assessment of flood risk in a number of different applications, including (re)insurance and large scale flood preparedness. Such global hazard maps can be generated using large scale physically based models of rainfall-runoff and river routing, when used in conjunction with a number of post-processing methods. In this study, the European Centre for Medium-Range Weather Forecasts (ECMWF) land surface model is coupled to ERA-Interim reanalysis meteorological forcing data, and the resultant runoff is passed to a river routing algorithm which simulates floodplains and flood flow across the global land area. The global hazard map is based on a 30 yr (1979–2010) simulation period. A Gumbel distribution is fitted to the annual maxima flows to derive a number of flood return periods. The return periods are calculated initially for a 25×25 km grid, which is then reprojected onto a 1×1 km grid to derive maps of higher resolution and estimate flooded fractional area for the individual 25×25 km cells. Several global and regional maps of flood return periods ranging from 2 to 500 yr are presented. The results compare reasonably well to a benchmark data set of global flood hazard. The developed methodology can be applied to other datasets on a global or regional scale.
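The return-period step can be sketched with a method-of-moments Gumbel fit, used here as a simple stand-in for whatever fitting procedure the study applied:

```python
import math

def gumbel_fit_moments(annual_maxima):
    """Method-of-moments fit of a Gumbel distribution to annual maxima.

    Returns (mu, beta): location and scale of the fitted distribution.
    """
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi   # scale from the variance
    mu = mean - 0.5772156649 * beta         # location (Euler-Mascheroni)
    return mu, beta

def return_level(mu, beta, return_period_yr):
    """Flow exceeded on average once every return_period_yr years."""
    p_non_exceed = 1.0 - 1.0 / return_period_yr
    return mu - beta * math.log(-math.log(p_non_exceed))
```

Applied per 25×25 km grid cell to the simulated annual maxima, `return_level(mu, beta, 100)` would give the 100-yr flow from which the hazard maps are drawn.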