102 results for Earth body tide model
Abstract:
A poor representation of cloud structure in a general circulation model (GCM) is widely recognised as a potential source of error in the radiation budget. Here, we develop a new way of representing both horizontal and vertical cloud structure in a radiation scheme. This combines the ‘Tripleclouds’ parametrization, which introduces inhomogeneity by using two cloudy regions in each layer as opposed to one, each with different water content values, with ‘exponential-random’ overlap, in which clouds in adjacent layers are not overlapped maximally, but according to a vertical decorrelation scale. This paper, Part I of two, aims to parametrize the two effects such that they can be used in a GCM. To achieve this, we first review a number of studies for a globally applicable value of fractional standard deviation of water content for use in Tripleclouds. We obtain a value of 0.75 ± 0.18 from a variety of different types of observations, with no apparent dependence on cloud type or gridbox size. Then, through a second short review, we create a parametrization of decorrelation scale for use in exponential-random overlap, which varies the scale linearly with latitude from 2.9 km at the Equator to 0.4 km at the poles. When applied to radar data, both components are found to have radiative impacts capable of offsetting biases caused by cloud misrepresentation. Part II of this paper implements Tripleclouds and exponential-random overlap into a radiation code and examines both their individual and combined impacts on the global radiation budget using re-analysis data.
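The latitude dependence of the decorrelation scale quoted above is concrete enough to sketch. The snippet below is a minimal illustration, assuming a simple linear variation in absolute latitude between the quoted endpoints (2.9 km at the Equator, 0.4 km at the poles); the function name and defaults are illustrative, not the paper's code.

```python
import numpy as np

def decorrelation_scale_km(latitude_deg, equator_km=2.9, pole_km=0.4):
    """Decorrelation scale for exponential-random overlap, varying
    linearly with absolute latitude between the quoted endpoint values."""
    frac = np.abs(np.asarray(latitude_deg, dtype=float)) / 90.0
    return equator_km + frac * (pole_km - equator_km)

# Example: a mid-latitude gridbox at 45 degrees gives ~1.65 km
print(decorrelation_scale_km(45.0))
```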
Abstract:
Reliably representing both horizontal cloud inhomogeneity and vertical cloud overlap is fundamentally important for the radiation budget of a general circulation model. Here, we build on the work of Part One of this two-part paper by applying a pair of parameterisations that account for horizontal inhomogeneity and vertical overlap to global re-analysis data. These are applied both together and separately in an attempt to quantify the effects of poor representation of the two components on the radiation budget. Horizontal inhomogeneity is accounted for using the “Tripleclouds” scheme, which uses two regions of cloud in each layer of a gridbox as opposed to one; vertical overlap is accounted for using “exponential-random” overlap, which aligns vertically continuous cloud according to a decorrelation height. These are applied to a sample of scenes from a year of ERA-40 data. The largest radiative effect of horizontal inhomogeneity is found to be in areas of marine stratocumulus; the effect of vertical overlap is found to be fairly uniform, but with larger individual short-wave and long-wave effects in areas of deep, tropical convection. The combined effect of the two parameterisations is found to reduce the magnitude of the net top-of-atmosphere cloud radiative forcing (CRF) by 2.25 W m⁻², with shifts of up to 10 W m⁻² in areas of marine stratocumulus. The effects of the uncertainty in our parameterisations on the radiation budget are also investigated. It is found that the uncertainty in the impact of horizontal inhomogeneity is of order ±60%, while the uncertainty in the impact of vertical overlap is much smaller. This suggests an insensitivity of the radiation budget to the exact nature of the global decorrelation height distribution derived in Part One.
Abstract:
Measurements of body weight, total body water and total body potassium (⁴⁰K) were made serially on three occasions during pregnancy and once post partum in 27 normal pregnant women. Skinfold thickness and fat cell diameter were also measured. A model of body composition was formulated to permit the estimation of changes in fat, lean tissue and water content of the maternal body. Total maternal body fat increased during pregnancy, reaching a peak towards the end of the second trimester before diminishing. Serial measurements of fat cell diameter showed poor correlation, whilst total body fat calculated from skinfold thickness correlated well with our estimated values for total body fat in pregnancy.
Abstract:
Two experiments examined imitation of lateralised body movement sequences presented at six viewing angles (0°, 60°, 120°, 180°, 240°, and 300° rotation relative to the participant’s body). Experiment 1 found that, when participants were instructed simply to “do what the model does”, at all viewing angles they produced more actions using the same side of the body as the model (anatomical matches) than actions using the opposite side (anatomical non-matches). In Experiment 2 participants were instructed to produce either anatomical matches or anatomical non-matches of observed actions. When the model was viewed from behind (0°), the anatomically matching group were more accurate than the anatomically non-matching group, but the non-matching group was superior when the model faced the participant (180° and 240°). No reliable differences were observed between groups at 60°, 120°, and 300°. In combination, the results of Experiments 1 and 2 suggest that, when they are confronting a model, people choose to imitate the hard way; they attempt to match observed actions anatomically, in spite of the fact that anatomical matching is more subject to error than anatomical non-matching.
Abstract:
Monitoring Earth's terrestrial water conditions is critically important to many hydrological applications such as global food production; assessing water resources sustainability; and flood, drought, and climate change prediction. These needs have motivated the development of pilot monitoring and prediction systems for terrestrial hydrologic and vegetative states, but to date only at rather coarse spatial resolutions (∼10–100 km) over continental to global domains. Adequately addressing critical water cycle science questions and applications requires systems that are implemented globally at much higher resolutions, on the order of 1 km, resolutions referred to as hyperresolution in the context of global land surface models. This opinion paper sets forth the needs and benefits for a system that would monitor and predict the Earth's terrestrial water, energy, and biogeochemical cycles. We discuss six major challenges in developing such a system: improved representation of surface-subsurface interactions due to fine-scale topography and vegetation; improved representation of land-atmospheric interactions and resulting spatial information on soil moisture and evapotranspiration; inclusion of water quality as part of the biogeochemical cycle; representation of human impacts from water management; utilizing massively parallel computer systems and recent computational advances in solving hyperresolution models that will have up to 10⁹ unknowns; and developing the required in situ and remote sensing global data sets. We deem the development of a global hyperresolution model for monitoring the terrestrial water, energy, and biogeochemical cycles a “grand challenge” to the community, and we call upon the international hydrologic community and the hydrological science support infrastructure to endorse the effort.
Abstract:
We describe the HadGEM2 family of climate configurations of the Met Office Unified Model, MetUM. The concept of a model "family" comprises a range of specific model configurations incorporating different levels of complexity but with a common physical framework. The HadGEM2 family of configurations includes atmosphere and ocean components, with and without a vertical extension to include a well-resolved stratosphere, and an Earth-System (ES) component which includes dynamic vegetation, ocean biology and atmospheric chemistry. The HadGEM2 physical model includes improvements designed to address specific systematic errors encountered in the previous climate configuration, HadGEM1, namely Northern Hemisphere continental temperature biases and tropical sea surface temperature biases and poor variability. Targeting these biases was crucial in order that the ES configuration could represent important biogeochemical climate feedbacks. Detailed descriptions and evaluations of particular HadGEM2 family members are included in a number of other publications, and the discussion here is limited to a summary of the overall performance using a set of model metrics which compare the way in which the various configurations simulate present-day climate and its variability.
Abstract:
We review the sea-level and energy budgets together from 1961, using recent and updated estimates of all terms. From 1972 to 2008, the observed sea-level rise (1.8 ± 0.2 mm yr⁻¹ from tide gauges alone and 2.1 ± 0.2 mm yr⁻¹ from a combination of tide gauges and altimeter observations) agrees well with the sum of contributions (1.8 ± 0.4 mm yr⁻¹) in magnitude and with both having similar increases in the rate of rise during the period. The largest contributions come from ocean thermal expansion (0.8 mm yr⁻¹) and the melting of glaciers and ice caps (0.7 mm yr⁻¹), with Greenland and Antarctica contributing about 0.4 mm yr⁻¹. The cryospheric contributions increase through the period (particularly in the 1990s) but the thermosteric contribution increases less rapidly. We include an improved estimate of aquifer depletion (0.3 mm yr⁻¹), partially offsetting the retention of water in dams and giving a total terrestrial storage contribution of −0.1 mm yr⁻¹. Ocean warming (90% of the total of the Earth's energy increase) continues through to the end of the record, in agreement with continued greenhouse gas forcing. The aerosol forcing, inferred as a residual in the atmospheric energy balance, is estimated as −0.8 ± 0.4 W m⁻² for the 1980s and early 1990s. It increases in the late 1990s, as is required for consistency with little surface warming over the last decade. This increase is likely at least partially related to substantial increases in aerosol emissions from developing nations and moderate volcanic activity.
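As a quick arithmetic check of the budget closure described above, the rounded central estimates quoted in the abstract do sum to the stated total; the snippet below simply reproduces that addition.

```python
# Central estimates quoted in the abstract (mm/yr), 1972-2008
contributions_mm_per_yr = {
    "ocean thermal expansion": 0.8,
    "glaciers and ice caps": 0.7,
    "Greenland and Antarctica": 0.4,
    "terrestrial storage (dams minus aquifer depletion)": -0.1,
}
total = sum(contributions_mm_per_yr.values())
print(f"sum of contributions: {total:.1f} mm/yr")  # 1.8, matching the observed 1.8-2.1 mm/yr
```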
Assessing and understanding the impact of stratospheric dynamics and variability on the Earth system
Abstract:
Advances in weather and climate research have demonstrated the role of the stratosphere in the Earth system across a wide range of temporal and spatial scales. Stratospheric ozone loss has been identified as a key driver of Southern Hemisphere tropospheric circulation trends, affecting ocean currents and carbon uptake, sea ice, and possibly even the Antarctic ice sheets. Stratospheric variability has also been shown to affect short-term and seasonal forecasts, connecting the tropics and midlatitudes and guiding storm track dynamics. The two-way interactions between the stratosphere and the Earth system have motivated the World Climate Research Programme's (WCRP) Stratospheric Processes and Their Role in Climate (SPARC) DynVar activity to investigate the impact of stratospheric dynamics and variability on climate. This assessment will be made possible by two new multi-model datasets. First, roughly 10 models with a well-resolved stratosphere are participating in the Coupled Model Intercomparison Project 5 (CMIP5), providing the first multi-model ensemble of climate simulations coupled from the stratopause to the sea floor. Second, the Stratosphere Historical Forecasting Project (SHFP) of WCRP's Climate Variability and Predictability (CLIVAR) program is forming a multi-model set of seasonal hindcasts with stratosphere-resolving models, revealing the impact of both stratospheric initial conditions and dynamics on intraseasonal prediction. The CMIP5 and SHFP model-data sets will offer an unprecedented opportunity to understand the role of the stratosphere in the natural and forced variability of the Earth system and to determine whether incorporating knowledge of the middle atmosphere improves seasonal forecasts and climate projections. Capsule: New modeling efforts will provide unprecedented opportunities to harness our knowledge of the stratosphere to improve weather and climate prediction.
Abstract:
We present an approach for dealing with coarse-resolution Earth observations (EO) in terrestrial ecosystem data assimilation schemes. The use of coarse-scale observations in ecological data assimilation schemes is complicated by spatial heterogeneity and nonlinear processes in natural ecosystems. If these complications are not appropriately dealt with, then the data assimilation will produce biased results. The “disaggregation” approach that we describe in this paper combines frequent coarse-resolution observations with temporally sparse fine-resolution measurements. We demonstrate the approach using a demonstration data set based on measurements of an Arctic ecosystem. In this example, normalized difference vegetation index observations are assimilated into a “zero-order” model of leaf area index and carbon uptake. The disaggregation approach conserves key ecosystem characteristics regardless of the observation resolution and estimates the carbon uptake to within 1% of the demonstration data set “truth.” Assimilating the same data in the normal manner, but without the disaggregation approach, results in carbon uptake being underestimated by 58% at an observation resolution of 250 m. The disaggregation method allows the combination of multiresolution EO and improves the spatial resolution of the estimates when observations are located on a grid that shifts from one observation time to the next. Additionally, the approach is not tied to a particular data assimilation scheme, model, or EO product and can cope with complex observation distributions, as it makes no implicit assumptions of normality.
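The bias mentioned above arises because a nonlinear observation operator applied to a coarse-pixel mean is not the same as the mean of the operator applied to the underlying fine-scale values. The toy example below illustrates that effect only; the saturating NDVI-LAI relation and the numbers are hypothetical and are not the paper's scheme.

```python
import numpy as np

def ndvi_from_lai(lai, k=0.5, ndvi_max=0.9):
    """Hypothetical saturating NDVI-LAI relation, for illustration only."""
    return ndvi_max * (1.0 - np.exp(-k * np.asarray(lai, dtype=float)))

fine_lai = np.array([0.2, 0.5, 1.0, 4.0, 6.0])   # heterogeneous sub-pixels

operator_of_mean = ndvi_from_lai(fine_lai.mean())  # coarse mean state passed through the operator
mean_of_operator = ndvi_from_lai(fine_lai).mean()  # what the coarse-resolution sensor actually sees

print(operator_of_mean, mean_of_operator)  # ~0.62 vs ~0.45: ignoring heterogeneity biases the result
```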
Abstract:
Current methods for estimating vegetation parameters are generally sub-optimal in the way they exploit information and do not generally consider uncertainties. We look forward to a future where operational data assimilation schemes improve estimates by tracking land surface processes and exploiting multiple types of observations. Data assimilation schemes seek to combine observations and models in a statistically optimal way, taking into account uncertainty in both, but have not yet been much exploited in this area. The EO-LDAS scheme and prototype, developed under ESA funding, is designed to exploit the anticipated wealth of data that will be available under GMES missions, such as the Sentinel family of satellites, to provide improved mapping of land surface biophysical parameters. This paper describes the EO-LDAS implementation and explores some of its core functionality. EO-LDAS is a weak-constraint variational data assimilation system. The prototype provides a mechanism for constraint based on a prior estimate of the state vector, a linear dynamic model, and Earth Observation data (top-of-canopy reflectance here). The observation operator is a non-linear optical radiative transfer model for a vegetation canopy with a soil lower boundary, operating over the range 400 to 2500 nm. Adjoint codes for all model and operator components are provided in the prototype by automatic differentiation of the computer codes. In this paper, EO-LDAS is applied to the problem of daily estimation of six of the parameters controlling the radiative transfer operator over the course of a year (> 2000 state vector elements). Zero- and first-order process model constraints are implemented and explored as the dynamic model. The assimilation estimates all state vector elements simultaneously. This is performed in the context of a typical Sentinel-2 MSI operating scenario, using synthetic MSI observations simulated with the observation operator, with uncertainties typical of those achieved by optical sensors assumed for the data. The experiments consider a baseline state vector estimation case where dynamic constraints are applied, and assess the impact of dynamic constraints on the a posteriori uncertainties. The results demonstrate that reductions in uncertainty by a factor of up to two might be obtained by applying the sorts of dynamic constraints used here. The hyperparameters (dynamic model uncertainty) required to control the assimilation are estimated by a cross-validation exercise. The result of the assimilation is seen to be robust to missing observations with quite large data gaps.
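The weak-constraint variational formulation named above can be summarised, schematically, as minimising a cost function with a prior (background) term, an observation term, and a dynamic (process-model) term. The sketch below is a generic illustration of that structure with made-up inputs, not the EO-LDAS implementation; the quadratic penalty on day-to-day increments stands in for the zero/first-order constraints described.

```python
import numpy as np

def weak_constraint_cost(x, x_prior, B_inv, y, H, R_inv, D_inv):
    """Schematic weak-constraint variational cost: prior + observation +
    dynamic-model terms; H is the observation operator."""
    d_prior = x - x_prior
    d_obs = y - H(x)
    d_dyn = np.diff(x)  # first-order penalty on increments along the time axis
    return d_prior @ B_inv @ d_prior + d_obs @ R_inv @ d_obs + d_dyn @ D_inv @ d_dyn

# Toy usage with a linear "observation operator"
n = 5
H = lambda state: 0.3 * state
J = weak_constraint_cost(np.ones(n), np.zeros(n), np.eye(n),
                         np.full(n, 0.3), H, np.eye(n), np.eye(n - 1))
print(J)  # 5.0: only the prior term contributes in this toy case
```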
Abstract:
1. Nutrient concentrations (particularly N and P) determine the extent to which water bodies are or may become eutrophic. Direct determination of nutrient content on a wide scale is labour intensive but the main sources of N and P are well known. This paper describes and tests an export coefficient model for prediction of total N and total P from: (i) land use, stock headage and human population; (ii) the export rates of N and P from these sources; and (iii) the river discharge. Such a model might be used to forecast the effects of changes in land use in the future and to hindcast past water quality to establish comparative or baseline states for the monitoring of change. 2. The model has been calibrated against observed data for 1988 and validated against sets of observed data for a sequence of earlier years in ten British catchments varying from uplands through rolling, fertile lowlands to the flat topography of East Anglia. 3. The model predicted total N and total P concentrations with high precision (95% of the variance in observed data explained). It has been used in two forms: the first on a specific catchment basis; the second for a larger natural region which contains the catchment with the assumption that all catchments within that region will be similar. Both models gave similar results with little loss of precision in the latter case. This implies that it will be possible to describe the overall pattern of nutrient export in the UK with only a fraction of the effort needed to carry out the calculations for each individual water body. 4. Comparison between land use, stock headage, population numbers and nutrient export for the ten catchments in the pre-war year of 1931, and for 1970 and 1988 show that there has been a substantial loss of rough grazing to fertilized temporary and permanent grasslands, an increase in the hectarage devoted to arable, consistent increases in the stocking of cattle and sheep and a marked movement of humans to these rural catchments. 5. All of these trends have increased the flows of nutrients with more than a doubling of both total N and total P loads during the period. On average in these rural catchments, stock wastes have been the greatest contributors to both N and P exports, with cultivation the next most important source of N and people of P. Ratios of N to P were high in 1931 and remain little changed so that, in these catchments, phosphorus continues to be the nutrient most likely to control algal crops in standing waters supplied by the rivers studied.
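The export-coefficient structure described above amounts to summing, over sources, the product of source size and its export coefficient, then dividing the load by the annual river discharge to obtain a concentration. The sketch below shows that arithmetic with placeholder coefficients and source sizes; none of the numbers are the paper's calibrated values.

```python
def export_coefficient_model(sources, coefficients_kg_per_unit, annual_discharge_m3):
    """Total nutrient load (kg/yr) as sum(source size * export coefficient),
    converted to a mean concentration (1 kg/m3 = 1000 mg/l)."""
    load_kg = sum(size * coefficients_kg_per_unit[name] for name, size in sources.items())
    concentration_mg_l = load_kg / annual_discharge_m3 * 1e3
    return load_kg, concentration_mg_l

# Placeholder total-P example (illustrative values only)
sources = {"arable_ha": 5000, "grassland_ha": 8000, "cattle_head": 2000, "people": 10000}
coeffs = {"arable_ha": 0.3, "grassland_ha": 0.1, "cattle_head": 0.5, "people": 0.4}
load, conc = export_coefficient_model(sources, coeffs, annual_discharge_m3=5e7)
print(load, conc)  # 7300 kg/yr, ~0.15 mg/l
```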
Abstract:
The Metafor project has developed a common information model (CIM) using the ISO19100 series formalism to describe numerical experiments carried out by the Earth system modelling community, the models they use, and the simulations that result. Here we describe the mechanism by which the CIM was developed, and its key properties. We introduce the conceptual and application versions and the controlled vocabularies developed in the context of supporting the fifth Coupled Model Intercomparison Project (CMIP5). We describe how the CIM has been used in experiments to describe model coupling properties and describe the near-term expected evolution of the CIM.
Abstract:
Global flood hazard maps can be used in the assessment of flood risk in a number of different applications, including (re)insurance and large scale flood preparedness. Such global hazard maps can be generated using large scale physically based models of rainfall-runoff and river routing, when used in conjunction with a number of post-processing methods. In this study, the European Centre for Medium Range Weather Forecasts (ECMWF) land surface model is coupled to ERA-Interim reanalysis meteorological forcing data, and resultant runoff is passed to a river routing algorithm which simulates floodplains and flood flow across the global land area. The global hazard map is based on a 30 yr (1979–2010) simulation period. A Gumbel distribution is fitted to the annual maxima flows to derive a number of flood return periods. The return periods are calculated initially for a 25×25 km grid, which is then reprojected onto a 1×1 km grid to derive maps of higher resolution and estimate flooded fractional area for the individual 25×25 km cells. Several global and regional maps of flood return periods ranging from 2 to 500 yr are presented. The results compare reasonably to a benchmark data set of global flood hazard. The developed methodology can be applied to other datasets on a global or regional scale.
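The return-period step described above (fit a Gumbel distribution to annual maxima, then read off quantiles) can be sketched as follows, assuming a series of annual maximum flows is available for each grid cell; the synthetic data and scipy-based fit are illustrative, not the study's code.

```python
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(0)
# Placeholder annual maximum discharges (m3/s) for one grid cell, 1979-2010
annual_maxima = rng.gumbel(loc=800.0, scale=150.0, size=32)

loc, scale = gumbel_r.fit(annual_maxima)
for return_period_yr in (2, 10, 50, 100, 500):
    flow = gumbel_r.ppf(1.0 - 1.0 / return_period_yr, loc=loc, scale=scale)
    print(f"{return_period_yr:>4} yr flood: {flow:8.1f} m3/s")
```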
Abstract:
The latest Hadley Centre climate model, HadGEM2-ES, includes Earth system components such as interactive chemistry and eight species of tropospheric aerosols. It has been run for the period 1860–2100 in support of the fifth phase of the Coupled Model Intercomparison Project (CMIP5). Anthropogenic aerosol emissions peak between 1980 and 2020, resulting in a present-day all-sky top-of-atmosphere aerosol forcing of −1.6 and −1.4 W m⁻² with and without ammonium nitrate aerosols, respectively, for the sum of direct and first indirect aerosol forcings. Aerosol forcing becomes significantly weaker in the 21st century, being weaker than −0.5 W m⁻² in 2100 without nitrate. However, nitrate aerosols become the dominant species in Europe and Asia and decelerate the decrease in global mean aerosol forcing. Considering nitrate aerosols makes aerosol radiative forcing 2–4 times stronger by 2100 depending on the representative concentration pathway, although this impact is lessened when changes in the oxidation properties of the atmosphere are accounted for. Anthropogenic aerosol residence times increase in the future in spite of increased precipitation, as cloud cover and aerosol-cloud interactions decrease in tropical and midlatitude regions. Deposition of fossil fuel black carbon onto snow and ice surfaces peaks during the 20th century in the Arctic and Europe but keeps increasing in the Himalayas until the middle of the 21st century. Results presented here confirm the importance of aerosols in influencing the Earth's climate, albeit with a reduced impact in the future, and suggest that nitrate aerosols will partially replace sulphate aerosols to become an important anthropogenic species in the remainder of the 21st century.
Abstract:
Data quality is a difficult notion to define precisely, and different communities have different views and understandings of the subject. This causes confusion, a lack of harmonization of data across communities and omission of vital quality information. For some existing data infrastructures, data quality standards cannot address the problem adequately and cannot fulfil all user needs or cover all concepts of data quality. In this study, we discuss some philosophical issues on data quality. We identify actual user needs on data quality, review existing standards and specifications on data quality, and propose an integrated model for data quality in the field of Earth observation (EO). We also propose a practical mechanism for applying the integrated quality information model to a large number of datasets through metadata inheritance. While our data quality management approach is in the domain of EO, we believe that the ideas and methodologies for data quality management can be applied to wider domains and disciplines to facilitate quality-enabled scientific research.