141 results for FGGE-Equator '79 - First GARP Global Experiment
Abstract:
Atmospheric electricity measurements were made at Lerwick Observatory in the Shetland Isles (60°09′N, 1°08′W) during most of the 20th century. The Potential Gradient (PG) was measured from 1926 to 1984, and the air-earth conduction current (Jc) was measured during the final decade of the PG measurements. Daily Jc values (1978–1984) observed at 15 UT are presented here for the first time, with independently obtained PG measurements used to select valid data. The 15 UT Jc (1978–1984) spans 0.5–9.5 pA/m², with a median of 2.5 pA/m²; the columnar resistance at Lerwick is estimated as 70 PΩ m². Smoke measurements confirm the low-pollution properties of the site. Analysis of the monthly variation of the Lerwick Jc data shows that winter (DJF) Jc is significantly greater than summer (JJA) Jc, by 20%. The Lerwick atmospheric electricity seasonality differs from the global lightning seasonality, but Jc has a similar seasonal phasing to that observed in nimbostratus clouds globally, suggesting a role for non-thunderstorm rain clouds in the seasonality of the global circuit.
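As a quick orientation to the numbers above, the sketch below applies the standard quasi-static Ohm's-law relation for the global circuit, Jc = Vi/Rc, to the quoted median Jc and columnar resistance. The implied ionospheric potential is shown purely for illustration and is not a result reported in the abstract.

```python
# Illustrative sketch only: Ohm's-law bookkeeping for the global circuit,
# Jc = Vi / Rc, using the Lerwick numbers quoted in the abstract.

jc_median = 2.5e-12      # median air-earth conduction current density, A/m^2
rc = 70e15               # estimated columnar resistance, ohm m^2

vi_implied = jc_median * rc   # implied ionospheric potential, V (illustration only)
print(f"Implied ionospheric potential: {vi_implied / 1e3:.0f} kV")   # -> ~175 kV
```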
Abstract:
This paper reports on a new satellite sensor, the Geostationary Earth Radiation Budget (GERB) experiment. GERB is designed to make the first measurements of the Earth's radiation budget from geostationary orbit. Measurements at high absolute accuracy of the sunlight reflected from the Earth and the thermal radiation emitted by the Earth are made every 15 min, with a spatial resolution at the subsatellite point of 44.6 km (north–south) by 39.3 km (east–west). With knowledge of the incoming solar constant, this gives the primary forcing and response components of the top-of-atmosphere radiation. The first GERB instrument is an instrument of opportunity on Meteosat-8, a new spin-stabilized spacecraft platform also carrying the Spinning Enhanced Visible and InfraRed Imager (SEVIRI); the spacecraft is currently positioned over the equator at 3.5°W. This overview of the project includes a description of the instrument design and its preflight and in-flight calibration. An evaluation of the instrument performance after its first year in orbit, including comparisons with data from the Clouds and the Earth's Radiant Energy System (CERES) satellite sensors and with output from numerical models, is also presented. After a brief summary of the data processing system and data products, some of the scientific studies being undertaken using these early data are described. This marks the beginning of a decade or more of observations from GERB, as subsequent models will fly on each of the four Meteosat Second Generation satellites.
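The statement that the reflected-sunlight and emitted-thermal measurements, together with the incoming solar flux, give the top-of-atmosphere budget amounts to a simple per-pixel balance. The sketch below illustrates that bookkeeping with placeholder flux values, not GERB data.

```python
# Illustrative only: the top-of-atmosphere (TOA) radiation balance the abstract
# refers to. Flux values are placeholders in W/m^2, not GERB measurements.

def toa_net_flux(incoming_solar, reflected_sw, emitted_lw):
    """Net downward TOA flux = absorbed shortwave - outgoing longwave."""
    return (incoming_solar - reflected_sw) - emitted_lw

# Example with round numbers for a single pixel:
print(toa_net_flux(incoming_solar=340.0, reflected_sw=100.0, emitted_lw=239.0))
# -> ~1 W/m^2 net imbalance in this made-up example
```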
Abstract:
Under global warming, the predicted intensification of the global freshwater cycle will modify the net freshwater flux at the ocean surface. Since the freshwater flux maintains ocean salinity structures, changes to the density-driven ocean circulation are likely. A modified ocean circulation could further alter the climate, potentially allowing rapid changes, as seen in the past. The relevant feedback mechanisms and timescales are poorly understood in detail, however, especially at low latitudes where the effects of salinity are relatively subtle. In an attempt to resolve some of these outstanding issues, we present an investigation of the climate response of the low-latitude Pacific region to changes in freshwater forcing. Initiated from the present-day thermohaline structure, a control run of a coupled ocean-atmosphere general circulation model is compared with a perturbation run in which the net freshwater flux is prescribed to be zero over the ocean. Such an extreme experiment helps to elucidate the general adjustment mechanisms and their timescales. The atmospheric greenhouse gas concentrations are held constant, and we restrict our attention to the adjustment of the upper 1,000 m of the Pacific Ocean between 40°N and 40°S, over 100 years. In the perturbation run, changes to the surface buoyancy, near-surface vertical mixing and mixed-layer depth are established within 1 year. Subsequently, relative to the control run, the surface of the low-latitude Pacific Ocean in the perturbation run warms by an average of 0.6°C, and the interior cools by up to 1.1°C, after a few decades. This vertical re-arrangement of the ocean heat content is shown to be achieved by a gradual shutdown of the heat flux due to isopycnal (i.e. along surfaces of constant density) mixing, the vertical component of which is downwards at low latitudes. This heat transfer depends crucially upon the existence of density-compensating temperature and salinity gradients on isopycnal surfaces. The timescale of the thermal changes in the perturbation run is therefore set by the timescale for the decay of isopycnal salinity gradients in response to the eliminated freshwater forcing, which we demonstrate to be around 10-20 years. Such isopycnal heat flux changes may play a role in the response of the low-latitude climate to a future accelerated freshwater cycle. Specifically, the mechanism appears to represent a weak negative sea surface temperature feedback, which we speculate might partially shield from view the anthropogenically-forced global warming signal at low latitudes. Furthermore, since the surface freshwater flux is shown to play a role in determining the ocean's thermal structure, it follows that evaporation and/or precipitation biases in general circulation models are likely to cause sea surface temperature biases.
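The "density-compensating temperature and salinity gradients" invoked above can be illustrated with a linear equation of state; the expansion and contraction coefficients below are assumed round numbers, not values from the coupled model.

```python
# Sketch of density compensation: temperature and salinity contrasts that cancel
# in their effect on density, so mixing along an isopycnal still carries heat.
# Linear equation of state with assumed coefficients; illustrative only.

alpha = 2.5e-4   # thermal expansion coefficient, 1/K (assumed)
beta = 7.5e-4    # haline contraction coefficient, 1/(g/kg) (assumed)

dT = 1.0                  # temperature contrast along an isopycnal, K
dS = alpha * dT / beta    # salinity contrast that exactly compensates it, g/kg

drho_over_rho0 = -alpha * dT + beta * dS   # fractional density change (~0)
print(f"compensating dS = {dS:.2f} g/kg, fractional density change = {drho_over_rho0:.1e}")
```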
Abstract:
The impact of targeted sonde observations on the 1-3 day forecasts for northern Europe is evaluated using the Met Office four-dimensional variational data assimilation scheme and a 24 km gridlength limited-area version of the Unified Model (MetUM). The targeted observations were carried out during February and March 2007 as part of the Greenland Flow Distortion Experiment, using a research aircraft based in Iceland. Sensitive area predictions using either total energy singular vectors or an ensemble transform Kalman filter were used to predict where additional observations should be made to reduce errors in the initial conditions of forecasts for northern Europe. Targeted sonde data was assimilated operationally into the MetUM. Hindcasts show that the impact of the sondes was mixed. Only two out of the five cases showed clear forecast improvement; the maximum forecast improvement seen over the verifying region was approximately 5% of the forecast error 24 hours into the forecast. These two cases are presented in more detail: in the first the improvement propagates into the verification region with a developing polar low; and in the second the improvement is associated with an upper-level trough. The impact of cycling targeted data in the background of the forecast (including the memory of previous targeted observations) is investigated. This is shown to cause a greater forecast impact, but does not necessarily lead to a greater forecast improvement. Finally, the robustness of the results is assessed using a small ensemble of forecasts.
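The impact metric quoted above (a maximum improvement of roughly 5% of the forecast error over the verifying region) is the error reduction expressed relative to the control forecast. The toy calculation below, with invented error values, shows the arithmetic.

```python
# Toy illustration of the impact metric: fractional reduction in forecast error
# over the verification region. The error values are invented placeholders.

def relative_improvement(err_control, err_targeted):
    """Fractional reduction in forecast error due to the targeted sondes."""
    return (err_control - err_targeted) / err_control

err_control = 12.0    # e.g. RMS error of the control hindcast (placeholder)
err_targeted = 11.4   # e.g. RMS error with sondes assimilated (placeholder)
print(relative_improvement(err_control, err_targeted))   # -> ~0.05, i.e. ~5%
```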
Abstract:
The Global Ocean Data Assimilation Experiment (GODAE [http://www.godae.org]) has spanned a decade of rapid technological development. The ever-increasing volume and diversity of oceanographic data produced by in situ instruments, remote-sensing platforms, and computer simulations have driven the development of a number of innovative technologies that are essential for connecting scientists with the data that they need. This paper gives an overview of the technologies that have been developed and applied in the course of GODAE, which now provide users of oceanographic data with the capability to discover, evaluate, visualize, download, and analyze data from all over the world. The key to this capability is the ability to reduce the inherent complexity of oceanographic data by providing a consistent, harmonized view of the various data products. The challenges of data serving have been addressed over the last 10 years through the cooperative skills and energies of many individuals.
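As one concrete, hypothetical illustration of the "discover, evaluate, visualize, download, and analyze" capability described above, the sketch below opens a remote dataset through an OPeNDAP-style endpoint with xarray. Neither tool is named in the abstract, and the URL and variable name are placeholders, not real GODAE resources.

```python
# Hypothetical sketch of harmonized remote data access; xarray and OPeNDAP are
# this example's assumptions, not technologies named in the abstract. The URL
# and variable name are placeholders.

import xarray as xr

url = "https://example.org/opendap/ocean_analysis"   # placeholder endpoint
ds = xr.open_dataset(url)                            # lazily open the remote dataset
sst = ds["sea_surface_temperature"].sel(time="2008-01-15")   # subset remotely
sst.plot()                                           # quick-look visualization
```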
Abstract:
This paper aims to summarise the current performance of ozone data assimilation (DA) systems, to show where they can be improved, and to quantify their errors. It examines 11 sets of ozone analyses from 7 different DA systems. Two are numerical weather prediction (NWP) systems based on general circulation models (GCMs); the other five use chemistry transport models (CTMs). The systems examined contain either linearised or detailed ozone chemistry, or no chemistry at all. In most analyses, MIPAS (Michelson Interferometer for Passive Atmospheric Sounding) ozone data are assimilated; two assimilate SCIAMACHY (Scanning Imaging Absorption Spectrometer for Atmospheric Chartography) observations instead. Analyses are compared to independent ozone observations covering the troposphere, stratosphere and lower mesosphere during the period July to November 2003. Biases and standard deviations are largest, and show the largest divergence between systems, in the troposphere, in the upper troposphere/lower stratosphere, in the upper stratosphere and mesosphere, and in the Antarctic ozone hole region. However, in any particular area, apart from the troposphere, at least one system can be found that agrees well with independent data. In general, none of the differences can be linked to the assimilation technique (Kalman filter, three- or four-dimensional variational methods, direct inversion) or the system type (CTM or NWP system). Where results diverge, a main explanation is the way ozone is modelled. It is important to correctly model transport at the tropical tropopause, to avoid positive biases and excessive structure in the ozone field. In the southern hemisphere ozone hole, only the analyses which correctly model heterogeneous ozone depletion are able to reproduce the near-complete ozone destruction over the pole. In the upper stratosphere and mesosphere (above 5 hPa), some ozone photochemistry schemes caused large but easily remedied biases. The diurnal cycle of ozone in the mesosphere is not captured, except by the one system that includes a detailed treatment of mesospheric chemistry. These results indicate that when good observations are available for assimilation, the first priority for improving ozone DA systems is to improve the models. The analyses benefit strongly from the good quality of the MIPAS ozone observations. Using the analyses as a transfer standard, it is seen that MIPAS is ~5% higher than HALOE (Halogen Occultation Experiment) in the mid and upper stratosphere and mesosphere (above 30 hPa), and of the order of 10% higher than ozonesonde and HALOE in the lower stratosphere (100 hPa to 30 hPa). Analyses based on SCIAMACHY total column are almost as good as the MIPAS analyses; analyses based on SCIAMACHY limb profiles are worse in some areas, due to problems in the SCIAMACHY retrievals.
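The comparison statistics referred to above (biases and standard deviations of analysis-minus-observation differences) can be computed as in the minimal sketch below; the ozone values are placeholders, not data from the study.

```python
# Minimal sketch of the comparison statistics used in the abstract: bias and
# standard deviation of (analysis - observation) ozone differences.
# The arrays below are invented placeholders.

import numpy as np

def bias_and_std(analysis, observations):
    """Mean and sample standard deviation of (analysis - observation)."""
    diff = np.asarray(analysis) - np.asarray(observations)
    return diff.mean(), diff.std(ddof=1)

# Example with made-up ozone mixing ratios (ppmv) at one pressure level:
ana = [5.1, 5.3, 4.9, 5.2]
obs = [5.0, 5.1, 5.0, 5.0]
print(bias_and_std(ana, obs))   # -> bias ~ 0.10 ppmv, std ~ 0.14 ppmv
```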
Abstract:
Simulations of the last 500 yr carried out using the Third Hadley Centre Coupled Ocean-Atmosphere GCM (HadCM3) with anthropogenic and natural (solar and volcanic) forcings have been analyzed. Global-mean surface temperature change during the twentieth century is well reproduced. Simulated contributions to global-mean sea level rise during recent decades due to thermal expansion (the largest term) and to mass loss from glaciers and ice caps agree within uncertainties with observational estimates of these terms, but their sum falls short of the observed rate of sea level rise. This discrepancy has been discussed by previous authors; a completely satisfactory explanation of twentieth-century sea level rise is lacking. The model suggests that the apparent onset of sea level rise and glacier retreat during the first part of the nineteenth century was due to natural forcing. The rate of sea level rise was larger during the twentieth century than during the previous centuries because of anthropogenic forcing, but decreasing natural forcing during the second half of the twentieth century tended to offset the anthropogenic acceleration in the rate. Volcanic eruptions cause rapid falls in sea level, followed by recovery over several decades. The model shows substantially less decadal variability in sea level and its thermal expansion component than twentieth-century observations indicate, either because it does not generate sufficient ocean internal variability, or because the observational analyses overestimate the variability.
Abstract:
(from author) One of the first papers in the peer-reviewed literature to discuss an observing system simulation experiment (OSSE) to evaluate future wind observations in the stratosphere. Provides key evidence to justify the construction of the SWIFT instrument (currently planned to be built by the Canadian Space Agency for launch around 2010).
Abstract:
A large ensemble of general circulation model (GCM) integrations, coupled to a fully interactive sulfur cycle scheme, was run on the climateprediction.net platform to investigate the uncertainty in the climate response to sulfate aerosol and carbon dioxide (CO2) forcing. The sulfate burden within the model (and the atmosphere) depends on the balance between formation processes and deposition (wet and dry). The wet removal processes for sulfate aerosol are much faster than dry removal, so any changes in atmospheric circulation, cloud cover, and precipitation will feed back on the sulfate burden. When CO2 was doubled in the Hadley Centre Slab Ocean Model (HadSM3), global mean precipitation increased by 5%; however, the global mean sulfate burden increased by 10%. Despite the global mean increase in precipitation, there were large areas of the model showing decreases in precipitation (and cloud cover) in the Northern Hemisphere during June–August, which reduced wet deposition and allowed the sulfate burden to increase. Further experiments were also undertaken with and without doubling CO2 while including a future anthropogenic sulfur emissions scenario. Doubling CO2 further enhanced the increases in sulfate burden associated with increased anthropogenic sulfur emissions, as observed in the doubled CO2-only experiment. The implications are that the climate response to doubling CO2 can influence the amount of sulfate within the atmosphere and, despite increases in global mean precipitation, may act to increase it.
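The balance between formation and (wet plus dry) deposition described above can be caricatured as a one-box budget in which the burden rises when wet removal weakens. The lifetimes and source term below are illustrative assumptions, not HadSM3 output.

```python
# One-box caricature of the sulfate burden budget: dB/dt = source - B/tau_wet - B/tau_dry.
# Because wet removal is much faster than dry removal, weakening wet deposition
# raises the equilibrium burden. All numbers are illustrative assumptions.

source = 1.0       # sulfate production rate (arbitrary units per day)
tau_wet = 3.0      # wet-deposition lifetime, days (assumed, fast)
tau_dry = 30.0     # dry-deposition lifetime, days (assumed, slow)

def equilibrium_burden(tau_wet, tau_dry, source):
    """Steady-state burden where production balances total deposition."""
    k_total = 1.0 / tau_wet + 1.0 / tau_dry
    return source / k_total

b_control = equilibrium_burden(tau_wet, tau_dry, source)
b_weaker_wet = equilibrium_burden(tau_wet * 1.2, tau_dry, source)  # 20% slower wet removal
print(b_control, b_weaker_wet)   # burden increases when wet removal weakens
```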
Abstract:
Despite the success of studies integrating remotely sensed data with flood modelling, and despite the need to provide near-real-time data routinely on a global scale and to set up online data archives, there is to date a lack of spatially and temporally distributed hydraulic parameters to support ongoing modelling efforts. The objective of this project is therefore to provide a global evaluation and benchmark data set of floodplain water stages with uncertainties, and their assimilation in a large-scale flood model, using space-borne radar imagery. An algorithm is developed for the automated retrieval of water stages with uncertainties from a sequence of radar imagery, and the data are assimilated in a flood model using the Tewkesbury 2007 flood event as a feasibility study. The retrieval method that we employ is based on possibility theory, an extension of fuzzy set theory that encompasses probability theory. We first identify the main sources of uncertainty in the retrieval of water stages from radar imagery, for which we define physically meaningful ranges of parameter values. Possibilities of values are then computed for each parameter using a triangular 'membership' function. This procedure allows the computation of possible values of water stages at maximum flood extent at many different locations along a river. At a later stage in the project these data are then used in the assimilation, calibration or validation of a flood model. The application is subsequently extended to a global scale using wide-swath radar imagery and a simple global flood forecasting model, thereby providing improved river discharge estimates to update the latter.
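A minimal sketch of the triangular 'membership' function mentioned above is given below: possibility 1 at the most plausible parameter value, falling linearly to zero at the ends of its physically meaningful range. The example range is a placeholder, not a value from the study.

```python
# Minimal sketch of a triangular 'membership' (possibility) function as used in
# possibility theory: 1 at the most plausible value, 0 outside the physically
# meaningful range. The example range below is a placeholder.

def triangular_possibility(x, lower, mode, upper):
    """Possibility of parameter value x over the range [lower, upper]."""
    if x <= lower or x >= upper:
        return 0.0
    if x <= mode:
        return (x - lower) / (mode - lower)
    return (upper - x) / (upper - mode)

# Example: possibility of a water-stage retrieval parameter (placeholder range)
print(triangular_possibility(0.7, lower=0.0, mode=0.5, upper=1.0))   # -> 0.6
```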
Abstract:
The El Niño–Southern Oscillation (ENSO) is a naturally occurring fluctuation that originates in the tropical Pacific region and affects ecosystems, agriculture, freshwater supplies, hurricanes and other severe weather events worldwide. Under the influence of global warming, the mean climate of the Pacific region will probably undergo significant changes. The tropical easterly trade winds are expected to weaken; surface ocean temperatures are expected to warm fastest near the equator and more slowly farther away; the equatorial thermocline that marks the transition between the wind-mixed upper ocean and deeper layers is expected to shoal; and the temperature gradients across the thermocline are expected to become steeper. Year-to-year ENSO variability is controlled by a delicate balance of amplifying and damping feedbacks, and one or more of the physical processes that are responsible for determining the characteristics of ENSO will probably be modified by climate change. Therefore, despite considerable progress in our understanding of the impact of climate change on many of the processes that contribute to El Niño variability, it is not yet possible to say whether ENSO activity will be enhanced or damped, or if the frequency of events will change.
Abstract:
This paper presents a first attempt to estimate mixing parameters from sea level observations using a particle method based on importance sampling. The method is applied to an ensemble of 128 members of model simulations with a global ocean general circulation model of high complexity. Idealized twin experiments demonstrate that the method is able to accurately reconstruct mixing parameters from an observed mean sea level field when mixing is assumed to be spatially homogeneous. An experiment with inhomogeneous eddy coefficients fails because of the limited ensemble size. This is overcome by the introduction of local weighting, which is able to capture spatial variations in mixing qualitatively. As the sensitivity of sea level to variations in mixing is higher for low values of the mixing coefficients, the method works relatively well in regions of low eddy activity.
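The importance-sampling step underlying such a particle method can be sketched as below: each of the 128 members is weighted by a likelihood of the observed sea level given its simulated field. The Gaussian observation-error assumption and all numbers are illustrative, not taken from the paper.

```python
# Minimal sketch of importance-sampling weights for an ensemble of candidate
# mixing parameters: members whose simulated sea level matches the observations
# get higher weight. Gaussian errors and all numbers are illustrative assumptions.

import numpy as np

def importance_weights(simulated_ssh, observed_ssh, obs_error_std):
    """Normalized Gaussian likelihood weights for each ensemble member."""
    # simulated_ssh: (n_members, n_obs); observed_ssh: (n_obs,)
    misfit = simulated_ssh - observed_ssh
    log_w = -0.5 * np.sum((misfit / obs_error_std) ** 2, axis=1)
    log_w -= log_w.max()              # guard against numerical underflow
    w = np.exp(log_w)
    return w / w.sum()

rng = np.random.default_rng(0)
sim = rng.normal(size=(128, 10))      # 128 members, 10 sea level observation points
obs = np.zeros(10)
print(importance_weights(sim, obs, obs_error_std=1.0).shape)   # -> (128,)
```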
Abstract:
Given the growing impact of human activities on the sea, managers are increasingly turning to marine protected areas (MPAs) to protect marine habitats and species. Many MPAs have been unsuccessful, however, and lack of income has been identified as a primary reason for failure. In this study, data from a global survey of 79 MPAs in 36 countries were analysed and attempts made to construct predictive models to determine the income requirements of any given MPA. Statistical tests were used to uncover possible patterns and relationships in the data, with two basic approaches. In the first of these, an attempt was made to build an explanatory "bottom-up" model of the cost structures that might be required to pursue various management activities. This proved difficult in practice owing to the very broad range of applicable data, spanning many orders of magnitude. In the second approach, a "top-down" regression model was constructed using logarithms of the base data, in order to address the breadth of the data ranges. This approach suggested that MPA size and visitor numbers together explained 46% of the variation in minimum income requirements (P < 0.001), with area being the slightly more influential factor. The significance of area to income requirements was of little surprise, given its profile in the literature. However, the relationship between visitor numbers and income requirements might go some way to explaining why northern hemisphere MPAs with apparently high incomes still claim to be under-funded. The relationship between running costs and visitor numbers has important implications not only for determining a realistic level of funding for MPAs, but also for assessing from where funding might be obtained. Since a substantial proportion of the income of many MPAs appears to be utilized for amenity purposes, a case may be made for funds to be provided from the typically better-resourced government social and educational budgets as well as from environmental budgets. Similarly, visitor fees, already an important source of funding for some MPAs, might have a broader role to play in how MPAs are financed in the future.
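The "top-down" log-space regression described above might look like the following sketch; the survey data are invented placeholders and the code is not the authors' analysis.

```python
# Sketch of a 'top-down' regression on logarithms of the base data: ordinary
# least squares of log(income requirement) on log(MPA area) and log(visitor
# numbers). All data below are invented placeholders, not the study's survey.

import numpy as np

rng = np.random.default_rng(1)
area = rng.lognormal(mean=4.0, sigma=2.0, size=79)      # km^2 (placeholder)
visitors = rng.lognormal(mean=8.0, sigma=2.0, size=79)  # per year (placeholder)
income = 50.0 * area**0.4 * visitors**0.3 * rng.lognormal(sigma=1.0, size=79)

X = np.column_stack([np.ones(79), np.log(area), np.log(visitors)])
y = np.log(income)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # [intercept, b_area, b_visitors]

r_squared = 1 - np.sum((y - X @ coef) ** 2) / np.sum((y - y.mean()) ** 2)
print(coef, r_squared)   # fraction of log-income variation explained
```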
Abstract:
An experiment was designed to test the response of growing pullets to two changes in photoperiod (an increase from 8 to 14 h followed 5 weeks later by the reverse change, or a decrease from 14 to 8 h followed by an increase). The first change was made either at 35 days or at 56 days of age, to test the influence of age on the responses observed. Control groups were kept on constant 8-h and constant 14-h photoperiods, and the responses to appropriate single changes were also tested. Mean age at first egg varied from 111 days for birds given a single increment at 56 days to 166 days for pullets given an increase in photoperiod at 35 days followed by a reduction at 70 days. Responses to the single changes confirmed earlier reports that sensitivity to change in photoperiod varies with age in a manner that is quantitatively predictable. Responses to the double changes could be explained by postulating that the initial change altered the 'physiological age' of the bird to an extent that was also quantitatively predictable. An early increase in photoperiod advances sexual development and makes the bird more sensitive to a subsequent decrease than would be expected by reference to its chronological age. An early decrease in photoperiod delays sexual development, which can have the effect of making the bird more or less sensitive to a subsequent increase since, in layer-strain pullets, sensitivity to an increment in photoperiod normally increases up to about 9 weeks of age but decreases thereafter. Mean age at first egg predicted using these concepts was very highly correlated with observed age at first egg. The results provide a rational basis for constructing a model to predict age at first egg for any combination of increases and decreases in photoperiod applied to growing pullets.