67 results for repository, process model, version, storage


Relevance: 100.00%

Abstract:

There is much evidence that El Niño and La Niña lead to significant atmospheric seasonal predictability across much of the globe. However, despite successful predictions of tropical Pacific sea surface temperatures (SSTs), atmospheric seasonal forecasts have had limited success. This study investigates model errors in the Hadley Centre Atmospheric Model version 3 (HadAM3) by analyzing composites of similar El Niño and La Niña events at their peak in December–January–February (DJF) and through their decay in March–April–May (MAM). The large-scale tropical ENSO teleconnections are modeled accurately by HadAM3 during DJF, but the strongest extratropical teleconnection, that in the North Pacific during winter, is modeled inaccurately. The Aleutian low is frequently observed to shift eastward during El Niño, but the modeled response always consists of a deepening of the low without a shift. This is traced to small errors in the sensitivity of precipitation to SST in the tropical Pacific: the sensitivity does not display enough variability, so precipitation is always too high over the warmest SSTs. This error is reduced when vertical resolution is increased from 19 to 30 levels, but enhanced horizontal resolution does not improve the simulation further. In MAM, following the peak of an El Niño or La Niña, atmospheric anomalies are observed to decay rapidly. The modeled ENSO response in DJF persists into MAM, making the extratropical anomalies in MAM too strong. This inaccuracy is again likely to be due to the overly high modeled sensitivity of tropical Pacific precipitation to SST, which is not significantly improved with enhanced vertical or horizontal resolution in MAM.

Relevance: 100.00%

Abstract:

The uptake and storage of anthropogenic carbon in the North Atlantic is investigated using different configurations of ocean general circulation/carbon cycle models. We investigate how different representations of the ocean physics in the models, which represent the range of models currently in use, affect the evolution of CO2 uptake in the North Atlantic. The buffer effect of the ocean carbon system would be expected to reduce ocean CO2 uptake as the ocean absorbs increasing amounts of CO2. We find that the strength of the buffer effect is very dependent on the model ocean state, as it affects both the magnitude and timing of the changes in uptake. The timescale over which uptake of CO2 in the North Atlantic drops to below preindustrial levels is particularly sensitive to the ocean state which sets the degree of buffering; it is less sensitive to the choice of atmospheric CO2 forcing scenario. Neglecting physical climate change effects, North Atlantic CO2 uptake drops below preindustrial levels between 50 and 300 years after stabilisation of atmospheric CO2 in different model configurations. Storage of anthropogenic carbon in the North Atlantic varies much less among the different model configurations, as differences in ocean transport of dissolved inorganic carbon and uptake of CO2 compensate each other. This supports the idea that measured inventories of anthropogenic carbon in the real ocean cannot be used to constrain the surface uptake. Including physical climate change effects reduces anthropogenic CO2 uptake and storage in the North Atlantic further, due to the combined effects of surface warming, increased freshwater input, and a slowdown of the meridional overturning circulation. The timescale over which North Atlantic CO2 uptake drops to below preindustrial levels is reduced by about one-third, leading to an estimate of this timescale for the real world of about 50 years after the stabilisation of atmospheric CO2. 
In the climate change experiment, a shallowing of the mixed layer depths in the North Atlantic results in a significant reduction in primary production, reducing the potential role for biology in drawing down anthropogenic CO2.
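The buffer effect described above can be made concrete with one line of arithmetic. A minimal sketch, assuming the fractional change in dissolved inorganic carbon (DIC) per fractional change in surface pCO2 is 1/R, with a Revelle factor R somewhere in the range 9–15; these numbers are illustrative assumptions, not values from the model configurations in the study:

```python
# Minimal sketch of the carbonate buffer ("Revelle") effect: the surface
# ocean takes up only a fraction 1/R of a fractional pCO2 change as DIC,
# and R itself grows as CO2 accumulates. Illustrative numbers only.
def dic_response(dpco2_frac, revelle):
    """Fractional DIC change for a given fractional pCO2 change."""
    return dpco2_frac / revelle

for R in (9, 12, 15):  # plausible range; R increases with accumulated CO2
    print(f"R={R:2d}: a 10% pCO2 rise -> {100 * dic_response(0.10, R):.2f}% DIC rise")
```

As R grows with accumulated CO2, the same pCO2 increment buys progressively less DIC uptake, which is the mechanism behind the declining North Atlantic uptake described in the abstract.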

Relevance: 100.00%

Abstract:

The transport of stratospheric air into the troposphere within deep convection was investigated using the Met Office Unified Model version 6.1. Three cases were simulated in which convective systems formed over the UK in the summer of 2005. For each of these three cases, simulations were performed on a grid having 4 km horizontal grid spacing in which the convection was parameterized and on a grid having 1 km horizontal grid spacing, which permitted explicit representation of the largest energy-containing scales of deep convection. Cross-tropopause transport was diagnosed using passive tracers that were initialized above the dynamically defined tropopause (2 potential vorticity unit surface) with a mixing ratio of 1. Although the synoptic-scale environment and triggering mechanisms varied between the cases, the total simulated transport was similar in all three cases. The total stratosphere-to-troposphere transport over the lifetime of the convective systems ranged from 25 to 100 kg/m2 across the simulated convective systems and resolutions, which corresponds to ∼5–20% of the total mass located within a stratospheric column extending 2 km above the tropopause. In all simulations, the transport into the lower troposphere (defined as below 3.5 km elevation) accounted for ∼1% of the total transport across the tropopause. In the 4 km runs most of the transport was due to parameterized convection, whereas in the 1 km runs the transport was due to explicitly resolved convection. The largest difference between the simulations with different resolutions occurred in the one case of midlevel convection considered, in which the total transport in the 1 km grid spacing simulation with explicit convection was 4 times that in the 4 km grid spacing simulation with parameterized convection. 
Although the total cross-tropopause transport was similar, stratospheric tracer was deposited more deeply to near-surface elevations in the convection-parameterizing simulations than in convection-permitting simulations.
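The quoted ∼5–20% figure can be sanity-checked with rough numbers. A back-of-envelope sketch, assuming an air density of about 0.35 kg m−3 just above a midlatitude tropopause; this density is an assumption for illustration, not a value taken from the simulations:

```python
# Back-of-envelope check of the quoted transport fractions (the density
# below is a rough assumption, not a value from the study itself).
rho_tropopause = 0.35  # kg/m^3, assumed air density just above the tropopause
layer_depth = 2000.0   # m, stratospheric column considered (2 km deep)

column_mass = rho_tropopause * layer_depth  # kg/m^2 in the 2 km column

for transport in (25.0, 100.0):  # kg/m^2, simulated transport range
    frac = transport / column_mass
    print(f"transport {transport:>5.0f} kg/m2 -> {100 * frac:.0f}% of column mass")
```

The resulting fractions fall in roughly the same 5–20% band quoted in the abstract, which is the consistency check the numbers invite.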

Relevance: 100.00%

Abstract:

In most commercially available predictive control packages, economic optimisation and predictive control are performed separately, even though both algorithms may be part of the same software system. In this article, that separated approach is compared with two alternative approaches in which the economic objectives are included directly in the predictive control algorithm. Simulations are carried out using the Tennessee Eastman process model.
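The two philosophies being contrasted can be sketched on a toy scalar plant: a two-layer scheme in which a steady-state economic optimisation supplies a setpoint to a tracking MPC, versus a one-layer "economic MPC" whose stage cost is the economic objective itself. Everything here (the plant, prices, horizon, and the brute-force grid search standing in for a proper QP solver) is invented for illustration and has nothing to do with the Tennessee Eastman model:

```python
# Toy contrast: (a) steady-state economics -> setpoint -> tracking MPC,
# versus (b) economic stage cost inside the MPC. All values invented.
import itertools

a, b = 0.9, 0.5              # scalar linear plant: x+ = a*x + b*u
price_u, value_x = 1.0, 3.0  # cost per unit input, value per unit state

def simulate(x0, us):
    xs, x = [], x0
    for u in us:
        x = a * x + b * u
        xs.append(x)
    return xs

grid = [i / 10 for i in range(21)]  # candidate inputs in [0, 2]
N = 3                               # prediction horizon

# (a) two-layer: steady-state economics picks a setpoint x_ref,
#     then a quadratic MPC tracks it. At steady state x = b*u/(1-a).
u_ss = max(grid, key=lambda u: value_x * b * u / (1 - a) - price_u * u)
x_ref = b * u_ss / (1 - a)

def tracking_cost(us):
    return sum((x - x_ref) ** 2 for x in simulate(0.0, us))

best_track = min(itertools.product(grid, repeat=N), key=tracking_cost)

# (b) one-layer: the economic objective is the MPC stage cost.
def economic_cost(us):
    return sum(price_u * u - value_x * x for u, x in zip(us, simulate(0.0, us)))

best_econ = min(itertools.product(grid, repeat=N), key=economic_cost)

print("economic setpoint x_ref =", round(x_ref, 2))
print("tracking MPC inputs :", best_track)
print("economic MPC inputs :", best_econ)
```

On this unconstrained linear toy both formulations land on the same input sequence; the structural difference (where the economics enters the optimisation) is the point, and on a constrained nonlinear plant such as the Tennessee Eastman process the two can diverge.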

Relevance: 100.00%

Abstract:

Hourly winter weather of the Last Glacial Maximum (LGM) is simulated using the Community Climate Model version 3 (CCM3) on a globally resolved T170 (75 km) grid. Results are compared to a longer LGM climatological run with the same boundary conditions and monthly saves. Hourly-scale animations are used to enhance interpretations. The purpose of the study is to explore whether additional insights into ice age conditions can be gleaned by going beyond the standard employment of monthly average model statistics to infer ice age weather and climate. Results for both LGM runs indicate a decrease in North Atlantic and an increase in North Pacific cyclogenesis. Storm trajectories react to the mechanical forcing of the Laurentide Ice Sheet, with Pacific storms tracking over middle Alaska and northern Canada and terminating in the Labrador Sea. This result is coincident with other model results in also showing a significant reduction in Greenland wintertime precipitation – a response supported by ice core evidence. Higher temporal resolution puts in sharper focus the close tracking of Pacific storms along the west coast of North America. This response is consistent with increased poleward heat transport in the LGM climatological run and could help explain “early” glacial warming inferred in this region from proxy climate records. Additional analyses show a large increase in central Asian surface gustiness, supporting observational inferences that upper-level winds associated with Asian–Pacific storms transported Asian dust to Greenland during the LGM.

Relevance: 100.00%

Abstract:

Current methods for estimating vegetation parameters are generally sub-optimal in the way they exploit information and do not generally consider uncertainties. We look forward to a future where operational data assimilation schemes improve estimates by tracking land surface processes and exploiting multiple types of observations. Data assimilation schemes seek to combine observations and models in a statistically optimal way, taking into account uncertainty in both, but have not yet been much exploited in this area. The EO-LDAS scheme and prototype, developed under ESA funding, is designed to exploit the anticipated wealth of data that will be available under GMES missions, such as the Sentinel family of satellites, to provide improved mapping of land surface biophysical parameters. This paper describes the EO-LDAS implementation and explores some of its core functionality. EO-LDAS is a weak-constraint variational data assimilation system. The prototype provides a mechanism for constraint based on a prior estimate of the state vector, a linear dynamic model, and Earth Observation data (top-of-canopy reflectance here). The observation operator is a non-linear optical radiative transfer model for a vegetation canopy with a soil lower boundary, operating over the range 400 to 2500 nm. Adjoint codes for all model and operator components are provided in the prototype by automatic differentiation of the computer codes. In this paper, EO-LDAS is applied to the problem of daily estimation of six of the parameters controlling the radiative transfer operator over the course of a year (> 2000 state vector elements). Zero- and first-order process model constraints are implemented and explored as the dynamic model. The assimilation estimates all state vector elements simultaneously.
This is performed in the context of a typical Sentinel-2 MSI operating scenario, using synthetic MSI observations simulated with the observation operator, with uncertainties typical of those achieved by optical sensors assumed for the data. The experiments consider a baseline state vector estimation case where dynamic constraints are applied, and assess the impact of dynamic constraints on the a posteriori uncertainties. The results demonstrate that reductions in uncertainty by a factor of up to two might be obtained by applying the sorts of dynamic constraints used here. The hyperparameter (dynamic model uncertainty) required to control the assimilation is estimated by a cross-validation exercise. The result of the assimilation is seen to be robust to missing observations with quite large data gaps.
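The weak-constraint variational idea can be sketched in a few lines: one quadratic penalty each for the prior, the observations, and a first-order (persistence) dynamic model, all minimized jointly. This toy uses an identity observation operator and invented uncertainties in place of EO-LDAS's non-linear radiative transfer model, so it illustrates only the structure of the cost function:

```python
# Minimal weak-constraint variational sketch: estimate a daily state x_t
# from a prior, sparse noisy observations (H = identity here), and a
# first-difference dynamic-model penalty, stacked as one least-squares
# problem. All uncertainties and the "truth" are invented for illustration.
import numpy as np

T = 30                                    # days in the assimilation window
truth = np.sin(np.linspace(0, np.pi, T))  # synthetic seasonal trajectory
obs_days = np.arange(0, T, 5)             # an observation every 5th day
rng = np.random.default_rng(0)
y = truth[obs_days] + 0.05 * rng.standard_normal(obs_days.size)

sigma_b, sigma_o, sigma_m = 1.0, 0.05, 0.1  # prior, obs, model-error std
x_b = np.zeros(T)                            # flat prior

# Rows for the prior term, the observation term, and the dynamic
# constraint x_{t+1} - x_t ~ 0, each scaled by its uncertainty.
A_prior = np.eye(T) / sigma_b
A_obs = np.zeros((obs_days.size, T))
A_obs[np.arange(obs_days.size), obs_days] = 1.0 / sigma_o
D = (np.eye(T, k=1)[: T - 1] - np.eye(T)[: T - 1]) / sigma_m

A = np.vstack([A_prior, A_obs, D])
b = np.concatenate([x_b / sigma_b, y / sigma_o, np.zeros(T - 1)])

x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)  # minimizes the quadratic J
print("RMS error vs truth:", float(np.sqrt(np.mean((x_hat - truth) ** 2))))
```

The dynamic penalty fills the gaps between observation days, which is the role the zero- and first-order process models play in the prototype.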

Relevance: 100.00%

Abstract:

Identifying the prime drivers of the twentieth-century multidecadal variability in the Atlantic Ocean is crucial for predicting how the Atlantic will evolve in the coming decades and the resulting broad impacts on weather and precipitation patterns around the globe. Recently, Booth et al. showed that the Hadley Centre Global Environmental Model, version 2, Earth system configuration (HadGEM2-ES) closely reproduces the observed multidecadal variations of area-averaged North Atlantic sea surface temperature in the twentieth century. The multidecadal variations simulated in HadGEM2-ES are primarily driven by aerosol indirect effects that modify net surface shortwave radiation. On the basis of these results, Booth et al. concluded that aerosols are a prime driver of twentieth-century North Atlantic climate variability. However, here it is shown that there are major discrepancies between the HadGEM2-ES simulations and observations in the North Atlantic upper-ocean heat content, in the spatial pattern of multidecadal SST changes within and outside the North Atlantic, and in the subpolar North Atlantic sea surface salinity. These discrepancies may be strongly influenced by, and indeed in large part caused by, aerosol effects. It is also shown that the aerosol effects simulated in HadGEM2-ES cannot account for the observed anticorrelation between detrended multidecadal surface and subsurface temperature variations in the tropical North Atlantic. These discrepancies cast considerable doubt on the claim that aerosol forcing drives the bulk of this multidecadal variability.

Relevance: 100.00%

Abstract:

Under particular large-scale atmospheric conditions, several windstorms may affect Europe within a short time period. The occurrence of such cyclone families leads to large socioeconomic impacts and cumulative losses. Here, the serial clustering of windstorms is analyzed for the North Atlantic/western Europe. Clustering is quantified as the dispersion (ratio of variance to mean) of cyclone passages over a given area. Dispersion statistics are derived for three reanalysis data sets and a 20-run ensemble of the European Centre Hamburg version 5/Max Planck Institute Ocean Model version 1 global climate model (ECHAM5/MPI-OM1 GCM). The dependence of the clustering on cyclone intensity is also analyzed. Confirming previous studies, serial clustering is identified in the reanalysis data sets primarily on both flanks and in the downstream regions of the North Atlantic storm track. This pattern is a robust feature across the reanalysis data sets. Over the whole area, extreme cyclones cluster more than nonextreme cyclones. The ECHAM5/MPI-OM1 GCM is generally able to reproduce the spatial patterns of clustering under recent climate conditions, but some biases are identified. Under future climate conditions (A1B scenario), the GCM ensemble indicates that serial clustering may decrease over the North Atlantic storm track area and parts of western Europe. This decrease is associated with an extension of the polar jet toward Europe, which implies a tendency toward a more regular occurrence of cyclones over parts of the North Atlantic Basin poleward of 50°N and over western Europe. An increase in the clustering of cyclones is projected south of Newfoundland. The detected shifts imply a change in the risk of occurrence of cumulative events over Europe under future climate conditions.
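The dispersion statistic used here is easy to demonstrate: for counts generated by a purely random (Poisson) process the variance-to-mean ratio is about 1, while season-to-season modulation of the storm rate pushes it above 1. The synthetic seasons and rates below are invented for illustration only:

```python
# Variance-to-mean ratio ("dispersion") of cyclone counts: ~1 for a
# Poisson (random) series, >1 when seasons cluster. Synthetic data only.
import math
import random

random.seed(1)

def poisson_sample(lam):
    # Knuth's method, adequate for small lambda
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def dispersion(counts):
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / len(counts)
    return var / mean

seasons = 5000
regular = [poisson_sample(5.0) for _ in range(seasons)]                  # constant rate
clustered = [poisson_sample(random.choice([2.0, 8.0])) for _ in range(seasons)]  # "quiet" vs "stormy"

print("dispersion, random    :", round(dispersion(regular), 2))   # near 1
print("dispersion, clustered :", round(dispersion(clustered), 2)) # well above 1
```

The clustered series has the same mean count as the random one; only the season-to-season modulation of the rate, the analogue of large-scale conditions favouring cyclone families, inflates the dispersion.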

Relevance: 100.00%

Abstract:

This paper reports on a set of paleoclimate simulations for 21, 16, 14, 11 and 6 ka (thousands of years ago) carried out with the Community Climate Model, Version 1 (CCM1) of the National Center for Atmospheric Research (NCAR). This climate model uses four interactive components that were not available in our previous simulations with the NCAR CCM0 (COHMAP, 1988, Science, 241, 1043–1052; Wright et al., 1993, Global Climate Since the Last Glacial Maximum, University of Minnesota Press, MN): soil moisture, snow hydrology, sea ice, and mixed-layer ocean temperature. The new simulations also use new estimates of ice sheet height and size from Peltier (1994, Science, 265, 195–201), and synchronize the astronomically dated orbital forcing with the ice sheet and atmospheric CO2 levels corrected from radiocarbon years to calendar years. The CCM1 simulations agree with the previous simulations in their most general characteristics. The 21 ka climate is cold and dry, in response to the presence of the ice sheets and lowered CO2 levels. The period 14–6 ka has strengthened northern summer monsoons and warm mid-latitude continental interiors in response to orbital changes. Regional differences between the CCM1 and CCM0 simulations can be traced to the effects of either the new interactive model components or the new boundary conditions. CCM1 simulates climate processes more realistically, but has additional degrees of freedom that can allow the model to ‘drift’ toward less realistic solutions in some instances. The CCM1 simulations are expressed in terms of equilibrium vegetation using BIOME 1, and indicate large shifts in biomes. Northern tundra and forest biomes are displaced southward at glacial maximum, and subtropical deserts contract in the mid-Holocene when monsoons strengthen. These vegetation changes could, if simulated interactively, introduce additional climate feedbacks.
The total area of vegetated land remains nearly constant through time because the exposure of continental shelves with lowered sea level largely compensates for the land covered by the expanded ice sheets.

Relevance: 100.00%

Abstract:

Decadal climate predictions exhibit large biases, which are often subtracted and forgotten. However, understanding the causes of bias is essential to guide efforts to improve prediction systems, and may offer additional benefits. Here the origins of biases in decadal predictions are investigated, including whether analysis of these biases might provide useful information. The focus is especially on the lead-time-dependent bias tendency. A “toy” model of a prediction system is initially developed and used to show that there are several distinct contributions to bias tendency. Contributions from sampling of internal variability and a start-time-dependent forcing bias can be estimated and removed to obtain a much improved estimate of the true bias tendency, which can provide information about errors in the underlying model and/or errors in the specification of forcings. It is argued that the true bias tendency, not the total bias tendency, should be used to adjust decadal forecasts. The methods developed are applied to decadal hindcasts of global mean temperature made using the Hadley Centre Coupled Model, version 3 (HadCM3), climate model, and it is found that this model exhibits a small positive bias tendency in the ensemble mean. When considering different model versions, it is shown that the true bias tendency is very highly correlated with both the transient climate response (TCR) and non–greenhouse gas forcing trends, and can therefore be used to obtain observationally constrained estimates of these relevant physical quantities.
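The decomposition the toy model illustrates can be sketched directly: each hindcast bias is modelled as a true lead-time drift plus a start-time-dependent forcing bias plus sampled internal variability, and averaging over start dates isolates the drift. The parameter values below are invented for illustration and are not those of the study's toy model:

```python
# Toy prediction system: hindcast bias = true drift (bias tendency) +
# start-time-dependent forcing bias + sampled internal variability.
# All parameter values are invented for illustration.
import random

random.seed(7)
beta = 0.02    # "true" bias tendency per lead year (model drift)
gamma = 0.01   # forcing bias growing with start year
sigma = 0.05   # internal variability sampled by each hindcast
n_starts = 40
leads = list(range(1, 11))  # 10-year hindcasts

def hindcast_bias(start, lead):
    return beta * lead + gamma * start + random.gauss(0.0, sigma)

# Total bias at each lead time, averaged over start dates:
total = [sum(hindcast_bias(s, L) for s in range(n_starts)) / n_starts
         for L in leads]

# The start-dependent forcing bias averages to a lead-independent offset,
# so the mean slope with lead time recovers the true tendency:
slope = (total[-1] - total[0]) / (leads[-1] - leads[0])
print(f"estimated bias tendency {slope:.3f} vs true {beta}")
```

The offset contributed by the forcing bias is what must be estimated and removed in practice; in this linear sketch it drops out of the slope automatically, which is the essence of separating the true bias tendency from the total.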

Relevance: 100.00%

Abstract:

Seasonal-to-interannual predictions of Arctic sea ice may be important for Arctic communities and industries alike. Previous studies have suggested that Arctic sea ice is potentially predictable but that the skill of predictions of the September extent minimum, initialized in early summer, may be low. The authors demonstrate that a melt season “predictability barrier” and two predictability reemergence mechanisms, suggested by a previous study, are robust features of five global climate models. Analysis of idealized predictions with one of these models [Hadley Centre Global Environment Model, version 1.2 (HadGEM1.2)], initialized in January, May and July, demonstrates that this predictability barrier exists in initialized forecasts as well. As a result, the skill of sea ice extent and volume forecasts is strongly start-date dependent, and forecasts initialized in May lose skill much faster than those initialized in January or July. Thus, in an operational setting, initializing predictions of extent and volume in July has strong advantages for the prediction of the September minimum when compared to predictions initialized in May. Furthermore, a regional analysis of sea ice predictability indicates that extent is predictable for longer in the seasonal ice zones of the North Atlantic and North Pacific than in the regions dominated by perennial ice in the central Arctic and marginal seas. In a number of the Eurasian shelf seas, which are important for Arctic shipping, only the forecasts initialized in July have continuous skill during the first summer. In contrast, predictability of ice volume persists for over 2 yr in the central Arctic but for less time in other regions.

Relevance: 100.00%

Abstract:

The aerosol direct radiative effect (DRE) of African smoke was analyzed in cloud scenes over the southeast Atlantic Ocean, using Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY) satellite observations and Hadley Centre Global Environmental Model version 2 (HadGEM2) climate model simulations. The observed mean DRE was about 30–35 W m−2 in August and September 2006–2009. In some years, short episodes of high aerosol DRE can be observed, due to high aerosol loadings, while in other years the loadings are lower but more prolonged. Climate models that use evenly distributed monthly averaged emission fields will not reproduce these high aerosol loadings. Furthermore, the simulated monthly mean aerosol DRE in HadGEM2 is only about 6 W m−2 in August. The difference from the SCIAMACHY mean observations can be partly explained by an underestimation of the aerosol absorption Ångström exponent in the ultraviolet. However, the resulting increase in the simulated aerosol DRE, of about 20%, is not enough to explain the observed discrepancy between simulations and observations.
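The arithmetic behind the closing sentence is worth spelling out: even after the roughly 20% increase from correcting the ultraviolet absorption Ångström exponent, most of the gap to the observed 30–35 W m−2 remains. A one-liner check:

```python
# Arithmetic check: a ~20% increase on the ~6 W m^-2 simulated DRE still
# falls far short of the observed 30-35 W m^-2.
simulated = 6.0               # W m^-2, HadGEM2 August mean
corrected = simulated * 1.20  # after the absorption-exponent correction
observed_low, observed_high = 30.0, 35.0
shortfall = (observed_low - corrected, observed_high - corrected)
print(f"corrected simulation: {corrected:.1f} W m^-2")
print(f"remaining shortfall : {shortfall[0]:.1f} to {shortfall[1]:.1f} W m^-2")
```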

Relevance: 100.00%

Abstract:

An analysis of diabatic heating and moistening processes in 12–36-hour lead time forecasts from 12 global circulation models is presented as part of the “Vertical structure and physical processes of the Madden-Julian Oscillation (MJO)” project. A lead time of 12–36 hours is chosen to constrain the large-scale dynamics and thermodynamics to be close to observations, while avoiding the initial spin-up period during which the models adjust to being driven from the YOTC analysis. A comparison of the vertical velocity and rainfall with the observations and the YOTC analysis suggests that the phases of convection associated with the MJO are constrained in most models at this lead time, although the rainfall in the suppressed phase is typically overestimated. Although the large-scale dynamics is reasonably constrained, the moistening and heating profiles have large inter-model spread. In particular, there are large spreads in convective heating and moistening at mid-levels during the transition to active convection. Radiative heating and cloud parameters have the largest relative spread across models at upper levels during the active phase. A detailed analysis of time-step behaviour shows that some models exhibit strong intermittency in rainfall, and that the relationship between precipitation and dynamics differs between models. The wealth of model outputs archived during this project is a very valuable resource for model developers beyond the study of the MJO. In addition, the findings of this study can inform the design of process model experiments, and inform the priorities for field experiments and future observing systems.

Relevance: 100.00%

Abstract:

Previous climate model simulations have shown that the configuration of the Earth's orbit during the early to mid-Holocene (approximately 10–5 kyr) can account for the generally warmer-than-present conditions experienced by the high latitudes of the northern hemisphere. New simulations for 6 kyr with two atmospheric/mixed-layer ocean models (Community Climate Model, version 1, CCM1, and Global ENvironmental and Ecological Simulation of Interactive Systems, version 2, GENESIS 2) are presented here and compared with results from two previous simulations with GENESIS 1 that were obtained with and without the albedo feedback due to climate-induced poleward expansion of the boreal forest. The climate model results are summarized in the form of potential vegetation maps obtained with the global BIOME model, which facilitates visual comparisons both among models and with pollen and plant macrofossil data recording shifts of the forest-tundra boundary. A preliminary synthesis shows that the forest limit was shifted 100–200 km north in most sectors. Both CCM1 and GENESIS 2 produced a shift of this magnitude. GENESIS 1, however, produced too small a shift, except when the boreal forest albedo feedback was included; the feedback in this case was estimated to have amplified forest expansion by approximately 50%. The forest limit changes also show meridional patterns (greatest expansion in central Siberia and little or none in Alaska and Labrador) which have yet to be reproduced by models. Further progress in understanding the processes involved in the response of climate and vegetation to orbital forcing will require both the deployment of coupled atmosphere-biosphere-ocean models and the development of more comprehensive observational data sets.

Relevance: 100.00%

Abstract:

Background: Concerted evolution is normally used to describe parallel changes at different sites in a genome, but it is also observed in languages, where a specific phoneme changes to the same other phoneme in many words in the lexicon – a phenomenon known as regular sound change. We develop a general statistical model that can detect concerted changes in aligned sequence data and apply it to study regular sound changes in the Turkic language family.

Results: Linguistic evolution, unlike the genetic substitutional process, is dominated by events of concerted evolutionary change. Our model identified more than 70 historical events of regular sound change that occurred throughout the evolution of the Turkic language family, while simultaneously inferring a dated phylogenetic tree. Including regular sound changes yielded an approximately 4-fold improvement in the characterization of linguistic change over a simpler model of sporadic change, improved phylogenetic inference, and returned more reliable and plausible dates for events on the phylogenies. The historical timings of the concerted changes closely follow a Poisson process model, and the sound transition networks derived from our model mirror linguistic expectations.

Conclusions: We demonstrate that a model with no prior knowledge of complex concerted or regular changes can nevertheless infer the historical timings and genealogical placements of events of concerted change from the signals left in contemporary data. Our model can be applied wherever discrete elements – such as genes, words, cultural trends, technologies, or morphological traits – can change in parallel within an organism or other evolving group.
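What "timings follow a Poisson process" implies can be shown in a few lines: inter-event waiting times are exponential and memoryless. The sketch below generates a synthetic event history with roughly as many events as the ~70 sound changes the model recovered; the rate and the ~6000-year span are invented for illustration, not figures from the study:

```python
# A Poisson process in one dimension: exponential gaps between events.
# The rate and time span below are illustrative assumptions only.
import random

random.seed(3)

rate = 70 / 6000.0  # ~70 events over an assumed ~6000-year history
times, t = [], 0.0
while True:
    t += random.expovariate(rate)  # exponential gaps <=> Poisson process
    if t > 6000.0:
        break
    times.append(t)

gaps = [b - a for a, b in zip([0.0] + times[:-1], times)]
mean_gap = sum(gaps) / len(gaps)
print(f"{len(times)} events, mean gap {mean_gap:.0f} yr "
      f"(expected ~{1 / rate:.0f} yr)")
```

Checking that observed event timings are statistically indistinguishable from such a simulation (for example, that gap lengths look exponential with a constant rate) is the kind of comparison the "closely follow a Poisson process" claim rests on.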