Abstract:
The relationship between biases in Northern Hemisphere (NH) atmospheric blocking frequency and extratropical cyclone track density is investigated in 12 CMIP5 climate models to identify mechanisms underlying climate model biases and inform future model development. Biases in the Greenland blocking and summer Pacific blocking frequencies are associated with biases in the storm track latitudes, while biases in winter European blocking frequency are related to the North Atlantic storm track tilt and Mediterranean cyclone density. However, biases in summer European and winter Pacific blocking appear less related to cyclone track density. Furthermore, the models with smaller biases in winter European blocking frequency have smaller biases in cyclone density over Europe, which suggests that these are different aspects of the same bias. This is not found elsewhere in the NH. The summer North Atlantic and North Pacific mean CMIP5 track density and blocking biases might therefore have different origins.
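As an illustration of the kind of across-model comparison described above (a hedged sketch, not code from the study), relating per-model blocking-frequency biases to track-density biases might look like the following, with all variable names and values hypothetical:

```python
import numpy as np

def bias_relationship(blocking_bias, track_bias):
    """Correlation and regression slope between two sets of per-model biases.

    blocking_bias, track_bias: 1-D arrays with one regional-mean bias per
    climate model (e.g. 12 CMIP5 models), each defined as the model
    climatology minus a reanalysis reference.
    """
    blocking_bias = np.asarray(blocking_bias, dtype=float)
    track_bias = np.asarray(track_bias, dtype=float)
    r = np.corrcoef(blocking_bias, track_bias)[0, 1]
    slope, intercept = np.polyfit(blocking_bias, track_bias, 1)
    return r, slope, intercept

# Synthetic illustration with 12 hypothetical models.
rng = np.random.default_rng(0)
blocking = rng.normal(0.0, 0.05, 12)                 # blocking-frequency biases
tracks = 2.0 * blocking + rng.normal(0.0, 0.02, 12)  # related track-density biases
print(bias_relationship(blocking, tracks))
```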
Abstract:
We present a benchmark system for global vegetation models. This system provides a quantitative evaluation of multiple simulated vegetation properties, including primary production; seasonal net ecosystem production; vegetation cover; composition and height; fire regime; and runoff. The benchmarks are derived from remotely sensed gridded datasets and site-based observations. The datasets allow comparisons of annual average conditions and seasonal and inter-annual variability, and they allow the impact of spatial and temporal biases in means and variability to be assessed separately. Specifically designed metrics quantify model performance for each process, and are compared to scores based on the temporal or spatial mean value of the observations and a "random" model produced by bootstrap resampling of the observations. The benchmark system is applied to three models: a simple light-use efficiency and water-balance model (the Simple Diagnostic Biosphere Model: SDBM), the Lund-Potsdam-Jena (LPJ) and Land Processes and eXchanges (LPX) dynamic global vegetation models (DGVMs). In general, the SDBM performs better than either of the DGVMs. It reproduces independent measurements of net primary production (NPP) but underestimates the amplitude of the observed CO2 seasonal cycle. The two DGVMs show little difference for most benchmarks (including the inter-annual variability in the growth rate and seasonal cycle of atmospheric CO2), but LPX represents burnt fraction demonstrably more accurately. Benchmarking also identified several weaknesses common to both DGVMs. The benchmarking system provides a quantitative approach for evaluating how adequately processes are represented in a model, identifying errors and biases, tracking improvements in performance through model development, and discriminating among models. Adoption of such a system would do much to improve confidence in terrestrial model predictions of climate change impacts and feedbacks.
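The specific benchmark metrics are not given in the abstract; purely as a hedged illustration, a normalized-mean-error style score with a bootstrap "random model" baseline, one common choice in such benchmarking systems, could be sketched as:

```python
import numpy as np

def nme(model, obs):
    """Normalized mean error: mean |model - obs| relative to the mean
    absolute deviation of the observations (0 = perfect, 1 = no better
    than predicting the observational mean everywhere)."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return np.abs(model - obs).mean() / np.abs(obs - obs.mean()).mean()

def random_model_score(obs, n_boot=1000, seed=0):
    """Benchmark score of a 'random' model built by bootstrap
    resampling of the observations themselves."""
    rng = np.random.default_rng(seed)
    obs = np.asarray(obs, float)
    scores = [nme(rng.choice(obs, size=obs.size, replace=True), obs)
              for _ in range(n_boot)]
    return float(np.mean(scores))

# Illustration with synthetic data.
obs = np.random.default_rng(1).gamma(2.0, 1.0, 500)   # e.g. site NPP values
model = obs * 1.1 + 0.2                               # a biased 'model'
print(nme(model, obs), random_model_score(obs))
```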
Abstract:
Earth system models (ESMs) are increasing in complexity by incorporating more processes than their predecessors, making them potentially important tools for studying the evolution of climate and associated biogeochemical cycles. However, their coupled behaviour has only recently been examined in any detail, and this examination has yielded a very wide range of outcomes. For example, coupled climate–carbon cycle models that represent land-use change simulate total land carbon stores at 2100 that vary by as much as 600 Pg C, given the same emissions scenario. This large uncertainty is associated with differences in how key processes are simulated in different models, and illustrates the necessity of determining which models are most realistic using rigorous methods of model evaluation. Here we assess the state of the art in evaluation of ESMs, with a particular emphasis on the simulation of the carbon cycle and associated biospheric processes. We examine some of the new advances and remaining uncertainties relating to (i) modern and palaeodata and (ii) metrics for evaluation. We note that the practice of averaging results from many models is unreliable and no substitute for proper evaluation of individual models. We discuss a range of strategies, such as the inclusion of pre-calibration, combined process- and system-level evaluation, and the use of emergent constraints, that can contribute to the development of more robust evaluation schemes. An increasingly data-rich environment offers more opportunities for model evaluation, but also presents a challenge. Improved knowledge of data uncertainties is still necessary to move the field of ESM evaluation away from a "beauty contest" towards the development of useful constraints on model outcomes.
Abstract:
Understanding the sources of systematic errors in climate models is challenging because of coupled feedbacks and error compensation. The developing seamless approach proposes that identifying and correcting short-term climate model errors has the potential to improve the modeled climate on longer time scales. In previous studies, initialised atmospheric simulations of a few days have been used to compare fast physics processes (convection, cloud processes) among models. The present study explores how initialised seasonal-to-decadal hindcasts (re-forecasts) relate transient week-to-month errors of the ocean and atmospheric components to the coupled model's long-term pervasive SST errors. A protocol is designed to attribute the SST biases to their source processes. It includes five steps: (1) identify and describe biases in a coupled stabilized simulation, (2) determine the time scale over which the bias emerges and how it propagates, (3) find the geographical origin of the bias, (4) evaluate the degree of coupling in the development of the bias, and (5) find the field responsible for the bias. This strategy has been implemented with a set of experiments based on the initial adjustment of initialised simulations and exploring various degrees of coupling. In particular, hindcasts give the time scale over which biases emerge, regionally restored experiments show the geographical origin, and ocean-only simulations isolate the field responsible for the bias and evaluate the degree of coupling in its development. This strategy is applied to four prominent SST biases of the IPSLCM5A-LR coupled model in the tropical Pacific, which are largely shared by other coupled models, including the Southeast Pacific warm bias and the equatorial cold tongue bias. Using the proposed protocol, we demonstrate that the East Pacific warm bias appears within a few months and is caused by a lack of upwelling due to too-weak meridional coastal winds off Peru. The cold equatorial bias, which surprisingly takes 30 years to develop, results from equatorward advection of midlatitude cold SST errors. Despite large development efforts, the current generation of coupled models shows little improvement. The strategy proposed in this study is a further step toward moving from the current ad hoc approach to a bias-targeted, priority-setting, systematic approach to model development.
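A minimal sketch of how step (2) of such a protocol might be diagnosed in practice, assuming hindcast SST fields stacked by lead time and a reference climatology (names, threshold and data purely illustrative, not the study's code):

```python
import numpy as np

def bias_emergence_lead(hindcast_sst, obs_sst, frac=0.8):
    """Lead (index) at which the regional-mean SST bias first reaches
    `frac` of its value at the longest lead.

    hindcast_sst: array (n_leads, ...) of regional SST from initialised
                  hindcasts, averaged over start dates.
    obs_sst:      matching observed/reference SST climatology (broadcastable).
    """
    bias = np.asarray(hindcast_sst) - np.asarray(obs_sst)
    bias = bias.reshape(bias.shape[0], -1).mean(axis=1)   # regional mean per lead
    target = frac * bias[-1]
    reached = np.abs(bias) >= np.abs(target)
    return int(np.argmax(reached)) if reached.any() else None

# Illustration: a synthetic warm bias that saturates after roughly 6 leads (months).
leads = np.arange(24)
fake_bias = 2.0 * (1 - np.exp(-leads / 3.0))               # degC
print(bias_emergence_lead(fake_bias[:, None] + 20.0, 20.0))  # -> 5
```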
Abstract:
Global syntheses of palaeoenvironmental data are required to test climate models under conditions different from the present. Data sets for this purpose contain data from spatially extensive networks of sites. The data are either directly comparable to model output or readily interpretable in terms of modelled climate variables. Data sets must contain sufficient documentation to distinguish between raw (primary) and interpreted (secondary, tertiary) data, to evaluate the assumptions involved in interpretation of the data, to exercise quality control, and to select data appropriate for specific goals. Four data bases for the Late Quaternary, documenting changes in lake levels since 30 kyr BP (the Global Lake Status Data Base), vegetation distribution at 18 kyr and 6 kyr BP (BIOME 6000), aeolian accumulation rates during the last glacial-interglacial cycle (DIRTMAP), and tropical terrestrial climates at the Last Glacial Maximum (the LGM Tropical Terrestrial Data Synthesis) are summarised. Each has been used to evaluate simulations of Last Glacial Maximum (LGM: 21 calendar kyr BP) and/or mid-Holocene (6 cal. kyr BP) environments. Comparisons have demonstrated that changes in radiative forcing and orography due to orbital and ice-sheet variations explain the first-order, broad-scale (in space and time) features of global climate change since the LGM. However, atmospheric models forced by 6 cal. kyr BP orbital changes with unchanged surface conditions fail to capture quantitative aspects of the observed climate, including the greatly increased magnitude and northward shift of the African monsoon during the early to mid-Holocene. Similarly, comparisons with palaeoenvironmental datasets show that atmospheric models have underestimated the magnitude of cooling and drying of much of the land surface at the LGM. The inclusion of feedbacks due to changes in ocean- and land-surface conditions at both times, and atmospheric dust loading at the LGM, appears to be required in order to produce a better simulation of these past climates. The development of Earth system models incorporating the dynamic interactions among ocean, atmosphere, and vegetation is therefore mandated by Quaternary science results as well as climatological principles. For greatest scientific benefit, this development must be paralleled by continued advances in palaeodata analysis and synthesis, which in turn will help to define questions that call for new focused data collection efforts.
Abstract:
Performance modelling is a useful tool in the lifecycle of high-performance scientific software, such as weather and climate models, especially as a means of ensuring efficient use of available computing resources. In particular, sufficiently accurate performance prediction could reduce the effort and experimental computer time required when porting and optimising a climate model to a new machine. In this paper, traditional techniques are used to predict the computation time of a simple shallow water model which is illustrative of the computation (and communication) involved in climate models. These models are compared with real execution data gathered on AMD Opteron-based systems, including several phases of the U.K. academic community HPC resource, HECToR. Some success is achieved in relating source code to achieved performance for the K10 series of Opterons, but the method is found to be inadequate for the next-generation Interlagos processor. This experience motivates the investigation of a data-driven application benchmarking approach to performance modelling. Results for an early version of the approach are presented using the shallow water model as an example.
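The traditional analytic techniques referred to above are not reproduced here; as a hedged sketch only, such models typically predict kernel time as a sum of compute and memory-traffic costs, with all counts and rates below assumed for illustration:

```python
def predict_kernel_time(n_flops, n_bytes, flop_rate, mem_bandwidth):
    """Simple 'sum of costs' analytic model: kernel time is approximated
    by compute time plus main-memory traffic time.

    n_flops       : floating-point operations per call
    n_bytes       : bytes moved to/from main memory per call
    flop_rate     : sustained FLOP/s of the core (not peak)
    mem_bandwidth : sustained memory bandwidth in bytes/s
    """
    return n_flops / flop_rate + n_bytes / mem_bandwidth

# Illustration with made-up numbers for one shallow-water stencil sweep
# on a hypothetical Opteron core.
grid_points = 512 * 512
t = predict_kernel_time(
    n_flops=14 * grid_points,        # ~14 flops per grid point (assumed)
    n_bytes=9 * 8 * grid_points,     # 9 double-precision loads/stores per point (assumed)
    flop_rate=2.0e9,                 # 2 GFLOP/s sustained (assumed)
    mem_bandwidth=5.0e9,             # 5 GB/s sustained (assumed)
)
print(f"predicted time per sweep: {t * 1e3:.2f} ms")
```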
Abstract:
There are three key components for developing a metadata system: a container structure laying out the key semantic issues of interest and their relationships; an extensible controlled vocabulary providing possible content; and tools to create and manipulate that content. While metadata systems must allow users to enter their own information, the use of a controlled vocabulary both imposes consistency of definition and ensures comparability of the objects described. Here we describe the controlled vocabulary (CV) and metadata creation tool built by the METAFOR project for describing the climate models, simulations and experiments of the fifth Coupled Model Intercomparison Project (CMIP5). The CV and resulting tool chain introduced here are designed for extensibility and reuse and should find applicability in many more projects.
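The METAFOR CV and tool chain themselves are not reproduced here; the fragment below is only a schematic illustration of validating user-supplied metadata against a controlled vocabulary, with every field name and term invented for the example:

```python
# Schematic only: field names and CV terms are invented, not METAFOR's.
CONTROLLED_VOCAB = {
    "model_component": {"atmosphere", "ocean", "land_surface", "sea_ice"},
    "experiment": {"historical", "piControl", "rcp45", "rcp85"},
}

def validate_metadata(record):
    """Return a list of (field, value) pairs that are not in the controlled vocabulary."""
    errors = []
    for field, allowed in CONTROLLED_VOCAB.items():
        value = record.get(field)
        if value is not None and value not in allowed:
            errors.append((field, value))
    return errors

record = {"model_component": "atmosphere", "experiment": "rcp60"}
print(validate_metadata(record))   # [('experiment', 'rcp60')]
```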
Abstract:
This large-scale study examined the development of time-based prospective memory (PM) across childhood and the roles that working memory updating and time monitoring play in driving age effects in PM performance. One hundred and ninety-seven children aged 5 to 14 years completed a time-based PM task where working memory updating load was manipulated within individuals using a dual task design. Results revealed age-related increases in PM performance across childhood. Working memory updating load had a negative impact on PM performance and monitoring behavior in older children, but this effect was smaller in younger children. Moreover, the frequency as well as the pattern of time monitoring predicted children’s PM performance. Our interpretation of these results is that processes involved in children’s PM may show a qualitative shift over development from simple, nonstrategic monitoring behavior to more strategic monitoring based on internal temporal models that rely specifically on working memory updating resources. We discuss this interpretation with regard to possible trade-off effects in younger children as well as alternative accounts.
Abstract:
We discuss substorm observations made near 2100 magnetic local time (MLT) on March 7, 1991, in a collaborative study involving data from the European Incoherent Scatter radar, all-sky camera data, and magnetometer data from the Tromsø Auroral Observatory, the U.K. Sub-Auroral Magnetometer Network (SAMNET) and the IMAGE magnetometer chain. We conclude that for the substorm studied a plasmoid was not pinched off until at least 10 min after onset at the local time of the observations (2100 MLT) and that the main substorm electrojet expanded westward over this local time 14 min after onset. In the late growth phase/early expansion phase, we observed southward drifting arcs probably moving faster than the background plasma. Similar southward moving arcs in the recovery phase moved at a speed which does not appear to be significantly different from the measured plasma flow speed. We discuss these data in terms of the “Kiruna conjecture” and classical “near-Earth neutral line” paradigms, since the data show features of both models of substorm development. We suggest that longitudinal variation in behavior may reconcile the differences between the two models in the case of this substorm.
Abstract:
High-resolution simulations over a large tropical domain (∼20°S–20°N and 42°E–180°E) using both explicit and parameterized convection are analyzed and compared during a 10-day case study of an active Madden-Julian Oscillation (MJO) event. In Part II, the moisture budgets and moist entropy budgets are analyzed. Vertical subgrid diabatic heating profiles and vertical velocity profiles are also compared; these are related to the horizontal and vertical advective components of the moist entropy budget, which contribute to the gross moist stability (GMS) and normalized GMS (NGMS). The 4-km model with explicit convection and good MJO performance has a vertical heating structure that increases with height in the lower troposphere in regions of strong convection (like observations), whereas the 12-km model with parameterized convection and a poor MJO does not show this relationship. The 4-km explicit convection model also has a more top-heavy heating profile for the troposphere as a whole near and to the west of the active MJO-related convection, unlike the 12-km parameterized convection model. The dependence of entropy advection components on moisture convergence is fairly weak in all models, and differences between models are not always related to MJO performance, making comparisons to previous work somewhat inconclusive. However, models with relatively good MJO strength and propagation have a slightly larger increase of the vertical advective component with increasing moisture convergence, and their NGMS vertical terms have more variability in time and longitude, with total NGMS that is comparatively larger to the west and smaller to the east.
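The abstract does not state the exact NGMS definition used; one common normalized form in the literature (an assumption here, and sign conventions vary between studies) relates the vertically integrated moist entropy export to the moisture convergence:

```latex
\Gamma \;=\; \frac{T_R \,\big\langle \nabla \cdot (s\,\mathbf{v}) \big\rangle}
                  {-\,L_v \,\big\langle \nabla \cdot (q\,\mathbf{v}) \big\rangle},
\qquad
\big\langle \,\cdot\, \big\rangle \;=\; \frac{1}{g}\int_{p_t}^{p_s} (\,\cdot\,)\,\mathrm{d}p
```

where s is the specific moist entropy, q the specific humidity, \mathbf{v} the horizontal wind, L_v the latent heat of vaporization, T_R a reference temperature that makes \Gamma dimensionless, and \langle\cdot\rangle a mass-weighted vertical integral from the surface pressure p_s to the model top p_t; the denominator is positive where moisture converges.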
Abstract:
This paper investigates the challenge of representing structural differences in river channel cross-section geometry for regional to global scale river hydraulic models and the effect this can have on simulations of wave dynamics. Classically, channel geometry is defined using data, yet at larger scales the necessary information and model structures do not exist to take this approach. We therefore propose a fundamentally different approach in which the structural uncertainty in channel geometry is represented using a simple parameterization, which could then be estimated through calibration or data assimilation. This paper first outlines the development of a computationally efficient numerical scheme to represent generalised channel shapes using a single parameter, which is then validated using a simple straight channel test case and shown to predict wetted perimeter to within 2% for the channels tested. An application to the River Severn, UK is also presented, along with an analysis of model sensitivity to channel shape, depth and friction. The channel shape parameter was shown to improve model simulations of river level, particularly for more physically plausible channel roughness and depth parameter ranges. Calibrating the channel Manning's coefficient in a rectangular channel provided similar water level simulation accuracy, in terms of Nash-Sutcliffe efficiency, to a model where friction and shape or depth were calibrated. However, the calibrated Manning coefficient in the rectangular channel model was ~2/3 greater than the likely physically realistic value for this reach, and this erroneously delayed wave propagation through the reach by several hours. Therefore, for large scale models applied in data-sparse areas, calibrating channel depth and/or shape may be preferable to assuming a rectangular geometry and calibrating friction alone.
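The paper's single-parameter scheme is not reproduced here; a minimal sketch, assuming an illustrative power-law cross-section in which the bed follows z(x) = D(2|x|/W)^s (so large s approaches a rectangle, s = 1 a triangle and s = 2 a parabola), shows how wetted perimeter can be computed for a given shape parameter:

```python
import numpy as np

def wetted_perimeter(depth, width, bankfull_depth, s, n=2000):
    """Wetted perimeter of a symmetric power-law channel whose bed follows
    z(x) = bankfull_depth * (2|x|/width)**s (illustrative assumption only;
    not necessarily the parameterization used in the paper)."""
    # half-width of the water surface at the given flow depth
    half_top = 0.5 * width * (depth / bankfull_depth) ** (1.0 / s)
    x = np.linspace(0.0, half_top, n)
    z = bankfull_depth * (2.0 * x / width) ** s
    # sum bed segment lengths on one side, then double for both banks
    seg = np.sqrt(np.diff(x) ** 2 + np.diff(z) ** 2)
    return 2.0 * seg.sum()

# Compare a near-rectangular (large s) and a parabolic (s = 2) channel,
# 50 m wide and 5 m deep at bankfull, flowing 3 m deep.
for s in (20.0, 2.0):
    print(s, round(wetted_perimeter(depth=3.0, width=50.0, bankfull_depth=5.0, s=s), 1))
```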
Abstract:
This paper draws on a study of the politics of development planning in London’s South Bank to examine wider trends in the governance of contemporary cities. It assesses the impacts and outcomes of so-called new localist reforms and argues that we are witnessing two principal trends. First, governance processes are increasingly dominated by anti-democratic development machines, characterized by new assemblages of public- and private-sector experts. These machines reflect and reproduce a type of development politics in which there is a greater emphasis on a pragmatic realism and a politics of delivery. Second, the presence of these machines is having a significant impact on the politics of planning. Democratic engagement is not seen as the basis for new forms of localism and community control. Instead, it is presented as a potentially disruptive force that needs to be managed by a new breed of skilled private-sector consultant. The paper examines these wider shifts in urban politics before focusing on the connections between emerging development machines and local residential and business communities. It ends by highlighting some of the wider implications of change for democratic modes of engagement and nodes of resistance in urban politics.
Abstract:
The present study examines three competing models of morphosyntactic transfer in third language (L3) acquisition, focusing on the particular domain of the feature configuration of embedded T in L3 Brazilian Portuguese (BP) at the initial stages and then through development. The methodology alternates Spanish and English as the L1 and L2 to tease apart the source of transfer to L3 BP. Results from a scalar grammaticality acceptability task show unequivocal transfer of Spanish irrespective of Spanish’s status as an L1 or L2. The data thus support the Typological Primacy Model (Rothman 2010, 2011, 2013a, 2013b), which proposes that multilingual transfer is selected by factors related to comparative structural similarity. Given that Spanish transfer at the L3 initial stages creates the need for feature reconfiguration to converge on the target BP grammar, the second part of this chapter examines the developmental consequences of what the TPM models in cases of non-facilitative initial transfer, that is, the developmental path of feature reconfiguration of embedded T in L3 BP by English/Spanish bilinguals. Given what these data reveal, we address the role of regressive transfer as a correlate of L3 proficiency gains.
Abstract:
In 2007, FTO was identified as the first genome-wide association study (GWAS) gene associated with obesity in humans. Since then, various animal models have served to establish the mechanistic basis behind this association. Many earlier studies focussed on FTO’s effects on food intake via central mechanisms. Emerging evidence, however, implicates adipose tissue development and function in the causal relationship between perturbations in FTO expression and obesity. The purpose of this mini review is to shed light on these new studies of FTO function in adipose tissue and present a clearer picture of its impact on obesity susceptibility.
Abstract:
Human body thermoregulation models have been widely used in human physiology and thermal comfort studies. However, there are few studies on evaluation methods for these models. This paper summarises the existing evaluation methods and critically analyses their flaws. On that basis, a method for evaluating the accuracy of human body thermoregulation models is proposed. The new evaluation method contributes to the development of human body thermoregulation models and validates their accuracy both statistically and empirically, and it allows the accuracy of different models to be compared. Furthermore, the new method is not only suitable for evaluating human body thermoregulation models, but can in principle also be applied to evaluating the accuracy of population-based models in other research fields.
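The proposed evaluation method is not spelled out in the abstract; purely as an illustrative assumption (not the paper's method), a basic statistical accuracy check comparing simulated and measured temperatures might be sketched as:

```python
import numpy as np

def accuracy_metrics(simulated, measured):
    """Mean bias and root-mean-square error between simulated and measured
    temperatures (e.g. core or mean skin temperature, degC)."""
    simulated = np.asarray(simulated, float)
    measured = np.asarray(measured, float)
    err = simulated - measured
    return {"bias": err.mean(), "rmse": np.sqrt((err ** 2).mean())}

# Illustration with synthetic time series.
measured = 36.8 + 0.3 * np.sin(np.linspace(0, 6, 100))
simulated = measured + np.random.default_rng(2).normal(0.1, 0.05, 100)
print(accuracy_metrics(simulated, measured))
```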