Abstract:
Earth system models are increasing in complexity and incorporating more processes than their predecessors, making them important tools for studying the global carbon cycle. However, their coupled behaviour has only recently been examined in any detail, and has yielded a very wide range of outcomes: coupled climate-carbon cycle models that represent land-use change simulate total land carbon stores by 2100 that vary by as much as 600 Pg C under the same emissions scenario. This large uncertainty is associated with differences in how key processes are simulated in different models, and illustrates the necessity of determining which models are most realistic using rigorous model evaluation methodologies. Here we assess the state of the art in the evaluation of Earth system models, with a particular emphasis on the simulation of the carbon cycle and associated biospheric processes. We examine some of the new advances and remaining uncertainties relating to (i) modern and palaeo data and (ii) metrics for evaluation, and discuss a range of strategies, such as the inclusion of pre-calibration, combined process- and system-level evaluation, and the use of emergent constraints, that can contribute towards the development of more robust evaluation schemes. An increasingly data-rich environment offers more opportunities for model evaluation, but it also presents a challenge: more knowledge about data uncertainties is required to establish robust evaluation methodologies that move the field of ESM evaluation from a "beauty contest" towards the development of useful constraints on model behaviour.
Abstract:
The Richards equation has been widely used for simulating soil water movement. However, the take-up of agro-hydrological models using the basic theory of soil water flow for optimizing irrigation, fertilizer and pesticide practices is still low. This is partly due to the difficulty of obtaining accurate values for soil hydraulic properties at a field scale. Here, we use an inverse technique based on a robust micro genetic algorithm to deduce the effective soil hydraulic properties from measured changes in the distribution of soil water with depth in a fallow field, subject to natural rainfall and evaporation over a long period. A new objective function was constructed from the soil water contents at different depths and the soil water content at field capacity. The deduced soil water retention curve was approximately parallel to, but higher than, that derived from published pedo-transfer functions for a given soil pressure head. The water contents calculated from the deduced soil hydraulic properties were in good agreement with the measured values. The reliability of the deduced soil hydraulic properties was tested by reproducing data measured in an independent experiment on the same soil cropped with leek. The calculation of root water uptake took account of both soil water potential and root density distribution. Results show that the predicted soil water contents at various depths agree well with the measurements, indicating that the inverse analysis is an effective and reliable approach for estimating soil hydraulic properties, and thus permits accurate simulation of soil water dynamics in both cropped and fallow field soils.
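A minimal sketch of the inverse step described above: a micro genetic algorithm (deliberately tiny population, elitist restart on convergence) recovering soil water retention parameters from water-content data. The van Genuchten retention model, the parameter bounds, the synthetic "measurements" and all settings below are illustrative assumptions, not the study's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Water content at pressure head h (m) for a van Genuchten curve."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * np.abs(h)) ** n) ** m

# Synthetic "measured" retention data standing in for the field profiles.
h_obs = -np.logspace(-2, 1.5, 12)                    # pressure heads (m)
true_p = np.array([0.08, 0.43, 3.6, 1.56])           # theta_r, theta_s, alpha, n
theta_obs = van_genuchten(h_obs, *true_p) + rng.normal(0.0, 0.005, h_obs.size)

lo = np.array([0.0, 0.30, 0.5, 1.1])                 # lower parameter bounds
hi = np.array([0.15, 0.50, 10.0, 3.0])               # upper parameter bounds

def misfit(p):
    return np.mean((van_genuchten(h_obs, *p) - theta_obs) ** 2)

# Micro-GA: a very small population with elitism, reseeded around the best
# individual whenever it converges (the defining micro-GA trait).
pop = rng.uniform(lo, hi, size=(5, 4))
for gen in range(400):
    fit = np.array([misfit(p) for p in pop])
    elite = pop[fit.argmin()].copy()
    if pop.std(axis=0).max() < 1e-3:                 # population has collapsed
        pop = rng.uniform(lo, hi, size=(5, 4))       # restart
        pop[0] = elite                               # keep the elite
        continue
    new = [elite]                                    # elitism
    for _ in range(4):
        a, b = pop[rng.choice(5, 2, replace=False)]  # pick two parents
        mask = rng.random(4) < 0.5                   # uniform crossover
        new.append(np.where(mask, a, b))
    pop = np.clip(np.array(new), lo, hi)

best = pop[np.argmin([misfit(p) for p in pop])]
print("recovered:", np.round(best, 3), " true:", true_p)
```

The restart-from-elite step is what distinguishes a micro-GA from a conventional GA: the population is kept small to limit the number of expensive forward-model evaluations, and diversity is restored by reseeding rather than by mutation.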
Abstract:
This paper presents a preliminary assessment of the relative effects of rate of climate change (four Representative Concentration Pathways - RCPs), assumed future population (five Shared Socio-economic Pathways - SSPs), and pattern of climate change (19 CMIP5 climate models) on regional and global exposure to water resources stress and river flooding. Uncertainty in projected future impacts of climate change on exposure to water stress and river flooding is dominated by uncertainty in the projected spatial and seasonal pattern of change in climate. There is little clear difference in impact between RCP2.6, RCP4.5 and RCP6.0 in 2050, or between RCP4.5 and RCP6.0 in 2080. Impacts under RCP8.5 are greater than under the other RCPs in both 2050 and 2080. For a given RCP, there is a difference in the absolute numbers of people exposed to increased water resources stress or increased river flood frequency between the five SSPs. With the ‘middle-of-the-road’ SSP2, climate change by 2050 would increase exposure to water resources stress for between approximately 920 and 3400 million people under the highest RCP, and increase exposure to river flood risk for between 100 and 580 million people. Under RCP2.6, exposure to increased water scarcity would be reduced in 2050 by 22-24% compared with impacts under RCP8.5, and exposure to increased flood frequency would be reduced by around 16%. The implications of climate change for actual future losses and adaptation depend not only on the numbers of people exposed to changes in risk, but also on the qualitative characteristics of future worlds as described in the different SSPs. The difference in ‘actual’ impact between SSPs will therefore be greater than the differences in numbers of people exposed to impact.
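The exposure numbers above reduce to a grid-cell aggregation: for each combination of SSP population and climate-model change pattern, sum the people living in cells where the stress indicator worsens. A toy sketch with invented arrays and a simple runoff-reduction threshold standing in for the study's water-stress indicator:

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells = 1000                                   # toy grid cells

# Illustrative per-cell data: baseline runoff (mm) and SSP populations.
runoff_base = rng.gamma(2.0, 150.0, n_cells)
pop = {ssp: rng.gamma(2.0, 5e4, n_cells) for ssp in ["SSP1", "SSP2", "SSP3"]}
gcm_change = {gcm: rng.normal(0.0, 0.15, n_cells) for gcm in ["GCM-A", "GCM-B"]}

def exposed(runoff_now, runoff_fut, population, threshold=0.9):
    """People in cells where future runoff drops below `threshold` x baseline
    (a stand-in for a per-capita water-stress indicator)."""
    return population[runoff_fut < threshold * runoff_now].sum()

# One exposure estimate per (SSP, climate-model pattern); the spread across
# patterns is what dominates the uncertainty range quoted in the abstract.
for ssp, p in pop.items():
    counts = [exposed(runoff_base, runoff_base * (1 + d), p)
              for d in gcm_change.values()]
    print(ssp, f"{min(counts):.3g} - {max(counts):.3g} people exposed")
```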
Abstract:
Diagnosing the climate of New Zealand from low-resolution General Circulation Models (GCMs) is notoriously difficult due to the interaction of the complex topography and the Southern Hemisphere (SH) mid-latitude westerly winds. Therefore, methods of downscaling synoptic-scale model data for New Zealand are useful to help understand past climate. New Zealand also has a wealth of palaeoclimate-proxy data with which the downscaled model output can be compared, providing a qualitative method of assessing the capability of GCMs to represent, in this case, the climate 6000 yr ago in the Mid-Holocene. In this paper, a synoptic weather and climate regime classification system using Empirical Orthogonal Function (EOF) analysis of GCM and reanalysis data was used. The climate regimes are associated with surface air temperature and precipitation anomalies over New Zealand. From the analysis in this study, we find that at 6000 yr BP increased trough activity in summer and autumn led to increased precipitation, while an increased north-south pressure gradient ("zonal events") in winter and spring led to drier conditions. Opposing effects of increased (decreased) temperature are also seen in spring (autumn) in the South Island, associated with the increased zonal (trough) events; however, the circulation-induced changes in temperature are likely to have been of secondary importance to the insolation-induced changes. Evidence from the palaeoclimate-proxy data suggests that the Mid-Holocene was characterized by increased westerly wind events in New Zealand, which agrees with the preference for trough and zonal regimes in the models.
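A minimal sketch of the regime-classification chain described above: EOFs of daily pressure-anomaly fields obtained via an SVD, followed by k-means clustering of the leading principal-component scores into synoptic regimes whose occurrence frequencies can then be compared between a palaeo run and a control. The grid size, the two retained EOFs and the four regimes are illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)
n_days, n_grid = 500, 480                    # toy daily MSLP fields, flattened

x = rng.normal(0.0, 1.0, (n_days, n_grid))   # stand-in anomaly fields
x -= x.mean(axis=0)                          # anomalies about the time mean

# EOF analysis via SVD: rows of vt are the spatial patterns (EOFs),
# u * s gives the daily principal-component amplitudes.
u, s, vt = np.linalg.svd(x, full_matrices=False)
pcs = u[:, :2] * s[:2]                       # leading two PC series

# k-means on the PC scores assigns each day to a circulation regime.
k = 4
cent = pcs[rng.choice(n_days, k, replace=False)].copy()
for _ in range(50):
    labels = ((pcs[:, None, :] - cent) ** 2).sum(-1).argmin(axis=1)
    for j in range(k):
        if np.any(labels == j):              # guard against empty clusters
            cent[j] = pcs[labels == j].mean(axis=0)

# Regime occurrence frequencies: the quantity compared between palaeo and
# control simulations (e.g. more "trough" days in summer at 6000 yr BP).
print(np.bincount(labels, minlength=k) / n_days)
```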
Abstract:
This paper presents a new method to calculate sky view factors (SVFs) from high-resolution urban digital elevation models using a shadow-casting algorithm. By utilizing weighted annuli to derive SVF from hemispherical images, the positions of the distant light sources can be predefined and spread uniformly over the whole hemisphere, whereas an alternative method applies a random set of light source positions with a cosine-weighted distribution of sun altitude angles. The two methods give similar results based on a large number of SVF images. However, when comparing variations at pixel level between an image generated using the new method presented in this paper and an image from the random method, anisotropic patterns occur. The mean absolute difference between the two methods is 0.002, ranging up to 0.040, and the maximum difference can be as much as 0.122. Since SVF is a geometrically derived parameter, the anisotropic errors created by the random method must be considered significant.
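The weighted-annuli calculation can be sketched as follows: estimate a horizon angle for each predefined azimuth from the DEM (standing in for the full shadow-casting step), then accumulate annulus visibility fractions weighted by each band's share of the cosine-weighted hemisphere. The toy DEM and the azimuth and annulus counts below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy digital elevation model (m) and the pixel we evaluate.
dem = rng.uniform(0.0, 20.0, (101, 101))
cell = 1.0                                        # grid spacing (m)
r0, c0 = 50, 50
z0 = dem[r0, c0]

# Horizon (obstruction) angle in each of n_az predefined azimuths: the
# steepest elevation angle to any DEM cell along that direction.
n_az = 64
horizon = np.zeros(n_az)
for j, az in enumerate(np.linspace(0.0, 2.0 * np.pi, n_az, endpoint=False)):
    for d in range(1, 50):
        r = int(round(r0 + d * np.cos(az)))
        c = int(round(c0 + d * np.sin(az)))
        if 0 <= r < 101 and 0 <= c < 101:
            ang = np.arctan2(dem[r, c] - z0, d * cell)
            horizon[j] = max(horizon[j], ang)

# Weighted annuli: split the hemisphere into zenith-angle bands; a band
# between zenith angles t0 and t1 contributes sin^2(t1) - sin^2(t0) of the
# total SVF when fully visible (from integrating cos(t) sin(t) dt).
edges = np.linspace(0.0, np.pi / 2.0, 19)         # 18 annuli
svf = 0.0
for t0, t1 in zip(edges[:-1], edges[1:]):
    weight = np.sin(t1) ** 2 - np.sin(t0) ** 2
    mid = 0.5 * (t0 + t1)
    # Fraction of azimuths in which this annulus is unobstructed: the sky is
    # visible where the annulus midpoint lies above the local horizon.
    visible = np.mean(mid < (np.pi / 2.0 - horizon))
    svf += weight * visible

print(f"SVF at centre pixel: {svf:.3f}")          # 1.0 = fully open sky
```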
Abstract:
Environmental change research often relies on simplistic, static models of human behaviour in social-ecological systems. This limits understanding of how social-ecological change occurs. Integrative, process-based behavioural models, which include feedbacks between action and the structures and dynamics of social and ecological systems, can inform dynamic policy assessment in which decision making is internalised in the model. These models focus on dynamics rather than states. They stimulate new questions and foster interdisciplinarity between and within the natural and social sciences.
Abstract:
Earth system models (ESMs) are increasing in complexity by incorporating more processes than their predecessors, making them potentially important tools for studying the evolution of climate and associated biogeochemical cycles. However, their coupled behaviour has only recently been examined in any detail, and has yielded a very wide range of outcomes. For example, coupled climate–carbon cycle models that represent land-use change simulate total land carbon stores at 2100 that vary by as much as 600 Pg C, given the same emissions scenario. This large uncertainty is associated with differences in how key processes are simulated in different models, and illustrates the necessity of determining which models are most realistic using rigorous methods of model evaluation. Here we assess the state-of-the-art in evaluation of ESMs, with a particular emphasis on the simulation of the carbon cycle and associated biospheric processes. We examine some of the new advances and remaining uncertainties relating to (i) modern and palaeodata and (ii) metrics for evaluation. We note that the practice of averaging results from many models is unreliable and no substitute for proper evaluation of individual models. We discuss a range of strategies, such as the inclusion of pre-calibration, combined process- and system-level evaluation, and the use of emergent constraints, that can contribute to the development of more robust evaluation schemes. An increasingly data-rich environment offers more opportunities for model evaluation, but also presents a challenge. Improved knowledge of data uncertainties is still necessary to move the field of ESM evaluation away from a "beauty contest" towards the development of useful constraints on model outcomes.
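One of the strategies mentioned above, the emergent constraint, reduces to an across-ensemble regression between an observable quantity and a projected quantity, with the real-world observation then constraining the projection. A sketch with invented numbers; the observable, its assumed observed value and the ensemble values are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical ensemble: one (observable, projection) pair per ESM, e.g. a
# present-day carbon-cycle sensitivity (x) against projected land carbon
# change at 2100 (y). All values are invented for illustration.
n_models = 15
x = rng.normal(50.0, 15.0, n_models)            # observable quantity
y = 2.0 * x + rng.normal(0.0, 10.0, n_models)   # projection (Pg C)

# The emergent relationship: a least-squares line across the ensemble.
slope, intercept = np.polyfit(x, y, 1)

# An observation of x (with uncertainty) then constrains y. For brevity the
# residual scatter about the regression line is neglected here; a full
# treatment would add it to the constrained spread.
x_obs, x_obs_sd = 45.0, 5.0
draws = slope * rng.normal(x_obs, x_obs_sd, 100_000) + intercept

print(f"raw ensemble: {y.mean():.0f} +/- {y.std():.0f} Pg C")
print(f"constrained:  {draws.mean():.0f} +/- {draws.std():.0f} Pg C")
```

Note how the constrained estimate weights models by the emergent relationship rather than averaging them, consistent with the abstract's caution that averaging across many models is no substitute for evaluating individual ones.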
Abstract:
Global syntheses of palaeoenvironmental data are required to test climate models under conditions different from the present. Data sets for this purpose contain data from spatially extensive networks of sites. The data are either directly comparable to model output or readily interpretable in terms of modelled climate variables. Data sets must contain sufficient documentation to distinguish between raw (primary) and interpreted (secondary, tertiary) data, to evaluate the assumptions involved in interpretation of the data, to exercise quality control, and to select data appropriate for specific goals. Four data bases for the Late Quaternary, documenting changes in lake levels since 30 kyr BP (the Global Lake Status Data Base), vegetation distribution at 18 kyr and 6 kyr BP (BIOME 6000), aeolian accumulation rates during the last glacial-interglacial cycle (DIRTMAP), and tropical terrestrial climates at the Last Glacial Maximum (the LGM Tropical Terrestrial Data Synthesis) are summarised. Each has been used to evaluate simulations of Last Glacial Maximum (LGM: 21 calendar kyr BP) and/or mid-Holocene (6 cal. kyr BP) environments. Comparisons have demonstrated that changes in radiative forcing and orography due to orbital and ice-sheet variations explain the first-order, broad-scale (in space and time) features of global climate change since the LGM. However, atmospheric models forced by 6 cal. kyr BP orbital changes with unchanged surface conditions fail to capture quantitative aspects of the observed climate, including the greatly increased magnitude and northward shift of the African monsoon during the early to mid-Holocene. Similarly, comparisons with palaeoenvironmental datasets show that atmospheric models have underestimated the magnitude of cooling and drying of much of the land surface at the LGM. The inclusion of feedbacks due to changes in ocean- and land-surface conditions at both times, and atmospheric dust loading at the LGM, appears to be required in order to produce a better simulation of these past climates. The development of Earth system models incorporating the dynamic interactions among ocean, atmosphere, and vegetation is therefore mandated by Quaternary science results as well as climatological principles. For greatest scientific benefit, this development must be paralleled by continued advances in palaeodata analysis and synthesis, which in turn will help to define questions that call for new focused data collection efforts.
Abstract:
We propose a new class of neurofuzzy construction algorithms that aim to maximize generalization capability for imbalanced data classification problems, based on leave-one-out (LOO) cross-validation. The algorithms operate in two stages: first, an initial rule base is constructed by estimating a Gaussian mixture model with analysis-of-variance decomposition from the input data; second, joint weighted least-squares parameter estimation and rule selection are carried out using an orthogonal forward subspace selection (OFSS) procedure. We show how different LOO-based rule selection criteria can be incorporated into OFSS, and advocate maximizing either the leave-one-out area under the receiver operating characteristic curve (AUC) or, if the data set exhibits an imbalanced class distribution, the leave-one-out F-measure. Extensive comparative simulations illustrate the effectiveness of the proposed algorithms.
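The LOO criteria are cheap here because leave-one-out predictions of a least-squares model follow analytically from the hat matrix, so no refitting is needed during forward selection. A simplified stand-in for the OFSS procedure, using Gaussian basis functions as "rules", a rank-statistic AUC and invented imbalanced data:

```python
import numpy as np

rng = np.random.default_rng(5)

# Imbalanced two-class toy data: roughly 5% positives.
n = 400
y = (rng.random(n) < 0.05).astype(float)
x = rng.normal(0.0, 1.0, (n, 2)) + y[:, None] * 1.5

def rbf(x, centres, width=1.0):
    """Gaussian basis functions centred on candidate 'rules'."""
    d2 = ((x[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def loo_scores(phi, y, lam=1e-3):
    """Analytic LOO predictions (f_i - h_ii * y_i) / (1 - h_ii); exact for
    ordinary least squares, with a tiny ridge term to keep A invertible."""
    a_inv = np.linalg.inv(phi.T @ phi + lam * np.eye(phi.shape[1]))
    f = phi @ (a_inv @ phi.T @ y)
    h = np.einsum('ij,jk,ik->i', phi, a_inv, phi)   # diag of the hat matrix
    return (f - h * y) / (1.0 - h)

def auc(scores, y):
    """Area under the ROC curve via the Mann-Whitney rank statistic."""
    r = scores.argsort().argsort() + 1.0            # 1-based ranks
    n_pos = int((y == 1).sum())
    n_neg = y.size - n_pos
    return (r[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Greedy forward selection of rule centres maximizing LOO-AUC.
candidates = x[rng.choice(n, 20, replace=False)]
chosen, best = [], -np.inf
while len(chosen) < 5:
    gains = [auc(loo_scores(rbf(x, candidates[chosen + [c]]), y), y)
             if c not in chosen else -np.inf for c in range(20)]
    c = int(np.argmax(gains))
    if gains[c] <= best:
        break                                       # no further improvement
    best, chosen = gains[c], chosen + [c]

print(f"selected {len(chosen)} rules, LOO-AUC = {best:.3f}")
```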
Abstract:
A series of inquiries and reports suggest considerable failings in the care provided to some patients in the NHS. Although the Bristol Inquiry report of 2001 led to the creation of many new regulatory bodies to supervise the NHS, they have never enjoyed consistent support from government, and the Mid Staffordshire Inquiry in 2013 suggests they made little difference. Why do some parts of the NHS disregard patients’ interests, and how should we respond to the challenge? The following discusses the evolution of approaches to NHS governance through the Hippocratic, Managerial and Commercial models, and assesses their risks and benefits. Apart from the ethical imperative, the need for effective governance is driven both by the growth in information available to the public and by the resources wasted by ineffective systems of care. Appropriate solutions depend on an understanding of the perverse incentives inherent in each model and on the need for greater sensitivity to the voices of patients and the public.
Abstract:
The UK new-build housing sector is facing dual pressures to expand supply whilst delivering against tougher planning and Building Regulation requirements, predominantly in the area of sustainability. The sector is currently responding by significantly scaling up production and incorporating new technical solutions into new homes. This trajectory of up-scaling and technical innovation has been of research interest, but that research has primarily focused on the ‘upstream’ implications for house builders’ business models and standardised design templates. There has been little attention, though, to the potential ‘downstream’ implications of the ramping up of supply and the introduction of new technologies for build quality and defects. This paper contributes to our understanding of the ‘downstream’ implications through a synthesis of the current UK defect literature with respect to new-build housing. It is found that the prevailing emphasis in the literature is limited to the responsibility, pathology and statistical analysis of defects (and failures). The literature does not extend to how house builders, individually and collectively, collect and learn from defects information in practice. The paper concludes by describing an ongoing collaborative research programme with the National House Building Council (NHBC) to: (a) understand house builders’ localised defects analysis procedures and their current knowledge feedback loops that inform risk management strategies; and (b) building on this understanding, design and test action research interventions to develop new data capture, learning processes and systems to reduce targeted defects.
Abstract:
Purpose – Multinationals have always needed an operating model that works – an effective plan for executing their most important activities at the right levels of their organization, whether globally, regionally or locally. The choices involved in these decisions have never been obvious, since international firms have consistently faced trade-offs between tailoring approaches for diverse local markets and leveraging their global scale. This paper seeks a more in-depth understanding of how successful firms manage the global-local trade-off in a multipolar world. Design/methodology/approach – This paper utilizes a case study approach based on in-depth senior executive interviews at several telecommunications companies, including Tata Communications. The interviews probed the operating models of the companies studied, focusing on their approaches to organization structure, management processes, management technologies (including information technology (IT)) and people/talent. Findings – Successful companies balance global-local trade-offs by taking a flexible and tailored approach toward their operating-model decisions. The paper finds that successful companies, including Tata Communications, which is profiled in depth, are breaking up the global-local conundrum into a set of more manageable strategic problems – what the authors call “pressure points” – which they identify by assessing their most important activities and capabilities and determining the global and local challenges associated with them. They then design a different operating-model solution for each pressure point, and repeat this process as new strategic developments emerge. By doing so they not only enhance their agility, but also continually calibrate the crucial balance between global efficiency and local responsiveness. Originality/value – This paper takes a unique approach to operating-model design, finding that an operating model is better viewed as several distinct solutions to specific “pressure points” rather than as a single, inflexible model that addresses all challenges equally. Now more than ever, developing the right operating model is at the top of multinational executives’ priorities and an area of increasing concern; the international business arena has changed drastically, requiring thoughtfulness and flexibility instead of standard formulas for operating internationally. Old adages like “think global and act local” no longer provide the universal guidance they once seemed to.
Abstract:
Many of the next generation of global climate models will include aerosol schemes which explicitly simulate the microphysical processes that determine the particle size distribution. These models enable aerosol optical properties and cloud condensation nuclei (CCN) concentrations to be determined by fundamental aerosol processes, which should lead to a more physically based simulation of aerosol direct and indirect radiative forcings. This study examines the global variation in particle size distribution simulated by 12 global aerosol microphysics models to quantify model diversity and to identify any common biases against observations. Evaluation against size distribution measurements from a new European network of aerosol supersites shows that the mean model agrees quite well with the observations at many sites on the annual mean, but there are some seasonal biases common to many sites. In particular, at many of these European sites, the accumulation mode number concentration is biased low during winter and Aitken mode concentrations tend to be overestimated in winter and underestimated in summer. At high northern latitudes, the models strongly underpredict Aitken and accumulation particle concentrations compared to the measurements, consistent with previous studies that have highlighted the poor performance of global aerosol models in the Arctic. In the marine boundary layer, the models capture the observed meridional variation in the size distribution, which is dominated by the Aitken mode at high latitudes, with an increasing concentration of accumulation particles with decreasing latitude. Considering vertical profiles, the models reproduce the observed peak in total particle concentrations in the upper troposphere due to new particle formation, although modelled peak concentrations tend to be biased high over Europe. Overall, the multi-model-mean data set simulates the global variation of the particle size distribution with a good degree of skill, suggesting that most of the individual global aerosol microphysics models are performing well, although the large model diversity indicates that some models are in poor agreement with the observations. Further work is required to better constrain size-resolved primary and secondary particle number sources, and an improved understanding of nucleation and growth (e.g. the role of nitrate and secondary organics) will improve the fidelity of simulated particle size distributions.
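The mode-level statements above are typically made via lognormal fits to the measured and modelled size distributions. A sketch with invented winter parameters, chosen so the "model" reproduces the reported bias pattern (Aitken mode too high, accumulation mode too low); all values are illustrative:

```python
import numpy as np

def dn_dlogd(d, n_tot, d_g, sigma_g):
    """Lognormal mode: number size distribution dN/dlog10(D)."""
    return (n_tot / (np.sqrt(2.0 * np.pi) * np.log10(sigma_g))
            * np.exp(-(np.log10(d / d_g) ** 2)
                     / (2.0 * np.log10(sigma_g) ** 2)))

d = np.logspace(0.5, 3.0, 400)          # diameter grid, ~3-1000 nm

# Illustrative winter parameters per mode:
# (N in cm^-3, geometric mean diameter in nm, geometric std dev).
modes_obs = [(1500.0, 40.0, 1.7), (900.0, 150.0, 1.5)]
modes_mod = [(2100.0, 40.0, 1.7), (500.0, 150.0, 1.5)]   # biases as in text

for name, modes in [("obs", modes_obs), ("model", modes_mod)]:
    dist = sum(dn_dlogd(d, *m) for m in modes)
    # Integrate dN/dlog10(D) over log10(D) to recover number concentrations
    # in the conventional Aitken (25-100 nm) and accumulation (>100 nm) ranges.
    dlog = np.diff(np.log10(d)).mean()
    aitken = dist[(d >= 25) & (d < 100)].sum() * dlog
    accum = dist[(d >= 100) & (d < 1000)].sum() * dlog
    print(f"{name:5s}: Aitken {aitken:7.0f}  accumulation {accum:7.0f} cm^-3")
```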
Abstract:
This paper evaluates the current status of global modeling of the organic aerosol (OA) in the troposphere and analyzes the differences between models as well as between models and observations. Thirty-one global chemistry transport models (CTMs) and general circulation models (GCMs) have participated in this intercomparison, in the framework of AeroCom phase II. The simulation of OA varies greatly between models in terms of the magnitude of primary emissions, secondary OA (SOA) formation, the number of OA species used (2 to 62), the complexity of OA parameterizations (gas-particle partitioning, chemical aging, multiphase chemistry, aerosol microphysics), and the OA physical, chemical and optical properties. The diversity of the global OA simulation results has increased since earlier AeroCom experiments, mainly due to the increasing complexity of the SOA parameterization in models, and the implementation of new, highly uncertain, OA sources. Diversity of over one order of magnitude exists in the modeled vertical distribution of OA concentrations that deserves a dedicated future study. Furthermore, although the OA / OC ratio depends on OA sources and atmospheric processing, and is important for model evaluation against OA and OC observations, it is resolved only by a few global models. The median global primary OA (POA) source strength is 56 Tg a−1 (range 34–144 Tg a−1) and the median SOA source strength (natural and anthropogenic) is 19 Tg a−1 (range 13–121 Tg a−1). Among the models that take into account the semi-volatile SOA nature, the median source is calculated to be 51 Tg a−1 (range 16–121 Tg a−1), much larger than the median value of the models that calculate SOA in a more simplistic way (19 Tg a−1; range 13–20 Tg a−1, with one model at 37 Tg a−1). The median atmospheric burden of OA is 1.4 Tg (24 models in the range of 0.6–2.0 Tg and 4 between 2.0 and 3.8 Tg), with a median OA lifetime of 5.4 days (range 3.8–9.6 days). In models that reported both OA and sulfate burdens, the median value of the OA/sulfate burden ratio is calculated to be 0.77; 13 models calculate a ratio lower than 1, and 9 models higher than 1. For 26 models that reported OA deposition fluxes, the median wet removal is 70 Tg a−1 (range 28–209 Tg a−1), which is on average 85% of the total OA deposition. Fine aerosol organic carbon (OC) and OA observations from continuous monitoring networks and individual field campaigns have been used for model evaluation. At urban locations, the model–observation comparison indicates missing knowledge on anthropogenic OA sources, both strength and seasonality. The combined model–measurements analysis suggests the existence of increased OA levels during summer due to biogenic SOA formation over large areas of the USA that can be of the same order of magnitude as the POA, even at urban locations, and contribute to the measured urban seasonal pattern. Global models are able to simulate the high secondary character of OA observed in the atmosphere as a result of SOA formation and POA aging, although the amount of OA present in the atmosphere remains largely underestimated, with a mean normalized bias (MNB) equal to −0.62 (−0.51) based on the comparison against OC (OA) urban data of all models at the surface, −0.15 (+0.51) when compared with remote measurements, and −0.30 for marine locations with OC data. 
The mean temporal correlations across all stations are low when compared with OC (OA) measurements: 0.47 (0.52) for urban stations, 0.39 (0.37) for remote stations, and 0.25 for marine stations with OC data. The combination of a high (negative) MNB and higher correlation at urban stations, compared with the low MNB and lower correlation at remote sites, suggests that knowledge about the processes that govern aerosol processing, transport and removal, in addition to the sources themselves, is important at the remote stations. There is no clear change in model skill with increasing model complexity with regard to OC or OA mass concentration. However, this complexity is needed in models in order to distinguish between anthropogenic and natural OA, as required for climate mitigation, and to calculate the impact of OA on climate accurately.
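The two headline metrics are simple to state exactly. A sketch assuming the usual conventions, where MNB = mean((model - obs) / obs), so that -0.62 means the model sits 62% below the observations on average, and the temporal correlation is the Pearson coefficient at each station; the station series below is invented:

```python
import numpy as np

def mean_normalized_bias(model, obs):
    """MNB = mean of (model - obs) / obs; -0.62 means the model is on
    average 62% below the observations."""
    model, obs = np.asarray(model), np.asarray(obs)
    return np.mean((model - obs) / obs)

def temporal_correlation(model, obs):
    """Pearson correlation of the two time series at one station."""
    return np.corrcoef(model, obs)[0, 1]

# Illustrative monthly OC series at one urban station (ug C m^-3).
obs = np.array([3.1, 2.8, 2.5, 2.2, 2.0, 2.4, 2.9, 3.0, 2.7, 2.6, 2.9, 3.3])
mod = 0.4 * obs + np.array([0.2, -0.1, 0.0, 0.1, -0.2, 0.0,
                            0.1, -0.1, 0.2, 0.0, -0.1, 0.1])

print(f"MNB = {mean_normalized_bias(mod, obs):+.2f}")   # strongly negative
print(f"r   = {temporal_correlation(mod, obs):.2f}")    # seasonality captured
```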
Abstract:
A new frontier in weather forecasting is emerging, with operational forecast models now being run at convection-permitting resolutions at many national weather services. However, this is not a panacea: significant systematic errors remain in the character of convective storms and rainfall distributions. The DYMECS project (Dynamical and Microphysical Evolution of Convective Storms) is taking a fundamentally new approach to evaluate and improve such models: rather than relying on a limited number of cases, which may not be representative, we have gathered a large database of 3D storm structures on 40 convective days using the Chilbolton radar in southern England. We have related these structures to storm life-cycles derived by tracking features in the rainfall from the UK radar network, and compared them statistically to storm structures in the Met Office model, which we ran at horizontal grid lengths between 1.5 km and 100 m, including simulations with different subgrid mixing lengths. We also evaluated the scale and intensity of convective updrafts using a new radar technique. We find that the horizontal size of simulated convective storms and of the updrafts within them is much too large at 1.5-km grid length, such that the convective mass flux of individual updrafts can be too large by an order of magnitude. The scale of precipitation cores and updrafts decreases steadily with decreasing grid length, as does the typical storm lifetime. The 200-m grid-length simulation with the standard mixing length performs best across all diagnostics, although a greater mixing length improves the representation of deep convective storms.
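The link between the size bias and the mass-flux bias follows from the flux of a single cylindrical updraft core, M = rho * w * A: widening a core at fixed vertical velocity inflates its mass flux with the square of its radius. A sketch with illustrative numbers, not DYMECS values:

```python
import numpy as np

def updraft_mass_flux(radius_m, w_ms, rho=1.0):
    """Mass flux of one cylindrical updraft core: M = rho * w * A (kg/s)."""
    return rho * w_ms * np.pi * radius_m ** 2

# Illustrative core radii: a storm resolved at ~200 m grid length versus the
# same storm at 1.5 km, where the model can only build much wider drafts.
m_fine = updraft_mass_flux(radius_m=600.0, w_ms=8.0)
m_coarse = updraft_mass_flux(radius_m=2000.0, w_ms=8.0)

print(f"fine-grid core:   {m_fine:.2e} kg/s")
print(f"coarse-grid core: {m_coarse:.2e} kg/s")
print(f"ratio: {m_coarse / m_fine:.1f}x")   # size bias alone gives ~11x
```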