145 results for Model-In-the-loop
Abstract:
The primary role of land surface models embedded in climate models is to partition the available surface energy into upward radiative, sensible, and latent heat fluxes. The partitioning of evapotranspiration (ET) is of fundamental importance: as a major component of the total surface latent heat flux, ET affects the simulated surface water balance, the related energy balance, and consequently the feedbacks with the atmosphere. In this context it is also crucial to credibly represent the CO2 exchange between ecosystems and their environment. In this study, JULES, the land surface model used in UK weather and climate models, has been evaluated for temperate Europe. Compared to eddy covariance flux measurements, the CO2 uptake by the ecosystem is underestimated and the ET overestimated. In addition, the contribution to ET from soil and intercepted water evaporation far outweighs the contribution of plant transpiration. To alleviate these biases, adaptations based on key literature references have been implemented in JULES. These adaptations have improved the simulation of the spatio-temporal variability of the fluxes and the accuracy of the simulated GPP and ET, including its partitioning, and have resulted in a shift of the seasonal soil moisture cycle. These adaptations are expected to increase the fidelity of climate simulations over Europe. Finally, the extreme summer of 2003 was used as an evaluation benchmark for the use of the model in climate change studies. The improved model captures the impact of the 2003 drought on the carbon assimilation and the water use efficiency of the plants; it does, however, underestimate the 2003 GPP anomalies. The simulations showed that a reduction of evaporation from the interception and soil reservoirs, though not of transpiration, largely explains the strong correlation between the carbon and water flux anomalies observed during 2003.
This demonstrates the importance of being able to discriminate the response of the individual components of the ET flux to environmental forcing.
Abstract:
The Hadley Centre Global Environmental Model (HadGEM) includes two aerosol schemes: the Coupled Large-scale Aerosol Simulator for Studies in Climate (CLASSIC), and the new Global Model of Aerosol Processes (GLOMAP-mode). GLOMAP-mode is a modal aerosol microphysics scheme that simulates not only aerosol mass but also aerosol number, represents internally mixed particles, and includes aerosol microphysical processes such as nucleation. In this study, both schemes provide hindcast simulations of natural and anthropogenic aerosol species for the period 2000–2006. HadGEM simulations of aerosol optical depth using GLOMAP-mode compare better than CLASSIC against a data-assimilated aerosol re-analysis and ground-based aerosol observations. Because of differences in wet deposition rates, the residence time of GLOMAP-mode sulphate aerosol is two days longer than that of CLASSIC sulphate, whereas the black carbon residence time is much shorter. As a result, CLASSIC underestimates aerosol optical depths in continental regions of the Northern Hemisphere and likely overestimates absorption in remote regions. Aerosol direct and first indirect radiative forcings are computed from simulations of aerosols with emissions for the years 1850 and 2000. In 1850, GLOMAP-mode predicts lower aerosol optical depths and higher cloud droplet number concentrations than CLASSIC. Consequently, simulated clouds are much less susceptible to natural and anthropogenic aerosol changes when the microphysical scheme is used. In particular, the response of cloud condensation nuclei to an increase in dimethyl sulphide emissions becomes a factor of four smaller. The combined effect of different 1850 baselines, residence times, and abilities to affect cloud droplet number leads to substantial differences in the aerosol forcings simulated by the two schemes. GLOMAP-mode finds a present-day direct aerosol forcing of −0.49 W m−2 on a global average, 72% stronger than the corresponding forcing from CLASSIC.
This difference is compensated by changes in the first indirect aerosol forcing: the forcing of −1.17 W m−2 obtained with GLOMAP-mode is 20% weaker than with CLASSIC. Results suggest that mass-based schemes such as CLASSIC lack the necessary sophistication to provide realistic input to aerosol–cloud interaction schemes. Furthermore, the importance of the 1850 baseline highlights how model skill in predicting present-day aerosol does not guarantee reliable forcing estimates. These findings suggest that the more complex representation of aerosol processes in microphysical schemes improves the fidelity of simulated aerosol forcings.
Abstract:
Persistent contrails are believed to exert a relatively small but significant positive radiative forcing on present-day climate. With air travel predicted to continue its rapid growth over the coming years, the contrail warming effect on climate is expected to increase. Nevertheless, there remains a high level of uncertainty in current estimates of contrail radiative forcing. Contrail formation depends mostly on aircraft flying through sufficiently cold and moist air masses. Most studies to date have relied on simple parameterizations using averaged meteorological conditions. In this paper we take into account the short-term variability in background cloudiness by developing an on-line contrail parameterization for the UK Met Office climate model. With this parameterization, we estimate that for the air traffic of year 2002 the global mean annual linear contrail coverage was approximately 0.11%. Assuming a global mean contrail optical depth of 0.2 or smaller and assuming hexagonal ice crystals, the corresponding contrail radiative forcing was calculated to be less than 10 mW m−2 in all-sky conditions. We find that the masking of contrails by natural clouds may be significantly stronger than previously believed: contrails seem to preferentially form in cloudy conditions, which reduces their overall climate impact by approximately 40%.
Abstract:
The global cycle of multicomponent aerosols including sulfate, black carbon (BC), organic matter (OM), mineral dust, and sea salt is simulated in the Laboratoire de Météorologie Dynamique general circulation model (LMDZT GCM). The seasonal open biomass burning emissions for simulation years 2000–2001 are scaled from climatological emissions in proportion to satellite-detected fire counts. The emissions of dust and sea salt are parameterized online in the model. Comparison of model-predicted monthly mean aerosol optical depth (AOD) at 500 nm with the Aerosol Robotic Network (AERONET) shows good agreement, with a correlation coefficient of 0.57 (N = 1324) and 76% of data points falling within a factor of 2. The correlation coefficient for daily mean values drops to 0.49 (N = 23,680). The absorption AOD (τa at 670 nm) estimated in the model is poorly correlated with measurements (r = 0.27, N = 349) and is biased low by 24% as compared to AERONET. The model reproduces the prominent features in the monthly mean AOD retrievals from the Moderate Resolution Imaging Spectroradiometer (MODIS). The agreement between the model and MODIS is better over source and outflow regions (i.e., within a factor of 2). The model underestimates AOD by up to a factor of 3 to 5 over some remote oceans. The largest contribution to the global annual average AOD (0.12 at 550 nm) is from sulfate (0.043 or 35%), followed by sea salt (0.027 or 23%), dust (0.026 or 22%), OM (0.021 or 17%), and BC (0.004 or 3%). Atmospheric aerosol absorption is contributed predominantly by BC and amounts to about 3% of the total AOD. The globally and annually averaged shortwave (SW) direct aerosol radiative perturbation (DARP) in clear-sky conditions is −2.17 W m−2, about a factor of 2 larger than in all-sky conditions (−1.04 W m−2). The net DARP (SW + LW) by all aerosols is −1.46 and −0.59 W m−2 in clear- and all-sky conditions, respectively.
Use of realistic dust optical properties that are less absorbing in the SW results in negative forcing over the dust-dominated regions.
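The model–AERONET comparison above rests on two simple skill metrics: the Pearson correlation of the AOD values and the fraction of model–observation pairs agreeing within a factor of 2. A minimal sketch of how such metrics are computed (the sample AOD values are invented for illustration, not the study's data):

```python
import math

def pearson_r(xs, ys):
    # Pearson correlation coefficient between two equal-length samples.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def frac_within_factor(model, obs, factor=2.0):
    # Fraction of pairs whose ratio model/obs lies in [1/factor, factor].
    ok = sum(1 for m, o in zip(model, obs) if 1.0 / factor <= m / o <= factor)
    return ok / len(model)

# Hypothetical monthly mean AODs at a few sites (illustrative values only).
obs = [0.10, 0.25, 0.40, 0.15, 0.55]
model = [0.12, 0.20, 0.70, 0.05, 0.50]

r = pearson_r(model, obs)
f2 = frac_within_factor(model, obs)
```

Here four of the five pairs fall within a factor of 2 (the 0.05 vs 0.15 pair does not), so `f2` is 0.8.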
Abstract:
This article elucidates the Typological Primacy Model (TPM; Rothman, 2010, 2011, 2013) for the initial stages of adult third language (L3) morphosyntactic transfer, addressing questions that stem from the model and its application. The TPM maintains that structural proximity between the L3 and the L1 and/or the L2 determines L3 transfer. In addition to demonstrating empirical support for the TPM, this article articulates a proposal for how the mind unconsciously determines typological (structural) proximity based on linguistic cues from the L3 input stream used by the parser early on to determine holistic transfer of one previous (the L1 or the L2) system. This articulated version of the TPM is motivated by argumentation appealing to cognitive and linguistic factors. Finally, in line with the general tenets of the TPM, I ponder if and why L3 transfer might obtain differently depending on the type of bilingual (e.g. early vs. late) and proficiency level of bilingualism involved in the L3 process.
Abstract:
Medium-range flood forecasting activities, driven by meteorological forecasts ranging from high-resolution deterministic forecasts to low-spatial-resolution ensemble prediction systems, share a major challenge in the appropriateness and design of performance measures. In this paper, possible limitations of some traditional hydrological and meteorological prediction quality and verification measures are identified. Some simple modifications are applied in order to circumvent the autocorrelation that dominates river discharge time series, and in order to create a benchmark model enabling decision makers to evaluate both forecast quality and model quality. Although the evaluation period is quite short, the advantage of a simple cost-loss function as a measure of forecast quality can be demonstrated.
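A cost-loss measure of the kind mentioned above scores a binary forecast by the expense a user with cost/loss ratio α = C/L incurs by acting on it, relative to climatology and a perfect forecast. A minimal sketch under the standard definitions (the contingency-table frequencies below are hypothetical, not the paper's results):

```python
def relative_value(hits, false_alarms, misses, alpha):
    """Relative economic value V of a binary event forecast.

    hits, false_alarms, misses: relative frequencies from the
    contingency table (each in [0, 1]); alpha = cost/loss ratio C/L.
    V = 1 for a perfect forecast; V <= 0 when the forecast is no
    better than always or never taking protective action.
    """
    base_rate = hits + misses
    e_forecast = (hits + false_alarms) * alpha + misses  # expense when acting on the forecast
    e_climate = min(alpha, base_rate)                    # cheaper of always/never protecting
    e_perfect = base_rate * alpha                        # protect only on event days
    return (e_climate - e_forecast) / (e_climate - e_perfect)

# Hypothetical example: events on 20% of days; the forecast scores
# 15% hits, 10% false alarms, 5% misses, for a user with alpha = 0.1.
v = relative_value(0.15, 0.10, 0.05, 0.1)
```

With these numbers the forecast recovers about 31% of the economic value a perfect forecast would provide over climatology.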
Abstract:
The Eyjafjallajökull volcano in Iceland emitted a cloud of ash into the atmosphere during April and May 2010. Over the UK the ash cloud was observed by the FAAM BAe-146 Atmospheric Research Aircraft which was equipped with in-situ probes measuring the concentration of volcanic ash carried by particles of varying sizes. The UK Met Office Numerical Atmospheric-dispersion Modelling Environment (NAME) has been used to simulate the evolution of the ash cloud emitted by the Eyjafjallajökull volcano during the period 4–18 May 2010. In the NAME simulations the processes controlling the evolution of the concentration and particle size distribution include sedimentation and deposition of particles, horizontal dispersion and vertical wind shear. For travel times between 24 and 72 h, a 1/t relationship describes the evolution of the concentration at the centre of the ash cloud and the particle size distribution remains fairly constant. Although NAME does not represent the effects of microphysical processes, it can capture the observed decrease in concentration with travel time in this period. This suggests that, for this eruption, microphysical processes play a small role in determining the evolution of the distal ash cloud. Quantitative comparison with observations shows that NAME can simulate the observed column-integrated mass if around 4% of the total emitted mass is assumed to be transported as far as the UK by small particles (< 30 μm diameter). NAME can also simulate the observed particle size distribution if a distal particle size distribution that contains a large fraction of < 10 μm diameter particles is used, consistent with the idea that phreatomagmatic volcanoes, such as Eyjafjallajökull, emit very fine particles.
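The 1/t relationship reported above can be written c(t) = c(t0) · t0/t for travel times t within the 24–72 h window, so the centre concentration halves between 24 h and 48 h of travel. A minimal sketch (the reference concentration is an assumed, illustrative value, not an observed one):

```python
def centre_concentration(c0, t0, t):
    """Ash concentration at the cloud centre under the 1/t dilution law.

    c0: concentration at reference travel time t0 (e.g. at 24 h);
    the relationship holds here only for travel times of roughly 24-72 h.
    """
    return c0 * t0 / t

# Illustrative: 400 concentration units at 24 h of travel time.
c48 = centre_concentration(400.0, 24.0, 48.0)  # half of the 24 h value
c72 = centre_concentration(400.0, 24.0, 72.0)  # one third of the 24 h value
```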
Abstract:
The role of air–sea coupling in the simulation of the Madden–Julian oscillation (MJO) is explored using two configurations of the Hadley Centre atmospheric model (AGCM), GA3.0, which differ only in F, a parameter controlling convective entrainment and detrainment. Increasing F considerably improves deficient MJO-like variability in the Indian and Pacific Oceans, but variability in and propagation through the Maritime Continent remains weak. By coupling GA3.0 in the tropical Indo-Pacific to a boundary-layer ocean model, KPP, and employing climatological temperature corrections, well resolved air–sea interactions are simulated with limited alterations to the mean state. At default F, when GA3.0 has a poor MJO, coupling produces a stronger MJO with some eastward propagation, although both aspects remain deficient. These results agree with previous sensitivity studies using AGCMs with poor variability. At higher F, coupling does not affect MJO amplitude but enhances propagation through the Maritime Continent, resulting in an MJO that resembles observations. A sensitivity experiment with coupling in only the Indian Ocean reverses these improvements, suggesting coupling in the Maritime Continent and West Pacific is critical for propagation. We hypothesise that for AGCMs with a poor MJO, coupling provides a “crutch” to artificially augment MJO-like activity through high-frequency SST anomalies. In related experiments, we employ the KPP framework to analyse the impact of air–sea interactions in the fully coupled GA3.0, which at default F shows a similar MJO to uncoupled GA3.0. This is due to compensating effects: an improvement from coupling and a degradation from mean-state errors. Future studies on the role of coupling should carefully separate these effects.
Abstract:
In its default configuration, the Hadley Centre climate model (GA2.0) simulates roughly one-half the observed level of Madden–Julian oscillation activity, with MJO events often lasting fewer than seven days. We use initialised, climate-resolution hindcasts to examine the sensitivity of the GA2.0 MJO to a range of changes in sub-grid parameterisations and model configurations. All 22 changes are tested for two cases during the Years of Tropical Convection. Improved skill comes only from (a) disabling vertical momentum transport by convection and (b) increasing mixing entrainment and detrainment for deep and mid-level convection. These changes are subsequently tested in a further 14 hindcast cases; only (b) consistently improves MJO skill, from 12 to 22 days. In a 20-year integration, (b) produces near-observed levels of MJO activity, but propagation through the Maritime Continent remains weak. With default settings, GA2.0 produces precipitation too readily, even in anomalously dry columns. Implementing (b) decreases the efficiency of convection, permitting instability to build during the suppressed MJO phase and producing a more favourable environment for the active phase. The distribution of daily rain rates is more consistent with satellite data; default entrainment produces 6–12 mm/day too frequently. These results are consistent with recent studies showing that greater sensitivity of convection to moisture improves the representation of the MJO.
Abstract:
We demonstrate that summer precipitation biases in the South Asian monsoon domain are sensitive to increasing the convective parametrisation’s entrainment and detrainment rates in the Met Office Unified Model. We explore this sensitivity to improve our understanding of the biases and inform efforts to improve convective parametrisation. We perform novel targeted experiments in which we increase the entrainment and detrainment rates in regions of especially large precipitation bias. We use these experiments to determine whether the sensitivity at a given location is a consequence of the local change to convection or is a remote response to the change elsewhere. We find that a local change leads to different mean-state responses in comparable regions. When the entrainment and detrainment rates are increased globally, feedbacks between regions usually strengthen the local responses. We choose two regions of tropical ascent that show different mean-state responses, the western equatorial Indian Ocean and western north Pacific, and analyse them as case studies to determine the mechanisms leading to the different responses. Our results indicate that several aspects of a region’s mean-state, including moisture content, sea surface temperature and circulation, play a role in local feedbacks that determine the response to increased entrainment and detrainment.
Abstract:
The Plaut, McClelland, Seidenberg and Patterson (1996) connectionist model of reading was evaluated at two points early in its training against reading data collected from British children on two occasions during their first year of literacy instruction. First, the network’s non-word reading was poor relative to word reading when compared with the children. Second, the network made more non-lexical than lexical errors, the opposite pattern to the children. Three adaptations were made to the training of the network to bring it closer to the learning environment of a child: an incremental training regime was adopted; the network was trained on grapheme–phoneme correspondences; and a training corpus based on words found in children’s early reading materials was used. The modifications caused a sharp improvement in non-word reading, relative to word reading, resulting in a near perfect match to the children’s data on this measure. The modified network, however, continued to make predominantly non-lexical errors, although evidence from a small-scale implementation of the full triangle framework suggests that this limitation stems from the lack of a semantic pathway. Taken together, these results suggest that, when properly trained, connectionist models of word reading can offer insights into key aspects of reading development in children.
Abstract:
In order to examine metacognitive accuracy (i.e., the relationship between metacognitive judgment and memory performance), researchers often rely on by-participant analysis, where metacognitive accuracy (e.g., resolution, as measured by the gamma coefficient or signal detection measures) is computed for each participant and the computed values are entered into group-level statistical tests such as the t-test. In the current work, we argue that the by-participant analysis, regardless of the accuracy measure used, produces a substantial inflation of Type-1 error rates when a random item effect is present. A mixed-effects model is proposed as a way to effectively address the issue, and our simulation studies examining Type-1 error rates indeed showed superior performance of the mixed-effects model analysis as compared to the conventional by-participant analysis. We also present real data applications to illustrate further strengths of the mixed-effects model analysis. Our findings imply that caution is needed when using the by-participant analysis, and we recommend the mixed-effects model analysis instead.
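The gamma coefficient named above (Goodman–Kruskal gamma) compares every pair of items: pairs that the metacognitive judgment and the memory outcome order the same way are concordant, opposite ways discordant, and γ = (C − D)/(C + D), with tied pairs ignored. A minimal per-participant sketch (the judgment and recall values are invented for illustration):

```python
from itertools import combinations

def gamma(judgments, outcomes):
    """Goodman-Kruskal gamma between judgments and memory outcomes.

    Pairs tied on either variable are ignored; gamma = (C - D) / (C + D).
    """
    concordant = discordant = 0
    for (j1, o1), (j2, o2) in combinations(zip(judgments, outcomes), 2):
        s = (j1 - j2) * (o1 - o2)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)

# Invented data for one participant: confidence judgments and recall (1/0).
jols = [10, 30, 50, 70, 90]
recall = [0, 0, 1, 0, 1]
g = gamma(jols, recall)
```

In the by-participant analysis criticised above, one such `g` per participant would then be entered into a group-level t-test; the paper's point is that this step ignores item-level random variation.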
Abstract:
Climate change due to anthropogenic greenhouse gas emissions is expected to increase the frequency and intensity of precipitation events, which is likely to affect the probability of flooding into the future. In this paper we use river flow simulations from nine global hydrology and land surface models to explore uncertainties in the potential impacts of climate change on flood hazard at global scale. As an indicator of flood hazard we looked at changes in the 30-y return level of 5-d average peak flows under representative concentration pathway RCP8.5 at the end of this century. Not everywhere does climate change result in an increase in flood hazard: decreases in the magnitude and frequency of the 30-y return level of river flow occur at roughly one-third (20–45%) of the global land grid points, particularly in areas where the hydrograph is dominated by the snowmelt flood peak in spring. In most model experiments, however, an increase in flooding frequency was found in more than half of the grid points. The current 30-y flood peak is projected to occur in more than 1 in 5 y across 5–30% of land grid points. The large-scale patterns of change are remarkably consistent among impact models and even the driving climate models, but at local scale and in individual river basins there can be disagreement even on the sign of change, indicating large modeling uncertainty that needs to be taken into account in local adaptation studies.
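The 30-y return level used above is the flow exceeded on average once every 30 years. One common way to estimate such a level is to fit a Gumbel distribution to a series of annual maxima by the method of moments; a minimal sketch with invented annual maxima (the study itself derives return levels from large ensembles of simulated river flow, not from this toy estimator):

```python
import math
import statistics

def gumbel_return_level(annual_maxima, return_period):
    """T-year return level from a method-of-moments Gumbel fit.

    scale = s * sqrt(6)/pi, location = mean - Euler_gamma * scale;
    return level = location - scale * ln(-ln(1 - 1/T)).
    """
    euler_gamma = 0.5772156649
    mean = statistics.mean(annual_maxima)
    sd = statistics.stdev(annual_maxima)
    scale = sd * math.sqrt(6) / math.pi
    location = mean - euler_gamma * scale
    return location - scale * math.log(-math.log(1.0 - 1.0 / return_period))

# Invented annual maxima of 5-d mean flow (m^3/s) at one grid point.
peaks = [310, 280, 450, 390, 520, 300, 610, 340, 480, 420]
rl30 = gumbel_return_level(peaks, 30)
```

A projected change in flood hazard at a grid point can then be expressed as the shift in `rl30` between a historical and an end-of-century simulation period.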
Abstract:
The “cotton issue” has been a topic of several academic discussions for trade policy analysts. However, the design of trade and agricultural policy in the EU and the USA has become a politically sensitive matter over the last five years. This study, utilizing the Agricultural Trade Policy Simulation Model (ATPSM), aims to gain insights into the global cotton market, to explain why domestic support for cotton has become an issue, to quantify the impact of the new EU agricultural policy on the cotton sector, and to measure the effect of eliminating support policies on production and trade. Results indicate that full trade liberalization would lead the four West African countries to better terms of trade with the EU. If tariff reduction follows the so-called Swiss formula, world prices would increase by 3.5%.
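The Swiss formula referred to above is a harmonising tariff cut, t_new = (a · t_old)/(a + t_old), where the coefficient a caps the post-cut tariff and cuts high tariffs proportionally more than low ones. A minimal sketch (the coefficient and initial tariffs are illustrative, not those of the ATPSM scenario):

```python
def swiss_formula(tariff, a):
    """Post-cut tariff under the Swiss formula, with ceiling coefficient a.

    The result is always below a, so the formula harmonises tariffs:
    the higher the initial tariff, the deeper the proportional cut.
    """
    return a * tariff / (a + tariff)

# Illustrative coefficient a = 25 (%) applied to a range of initial tariffs.
cuts = {t: round(swiss_formula(t, 25), 2) for t in (5, 25, 100, 500)}
```

For example, with a = 25 an initial tariff of 100% falls to 20%, while a tariff of 5% falls only to about 4.17%.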
Abstract:
Building Information Modeling (BIM) is the process of structuring, capturing, creating, and managing a digital representation of physical and/or functional characteristics of a built space [1]. Current BIM has limited ability to represent dynamic semantics and social information, and often fails to consider building activity, behavior, and context, thus limiting integration with intelligent built-environment management systems. Research such as the development of Semantic Exchange Modules, and the linking of IFC with semantic web structures, demonstrates the need for building models to better support complex semantic functionality. To implement model semantics effectively, however, it is critical that model designers consider semantic information constructs. This paper discusses semantic models in relation to determining the most suitable information structure. We demonstrate how semantic rigidity can lead to significant long-term problems that can contribute to model failure. A sufficiently detailed feasibility study is advised to maximize the value obtained from the semantic model. In addition, we propose a set of questions to be used during a model’s feasibility study, and guidelines to help assess the most suitable method for managing semantics in a built environment.