50 results for Sensitivities


Relevance:

10.00%

Abstract:

An overview is given of current issues concerning the coupling between the stratosphere and troposphere. The tropopause region, more generally the upper troposphere/lower stratosphere, is the region of direct contact where exchange of material takes place. Dynamical coupling through angular momentum transfer by waves occurs nonlocally, and provides a generally negative torque on the stratosphere which drives an equator-to-pole circulation (i.e., towards the Earth’s axis of rotation). This wave-driven circulation is the principal mechanism for intraseasonal and interannual variability in the extratropical stratosphere. Although such variability is generally dynamical in origin, there are important chemical and radiative feedbacks. The location of the tropopause has implications for radiative forcing of climate, through its effect on the distribution of relatively short-lived greenhouse gases (ozone and water vapour). Some outstanding puzzles in our current understanding are identified. Attention is focused on possible climate sensitivities, and how these may be tested and constrained. Results from the Canadian Middle Atmosphere Model (CMAM), a fully interactive radiative-chemical-dynamical general circulation model, are used to illustrate some of the points.

Relevance:

10.00%

Abstract:

We compare future changes in global mean temperature in response to different future scenarios which, for the first time, arise from an emission-driven rather than a concentration-driven perturbed-parameter ensemble of a global climate model (GCM). These new GCM simulations sample uncertainties in atmospheric feedbacks, the land carbon cycle, ocean physics and aerosol sulphur-cycle processes. We find broader ranges of projected temperature response when considering emission-driven rather than concentration-driven simulations (with 10th–90th percentile ranges of 1.7 K for the aggressive mitigation scenario, up to 3.9 K for the high-end, business-as-usual scenario). A small minority of simulations, resulting from combinations of strong atmospheric feedbacks and carbon cycle responses, show temperature increases in excess of 9 K under the high-end scenario (RCP8.5) and in excess of 4 K even under aggressive mitigation (RCP2.6). While the simulations point to much larger temperature ranges for emission-driven experiments, they do not change existing expectations (based on previous concentration-driven experiments) about the timescales over which different sources of uncertainty are important. The new simulations sample a range of future atmospheric concentrations for each emission scenario. For both SRES A1B and the Representative Concentration Pathways (RCPs), the concentration scenario used to drive GCM ensembles lies towards the lower end of our simulated distribution. This design decision (a legacy of previous assessments) is likely to lead concentration-driven experiments to under-sample strong feedback responses in future projections. Our ensemble of emission-driven simulations spans the global temperature response of the CMIP5 emission-driven simulations, except at the low end: combinations of low climate sensitivity and weak carbon cycle feedbacks cause a number of CMIP5 responses to lie below our ensemble range. The ensemble also simulates a number of high-end responses which lie above the range of the CMIP5 carbon cycle simulations. These high-end simulations can be linked to sampling stronger carbon cycle feedbacks and climate sensitivities above 4.5 K. This latter aspect highlights the priority of identifying real-world climate-sensitivity constraints which, if achieved, would reduce the upper bound of projected global mean temperature change. The ensemble of simulations presented here provides a framework to explore relationships between present-day observables and future changes, while the large spread of projected future changes highlights the ongoing need for such work.
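
As a minimal illustration of the ensemble-range statistic quoted above (the 10th–90th percentile spread of projected warming), the sketch below computes it for two hypothetical ensembles; the synthetic temperature values and ensemble size are placeholders, not the study's model output.

```python
import numpy as np

# Hypothetical ensembles of end-of-century warming (K); placeholders, not model output.
rcp26 = np.random.default_rng(0).normal(loc=2.0, scale=0.6, size=57)
rcp85 = np.random.default_rng(1).normal(loc=5.5, scale=1.4, size=57)

def percentile_range(warming, lo=10, hi=90):
    """Width of the lo-hi percentile interval of an ensemble (K)."""
    p_lo, p_hi = np.percentile(warming, [lo, hi])
    return p_hi - p_lo

print(f"RCP2.6 10-90% range: {percentile_range(rcp26):.1f} K")
print(f"RCP8.5 10-90% range: {percentile_range(rcp85):.1f} K")
```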

Relevance:

10.00%

Abstract:

Both historical and idealized climate model experiments are performed with a variety of Earth system models of intermediate complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20th century trends in surface air temperature and carbon uptake are reasonably well simulated when compared to observed trends. Land carbon fluxes show much more variation between models than ocean carbon fluxes, and recent land fluxes appear to be slightly underestimated. It is possible that recent modelled climate trends or climate–carbon feedbacks are overestimated, resulting in too much land carbon loss, or that carbon uptake due to CO2 and/or nitrogen fertilization is underestimated. Several 1000-year-long, idealized 2× and 4× CO2 experiments are used to quantify standard model characteristics, including transient and equilibrium climate sensitivities and climate–carbon feedbacks. The values from EMICs generally fall within the range given by general circulation models. Seven additional historical simulations, each including a single specified forcing, are used to assess the contributions of different climate forcings to the overall climate and carbon cycle response. The surface air temperature response is approximately the linear sum of the responses to the individual forcings, while the carbon cycle response shows a non-linear interaction between land-use change and CO2 forcings for some models. Finally, the pre-industrial portions of the last-millennium simulations are used to assess historical model carbon–climate feedbacks. Given the specified forcing, there is a tendency for the EMICs to underestimate the drop in surface air temperature and CO2 between the Medieval Climate Anomaly and the Little Ice Age estimated from palaeoclimate reconstructions. This could in turn be a result of unforced variability within the climate system, uncertainty in the reconstructions of temperature and CO2, errors in the reconstructions of forcing used to drive the models, or the incomplete representation of certain processes within the models. Given the forcing datasets used in this study, the models calculate significant land-use emissions over the pre-industrial period. This implies that land-use emissions might need to be taken into account when making estimates of climate–carbon feedbacks from palaeoclimate reconstructions.

Relevance:

10.00%

Abstract:

We present a new coefficient-based retrieval scheme for estimation of sea surface temperature (SST) from the Along Track Scanning Radiometer (ATSR) instruments. The new coefficients are banded by total column water vapour (TCWV), obtained from numerical weather prediction analyses. TCWV banding reduces simulated regional retrieval biases to < 0.1 K, compared with biases of ~ 0.2 K for global coefficients. Further, detailed treatment of the instrumental viewing geometry reduces simulated view-angle-related biases from ~ 0.1 K down to < 0.005 K for dual-view retrievals using channels at 11 and 12 μm. A novel analysis of trade-offs related to the assumed noise level when defining coefficients is undertaken, and we conclude that adding a small nominal level of noise (0.01 K) is optimal for our purposes. When applied to ATSR observations, some inter-algorithm biases appear as TCWV-related differences in SSTs estimated from different channel combinations. The final step in coefficient determination is therefore to adjust the offset coefficient in each TCWV band to match results from a reference algorithm, which uses the dual-view observations at 3.7 and 11 μm. The adjustment is independent of in situ measurements, preserving the independence of the retrievals. The choice of reference is partly motivated by uncertainty in the calibration of the 12 μm channel of the Advanced ATSR. Lastly, we model the sensitivities of the new retrievals to changes in TCWV and in true SST, confirming that dual-view SSTs are the most appropriate for climatological applications.
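
The sketch below illustrates the general shape of a coefficient-based, TCWV-banded linear retrieval. The band edges and coefficient values are hypothetical placeholders, not the published ATSR coefficients, and a full scheme would also band by view geometry and channel combination as described above.

```python
import numpy as np

# Assumed TCWV band edges (kg m^-2) and per-band coefficients (offset, a_11um, a_12um);
# these numbers are placeholders, not the published ATSR coefficients.
TCWV_EDGES = np.array([0.0, 15.0, 30.0, 45.0, 60.0, 80.0])
COEFFS = np.array([
    [0.5, 3.0, -2.0],
    [0.7, 3.2, -2.2],
    [0.9, 3.5, -2.5],
    [1.1, 3.9, -2.9],
    [1.4, 4.4, -3.4],
])

def retrieve_sst(bt11, bt12, tcwv):
    """Linear retrieval SST = a0 + a1*BT11 + a2*BT12, with coefficients chosen by TCWV band."""
    band = np.clip(np.searchsorted(TCWV_EDGES, tcwv) - 1, 0, len(COEFFS) - 1)
    a0, a1, a2 = COEFFS[band]
    return a0 + a1 * bt11 + a2 * bt12

# Example: brightness temperatures (K) at 11 and 12 um for a moderately moist atmosphere.
print(f"SST estimate: {retrieve_sst(bt11=288.0, bt12=286.5, tcwv=32.0):.2f} K")
```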

Relevance:

10.00%

Abstract:

Transient and equilibrium sensitivity of Earth's climate has been calculated using global temperature, forcing and heating-rate data for the period 1970–2010. We assume that the increase in long-wave radiative forcing over the period is due to the increase in the long-lived greenhouse gases. By assuming the change in aerosol forcing over the period to be zero, we calculate what we consider to be lower bounds to these sensitivities, as the magnitude of the negative aerosol forcing is unlikely to have diminished in this period. The radiation imbalance necessary to calculate equilibrium sensitivity is estimated from the rate of ocean heat accumulation as 0.37 ± 0.03 W m^−2 (all uncertainty estimates are 1σ). With these data, we obtain best estimates for the transient climate sensitivity of 0.39 ± 0.07 K (W m^−2)^−1 and the equilibrium climate sensitivity of 0.54 ± 0.14 K (W m^−2)^−1, equivalent to 1.5 ± 0.3 and 2.0 ± 0.5 K (3.7 W m^−2)^−1, respectively. The latter quantity is equal to the lower bound of the 'likely' range for this quantity given by the 2007 IPCC Assessment Report. The uncertainty attached to the lower-bound equilibrium sensitivity permits us to state, within the assumptions of this analysis, that the equilibrium sensitivity is greater than 0.31 K (W m^−2)^−1, equivalent to 1.16 K (3.7 W m^−2)^−1, at the 95% confidence level.
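
As a worked illustration of the two definitions (transient sensitivity as realised warming per unit forcing, equilibrium sensitivity as warming per unit forcing after subtracting the unrealised radiation imbalance), the sketch below uses round-number 1970–2010 changes that are assumptions chosen to be consistent with the reported sensitivities, not the paper's exact input data.

```python
# Assumed illustrative 1970-2010 changes, chosen to be consistent with the reported
# sensitivities; they are not the paper's exact input data.
dT = 0.53   # global temperature change over the period (K), assumed
dF = 1.35   # change in long-lived greenhouse-gas forcing (W m^-2), assumed
N  = 0.37   # radiation imbalance from ocean heat accumulation (W m^-2), from the abstract

S_transient   = dT / dF          # warming realised per unit forcing, K (W m^-2)^-1
S_equilibrium = dT / (dF - N)    # accounts for the imbalance still to be realised

F_2xCO2 = 3.7                    # canonical forcing for a doubling of CO2 (W m^-2)
print(f"Transient sensitivity:   {S_transient:.2f} K (W m^-2)^-1 "
      f"= {S_transient * F_2xCO2:.1f} K per CO2 doubling")
print(f"Equilibrium sensitivity: {S_equilibrium:.2f} K (W m^-2)^-1 "
      f"= {S_equilibrium * F_2xCO2:.1f} K per CO2 doubling")
```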

Relevance:

10.00%

Abstract:

In this paper we consider the structure of dynamically evolving networks modelling information and activity moving across a large set of vertices. We adopt the concept of communicability, which generalizes that of centrality defined for static networks. We define the primary network structure within the whole as comprising the most influential vertices (both as senders and receivers of dynamically sequenced activity). We present a methodology based on successive vertex knockouts, up to a very small fraction of the whole primary network, that can characterize the nature of the primary network as either relatively robust and lattice-like (with redundancies built in) or relatively fragile and tree-like (with sensitivities and few redundancies). We apply these ideas to the analysis of evolving networks derived from fMRI scans of resting human brains. We show that the estimation of performance parameters via the structure tests of the corresponding primary networks is subject to less variability than that observed across a very large population of such scans. Hence the differences within the population are significant.
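
A toy sketch of the knockout idea is given below, using networkx's static exp(A)-based communicability as a stand-in; the paper's dynamic communicability generalizes this to time-ordered sequences of adjacency matrices, and the random graph here is only a placeholder, not an fMRI-derived network.

```python
import networkx as nx

G = nx.erdos_renyi_graph(n=30, p=0.15, seed=1)    # placeholder network, not fMRI data

def total_communicability(graph):
    """Sum each vertex's row of exp(A): how well it communicates with the rest."""
    comm = nx.communicability_exp(graph)           # dict of dicts, entries of exp(A)
    return {u: sum(row.values()) for u, row in comm.items()}

scores = total_communicability(G)
ranked = sorted(scores, key=scores.get, reverse=True)

# Successive knockouts of the most influential vertices, tracking the effect on the rest.
H = G.copy()
for v in ranked[:3]:
    H.remove_node(v)
    remaining = total_communicability(H)
    mean_comm = sum(remaining.values()) / H.number_of_nodes()
    print(f"removed vertex {v}: mean communicability of remainder = {mean_comm:.2f}")
```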

Relevance:

10.00%

Abstract:

BACKGROUND: Methyl benzimidazole carbamate (MBC) fungicides are used to control the oilseed rape pathogen Pyrenopeziza brassicae. Resistance to MBCs has been reported in P. brassicae, but the molecular mechanism(s) associated with reductions in sensitivity have not been verified in this species. Elucidation of the genetic changes responsible for resistance, hypothesised to be target-site mutations in β-tubulin, will enable resistance diagnostics and thereby inform resistance management strategies. RESULTS: P. brassicae isolates were classified as sensitive, moderately resistant or resistant to MBCs. Crossing P. brassicae isolates of different MBC sensitivities indicated that resistance was conferred by a single gene. The MBC-target-encoding gene β-tubulin was cloned and sequenced. Reduced MBC sensitivity of field isolates correlated with the β-tubulin amino acid substitutions L240F and E198A. The highest level of MBC resistance was measured for isolates carrying E198A. Negative cross-resistance between MBCs and the fungicides diethofencarb and zoxamide was only measured in E198A isolates. PCR-RFLP was used to screen isolates for the presence of L240F and E198A. The substitutions E198G and F200Y were also detected in DNA samples from P. brassicae populations after cloning and sequencing of PCR products. The frequencies of L240F and E198A in different P. brassicae populations were quantified by pyrosequencing. There were no differences in the frequencies of these alleles between P. brassicae populations sampled from different locations or after fungicide treatment regimes. CONCLUSIONS: The molecular mechanisms affecting sensitivity to MBCs in P. brassicae have been identified. Pyrosequencing assays are a powerful tool for quantifying fungicide-resistance alleles in pathogen populations.

Relevance:

10.00%

Abstract:

The incidence and severity of light leaf spot epidemics caused by the ascomycete fungus Pyrenopeziza brassicae on UK oilseed rape crops are increasing. The disease is currently controlled by a combination of host resistance, cultural practices and fungicide applications. We report decreases in the sensitivities of modern UK P. brassicae isolates to the azole (imidazole and triazole) class of fungicides. By cloning and sequencing the P. brassicae CYP51 (PbCYP51) gene, encoding the azole target sterol 14α-demethylase, we identified two non-synonymous mutations encoding the substitutions G460S and S508T associated with reduced azole sensitivity. We confirmed the impact of the encoded PbCYP51 changes on azole sensitivity and protein activity by heterologous expression in the Saccharomyces cerevisiae mutant YUG37::erg11, in which native CYP51 expression is under the control of a regulatable promoter. In addition, we identified insertions in the predicted regulatory regions of PbCYP51 in isolates with reduced azole sensitivity. The presence of these insertions was associated with enhanced transcription of PbCYP51 in response to sub-inhibitory concentrations of the azole fungicide tebuconazole. Genetic analysis of in vitro crosses of sensitive and resistant isolates confirmed the impact of PbCYP51 alterations in coding and regulatory sequences on the reduced-sensitivity phenotype, and identified a second major gene at another locus contributing to resistance in some isolates. The least sensitive field isolates carry combinations of upstream insertions and non-synonymous mutations, suggesting that PbCYP51 evolution is ongoing and that the progressive decline in azole sensitivity of UK P. brassicae populations will continue. The implications for the future control of light leaf spot are discussed.

Relevance:

10.00%

Abstract:

Mass loss by glaciers has been an important contributor to sea level rise in the past, and is projected to contribute a substantial fraction of total sea level rise during the 21st century. Here, we use a model of the world's glaciers to quantify equilibrium sensitivities of global glacier mass to climate change, and to investigate the role of changes in glacier hypsometry for long-term mass changes. We find that 21st century glacier-mass loss is largely governed by the glaciers' response to 20th century climate change. This limits the influence of 21st century climate change on glacier-mass loss, and explains why there are relatively small differences in glacier-mass loss under greatly different scenarios of climate change. The projected future changes in both temperature and precipitation experienced by glaciers are amplified relative to the global average. The projected increase in precipitation partly compensates for the mass loss caused by warming, but this compensation is negligible at higher temperature anomalies, since an increasing fraction of precipitation at the glacier sites is liquid. Loss of low-lying glacier area and, more importantly, the eventual complete disappearance of glaciers strongly limit the projected sea level contribution from glaciers in coming centuries. The adjustment of glacier hypsometry to changes in the forcing strongly reduces the rates of global glacier-mass loss caused by changes in global mean temperature, compared to rates of mass loss when hypsometric changes are neglected. This result is a second reason for the relatively weak dependence of glacier-mass loss on the future climate scenario, and helps explain why glacier-mass loss in the first half of the 20th century was of the same order of magnitude as in the second half, even though the rate of warming was considerably smaller.
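
The toy calculation below (not the paper's glacier model) illustrates the hypsometric feedback described above: if mass loss is taken as proportional to the area lying below an equilibrium-line altitude (ELA), then wasting away that low-lying area progressively reduces the loss rate. The band geometry, ELA shift and loss coefficient are all assumptions for illustration.

```python
import numpy as np

elev = np.arange(2000, 4100, 100, dtype=float)   # elevation bands (m), assumed geometry
area = np.ones_like(elev)                        # 1 km^2 per band initially, assumed
ela = 3000.0 + 150.0 * 2.0                       # ELA raised ~150 m per K for an assumed 2 K warming

for step in range(4):
    below = elev < ela
    loss_rate = 0.02 * area[below].sum()         # km^2 yr^-1, proportional to area below the ELA
    print(f"step {step}: total area {area.sum():5.2f} km^2, loss rate {loss_rate:.3f} km^2/yr")
    area[below] = np.maximum(area[below] - 0.25, 0.0)   # waste away low-lying area between steps
```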

Relevance:

10.00%

Abstract:

Single-column models (SCMs) are useful test beds for investigating the parameterization schemes of numerical weather prediction and climate models. The usefulness of SCM simulations is limited, however, by the accuracy of the prescribed best-estimate large-scale observations. Errors in estimating the observations result in uncertainty in the modelled simulations. One method to address this uncertainty is to simulate an ensemble whose members span the observational uncertainty. This study first derives an ensemble of large-scale data for the Tropical Warm Pool International Cloud Experiment (TWP-ICE) based on an estimate of a possible source of error in the best-estimate product. These data are then used to carry out simulations with 11 SCMs and two cloud-resolving models (CRMs). Best-estimate simulations are also performed. All models show that moisture-related variables are close to observations, and there are limited differences between the best-estimate and ensemble-mean values. The models, however, show different sensitivities to changes in the forcing, particularly when the forcing is weak. The ensemble simulations highlight important differences in the surface evaporation term of the moisture budget between the SCMs and CRMs. Differences are also apparent between the models in the ensemble-mean vertical structure of cloud variables, while for each model, cloud properties are relatively insensitive to forcing. The ensemble is further used to investigate cloud variables and precipitation, and identifies differences between CRMs and SCMs, particularly for relationships involving ice. This study highlights the additional analysis that can be performed using ensemble simulations, and hence enables a more complete model investigation than the more traditional approach of a single best-estimate simulation.
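
A minimal sketch of the ensemble-generation idea, under assumed numbers: perturb a best-estimate large-scale forcing profile with noise whose amplitude represents the estimated observational error. The profile shape, error magnitude and ensemble size below are placeholders, not the TWP-ICE derivation.

```python
import numpy as np

rng = np.random.default_rng(42)
levels = np.linspace(1000.0, 100.0, 40)                           # pressure levels (hPa), assumed
best_estimate_omega = -0.1 * np.sin(np.linspace(0.0, np.pi, 40))  # large-scale vertical motion (Pa/s), assumed

n_members = 100
sigma = 0.03          # assumed 1-sigma observational error in the forcing (Pa/s)
ensemble = best_estimate_omega + rng.normal(0.0, sigma, size=(n_members, levels.size))

print(f"mid-tropospheric forcing spread (1-sigma): {ensemble[:, 20].std():.3f} Pa/s")
```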

Relevance:

10.00%

Abstract:

Within the ESA Climate Change Initiative (CCI) project Aerosol_cci (2010–2013), algorithms for the production of long-term total column aerosol optical depth (AOD) datasets from European Earth Observation sensors are developed. Starting with eight existing precursor algorithms, three analysis steps are conducted to improve and qualify the algorithms: (1) a series of experiments applied to one month of global data to understand several major sensitivities to assumptions needed owing to the ill-posed nature of the underlying inversion problem; (2) a round-robin exercise in which the "best" version of each algorithm (defined using the outcome of step 1) is applied to four months of global data to identify mature algorithms; and (3) a comprehensive validation exercise applied to one complete year of global data produced by the algorithms selected as mature in the round-robin exercise. The algorithms tested included four using AATSR, three using MERIS and one using PARASOL. This paper summarizes the first step. Three experiments were conducted to assess the potential impact of major assumptions in the various aerosol retrieval algorithms. In the first experiment, a common set of four aerosol components was used to provide all algorithms with the same assumptions. The second experiment introduced an aerosol property climatology, derived from a combination of model and sun-photometer observations, as a priori information on the occurrence of the common aerosol components in the retrievals. The third experiment assessed the impact of using a common nadir cloud mask for the AATSR and MERIS algorithms, in order to characterize the sensitivity to remaining cloud contamination in the retrievals against the baseline dataset versions. The impact of the algorithm changes was assessed for one month (September 2008) of data: qualitatively by inspection of monthly mean AOD maps, and quantitatively by comparing daily gridded satellite data against daily averaged AERONET sun-photometer observations for the different versions of each algorithm, globally (land and coastal) and for three regions with different aerosol regimes. The analysis allowed an assessment of the sensitivities of all algorithms, which helped define the best algorithm versions for the subsequent round-robin exercise; all algorithms (except for MERIS) showed some, in parts significant, improvement. In particular, using common aerosol components, and partly also the a priori aerosol-type climatology, is beneficial. On the other hand, the use of an AATSR-based common cloud mask produced a clear improvement (though with a significant reduction of coverage) for the MERIS standard product, but not for the algorithms using AATSR. All these observations are largely consistent across the five analyses (global land, global coastal and three regional), which is to be expected, since the set of aerosol components defined in Sect. 3.1 was explicitly designed to cover different global aerosol regimes (with low- and high-absorption fine modes, sea salt and dust).
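
As an illustration of the quantitative validation step (daily gridded satellite AOD compared against daily averaged AERONET observations), the sketch below computes typical matchup statistics on synthetic placeholder data, not Aerosol_cci or AERONET products.

```python
import numpy as np

rng = np.random.default_rng(7)
aeronet_aod = rng.gamma(shape=2.0, scale=0.1, size=500)         # daily station-mean AOD, synthetic
satellite_aod = aeronet_aod + rng.normal(0.0, 0.08, size=500)   # collocated gridded retrieval, synthetic

bias = np.mean(satellite_aod - aeronet_aod)
rmse = np.sqrt(np.mean((satellite_aod - aeronet_aod) ** 2))
corr = np.corrcoef(satellite_aod, aeronet_aod)[0, 1]
print(f"bias = {bias:.3f}, RMSE = {rmse:.3f}, r = {corr:.2f}")
```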

Relevance:

10.00%

Abstract:

Pronounced intermodel differences in the projected response of land surface precipitation (LSP) to future anthropogenic forcing remain in the Coupled Model Intercomparison Project Phase 5 model integrations. A large fraction of the intermodel spread in projected LSP trends is demonstrated here to be associated with systematic differences in simulated sea surface temperature (SST) trends, especially the representation of changes in (i) the interhemispheric SST gradient and (ii) tropical Pacific SSTs. By contrast, intermodel differences in global mean SST, representative of differing global climate sensitivities, exert limited systematic influence on LSP patterns. These results highlight the importance to regional terrestrial precipitation changes of properly simulating the spatial distribution of large-scale, remote changes as reflected in the SST response to increasing greenhouse gases. Moreover, they provide guidance regarding which region-specific precipitation projections may potentially be better constrained for use in climate change impact assessments.
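
A minimal sketch of the kind of across-model diagnostic described above: regress each model's projected LSP trend on its simulated change in the interhemispheric SST gradient. The model values are synthetic placeholders, not CMIP5 output.

```python
import numpy as np

rng = np.random.default_rng(3)
n_models = 30
sst_gradient_trend = rng.normal(0.0, 0.2, n_models)   # NH-minus-SH SST trend (K/century), synthetic
lsp_trend = 0.5 * sst_gradient_trend + rng.normal(0.0, 0.05, n_models)   # LSP trend, synthetic

slope, intercept = np.polyfit(sst_gradient_trend, lsp_trend, 1)
r_squared = np.corrcoef(sst_gradient_trend, lsp_trend)[0, 1] ** 2
print(f"slope = {slope:.2f}, intermodel variance explained = {r_squared:.2f}")
```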

Relevance:

10.00%

Abstract:

Previous studies have shown that subjects with Internet gaming disorder (IGD) exhibit impaired executive control and enhanced reward sensitivity compared with healthy controls. However, how these two networks (executive control and reward) jointly affect the valuation process and drive IGD subjects' online-game-seeking behaviors remains unknown. Thirty-five IGD subjects and 36 healthy controls underwent a resting-state scan in an MRI scanner. Functional connectivity (FC) was examined within seed regions of the executive control and reward networks, respectively. The nucleus accumbens (NAcc) was selected as the node for examining the interactions between these two networks. IGD subjects show decreased FC in the executive control network and increased FC in the reward network compared with healthy controls. When examining the correlations between the NAcc and the executive control/reward networks, the strength of the NAcc–executive control network link is negatively related to that of the NAcc–reward network link. The changes (decrease/increase) in IGD subjects' brain synchrony in the control/reward networks suggest inefficient processing within the control circuitry and excessive processing within the reward circuitry. The inverse relationship between the control and reward networks in IGD suggests that impairments in executive control lead to inefficient inhibition of enhanced cravings for excessive online game playing. This might shed light on the mechanistic understanding of IGD.
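
The sketch below shows the standard form of the seed-based FC computation implied above (Pearson correlation between a seed time series, e.g. the NAcc, and other regional time series); the synthetic BOLD signals are placeholders, not the study's fMRI data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_regions = 200, 10
bold = rng.normal(size=(n_timepoints, n_regions))   # placeholder BOLD time series
seed = bold[:, 0]                                   # treat region 0 as the NAcc seed

fc = np.array([np.corrcoef(seed, bold[:, j])[0, 1] for j in range(1, n_regions)])
print("seed-to-region functional connectivity:", np.round(fc, 2))
```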

Relevance:

10.00%

Abstract:

Operational forecasting centres are currently developing data assimilation systems for coupled atmosphere-ocean models. Strongly coupled assimilation, in which a single assimilation system is applied to a coupled model, presents significant technical and scientific challenges. Hence weakly coupled assimilation systems are being developed as a first step, in which the coupled model is used to compare the current state estimate with observations, but corrections to the atmosphere and ocean initial conditions are then calculated independently. In this paper we provide a comprehensive description of the different coupled assimilation methodologies in the context of four-dimensional variational assimilation (4D-Var) and use an idealised framework to assess the expected benefits of moving towards coupled data assimilation. We implement an incremental 4D-Var system within an idealised single-column atmosphere-ocean model. The system has the capability to run strongly and weakly coupled assimilations as well as uncoupled atmosphere-only or ocean-only assimilations, thus allowing a systematic comparison of the different strategies for treating the coupled data assimilation problem. We present results from a series of identical twin experiments devised to investigate the behaviour and sensitivities of the different approaches. Overall, our study demonstrates the potential benefits that may be expected from coupled data assimilation. When compared to uncoupled initialisation, coupled assimilation is able to produce more balanced initial analysis fields, thus reducing initialisation shock and its impact on the subsequent forecast. Single-observation experiments demonstrate how coupled assimilation systems are able to pass information between the atmosphere and ocean and therefore use near-surface data to greater effect. We show that much of this benefit may also be gained from a weakly coupled assimilation system, but that this can be sensitive to the parameters used in the assimilation.
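
A minimal sketch of an incremental 4D-Var cost function for a toy coupled linear model is given below; the dynamics matrix, error covariances and innovations are assumed illustrative values, not the paper's single-column atmosphere-ocean system.

```python
import numpy as np
from scipy.optimize import minimize

n, n_obs_times = 2, 3                      # one "atmosphere" and one "ocean" variable, 3 obs times (assumed)
M = np.array([[0.9, 0.1], [0.05, 0.95]])   # assumed coupled linear dynamics x_{k+1} = M x_k
H = np.eye(n)                              # both variables observed directly (assumed)
B = np.diag([1.0, 0.5])                    # background-error covariance (assumed)
R = 0.1 * np.eye(n)                        # observation-error covariance (assumed)
d = [np.array([0.3, -0.1])] * n_obs_times  # innovations (obs minus background trajectory), assumed

def cost(dx0):
    """Incremental 4D-Var cost: background term plus misfit to innovations over the window."""
    J = 0.5 * dx0 @ np.linalg.solve(B, dx0)
    dx = dx0.copy()
    for k in range(n_obs_times):
        resid = H @ dx - d[k]
        J += 0.5 * resid @ np.linalg.solve(R, resid)
        dx = M @ dx                         # propagate the increment with the (tangent) linear model
    return J

# In a weakly coupled variant the atmosphere and ocean blocks of dx0 would be solved for
# separately, each with its own B block, while the coupled model still provides the innovations d.
dx_opt = minimize(cost, np.zeros(n)).x
print("analysis increment:", np.round(dx_opt, 3))
```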

Relevance:

10.00%

Abstract:

Earlier accounting studies have shown that an understanding of agenda entry is critical to understanding the accounting standard-setting process; consider Walker and Robinson (1993, 1994) and Ryan (1998), and, more generally, agenda entrance as theorized by Kingdon (2011). In 2003, the IASB placed on its agenda a project to promulgate a standard for small and medium-sized entities (SMEs), and this project provides our focus. It seemed to be a departure from the IASB's constitutional focus on capital market participants. Kingdon's three-streams model of agenda entry helps to identify some of the complexities, related to politics and the messiness of decision making, that resulted in a standard-setting project for simplified IFRS, misleadingly titled IFRS for SMEs. These complexities relate to the broader international regulatory context, including the boundaries of the IASB's standard-setting jurisdiction, the role of board members in changing those boundaries, and sensitivities over language so acute that the IASB could not agree on a suitably descriptive title. The paper shows similarities with the earlier agenda entrance studies by Walker and Robinson (1994) and Ryan (1998). Drawing on interviewees' recollections and other material, it especially reinforces the part played by nuanced complexities in shaping what emerges as an international accounting standard.