789 results for Digit Span
Abstract:
In this paper, we propose a scenario framework that could provide a scenario “thread” through the different climate research communities (climate change – vulnerability, impact, and adaptation (VIA) and mitigation) in order to support the assessment of mitigation and adaptation strategies and other VIA challenges. The scenario framework is organised around a matrix with two main axes: radiative forcing levels and socio-economic conditions. The radiative forcing levels (and the associated climate signal) are described by the new Representative Concentration Pathways. The second axis, socio-economic developments, comprises elements that affect the capacity for mitigation and adaptation, as well as the exposure to climate impacts. The proposed scenarios derived from this framework are limited in number, allow for comparison across various mitigation and adaptation levels, address a range of vulnerability characteristics, provide information across climate forcing and vulnerability states, and span a full century time scale. Assessments based on the proposed scenario framework would strengthen cooperation between integrated-assessment modelers, climate modelers and vulnerability, impact and adaptation researchers, and most importantly, facilitate the development of more consistent and comparable research within and across communities.
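As a purely illustrative sketch (not from the paper), the matrix structure just described can be pictured as cells indexed by a radiative forcing level on one axis and a socio-economic development pathway on the other; the pathway labels below are placeholders rather than names defined by the framework itself.

```python
from itertools import product

# Illustrative only: each matrix cell pairs a radiative forcing level (an RCP,
# in W/m^2) with a socio-economic development pathway.  Pathway labels are
# placeholders, not names defined by the scenario framework itself.
forcing_levels_wm2 = [2.6, 4.5, 6.0, 8.5]
socioeconomic_pathways = ["low challenges", "intermediate", "high challenges"]

scenario_matrix = {
    (rcp, pathway): f"RCP{rcp} x {pathway}"
    for rcp, pathway in product(forcing_levels_wm2, socioeconomic_pathways)
}

print(len(scenario_matrix), "candidate scenario cells")  # 4 forcing levels x 3 pathways = 12
```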
Abstract:
We present an intercomparison and verification analysis of 20 GCMs (Global Circulation Models) included in the 4th IPCC assessment report regarding their representation of the hydrological cycle on the Danube river basin for 1961–2000 and for the 2161–2200 SRESA1B scenario runs. The basin-scale properties of the hydrological cycle are computed by spatially integrating the precipitation, evaporation, and runoff fields using the Voronoi-Thiessen tessellation formalism. The span of the model-simulated mean annual water balances is of the same order of magnitude as the observed Danube discharge at the Delta; the true value is within the range simulated by the models. Some land components seem to have deficiencies since there are cases of violation of water conservation when annual means are considered. The overall performance and the degree of agreement of the GCMs are comparable to those of the RCMs (Regional Climate Models) analyzed in a previous work, in spite of the much higher resolution and common nesting of the RCMs. The reanalyses are shown to feature several inconsistencies and cannot be used as a verification benchmark for the hydrological cycle in the Danubian region. In the scenario runs, for basically all models the water balance decreases, whereas its interannual variability increases. Changes in the strength of the hydrological cycle are not consistent among models: it is confirmed that capturing the impact of climate change on the hydrological cycle is not an easy task over land areas. Moreover, in several cases we find that qualitatively different behaviors emerge among the models: the ensemble mean does not represent any sort of average model, and often it falls between the models’ clusters.
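A minimal sketch, assuming station values and their Thiessen (Voronoi) polygon areas are already available, of the area-weighted basin integration used to compute the water balance above; all numbers are invented placeholders, not Danube data.

```python
import numpy as np

# Hedged sketch: basin-integrated annual water balance P - E from per-polygon
# values, weighted by Thiessen (Voronoi) polygon areas.  Over long-term means
# this quantity is compared against the observed discharge at the river mouth.
precip_mm = np.array([620.0, 840.0, 710.0, 560.0])   # annual precipitation per polygon (placeholder)
evap_mm   = np.array([450.0, 520.0, 480.0, 430.0])   # annual evaporation per polygon (placeholder)
area_km2  = np.array([1.2e4, 0.8e4, 1.5e4, 0.9e4])   # Thiessen polygon areas (placeholder)

weights = area_km2 / area_km2.sum()
basin_p_minus_e_mm = np.sum(weights * (precip_mm - evap_mm))   # basin-mean P - E (mm/yr)

# Convert to a volume flux (km^3/yr) for comparison with observed discharge.
basin_volume_km3 = basin_p_minus_e_mm * 1e-6 * area_km2.sum()
print(f"P - E: {basin_p_minus_e_mm:.0f} mm/yr  ~  {basin_volume_km3:.1f} km^3/yr")
```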
Abstract:
In recent years, various efforts have been made in air traffic control (ATC) to maintain traffic safety and efficiency in the face of increasing air traffic demands. ATC is a complex process that depends to a large degree on human capabilities, and so understanding how controllers carry out their tasks is an important issue in the design and development of ATC systems. In particular, the human factor is considered to be a serious problem in ATC safety and has been identified as a causal factor in both major and minor incidents. There is, therefore, a need to analyse the mechanisms by which errors occur due to complex factors and to develop systems that can deal with these errors. From the cognitive process perspective, it is essential that system developers have an understanding of the more complex working processes that involve the cooperative work of multiple controllers. Distributed cognition is a methodological framework for analysing cognitive processes that span multiple actors mediated by technology. In this research, we attempt to analyse and model interactions that take place in en route ATC systems based on distributed cognition. We examine the functional problems in an ATC system from a human factors perspective, and conclude by identifying certain measures by which to address these problems. This research focuses on the analysis of air traffic controllers' tasks for en route ATC and modelling controllers' cognitive processes.
Abstract:
We present molecular dynamics (MD) and slip-springs model simulations of the chain segmental dynamics in entangled linear polymer melts. The time-dependent behavior of the segmental orientation autocorrelation functions and mean-square segmental displacements is analyzed for both flexible and semiflexible chains, with particular attention paid to the scaling relations among these dynamic quantities. Effective combination of the two simulation methods at different coarse-graining levels allows us to explore the chain dynamics for chain lengths ranging from Z ≈ 2 to 90 entanglements. For a given chain length of Z ≈ 15, the time scales accessed span more than 10 decades, covering all of the interesting relaxation regimes. The obtained time dependence of the monomer mean square displacements, g1(t), is in good agreement with the tube theory predictions. Results on the first- and second-order segmental orientation autocorrelation functions, C1(t) and C2(t), demonstrate a clear power-law relationship C2(t) ∝ [C1(t)]^m with m = 3, 2, and 1 in the initial, free Rouse, and entangled (constrained Rouse) regimes, respectively. The return-to-origin hypothesis, which leads to inverse proportionality between the segmental orientation autocorrelation functions and g1(t) in the entangled regime, is convincingly verified by the simulation result of C1(t) ∝ g1(t)^−1 ∝ t^−1/4 in the constrained Rouse regime, where for well-entangled chains both C1(t) and g1(t) are rather insensitive to the constraint release effects. However, the second-order correlation function, C2(t), shows much stronger sensitivity to the constraint release effects and experiences a protracted crossover from the free Rouse to the entangled regime. This crossover region extends for at least one decade in time longer than that of C1(t). The predicted time scaling behavior of C2(t) ∝ t^−1/4 is observed in slip-springs simulations only at a chain length of 90 entanglements, whereas shorter chains show higher scaling exponents. The reported simulation work can be applied to understanding the observations of NMR experiments.
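For readability, the scaling relations quoted above can be restated compactly in tube-model notation (this is only a summary of relations already stated in the abstract):

```latex
% Scaling relations summarized from the abstract (tube-model regimes).
\begin{align}
  C_2(t) &\propto \left[C_1(t)\right]^{m}, \qquad
     m = 3,\ 2,\ 1 \ \text{(initial, free-Rouse, constrained-Rouse regimes)},\\
  C_1(t) &\propto g_1(t)^{-1} \propto t^{-1/4}
     \quad \text{(return-to-origin relation, constrained-Rouse regime)},\\
  C_2(t) &\propto t^{-1/4}
     \quad \text{(observed only for well-entangled chains, } Z \approx 90\text{)}.
\end{align}
```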
Abstract:
1. Large female insects usually have high potential fecundity. Therefore selection should favour an increase in body size given that these females get opportunities to realize their potential advantage by maturing and laying more eggs. However, ectotherm physiology is strongly temperature-dependent, and activities are carried out sufficiently only within certain temperature ranges. Thus it remains unclear if the fecundity advantage of a large size is fully realized in natural environments, where thermal conditions are limiting. 2. Insect fecundity might be limited by temperature at two levels: first, eggs need to mature, and then the female needs time for strategic ovipositing of the egg. Since a female cannot foresee the number of oviposition opportunities that she will encounter on a given day, the optimal rate of egg maturation will be governed by trade-offs associated with egg- and time-limited oviposition. As females of different sizes will have different amounts of body reserves, size-dependent allocation trade-offs between the mother’s condition and her egg production might be expected. 3. In the temperate butterfly Pararge aegeria, the time and temperature dependence of oviposition and egg maturation, and the interrelatedness of these two processes were investigated in a series of laboratory experiments, allowing a decoupling of the time budgets for the respective processes. 4. The results show that realized fecundity of this species can be limited by both the temperature dependence of egg maturation and oviposition under certain thermal regimes. Furthermore, rates of oviposition and egg maturation seemed to have regulatory effects upon each other. Early reproductive output was correlated with short life span, indicating a cost of reproduction. Finally, large females matured more eggs than small females when deprived of oviposition opportunities. Thus, the optimal allocation of resources to egg production seems dependent on female size. 5. This study highlights the complexity of processes underlying rates of egg maturation and oviposition in ectotherms under natural conditions. We further discuss the importance of temperature variation for egg- vs. time-limited fecundity and the consequences for the evolution of female body size in insects.
Abstract:
Sediments from the Black Sea, a region historically dominated by forests and steppe landscapes, are a valuable source of detailed information on the changes in regional terrestrial and aquatic environments at decadal to millennial scales. Here we present multi-proxy environmental records (pollen, dinoflagellate cysts, Ca, Ti and oxygen isotope data) from the uppermost 305 cm of the core 22-GC3 (42°13.53′N, 36°29.55′E) collected from a water depth of 838 m in the southern part of the Black Sea in 2007. The records span the last ~ 18 kyr (all ages are given in cal kyr BP). The pollen data reveal the dominance of the Artemisia-steppe in the region, suggesting rather dry/cold environments ~ 18–14.5 kyr BP. Warming/humidity increase during melt-water pulses (~ 16.1–14.5 kyr BP), indicated by δ18O records from the 22-GC3 core sediment and from the Sofular Cave stalagmite, is expressed in more negative δ13C values from the Sofular Cave, usually interpreted as the spreading of C3 plants. The records representing the interstadial complex (~ 14.5–12.9 kyr BP) show an increase in temperature and moisture, indicated by forest development, increased primary productivity and reduced surface run-off, whereas the switch from primary terrigenous to primary authigenic Ca origin occurs ~ 500 yr later. The Younger Dryas cooling is clearly demonstrated by more negative δ13C values from the Sofular Cave and a reduction of pines. The early Holocene (11.7–8.5 kyr BP) interval reveals relatively dry conditions compared to the mostly moist and warm middle Holocene (8.5–5 kyr BP), which is characterized by the establishment of the species-rich warm mixed and temperate deciduous forests in the low elevation belt, temperate deciduous beech-hornbeam forests in the middle belt, and cool conifer forests in the upper mountain belt. The border between the early and middle Holocene in the vegetation records coincides with the opening of the Mediterranean corridor at ~ 8.3 kyr BP, as indicated by a marked change in the dinocyst assemblages and in the sediment lithology. Changes in the pollen assemblages indicate a reduction in forest cover after ~ 5 kyr BP, which was likely caused by increased anthropogenic pressure on the regional vegetation.
Abstract:
Particulate matter generated during the cooking process has been identified as one of the major problems of indoor air quality and indoor environmental health. Reliable assessment of exposure to cooking-generated particles requires accurate information on emission characteristics, especially the size distribution. This study characterizes the volume/mass-based size distribution of the fume particles at the oil-heating stage for typical Chinese-style cooking in a laboratory kitchen. A laser-diffraction size analyzer is applied to measure the volume frequency of fume particles ranging from 0.1 to 10 μm, which contribute most of the mass in PM2.5 and PM10. Measurements show that particle emissions have little dependence on the type of vegetable oil used but have a close relationship with the heating temperature. It is found that the volume frequency of fume particles in the range of 1.0–4.0 μm accounts for nearly 100% of PM0.1–10, with a mode diameter of 2.7 μm, median diameter of 2.6 μm, Sauter mean diameter of 3.0 μm, De Brouckere mean diameter of 3.2 μm, and a distribution span of 0.48. Such information on emission characteristics obtained in this study can be used to improve the assessment of indoor air quality due to PM0.1–10 in kitchens and residential flats.
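The reported statistics follow standard laser-diffraction definitions: D[3,2] (Sauter mean), D[4,3] (De Brouckere mean) and span = (D90 − D10)/D50. Below is a minimal sketch of how they are computed from a volume-frequency distribution; the bin diameters and volume fractions are invented placeholders, not the study's measurements.

```python
import numpy as np

# Hedged illustration: diameter statistics from a volume-frequency distribution.
d  = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 5.0])            # bin mid-diameters (um), placeholder
fv = np.array([0.01, 0.05, 0.12, 0.20, 0.24, 0.18, 0.12, 0.06, 0.02])   # volume fractions, placeholder
fv = fv / fv.sum()

mode_diameter = d[np.argmax(fv)]                       # bin with the largest volume frequency

# Percentiles of the cumulative volume distribution (D10, D50, D90).
cum = np.cumsum(fv)
d10, d50, d90 = (np.interp(p, cum, d) for p in (0.10, 0.50, 0.90))

sauter_d32       = fv.sum() / np.sum(fv / d)           # D[3,2] = sum(v_i) / sum(v_i / d_i)
de_brouckere_d43 = np.sum(fv * d) / fv.sum()           # D[4,3] = sum(v_i * d_i) / sum(v_i)
span = (d90 - d10) / d50                               # width of the distribution relative to the median

print(f"mode={mode_diameter}, D50={d50:.2f}, D[3,2]={sauter_d32:.2f}, "
      f"D[4,3]={de_brouckere_d43:.2f}, span={span:.2f}")
```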
Abstract:
The interplay between dietary nutrients, gut microbiota and mammalian host tissues of the gastrointestinal tract is recognised as highly relevant for host health. Combined transcriptome, metabonome and microbial profiling tools were employed to analyse the dynamic responses of germfree mouse colonic mucosa to colonisation by normal mouse microbiota (conventionalisation) at different time-points during 16 days. The colonising microbiota showed a shift from early (days 1 and 2) to later colonisers (days 8 and 16). The dynamic changes in the microbial community were rapidly reflected by the urine metabolic profiles (day 1) and at later stages (day 4 onward) by the colon mucosa transcriptome and metabolic profiles. Correlations of host transcriptomes, metabolite patterns and microbiota composition revealed associations between Bacilli and Proteobacteria, and differential expression of host genes involved in energy and anabolic metabolism. Differential gene expression correlated with scyllo- and myo-inositol, glutamine, glycine and alanine levels in colonic tissues during the time span of conventionalisation. Our combined time-resolved analyses may help to expand the understanding of host-microbe molecular interactions during the microbial establishment.
Abstract:
We present the results of simulations carried out with the Met Office Unified Model at 12km, 4km and 1.5km resolution for a large region centred on West Africa using several different representations of the convection processes. These span the range of resolutions from much coarser than the scale of the convective processes to cloud-system resolving, and thus encompass the intermediate "grey zone". The diurnal cycle in the extent of convective regions in the models is tested against observations from the Geostationary Earth Radiation Budget instrument on Meteosat-8. By this measure, the two best-performing simulations are a 12km model without convective parametrization, using Smagorinsky-style sub-grid-scale mixing in all three dimensions, and a 1.5km simulation with two-dimensional Smagorinsky mixing. Of these, the 12km model produces a better match to the magnitude of the total cloud fraction, but the 1.5km simulation gives better timing for its peak value. The results suggest that the previously reported improvement in the representation of the diurnal cycle of convective organisation in the 4km model compared to the standard 12km configuration is principally a result of the convection scheme employed rather than the improved resolution per se. The details of and implications for high-resolution model simulations are discussed.
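As a rough sketch of the kind of diagnostic described here (not the study's code or data), the diurnal cycle of convective/cloud fraction can be composited by hour of day before comparison against the satellite-derived curve; the hourly series below is synthetic.

```python
import numpy as np
import pandas as pd

# Hedged sketch: composite the mean diurnal cycle of cloud fraction by hour of day.
rng = np.random.default_rng(2)
times = pd.date_range("2006-07-01", periods=30 * 24, freq="h")   # one month of hourly output (synthetic)
hours = times.hour.to_numpy()
cloud_fraction = (0.2 + 0.15 * np.sin(2 * np.pi * (hours - 12) / 24)
                  + rng.normal(0, 0.02, len(times)))

diurnal = pd.Series(cloud_fraction, index=times).groupby(times.hour).mean()
peak_hour = diurnal.idxmax()
print(f"peak cloud fraction at hour {peak_hour} UTC: {diurnal.max():.2f}")
```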
Abstract:
We use new neutron scattering instrumentation to follow, in a single quantitative time-resolving experiment, the three key scales of structural development which accompany the crystallisation of synthetic polymers. These length scales span 3 orders of magnitude of the scattering vector. The study of polymer crystallisation dates back to the pioneering experiments of Keller and others, who discovered the chain-folded nature of the thin lamellar crystals normally found in synthetic polymers. The inherent connectivity of polymers makes their crystallisation a multiscale transformation. Much understanding has developed over the intervening fifty years, but the process has remained something of a mystery. There are three key length scales: the chain-folded lamellar thickness is ~10 nm, the crystal unit cell is ~1 nm, and the detail of the chain conformation is ~0.1 nm. In previous work these length scales have been addressed using different instrumentation or were coupled using compromised geometries. More recently researchers have attempted to exploit coupled time-resolved small-angle and wide-angle x-ray experiments. These turned out to be challenging experiments, largely because of the difficulty of placing the scattering intensity on an absolute scale. However, they did raise the possibility of new phenomena in the very early stages of crystallisation. Although there is now considerable doubt about such experiments, they drew attention to the basic question of how crystallisation proceeds in long chain molecules. We have used NIMROD on the second target station at ISIS to follow all three length scales in a time-resolving manner for poly(ε-caprolactone). The technique can provide a single set of data from 0.01 to 100 Å⁻¹ on the same vertical scale. We analyse the results using a multiple-scale model of the crystallisation process in polymers.
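As a rough guide to why a single 0.01–100 Å⁻¹ window covers all three structural levels simultaneously, the standard reciprocal-space relation q = 2π/d maps the quoted length scales onto scattering vectors well inside that range (values approximate):

```latex
% Approximate mapping between real-space length scale d and scattering vector q.
\begin{align}
  q &= \frac{2\pi}{d},\\
  d \approx 10\,\mathrm{nm}\ \text{(lamellar thickness)} &\;\Rightarrow\; q \approx 0.06\ \text{\AA}^{-1},\\
  d \approx 1\,\mathrm{nm}\ \text{(crystal unit cell)}    &\;\Rightarrow\; q \approx 0.6\ \text{\AA}^{-1},\\
  d \approx 0.1\,\mathrm{nm}\ \text{(chain conformation)} &\;\Rightarrow\; q \approx 6\ \text{\AA}^{-1}.
\end{align}
```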
Abstract:
Polygalacturonase-inhibiting proteins (PGIPs) are extracellular plant inhibitors of fungal endopolygalacturonases (PGs) that belong to the superfamily of Leu-rich repeat proteins. We have characterized the full complement of pgip genes in the bean (Phaseolus vulgaris) genotype BAT93. This comprises four clustered members that span a 50-kb region and, based on their similarity, form two pairs (Pvpgip1/Pvpgip2 and Pvpgip3/Pvpgip4). Characterization of the encoded products revealed both partial redundancy and subfunctionalization against fungal-derived PGs. Notably, the pair PvPGIP3/PvPGIP4 also inhibited PGs of two mirid bugs (Lygus rugulipennis and Adelphocoris lineolatus). Characterization of Pvpgip genes of Pinto bean showed variations limited to single synonymous substitutions or small deletions. A three-amino acid deletion encompassing a residue previously identified as crucial for recognition of PG of Fusarium moniliforme was responsible for the inability of BAT93 PvPGIP2 to inhibit this enzyme. Consistent with the large variations observed in the promoter sequences, reverse transcription-PCR expression analysis revealed that the different family members differentially respond to elicitors, wounding, and salicylic acid. We conclude that both biochemical and regulatory redundancy and subfunctionalization of pgip genes are important for the adaptation of plants to pathogenic fungi and phytophagous insects.
Abstract:
Data from various stations having different measurement record periods between 1988 and 2007 are analyzed to investigate the surface ozone concentration, long-term trends, and seasonal changes in and around Ireland. Time series statistical analysis is performed on the monthly mean data using seasonal and trend decomposition procedures and the Box-Jenkins approach (autoregressive integrated moving average). In general, ozone concentrations in the Irish region are found to have a negative trend at all sites except at the coastal sites of Mace Head and Valentia. Data from the most polluted Dublin city site have shown a very strong negative trend of −0.33 ppb/yr with a 95% confidence limit of 0.17 ppb/yr (i.e., −0.33 ± 0.17) for the period 2002−2007, and for the site near the city of Cork, the trend is found to be −0.20 ± 0.11 ppb/yr over the same period. The negative trend for other sites is more pronounced when the data span is considered from around the year 2000 to 2007. The rural sites of Wexford and Monaghan have also shown very strong negative trends of −0.99 ± 0.13 and −0.58 ± 0.12 ppb/yr, respectively, for the period 2000−2007. Mace Head, a site that is representative of ozone changes in the air advected from the Atlantic to Europe in the marine planetary boundary layer, has shown a positive trend of about +0.16 ± 0.04 ppb per annum over the entire period 1988−2007, but this positive trend has reduced during recent years (e.g., in the period 2001−2007). Cluster analysis of back trajectories is performed for the stations with a long data record, Mace Head and Lough Navar. For Mace Head, the northern and western clean air sectors have shown a similar positive trend (+0.17 ± 0.02 ppb/yr for the northern sector and +0.18 ± 0.02 ppb/yr for the western sector) for the whole period, but partial analysis for the clean western sector at Mace Head shows different trends during different time periods, with a decrease in the positive trend since 1988 indicating a deceleration in the ozone trend for Atlantic air masses entering Europe.
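A minimal sketch, assuming monthly mean ozone is available as a pandas series, of the seasonal-plus-trend style of analysis described above: an STL seasonal decomposition followed by an ordinary linear fit for the trend and its ~95% confidence limit. It does not reproduce the study's Box-Jenkins (ARIMA) modelling, and the synthetic series is a placeholder, not station data.

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.tsa.seasonal import STL

# Hedged sketch: deseasonalise monthly ozone, then estimate the linear trend.
rng = np.random.default_rng(0)
months = pd.date_range("2000-01", "2007-12", freq="MS")
t_years = np.arange(len(months)) / 12.0
ozone = 35 - 0.3 * t_years + 5 * np.sin(2 * np.pi * t_years) + rng.normal(0, 1, len(months))
series = pd.Series(ozone, index=months)            # synthetic stand-in for a station record (ppb)

decomp = STL(series, period=12).fit()              # trend + seasonal + residual components
deseasonalised = series - decomp.seasonal

fit = stats.linregress(t_years, deseasonalised)
print(f"trend = {fit.slope:+.2f} +/- {1.96 * fit.stderr:.2f} ppb/yr")
```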
Abstract:
A great explanatory gap lies between the molecular pharmacology of psychoactive agents and the neurophysiological changes they induce, as recorded by neuroimaging modalities. Causally relating the cellular actions of psychoactive compounds to their influence on population activity is experimentally challenging. Recent developments in the dynamical modelling of neural tissue have attempted to span this explanatory gap between microscopic targets and their macroscopic neurophysiological effects via a range of biologically plausible dynamical models of cortical tissue. Such theoretical models allow exploration of neural dynamics, in particular their modification by drug action. The ability to theoretically bridge scales is due to a biologically plausible averaging of cortical tissue properties. In the resulting macroscopic neural field, individual neurons need not be explicitly represented (as in neural networks). The following paper aims to provide a non-technical introduction to the mean field population modelling of drug action and its recent successes in modelling anaesthesia.
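To make the idea concrete, here is a generic two-population mean-field rate model (not the specific neural field model reviewed in the paper), in which a single "drug" parameter scales inhibitory synaptic gain as a loose stand-in for how anaesthetic action is often parameterised in such models; all weights and time constants are illustrative.

```python
import numpy as np

# Generic illustration: excitatory (E) / inhibitory (I) mean-field rate model.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate(drug_gain=1.0, t_max=1.0, dt=1e-3):
    """Integrate population rates; drug_gain scales the I -> E coupling."""
    n = int(t_max / dt)
    E, I = 0.1, 0.1
    tau_e, tau_i = 0.01, 0.02                        # time constants (s), illustrative
    w_ee, w_ei, w_ie, w_ii = 12.0, 10.0, 10.0, 2.0   # coupling weights, illustrative
    drive = 2.0                                      # external input to E
    trace = np.empty(n)
    for k in range(n):
        dE = (-E + sigmoid(w_ee * E - drug_gain * w_ei * I + drive)) / tau_e
        dI = (-I + sigmoid(w_ie * E - w_ii * I)) / tau_i
        E, I = E + dt * dE, I + dt * dI
        trace[k] = E
    return trace

baseline = simulate(drug_gain=1.0)
drugged  = simulate(drug_gain=2.0)                   # stronger inhibition suppresses E activity
print(f"mean E rate: baseline={baseline.mean():.3f}, drug={drugged.mean():.3f}")
```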
Abstract:
Ensemble-based data assimilation is rapidly proving itself as a computationally efficient and skilful assimilation method for numerical weather prediction, which can provide a viable alternative to more established variational assimilation techniques. However, a fundamental shortcoming of ensemble techniques is that the resulting analysis increments can only span a limited subspace of the state space, whose dimension is less than the ensemble size. This limits the amount of observational information that can effectively constrain the analysis. In this paper, a data selection strategy that aims to assimilate only the observational components that matter most and that can be used with both stochastic and deterministic ensemble filters is presented. This avoids unnecessary computations, reduces round-off errors and minimizes the risk of importing observation bias in the analysis. When an ensemble-based assimilation technique is used to assimilate high-density observations, the data-selection procedure allows the use of larger localization domains that may lead to a more balanced analysis. Results from the use of this data selection technique with a two-dimensional linear and a nonlinear advection model using both in situ and remote sounding observations are discussed.
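A conceptual sketch of the subspace argument above (not the paper's exact selection criterion): with m ensemble members, increments lie in the span of at most m − 1 observation-space perturbation directions, so one simple selection rule is to keep only those directions whose ensemble spread exceeds the observation error.

```python
import numpy as np

# Conceptual sketch: rotate observation space into directions actually spanned
# by the ensemble and retain only the components with useful signal-to-noise.
rng = np.random.default_rng(1)
n_state, n_obs, m = 50, 30, 10                         # state size, obs count, ensemble size (placeholders)

X = rng.normal(size=(n_state, m))                      # ensemble of model states (placeholder)
H = rng.normal(size=(n_obs, n_state)) / np.sqrt(n_state)  # linear observation operator (placeholder)
obs_err = 1.0                                          # observation error std dev (assumed uniform)

Xp = X - X.mean(axis=1, keepdims=True)                 # ensemble perturbations
Yp = H @ Xp / np.sqrt(m - 1)                           # perturbations mapped to observation space

U, s, _ = np.linalg.svd(Yp, full_matrices=False)       # directions spanned in observation space
keep = s > obs_err                                     # keep directions where spread exceeds obs error
print(f"retained {keep.sum()} of {len(s)} observation-space components")
```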
Abstract:
We compare future changes in global mean temperature in response to different future scenarios which, for the first time, arise from emission-driven rather than concentration-driven perturbed parameter ensembles of a global climate model (GCM). These new GCM simulations sample uncertainties in atmospheric feedbacks, land carbon cycle, ocean physics and aerosol sulphur cycle processes. We find broader ranges of projected temperature responses arising when considering emission rather than concentration-driven simulations (with 10–90th percentile ranges of 1.7 K for the aggressive mitigation scenario, up to 3.9 K for the high-end, business-as-usual scenario). A small minority of simulations resulting from combinations of strong atmospheric feedbacks and carbon cycle responses show temperature increases in excess of 9 K under the high-end scenario (RCP8.5) and, even under aggressive mitigation (RCP2.6), temperatures in excess of 4 K. While the simulations point to much larger temperature ranges for emission-driven experiments, they do not change existing expectations (based on previous concentration-driven experiments) on the timescales over which different sources of uncertainty are important. The new simulations sample a range of future atmospheric concentrations for each emission scenario. For both SRES A1B and the Representative Concentration Pathways (RCPs), the concentration scenarios used to drive GCM ensembles lie towards the lower end of our simulated distribution. This design decision (a legacy of previous assessments) is likely to lead concentration-driven experiments to under-sample strong feedback responses in future projections. Our ensemble of emission-driven simulations spans the global temperature response of the CMIP5 emission-driven simulations, except at the low end. Combinations of low climate sensitivity and low carbon cycle feedbacks lead a number of CMIP5 responses to lie below our ensemble range. The ensemble simulates a number of high-end responses which lie above the CMIP5 carbon cycle range. These high-end simulations can be linked to sampling a number of stronger carbon cycle feedbacks and to sampling climate sensitivities above 4.5 K. This latter aspect highlights the priority of identifying real-world climate-sensitivity constraints which, if achieved, would lead to reductions in the upper bound of projected global mean temperature change. The ensembles of simulations presented here provide a framework to explore relationships between present-day observables and future changes, while the large spread of future-projected changes highlights the ongoing need for such work.