Abstract:
The 11-yr solar cycle temperature response to spectrally resolved solar irradiance changes and associated ozone changes is calculated using a fixed dynamical heating (FDH) model. Imposed ozone changes are from satellite observations, in contrast to some earlier studies. A maximum of 1.6 K is found in the equatorial upper stratosphere and a secondary maximum of 0.4 K in the equatorial lower stratosphere, forming a double peak in the vertical. The upper maximum is primarily due to the irradiance changes while the lower maximum is due to the imposed ozone changes. The results compare well with analyses using the 40-yr ECMWF Re-Analysis (ERA-40) and NCEP/NCAR datasets. The equatorial lower stratospheric structure is reproduced even though, by definition, the FDH calculations exclude dynamically driven temperature changes, suggesting an important role for an indirect dynamical effect through ozone redistribution. The results also suggest that differences between the Stratospheric Sounding Unit (SSU)/Microwave Sounding Unit (MSU) and ERA-40 estimates of the solar cycle signal can be explained by the poor vertical resolution of the SSU/MSU measurements. The adjusted radiative forcing of climate change is also investigated. The forcing due to irradiance changes was 0.14 W m⁻², which is only 78% of the value obtained by employing the standard method of simple scaling of the total solar irradiance (TSI) change. The difference arises because much of the change in TSI is at wavelengths where ozone absorbs strongly. The forcing due to the ozone change was only 0.004 W m⁻² owing to strong compensation between negative shortwave and positive longwave forcings.
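The 78% figure can be reproduced approximately from first principles. Below is a minimal sketch of the "standard method" of simple TSI scaling mentioned in the abstract, assuming a typical solar-cycle TSI change of about 1 W m⁻² and a planetary albedo of 0.3; both values are textbook assumptions, not numbers taken from the paper.

```python
# Sketch (not the paper's code): the standard simple-scaling estimate of
# solar radiative forcing, against which the spectrally resolved value of
# 0.14 W m^-2 is compared. delta_tsi and albedo are assumed values.

delta_tsi = 1.0   # solar-cycle TSI change, W m^-2 (assumed)
albedo = 0.3      # planetary albedo (assumed)

# Divide by 4 for sphere/disc geometry and remove the reflected fraction.
forcing_scaled = delta_tsi * (1.0 - albedo) / 4.0

forcing_spectral = 0.14  # spectrally resolved value from the abstract, W m^-2

print(f"simple TSI scaling : {forcing_scaled:.3f} W m^-2")   # ~0.175
print(f"spectrally resolved: {forcing_spectral:.3f} W m^-2")
print(f"ratio              : {forcing_spectral / forcing_scaled:.0%}")
```

With these assumed inputs the simple scaling gives about 0.175 W m⁻², of which the reported 0.14 W m⁻² is roughly 80%, consistent with the quoted 78%.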
Abstract:
Nitrogen oxide (NO) biogenic emissions from soils are driven by soil and environmental parameters, and the relationship between these parameters and NO fluxes is highly non-linear. A new algorithm, based on a neural network calculation, is used to reproduce the NO biogenic emissions linked to precipitation in the Sahel on 6 August 2006 during the AMMA campaign. This algorithm has been coupled into the surface scheme of a coupled chemistry-dynamics model (MesoNH Chemistry) to estimate the impact of the NO emissions on NOx and O3 formation in the lower troposphere for this particular episode. Four simulations on the same domain and over the same period are compared: one with anthropogenic emissions only, one with soil NO emissions from a static inventory at low temporal and spatial resolution, one with NO emissions from the neural network, and one with NO from the neural network plus lightning NOx. The influence of NOx from lightning is limited to the upper troposphere. The NO emission from soils calculated with the neural network responds to changes in soil moisture, giving enhanced emissions over the wetted soil, as observed by aircraft measurements after the passage of a convective system. The subsequent enhancement of NOx and ozone is limited to the lowest layers of the atmosphere in the model, whereas measurements show higher concentrations above 1000 m. The neural network algorithm, applied in the Sahel region for one particular day of the wet season, allows an immediate response of fluxes to environmental parameters, unlike static emission inventories. A companion paper, Stewart et al. (2008), examines NOx and ozone concentrations in the boundary layer as measured on a research aircraft, how they vary with soil moisture (as indicated by surface temperature anomalies), and the NOx fluxes deduced from them. In the present paper the model-derived results are compared to the observations and calculated fluxes presented by Stewart et al. (2008).
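As a purely structural illustration of the kind of algorithm described, the sketch below implements a small feed-forward network mapping two soil parameters to an NO flux. The inputs, layer sizes and weights are invented; it does not reproduce the trained network used in the study, only the shape of the calculation.

```python
# Structural sketch of a neural-network emission algorithm: a tiny
# feed-forward net mapping soil/environment parameters to an NO flux.
# Weights are random stand-ins for a trained network (hypothetical).
import numpy as np

rng = np.random.default_rng(0)

def no_flux(soil_moisture, soil_temp, w1, b1, w2, b2):
    """Forward pass: two inputs -> tanh hidden layer -> scalar flux."""
    x = np.array([soil_moisture, soil_temp])
    h = np.tanh(w1 @ x + b1)       # hidden layer
    return float(w2 @ h + b2)      # linear output: NO flux (arbitrary units)

w1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
w2, b2 = rng.normal(size=4), rng.normal()

# A wetting event (step in soil moisture) changes the predicted flux
# immediately, which is the key advantage over a static inventory.
for moisture in (0.05, 0.30):
    print(f"moisture={moisture:.2f} -> flux={no_flux(moisture, 30.0, w1, b1, w2, b2):.2f}")
```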
Abstract:
We have previously placed the solar contribution to recent global warming in context using observations and without recourse to climate models. It was shown that all solar forcings of climate have declined since 1987. The present paper extends that analysis to include the effects of the various time constants with which the Earth’s climate system might react to solar forcing. The solar input waveform over the past 100 years is defined using observed and inferred galactic cosmic ray fluxes, valid for either a direct effect of cosmic rays on climate or an effect via their known correlation with total solar irradiance (TSI), or for a combination of the two. The implications, and the relative merits, of the various TSI composite data series are discussed and independent tests reveal that the PMOD composite used in our previous paper is the most realistic. Use of the ACRIM composite, which shows a rise in TSI over recent decades, is shown to be inconsistent with most published evidence for solar influences on pre-industrial climate. The conclusions of our previous paper, that solar forcing has declined over the past 20 years while surface air temperatures have continued to rise, are shown to apply for the full range of potential time constants for the climate response to the variations in the solar forcings.
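The "time constants" in question enter as a first-order (exponential) impulse response with which the solar input waveform is convolved. The sketch below shows that calculation with an idealised sinusoidal 11-yr forcing standing in for the observed cosmic-ray and TSI series; the candidate time constants are arbitrary examples.

```python
# Sketch: convolve a solar input waveform with a normalised exponential
# climate response exp(-t/tau)/tau for several candidate time constants.
# The sinusoidal forcing is an idealisation, not the paper's input data.
import numpy as np

t = np.arange(0.0, 100.0, 0.1)             # years
forcing = np.sin(2 * np.pi * t / 11.0)     # idealised 11-yr cycle (assumed)

def response(forcing, t, tau):
    """Causal convolution with a first-order impulse response."""
    dt = t[1] - t[0]
    kernel = np.exp(-t / tau) / tau
    return np.convolve(forcing, kernel)[: len(t)] * dt

for tau in (1.0, 5.0, 20.0):               # candidate time constants, years
    r = response(forcing, t, tau)
    amp = r[len(t) // 2 :].ptp() / 2       # amplitude after spin-up
    print(f"tau = {tau:4.1f} yr -> response amplitude ~ {amp:.2f}")
```

Longer time constants damp and delay the response to the 11-yr variation, which is why the conclusion has to be demonstrated across the full range of plausible values rather than for a single choice.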
Abstract:
A multivariate fit to the variation in global mean surface air temperature anomaly over the past half century is presented. The fit procedure allows for the effect of response time on the waveform, amplitude and lag of each radiative forcing input, and each input is allowed to have its own time constant. It is shown that the contribution of solar variability to the temperature trend since 1987 is small and downward; the best estimate is −1.3%, with a 2σ uncertainty range of −0.7% to −1.9%. The result is the same whether the solar variation is quantified using galactic cosmic ray fluxes (for which the analysis can be extended back to 1953) or the most accurate total solar irradiance data composite. The rise in global mean surface air temperature is predominantly associated with a linear increase representing the combined effects of changes in anthropogenic well-mixed greenhouse gases and aerosols, although in recent decades there is also a considerable contribution from a relative lack of major volcanic eruptions. The best estimate is that the anthropogenic factors contribute 75% of the rise since 1987, with an uncertainty range (set by the 2σ confidence level using an AR(1) noise model) of 49–160%; thus the uncertainty is large, but we can state that at least half of the temperature trend comes from the linear term and that this term could explain the entire rise. The results are consistent with the Intergovernmental Panel on Climate Change (IPCC) estimates of the changes in radiative forcing (given for 1961–1995) and are here combined with those estimates to find the response times, equilibrium climate sensitivities and pertinent heat capacities (i.e. the depth into the oceans to which a given radiative forcing variation penetrates) of the quasi-periodic (decadal-scale) input forcing variations. As shown by previous studies, the decadal-scale variations do not penetrate as deeply into the oceans as the longer-term drifts and have shorter response times. Hence, conclusions about the response to century-scale forcing changes (and hence the associated equilibrium climate sensitivity and the temperature-rise commitment) cannot be drawn from studies of the response to shorter-period forcing changes.
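A schematic of this type of multivariate fit is sketched below: a temperature series is regressed on a linear anthropogenic term plus solar and volcanic inputs passed through first-order response filters. All series are synthetic placeholders and the time constants are fixed rather than fitted, unlike in the paper, where each input's time constant is a free parameter.

```python
# Schematic multivariate fit: temperature ~ linear trend + filtered solar
# + filtered volcanic. Synthetic data throughout (hypothetical).
import numpy as np

rng = np.random.default_rng(1)
n = 600                                     # months (assumed)
t = np.arange(n) / 12.0                     # years

def lowpass(x, tau_months):
    """First-order response filter with time constant tau (months)."""
    y = np.zeros_like(x)
    for i in range(1, len(x)):
        y[i] = y[i - 1] + (x[i] - y[i - 1]) / tau_months
    return y

solar = lowpass(np.sin(2 * np.pi * t / 11.0), 30)
volcanic = lowpass(-(rng.random(n) < 0.005).astype(float), 40)
trend = t / t[-1]                           # linear anthropogenic term
temp = 0.6 * trend + 0.05 * solar + 0.3 * volcanic \
       + 0.05 * rng.standard_normal(n)      # synthetic "observations"

X = np.column_stack([np.ones(n), trend, solar, volcanic])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
print(dict(zip(["const", "linear", "solar", "volcanic"], coef.round(3))))
```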
Abstract:
The identification and characterization of differential gene expression in tissues subjected to stress has gained much attention in plant research. Recognition of the elements involved in the response to a particular stress enhances the possibility of promoting crop improvement through direct genetic modification. However, the performance of some 'first-generation' transgenic plants incorporating a single gene has not always been as expected. These results have stimulated the development of new transgenic constructs that introduce more than one gene and are capable of modifying complex pathways. Several techniques are available for the analysis of gene regulation, and the information they provide forms the basis for novel constructs specifically designed to modify metabolism. This review deals with techniques that allow the identification and characterization of differentially expressed genes, and with the use of molecular pathway information to produce transgenic plants.
Abstract:
The distribution of the daily wintertime North Atlantic Oscillation (NAO) index in the 40-yr ECMWF Re-Analysis (ERA-40) is significantly negatively skewed. Dynamical and statistical analyses both suggest that this skewness reflects the presence of two distinct regimes, referred to as "Greenland blocking" and "subpolar jet." Changes both in the relative occurrence and in the structure of the regimes are shown to contribute to the long-term NAO trend over the ERA-40 period. This is contrasted with the simulation of the NAO in 100-yr control and doubled-CO2 integrations of the third climate configuration of the Met Office Unified Model (HadCM3). The model has clear deficiencies in its simulation of the NAO in the control run, so its predictions of future behavior must be treated with caution. However, the subpolar jet regime does become more dominant under anthropogenic forcing and, while this change is small, it is clearly statistically significant and represents a real change in the nature of NAO variability in the model.
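The underlying statistical point, that a mixture of two regimes with different means and occupation frequencies yields a skewed distribution, can be seen in a toy example; the regime parameters below are invented rather than diagnosed from ERA-40.

```python
# Toy illustration: a dominant "subpolar jet" state plus a less frequent
# "Greenland blocking" state displaced to negative values produces a
# negatively skewed index. All parameters are hypothetical.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(2)

jet = rng.normal(loc=0.3, scale=0.8, size=8000)        # dominant regime
blocking = rng.normal(loc=-1.5, scale=0.8, size=2000)  # minority regime
nao_index = np.concatenate([jet, blocking])

print(f"sample skewness: {skew(nao_index):.2f}")       # clearly negative
```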
Abstract:
As the ideal method of assessing the nutritive value of a feedstuff, namely offering it to the appropriate class of animal and recording the production response obtained, is neither practical nor cost-effective, a range of feed evaluation techniques has been developed. Each of these balances some degree of compromise with the practical situation against data generation. However, because animal-feed interactions act over and above feed composition, the target animal remains the ultimate arbiter of nutritional value. In this review, current in vitro feed evaluation techniques are examined according to the degree of animal-feed interaction. Chemical analysis provides absolute values and therefore differs from the majority of in vitro methods, which simply rank feeds; with no host animal involvement, estimates of nutritional value are inferred by statistical association. In addition, given the costs involved, the practical value of many of the analyses conducted should be reviewed. The in sacco technique has made a substantial contribution both to understanding rumen microbial degradative processes and to the rapid evaluation of feeds, especially in developing countries. However, the numerous shortfalls of the technique (common to many in vitro methods), the desire to eliminate the use of surgically modified animals for routine feed evaluation, and parallel improvements in in vitro techniques will see it increasingly replaced. The majority of in vitro systems use substrate disappearance to assess degradation; however, this provides no information on the quantity of derived end-products available to the host animal. As measurement of volatile fatty acids or microbial biomass production greatly increases analytical costs, fermentation gas release, a simple and non-destructive measurement, has been used as an alternative. Because gas release alone is of little use, gas-based systems in which degradation and fermentation gas release are measured simultaneously are attracting considerable interest. Alternative microbial inocula are being considered, as is the potential of multi-enzyme systems for examining degradation dynamics. It is concluded that while chemical analysis will continue to form an indispensable part of feed evaluation, enhanced use will be made of increasingly complex in vitro systems. It is vital, however, that the function and limitations of each methodology are fully understood, and that the temptation to over-interpret the data is avoided, so that appropriate conclusions are drawn. With careful selection and correct application, in vitro systems offer powerful research tools with which to evaluate feedstuffs.
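The review itself contains no calculations, but the gas-based systems it discusses are commonly summarised by fitting an exponential gas-production model to the cumulative gas readings. The sketch below fits one standard such form to synthetic data; the model choice and numbers are illustrative, not prescribed by the review.

```python
# Sketch: fit y(t) = A * (1 - exp(-k * (t - L))) to cumulative gas-
# production readings, giving asymptote A, fractional rate k and lag L.
# The data points are synthetic (hypothetical).
import numpy as np
from scipy.optimize import curve_fit

def gas_model(t, A, k, L):
    return A * (1 - np.exp(-k * np.clip(t - L, 0, None)))

t = np.array([2, 4, 8, 12, 24, 48, 72, 96], dtype=float)       # hours
gas = gas_model(t, 60.0, 0.08, 2.0) \
      + np.random.default_rng(5).normal(0, 1, t.size)          # ml, synthetic

(A, k, L), _ = curve_fit(gas_model, t, gas, p0=[50, 0.05, 1])
print(f"A = {A:.1f} ml, k = {k:.3f} /h, lag = {L:.1f} h")
```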
Abstract:
Biological emergencies such as the appearance of an exotic transboundary or emerging disease can become disasters. The question that faces Veterinary Services in developing countries is how to balance resources dedicated to active insurance measures, such as border control, surveillance, working with the governments of developing countries, and investing in improving veterinary knowledge and tools, with passive measures, such as contingency funds and vaccine banks. There is strong evidence that the animal health situation in developed countries has improved and is relatively stable. In addition, through trade with other countries, developing countries are becoming part of the international animal health system, the status of which is improving, though with occasional setbacks. However, despite these improvements, the risk of a possible biological disaster still remains, and has increased in recent times because of the threat of bioterrorism. This paper suggests that a model that combines decision tree analysis with epidemiology is required to identify critical points in food chains that should be strengthened to reduce the risk of emergencies and prevent emergencies from becoming disasters.
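A minimal sketch of the decision-tree element of the suggested model is given below: compare the expected cost of investing in active measures, which lower the probability of an outbreak, against relying on passive measures alone. All probabilities and costs are hypothetical.

```python
# Expected-cost comparison for two branches of a simple decision tree.
# Every number below is hypothetical, for illustration only.

def expected_cost(invest_cost, p_outbreak, outbreak_cost):
    """Expected cost of one branch: certain investment + risk-weighted loss."""
    return invest_cost + p_outbreak * outbreak_cost

# Branch 1: active measures (surveillance, border control) cut the risk.
active = expected_cost(invest_cost=5.0, p_outbreak=0.02, outbreak_cost=500.0)

# Branch 2: passive measures only (contingency fund, vaccine bank).
passive = expected_cost(invest_cost=1.0, p_outbreak=0.10, outbreak_cost=500.0)

print(f"active : {active:.1f}")    # 5 + 0.02*500 = 15.0
print(f"passive: {passive:.1f}")   # 1 + 0.10*500 = 51.0
```

In a full model the outbreak branch would itself expand into an epidemiological sub-tree (spread, detection, control), which is where the critical points in the food chain would be identified.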
Abstract:
Field experiments were conducted on field bean in the north-eastern part of the Republic of Croatia to compare weed control and crop response under different management practices within the critical period of field bean production. The practices consisted of broadcast application of the labelled rate of a preemergence (PRE) herbicide, and postemergence (POST) herbicide applied broadcast, as a band over the rows, or as a band combined with mechanical cultivation, using a range of the doses recommended by the manufacturer (2x, 1x, 1/2x, 1/4x, 1/8x). In 1999, weed control with PRE application of pendimethalin was superior to POST bentazone application owing to late emergence of weeds and the lack of residual herbicide control. In 2000, bentazone combined with cycloxydim controlled weeds in field bean better than the PRE herbicide application. Based on the results of this research, a single PRE or POST application of herbicide neither controlled a broad spectrum of weeds nor provided commercially acceptable full-season control. Reduced rates of herbicide are not advisable under high weed pressure.
Abstract:
In a sequential clinical trial, accrual of data on patients often continues after the stopping criterion for the study has been met. This is termed “overrunning.” Overrunning occurs mainly when the primary response from each patient is measured after some extended observation period. The objective of this article is to compare two methods of allowing for overrunning. In particular, simulation studies are reported that assess the two procedures in terms of how well they maintain the intended type I error rate. The effect on power resulting from the incorporation of “overrunning data” using the two procedures is evaluated.
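A hedged sketch of this kind of simulation study: estimate the type I error rate of a two-stage one-sided design when overrun data are either ignored ("deletion") or naively folded into the test statistic. The boundaries and the naive rule below are illustrative choices, not necessarily the two procedures compared in the article.

```python
# Simulation under H0: two-stage design, early stopping at the interim
# look, with m "overrunning" observations accruing after stopping.
# Boundaries approximate a two-look O'Brien-Fleming design (assumed).
import numpy as np

rng = np.random.default_rng(3)
n1, n2, m = 50, 100, 20            # interim n, final n, overrun size (assumed)
c1, c2 = 2.797, 1.977              # one-sided stopping boundaries (assumed)

def one_trial():
    x = rng.standard_normal(n2 + m)                 # H0: mean 0, sd 1
    z1 = x[:n1].sum() / np.sqrt(n1)
    if z1 > c1:                                     # stop early for efficacy
        reject_delete = True                        # deletion: ignore overrun
        z_over = x[:n1 + m].sum() / np.sqrt(n1 + m)
        reject_naive = z_over > c1                  # naive: re-test with overrun
    else:
        z2 = x[:n2].sum() / np.sqrt(n2)
        reject_delete = reject_naive = z2 > c2
    return reject_delete, reject_naive

results = np.array([one_trial() for _ in range(100_000)])
print("type I error, deletion:", results[:, 0].mean())
print("type I error, naive   :", results[:, 1].mean())
```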
Abstract:
Pharmacogenetic trials investigate the effect of genotype on treatment response. When there are two or more treatment groups and two or more genetic groups, investigation of gene-treatment interactions is of key interest. However, calculation of the power to detect such interactions is complicated, because it depends not only on the treatment effect size within each genetic group, but also on the number of genetic groups, the size of each genetic group, and the type of genetic effect that is both present and tested for. The scale chosen to measure the magnitude of an interaction can also be problematic, especially in the binary case. Elston et al. proposed a test for detecting the presence of gene-treatment interactions for binary responses, together with appropriate power calculations. This paper shows how the same approach can also be used for normally distributed responses. We also propose a method for analysis and sample size calculation based on a generalized linear model (GLM) approach. The power of the Elston et al. and GLM approaches is compared for the binary and normal cases using several illustrative examples. While more sensitive to errors in model specification than the Elston et al. approach, the GLM approach is much more flexible and in many cases more powerful.
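An illustrative simulation-based power calculation in the spirit of the GLM approach, for the binary case: simulate a logistic model with treatment, genotype and their interaction, and count how often the interaction term is significant. Effect sizes, sample size and carrier frequency are invented for the example.

```python
# Simulation-based power for a gene-treatment interaction in a logistic
# GLM. All design numbers are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n, reps, alpha = 400, 500, 0.05

def one_rep():
    treat = rng.integers(0, 2, n)                   # randomised arm
    gene = (rng.random(n) < 0.3).astype(int)        # carrier freq 0.3 (assumed)
    # log-odds with a gene-treatment interaction of 0.8 (assumed)
    eta = -0.5 + 0.4 * treat + 0.2 * gene + 0.8 * treat * gene
    y = (rng.random(n) < 1 / (1 + np.exp(-eta))).astype(float)
    X = sm.add_constant(np.column_stack([treat, gene, treat * gene]))
    res = sm.GLM(y, X, family=sm.families.Binomial()).fit()
    return res.pvalues[-1] < alpha                  # test the interaction

power = np.mean([one_rep() for _ in range(reps)])
print(f"estimated power for the interaction term: {power:.2f}")
```

For normally distributed responses the same loop applies with sm.OLS (or a Gaussian GLM) in place of the Binomial family.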
Abstract:
The platelet surface is a dynamic interface that changes rapidly in response to stimuli to coordinate the formation of thrombi at sites of vascular injury. Tight control is essential, as loss of organisation may result in the inappropriate formation of thrombi (thrombosis) or excessive bleeding. In this paper we describe the comparative analysis of resting and thrombin-stimulated platelet membrane proteomes and associated proteins to identify proteins important to platelet function. Surface proteins were labelled using a biotin tag and isolated by NeutrAvidin affinity chromatography. Liquid-phase IEF and SDS-PAGE were used to separate proteins, and bands of increased intensity in the stimulated platelet fractions were digested and identified by FT-ICR mass spectrometry. Novel proteins were identified along with proteins known to be translocated to the platelet surface. Furthermore, many platelet proteins revealed changes in location associated with function, including G6B and HIP-55. HIP-55 is an SH3-binding protein important in T-cell receptor signalling. Further analysis revealed that HIP-55 becomes increasingly associated with both Syk and integrin beta 3 upon platelet activation. Analysis of HIP-55-deficient platelets revealed reduced fibrinogen binding upon thrombin stimulation, suggesting that HIP-55 is an important regulator of platelet function.
Abstract:
Individuals are typically co-infected by a diverse community of microparasites (e.g. viruses or protozoa) and macroparasites (e.g. helminths). Vertebrates respond to these parasites differently, typically mounting T helper type 1 (Th1) responses against microparasites and Th2 responses against macroparasites. These two responses may be antagonistic such that hosts face a 'decision' of how to allocate potentially limiting resources. Such decisions at the individual host level will influence parasite abundance at the population level which, in turn, will feed back upon the individual level. We take a first step towards a complete theoretical framework by placing an analysis of optimal immune responses under microparasite-macroparasite co-infection within an epidemiological framework. We show that the optimal immune allocation is quantitatively sensitive to the shape of the trade-off curve and qualitatively sensitive to life-history traits of the host, microparasite and macroparasite. This model represents an important first step in placing optimality models of the immune response to co-infection into an epidemiological framework. Ultimately, however, a more complete framework is needed to bring together the optimal strategy at the individual level and the population-level consequences of those responses, before we can truly understand the evolution of host immune responses under parasite co-infection.
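A conceptual sketch of the individual-level optimisation: minimise the combined cost of a microparasite and a macroparasite infection when a fixed immune resource is split between the Th1 and Th2 arms, with the trade-off curvature varied. The functional forms and parameters are hypothetical, chosen only to show the sensitivity to the trade-off shape noted in the abstract.

```python
# Optimal split of immune resources between Th1 (microparasite) and Th2
# (macroparasite) arms under a trade-off curve. Forms are hypothetical.
from scipy.optimize import minimize_scalar

def total_cost(a, micro_burden, macro_burden, shape):
    """Cost of both infections when fraction a goes to Th1, 1-a to Th2."""
    th1, th2 = a**shape, (1 - a)**shape      # trade-off curve (assumed)
    return micro_burden / (1 + 4 * th1) + macro_burden / (1 + 4 * th2)

for shape in (0.5, 1.0, 2.0):                # curvature of the trade-off
    opt = minimize_scalar(total_cost, bounds=(0, 1), method="bounded",
                          args=(2.0, 1.0, shape))
    print(f"shape = {shape}: optimal Th1 allocation = {opt.x:.2f}")
```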
Abstract:
The technical comments by Getz and Lloyd-Smith, Ross, and Doncaster focus on specific aspects of our analysis and estimation and do not demonstrate any results opposing our key conclusion: that, contrary to what was previously believed, the relation between a population's growth rate (pgr) and its density is generally concave.
Abstract:
Differential protein expression analysis based on modification of selected amino acids with labelling reagents has become the major method of choice for quantitative proteomics. One such methodology, two-dimensional difference gel electrophoresis (2-D DIGE), uses a matched set of fluorescent N-hydroxysuccinimidyl (NHS) ester cyanine dyes to label lysine residues in different samples, which can then be run simultaneously on the same gels. Here we report the use of iodoacetylated cyanine (ICy) dyes, which label cysteine thiols, for 2-D DIGE-based redox proteomics. Characterisation of ICy dye labelling in relation to its stoichiometry, sensitivity and specificity is described, as well as comparison of ICy dye with NHS-Cy dye labelling and several protein staining methods. We have optimised conditions for labelling of non-reduced, denatured samples and report increased sensitivity for a subset of thiol-containing proteins, allowing accurate monitoring of redox-dependent thiol modifications and expression changes. Cysteine labelling was then combined with lysine labelling in a multiplex 2-D DIGE proteomic study of redox-dependent and ErbB2-dependent changes in epithelial cells exposed to oxidative stress. This study identifies differentially modified proteins involved in cellular redox regulation, protein folding, proliferative suppression, glycolysis and cytoskeletal organisation, revealing the complexity of the response to oxidative stress and the impact that overexpression of ErbB2 has on this response.