Abstract:
The Greenland ice sheet will decline in volume in a warmer climate. If a sufficiently warm climate is maintained for a few thousand years, the ice sheet will be completely melted. This raises the question of whether the decline would be reversible: would the ice sheet regrow if the climate cooled down? To address this question, we conduct a number of experiments using a climate model and a high-resolution ice-sheet model. The experiments are initialised with ice-sheet states obtained from various points during its decline as simulated in a high-CO2 scenario, and they are then forced with a climate simulated for pre-industrial greenhouse gas concentrations, to determine the possible trajectories of subsequent ice-sheet evolution. These trajectories are not the reverse of the trajectory during decline. They converge on three different steady states. The original ice-sheet volume can be regained only if the volume has not fallen below a threshold of irreversibility, which lies between 80 and 90% of the original value. Depending on the degree of warming and the sensitivity of the climate and the ice sheet, this point of no return could be reached within a few hundred years, sooner than CO2 and global climate could revert to a pre-industrial state, and in that case a global sea level rise of at least 1.3 m would be irreversible. An even larger irreversible sea level rise of 5 m may occur if the ice-sheet volume drops below half of its current size. The set of steady states depends on the CO2 concentration. Since we expect the results to be quantitatively affected by resolution and other aspects of model formulation, we would encourage similar investigations with other models.
Observations of the depth of ice particle evaporation beneath frontal cloud to improve NWP modelling
Abstract:
The evaporation (sublimation) of ice particles beneath frontal ice cloud can provide a significant source of diabatic cooling which can lead to enhanced slantwise descent below the frontal surface. The strength and vertical extent of the cooling play a role in determining the dynamic response of the atmosphere, and an adequate representation is required in numerical weather-prediction (NWP) models for accurate forecasts of frontal dynamics. In this paper, data from a vertically pointing 94 GHz radar are used to determine the characteristic depth-scale of ice particle sublimation beneath frontal ice cloud. A statistical comparison is made with equivalent data extracted from the NWP mesoscale model operational at the Met Office, defining the evaporation depth-scale as the distance for the ice water content to fall to 10% of its peak value in the cloud. The results show that the depth of the ice evaporation zone derived from observations is less than 1 km for 90% of the time. The model significantly overestimates the sublimation depth-scales by a factor of between two and three, and underestimates the local ice water content by a factor of between two and four. Consequently the results suggest the model significantly underestimates the strength of the evaporative cooling, with implications for the prediction of frontal dynamics. A number of reasons for the model discrepancy are suggested. A comparison with radiosonde relative humidity data suggests part of the overestimation in evaporation depth may be due to a high RH bias in the dry slot beneath the frontal cloud, but other possible reasons include poor vertical resolution and deficiencies in the evaporation rate or ice particle fall-speed parametrizations.
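The depth-scale used above (the distance over which the ice water content falls to 10% of its peak value) is straightforward to compute from a vertical profile. The minimal sketch below illustrates the calculation on invented values; the profile numbers and the linear interpolation between radar gates are assumptions for illustration, not the paper's retrieval.

```python
import numpy as np

# Hypothetical profile of ice water content (IWC) through and below the cloud, top gate first.
height = np.array([5000, 4800, 4600, 4400, 4200, 4000, 3800, 3600])   # m
iwc    = np.array([0.02, 0.05, 0.08, 0.06, 0.03, 0.012, 0.006, 0.0])  # g m^-3

def evaporation_depth_scale(height, iwc):
    """Distance below the IWC peak at which IWC first falls to 10% of its peak value."""
    i_peak = int(np.argmax(iwc))
    threshold = 0.1 * iwc[i_peak]
    for i in range(i_peak + 1, len(iwc)):
        if iwc[i] <= threshold:
            # Linear interpolation between the two bracketing gates.
            frac = (iwc[i - 1] - threshold) / (iwc[i - 1] - iwc[i])
            z = height[i - 1] + frac * (height[i] - height[i - 1])
            return height[i_peak] - z
    return np.nan  # IWC never falls below 10% of the peak within this profile

print(f"Evaporation depth-scale: {evaporation_depth_scale(height, iwc):.0f} m")
```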
Abstract:
The ability of chlorogenic acid to inhibit oxidation of human low-density lipoprotein (LDL) was studied by in vitro copper-induced LDL oxidation. The effect of chlorogenic acid on the lag time before LDL oxidation increased in a dose-dependent manner by up to 176% of the control value when added at concentrations of 0.25–1.0 μM. Dose-dependent increases in the lag time of LDL oxidation were also observed, but at much higher concentrations, when chlorogenic acid was incubated with LDL (up to a 29.7% increase in lag phase for 10 μM chlorogenic acid) or plasma (up to a 16.6% increase in lag phase for 200 μM chlorogenic acid) prior to isolation of LDL, indicating that chlorogenic acid was able to bind, at least weakly, to LDL. Bovine serum albumin (BSA) increased the oxidative stability of LDL in the presence of chlorogenic acid. Fluorescence spectroscopy showed that chlorogenic acid binds to BSA with a binding constant of 3.88 × 10⁴ M⁻¹. BSA increased the antioxidant effect of chlorogenic acid, and this was attributed to copper ions binding to BSA, thereby reducing the amount of copper available for inducing lipid peroxidation.
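A binding constant of this kind is commonly extracted from a fluorescence quenching titration using the double-logarithmic plot, log[(F0 − F)/F] against log[Q]. The sketch below shows that analysis on invented titration values; both the data and the choice of the double-log method are assumptions for illustration, not a statement of how the authors obtained their figure.

```python
import numpy as np

# Hypothetical BSA fluorescence intensities at increasing chlorogenic acid (quencher) concentrations.
conc = np.array([2e-6, 4e-6, 6e-6, 8e-6, 10e-6])      # quencher concentration, M
F0   = 1000.0                                          # fluorescence without quencher
F    = np.array([930.0, 870.0, 815.0, 765.0, 720.0])   # fluorescence with quencher

# Double-logarithmic analysis: log10[(F0 - F)/F] = log10(Kb) + n * log10[Q]
x = np.log10(conc)
y = np.log10((F0 - F) / F)
n, logKb = np.polyfit(x, y, 1)  # slope = apparent number of binding sites, intercept = log10(Kb)

print(f"binding sites n ≈ {n:.2f}, binding constant Kb ≈ {10**logKb:.2e} M^-1")
```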
Abstract:
The kilogram, the base unit of mass in the International System of Units (SI), is defined as the mass m(K) of the international prototype of the kilogram. Clearly, this definition has the effect of fixing the value of m(K) to be one kilogram exactly. In this paper, we review the benefits that would accrue if the kilogram were redefined so as to fix the value of either the Planck constant h or the Avogadro constant N_A instead of m(K), without waiting for the experiments to determine h or N_A currently underway to reach their desired relative standard uncertainty of about 10⁻⁸. A significant reduction in the uncertainties of the SI values of many other fundamental constants would result from either of these new definitions, at the expense of making the mass m(K) of the international prototype a quantity whose value would have to be determined by experiment. However, by assigning a conventional value to m(K), the present highly precise worldwide uniformity of mass standards could still be retained. The advantages of redefining the kilogram immediately outweigh any apparent disadvantages, and we review the alternative forms that a new definition might take.
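A short aside on why fixing either h or N_A is possible: their product, the molar Planck constant, is known far more accurately than either constant alone through the exact relation below, so fixing one of the two effectively determines the other. The relation and symbols follow standard CODATA metrology notation and are quoted from general metrology, not from the paper itself.

```latex
% Molar Planck constant: c is the speed of light, A_r(e) the relative atomic mass of the
% electron, M_u the molar mass constant, \alpha the fine-structure constant, R_\infty the
% Rydberg constant. All quantities on the right are known with very small uncertainty.
N_{\mathrm{A}} h \;=\; \frac{c\, A_{\mathrm{r}}(\mathrm{e})\, M_{\mathrm{u}}\, \alpha^{2}}{2 R_{\infty}}
```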
Abstract:
Canopy interception of incident precipitation is a critical component of the forest water balance during each of the four seasons. Models have been developed to predict precipitation interception from standard meteorological variables because of acknowledged difficulty in extrapolating direct measurements of interception loss from forest to forest. No known study has compared and validated canopy interception models for a leafless deciduous forest stand in the eastern United States. Interception measurements from an experimental plot in a leafless deciduous forest in northeastern Maryland (39°42'N, 75°5'W) for 11 rainstorms in winter and early spring 2004/05 were compared to predictions from three models. The Mulder model maintains a moist canopy between storms. The Gash model requires few input variables and is formulated for a sparse canopy. The WiMo model optimizes the canopy storage capacity for the maximum wind speed during each storm. All models showed marked underestimates and overestimates for individual storms when the measured ratio of interception to gross precipitation was far more or less, respectively, than the specified fraction of canopy cover. The models predicted the percentage of total gross precipitation (P_G) intercepted to within the probable standard error (8.1%) of the measured value: the Mulder model overestimated the measured value by 0.1% of P_G; the WiMo model underestimated by 0.6% of P_G; and the Gash model underestimated by 1.1% of P_G. The WiMo model’s advantage over the Gash model indicates that the canopy storage capacity increases logarithmically with the maximum wind speed. This study has demonstrated that dormant-season precipitation interception in a leafless deciduous forest may be satisfactorily predicted by existing canopy interception models.
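The WiMo result quoted above, that canopy storage capacity increases logarithmically with maximum wind speed, amounts to a fit of the form S = a + b·ln(u_max). The sketch below shows such a fit on invented per-storm values; the numbers and the least-squares fit are illustrative assumptions, not the study's data or its calibration procedure.

```python
import numpy as np

# Hypothetical per-storm estimates: maximum wind speed (m s^-1) and derived canopy storage capacity (mm).
u_max   = np.array([2.0, 3.5, 5.0, 7.0, 9.0, 12.0])
storage = np.array([0.25, 0.32, 0.38, 0.43, 0.47, 0.52])

# WiMo-style relation S = a + b * ln(u_max), fitted by ordinary least squares.
b, a = np.polyfit(np.log(u_max), storage, 1)
print(f"S ≈ {a:.3f} + {b:.3f} * ln(u_max)  [mm]")
```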
Abstract:
More data will be produced in the next five years than in the entire history of humankind, a digital deluge that marks the beginning of the Century of Information. Through a year-long consultation with UK researchers, a coherent strategy has been developed, which will nurture Century-of-Information Research (CIR); it crystallises the ideas developed by the e-Science Directors' Forum Strategy Working Group. This paper is an abridged version of their latest report which can be found at: http://wikis.nesc.ac.uk/escienvoy/Century_of_Information_Research_Strategy which also records the consultation process and the affiliations of the authors. This document is derived from a paper presented at the Oxford e-Research Conference 2008 and takes into account suggestions made in the ensuing panel discussion. The goals of the CIR Strategy are to facilitate the growth of UK research and innovation that is data and computationally intensive and to develop a new culture of 'digital-systems judgement' that will equip research communities, businesses, government and society as a whole with the skills essential to compete and prosper in the Century of Information. The CIR Strategy identifies a national requirement for a balanced programme of coordination, research, infrastructure, translational investment and education to empower UK researchers, industry, government and society. The Strategy is designed to deliver an environment which meets the needs of UK researchers so that they can respond agilely to challenges, can create knowledge and skills, and can lead new kinds of research. It is a call to action for those engaged in research, those providing data and computational facilities, those governing research and those shaping education policies. The ultimate aim is to help researchers strengthen the international competitiveness of the UK research base and increase its contribution to the economy. The objectives of the Strategy are to better enable UK researchers across all disciplines to contribute world-leading fundamental research; to accelerate the translation of research into practice; and to develop improved capabilities, facilities and context for research and innovation. It envisages a culture that is better able to grasp the opportunities provided by the growing wealth of digital information. Computing has, of course, already become a fundamental tool in all research disciplines. The UK e-Science programme (2001-06)—since emulated internationally—pioneered the invention and use of new research methods, and a new wave of innovations in digital-information technologies which have enabled them. The Strategy argues that the UK must now harness and leverage its own, plus the now global, investment in digital-information technology in order to spread the benefits as widely as possible in research, education, industry and government. Implementing the Strategy would deliver the computational infrastructure and its benefits as envisaged in the Science & Innovation Investment Framework 2004-2014 (July 2004), and in the reports developing those proposals.
To achieve this, the Strategy proposes the following actions: support the continuous innovation of digital-information research methods; provide easily used, pervasive and sustained e-Infrastructure for all research; enlarge the productive research community which exploits the new methods efficiently; generate capacity, propagate knowledge and develop skills via new curricula; and develop coordination mechanisms to improve the opportunities for interdisciplinary research and to make digital-infrastructure provision more cost effective. To gain the best value for money, strategic coordination is required across a broad spectrum of stakeholders. A coherent strategy is essential in order to establish and sustain the UK as an international leader of well-curated national data assets and computational infrastructure, which is expertly used to shape policy, support decisions, empower researchers and roll out the results to the wider benefit of society. The value of data as a foundation for wellbeing and a sustainable society must be appreciated; national resources must be more wisely directed to the collection, curation, discovery, widening of access, analysis and exploitation of these data. Every researcher must be able to draw on skills, tools and computational resources to develop insights, test hypotheses and translate inventions into productive use, or to extract knowledge in support of governmental decision making. This foundation, plus the skills developed, will launch significant advances in research, in business, in professional practice and in government, with many consequent benefits for UK citizens. The Strategy presented here addresses these complex and interlocking requirements.
Abstract:
We consider boundary value problems for the N-wave interaction equations in one and two space dimensions, posed for x ≥ 0 and x, y ≥ 0, respectively. Following the recent work of Fokas, we develop an inverse scattering formalism to solve these problems by considering the simultaneous spectral analysis of the two ordinary differential equations in the associated Lax pair. The solution of the boundary value problems is obtained through the solution of a local Riemann–Hilbert problem in the one-dimensional case, and a nonlocal Riemann–Hilbert problem in the two-dimensional case.
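For concreteness, one widely used normalization of the simplest case, the three-wave (N = 3) resonant interaction system in one space dimension, is written out below. The group velocities c_j and coupling coefficients γ_j follow a common convention and are not taken from the paper; they are shown only to indicate the type of system the boundary value problems address.

```latex
% Three-wave resonant interaction equations (N = 3, one space dimension),
% posed here on the half-line x >= 0 as in the boundary value problems above.
\begin{aligned}
\partial_t q_1 + c_1\,\partial_x q_1 &= i\gamma_1\, \overline{q_2}\,\overline{q_3},\\
\partial_t q_2 + c_2\,\partial_x q_2 &= i\gamma_2\, \overline{q_1}\,\overline{q_3},\\
\partial_t q_3 + c_3\,\partial_x q_3 &= i\gamma_3\, \overline{q_1}\,\overline{q_2}.
\end{aligned}
```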
Abstract:
A model was published by Lewis et al. (2002) to predict the mean age at first egg (AFE) for pullets of laying strains reared under non-limiting environmental conditions and exposed to a single change in photoperiod during the rearing stage. Subsequently, Lewis et al. (2003) reported the effects of two opposing changes in photoperiod, which showed that the first change appears to alter the pullet's physiological age so that it responds to the second change as though it had been given at an earlier age (if photoperiod was decreased), or later age (if photoperiod was increased) than the true chronological age. During the construction of a computer model based on these two publications, it became apparent that some of the components of the models needed adjustment. The amendments relate to (1) the standard deviation (S.D.) used for calculating the proportion of a young flock that has attained photosensitivity, (2) the equation for calculating the slope of the line relating AFE to age at transfer from one photoperiod to another, (3) the equation used for estimating the distribution of AFE as a function of the mean value, (4) the point of no return when pullets which have started spontaneous maturation in response to the current photoperiod can no longer respond to a late change in photoperiod and (5) the equations used for calculating the distribution of AFE when the trait is bimodal.
Abstract:
Field experiments were conducted to quantify the natural levels of post-dispersal seed predation of arable weed species in spring barley and to identify the main groups of seed predators. Four arable weed species were investigated that were of high biodiversity value, yet of low to moderate competitive ability with the crop. These were Chenopodium album, Sinapis arvensis, Stellaria media and Polygonum aviculare. Exclusion treatments were used to allow selective access to dishes of seeds by different predator groups. Seed predation was highest early in the season, followed by a gradual decline in predation over the summer for all species. All species were taken by invertebrates. The activity of two phytophagous carabid genera showed significant correlations with seed predation levels. However, in general, carabid activity was not related to seed predation, and this is discussed in terms of the mainly polyphagous nature of many carabid species that utilized the seed resource early in the season, but then switched to carnivory as prey populations increased. The potential relevance of post-dispersal seed predation to the development of weed management systems that maximize biological control through conservation and optimize herbicide use is discussed.
Abstract:
A series of articles, many of them published in this journal, have charted the rapid spread of supermarkets in developing and middle-income countries and forecast its continuation. In this article, the level of supermarket penetration (share of the retail food market) is modelled quantitatively on a cross-section of 42 countries for which data could be obtained, representing all stages of development. GDP per capita, income distribution, urbanisation, female labour force participation and openness to inward foreign investment are all significant explanators. Projections to 2015 suggest significant but not explosive further penetration; increased openness and GDP growth are the most significant factors.
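The cross-country modelling described above is, in essence, a regression of supermarket share on a handful of country-level covariates. A minimal sketch of such a fit is given below; the data are randomly generated placeholders and the ordinary-least-squares specification is an assumption for illustration, not the article's estimator or its coefficient values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 42  # cross-section of countries, as in the article

# Hypothetical covariates: log GDP per capita, income distribution (Gini), urbanisation,
# female labour force participation, openness to inward foreign investment.
X = np.column_stack([
    np.ones(n),
    rng.normal(9.0, 1.0, n),    # log GDP per capita
    rng.uniform(25, 60, n),     # Gini index
    rng.uniform(0.3, 0.95, n),  # urbanisation share
    rng.uniform(0.3, 0.8, n),   # female labour force participation
    rng.uniform(0.0, 1.0, n),   # FDI openness index
])
share = X @ np.array([-80.0, 10.0, -0.3, 20.0, 15.0, 10.0]) + rng.normal(0, 5, n)

beta, *_ = np.linalg.lstsq(X, share, rcond=None)  # OLS coefficients
names = ["const", "log GDP pc", "Gini", "urbanisation", "female LFP", "FDI openness"]
for name, b in zip(names, beta):
    print(f"{name:>14}: {b:+.2f}")
```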
Abstract:
The conventional method for the assessment of acute dermal toxicity (OECD Test Guideline 402, 1987) uses death of animals as an endpoint to identify the median lethal dose (LD50). A new OECD Testing Guideline called the dermal fixed dose procedure (dermal FDP) is being prepared to provide an alternative to Test Guideline 402. In contrast to Test Guideline 402, the dermal FDP does not provide a point estimate of the LD50, but aims to identify that dose of the substance under investigation that causes clear signs of nonlethal toxicity. This is then used to assign classification according to the new Globally Harmonised System of Classification and Labelling scheme (GHS). The dermal FDP has been validated using statistical modelling rather than by in vivo testing. The statistical modelling approach enables calculation of the probability of each GHS classification and the expected numbers of deaths and animals used in the test for imaginary substances with a range of LD50 values and dose-response curve slopes. This paper describes the dermal FDP and reports the results from the statistical evaluation. It is shown that the procedure will be completed with considerably less death and suffering than Test Guideline 402, and will classify substances either in the same or a more stringent GHS class than that assigned on the basis of the LD50 value.
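The statistical modelling described above (and in the companion inhalation study below) rests on a dose-response model for imaginary substances. The toy sketch that follows computes, under an assumed probit model, the probability of death per animal and the expected number of deaths in a small group at a fixed dose; the dose levels, group size and probit form are illustrative assumptions, not the guideline's actual fixed doses or decision rules.

```python
from math import erf, log10, sqrt

def p_death(dose, ld50, slope):
    """Probit dose-response: probability of death for one animal at a given dose (mg/kg)."""
    z = slope * (log10(dose) - log10(ld50))
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Imaginary substance: LD50 = 1500 mg/kg, probit slope = 2 (both hypothetical).
ld50, slope, group_size = 1500.0, 2.0, 5
for dose in (50.0, 300.0, 1000.0):   # placeholder fixed doses, mg/kg
    p = p_death(dose, ld50, slope)
    print(f"dose {dose:6.0f} mg/kg: P(death) = {p:.3f}, expected deaths in {group_size} = {group_size * p:.2f}")
```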
Statistical evaluation of the fixed concentration procedure for acute inhalation toxicity assessment
Abstract:
The conventional method for the assessment of acute inhalation toxicity (OECD Test Guideline 403, 1981) uses death of animals as an endpoint to identify the median lethal concentration (LC50). A new OECD Testing Guideline called the Fixed Concentration Procedure (FCP) is being prepared to provide an alternative to Test Guideline 403. Unlike Test Guideline 403, the FCP does not provide a point estimate of the LC50, but aims to identify an airborne exposure level that causes clear signs of nonlethal toxicity. This is then used to assign classification according to the new Globally Harmonized System of Classification and Labelling scheme (GHS). The FCP has been validated using statistical simulation rather than by in vivo testing. The statistical simulation approach predicts the GHS classification outcome and the numbers of deaths and animals used in the test for imaginary substances with a range of LC50 values and dose-response curve slopes. This paper describes the FCP and reports the results from the statistical simulation study assessing its properties. It is shown that the procedure will be completed with considerably less death and suffering than Test Guideline 403, and will classify substances either in the same or a more stringent GHS class than that assigned on the basis of the LC50 value.
Abstract:
The paper concerns the design and analysis of serial dilution assays to estimate the infectivity of a sample of tissue when it is assumed that the sample contains a finite number of indivisible infectious units such that a subsample will be infectious if it contains one or more of these units. The aim of the study is to estimate the number of infectious units in the original sample. The standard approach to the analysis of data from such a study is based on the assumption of independence of aliquots both at the same dilution level and at different dilution levels, so that the numbers of infectious units in the aliquots follow independent Poisson distributions. An alternative approach is based on calculation of the expected value of the total number of samples tested that are not infectious. We derive the likelihood for the data on the basis of the discrete number of infectious units, enabling calculation of the maximum likelihood estimate and likelihood-based confidence intervals. We use the exact probabilities that are obtained to compare the maximum likelihood estimate with those given by the other methods in terms of bias and standard error and to compare the coverage of the confidence intervals. We show that the methods have very similar properties and conclude that for practical use the method that is based on the Poisson assumption is to be recommended, since it can be implemented by using standard statistical software. Finally we consider the design of serial dilution assays, concluding that it is important that neither the dilution factor nor the number of samples that remain untested should be too large.
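Under the Poisson-independence assumption described above, an aliquot that receives a fraction f of the original sample containing N infectious units is non-infectious with probability exp(−fN), and infectious otherwise. The sketch below maximizes the resulting likelihood over N on hypothetical assay counts; the dilution levels, aliquot numbers and outcomes are invented for illustration and the grid search is only one simple way to locate the maximum.

```python
import numpy as np

# Hypothetical assay: at each dilution level, `n` aliquots tested, `y` found infectious;
# `frac` is the fraction of the original sample contained in a single aliquot.
frac = np.array([1e-2, 1e-3, 1e-4, 1e-5])
n    = np.array([6, 6, 6, 6])
y    = np.array([6, 5, 2, 0])

def log_lik(N):
    """Log-likelihood of N infectious units assuming independent Poisson-distributed aliquots."""
    p_inf = np.clip(1.0 - np.exp(-frac * N), 1e-12, 1 - 1e-12)   # P(aliquot infectious)
    return np.sum(y * np.log(p_inf) + (n - y) * np.log(1.0 - p_inf))

N_grid = np.unique(np.round(np.logspace(0, 6, 2000)).astype(int))  # candidate values of N
ll = np.array([log_lik(N) for N in N_grid])
print("Maximum likelihood estimate of infectious units:", N_grid[np.argmax(ll)])
```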
Abstract:
1. Demographic models are assuming an important role in management decisions for endangered species. Elasticity analysis and scope for management analysis are two such applications. Elasticity analysis determines the vital rates that have the greatest impact on population growth. Scope for management analysis examines the effects that feasible management might have on vital rates and population growth. Both methods target management in an attempt to maximize population growth. 2. The Seychelles magpie robin Copsychus sechellarum is a critically endangered island endemic, the population of which underwent significant growth in the early 1990s following the implementation of a recovery programme. We examined how the formal use of elasticity and scope for management analyses might have shaped management in the recovery programme, and assessed their effectiveness by comparison with the actual population growth achieved. 3. The magpie robin population doubled from about 25 birds in 1990 to more than 50 by 1995. A simple two-stage demographic model showed that this growth was driven primarily by a significant increase in the annual survival probability of first-year birds and an increase in the birth rate. Neither the annual survival probability of adults nor the probability of a female breeding at age 1 changed significantly over time. 4. Elasticity analysis showed that the annual survival probability of adults had the greatest impact on population growth. There was some scope to use management to increase survival, but because survival rates were already high (> 0.9) this had a negligible effect on population growth. Scope for management analysis showed that significant population growth could have been achieved by targeting management measures at the birth rate and survival probability of first-year birds, although predicted growth rates were lower than those achieved by the recovery programme when all management measures were in place (i.e. 1992-95). 5. Synthesis and applications. We argue that scope for management analysis can provide a useful basis for management but will inevitably be limited to some extent by a lack of data, as our study shows. This means that identifying perceived ecological problems and designing management to alleviate them must be an important component of endangered species management. The corollary of this is that it will not be possible or wise to consider only management options for which there is a demonstrable ecological benefit. Given these constraints, we see little role for elasticity analysis because, when data are available, a scope for management analysis will always be of greater practical value and, when data are lacking, precautionary management demands that as many perceived ecological problems as possible are tackled.
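Elasticity analysis, as used above, is standard matrix population-model arithmetic: for a projection matrix A with dominant eigenvalue λ and right and left eigenvectors w and v, the elasticity of λ to entry a_ij is (a_ij/λ)(v_i w_j)/⟨v, w⟩. The sketch below computes these quantities for a hypothetical two-stage (first-year, adult) matrix; the entries are illustrative only and are not the magpie robin parameterisation.

```python
import numpy as np

# Hypothetical two-stage (first-year, adult) projection matrix; entries are illustrative only.
A = np.array([[0.3, 0.8],   # recruitment contributed by first-years and by adults
              [0.5, 0.9]])  # survival of first-years into, and of adults within, the adult class

eigvals, W = np.linalg.eig(A)
k = np.argmax(eigvals.real)
lam = eigvals[k].real                 # dominant eigenvalue = asymptotic population growth rate
w = np.abs(W[:, k].real)              # stable stage distribution (right eigenvector)

eigvals_T, V = np.linalg.eig(A.T)
kT = np.argmax(eigvals_T.real)
v = np.abs(V[:, kT].real)             # reproductive values (left eigenvector)

sens = np.outer(v, w) / (v @ w)       # sensitivities d(lambda)/d(a_ij)
elas = (A / lam) * sens               # elasticities; they sum to 1
print(f"lambda = {lam:.3f}")
print("elasticity matrix:\n", np.round(elas, 3))
```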