60 results for "Scarcity of available alternatives"
Abstract:
Heat-treated animal bone char (ABC) has not previously been evaluated for its potential as a phosphorus (P) fertilizer. ABC, Gafsa phosphate rock (GPR) and triple superphosphate fertilizer (TSP) were incubated in 12 soils. Dissolved-P was assessed by extraction with NaOH and bioavailability with the Olsen extractant. The rate of P dissolution from ABC was described almost equally well by the Elovich and Power equations. After 145 days, the fraction of P dissolved ranged from 0 to 73% for ABC and from 0 to 56% for GPR. The most important soil properties determining P dissolution from ABC were pH and P sorption. P dissolution was not significant at soil pH > 6.1 (ABC) and > 5 (GPR), and the lower the pH, the greater the Dissolved-P. Dissolved-P also correlated positively and significantly with inorganic P sorption, measured by the Freundlich isotherm and the P sorption index of Bache and Williams (1971). Soil pH and P sorption index could be combined in multiple regression equations that use readily measured soil properties to predict the potential for ABC dissolution in a soil. Dissolution of P from GPR correlated with soil pH and exchangeable acidity. In comparison with GPR, ABC was a better source of available P, assessed by Olsen-P. In most soils, ABC increased Olsen-P immediately after application, including soils of relatively high pH in which GPR was ineffective. ABC is a P fertilizer of solubility intermediate between GPR and TSP.
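The Elovich and Power equations mentioned above are standard kinetic forms. As a rough illustration only (hypothetical data and the common parameterizations, not the paper's code), the sketch below fits both forms to a cumulative dissolution series with SciPy:

```python
# Minimal sketch: fitting Elovich and Power kinetic equations to
# hypothetical cumulative P-dissolution data (not the paper's data).
import numpy as np
from scipy.optimize import curve_fit

t = np.array([1, 7, 14, 30, 60, 90, 120, 145], dtype=float)   # incubation time (days)
p_dissolved = np.array([0.05, 0.14, 0.20, 0.29, 0.38, 0.44, 0.48, 0.51])  # fraction dissolved

def elovich(t, a, b):
    # Common integrated Elovich form: q(t) = (1/b) * ln(1 + a*b*t)
    return (1.0 / b) * np.log(1.0 + a * b * t)

def power(t, a, b):
    # Power (Freundlich-type) kinetic equation: q(t) = a * t**b
    return a * t**b

for name, model, p0 in [("Elovich", elovich, (0.1, 5.0)),
                        ("Power", power, (0.05, 0.5))]:
    popt, _ = curve_fit(model, t, p_dissolved, p0=p0, maxfev=10000)
    pred = model(t, *popt)
    r2 = 1.0 - np.sum((p_dissolved - pred) ** 2) / np.sum((p_dissolved - p_dissolved.mean()) ** 2)
    print(f"{name}: parameters = {popt.round(4)}, R^2 = {r2:.3f}")
```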
Abstract:
Facilitating the visual exploration of scientific data has received increasing attention in the past decade or so. Especially in life-science-related application areas, the amount of available data has grown at a breathtaking pace. In this paper we describe an approach that allows for visual inspection of large collections of molecular compounds. In contrast to classical visualizations of such spaces, we incorporate a specific focus of analysis, for example the outcome of a biological experiment such as high-throughput screening results. The presented method uses this experimental data to select molecular fragments of the underlying molecules that have interesting properties and uses the resulting space to generate a two-dimensional map based on a singular value decomposition algorithm and a self-organizing map. Experiments on real datasets show that the resulting visual landscape groups molecules of similar chemical properties in densely connected regions.
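As a rough illustration of the mapping step described above (an assumption about the general workflow, not the authors' implementation), the sketch below projects a hypothetical molecule-by-fragment occurrence matrix to two dimensions with a truncated SVD, which is the kind of input a subsequent self-organizing map could refine:

```python
# Minimal sketch: 2-D projection of a molecule-by-fragment matrix via SVD
# (hypothetical random data; in the paper, fragments are selected using
# the experimental outcome, e.g. HTS results).
import numpy as np

rng = np.random.default_rng(0)

# Rows = molecules, columns = selected fragments (1 if the fragment occurs).
X = (rng.random((200, 50)) < 0.2).astype(float)

# Centre the columns and take the two leading singular vectors as map coordinates.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
coords_2d = U[:, :2] * S[:2]          # one (x, y) point per molecule

print(coords_2d.shape)                # (200, 2)
```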
Abstract:
Visual exploration of scientific data in the life science area is a growing research field due to the large amount of available data. Kohonen's Self-Organizing Map (SOM) is a widely used tool for visualization of multidimensional data. In this paper we present a fast learning algorithm for SOMs that uses a simulated annealing method to adapt the learning parameters. The algorithm has been adopted in a data analysis framework for the generation of similarity maps. Such maps provide an effective tool for the visual exploration of large and multi-dimensional input spaces. The approach has been applied to data generated during the High Throughput Screening of molecular compounds; the generated maps allow a visual exploration of molecules with similar topological properties. The experimental analysis on real-world data from the National Cancer Institute shows the speed-up of the proposed SOM training process in comparison to a traditional approach. The resulting visual landscape groups molecules with similar chemical properties in densely connected regions.
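A minimal sketch of the general idea (not the paper's algorithm): a small SOM whose learning rate and neighbourhood radius follow an exponential cooling schedule, loosely analogous to the simulated-annealing-style parameter adaptation described above.

```python
# Minimal sketch: SOM training with a cooling schedule on the learning
# parameters (hypothetical descriptors; not the authors' implementation).
import numpy as np

rng = np.random.default_rng(1)
data = rng.random((500, 16))            # hypothetical molecular descriptors
rows, cols, dim = 10, 10, data.shape[1]
weights = rng.random((rows, cols, dim))
grid_y, grid_x = np.mgrid[0:rows, 0:cols]

n_iter, lr0, sigma0 = 2000, 0.5, max(rows, cols) / 2.0
for it in range(n_iter):
    temp = np.exp(-it / (n_iter / 4.0))          # "cooling" schedule
    lr, sigma = lr0 * temp, max(sigma0 * temp, 0.5)
    x = data[rng.integers(len(data))]
    # Best-matching unit
    dists = np.linalg.norm(weights - x, axis=2)
    by, bx = np.unravel_index(np.argmin(dists), dists.shape)
    # Gaussian neighbourhood update around the BMU
    h = np.exp(-((grid_y - by) ** 2 + (grid_x - bx) ** 2) / (2 * sigma ** 2))
    weights += lr * h[..., None] * (x - weights)

print("trained SOM codebook:", weights.shape)    # (10, 10, 16)
```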
Abstract:
Answering many of the critical questions in conservation, development and environmental management requires integrating the social and natural sciences. However, understanding the array of available quantitative methods and their associated terminology presents a major barrier to successful collaboration. We provide an overview of quantitative socio-economic methods that distils their complexity into a simple taxonomy. We outline how each has been used in conjunction with ecological models to address questions relating to the management of socio-ecological systems. We review the application of social and ecological quantitative concepts to agro-ecology and classify the approaches used to integrate the two disciplines. Our review included all published integrated models from 2003 to 2008 in 27 journals that publish agricultural modelling research. Although our focus is on agro-ecology, many of the results are broadly applicable to other fields involving an interaction between human activities and ecology. We found 36 papers that integrated social and ecological concepts in a quantitative model. Four different approaches to integration were used, depending on the scale at which human welfare was quantified. Most models viewed humans as pure profit maximizers, both when calculating welfare and predicting behaviour. Synthesis and applications. We reached two main conclusions based on our taxonomy and review. The first is that quantitative methods that extend predictions of behaviour and measurements of welfare beyond a simple market value basis are underutilized by integrated models. The second is that the accuracy of prediction for integrated models remains largely unquantified. Addressing both problems requires researchers to reach a common understanding of modelling goals and data requirements during the early stages of a project.
Abstract:
Recent developments in the fields of veterinary epidemiology and economics are critically reviewed and assessed. The impacts of recent technological developments in diagnosis, genetic characterisation, data processing and statistical analysis are evaluated. It is concluded that the acquisition and availability of data remains the principal constraint to the application of available techniques in veterinary epidemiology and economics, especially at population level. As more commercial producers use computerised management systems, the availability of data for analysis within herds is improving. However, consistency of recording and diagnosis remains problematic. Recent trends to the development of national livestock databases intended to provide reassurance to consumers of the safety and traceability of livestock products are potentially valuable sources of data that could lead to much more effective application of veterinary epidemiology and economics. These opportunities will be greatly enhanced if data from different sources, such as movement recording, official animal health programmes, quality assurance schemes, production recording and breed societies can be integrated. However, in order to realise such integrated databases, it will be necessary to provide absolute control of user access to guarantee data security and confidentiality. The potential applications of integrated livestock databases in analysis, modelling, decision-support, and providing management information for veterinary services and livestock producers are discussed. (c) 2004 Elsevier B.V. All rights reserved.
Abstract:
A sequential study design generally makes more efficient use of available information than a fixed sample counterpart of equal power. This feature is gradually being exploited by researchers in genetic and epidemiological investigations that utilize banked biological resources and in studies where time, cost and ethics are prominent considerations. Recent work in this area has focussed on the sequential analysis of matched case-control studies with a dichotomous trait. In this paper, we extend the sequential approach to a comparison of the associations within two independent groups of paired continuous observations. Such a comparison is particularly relevant in familial studies of phenotypic correlation using twins. We develop a sequential twin method based on the intraclass correlation and show that use of sequential methodology can lead to a substantial reduction in the number of observations without compromising the study error rates. Additionally, our approach permits straightforward allowance for other explanatory factors in the analysis. We illustrate our method in a sequential heritability study of dysplasia that allows for the effect of body mass index and compares monozygotes with pairs of singleton sisters. Copyright (c) 2006 John Wiley & Sons, Ltd.
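For reference, the intraclass correlation on which the sequential comparison above is built can be estimated with the classical one-way ANOVA estimator; the sketch below (simulated twin data, not the paper's sequential procedure) shows that calculation for pairs:

```python
# Minimal sketch: one-way ANOVA intraclass correlation for paired (twin)
# continuous observations, using simulated data.
import numpy as np

def intraclass_correlation(pairs):
    """pairs: array of shape (n_pairs, 2) of continuous trait values."""
    pairs = np.asarray(pairs, dtype=float)
    n, k = pairs.shape                        # k = 2 members per pair
    grand_mean = pairs.mean()
    pair_means = pairs.mean(axis=1)
    ms_between = k * np.sum((pair_means - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((pairs - pair_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

rng = np.random.default_rng(2)
shared = rng.normal(size=100)                 # pair-level (familial) component
twins = np.column_stack([shared + rng.normal(scale=0.5, size=100),
                         shared + rng.normal(scale=0.5, size=100)])
print(f"ICC ~ {intraclass_correlation(twins):.2f}")
```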
Abstract:
Microsatellites are widely used in genetic analyses, many of which require reliable estimates of microsatellite mutation rates, yet the factors determining mutation rates are uncertain. The most straightforward and conclusive method by which to study mutation is direct observation of allele transmissions in parent-child pairs, and studies of this type suggest a positive, possibly exponential, relationship between mutation rate and allele size, together with a bias toward length increase. Except for microsatellites on the Y chromosome, however, previous analyses have not made full use of available data and may have introduced bias: mutations have been identified only where child genotypes could not be generated by transmission from parents' genotypes, so that the probability that a mutation is detected depends on the distribution of allele lengths and varies with allele length. We introduce a likelihood-based approach that has two key advantages over existing methods. First, we can make formal comparisons between competing models of microsatellite evolution; second, we obtain asymptotically unbiased and efficient parameter estimates. Application to data composed of 118,866 parent-offspring transmissions of AC microsatellites supports the hypothesis that mutation rate increases exponentially with microsatellite length, with a suggestion that contractions become more likely than expansions as length increases. This would lead to a stationary distribution for allele length maintained by mutational balance. There is no evidence that contractions and expansions differ in their step size distributions.
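As a hedged illustration of the kind of model discussed above (hypothetical data and parameterization, not the authors' likelihood), the sketch below fits a mutation rate that grows exponentially with allele length by maximum likelihood over simulated parent-offspring transmissions:

```python
# Minimal sketch: ML fit of mu(L) = exp(alpha + beta * L) from simulated
# transmissions recorded as (allele length, mutated or not).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
lengths = rng.integers(10, 31, size=50000)             # repeat counts
true_mu = np.exp(-9.0 + 0.15 * lengths)
mutated = rng.random(len(lengths)) < true_mu           # simulated mutation events

def neg_log_lik(params):
    alpha, beta = params
    mu = np.clip(np.exp(alpha + beta * lengths), 1e-12, 1 - 1e-12)
    return -np.sum(np.where(mutated, np.log(mu), np.log1p(-mu)))

fit = minimize(neg_log_lik, x0=np.array([-8.0, 0.1]), method="Nelder-Mead")
print("estimated (alpha, beta):", fit.x.round(3))
```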
Abstract:
The presented study examined the opinion of in-service and prospective chemistry teachers about the importance of usage of molecular and crystal models in secondary-level school practice, and investigated some of the reasons for their (non-) usage. The majority of participants stated that the use of models plays an important role in chemistry education and that they would use them more often if the circumstances were more favourable. Many teachers claimed that three-dimensional (3d) models are still not available in sufficient number at their schools; they also pointed to the lack of available computer facilities during chemistry lessons. The research revealed that, besides the inadequate material circumstances, less than one third of participants are able to use simple (freeware) computer programs for drawing molecular structures and their presentation in virtual space; however both groups of teachers expressed the willingness to improve their knowledge in the subject area. The investigation points to several actions which could be undertaken to improve the current situation.
Abstract:
In the 1990s the Message Passing Interface Forum defined MPI bindings for Fortran, C, and C++. With the success of MPI these relatively conservative languages have continued to dominate in the parallel computing community. There are compelling arguments in favour of more modern languages like Java. These include portability, better runtime error checking, modularity, and multi-threading. But these arguments have not converted many HPC programmers, perhaps due to the scarcity of full-scale scientific Java codes, and the lack of evidence for performance competitive with C or Fortran. This paper tries to redress this situation by porting two scientific applications to Java. Both of these applications are parallelized using our thread-safe Java messaging system, MPJ Express. The first application is the Gadget-2 code, which is a massively parallel structure formation code for cosmological simulations. The second application uses the finite-difference time-domain (FDTD) method for simulations in the area of computational electromagnetics. We evaluate and compare the performance of the Java and C versions of these two scientific applications, and demonstrate that the Java codes can achieve performance comparable with legacy applications written in conventional HPC languages. Copyright © 2009 John Wiley & Sons, Ltd.
Abstract:
Temporal discounting (TD) matures with age, alongside other markers of increased impulse control, and coherent, self-regulated behaviour. Discounting paradigms quantify the ability to refrain from preference of immediate rewards, in favour of delayed, larger rewards. As such, they measure temporal foresight and the ability to delay gratification, functions that develop slowly into adulthood. We investigated the neural maturation that accompanies the previously observed age-related behavioural changes in discounting, from early adolescence into mid-adulthood. We used functional magnetic resonance imaging of a hypothetical discounting task with monetary rewards delayed in the week to year range. We show that age-related reductions in choice impulsivity were associated with changes in activation in ventromedial prefrontal cortex (vmPFC), anterior cingulate cortex (ACC), ventral striatum (VS), insula, inferior temporal gyrus, and posterior parietal cortex. Limbic frontostriatal activation changes were specifically associated with age-dependent reductions in impulsive choice, as part of a more extensive network of brain areas showing age-related changes in activation, including dorsolateral PFC, inferior parietal cortex, and subcortical areas. The maturational pattern of functional connectivity included strengthening in activation coupling between ventromedial and dorsolateral PFC, parietal and insular cortices during selection of delayed alternatives, and between vmPFC and VS during selection of immediate alternatives. We conclude that maturational mechanisms within limbic frontostriatal circuitry underlie the observed post-pubertal reductions in impulsive choice with increasing age, and that this effect is dependent on increased activation coherence within a network of areas associated with discounting behaviour and inter-temporal decision-making.
Abstract:
We present a comparative analysis of projected impacts of climate change on river runoff from two types of distributed hydrological model, a global hydrological model (GHM) and catchment-scale hydrological models (CHM). Analyses are conducted for six catchments that are global in coverage and feature strong contrasts in spatial scale as well as climatic and development conditions. These include the Liard (Canada), Mekong (SE Asia), Okavango (SW Africa), Rio Grande (Brazil), Xiangu (China) and Harper's Brook (UK). A single GHM (Mac-PDM.09) is applied to all catchments whilst different CHMs are applied for each catchment. The CHMs typically simulate water resources impacts based on a more explicit representation of catchment water resources than that available from the GHM, and the CHMs include river routing. Simulations of average annual runoff, mean monthly runoff and high (Q5) and low (Q95) monthly runoff under baseline (1961-1990) and climate change scenarios are presented. We compare the simulated runoff response of each hydrological model to (1) prescribed increases in global mean temperature from the HadCM3 climate model and (2) a prescribed increase in global mean temperature of 2 °C for seven GCMs, to explore the response to climate model and structural uncertainty. We find that differences in projected changes of mean annual runoff between the two types of hydrological model can be substantial for a given GCM, and they are generally larger for indicators of high and low flow. However, they are relatively small in comparison to the range of projections across the seven GCMs. Hence, for the six catchments and seven GCMs we considered, climate model structural uncertainty is greater than the uncertainty associated with the type of hydrological model applied. Moreover, shifts in the seasonal cycle of runoff with climate change are represented similarly by both hydrological models, although for some catchments the monthly timing of high and low flows differs. This implies that for studies that seek to quantify and assess the role of climate model uncertainty on catchment-scale runoff, it may be equally feasible to apply a GHM as it is to apply a CHM, especially when climate modelling uncertainty across the range of available GCMs is as large as it currently is. Whilst the GHM is able to represent the broad climate change signal that is represented by the CHMs, we find, however, that for some catchments there are differences between GHMs and CHMs in mean annual runoff due to differences in potential evaporation estimation methods, in the representation of the seasonality of runoff, and in the magnitude of changes in extreme monthly runoff, all of which have implications for future water management issues.
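For readers unfamiliar with the Q5 and Q95 indicators used above, the sketch below (hypothetical data) computes them from a monthly runoff series under the usual flow-duration convention that Q5 is the flow exceeded 5% of the time (high flow) and Q95 the flow exceeded 95% of the time (low flow):

```python
# Minimal sketch: high-flow (Q5) and low-flow (Q95) indicators and mean
# annual runoff from a synthetic 30-year monthly runoff series.
import numpy as np

rng = np.random.default_rng(4)
monthly_runoff = rng.gamma(shape=2.0, scale=30.0, size=30 * 12)   # mm/month, e.g. 1961-1990

q5 = np.percentile(monthly_runoff, 95)    # exceeded 5% of the time (high flow)
q95 = np.percentile(monthly_runoff, 5)    # exceeded 95% of the time (low flow)
mean_annual = monthly_runoff.reshape(30, 12).sum(axis=1).mean()

print(f"Q5 = {q5:.1f} mm/month, Q95 = {q95:.1f} mm/month, "
      f"mean annual runoff = {mean_annual:.0f} mm")
```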
Abstract:
A key idea in the study of the Atlantic meridional overturning circulation (AMOC) is that its strength is proportional to the meridional density gradient, or more precisely, to the strength of the meridional pressure gradient. However, a physical basis that would tell us how to estimate the relevant meridional pressure gradient locally from the density distribution in numerical ocean models, and hence to test such an idea, has been lacking. Recently, studies of ocean energetics have suggested that the AMOC is driven by the release of available potential energy (APE) into kinetic energy (KE), and that such a conversion takes place primarily in the deep western boundary currents. In this paper, we develop an analytical description linking the western boundary current circulation below the interface separating the North Atlantic Deep Water (NADW) and Antarctic Intermediate Water (AAIW) to the shape of this interface. The simple analytical model also shows how available potential energy is converted into kinetic energy at each location, and that the strength of the transport within the western boundary current is proportional to the local meridional pressure gradient at low latitudes. The present results suggest, therefore, that the conversion rate of potential energy may provide the necessary physical basis for linking the strength of the AMOC to the meridional pressure gradient, and that this could be achieved by a detailed study of the APE to KE conversion in the western boundary current.
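For reference, the APE-to-KE conversion invoked above is commonly written in the Boussinesq ocean-energetics literature as a volume-integrated vertical buoyancy flux; the generic form below is a standard expression, not necessarily the paper's exact formulation. A positive integrand (light water rising or dense water sinking) converts APE into KE, which is the sense in which sinking in the deep western boundary current releases APE.

```latex
% Standard Boussinesq form of the APE-to-KE conversion rate
% (generic ocean-energetics expression, not taken from the paper):
\[
  C(\mathrm{APE}\!\to\!\mathrm{KE}) \;=\; \int_V \rho_0\, w\, b \,\mathrm{d}V,
  \qquad
  b \;=\; -\,\frac{g\,\bigl(\rho - \bar{\rho}(z)\bigr)}{\rho_0},
\]
% where $w$ is the vertical velocity and $b$ the buoyancy anomaly
% relative to a reference density profile $\bar{\rho}(z)$.
```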
Abstract:
Food security is one of this century's key global challenges. By 2050 the world will require increased crop production in order to feed its predicted 9 billion people. This must be done in the face of changing consumption patterns, the impacts of climate change and the growing scarcity of water and land. Crop production methods will also have to sustain the environment, preserve natural resources and support livelihoods of farmers and rural populations around the world. There is a pressing need for the 'sustainable intensification' of global agriculture in which yields are increased without adverse environmental impact and without the cultivation of more land. Addressing the need to secure a food supply for the whole world requires an urgent international effort with a clear sense of long-term challenges and possibilities. Biological science, especially publicly funded science, must play a vital role in the sustainable intensification of food crop production. The UK has a responsibility and the capacity to take a leading role in providing a range of scientific solutions to mitigate potential food shortages. This will require significant funding of cross-disciplinary science for food security. The constraints on food crop production are well understood, but differ widely across regions. The availability of water and good soils are major limiting factors. Significant losses in crop yields occur due to pests, diseases and weed competition. The effects of climate change will further exacerbate the stresses on crop plants, potentially leading to dramatic yield reductions. Maintaining and enhancing the diversity of crop genetic resources is vital to facilitate crop breeding and thereby enhance the resilience of food crop production. Addressing these constraints requires technologies and approaches that are underpinned by good science. Some of these technologies build on existing knowledge, while others are completely radical approaches, drawing on genomics and high-throughput analysis. Novel research methods have the potential to contribute to food crop production through both genetic improvement of crops and new crop and soil management practices. Genetic improvements to crops can occur through breeding or genetic modification to introduce a range of desirable traits. The application of genetic methods has the potential to refine existing crops and provide incremental improvements. These methods also have the potential to introduce radical and highly significant improvements to crops by increasing photosynthetic efficiency, reducing the need for nitrogen or other fertilisers and unlocking some of the unrealised potential of crop genomes. The science of crop management and agricultural practice also needs to be given particular emphasis as part of a food security grand challenge. These approaches can address key constraints in existing crop varieties and can be applied widely. Current approaches to maximising production within agricultural systems are unsustainable; new methodologies that utilise all elements of the agricultural system are needed, including better soil management and enhancement and exploitation of populations of beneficial soil microbes. Agronomy, soil science and agroecology, the relevant sciences, have been neglected in recent years. Past debates about the use of new technologies for agriculture have tended to adopt an either/or approach, emphasising the merits of particular agricultural systems or technological approaches and the downsides of others.
This has been seen most obviously with respect to genetically modified (GM) crops, the use of pesticides and the arguments for and against organic modes of production. These debates have failed to acknowledge that there is no technological panacea for the global challenge of sustainable and secure global food production. There will always be trade-offs and local complexities. This report considers both new crop varieties and appropriate agroecological crop and soil management practices and adopts an inclusive approach. No techniques or technologies should be ruled out. Global agriculture demands a diversity of approaches, specific to crops, localities, cultures and other circumstances. Such diversity demands that the breadth of relevant scientific enquiry is equally diverse, and that science needs to be combined with social, economic and political perspectives. In addition to supporting high-quality science, the UK needs to maintain and build its capacity to innovate, in collaboration with international and national research centres. UK scientists and agronomists have in the past played a leading role in disciplines relevant to agriculture, but training in agricultural sciences and related topics has recently suffered from a lack of policy attention and support. Agricultural extension services, connecting farmers with new innovations, have been similarly neglected in the UK and elsewhere. There is a major need to review the support for and provision of extension services, particularly in developing countries. The governance of innovation for agriculture needs to maximise opportunities for increasing production, while at the same time protecting societies, economies and the environment from negative side effects. Regulatory systems need to improve their assessment of benefits. Horizon scanning will ensure proactive consideration of technological options by governments. Assessment of benefits, risks and uncertainties should be seen broadly, and should include the wider impacts of new technologies and practices on economies and societies. Public and stakeholder dialogue, with NGOs, scientists and farmers in particular, needs to be a part of all governance frameworks.
Abstract:
Scintillometry is an established technique for determining large areal average sensible heat fluxes. The scintillometer measurement is related to sensible heat flux via Monin–Obukhov similarity theory, which was developed for ideal homogeneous land surfaces. In this study it is shown that judicious application of scintillometry over heterogeneous mixed agriculture on undulating topography yields valid results when compared to eddy covariance (EC). A large aperture scintillometer (LAS) over a 2.4 km path was compared with four EC stations measuring sensible (H) and latent (LvE) heat fluxes over different vegetation (cereals and grass) which when aggregated were representative of the LAS source area. The partitioning of available energy into H and LvE varied strongly for different vegetation types, with H varying by a factor of three between senesced winter wheat and grass pasture. The LAS derived H agrees (one-to-one within the experimental uncertainty) with H aggregated from EC with a high coefficient of determination of 0.94. Chronological analysis shows individual fields may have a varying contribution to the areal average sensible heat flux on short (weekly) time scales due to phenological development and changing soil moisture conditions. Using spatially aggregated measurements of net radiation and soil heat flux with H from the LAS, the areal averaged latent heat flux (LvELAS) was calculated as the residual of the surface energy balance. The regression of LvELAS against aggregated LvE from the EC stations has a slope of 0.94, close to ideal, and demonstrates that this is an accurate method for the landscape-scale estimation of evaporation over heterogeneous complex topography.
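The residual calculation described in the last two sentences is a one-line energy balance; the sketch below (hypothetical values and variable names) makes it explicit:

```python
# Minimal sketch: areal-averaged latent heat flux as the residual of the
# surface energy balance, LvE_LAS = Rn - G - H_LAS (illustrative values only).
net_radiation = 450.0        # Rn, spatially aggregated net radiation (W m-2)
soil_heat_flux = 60.0        # G, spatially aggregated soil heat flux (W m-2)
sensible_heat_las = 180.0    # H derived from the large aperture scintillometer (W m-2)

latent_heat_las = net_radiation - soil_heat_flux - sensible_heat_las
print(f"LvE_LAS = {latent_heat_las:.0f} W m-2")
```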
Abstract:
CO, O3, and H2O data in the upper troposphere/lower stratosphere (UTLS) measured by the Atmospheric Chemistry Experiment Fourier Transform Spectrometer (ACE-FTS) on Canada's SCISAT-1 satellite are validated using aircraft and ozonesonde measurements. In the UTLS, validation of chemical trace gas measurements is a challenging task due to small-scale variability in the tracer fields, strong gradients of the tracers across the tropopause, and scarcity of measurements suitable for validation purposes. Validation based on coincidences therefore suffers from geophysical noise. Two alternative methods for the validation of satellite data are introduced, which avoid the usual need for coincident measurements: tracer-tracer correlations, and vertical tracer profiles relative to tropopause height. Both are increasingly being used for model validation as they strongly suppress geophysical variability and thereby provide an "instantaneous climatology". This allows comparison of measurements between non-coincident data sets which yields information about the precision and a statistically meaningful error assessment of the ACE-FTS satellite data in the UTLS. By defining a trade-off factor, we show that the measurement errors can be reduced by including more measurements obtained over a wider longitude range into the comparison, despite the increased geophysical variability. Applying the methods then yields the following upper bounds to the relative differences in the mean found between the ACE-FTS and SPURT aircraft measurements in the upper troposphere (UT) and lower stratosphere (LS), respectively: for CO ±9% and ±12%, for H2O ±30% and ±18%, and for O3 ±25% and ±19%. The relative differences for O3 can be narrowed down by using a larger dataset obtained from ozonesondes, yielding a high bias in the ACE-FTS measurements of 18% in the UT and relative differences of ±8% for measurements in the LS. When taking into account the smearing effect of the vertically limited spacing between measurements of the ACE-FTS instrument, the relative differences decrease by 5–15% around the tropopause, suggesting a vertical resolution of the ACE-FTS in the UTLS of around 1 km. The ACE-FTS hence offers unprecedented precision and vertical resolution for a satellite instrument, which will allow a new global perspective on UTLS tracer distributions.
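As a rough illustration of the tropopause-relative comparison method (hypothetical data and bin choices, not the ACE-FTS processing chain), the sketch below bins a synthetic tracer by altitude relative to the local tropopause to form a mean profile:

```python
# Minimal sketch: mean tracer profile in a tropopause-relative vertical
# coordinate, i.e. binning by (altitude - local tropopause height).
import numpy as np

rng = np.random.default_rng(5)
n = 5000
altitude = rng.uniform(5.0, 20.0, n)               # km
tropopause = rng.normal(11.0, 1.5, n)              # km, per-profile tropopause height
ozone = np.where(altitude > tropopause,            # crude synthetic O3 step across
                 400 + 80 * (altitude - tropopause),   # the tropopause (ppbv)
                 60 + rng.normal(0, 10, n))

rel_alt = altitude - tropopause                    # tropopause-relative coordinate
bins = np.arange(-6, 8, 1.0)                       # 1 km bins
idx = np.digitize(rel_alt, bins)
profile = [ozone[idx == i].mean() for i in range(1, len(bins)) if np.any(idx == i)]
print(np.round(profile, 1))
```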