Abstract:
Milk supply from Mexican dairy farms does not meet demand and small-scale farms can contribute toward closing the gap. Two multi-criteria programming techniques, goal programming and compromise programming, were used in a study of small-scale dairy farms in central Mexico. To build the goal and compromise programming models, 4 ordinary linear programming models were also developed, which had objective functions to maximize metabolizable energy for milk production, to maximize margin of income over feed costs, to maximize metabolizable protein for milk production, and to minimize purchased feedstuffs. Neither multi-criteria approach was significantly better than the other; however, by applying both models it was possible to perform a more comprehensive analysis of these small-scale dairy systems. The multi-criteria programming models affirm findings from previous work and suggest that a forage strategy based on alfalfa, rye-grass, and corn silage would meet nutrient requirements of the herd. Both models suggested that there is an economic advantage in rescheduling the calving season to the second and third calendar quarters to better synchronize higher demand for nutrients with the period of high forage availability.
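The multi-objective structure described above can be illustrated with a tiny weighted goal program. The feed compositions, goals and weights below are invented for illustration (they are not values from the study), and the search is a brute-force grid rather than a proper LP solver:

```python
# Minimal weighted goal-programming sketch (pure Python, brute force).
# All numbers are illustrative assumptions, not values from the study.

# Decision variables: kg DM/day of two feeds
energy = {"alfalfa": 9.5, "maize_silage": 10.5}   # MJ ME per kg DM (assumed)
cost   = {"alfalfa": 3.0, "maize_silage": 1.8}    # cost per kg DM (assumed)

energy_goal = 160.0   # MJ ME/day target for milk production (assumed)
cost_goal   = 35.0    # daily feed-cost ceiling (assumed)
w_energy, w_cost = 1.0, 1.0   # goal weights

def deviations(x_alf, x_sil):
    """Weighted sum of goal deviations for a candidate ration."""
    e = x_alf * energy["alfalfa"] + x_sil * energy["maize_silage"]
    c = x_alf * cost["alfalfa"] + x_sil * cost["maize_silage"]
    under_e = max(0.0, energy_goal - e)   # penalise energy shortfall
    over_c  = max(0.0, c - cost_goal)     # penalise cost overrun
    return w_energy * under_e + w_cost * over_c

# Brute-force the ration space in 0.1 kg steps (a real model would use an LP solver)
best = min(((a / 10, s / 10) for a in range(0, 201) for s in range(0, 201)),
           key=lambda xs: deviations(*xs))
print(best, round(deviations(*best), 2))
```

A real model would hand the same under- and over-achievement deviation variables to an LP solver; that penalised-deviation structure is what goal programming adds over a single-objective LP.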
Abstract:
A limitation of small-scale dairy systems in central Mexico is that traditional feeding strategies are less effective when nutrient availability varies through the year. In the present work, a linear programming (LP) model that maximizes income over feed cost was developed, and used to evaluate two strategies: the traditional one used by the small-scale dairy producers in Michoacan State, based on fresh lucerne, maize grain and maize straw; and an alternative strategy proposed by the LP model, based on ryegrass hay, maize silage and maize grain. Biological and economic efficiency were evaluated for both strategies. Results obtained with the traditional strategy agree with previously published work. The alternative strategy did not improve upon the performance of the traditional strategy because of the low metabolizable protein content of the maize silage considered by the model. However, the study recommends improvement of forage quality to increase the efficiency of small-scale dairy systems, rather than looking for concentrate supplementation.
Abstract:
Agri-environment schemes (AES) are widely used policy instruments intended to combat widespread biodiversity declines across agricultural landscapes. Here, using a light trapping and mark-release-recapture study at the field scale on nine common and widespread larger moth species, we investigate the effect of wide field margins (a popular current scheme option) and the presence of hedgerow trees (a potential scheme option in England) on moth abundance. We show that wide field margins positively affected abundances, although not all species responded in the same way. We demonstrate that this variation can be attributed to species-specific mobility characteristics. Those species for which the effect of wide margins was strongest covered shorter distances, and were more frequently recaptured at their site of first capture. This demonstrates that the standard, field-scale uptake of AES may be effective only for less mobile species. We discuss that a landscape-scale approach, in contrast, could deliver significant biodiversity gains, as our results indicate that such an approach (perhaps delivered through targeting farmers to join AES) would be effective for the majority of wider countryside species, irrespective of their mobility level. (C) 2008 Elsevier B.V. All rights reserved.
High throughput, high resolution selection of polymorphic microsatellite loci for multiplex analysis
Abstract:
Background Large-scale genetic profiling, mapping and genetic association studies require access to a series of well-characterised and polymorphic microsatellite markers with distinct and broad allele ranges. Selection of complementary microsatellite markers with non-overlapping allele ranges has historically proved to be a bottleneck in the development of multiplex microsatellite assays. The characterisation process for each microsatellite locus can be laborious and costly given the need for numerous, locus-specific fluorescent primers. Results Here, we describe a simple and inexpensive approach to select useful microsatellite markers. The system is based on the pooling of multiple unlabelled PCR amplicons and their subsequent ligation into a standard cloning vector. A second round of amplification utilising generic labelled primers targeting the vector and unlabelled locus-specific primers targeting the microsatellite flanking region yields allelic profiles that are representative of all individuals contained within the pool. Suitability of various DNA pool sizes was then tested for this purpose. DNA template pools containing between 8 and 96 individuals were assessed for the determination of allele ranges of individual microsatellite markers across a broad population. This helped resolve the balance between using pools that are large enough to allow the detection of many alleles against the risk of including too many individuals in a pool such that rare alleles are over-diluted and so do not appear in the pooled microsatellite profile. Pools of DNA from 12 individuals allowed the reliable detection of all alleles present in the pool. Conclusion The use of generic vector-specific fluorescent primers and unlabelled locus-specific primers provides a high resolution, rapid and inexpensive approach for the selection of highly polymorphic microsatellite loci that possess non-overlapping allele ranges for use in large-scale multiplex assays.
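The trade-off between pool size and dilution of rare alleles can be made concrete with a simple presence calculation. This sketch only models whether an allele of population frequency p is present among the 2n gene copies in a pool of n diploid individuals; it says nothing about whether the pooled PCR profile would still show that allele, which is the dilution risk the authors balance against pool size:

```python
# Sketch: probability that an allele of population frequency p appears at
# least once among the 2n gene copies in a pool of n diploid individuals.
# Illustrative model only -- not a calculation from the paper.
def detection_prob(p, n):
    return 1 - (1 - p) ** (2 * n)

for n in (8, 12, 96):
    print(n, round(detection_prob(0.05, n), 3))
```

Larger pools make presence near-certain, but each copy of a rare allele is a smaller fraction of the template, so its peak may fall below the profile's detection threshold; the pool of 12 reported above sits between those pressures.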
Abstract:
In previous empirical and modelling studies of rare species and weeds, evidence of fractal behaviour has been found. We propose that weeds in modern agricultural systems may be managed close to critical population dynamic thresholds, below which their rates of increase will be negative and where scale-invariance may be expected as a consequence. We collected detailed spatial data on five contrasting species over a period of three years in a primarily arable field. Counts in 20×20 cm contiguous quadrats, 225,000 in 1998 and 84,375 thereafter, could be re-structured into a wide range of larger quadrat sizes. These were analysed using three methods based on correlation sum, incidence and conditional incidence. We found non-trivial scale invariance for species occurring at low mean densities and where they were strongly aggregated. The fact that the scale-invariance was not found for widespread species occurring at higher densities suggests that the scaling in agricultural weed populations may, indeed, be related to critical phenomena.
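The re-structuring of contiguous quadrat counts into larger quadrat sizes amounts to block-summing a grid, after which incidence (the fraction of occupied quadrats) can be recomputed at each scale. A minimal sketch with made-up Poisson counts, not the survey data:

```python
import numpy as np

# Sketch: aggregate counts from small contiguous quadrats into larger ones,
# as needed for incidence-based scaling analyses. Data are simulated.
rng = np.random.default_rng(0)
counts = rng.poisson(0.2, size=(16, 16))   # e.g. counts in 20x20 cm quadrats

def coarse_grain(grid, factor):
    """Sum counts over non-overlapping factor x factor blocks."""
    n, m = grid.shape
    return (grid.reshape(n // factor, factor, m // factor, factor)
                .sum(axis=(1, 3)))

for f in (1, 2, 4, 8):
    g = coarse_grain(counts, f)
    incidence = (g > 0).mean()   # fraction of occupied quadrats at this scale
    print(f, g.shape, round(float(incidence), 3))
```

How incidence changes with quadrat size is the raw material for the incidence-based scale-invariance tests the abstract describes.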
Abstract:
Geological carbon dioxide storage (CCS) has the potential to make a significant contribution to the decarbonisation of the UK. Amid concerns over maintaining security, and hence diversity, of supply, CCS could allow the continued use of coal, oil and gas whilst avoiding the CO2 emissions currently associated with fossil fuel use. This project has explored some of the geological, environmental, technical, economic and social implications of this technology. The UK is well placed to exploit CCS with a large offshore storage capacity, both in disused oil and gas fields and saline aquifers. This capacity should be sufficient to store CO2 from the power sector (at current levels) for at least one century, using well-understood, and therefore likely lower-risk, depleted hydrocarbon fields and contained parts of aquifers. It is very difficult to produce reliable estimates of the (potentially much larger) storage capacity of the less well understood geological reservoirs such as non-confined parts of aquifers. With the majority of its large coal fired power stations due to be retired during the next 15 to 20 years, the UK is at a natural decision point with respect to the future of power generation from coal; the existence of both national reserves and the infrastructure for receiving imported coal makes clean coal technology a realistic option. The notion of CCS as a ‘bridging’ or ‘stop-gap’ technology (i.e. whilst we develop ‘genuinely’ sustainable renewable energy technologies) needs to be examined somewhat critically, especially given the scale of global coal reserves. If CCS plant is built, then it is likely that technological innovation will bring down the costs of CO2 capture, such that it could become increasingly attractive. As with any capital-intensive option, there is a danger of becoming ‘locked-in’ to a CCS system.
The costs of CCS in our model for UK power stations in the East Midlands and Yorkshire to reservoirs in the North Sea are between £25 and £60 per tonne of CO2 captured, transported and stored. This is between about 2 and 4 times the current traded price of a tonne of CO2 in the EU Emissions Trading Scheme. In addition to the technical and economic requirements of the CCS technology, it should also be socially and environmentally acceptable. Our research has shown that, given an acceptance of the severity and urgency of addressing climate change, CCS is viewed favourably by members of the public, provided it is adopted within a portfolio of other measures. The most commonly voiced concern from the public is that of leakage and this remains perhaps the greatest uncertainty with CCS. It is not possible to make general statements concerning storage security; assessments must be site specific. The impacts of any potential leakage are also somewhat uncertain but should be balanced against the deleterious effects of increased acidification in the oceans due to uptake of elevated atmospheric CO2 that have already been observed. Provided adequate long term monitoring can be ensured, any leakage of CO2 from a storage site is likely to have minimal localised impacts as long as leaks are rapidly repaired. A regulatory framework for CCS will need to include risk assessment of potential environmental and health and safety impacts, accounting and monitoring and liability for the long term. In summary, although there remain uncertainties to be resolved through research and demonstration projects, our assessment demonstrates that CCS holds great potential for significant cuts in CO2 emissions as we develop long term alternatives to fossil fuel use. CCS can contribute to reducing emissions of CO2 into the atmosphere in the near term (i.e. peak-shaving the future atmospheric concentration of CO2), with the potential to continue to deliver significant CO2 reductions over the long term.
Abstract:
Oxidized low-density lipoprotein (oxLDL) exhibits many atherogenic effects, including the promotion of monocyte recruitment to the arterial endothelium and the induction of scavenger receptor expression. However, while atherosclerosis involves chronic inflammation within the arterial intima, it is unclear whether oxLDL alone provides a direct inflammatory stimulus for monocyte-macrophages. Furthermore, oxLDL is not a single, well-defined entity, but has structural and physical properties which vary according to the degree of oxidation. We tested the hypothesis that the biological effects of oxLDL will vary according to its degree of oxidation and that some species of oxLDL will have atherogenic properties, while other species may be responsible for its inflammatory activity. The atherogenic and inflammatory properties of LDL oxidized to predetermined degrees (mild, moderate and extensive oxidation) were investigated in a single system using human monocyte-derived macrophages. Expression of CD36 mRNA was up-regulated by mildly- and moderately-oxLDL, but not highly-oxLDL. The expression of the transcription factor peroxisome proliferator-activated receptor-gamma (PPARgamma), which has been proposed to positively regulate the expression of CD36, was increased to the greatest degree by highly-oxLDL. However, the DNA binding activity of PPARgamma was increased only by mildly- and moderately-oxLDL. None of the oxLDL species appeared to be pro-inflammatory towards monocytes, either directly or indirectly through mediators derived from lymphocytes, regardless of the degree of oxidation. (C) 2003 Published by Elsevier Science Ireland Ltd.
Abstract:
Given the paucity of research in this area, the primary aim of this study was to explore how parents of infants with unclear sex at birth made sense of 'intersex'. Qualitative methods were used (semi-structured interviews, interpretative phenomenological analysis) with 10 parents to generate pertinent themes and provide ideas for further research. Our analysis highlights the fundamental shock engendered by the uncertain sex status of children, and documents parental struggles to negotiate a coherent sex identity for their children. Findings are discussed in light of the rigid two-sex system which pervades medicine and everyday life, and we argue that greater understanding of the complexity of sex and gender is required in order to facilitate better service provision and, ultimately, greater informed consent and parental participation regarding decisions about their children's status.
Abstract:
Patients want and need comprehensive and accurate information about their medicines so that they can participate in decisions about their healthcare. In particular, they require information about the likely risks and benefits that are associated with the different treatment options. However, to provide this information in a form that people can readily understand and use is a considerable challenge to healthcare professionals. One recent attempt to standardise the language of risk has been to produce sets of verbal descriptors that correspond to specific probability ranges, such as those outlined in the European Commission (EC) Pharmaceutical Committee guidelines in 1998 for describing the incidence of adverse effects. This paper provides an overview of a number of studies involving members of the general public, patients, and hospital doctors, that evaluated the utility of the EC guideline descriptors (very common, common, uncommon, rare, very rare). In all studies it was found that people significantly over-estimated the likelihood of adverse effects occurring, given specific verbal descriptors. This in turn resulted in significantly higher ratings of their perceived risks to health and significantly lower ratings of their likelihood of taking the medicine. Such problems of interpretation are not restricted to the EC guideline descriptors. Similar levels of misinterpretation have also been demonstrated with two other recently advocated risk scales (Calman's verbal descriptor scale and Barclay, Costigan and Davies' lottery scale). In conclusion, the challenge for risk communicators and for future research will be to produce a language of risk that is sufficiently flexible to take into account different perspectives, as well as changing circumstances and contexts of illness and its treatments.
In the meantime, we urge the EC and other legislative bodies to stop recommending the use of specific verbal labels or phrases until there is a stronger evidence base to support their use.
Abstract:
Objectives: To examine doctors' (Experiment 1) and doctors' and lay people's (Experiment 2) interpretations of two sets of recommended verbal labels for conveying information about side effects incidence rates. Method: Both studies used a controlled empirical methodology in which participants were presented with a hypothetical, but realistic, scenario involving a prescribed medication that was said to be associated with either mild or severe side effects. The probability of each side effect was described using one of the five descriptors advocated by the European Union (Experiment 1) or one of the six descriptors advocated in Calman's risk scale (Experiment 2), and study participants were required to estimate (numerically) the probability of each side effect occurring. Key findings: Experiment 1 showed that the doctors significantly overestimated the risk of side effects occurring when interpreting the five EU descriptors, compared with the assigned probability ranges. Experiment 2 showed that both groups significantly overestimated risk when given the six Calman descriptors, although the degree of overestimation was not as great for the doctors as for the lay people. Conclusion: On the basis of our findings, we argue that we are still a long way from achieving a standardised language of risk for use by both professionals and the general public, although there might be more potential for use of standardised terms among professionals. In the meantime, the EU and other regulatory bodies and health professionals should be very cautious about advocating the use of particular verbal labels for describing medication side effects.
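The EC descriptors discussed in both studies map to fixed frequency bands. A small lookup following the commonly cited guideline values; treat the exact bounds as an assumption here rather than a quotation from the guideline:

```python
# EC (1998) verbal descriptors and their assigned frequency bands
# (lower bound inclusive, upper bound exclusive); bounds as commonly cited.
EC_BANDS = {
    "very common": (0.10, 1.00),     # > 10%
    "common":      (0.01, 0.10),     # 1-10%
    "uncommon":    (0.001, 0.01),    # 0.1-1%
    "rare":        (0.0001, 0.001),  # 0.01-0.1%
    "very rare":   (0.0, 0.0001),    # < 0.01%
}

def descriptor_for(p):
    """Return the EC descriptor whose band contains probability p."""
    for label, (lo, hi) in EC_BANDS.items():
        if lo <= p < hi or (label == "very common" and p == 1.0):
            return label
    raise ValueError("probability out of range")

print(descriptor_for(0.05))   # -> common
```

The studies' finding is precisely that lay and professional estimates for a label like "common" fall well above its assigned 1-10% band, so the mapping above does not match how the words are read.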
Abstract:
In the 1990s the Message Passing Interface Forum defined MPI bindings for Fortran, C, and C++. With the success of MPI these relatively conservative languages have continued to dominate in the parallel computing community. There are compelling arguments in favour of more modern languages like Java. These include portability, better runtime error checking, modularity, and multi-threading. But these arguments have not converted many HPC programmers, perhaps due to the scarcity of full-scale scientific Java codes, and the lack of evidence for performance competitive with C or Fortran. This paper tries to redress this situation by porting two scientific applications to Java. Both of these applications are parallelized using our thread-safe Java messaging system—MPJ Express. The first application is the Gadget-2 code, which is a massively parallel structure formation code for cosmological simulations. The second application uses the finite-difference time-domain (FDTD) method for simulations in the area of computational electromagnetics. We evaluate and compare the performance of the Java and C versions of these two scientific applications, and demonstrate that the Java codes can achieve performance comparable with legacy applications written in conventional HPC languages. Copyright © 2009 John Wiley & Sons, Ltd.
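The point-to-point pattern such ported applications rely on, with ranks exchanging messages and one rank gathering partial results, can be mimicked in a few lines. This is a toy analogue using Python threads and queues, not the MPJ Express or MPI API:

```python
import threading, queue

# Toy analogue of MPI-style point-to-point messaging (the pattern MPJ
# Express provides for Java): each "rank" is a thread with a mailbox.
class Comm:
    def __init__(self, size):
        self.size = size
        self._boxes = [queue.Queue() for _ in range(size)]
    def send(self, data, dest):
        self._boxes[dest].put(data)
    def recv(self, rank):
        return self._boxes[rank].get()   # blocks until a message arrives

results = {}
def worker(comm, rank):
    if rank == 0:
        # rank 0 gathers one partial sum from each of the other ranks
        results["total"] = sum(comm.recv(0) for _ in range(comm.size - 1))
    else:
        partial = sum(range(rank * 10, rank * 10 + 10))
        comm.send(partial, dest=0)

comm = Comm(4)
threads = [threading.Thread(target=worker, args=(comm, r)) for r in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(results["total"])   # -> 735
```

Real MPI ranks are separate processes on separate nodes; the blocking send/recv discipline and the gather-at-root shape are what carry over.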
Abstract:
Evidence is presented of widespread changes in structure and species composition between the 1980s and 2003–2004 from surveys of 249 British broadleaved woodlands. Structural components examined include canopy cover, vertical vegetation profiles, field-layer cover and deadwood abundance. Woods were located in 13 geographical localities and the patterns of change were examined for each locality as well as across all woods. Changes were not uniform throughout the localities; overall, there were significant decreases in canopy cover and increases in sub-canopy (2–10 m) cover. Changes in 0.5–2 m vegetation cover showed strong geographic patterns, increasing in western localities, but declining or showing no change in eastern localities. There were significant increases in canopy ash Fraxinus excelsior and decreases in oak Quercus robur/petraea. Shrub layer ash and honeysuckle Lonicera periclymenum increased, while birch Betula spp., hawthorn Crataegus monogyna and hazel Corylus avellana declined. Within the field layer, both bracken Pteridium aquilinum and herbs increased. Overall, deadwood generally increased. Changes were consistent with reductions in active woodland management and changes in grazing and browsing pressure. These findings have important implications for sustainable active management of British broadleaved woodlands to meet silvicultural and biodiversity objectives.
Abstract:
The signal transduction pathways that mediate the cardioprotective effects of ischemic preconditioning remain unclear. Here we have determined the role of a novel kinase, protein kinase D (PKD), in mediating preconditioning in the rat heart. Isolated rat hearts (n=6/group) were subjected to either: (i) 36 min aerobic perfusion (control); (ii) 20 min aerobic perfusion plus 3 min no-flow ischemia, 3 min reperfusion, 5 min no-flow ischemia, 5 min reperfusion (ischemic preconditioning); (iii) 20 min aerobic perfusion plus 200 nmol/l phorbol 12-myristate 13-acetate (PMA) given as a substitute for ischemic preconditioning. The left ventricle then was excised, homogenized and PKD immunoprecipitated from the homogenate. Activity of the purified kinase was determined following incubation with [γ-32P]-ATP ± syntide-2, a substrate for PKD. Significant PKD autophosphorylation and syntide-2 phosphorylation occurred in PMA-treated hearts, but not in control or preconditioned hearts. Additional studies confirmed that recovery of LVDP was greater and initiation of ischemic contracture and time-to-peak contracture were less, in ischemic preconditioned hearts compared with controls (P<0.05). Our results suggest that the early events that mediate ischemic preconditioning in the rat heart occur via a PKD-independent mechanism.
Abstract:
The ability of four operational weather forecast models [ECMWF, Action de Recherche Petite Echelle Grande Echelle model (ARPEGE), Regional Atmospheric Climate Model (RACMO), and Met Office] to generate a cloud at the right location and time (the cloud frequency of occurrence) is assessed in the present paper using a two-year time series of observations collected by profiling ground-based active remote sensors (cloud radar and lidar) located at three different sites in western Europe (Cabauw, Netherlands; Chilbolton, United Kingdom; and Palaiseau, France). Particular attention is given to potential biases that may arise from instrumentation differences (especially sensitivity) from one site to another and intermittent sampling. In a second step the statistical properties of the cloud variables involved in most advanced cloud schemes of numerical weather forecast models (ice water content and cloud fraction) are characterized and compared with their counterparts in the models. The two years of observations are first considered as a whole in order to evaluate the accuracy of the statistical representation of the cloud variables in each model. It is shown that all models tend to produce too many high-level clouds, with too-high cloud fraction and ice water content. The midlevel and low-level cloud occurrence is also generally overestimated, with too-low cloud fraction but a correct ice water content. The dataset is then divided into seasons to evaluate the potential of the models to generate different cloud situations in response to different large-scale forcings. Strong variations in cloud occurrence are found in the observations from one season to the same season the following year as well as in the seasonal cycle. Overall, the model biases observed using the whole dataset are still found at seasonal scale, but the models generally manage to well reproduce the observed seasonal variations in cloud occurrence.
Overall, models do not generate the same cloud fraction distributions and these distributions do not agree with the observations. Another general conclusion is that the use of continuous ground-based radar and lidar observations is definitely a powerful tool for evaluating model cloud schemes and for a responsive assessment of the benefit achieved by changing or tuning a model cloud scheme.
Abstract:
The transport sector emits a wide variety of gases and aerosols, with distinctly different characteristics which influence climate directly and indirectly via chemical and physical processes. Tools that allow these emissions to be placed on some kind of common scale in terms of their impact on climate have a number of possible uses such as: in agreements and emission trading schemes; when considering potential trade-offs between changes in emissions resulting from technological or operational developments; and/or for comparing the impact of different environmental impacts of transport activities. Many of the non-CO2 emissions from the transport sector are short-lived substances, not currently covered by the Kyoto Protocol. There are formidable difficulties in developing metrics and these are particularly acute for such short-lived species. One difficulty concerns the choice of an appropriate structure for the metric (which may depend on, for example, the design of any climate policy it is intended to serve) and the associated value judgements on the appropriate time periods to consider; these choices affect the perception of the relative importance of short- and long-lived species. A second difficulty is the quantification of input parameters (due to underlying uncertainty in atmospheric processes). In addition, for some transport-related emissions, the values of metrics (unlike the gases included in the Kyoto Protocol) depend on where and when the emissions are introduced into the atmosphere – both the regional distribution and, for aircraft, the distribution as a function of altitude, are important. In this assessment of such metrics, we present Global Warming Potentials (GWPs) as these have traditionally been used in the implementation of climate policy. We also present Global Temperature Change Potentials (GTPs) as an alternative metric, as this, or a similar metric may be more appropriate for use in some circumstances. 
We use radiative forcings and lifetimes from the literature to derive GWPs and GTPs for the main transport-related emissions, and discuss the uncertainties in these estimates. We find large variations in metric (GWP and GTP) values for NOx, mainly due to the dependence on location of emissions but also because of inter-model differences and differences in experimental design. For aerosols we give only global-mean values due to an inconsistent picture amongst available studies regarding regional dependence. The uncertainty in the presented metric values reflects the current state of understanding; the ranking of the various components with respect to our confidence in the given metric values is also given. While the focus is mostly on metrics for comparing the climate impact of emissions, many of the issues are equally relevant for stratospheric ozone depletion metrics, which are also discussed.
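The horizon dependence discussed above can be made concrete for a gas whose abundance decays with a single exponential lifetime tau, for which the absolute GWP is AGWP(H) = A * tau * (1 - exp(-H / tau)). The radiative efficiency, lifetime, and CO2 reference values below are illustrative assumptions, not numbers from the paper:

```python
import math

# Sketch of the GWP structure for a single-lifetime gas. All values are
# assumed for illustration (roughly methane-like); CO2's AGWP is supplied
# as assumed per-horizon reference numbers rather than computed from its
# multi-exponential impulse response.
def agwp(A, tau, H):
    """Absolute GWP: time-integrated forcing of a 1 kg pulse over H years."""
    return A * tau * (1.0 - math.exp(-H / tau))

A_GAS, TAU_GAS = 1.3e-13, 12.0   # W m-2 kg-1 and years (assumed)
AGWP_CO2 = {20: 2.5e-14, 100: 9.2e-14, 500: 3.0e-13}   # W m-2 yr kg-1 (assumed)

gwp = {H: agwp(A_GAS, TAU_GAS, H) / AGWP_CO2[H] for H in (20, 100, 500)}
print({H: round(v, 1) for H, v in gwp.items()})
```

The falling ratio as H lengthens is exactly the value judgement the text describes: the gas's own integral saturates at A * tau once H far exceeds tau, while CO2's reference integral keeps growing, so longer horizons down-weight short-lived species.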