105 results for yield simulation
Abstract:
The material in genebanks includes valuable traditional varieties and landraces, non-domesticated species, advanced and obsolete cultivars, breeding lines and genetic stocks. It is this wide variety of potentially useful genetic diversity that makes collections valuable. While most of the yield increases to date have resulted from manipulation of a few major traits (such as height, photoperiodism, and vernalization), meeting future demand for increased yields will require exploitation of novel genetic resources. Many traits have been reported to have potential to enhance yield, and high expression of these can be found in germplasm collections. To boost yield in irrigated situations, spike fertility must be improved simultaneously with photosynthetic capacity. CIMMYT's Wheat Genetic Resources program has identified a source of multi-ovary florets, with up to 6 kernels per floret. Lines from landrace collections have been identified that have very high chlorophyll concentration, which may increase leaf photosynthetic rate. High chlorophyll concentration and high stomatal conductance are associated with heat tolerance. Recent studies, through augmented use of seed multiplication nurseries, identified high expression of these traits in bank accessions, and both traits were heritable. Searches are underway for drought tolerance traits related to remobilization of stem fructans, awn photosynthesis, osmotic adjustment, and pubescence. The production of synthetic wheats from wild relatives has also introduced novel genetic diversity.
Abstract:
Agricultural ecosystems and their associated business and government systems are diverse and varied. They range from farms, to input supply businesses, to marketing and government policy systems, among others. These systems are dynamic and responsive to fluctuations in climate. Skill in climate prediction offers considerable opportunities to managers via its potential to realise system improvements (i.e. increased food production and profit and/or reduced risks). Realising these opportunities, however, is not straightforward, as the forecasting skill is imperfect and approaches to applying the existing skill to management issues have not been developed and tested extensively. While much has been written about the impacts of climate variability, relatively little has been done in relation to applying knowledge of climate predictions to modify actions ahead of likely impacts. However, a considerable body of effort in various parts of the world is now being focused on this issue of applying climate predictions to improve agricultural systems. In this paper, we outline the basis for climate prediction, with emphasis on the El Niño-Southern Oscillation phenomenon, and catalogue experiences at field, national and global scales in applying climate predictions to agriculture. These diverse experiences are synthesised to derive general lessons about approaches to applying climate prediction in agriculture. The case studies have been selected to represent a diversity of agricultural systems and scales of operation. They also represent the on-going activities of some of the key research and development groups in this field around the world. The case studies include applications at field/farm scale to dryland cropping systems in Australia, Zimbabwe, and Argentina. This spectrum covers resource-rich and resource-poor farming with motivations ranging from profit to food security. At national and global scale we consider possible applications of climate prediction in commodity forecasting (wheat in Australia) and examine the implications for global wheat trade and price associated with the global consequences of climate prediction. In cataloguing these experiences we note some general lessons. Foremost is the value of an interdisciplinary systems approach in connecting disciplinary knowledge in a manner most suited to decision-makers. This approach often includes scenario analysis based on simulation with credible models as a key aspect of the learning process. Interaction among researchers, analysts and decision-makers is vital in the development of effective applications; all of the players learn. Issues associated with the balance between information demand and supply, as well as the awareness limitations of decision-makers, analysts, and scientists, are highlighted. It is argued that understanding and communicating decision risks is one of the keys to successful applications of climate prediction. We consider that advances of the future will be made by better connecting agricultural scientists and practitioners with the science of climate prediction. Professionals involved in decision making must take a proactive role in the development of climate forecasts if the design and use of climate predictions are to reach their full potential. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
The development of cropping systems simulation capabilities world-wide, combined with easy access to powerful computing, has resulted in a plethora of agricultural models and, consequently, model applications. Nonetheless, the scientific credibility of such applications and their relevance to farming practice is still being questioned. Our objective in this paper is to highlight some of the model applications from which benefits for farmers were or could be obtained via changed agricultural practice or policy. Changed on-farm practice due to the direct contribution of modelling, while keenly sought after, may in some cases be less achievable than a contribution via agricultural policies. This paper is intended to give some guidance for future model applications. It is not a comprehensive review of model applications, nor is it intended to discuss modelling in the context of social science or extension policy. Rather, we take snapshots around the globe to 'take stock' and to demonstrate that well-defined financial and environmental benefits can be obtained on-farm from the use of models. We highlight the importance of 'relevance' and hence the importance of true partnerships between all stakeholders (farmers, scientists, advisers) for the successful development and adoption of simulation approaches. Specifically, we address some key points that are essential for successful model applications, such as: (1) issues to be addressed must be neither trivial nor obvious; (2) a modelling approach must reduce complexity rather than proliferate choices in order to aid the decision-making process; and (3) the cropping systems must be sufficiently flexible to allow management interventions based on insights gained from models. The pros and cons of normative approaches (e.g. decision support software that can reach a wide audience quickly but is often poorly contextualized for any individual client) versus model applications within the context of an individual client's situation are also discussed. We suggest that a tandem approach is necessary whereby the latter is used in the early stages of model application for confidence building amongst client groups. This paper focuses on five specific regions that differ fundamentally in terms of environment and socio-economic structure and hence in their requirements for successful model applications. Specifically, we give examples from Australia and South America (high climatic variability, large areas, low input, technologically advanced); Africa (high climatic variability, small areas, low input, subsistence agriculture); India (high climatic variability, small areas, medium-level inputs, technologically progressing); and Europe (relatively low climatic variability, small areas, high input, technologically advanced). The contrast between Australia and Europe further demonstrates how successful model applications are strongly influenced by the policy framework within which producers operate. We suggest that this might eventually lead to better adoption of fully integrated systems approaches and result in the development of resilient farming systems that are in tune with current climatic conditions and are adaptable to biophysical and socioeconomic variability and change. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
The two-node tandem Jackson network serves as a convenient reference model for the analysis and testing of different methodologies and techniques in rare event simulation. In this paper we consider a new approach to efficiently estimate the probability that the content of the second buffer exceeds some high level L before it becomes empty, starting from a given state. The approach is based on a Markov additive process representation of the buffer processes, leading to an exponential change of measure to be used in an importance sampling procedure. Unlike changes of measure proposed and studied in recent literature, the one derived here is a function of the content of the first buffer. We prove that when the first buffer is finite, this method yields asymptotically efficient simulation for any set of arrival and service rates. In fact, the relative error is bounded independently of the level L, a new result that has not been established for any other known method. When the first buffer is infinite, we propose a natural extension of the exponential change of measure for the finite buffer case. In this case, the relative error is shown to be bounded (independently of L) only when the second server is the bottleneck, a result which is known to hold for some other methods derived through large deviations analysis. When the first server is the bottleneck, experimental results using our method seem to suggest that the relative error grows at most linearly in L.
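As a point of reference only, the sketch below (Python) illustrates importance sampling for this kind of overflow probability on the embedded jump chain of a two-node tandem queue. It uses the classical static heuristic of interchanging the arrival rate with the bottleneck service rate, not the buffer-content-dependent change of measure derived in the paper, and the rates, start state and overflow level are hypothetical.

    import random

    def jump_probs(x1, x2, lam, mu1, mu2):
        # Embedded jump-chain transition probabilities in state (x1, x2).
        rates = {"arrival": lam,
                 "svc1": mu1 if x1 > 0 else 0.0,
                 "svc2": mu2 if x2 > 0 else 0.0}
        total = sum(rates.values())
        return {ev: r / total for ev, r in rates.items()}

    def overflow_prob_is(lam, mu1, mu2, level, start=(0, 1), n_runs=100_000, seed=0):
        # Importance-sampling estimate of P(second buffer reaches `level` before
        # emptying), using the heuristic change of measure that interchanges the
        # arrival rate with the service rate of node 2 (assumed bottleneck).
        rng = random.Random(seed)
        lam_t, mu1_t, mu2_t = mu2, mu1, lam          # tilted rates used for sampling
        total = 0.0
        for _ in range(n_runs):
            (x1, x2), lr = start, 1.0
            while 0 < x2 < level:
                p = jump_probs(x1, x2, lam, mu1, mu2)        # original dynamics
                q = jump_probs(x1, x2, lam_t, mu1_t, mu2_t)  # sampling dynamics
                u, acc, chosen = rng.random(), 0.0, "svc2"
                for ev, prob in q.items():
                    acc += prob
                    if u <= acc:
                        chosen = ev
                        break
                lr *= p[chosen] / q[chosen]                  # accumulate likelihood ratio
                if chosen == "arrival":
                    x1 += 1
                elif chosen == "svc1":
                    x1, x2 = x1 - 1, x2 + 1
                else:
                    x2 -= 1
            if x2 >= level:
                total += lr                                  # weighted count of overflows
        return total / n_runs

    # Hypothetical rates, purely for illustration: lambda = 1, mu1 = 3, mu2 = 2, L = 20.
    print(overflow_prob_is(1.0, 3.0, 2.0, level=20))

Because the overflow event depends only on the order of transitions, weighting each sampled path by the likelihood ratio of the embedded jump chain gives an unbiased estimate of the probability.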
Abstract:
A two-dimensional numerical simulation model of interface states in scanning capacitance microscopy (SCM) measurements of p-n junctions is presented. In the model, amphoteric interface states with two transition energies in the Si band gap are represented as fixed charges to account for their behavior in SCM measurements. The interface states are shown to cause a stretch-out and a parallel shift of the capacitance-voltage characteristics in the depletion and neutral regions of p-n junctions, respectively. This explains the discrepancy between the SCM measurement and simulation near p-n junctions, and thus modeling interface states is crucial for SCM dopant profiling of p-n junctions. (C) 2002 American Institute of Physics.
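For orientation, the standard MOS gate-voltage balance (textbook relations, not taken from the paper) indicates why fixed charge and interface traps act differently on the capacitance-voltage curve:

    V_G(\psi_s) = V_{FB} + \psi_s - \frac{Q_s(\psi_s) + Q_{it}(\psi_s)}{C_{ox}}, \qquad V_{FB} = \phi_{ms} - \frac{Q_f}{C_{ox}}

Because the fixed charge Q_f does not depend on the surface potential \psi_s, it translates the curve rigidly along the voltage axis (a parallel shift), whereas the trapped charge Q_{it}(\psi_s) varies as states cross the Fermi level and therefore stretches the curve out.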
Abstract:
Genetic research on risk of alcohol, tobacco or drug dependence must make allowance for the partial overlap of risk-factors for initiation of use and risk-factors for dependence or other outcomes in users. Except in the extreme cases where genetic and environmental risk-factors for initiation and dependence overlap completely or are uncorrelated, there is no consensus about how best to estimate the magnitude of genetic or environmental correlations between Initiation and Dependence in twin and family data. We explore by computer simulation the biases in estimates of genetic and environmental parameters caused by model misspecification when Initiation can only be defined as a binary variable. For plausible simulated parameter values, the two-stage genetic models that we consider yield estimates of genetic and environmental variances for Dependence that, although biased, are not very discrepant from the true values. However, estimates of genetic (or environmental) correlations between Initiation and Dependence may be seriously biased, and may differ markedly under different two-stage models. Such estimates may have little credibility unless external data favor selection of one particular model. These problems can be avoided if Initiation can be assessed as a multiple-category variable (e.g. never versus early-onset versus later-onset user), with at least two categories measurable in users at risk for dependence. Under these conditions, and under certain distributional assumptions, recovery of simulated genetic and environmental correlations becomes possible. Illustrative application of the model to Australian twin data on smoking confirmed substantial heritability of smoking persistence (42%) with minimal overlap with genetic influences on initiation.
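To make the two-stage structure concrete, the following minimal data-generating sketch (Python, a single-individual liability-threshold illustration with assumed heritabilities and genetic correlation, not the authors' twin model) shows how Dependence is only observable in those who initiate:

    import numpy as np

    def simulate_two_stage(n=100_000, h2_init=0.4, h2_dep=0.5, rg=0.3,
                           thr_init=0.0, thr_dep=1.0, seed=0):
        # Liability-threshold sketch: Dependence is only observed (as a binary
        # outcome) in individuals whose Initiation liability exceeds its threshold.
        rng = np.random.default_rng(seed)
        g = rng.multivariate_normal([0.0, 0.0], [[1.0, rg], [rg, 1.0]], size=n)  # correlated genetic factors
        e = rng.standard_normal((n, 2))                                          # unique environment
        liab_init = np.sqrt(h2_init) * g[:, 0] + np.sqrt(1 - h2_init) * e[:, 0]
        liab_dep = np.sqrt(h2_dep) * g[:, 1] + np.sqrt(1 - h2_dep) * e[:, 1]
        initiated = liab_init > thr_init                              # binary Initiation
        dependent = np.where(initiated, liab_dep > thr_dep, np.nan)   # missing in never-users
        return initiated, dependent

    init, dep = simulate_two_stage()
    print(init.mean(), np.nanmean(dep))   # initiation prevalence; dependence among initiators

The missing Dependence outcomes for never-users illustrate the censoring that the two-stage models must handle when Initiation is only a binary variable.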
Abstract:
The splitting method is a simulation technique for the estimation of very small probabilities. In this technique, the sample paths are split into multiple copies, at various stages in the simulation. Of vital importance to the efficiency of the method is the Importance Function (IF). This function governs the placement of the thresholds or surfaces at which the paths are split. We derive a characterisation of the optimal IF and show that for multi-dimensional models the natural choice for the IF is usually not optimal. We also show how nearly optimal splitting surfaces can be derived or simulated using reverse time analysis. Our numerical experiments illustrate that by using the optimal IF, one can obtain a significant improvement in simulation efficiency.
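A minimal sketch (Python) of the splitting idea for a one-dimensional birth-death chain, using the buffer content itself as the importance function; the up-step probability, thresholds and split factor are illustrative choices, not values from the paper.

    import random

    def run_segment(x, hi, p, rng):
        # Birth-death walk: up with probability p, down with 1 - p,
        # until the path reaches `hi` or falls back to 0.
        while 0 < x < hi:
            x += 1 if rng.random() < p else -1
        return x

    def splitting_estimate(p, levels, n0=1000, r=8, seed=1):
        # Fixed-splitting estimator of P(hit levels[-1] before 0 | start at 1),
        # with the buffer content itself as the importance function.
        rng = random.Random(seed)
        survivors = [levels[0] for _ in range(n0)
                     if run_segment(1, levels[0], p, rng) == levels[0]]
        weight = n0
        for hi in levels[1:]:
            survivors = [hi for x in survivors for _ in range(r)
                         if run_segment(x, hi, p, rng) == hi]   # split each survivor into r copies
            weight *= r
        return len(survivors) / weight

    # Illustration: up-probability 0.4, target level 30, thresholds every 5 units.
    p, L = 0.4, 30
    est = splitting_estimate(p, levels=list(range(5, L + 1, 5)))
    exact = (1 - (1 - p) / p) / (1 - ((1 - p) / p) ** L)   # gambler's-ruin check
    print(est, exact)

The run prints the fixed-splitting estimate alongside the exact gambler's-ruin probability, which serves as a sanity check in this simple one-dimensional case where the natural importance function is also a good one.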
Abstract:
We are witnessing an enormous growth in biological nitrogen removal from wastewater. It presents specific challenges beyond traditional COD (carbon) removal. A possibility for optimised process design is the use of biomass-supporting media. In this paper, attached growth processes (AGP) are evaluated using dynamic simulations. The advantages of these systems, which were qualitatively described elsewhere, are validated quantitatively based on a simulation benchmark for activated sludge treatment systems. This simulation benchmark is extended with a biofilm model that allows for fast and accurate simulation of the conversion of different substrates in a biofilm. The economic feasibility of this system is evaluated using the data generated with the benchmark simulations. Capital savings due to volume reduction and reduced sludge production are weighed against increased aeration costs. Effluent quality is also integrated into this evaluation.
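Purely as an illustration of how such a trade-off can be expressed (Python, with entirely hypothetical figures, not from the paper), capital and operating costs can be compared on an equivalent annual cost basis:

    def annualized_cost(capex, opex_per_year, lifetime_years=20, discount_rate=0.06):
        # Equivalent annual cost: annualized capital plus yearly operating cost.
        crf = discount_rate / (1 - (1 + discount_rate) ** -lifetime_years)  # capital recovery factor
        return capex * crf + opex_per_year

    # Hypothetical figures, purely for illustration:
    conventional = annualized_cost(capex=5.0e6, opex_per_year=4.0e5)     # larger reactor volume
    attached_growth = annualized_cost(capex=3.5e6, opex_per_year=5.2e5)  # smaller volume, more aeration
    print(conventional, attached_growth)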
Abstract:
This article proposes a more accurate approach to dopant extraction using combined inverse modeling and forward simulation of scanning capacitance microscopy (SCM) measurements on p-n junctions. The approach takes into account the essential physics of minority carrier response to the SCM probe tip in the presence of lateral electric fields due to a p-n junction. The effects of oxide fixed charge and interface state densities in the grown oxide layer on the p-n junction samples were considered in the proposed method. The extracted metallurgical and electrical junctions were compared to the apparent electrical junction obtained from SCM measurements. (C) 2002 American Institute of Physics.
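The inverse-modelling idea can be sketched generically in Python: adjust a junction parameter until a forward simulation reproduces the measured signal. The forward model below is a deliberately crude stand-in (a smooth step), and all positions, widths and noise levels are hypothetical.

    import numpy as np

    def forward_signal(x, junction_depth, width=0.05):
        # Crude stand-in for a forward SCM simulation: a smooth signal whose
        # zero crossing tracks the assumed electrical junction position.
        return np.tanh((x - junction_depth) / width)

    def extract_junction(x, measured, candidates):
        # Inverse modelling by brute-force least squares: choose the junction
        # depth whose simulated signal best matches the measured one.
        errors = [np.sum((forward_signal(x, d) - measured) ** 2) for d in candidates]
        return candidates[int(np.argmin(errors))]

    # Synthetic 'measurement' with noise, purely for illustration.
    x = np.linspace(0.0, 1.0, 200)          # position along the scan (arbitrary units)
    rng = np.random.default_rng(1)
    measured = forward_signal(x, 0.42) + 0.05 * rng.standard_normal(x.size)
    print(extract_junction(x, measured, np.linspace(0.3, 0.5, 201)))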
Abstract:
Land use intensification is estimated to result in an overall increase in sediment delivery to the Great Barrier Reef lagoon by a factor of approximately four. Modelling suggests that, following land use intensification, croplands cause the greatest increase of sediment yield and sediment concentration, whereas erosion of grazing land is the main contemporary source of sediments, primarily owing to the large spatial extent of this land use. The spatial pattern of sediment yield to the coast after land use intensification is strongly correlated with the pattern under natural conditions, although the greatest increase is estimated to have occurred in the wet-dry catchments. Sediment transport and resuspension processes have led to the development of a strongly sediment-partitioned shelf, with modern mud-rich sediments almost exclusively restricted to the inner and inner-middle shelf, northward-facing embayments and in the lee of headlands. Elevated sediment concentrations increase the potential transport rates of nutrients and other pollutants. Whether increased sediment supply to the coastal zone has impacted on reefs remains a point of contention. More sediment load data need to be collected and analysed in order to make detailed estimates of catchment yields and establish the possible sediment impact on the Great Barrier Reef.
Abstract:
The performance of the Oxford University Gun Tunnel has been estimated using a quasi-one-dimensional simulation of the facility gas dynamics. The modelling of the actual facility area variations so as to adequately simulate both shock reflection and flow discharge processes has been considered in some detail. Test gas stagnation pressure and temperature histories are compared with measurements at two different operating conditions - one with nitrogen and the other with carbon dioxide as the test gas. It is demonstrated that both the simulated pressures and temperatures are typically within 3% of the experimental measurements.
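For reference, the ideal calorically-perfect-gas relation for reflection of a shock from a rigid end wall (a textbook shock-tube result, not taken from the paper, and only an approximation to the process resolved by the quasi-one-dimensional facility simulation) gives the reflected-shock pressure ratio as

    \frac{p_5}{p_2} = \frac{(3\gamma - 1)\,(p_2/p_1) - (\gamma - 1)}{(\gamma - 1)\,(p_2/p_1) + (\gamma + 1)},

where p_1 is the initial test-gas pressure, p_2 the pressure behind the incident shock and p_5 the nominal stagnation pressure behind the reflected shock; for \gamma = 1.4 the strong-shock limit of p_5/p_2 is 8.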