Abstract:
The characterization of three commercial activated carbons was carried out using the adsorption of various compounds in the aqueous phase. For this purpose, the generalized adsorption isotherm was employed, and a modification of the Dubinin-Radushkevich pore filling model, incorporating repulsive contributions to the pore potential as well as bulk liquid phase nonideality, was used as the local isotherm. Eight different flavor compounds were used as adsorbates, and the isotherms were jointly fitted to yield a common pore size distribution for each carbon. The bulk liquid phase nonideality was incorporated through the UNIFAC activity coefficient model, and the repulsive contribution to the pore potential was incorporated through the Steele 10-4-3 potential model. The mean micropore network coordination number for each carbon was also determined from the fitted saturation capacity based on percolation theory. Good agreement between the model and the experimental data was observed. In addition, excellent agreement between the bimodal gamma pore size distribution and the density functional theory-cum-regularization-based pore size distribution obtained by argon adsorption was also observed, supporting the validity of the model. The results show that liquid phase adsorption, using adsorptive molecules of different sizes, can be an effective means of characterizing the pore size distribution as well as connectivity. Alternatively, if the carbon pore size distribution is independently known, the method can be used to measure critical molecular sizes. (C) 2001 Elsevier Science.
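For reference, the Steele 10-4-3 solid-fluid potential invoked above is usually written in the following standard form. The abstract does not give the authors' exact parameterization (for instance, how the two opposing slit-pore walls are combined), so this is the textbook expression rather than the paper's equation:

```latex
\varphi_{sf}(z) \;=\; 2\pi \rho_s \varepsilon_{sf} \sigma_{sf}^{2} \Delta
\left[
  \frac{2}{5}\left(\frac{\sigma_{sf}}{z}\right)^{10}
  - \left(\frac{\sigma_{sf}}{z}\right)^{4}
  - \frac{\sigma_{sf}^{4}}{3\Delta\,(z + 0.61\Delta)^{3}}
\right]
```

Here z is the distance from the pore wall, rho_s the density of solid atoms, Delta the graphite interlayer spacing, and epsilon_sf, sigma_sf the solid-fluid Lennard-Jones parameters; for a slit pore of width H the contributions from the two walls, phi_sf(z) + phi_sf(H - z), are typically summed.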
Abstract:
We developed a general model to assess patient activity within the primary and secondary health-care sectors following a dermatology outpatient consultation. Based on observed variables from the UK teledermatology trial, the model showed that up to 11 doctor-patient interactions occurred before a patient was ultimately discharged from care. In a cohort of 1000 patients, the average number of health-care visits was 2.4 (range 1-11). Simulation analysis suggested that the most important parameter affecting the total number of doctor-patient interactions is patient discharge from care following the initial consultation. This implies that resources should be concentrated in this area. The introduction of teledermatology (either real-time or store-and-forward) changes the values of the model parameters. The model provides a quantitative tool for planning the future provision of dermatology health-care.
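The cohort behaviour described above can be sketched with a toy Monte Carlo model. The per-visit discharge probability below is a hypothetical value chosen so that the truncated-geometric mean lands near the reported 2.4 visits; it is not a parameter taken from the trial:

```python
import random

def simulate_cohort(n_patients=1000, p_discharge=0.42, max_visits=11, seed=1):
    # Each consultation ends in discharge with probability p_discharge;
    # otherwise the patient re-enters care, capped at the observed
    # maximum of 11 interactions.
    rng = random.Random(seed)
    visits = []
    for _ in range(n_patients):
        v = 1
        while rng.random() > p_discharge and v < max_visits:
            v += 1
        visits.append(v)
    return visits

visits = simulate_cohort()
mean_visits = sum(visits) / len(visits)
```

With these assumed parameters the simulated mean lands close to the reported 2.4, which illustrates why the discharge probability after the initial consultation dominates the total interaction count.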
Abstract:
Recent progress in the production, purification, and experimental and theoretical investigation of carbon nanotubes for hydrogen storage is reviewed. From the industrial point of view, the chemical vapor deposition process has shown advantages over laser ablation and electric-arc-discharge methods. The ultimate goal in nanotube synthesis should be to gain control over geometrical aspects of nanotubes, such as location and orientation, and the atomic structure of nanotubes, including helicity and diameter. There is currently no effective and simple purification procedure that fulfills all requirements for processing carbon nanotubes. Purification is still the bottleneck for technical applications, especially where large amounts of material are required. Although alkali-metal-doped carbon nanotubes showed high H-2 weight uptake, further investigations indicated that some of this uptake was due to water rather than hydrogen. This discovery indicates a potential source of error in evaluation of the storage capacity of doped carbon nanotubes. Nevertheless, currently available single-wall nanotubes yield a hydrogen uptake value near 4 wt% under moderate pressure and room temperature. A further 50% increase is needed to meet U.S. Department of Energy targets for commercial exploitation. Meeting this target will require combining experimental and theoretical efforts to achieve a full understanding of the adsorption process, so that the uptake can be rationally optimized to commercially attractive levels. Large-scale production and purification of carbon nanotubes and remarkable improvement of H-2 storage capacity in carbon nanotubes represent significant technological and theoretical challenges in the years to come.
Abstract:
The magnitude of genotype-by-management (G x M) interactions for grain yield and grain protein concentration was examined in a multi-environment trial (MET) involving a diverse set of 272 advanced breeding lines from the Queensland wheat breeding program. The MET was structured as a series of management-regimes imposed at 3 sites for 2 years. The management-regimes were generated at each site-year as separate trials in which planting time, N fertiliser application rate, cropping history, and irrigation were manipulated. Irrigation was used to simulate different rainfall regimes. From the combined analysis of variance, the G x M interaction variance components were found to be the largest source of G x E interaction variation for both grain yield (0.117 +/- 0.005 t(2) ha(-2); 49% of total G x E 0.238 +/- 0.028 t(2) ha(-2)) and grain protein concentration (0.445 +/- 0.020%(2); 82% of total G x E 0.546 +/- 0.057%(2)), and in both cases this source of variation was larger than the genotypic variance component (grain yield 0.068 +/- 0.014 t(2) ha(-2) and grain protein 0.203 +/- 0.026%(2)). The genotypic correlation between the traits varied considerably with management-regime, ranging from -0.98 to -0.31, with an estimate of 0.0 for one trial. Pattern analysis identified advanced breeding lines with improved grain yield and grain protein concentration relative to the cultivars Hartog, Sunco and Meteor. It is likely that a large component of the previously documented G x E interactions for grain yield of wheat in the northern grains region is a result of G x M interactions. The implications of the strong influence of G x M interactions for the conduct of wheat breeding METs in the northern region are discussed. (C) 2001 Elsevier Science B.V. All rights reserved.
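The reported percentages follow directly from the variance components quoted above; a quick arithmetic check (numbers copied from the abstract, units t(2) ha(-2) for yield and %(2) for protein):

```python
# G x M variance component and total G x E variance, as reported.
gxm_yield, gxe_yield = 0.117, 0.238
gxm_protein, gxe_protein = 0.445, 0.546

# Share of total G x E variation attributable to G x M, in percent.
pct_yield = round(100 * gxm_yield / gxe_yield)      # yield: ~49%
pct_protein = round(100 * gxm_protein / gxe_protein)  # protein: ~82%
```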
Abstract:
The material in genebanks includes valuable traditional varieties and landraces, non-domesticated species, advanced and obsolete cultivars, breeding lines and genetic stock. It is the wide variety of potentially useful genetic diversity that makes collections valuable. While most of the yield increases to date have resulted from manipulation of a few major traits (such as height, photoperiodism, and vernalization), meeting future demand for increased yields will require exploitation of novel genetic resources. Many traits have been reported to have potential to enhance yield, and high expression of these can be found in germplasm collections. To boost yield in irrigated situations, spike fertility must be improved simultaneously with photosynthetic capacity. CIMMYT's Wheat Genetic Resources program has identified a source of multi-ovary florets, with up to 6 kernels per floret. Lines from landrace collections have been identified that have very high chlorophyll concentration, which may increase leaf photosynthetic rate. High chlorophyll concentration and high stomatal conductance are associated with heat tolerance. Recent studies, through augmented use of seed multiplication nurseries, identified high expression of these traits in bank accessions, and both traits were heritable. Searches are underway for drought tolerance traits related to remobilization of stem fructans, awn photosynthesis, osmotic adjustment, and pubescence. The production of synthetic wheats from wild relatives has also generated novel genetic diversity.
Abstract:
Agricultural ecosystems and their associated business and government systems are diverse and varied. They range from farms, to input supply businesses, to marketing and government policy systems, among others. These systems are dynamic and responsive to fluctuations in climate. Skill in climate prediction offers considerable opportunities to managers via its potential to realise system improvements (i.e. increased food production and profit and/or reduced risks). Realising these opportunities, however, is not straightforward as the forecasting skill is imperfect and approaches to applying the existing skill to management issues have not been developed and tested extensively. While there has been much written about impacts of climate variability, there has been relatively little done in relation to applying knowledge of climate predictions to modify actions ahead of likely impacts. However, a considerable body of effort in various parts of the world is now being focused on this issue of applying climate predictions to improve agricultural systems. In this paper, we outline the basis for climate prediction, with emphasis on the El Nino-Southern Oscillation phenomenon, and catalogue experiences at field, national and global scales in applying climate predictions to agriculture. These diverse experiences are synthesised to derive general lessons about approaches to applying climate prediction in agriculture. The case studies have been selected to represent a diversity of agricultural systems and scales of operation. They also represent the on-going activities of some of the key research and development groups in this field around the world. The case studies include applications at field/farm scale to dryland cropping systems in Australia, Zimbabwe, and Argentina. This spectrum covers resource-rich and resource-poor farming with motivations ranging from profit to food security. 
At national and global scales we consider possible applications of climate prediction in commodity forecasting (wheat in Australia) and examine the implications for global wheat trade and price associated with the global consequences of climate prediction. In cataloguing these experiences we note some general lessons. Foremost is the value of an interdisciplinary systems approach in connecting disciplinary knowledge in a manner most suited to decision-makers. This approach often includes scenario analysis based on simulation with credible models as a key aspect of the learning process. Interaction among researchers, analysts and decision-makers is vital in the development of effective applications; all of the players learn. Issues associated with the balance between information demand and supply, as well as appreciation of the awareness limitations of decision-makers, analysts, and scientists, are highlighted. It is argued that understanding and communicating decision risks is one of the keys to successful applications of climate prediction. We consider that advances of the future will be made by better connecting agricultural scientists and practitioners with the science of climate prediction. Professions involved in decision making must take a proactive role in the development of climate forecasts if the design and use of climate predictions are to reach their full potential. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
The development of cropping systems simulation capabilities world-wide, combined with easy access to powerful computing, has resulted in a plethora of agricultural models and, consequently, model applications. Nonetheless, the scientific credibility of such applications and their relevance to farming practice is still being questioned. Our objective in this paper is to highlight some of the model applications from which benefits for farmers were or could be obtained via changed agricultural practice or policy. Changed on-farm practice due to the direct contribution of modelling, while keenly sought after, may in some cases be less achievable than a contribution via agricultural policies. This paper is intended to give some guidance for future model applications. It is not a comprehensive review of model applications, nor is it intended to discuss modelling in the context of social science or extension policy. Rather, we take snapshots around the globe to 'take stock' and to demonstrate that well-defined financial and environmental benefits can be obtained on-farm from the use of models. We highlight the importance of 'relevance' and hence the importance of true partnerships between all stakeholders (farmers, scientists, advisers) for the successful development and adoption of simulation approaches. Specifically, we address some key points that are essential for successful model applications, such as: (1) issues to be addressed must be neither trivial nor obvious; (2) a modelling approach must reduce complexity rather than proliferate choices in order to aid the decision-making process; (3) the cropping systems must be sufficiently flexible to allow management interventions based on insights gained from models. The pros and cons of normative approaches (e.g. decision support software that can reach a wide audience quickly but is often poorly contextualized for any individual client) versus model applications within the context of an individual client's situation will also be discussed. We suggest that a tandem approach is necessary, whereby the latter is used in the early stages of model application for confidence building amongst client groups. This paper focuses on five specific regions that differ fundamentally in terms of environment and socio-economic structure, and hence in their requirements for successful model applications. Specifically, we give examples from Australia and South America (high climatic variability, large areas, low input, technologically advanced); Africa (high climatic variability, small areas, low input, subsistence agriculture); India (high climatic variability, small areas, medium-level inputs, technologically progressing); and Europe (relatively low climatic variability, small areas, high input, technologically advanced). The contrast between Australia and Europe will further demonstrate how successful model applications are strongly influenced by the policy framework within which producers operate. We suggest that this might eventually lead to better adoption of fully integrated systems approaches and result in the development of resilient farming systems that are in tune with current climatic conditions and are adaptable to biophysical and socioeconomic variability and change. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
The two-node tandem Jackson network serves as a convenient reference model for the analysis and testing of different methodologies and techniques in rare event simulation. In this paper we consider a new approach to efficiently estimate the probability that the content of the second buffer exceeds some high level L before it becomes empty, starting from a given state. The approach is based on a Markov additive process representation of the buffer processes, leading to an exponential change of measure to be used in an importance sampling procedure. Unlike changes of measure proposed and studied in the recent literature, the one derived here is a function of the content of the first buffer. We prove that when the first buffer is finite, this method yields asymptotically efficient simulation for any set of arrival and service rates. In fact, the relative error is bounded independent of the level L; a new result which has not been established for any other known method. When the first buffer is infinite, we propose a natural extension of the exponential change of measure for the finite buffer case. In this case, the relative error is shown to be bounded (independent of L) only when the second server is the bottleneck; a result which is known to hold for some other methods derived through large deviations analysis. When the first server is the bottleneck, experimental results using our method suggest that the relative error grows at most linearly in L.
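The idea of an exponential change of measure in importance sampling can be illustrated on a single M/M/1-type queue, where the classic tilt simply swaps the arrival and service rates so that overflow paths become likely. This is a simplified stand-in for intuition only, not the paper's buffer-dependent change of measure for the tandem network, and all parameter values are illustrative:

```python
import random

def overflow_prob_is(lam=0.3, mu=0.7, L=20, n_runs=20000, seed=7):
    """Estimate P(queue content hits L before 0, starting from 1)
    for the embedded random walk of a stable M/M/1 queue, using the
    classic swap of arrival and service rates as the tilted measure."""
    rng = random.Random(seed)
    p, q = lam / (lam + mu), mu / (lam + mu)  # original up/down probabilities
    pt, qt = q, p                             # tilted (swapped) probabilities
    total = 0.0
    for _ in range(n_runs):
        x, w = 1, 1.0                         # state and likelihood ratio
        while 0 < x < L:
            if rng.random() < pt:             # up-step under the tilt
                x += 1
                w *= p / pt
            else:                             # down-step under the tilt
                x -= 1
                w *= q / qt
        if x == L:                            # weight only the overflow paths
            total += w
    return total / n_runs

est = overflow_prob_is()
```

Because every path from 1 to L has the same net displacement, the likelihood ratio on overflow paths is constant here, which is why this tilt gives very low variance; for the tandem network no such simple state-independent tilt achieves bounded relative error, motivating the paper's first-buffer-dependent measure.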
Abstract:
A two-dimensional numerical simulation model of interface states in scanning capacitance microscopy (SCM) measurements of p-n junctions is presented. In the model, amphoteric interface states with two transition energies in the Si band gap are represented as fixed charges to account for their behavior in SCM measurements. The interface states are shown to cause a stretch-out and a parallel shift of the capacitance-voltage characteristics in the depletion and neutral regions of p-n junctions, respectively. This explains the discrepancy between the SCM measurement and simulation near p-n junctions, and thus modeling interface states is crucial for SCM dopant profiling of p-n junctions. (C) 2002 American Institute of Physics.
Abstract:
Genetic research on risk of alcohol, tobacco or drug dependence must make allowance for the partial overlap of risk-factors for initiation of use and risk-factors for dependence or other outcomes in users. Except in the extreme cases where genetic and environmental risk-factors for initiation and dependence overlap completely or are uncorrelated, there is no consensus about how best to estimate the magnitude of genetic or environmental correlations between Initiation and Dependence in twin and family data. We explore by computer simulation the biases to estimates of genetic and environmental parameters caused by model misspecification when Initiation can only be defined as a binary variable. For plausible simulated parameter values, the two-stage genetic models that we consider yield estimates of genetic and environmental variances for Dependence that, although biased, are not very discrepant from the true values. However, estimates of genetic (or environmental) correlations between Initiation and Dependence may be seriously biased, and may differ markedly under different two-stage models. Such estimates may have little credibility unless external data favor selection of one particular model. These problems can be avoided if Initiation can be assessed as a multiple-category variable (e.g. never versus early-onset versus later-onset user), with at least two categories measurable in users at risk for dependence. Under these conditions, under certain distributional assumptions, recovery of simulated genetic and environmental correlations becomes possible. Illustrative application of the model to Australian twin data on smoking confirmed substantial heritability of smoking persistence (42%), with minimal overlap with genetic influences on initiation.
Abstract:
The splitting method is a simulation technique for the estimation of very small probabilities. In this technique, the sample paths are split into multiple copies, at various stages in the simulation. Of vital importance to the efficiency of the method is the Importance Function (IF). This function governs the placement of the thresholds or surfaces at which the paths are split. We derive a characterisation of the optimal IF and show that for multi-dimensional models the natural choice for the IF is usually not optimal. We also show how nearly optimal splitting surfaces can be derived or simulated using reverse time analysis. Our numerical experiments illustrate that by using the optimal IF, one can obtain a significant improvement in simulation efficiency.
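A minimal fixed-effort splitting sketch for a biased random walk is given below, using the walk's current height as the importance function, i.e. the natural choice the abstract refers to (which it notes is usually not optimal in multi-dimensional models). The walk, thresholds, and sample sizes are illustrative, not taken from the paper:

```python
import random

def splitting_estimate(p_up=0.4, levels=(5, 10, 15, 20), n_per_stage=2000, seed=3):
    """Estimate P(walk hits the top level before 0, starting at 1)
    by multilevel splitting: the overall rare probability is the
    product of the (much larger) stage-by-stage crossing fractions."""
    rng = random.Random(seed)
    start = 1
    estimate = 1.0
    for lvl in levels:
        hits = 0
        for _ in range(n_per_stage):
            x = start
            while 0 < x < lvl:                 # run until threshold or ruin
                x += 1 if rng.random() < p_up else -1
            hits += (x == lvl)
        if hits == 0:                          # all paths died: estimate is 0
            return 0.0
        estimate *= hits / n_per_stage
        # For this 1-D walk every surviving path enters a threshold at
        # exactly that height, so restarting survivors there is exact.
        start = lvl
    return estimate

est = splitting_estimate()
```

Each stage only has to estimate a moderate crossing probability, so no single simulation ever has to observe the rare event directly; the product of stage estimates recovers the small overall probability.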