916 results for 070105 Agricultural Systems Analysis and Modelling
Abstract:
Agricultural systems models worldwide are increasingly being used to explore options and solutions for the food security, climate change adaptation and mitigation, and carbon trading problem domains. APSIM (Agricultural Production Systems sIMulator) is one such model that continues to be applied and adapted to this challenging research agenda. Since its inception twenty years ago, APSIM has evolved into a framework containing many of the key models required to explore changes in agricultural landscapes, with capability ranging from the simulation of gene expression through to multi-field farms and beyond. Keating et al. (2003) described many of the fundamental attributes of APSIM in detail. Much has changed in the last decade, and the APSIM community has been exploring novel scientific domains and utilising software developments in social media, web and mobile applications to provide simulation tools adapted to new demands. This paper updates the earlier work by Keating et al. (2003) and chronicles the changing external challenges and opportunities placed on APSIM during the last decade. It also explores and discusses how APSIM has been evolving into a “next generation” framework with improved features and capabilities that allow its use in many diverse topics.
Abstract:
An enterprise is viewed as a complex system which can be engineered to accomplish organisational objectives. Systems analysis and modelling enable the planning and development of both the enterprise and its IT systems. Many IT systems design methods focus on the functional and non-functional requirements of the IT systems; most handle one of these aspects well but leave out the others. Analysing and modelling both business and IT systems often has to draw on techniques from different suites of methods, which may rest on different philosophical and methodological underpinnings. Coherence and consistency between the analyses are therefore hard to ensure. This paper introduces the Problem Articulation Method (PAM), which facilitates the design of an enterprise system infrastructure on which an IT system is built. The outcomes of this analysis represent requirements which can be further used for planning and designing a technical system. As a case study, a finance system for e-procurement, Agresso, is used in this paper to illustrate the applicability of PAM in modelling complex systems.
Abstract:
Statistical modelling and statistical learning theory are two powerful analytical frameworks for analyzing signals and developing efficient processing and classification algorithms. In this thesis, these frameworks are applied to modelling and processing biomedical signals in two different contexts: ultrasound medical imaging systems and the analysis and modelling of primate neural activity. In the context of ultrasound medical imaging, two main applications are explored: deconvolution of signals measured from an ultrasonic transducer, and automatic image segmentation and classification of prostate ultrasound scans. In the former application, a stochastic model of the radio frequency signal measured from an ultrasonic transducer is derived. This model is then employed to develop, within a statistical framework, a regularized deconvolution procedure for enhancing signal resolution. In the latter application, different statistical models are used to characterize images of prostate tissue, extracting different features. These features are then used to segment the images into regions of interest by means of an automatic procedure based on a statistical model of the extracted features. Finally, machine learning techniques are used for automatic classification of the different regions of interest. In the context of neural activity signals, a bio-inspired dynamical network was developed to support studies of motor-related processes in the brain of primate monkeys. The presented model aims to mimic the abstract functionality of a cell population in the parietal area 7a of primate monkeys during the execution of learned behavioural tasks.
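The abstract above does not specify the exact estimator, but a common statistical approach to the regularized deconvolution step it describes is Tikhonov-style regularization in the frequency domain. The short Python sketch below illustrates the idea on synthetic data; the pulse shape, the regularization weight `lam` and the `tikhonov_deconvolve` helper are illustrative assumptions, not the procedure derived in the thesis.

```python
import numpy as np

def tikhonov_deconvolve(rf_signal, pulse, lam=1e-2):
    """Frequency-domain Tikhonov-regularized deconvolution.

    Estimates the reflectivity r from the measured RF signal y = h * r + noise,
    where h is the transducer pulse: R(f) = conj(H) Y / (|H|^2 + lam), a
    ridge-type estimate that limits noise amplification where |H| is small.
    """
    n = len(rf_signal)
    H = np.fft.rfft(pulse, n)
    Y = np.fft.rfft(rf_signal, n)
    return np.fft.irfft(np.conj(H) * Y / (np.abs(H) ** 2 + lam), n)

# Toy usage: a sparse reflectivity sequence blurred by a Gaussian-modulated pulse.
rng = np.random.default_rng(0)
n = 512
r_true = np.zeros(n)
r_true[[100, 180, 300]] = [1.0, -0.6, 0.8]
t = np.arange(-32, 32)
pulse = np.exp(-(t / 8.0) ** 2) * np.cos(2 * np.pi * t / 10.0)
rf = np.convolve(r_true, pulse)[:n] + 0.01 * rng.standard_normal(n)
r_hat = tikhonov_deconvolve(rf, pulse, lam=1e-2)
```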
Abstract:
In this study, the oceanic regions that are associated with anomalous Ethiopian summer rains were identified and the teleconnection mechanisms that give rise to these associations were investigated. Because of the complexity of the rainfall climate in the Horn of Africa, Ethiopia was subdivided into six homogeneous rainfall zones and the influence of SST anomalies was analysed separately for each zone. The investigation made use of composite analysis and modelling experiments. Two sets of composites of atmospheric fields were generated, one based on excess/deficit rainfall anomalies and the other based on warm/cold SST anomalies in specific oceanic regions. The aim of the composite analysis was to determine the link between SST and rainfall in terms of large-scale features. The modelling experiments were intended to explore the causality of these linkages. The results show that the equatorial Pacific, the midlatitude northwest Pacific and the Gulf of Guinea all exert an influence on the summer rainfall in various parts of the country. The results demonstrate that different mechanisms linked to sea surface temperature control variations in rainfall in different parts of Ethiopia. This has important consequences for seasonal forecasting models which are based on statistical correlations between SST and seasonal rainfall totals. It is clear that such statistical models should take account of the local variations in teleconnections.
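As a rough illustration of the composite analysis described above, the Python sketch below averages a gridded anomaly field over excess and deficit rainfall years and takes the difference; the function name, the ±1 standard-anomaly threshold and the synthetic data are assumptions for illustration only.

```python
import numpy as np

def rainfall_composites(field, rain_index, threshold=1.0):
    """Composite a gridded anomaly field on excess vs. deficit rainfall years.

    field      : (n_years, nlat, nlon) array, e.g. seasonal SST anomalies
    rain_index : standardized summer rainfall anomaly for one zone, (n_years,)
    Returns the excess-year mean, the deficit-year mean, and their difference,
    which highlights the large-scale patterns linked to anomalous rainfall.
    """
    excess = field[rain_index >= threshold].mean(axis=0)
    deficit = field[rain_index <= -threshold].mean(axis=0)
    return excess, deficit, excess - deficit

# Toy usage with synthetic data standing in for one homogeneous rainfall zone.
rng = np.random.default_rng(1)
sst_anom = rng.standard_normal((40, 20, 40))     # 40 years on a 20 x 40 grid
rain_idx = rng.standard_normal(40)
excess_comp, deficit_comp, difference = rainfall_composites(sst_anom, rain_idx)
```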
Abstract:
The financial health of beef cattle enterprises in northern Australia has declined markedly over the last decade due to an escalation in production and marketing costs and a real decline in beef prices. Historically, gains in animal productivity have offset the effect of declining terms of trade on farm incomes. This raises the question of whether future productivity improvements can remain a key path for lifting enterprise profitability sufficiently to ensure that the industry remains economically viable over the longer term. The key objective of this study was to assess the production and financial implications for north Australian beef enterprises of a range of technology interventions (development scenarios), including genetic gain in cattle, nutrient supplementation, and alteration of the feed base through introduced pastures and forage crops, across a variety of natural environments. To achieve this objective a beef systems model was developed that is capable of simulating livestock production at the enterprise level, including reproduction, growth and mortality, based on energy and protein supply from natural C4 pastures that are subject to high inter-annual climate variability. Comparisons between simulation outputs and enterprise performance data in three case study regions suggested that the simulation model (the Northern Australia Beef Systems Analyser) can adequately represent the performance of beef cattle enterprises in northern Australia. Testing of a range of development scenarios suggested that the application of individual technologies can substantially lift productivity and profitability, especially where the entire feed base was altered through legume augmentation. The simultaneous implementation of multiple technologies that provide benefits to different aspects of animal productivity resulted in the greatest increases in cattle productivity and enterprise profitability, with projected weaning rates increasing by 25%, liveweight gain by 40% and net profit by 150% above current baseline levels, although gains of this magnitude might not necessarily be realised in practice. While there were slight increases in total methane output from these development scenarios, the methane emissions per kg of beef produced were reduced by 20% in the scenarios with higher productivity gains. Combinations of technologies or innovative practices applied in a systematic and integrated fashion thus offer scope for providing the productivity and profitability gains necessary to maintain viable beef enterprises in northern Australia into the future.
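The sketch below is a deliberately simplified, illustrative herd-update loop in Python, not the Northern Australia Beef Systems Analyser itself: a weaning rate, mortality rate and a feed-quality multiplier stand in for the energy- and protein-driven processes the full model simulates, and every parameter value is an assumption chosen only to show how a development scenario might be compared against a baseline.

```python
import numpy as np

def simulate_herd(years=10, breeders=1000, weaning_rate=0.55, mortality=0.04,
                  base_lwg=110.0, feed_multiplier=1.0, seed=0):
    """Return yearly (weaner numbers, liveweight gain per head in kg) for a fixed breeder herd."""
    rng = np.random.default_rng(seed)
    results = []
    for _ in range(years):
        season = rng.uniform(0.7, 1.3)                        # inter-annual feed variability
        weaners = breeders * min(weaning_rate * season, 0.95) * (1 - mortality)
        lwg = base_lwg * feed_multiplier * season             # kg/head/year
        results.append((weaners, lwg))
    return results

baseline = simulate_herd()
scenario = simulate_herd(weaning_rate=0.55 * 1.25, feed_multiplier=1.4)  # e.g. legume augmentation
```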
Abstract:
Nitrate leaching (NL) is an important N loss process in irrigated agriculture that imposes a cost on both the farmer and the environment. A meta-analysis of published experimental results from irrigated agricultural systems was conducted to identify those strategies that have proven effective at reducing NL and to quantify the scale of reduction that can be achieved. Forty-four scientific articles were identified which investigated four main strategies (water management, fertilizer management, use of cover crops, and fertilizer technology), creating a database with 279 observations on NL and 166 on crop yield. Management practices that adjust water application to crop needs reduced NL by a mean of 80% without a reduction in crop yield. Improved fertilizer management reduced NL by 40%, and the best relationship between yield and NL was obtained when applying the recommended fertilizer rate. Replacing a fallow with a non-legume cover crop reduced NL by 50%, while using a legume did not have any effect on NL. Improved fertilizer technology also decreased NL but was the least effective of the selected strategies. The risk of nitrate leaching from irrigated systems is high, but optimum management practices may mitigate this risk and maintain crop yields while enhancing environmental sustainability.
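A typical core calculation in such a meta-analysis is an effect size per observation, for example the log response ratio of NL under an improved practice relative to its control, aggregated to a mean percent change. The Python sketch below shows that step with made-up numbers; the values, the unweighted mean and the helper name are illustrative assumptions (published meta-analyses usually weight observations, e.g. by variance, when it is reported).

```python
import numpy as np

observations = [
    # (strategy, NL_treatment, NL_control) in kg N/ha -- illustrative values only
    ("water management",      18.0, 95.0),
    ("fertilizer management", 42.0, 70.0),
    ("non-legume cover crop", 30.0, 61.0),
    ("fertilizer technology", 55.0, 66.0),
]

def mean_percent_change(rows):
    """Unweighted mean log response ratio, back-transformed to a percent change in NL."""
    lnrr = np.array([np.log(t / c) for _, t, c in rows])
    return (np.exp(lnrr.mean()) - 1.0) * 100.0

print(f"Mean change in nitrate leaching vs. control: {mean_percent_change(observations):.0f}%")
```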
Abstract:
The lead author, Nimai Senapati (Post doc), was funded by the European Community's Seventh Framework Programme (FP2012-2015) under grant agreement no. 262060 (ExpeER). The research leading to these results has received funding principally from the ANR (ANR-11-INBS-0001), AllEnvi and CNRS-INSU. We would like to thank the National Research Infrastructure ‘Agro-écosystèmes, Cycles Biogéochimique et Biodiversité’ (SOERE-ACBB, http://www.soere-acbb.com/fr/) for their support of the field experiment. We are deeply indebted to Christophe deBerranger and Xavier Charrier for their substantial technical assistance, and to Patricia Laville for her valuable suggestions regarding N2O flux estimation.
Abstract:
Multicarrier code division multiple access (MC-CDMA) is a very promising candidate for the multiple access scheme in fourth generation wireless communication systems. During asynchronous transmission, multiple access interference (MAI) is a major challenge for MC-CDMA systems and significantly affects their performance. The main objectives of this thesis are to analyze the MAI in asynchronous MC-CDMA, and to develop robust techniques to reduce the MAI effect. Focus is first on the statistical analysis of MAI in asynchronous MC-CDMA. A new statistical model of MAI is developed. In the new model, the derivation of MAI can be applied to different distributions of timing offset, and the MAI power is modelled as a Gamma distributed random variable. By applying the new statistical model of MAI, a new computer simulation model is proposed. This model is based on modelling a multiuser system as a single user system followed by an additive noise component representing the MAI, which enables the new simulation model to significantly reduce the computation load during computer simulations. MAI reduction using the slow frequency hopping (SFH) technique is the topic of the second part of the thesis. Two subsystems are considered. The first subsystem involves subcarrier frequency hopping as a group, which is referred to as GSFH/MC-CDMA. In the second subsystem, the condition of group hopping is dropped, resulting in a more general system, namely individual subcarrier frequency hopping MC-CDMA (ISFH/MC-CDMA). This research found that with the introduction of SFH, both the GSFH/MC-CDMA and ISFH/MC-CDMA systems generate less MAI power than the basic MC-CDMA system during asynchronous transmission. Because of this, both SFH systems are shown to outperform MC-CDMA in terms of BER. This improvement, however, is at the expense of spectral widening. In the third part of this thesis, base station polarization diversity, as another MAI reduction technique, is introduced to asynchronous MC-CDMA. The combined system is referred to as Pol/MC-CDMA. In this part a new optimum combining technique, namely maximal signal-to-MAI ratio combining (MSMAIRC), is proposed to combine the signals from two base station antennas. With the application of MSMAIRC and in the absence of additive white Gaussian noise (AWGN), the resulting signal-to-MAI ratio (SMAIR) is not only maximized but also independent of cross polarization discrimination (XPD) and antenna angle. When AWGN is present, the performance of MSMAIRC is still affected by the XPD and antenna angle, but to a much lesser degree than the traditional maximal ratio combining (MRC). Furthermore, this research found that the BER performance of Pol/MC-CDMA can be further improved by changing the angle between the two receiving antennas. Hence the optimum antenna angles for both MSMAIRC and MRC are derived and their effects on the BER performance are compared. With the derived optimum antenna angle, the Pol/MC-CDMA system is able to obtain the lowest BER for a given XPD.
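A minimal Python sketch of the reduced-complexity simulation idea described above is given below: the multiuser link is replaced by a single-user BPSK link plus an additive term whose per-symbol power is drawn from a Gamma distribution representing the MAI. The Gamma parameters, the BPSK signalling and the Eb/N0 value are illustrative assumptions, not those derived in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sym = 200_000
bits = rng.integers(0, 2, n_sym)
symbols = 2 * bits - 1                                        # BPSK, unit energy

# Per-symbol MAI power drawn from a Gamma distribution; MAI treated as conditionally Gaussian.
mai_power = rng.gamma(shape=2.0, scale=0.05, size=n_sym)
mai = np.sqrt(mai_power) * rng.standard_normal(n_sym)

ebn0_db = 8.0
noise_var = 10 ** (-ebn0_db / 10) / 2                         # AWGN variance for unit-energy BPSK
awgn = np.sqrt(noise_var) * rng.standard_normal(n_sym)

received = symbols + mai + awgn
ber = np.mean((received > 0).astype(int) != bits)
print(f"Simulated BER at Eb/N0 = {ebn0_db} dB: {ber:.2e}")
```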
Abstract:
Modern society has come to expect electrical energy on demand, while many of the facilities in power systems are aging beyond repair and maintenance. The risk of failure increases as equipment ages and can pose serious consequences for the continuity of electricity supply. As the equipment used in high voltage power networks is very expensive, it may not be economically feasible to purchase and store spares in a warehouse for extended periods of time. On the other hand, there is normally a significant lead time between ordering equipment and receiving it. This situation has created considerable interest in the evaluation and application of probability methods for aging plant and the provision of spares in bulk supply networks, and can be of particular importance for substations. Quantitative adequacy assessment of substation and sub-transmission power systems is generally done using a contingency enumeration approach, which includes the evaluation of contingencies and their classification based on selected failure criteria. The problem is very complex because of the need to include detailed modelling and operation of substation and sub-transmission equipment using network flow evaluation and to consider multiple levels of component failures. In this thesis a new model associated with aging equipment is developed that combines the standard treatment of random failures with a specific model for aging failures. This technique is applied to include and examine the impact of aging equipment on the system reliability of bulk supply loads and consumers in the distribution network over a defined range of planning years. The power system risk indices depend on many factors, such as the actual physical network configuration and operation, the aging condition of the equipment, and the relevant constraints. The impact and importance of equipment reliability on power system risk indices in a network with aging facilities contain valuable information for utilities to better understand network performance and the weak links in the system. In this thesis, algorithms are developed to measure the contribution of individual equipment to the power system risk indices, as part of a novel risk analysis tool. A new cost-worth approach was also developed that supports early decisions in planning replacement activities for non-repairable aging components, in order to maintain a level of system reliability performance that is economically acceptable. The concepts, techniques and procedures developed in this thesis are illustrated numerically using published test systems. It is believed that the methods and approaches presented substantially improve the accuracy of risk predictions by explicitly considering the effect of equipment entering a period of increased risk of a non-repairable failure.
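One common way to combine random and aging failure modes, consistent with the description above but not necessarily the formulation developed in the thesis, is to multiply an exponential (constant-hazard) survival function by a Weibull (increasing-hazard) one and condition on the component's current age. The Python sketch below does exactly that; the function name and all parameter values are illustrative assumptions.

```python
import numpy as np

def failure_probability(age, horizon, lam_random=0.01, eta=60.0, beta=4.0):
    """P(non-repairable failure within `horizon` years) for a component of given age.

    Survival is the product of an exponential survival function (random failures,
    rate lam_random per year) and a Weibull survival function (aging failures,
    scale eta years, shape beta), conditioned on survival to the current age.
    """
    def survival(t):
        return np.exp(-lam_random * t) * np.exp(-((t / eta) ** beta))
    return 1.0 - survival(age + horizon) / survival(age)

for age in (20, 40, 55):
    print(f"age {age} yr: P(failure within 5 yr) = {failure_probability(age, 5):.3f}")
```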
Abstract:
Provision of network infrastructure to meet rising network peak demand is increasing the cost of electricity. Addressing this demand is a major imperative for Australian electricity agencies. The network peak demand model reported in this paper provides a quantified decision support tool and a means of understanding the key influences and impacts on network peak demand. An investigation of the system factors impacting residential consumers' peak demand for electricity was undertaken in Queensland, Australia. Technical factors, such as the customers' location, housing construction and appliances, were combined with social factors, such as household demographics, culture, trust and knowledge, and Change Management Options (CMOs) such as tariffs, price, managed supply, etc., in a conceptual 'map' of the system. A Bayesian network was used to quantify the model and provide insights into the major influential factors and their interactions. The model was also used to examine the reduction in network peak demand under different market-based and government interventions in various customer locations of interest, and to investigate the relative importance of instituting programs that build trust and knowledge through well designed customer-industry engagement activities. The Bayesian network was implemented via a spreadsheet with a tick-box interface. The model combined available data from industry-specific and public sources with relevant expert opinion. The results revealed that the most effective intervention strategies involve combining particular CMOs with associated education and engagement activities. The model demonstrated the importance of designing interventions that take into account the interactions of the various elements of the socio-technical system. The options that provided the greatest impact on peak demand were Off-Peak Tariffs, Managed Supply and increases in the price of electricity. The impact on peak demand reduction differed for each of the locations and highlighted that household numbers and demographics, as well as the different climates, were significant factors. The model presented possible network peak demand reductions that would delay any upgrade of networks, resulting in savings for Queensland utilities and ultimately for households. The use of this systems approach, applying Bayesian networks to the management of peak demand in different modelled locations in Queensland, provided insights about the most important elements in the system and the intervention strategies that could be tailored to the targeted customer segments.
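To make the Bayesian-network idea concrete, the Python sketch below hand-builds a tiny network of the same flavour and answers one conditional query with plain numpy. The variables (Tariff, Engagement, PeakDemand), their binary states and every probability are illustrative assumptions, not values elicited for the Queensland model.

```python
import numpy as np

p_tariff = np.array([0.5, 0.5])          # P(Tariff = [flat, off-peak])
p_engage = np.array([0.6, 0.4])          # P(Engagement = [low, high])

# P(PeakDemand = high | Tariff, Engagement), rows = tariff state, cols = engagement state
p_peak_high = np.array([[0.70, 0.55],
                        [0.50, 0.30]])

# Joint distribution P(Tariff, Engagement, PeakDemand), PeakDemand axis last (low, high).
joint = (p_tariff[:, None, None] * p_engage[None, :, None]
         * np.stack([1 - p_peak_high, p_peak_high], axis=-1))

# Query: P(PeakDemand = high | Tariff = off-peak), marginalising over Engagement.
cond = joint[1].sum(axis=0) / joint[1].sum()
print(f"P(high peak demand | off-peak tariff) = {cond[1]:.2f}")
```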
Abstract:
Systems-level modelling and simulation of biological processes are proving to be invaluable in obtaining a quantitative and dynamic perspective of various aspects of cellular function. In particular, constraint-based analyses of metabolic networks have gained considerable popularity for simulating cellular metabolism, of which flux balance analysis (FBA) is the most widely used. Unlike mechanistic simulations that depend on accurate kinetic data, which are scarcely available, FBA is based on the principle of conservation of mass in a network: it uses the stoichiometric matrix and a biologically relevant objective function to identify optimal reaction flux distributions. FBA has been used to analyse genome-scale reconstructions of several organisms; it has also been used to analyse the effect of perturbations, such as gene deletions or drug inhibitions, in silico. This article reviews the usefulness of FBA as a tool for gaining biological insights, advances in methodology enabling the integration of regulatory information and thermodynamic constraints, and finally addresses the challenges that lie ahead. Various use scenarios and biological insights obtained from FBA, and applications in fields such as metabolic engineering and drug target identification, are also discussed. Genome-scale constraint-based models have immense potential for building and testing hypotheses, as well as for guiding experimentation.
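At its core, FBA is a linear program: maximise a biologically relevant objective c^T v subject to the steady-state mass balance S v = 0 and bounds on the fluxes v. The Python sketch below solves a three-reaction toy network with scipy; the network, bounds and objective are illustrative assumptions, not a genome-scale reconstruction.

```python
import numpy as np
from scipy.optimize import linprog

# Reactions: R1 uptake (-> A), R2 conversion (A -> B), R3 "biomass" drain (B ->)
# Metabolites: A, B.  Columns of S are reactions, rows are metabolites.
S = np.array([[1, -1,  0],     # A: produced by R1, consumed by R2
              [0,  1, -1]])    # B: produced by R2, consumed by R3
bounds = [(0, 10), (0, 8), (0, None)]   # uptake limited to 10, R2 capacity 8
c = np.array([0.0, 0.0, -1.0])          # linprog minimises, so negate the biomass flux

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("Optimal flux distribution v =", res.x)   # expected: [8, 8, 8]
print("Maximal biomass flux =", -res.fun)
```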
Abstract:
This project is part of the Northern Grazing Systems (NGS) projects, which aim to increase the adoption of innovative best-practice grazing management by beef producers throughout Queensland, the Northern Territory, and the Kimberley and Pilbara regions of Western Australia.