908 results for Methods and systems of culture. Cropping systems


Relevance:

100.00%

Publisher:

Abstract:

Characterization of drought environment types (ETs) has proven useful for breeding crops for drought-prone regions. Here we consider how changes in climate and atmospheric carbon dioxide (CO2) concentrations will affect drought ET frequencies in sorghum and wheat systems of Northeast Australia. We also modify APSIM (the Agricultural Production Systems Simulator) to incorporate extreme heat effects on grain number and weight, and then evaluate changes in the occurrence of heat-induced yield losses of more than 10%, as well as the co-occurrence of drought and heat. More than six million simulations spanning representative locations, soil types, management systems, and 33 climate projections led to three key findings. First, the projected frequency of drought decreased slightly for most climate projections for both sorghum and wheat, but for different reasons. In sorghum, warming exacerbated drought stresses by raising the atmospheric vapor pressure deficit and reducing transpiration efficiency (TE), but an increase in TE due to elevated CO2 more than offset these effects. In wheat, warming reduced drought stress during spring by hastening development through winter and reducing exposure to terminal drought. Elevated CO2 increased TE but also raised radiation use efficiency and overall growth rates and water use, thereby offsetting much of the drought reduction from warming. Second, adding explicit effects of heat on grain number and grain size often switched projected yield impacts from positive to negative. Finally, although average yield losses associated with drought will remain generally higher than for heat stress for the next half century, the relative importance of heat is steadily growing. This trend, as well as the likely high degree of genetic variability in heat tolerance, suggests that more emphasis on heat tolerance is warranted in breeding programs. At the same time, work on drought tolerance should continue with an emphasis on drought that co-occurs with extreme heat.
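The abstract does not give the functional form of the heat effects added to APSIM; as a purely illustrative sketch of how such effects are commonly modeled, the snippet below reduces grain number in proportion to degree-hours above a critical temperature around flowering. The function name, threshold, and sensitivity coefficient are assumptions, not the paper's calibrated values.

```python
import numpy as np

def heat_stress_grain_number(grain_number, hourly_temps, t_crit=36.0,
                             sensitivity=0.005):
    """Illustrative heat-stress response: reduce grain number in
    proportion to degree-hours above a critical temperature during
    the flowering window. Threshold and sensitivity are hypothetical,
    not APSIM's calibrated values."""
    temps = np.asarray(hourly_temps, dtype=float)
    degree_hours = np.clip(temps - t_crit, 0.0, None).sum()
    stress_factor = max(0.0, 1.0 - sensitivity * degree_hours)
    return grain_number * stress_factor

# Example: a hot flowering day (6 degree-hours above 36 C) trims
# grain number by 3%
temps = [28, 31, 35, 38, 39, 37, 33, 29]  # hourly temperatures (deg C)
print(heat_stress_grain_number(20000, temps))  # -> 19400.0
```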

Relevance:

100.00%

Publisher:

Abstract:

Maize grown in eastern and southern Africa experiences unpredictable occurrences of drought. This uncertainty makes it difficult to develop superior varieties and their agronomy. Characterisation of drought types and their frequencies could help to better define selection environments for improving drought resistance. We used the well-tested APSIM maize model to characterise the major drought stress patterns and their frequencies across six countries of the region: Ethiopia, Kenya, Tanzania, Malawi, Mozambique and Zimbabwe. The database thus generated covered 35 sites, 17 to 86 years of daily climate records, 3 varieties and 3 planting densities, for a total of 11,174 simulations. The analysis identified four major drought environment types: low stress, occurring in 42% of the years; mid-season drought, in 15%; late-terminal stress, in 22%; and early-terminal drought, in 21%. These frequencies varied with site, genotype and management. The simulations showed that early-terminal stress could reduce yield by 70% compared with the low-stress environment type. The study demonstrates the value of environmental characterisation for maize improvement in eastern and southern Africa.
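The classification procedure is not spelled out in this summary; a minimal sketch of the general approach used in drought environment typing is to cluster simulated seasonal water-stress trajectories and tabulate cluster frequencies. The synthetic data, the choice of k-means, and all names below are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# stress_curves: one row per simulated season, each column the
# simulated water-stress index (0 = full stress, 1 = no stress)
# at successive stages of the crop cycle. Synthetic stand-in data:
rng = np.random.default_rng(0)
stress_curves = rng.uniform(0.2, 1.0, size=(11174, 20))

# Group seasons into four environment types (ETs) by trajectory shape
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(stress_curves)

# Frequency of each ET across all simulated seasons
labels, counts = np.unique(km.labels_, return_counts=True)
for et, freq in zip(labels, counts / counts.sum()):
    print(f"ET{et}: {freq:.0%} of seasons")
```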

Relevance:

100.00%

Publisher:

Abstract:

Temperatures have increased and in-crop rainfall has decreased over recent decades in many parts of the Australian wheat cropping region. With these trends set to continue or intensify, improving crop adaptation in the face of climate change is particularly urgent in this already drought-prone cropping region. Importantly, improved performance under water limitation must be achieved while retaining yield potential in more favourable seasons. A multi-trait approach to improving wheat yield and yield stability under water limitation and heat has been initiated in northern Australia, using novel phenotyping techniques and a nested association mapping (NAM) approach. An innovative laboratory technique allows rapid root-trait screening of hundreds of lines; using soil-grown seedlings, the method offers significant advantages over many other lab-based techniques. Another recently developed method allows novel stay-green traits to be quantified objectively for hundreds of genotypes in standard field trial plots. Field trials in multiple locations and seasons allow evaluation of targeted trait values and identification of superior germplasm. Traits, including yield and yield components, are measured for hundreds of NAM lines in rainfed environments under various levels of water limitation. To rapidly generate lines of interest, the University of Queensland "speed breeding" method is being employed, allowing up to 7 plant generations per annum; a NAM population of over 1000 wheat recombinant inbred lines has been progressed to the F5 generation within 18 months. Genotyping the NAM lines with the genome-wide DArTseq molecular marker system provides up to 40,000 markers, which are now being used for association mapping to validate QTL previously identified in bi-parental populations and to identify novel QTL for stay-green and root traits. We believe that combining the latest techniques in physiology, phenotyping, genetics and breeding will accelerate genetic progress toward improved adaptation to water-limited environments.
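As a rough illustration of the association-mapping step, the sketch below runs single-marker regressions of a trait on genotype, one test per marker, with a Bonferroni cutoff. A real NAM analysis would also model family structure and kinship; all data here are synthetic and the pipeline is not the authors'.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_lines, n_markers = 1000, 5000           # NAM lines x DArTseq-like markers
genotypes = rng.integers(0, 2, size=(n_lines, n_markers))  # 0/1 allele calls
trait = rng.normal(size=n_lines)           # e.g. a stay-green score

# Single-marker regression: trait ~ marker, one test per marker
p_values = np.array([
    stats.linregress(genotypes[:, j], trait).pvalue
    for j in range(n_markers)
])

# Bonferroni-corrected candidate marker-trait associations
hits = np.where(p_values < 0.05 / n_markers)[0]
print(f"{len(hits)} candidate marker-trait associations")
```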

Relevance:

100.00%

Publisher:

Abstract:

The use of electroacoustic analogies suggests that a source of acoustical energy (such as an engine, compressor, blower, turbine, loudspeaker, etc.) can be characterized by an acoustic source pressure ps and internal source impedance Zs, analogous to the open-circuit voltage and internal impedance of an electrical source. The present paper shows analytically that the source characteristics evaluated by means of the indirect methods are independent of the loads selected; that is, the evaluated values of ps and Zs are unique, and that the results of the different methods (including the direct method) are identical. In addition, general relations have been derived here for the transfer of source characteristics from one station to another station across one or more acoustical elements, and also for combining several sources into a single equivalent source. Finally, all the conclusions are extended to the case of a uniformly moving medium, incorporating the convective as well as dissipative effects of the mean flow.
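For readers unfamiliar with the indirect approach, its standard two-load form works like a Thevenin measurement: terminate the source in two known loads Z1 and Z2, measure the resulting pressures p1 and p2, and solve for the source characteristics. This is a textbook formulation consistent with, but not quoted from, the paper:

```latex
% Each load measurement satisfies the Thevenin-like relation
\[
  p_i = p_s \, \frac{Z_i}{Z_s + Z_i}, \qquad i = 1, 2,
\]
% and solving the pair of equations for the two unknowns gives
\[
  Z_s = \frac{Z_1 Z_2 \, (p_2 - p_1)}{p_1 Z_2 - p_2 Z_1},
  \qquad
  p_s = p_1 \, \frac{Z_s + Z_1}{Z_1}.
\]
```

The paper's uniqueness result then amounts to the statement that any admissible pair of loads yields the same ps and Zs.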

Relevance:

100.00%

Publisher:

Abstract:

Human activities extract and displace different substances and materials from the earth's crust, thus causing various environmental problems, such as climate change, acidification and eutrophication. As problems have become more complicated, more holistic measures that consider the origins and sources of pollutants have been called for. Industrial ecology is a field of science that forms a comprehensive framework for studying the interactions between the modern technological society and the environment. Industrial ecology considers humans and their technologies to be part of the natural environment, not separate from it. Industrial operations form natural systems that must also function as such within the constraints set by the biosphere. Industrial symbiosis (IS) is a central concept of industrial ecology. Industrial symbiosis studies look at the physical flows of materials and energy in local industrial systems. In an ideal IS, waste material and energy are exchanged by the actors of the system, thereby reducing the consumption of virgin material and energy inputs and the generation of waste and emissions. Companies are seen as part of the chains of suppliers and consumers that resemble those of natural ecosystems. The aim of this study was to analyse the environmental performance of an industrial symbiosis based on pulp and paper production, taking into account life cycle impacts as well. Life Cycle Assessment (LCA) is a tool for quantitatively and systematically evaluating the environmental aspects of a product, technology or service throughout its whole life cycle. Moreover, the Natural Step Sustainability Principles formed a conceptual framework for assessing the environmental performance of the case study symbiosis (Paper I). The environmental performance of the case study symbiosis was compared to four counterfactual reference scenarios in which the actors of the symbiosis operated on their own. The research methods used were process-based life cycle assessment (LCA) (Papers II and III) and hybrid LCA, which combines both process and input-output LCA (Paper IV). The results showed that the environmental impacts caused by the extraction and processing of the materials and the energy used by the symbiosis were considerable. If only the direct emissions and resource use of the symbiosis had been considered, less than half of the total environmental impacts of the system would have been taken into account. When the results were compared with the counterfactual reference scenarios, the net environmental impacts of the symbiosis were smaller than those of the reference scenarios. The reduction in environmental impacts was mainly due to changes in the way energy was produced. However, the results are sensitive to the way the reference scenarios are defined. LCA is a useful tool for assessing the overall environmental performance of industrial symbioses. It is recommended that in addition to the direct effects, the upstream impacts should be taken into account as well when assessing the environmental performance of industrial symbioses. Industrial symbiosis should be seen as part of the process of improving the environmental performance of a system. In some cases, it may be more efficient, from an environmental point of view, to focus on supply chain management instead.
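For orientation, the process-LCA computation underlying results like these has a standard matrix form: scale each process so that the system delivers the functional unit, then aggregate the environmental interventions of the scaled processes. The sketch below is generic, with made-up numbers rather than the study's inventory data.

```python
import numpy as np

# Technology matrix A: columns are processes, rows are product flows.
# Hypothetical 2-process system: pulp production and energy supply.
A = np.array([[1.0, 0.0],    # pulp (t): produced by process 1
              [-0.3, 1.0]])  # energy (MWh): pulp consumes, plant supplies

# Intervention matrix B: emissions per unit of each process
B = np.array([[0.8, 120.0]])  # kg CO2 per unit process (made-up numbers)

# Functional unit: deliver 1 t of pulp, no net external energy
f = np.array([1.0, 0.0])

s = np.linalg.solve(A, f)   # scaling factors for each process
g = B @ s                   # life-cycle inventory result
print(f"scaling: {s}, total CO2: {g[0]:.1f} kg")  # -> 36.8 kg
```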

Relevance:

100.00%

Publisher:

Abstract:

Cyber-physical systems integrate computation, networking, and physical processes. Substantial research challenges exist in the design and verification of such large-scale, distributed sensing, actuation, and control systems. Rapidly improving technology and recent advances in control theory, networked systems, and computer science give us the opportunity to drastically improve our approach to integrated flow of information and cooperative behavior. Current systems rely on text-based specifications and manual design. Using new technology advances, we can create easier, more efficient, and cheaper ways of developing these control systems. This thesis will focus on design considerations for system topologies, ways to formally and automatically specify requirements, and methods to synthesize reactive control protocols, all within the context of an aircraft electric power system as a representative application area.

This thesis consists of three complementary parts: synthesis, specification, and design. The first section focuses on the synthesis of central and distributed reactive controllers for an aircraft electric power system. This approach incorporates methodologies from computer science and control. The resulting controllers are correct by construction with respect to system requirements, which are formulated using the specification language of linear temporal logic (LTL). The second section addresses how to formally specify requirements and introduces a domain-specific language for electric power systems. A software tool automatically converts high-level requirements into LTL and synthesizes a controller.
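The thesis's actual specifications are not reproduced in this abstract; to give a flavor of the kind of requirement involved, two typical electric power system properties, written in LTL with hypothetical atomic propositions, might look as follows.

```latex
% Illustrative EPS requirements (atomic propositions are hypothetical):
%   c_1, c_2 : contactors connecting generators G1, G2 to one AC bus
%   fail_G1  : failure of generator G1
%   pwr_ess  : essential bus is powered
\[
  \Box\, \neg (c_1 \wedge c_2)
  \quad \text{(safety: never parallel two AC sources on a bus)}
\]
\[
  \Box \left( \mathit{fail}_{G1} \rightarrow \Diamond\, \mathit{pwr}_{ess} \right)
  \quad \text{(liveness: the essential bus is eventually repowered)}
\]
```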

The final section focuses on design space exploration. A design methodology is proposed that uses mixed-integer linear programming to obtain candidate topologies, which are then used to synthesize controllers. The discrete-time control logic is then verified in real time in two ways: in hardware and in simulation. Finally, the problem of partial observability and dynamic state estimation is explored. Given a fixed placement of sensors on an electric power system, measurements from these sensors can be used in conjunction with the control logic to infer the state of the system.

Relevance:

100.00%

Publisher:

Abstract:

While some of the deepest results in nature are those that give explicit bounds between important physical quantities, some of the most intriguing and celebrated of such bounds come from fields where there is still a great deal of disagreement and confusion regarding even the most fundamental aspects of the theories. For example, in quantum mechanics, there is still no complete consensus as to whether the limitations associated with Heisenberg's Uncertainty Principle derive from an inherent randomness in physics, or rather from limitations in the measurement process itself, resulting from phenomena like back action. Likewise, the second law of thermodynamics makes a statement regarding the increase in entropy of closed systems, yet the theory itself has neither a universally accepted definition of equilibrium, nor an adequate explanation of how a system with underlying microscopically Hamiltonian (reversible) dynamics settles into a fixed distribution.

Motivated by these physical theories, and perhaps their inconsistencies, in this thesis we use dynamical systems theory to investigate how the very simplest of systems, even with no physical constraints, are characterized by bounds that give limits to the ability to make measurements on them. Using an existing interpretation, we start by examining how dissipative systems can be viewed as high-dimensional lossless systems, and how taking this view necessarily implies the existence of a noise process that results from the uncertainty in the initial system state. This fluctuation-dissipation result plays a central role in a measurement model that we examine, in particular describing how noise is inevitably injected into a system during a measurement, noise that can be viewed as originating either from the randomness of the many degrees of freedom of the measurement device, or of the environment. This noise constitutes one component of measurement back action, and ultimately imposes limits on measurement uncertainty. Depending on the assumptions we make about active devices, and their limitations, this back action can be offset to varying degrees via control. It turns out that using active devices to reduce measurement back action leads to estimation problems that have non-zero uncertainty lower bounds, the most interesting of which arise when the observed system is lossless. One such lower bound, a main contribution of this work, can be viewed as a classical version of a Heisenberg uncertainty relation between the system's position and momentum. We finally also revisit the murky question of how macroscopic dissipation appears from lossless dynamics, and propose alternative approaches for framing the question using existing systematic methods of model reduction.
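The fluctuation-dissipation result referred to above has a familiar classical statement in the Langevin picture, which may help anchor the discussion: damping that arises from unobserved lossless degrees of freedom is necessarily accompanied by a noise term whose strength is fixed by that damping. The relation below is the textbook version, not the thesis's specific derivation.

```latex
% Langevin equation for a damped oscillator driven by bath noise:
\[
  m \ddot{x} + \gamma \dot{x} + k x = \xi(t),
  \qquad
  \langle \xi(t)\, \xi(t') \rangle = 2 \gamma k_B T \, \delta(t - t').
\]
% The noise covariance is proportional to the damping coefficient
% gamma: fluctuation and dissipation are two faces of one coupling.
```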

Relevance:

100.00%

Publisher:

Abstract:

Computation technology has dramatically changed the world around us; you can hardly find an area where cell phones have not saturated the market, yet breakthroughs in integrating computers with biological environments remain scarce. This is largely the result of the incompatibility of the materials used in the two settings: biological experiments tend to require aqueous environments. To help bridge this divide, chemists, engineers, physicists and biologists have begun to develop microfluidics. Unfortunately, most microfluidic devices have required large external support equipment to run. This thesis presents several microfluidic methods that help integrate engineering and biology by exploiting nanotechnology, pushing the field of microfluidics back toward its intended purpose: small, integrated biological and electrical devices. I demonstrate this goal by developing methods and devices to (1) separate membrane-bound proteins with the use of microfluidics, (2) use optical technology to turn fiber optic cables into protein sensors, (3) build new fluidic devices using semiconductor materials to manipulate single cells, and (4) develop a new microfluidics-based genetic diagnostic assay that works with current PCR methodology to provide faster and cheaper results. All of these methods and systems can be used as components of a self-contained biomedical device.

Relevance:

100.00%

Publisher:

Abstract:

Change in thermal conditions can substantially affect crop growth, cropping systems, agricultural production and land use. In the present study, we used the annual accumulated temperature > 10 °C (AAT10) as an indicator to investigate the spatio-temporal changes in thermal conditions across China from the late 1980s to 2000, at a spatial resolution of 1 x 1 km. We also investigated the effects of these spatio-temporal changes on cultivated land use and cropping systems. We found that AAT10 has increased on a national scale since the late 1980s. In particular, 3.16 x 10^5 km^2 of land moved from the spring wheat zone (AAT10: 1600 to 3400 °C) to the winter wheat zone (AAT10: 3400 to 4500 °C). Changes in thermal conditions had large influences on cultivated land area and cropping systems. The area of cultivated land increased in regions with increasing AAT10, and the cropping rotation index has increased since the late 1980s. Single cropping was replaced by three crops in two years in many regions, and winter wheat cultivation shifted northward in some areas, such as the eastern Inner Mongolia Autonomous Region and western Liaoning and Jilin Provinces.
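AAT10 is a threshold-sum indicator over daily mean temperatures; a minimal sketch of the computation under one common convention (sum the full daily mean on days whose mean exceeds 10 °C) follows. The synthetic data and function name are illustrative assumptions.

```python
import numpy as np

def aat10(daily_mean_temps, threshold=10.0):
    """Annual accumulated temperature > 10 deg C (AAT10).
    One common convention: sum the full daily mean temperature over
    all days whose mean exceeds the threshold. (Another variant sums
    only the excess above the threshold.)"""
    t = np.asarray(daily_mean_temps, dtype=float)
    return t[t > threshold].sum()

# Synthetic year of daily means (deg C) for illustration
rng = np.random.default_rng(0)
year = (15.0 + 12.0 * np.sin(2 * np.pi * np.arange(365) / 365)
        + rng.normal(0, 2, 365))
print(f"AAT10 = {aat10(year):.0f} degree-days")
```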

Relevance:

100.00%

Publisher:

Abstract:

As the commoditization of sensing, actuation and communication hardware increases, so does the potential for dynamically tasked sense-and-respond networked systems (i.e., Sensor Networks or SNs) to replace existing disjoint and inflexible special-purpose deployments (closed-circuit security video, anti-theft sensors, etc.). While various solutions have emerged to many individual SN-centric challenges (e.g., power management, communication protocols, role assignment), perhaps the largest remaining obstacle to widespread SN deployment is that those who wish to deploy, utilize, and maintain a programmable Sensor Network lack the programming and systems expertise to do so. The contributions of this thesis center on the design, development and deployment of the SN Workbench (snBench). snBench embodies an accessible, modular programming platform coupled with a flexible and extensible run-time system that, together, support the entire life-cycle of distributed sensory services. As it is impossible to find a one-size-fits-all programming interface, this work advocates the use of tiered layers of abstraction that enable a variety of high-level, domain-specific languages to be compiled to a common (thin-waist) tasking language; this common tasking language is statically verified and can be subsequently re-translated, if needed, for execution on a wide variety of hardware platforms. snBench provides: (1) a common sensory tasking language (Instruction Set Architecture) powerful enough to express complex SN services, yet simple enough to be executed by highly constrained resources with soft, real-time constraints; (2) a prototype high-level language (and corresponding compiler) to illustrate the utility of the common tasking language and the tiered programming approach in this domain; (3) an execution environment and run-time support infrastructure that abstract a collection of heterogeneous resources into a single virtual Sensor Network, tasked via this common tasking language; and (4) novel formal methods (i.e., static analysis techniques) that verify safety properties and infer implicit resource constraints to facilitate resource allocation for new services. This thesis presents these components in detail, as well as two case studies: the use of snBench to integrate physical and wireless network security, and the use of snBench as the foundation for semester-long student projects in a graduate-level Software Engineering course.

Relevance:

100.00%

Publisher:

Abstract:

In the last decade, we have witnessed the emergence of large, warehouse-scale data centres which have enabled new internet-based software applications such as cloud computing, search engines, social media, e-government etc. Such data centres consist of large collections of servers interconnected using short-reach (up to a few hundred meters) optical interconnect. Today, transceivers for these applications achieve up to 100Gb/s by multiplexing 10x 10Gb/s or 4x 25Gb/s channels. In the near future, however, data centre operators have expressed a need for optical links which can support 400Gb/s up to 1Tb/s. The crucial challenge is to achieve this in the same footprint (same transceiver module) and with similar power consumption as today's technology. Straightforward scaling of the currently used space or wavelength division multiplexing may be difficult to achieve: indeed, a 1Tb/s transceiver would require integration of 40 VCSELs (vertical cavity surface emitting laser diodes, widely used for short-reach optical interconnect), 40 photodiodes and the electronics operating at 25Gb/s in the same module as today's 100Gb/s transceiver. Pushing the bit rate on such links beyond today's commercially available 100Gb/s per fibre will require new generations of VCSELs and their driver and receiver electronics. This work looks into a number of state-of-the-art technologies, investigates their performance constraints and recommends different sets of designs, specifically targeting multilevel modulation formats. Several methods to extend the bandwidth using deep-submicron (65nm and 28nm) CMOS technology are explored in this work, while maintaining a focus on reducing power consumption and chip area. The techniques used were pre-emphasis on the rising and falling edges of the signal, and bandwidth extension via inductive peaking and various local feedback techniques. These techniques have been applied to a transmitter and receiver developed for advanced modulation formats such as PAM-4 (4-level pulse amplitude modulation). Such a modulation format increases the throughput per individual channel, which helps to overcome the challenges mentioned above and realize 400Gb/s to 1Tb/s transceivers.
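PAM-4's appeal is that each symbol carries two bits, doubling throughput at a given symbol rate; the sketch below shows a Gray-coded PAM-4 mapper and slicer (illustrative only, not the chip's actual encoder). Gray coding ensures that a single-level slicing error corrupts only one bit.

```python
# Gray-coded PAM-4: two bits per symbol, adjacent levels differ
# by one bit so a single-level error corrupts only one bit.
GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits):
    """Map a bit sequence (even length) to PAM-4 amplitude levels."""
    assert len(bits) % 2 == 0
    return [GRAY_PAM4[(bits[i], bits[i + 1])]
            for i in range(0, len(bits), 2)]

def pam4_decode(levels):
    """Recover bits by nearest-level slicing (ideal, noiseless)."""
    inverse = {v: k for k, v in GRAY_PAM4.items()}
    return [b for lvl in levels for b in inverse[lvl]]

bits = [1, 0, 1, 1, 0, 0, 0, 1]
levels = pam4_encode(bits)          # 4 symbols carry 8 bits
assert pam4_decode(levels) == bits
print(levels)                       # [3, 1, -3, -1]
```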