933 results for THIRD GENERATION SYSTEMS
Abstract:
The MarQUEST (Marine Biogeochemistry and Ecosystem Modelling Initiative in QUEST) project was established to develop improved descriptions of marine biogeochemistry, suited for the next generation of Earth system models. We review progress in these areas, providing insight into the advances that have been made and identifying the key gaps that remain in the development of the marine component of next-generation Earth system models. The following issues are discussed and, where appropriate, results are presented: the choice of model structure, scaling processes from physiology to functional types, the sensitivity of ecosystem models to changes in the physical environment, the role of the coastal ocean, and new methods for the evaluation and comparison of ecosystem and biogeochemistry models. We make recommendations as to where future investment in marine ecosystem modelling should be focused, highlighting a generic software framework for model development, improved hydrodynamic models, better parameterisation of new and existing models, reanalysis tools and ensemble simulations. The final challenge is to ensure that experimental and observational scientists are stakeholders in the models, and vice versa.
Abstract:
Farming systems research is a multi-disciplinary, holistic approach to solving the problems of small farms. Small and marginal farmers are the core of the Indian rural economy, constituting 80% of the total farming community but possessing only 36% of the total operational land. The declining trend of per capita land availability poses a serious challenge to the sustainability and profitability of farming. Under such conditions, it is appropriate to integrate land-based enterprises such as dairy, fishery, poultry, duckery, apiary, and field and horticultural cropping within the farm, with the objective of generating adequate income and employment for these small and marginal farmers under a set of farm constraints and varying levels of resource availability and opportunity. The integration of different farm enterprises can be achieved with the help of a linear programming model. For the current review, integrated farming systems models were developed, by way of illustration, for the marginal, small, medium and large farms of eastern India using linear programming. Risk analyses were carried out for different levels of income and enterprise combinations. The fishery enterprise was shown to be less risk-prone, whereas the crop enterprise involved greater risk. In general, the degree of risk increased with increasing level of income. With increases in farm income and risk level, resource use efficiency increased. Medium and large farms proved to be more profitable than small and marginal farms, with higher levels of resource use efficiency and return per Indian rupee (Rs) invested. Among the different enterprises of integrated farming systems, a chain of interaction and resource flow was observed. In order to make farming profitable and improve resource use efficiency at the farm level, the synergy among interacting components of farming systems should be exploited. In the process of technology generation, transfer and other developmental efforts at the farm level (contrary to discipline- and commodity-based approaches, which tend to be piecemeal and conducted in isolation), it is desirable to place a whole-farm scenario before farmers to enhance their farm income, thereby motivating them towards more efficient and sustainable farming.
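As a rough illustration of the kind of linear programming allocation described in this abstract, the sketch below maximises farm income across a few enterprises subject to land and labour constraints using scipy. The enterprise list, gross margins and resource coefficients are hypothetical placeholders, not figures from the study.

```python
# Minimal sketch of a whole-farm land-allocation LP.
# Enterprise names, gross margins and resource coefficients are assumed,
# not taken from the study.
from scipy.optimize import linprog

enterprises = ["crops", "dairy", "fishery", "poultry"]
gross_margin = [30_000, 45_000, 60_000, 25_000]   # Rs per unit of enterprise (assumed)

# Resource use per unit of enterprise: land (ha) and labour (person-days)
land_per_unit = [1.0, 0.2, 0.4, 0.05]
labour_per_unit = [120, 200, 90, 60]

land_available = 1.0       # ha, e.g. a marginal farm (assumed)
labour_available = 400     # person-days per year (assumed)

# linprog minimises, so negate the gross margins to maximise total income.
res = linprog(
    c=[-g for g in gross_margin],
    A_ub=[land_per_unit, labour_per_unit],
    b_ub=[land_available, labour_available],
    bounds=[(0, None)] * len(enterprises),
    method="highs",
)

if res.success:
    for name, x in zip(enterprises, res.x):
        print(f"{name}: {x:.2f} units")
    print(f"Maximum farm income: Rs {-res.fun:,.0f}")
```

A real whole-farm model would add many more constraints (capital, fodder, water, seasonal labour) and the risk analysis described above, but the structure is the same.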
Abstract:
Importance measures in reliability engineering are used to identify weak areas of a system and to signify the roles of components in either causing or contributing to the proper functioning of the system. Traditional importance measures for multistate systems mainly concern the reliability importance of an individual component and seldom consider the utility performance of the system. This paper extends the joint importance concepts of two components from the binary system case to the multistate system case. A joint structural importance and a joint reliability importance are defined on the basis of the performance utility of the system. The joint structural importance measures the relationship between two components when the reliabilities of the components are not available; the joint reliability importance applies when the reliabilities of the components are given. The properties of these importance measures are also investigated. A case study of an offshore electrical power generation system is given.
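For orientation, the sketch below computes the classical binary-system joint reliability importance of two components, JRI(i, j) = ∂²R/∂p_i∂p_j, by pivotal decomposition on a small example structure. The multistate, utility-weighted extension defined in the paper is not reproduced; the structure function and component reliabilities are illustrative assumptions.

```python
# Classical binary joint reliability importance, JRI(i, j) = d^2 R / dp_i dp_j,
# evaluated by pivotal decomposition.  Structure and reliabilities are
# illustrative only; the multistate, utility-weighted measure is not reproduced.
from itertools import product

def system_works(states):
    # Example structure: component 0 in series with the parallel pair (1, 2).
    return states[0] and (states[1] or states[2])

def reliability(p, fixed=None):
    """Exact system reliability by enumeration, with some components fixed to 0/1."""
    fixed = fixed or {}
    total = 0.0
    for states in product([0, 1], repeat=len(p)):
        if any(states[i] != s for i, s in fixed.items()):
            continue
        prob = 1.0
        for i, s in enumerate(states):
            if i not in fixed:
                prob *= p[i] if s else (1.0 - p[i])
        if system_works(states):
            total += prob
    return total

def joint_reliability_importance(p, i, j):
    # JRI(i, j) = R(1_i, 1_j) - R(1_i, 0_j) - R(0_i, 1_j) + R(0_i, 0_j)
    return (reliability(p, {i: 1, j: 1}) - reliability(p, {i: 1, j: 0})
            - reliability(p, {i: 0, j: 1}) + reliability(p, {i: 0, j: 0}))

p = [0.9, 0.8, 0.7]   # component reliabilities (assumed)
print(joint_reliability_importance(p, 1, 2))   # negative: the parallel pair act as substitutes
```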
Abstract:
Quantitative control of aroma generation during the Maillard reaction presents great scientific and industrial interest. Although there have been many studies conducted in simplified model systems, the results are difficult to apply to complex food systems, where the presence of other components can have a significant impact. In this work, an aqueous extract of defatted beef liver was chosen as a simplified food matrix for studying the kinetics of the Maillard reaction. Aliquots of the extract were heated under different time and temperature conditions and analyzed for sugars, amino acids, and methylbutanals, which are important Maillard-derived aroma compounds formed in cooked meat. Multiresponse kinetic modeling, based on a simplified mechanistic pathway, gave a good fit with the experimental data, but only when additional steps were introduced to take into account the interactions of glucose and glucose-derived intermediates with protein and other amino compounds. This emphasizes the significant role of the food matrix in controlling the Maillard reaction.
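The sketch below illustrates the general idea of multiresponse kinetic modelling: a small reaction scheme (glucose + amino acid to intermediate to methylbutanal, with a competing loss of the intermediate to protein binding) is integrated with scipy and all responses are fitted simultaneously by least squares. The scheme, rate constants and data are illustrative only, not the model or measurements of this study.

```python
# Toy multiresponse kinetic model:
#   glucose + amino acid -> intermediate -> methylbutanal
# with an extra step removing intermediate via binding to protein.
# Scheme, rate constants and "data" are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def rhs(t, y, k1, k2, k3):
    glucose, amino, intermediate, methylbutanal = y
    r1 = k1 * glucose * amino        # Maillard initiation
    r2 = k2 * intermediate           # aroma formation
    r3 = k3 * intermediate           # loss of intermediate to protein binding
    return [-r1, -r1, r1 - r2 - r3, r2]

def simulate(k, t_eval, y0):
    sol = solve_ivp(rhs, (0, t_eval[-1]), y0, t_eval=t_eval, args=tuple(k))
    return sol.y

# "Measured" multiresponse data (synthetic here): rows are the four species.
t_obs = np.array([0.0, 5.0, 10.0, 20.0, 40.0])     # heating time, min
y0 = [10.0, 8.0, 0.0, 0.0]                          # initial concentrations, mmol/kg
data = simulate([0.02, 0.15, 0.30], t_obs, y0)      # pretend these were measured

def residuals(k):
    # Fit all responses at once, which is the point of multiresponse modelling.
    return (simulate(k, t_obs, y0) - data).ravel()

fit = least_squares(residuals, x0=[0.01, 0.1, 0.1], bounds=(0, np.inf))
print("estimated rate constants:", fit.x)
```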
Abstract:
It is demonstrated that distortion of the terahertz beam profile and generation of a cross-polarised component occur when the beam in terahertz time domain spectroscopy and imaging systems interacts with the sample under test. These distortions modify the detected signal, leading to spectral and image artefacts. The degree of distortion depends on the optical design of the system as well as the properties of the sample.
Abstract:
A hybridised, knowledge-based evolutionary algorithm (KEA) is applied to multi-criterion minimum spanning tree problems. Hybridisation is used across its three phases. In the first phase a deterministic single-objective optimisation algorithm finds the extreme points of the Pareto front. In the second phase a K-best approach finds the first neighbours of the extreme points, which serve as an elitist parent population for an evolutionary algorithm in the third phase. A knowledge-based mutation operator is applied in each generation to reproduce individuals that are at least as good as their unique parent. The advantages of KEA over previous algorithms include its speed (making it applicable to large real-world problems), its scalability to more than two criteria, and its ability to find both the supported and unsupported optimal solutions.
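As a sketch of the first, deterministic phase described here, the code below computes one extreme point of the Pareto front per criterion by running a single-objective Kruskal minimum spanning tree on each edge-cost component. The K-best neighbourhood and evolutionary phases are not reproduced, and the small bi-criterion graph is made up for illustration.

```python
# Phase-1 sketch: one single-objective MST per criterion gives the extreme
# points of the Pareto front.  Graph and costs are illustrative only.

def kruskal_mst(n_nodes, edges, criterion):
    """edges: list of (u, v, costs); criterion: index into the cost vector."""
    parent = list(range(n_nodes))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    tree = []
    for u, v, costs in sorted(edges, key=lambda e: e[2][criterion]):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v, costs))
    return tree

def total_costs(tree):
    n_criteria = len(tree[0][2])
    return tuple(sum(c[i] for _, _, c in tree) for i in range(n_criteria))

# Each edge carries a cost vector (criterion 1, criterion 2).
edges = [
    (0, 1, (2, 9)), (0, 2, (4, 3)), (1, 2, (1, 7)),
    (1, 3, (5, 1)), (2, 3, (3, 6)),
]

for k in range(2):
    extreme = kruskal_mst(4, edges, criterion=k)
    print(f"extreme point for criterion {k}: costs {total_costs(extreme)}")
```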
Abstract:
Techniques for the coherent generation and detection of electromagnetic radiation in the far infrared, or terahertz, region of the electromagnetic spectrum have recently developed rapidly and may soon be applied for in vivo medical imaging. Both continuous wave and pulsed imaging systems are under development, with terahertz pulsed imaging being the more common method. Typically a pump and probe technique is used, with picosecond pulses of terahertz radiation generated from femtosecond infrared laser pulses, using an antenna or nonlinear crystal. After interaction with the subject either by transmission or reflection, coherent detection is achieved when the terahertz beam is combined with the probe laser beam. Raster scanning of the subject leads to an image data set comprising a time series representing the pulse at each pixel. A set of parametric images may be calculated, mapping the values of various parameters calculated from the shape of the pulses. A safety analysis has been performed, based on current guidelines for skin exposure to radiation of wavelengths 2.6 µm–20 mm (15 GHz–115 THz), to determine the maximum permissible exposure (MPE) for such a terahertz imaging system. The international guidelines for this range of wavelengths are drawn from two U.S. standards documents. The method for this analysis was taken from the American National Standard for the Safe Use of Lasers (ANSI Z136.1), and to ensure a conservative analysis, parameters were drawn from both this standard and from the IEEE Standard for Safety Levels with Respect to Human Exposure to Radio Frequency Electromagnetic Fields (C95.1). The calculated maximum permissible average beam power was 3 mW, indicating that typical terahertz imaging systems are safe according to the current guidelines. Further developments may however result in systems that will exceed the calculated limit. Furthermore, the published MPEs for pulsed exposures are based on measurements at shorter wavelengths and with pulses of longer duration than those used in terahertz pulsed imaging systems, so the results should be treated with caution.
Abstract:
It is generally assumed that the variability of neuronal morphology has an important effect on both the connectivity and the activity of the nervous system, but this effect has not been thoroughly investigated. Neuroanatomical archives represent a crucial tool to explore structure–function relationships in the brain. We are developing computational tools to describe, generate, store and render large sets of three-dimensional neuronal structures in a format that is compact, quantitative, accurate and readily accessible to the neuroscientist. Single-cell neuroanatomy can be characterized quantitatively at several levels. In computer-aided neuronal tracing files, a dendritic tree is described as a series of cylinders, each represented by diameter, spatial coordinates and the connectivity to other cylinders in the tree. This ‘Cartesian’ description constitutes a completely accurate mapping of dendritic morphology but it bears little intuitive information for the neuroscientist. In contrast, a classical neuroanatomical analysis characterizes neuronal dendrites on the basis of the statistical distributions of morphological parameters, e.g. maximum branching order or bifurcation asymmetry. This description is intuitively more accessible, but it only yields information on the collective anatomy of a group of dendrites, i.e. it is not complete enough to provide a precise ‘blueprint’ of the original data. We are adopting a third, intermediate level of description, which consists of the algorithmic generation of neuronal structures within a certain morphological class based on a set of ‘fundamental’, measured parameters. This description is as intuitive as a classical neuroanatomical analysis (parameters have an intuitive interpretation), and as complete as a Cartesian file (the algorithms generate and display complete neurons). The advantages of the algorithmic description of neuronal structure are immense. If an algorithm can measure the values of a handful of parameters from an experimental database and generate virtual neurons whose anatomy is statistically indistinguishable from that of their real counterparts, a great deal of data compression and amplification can be achieved. Data compression results from the quantitative and complete description of thousands of neurons with a handful of statistical distributions of parameters. Data amplification is possible because, from a set of experimental neurons, many more virtual analogues can be generated. This approach could allow one, in principle, to create and store a neuroanatomical database containing data for an entire human brain in a personal computer. We are using two programs, L-NEURON and ARBORVITAE, to investigate systematically the potential of several different algorithms for the generation of virtual neurons. Using these programs, we have generated anatomically plausible virtual neurons for several morphological classes, including guinea pig cerebellar Purkinje cells and cat spinal cord motor neurons. These virtual neurons are stored in an online electronic archive of dendritic morphology. This process highlights the potential and the limitations of the ‘computational neuroanatomy’ strategy for neuroscience databases.
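A heavily simplified sketch of the algorithmic description discussed above: a virtual dendrite is grown recursively by sampling segment length, taper and branching probability from a handful of statistical distributions, and then summarised with the kind of collective statistics a classical analysis would report. The distributions and parameter values are illustrative assumptions, not those used by L-NEURON or ARBORVITAE.

```python
# Grow a virtual dendrite from a handful of sampled parameters, then report
# classical-style summary statistics.  Distributions and values are assumed.
import random
from dataclasses import dataclass, field

@dataclass
class Segment:
    length: float
    diameter: float
    order: int
    children: list = field(default_factory=list)

def grow(diameter, order, rng, max_order=6):
    length = rng.gammavariate(2.0, 15.0)          # segment length, um (assumed)
    seg = Segment(length, diameter, order)
    branch_prob = max(0.0, 0.8 - 0.1 * order)     # branching less likely distally (assumed)
    if order < max_order and rng.random() < branch_prob:
        for _ in range(2):                        # bifurcation into two daughters
            child_diam = diameter * rng.uniform(0.6, 0.8)   # taper at branch points
            seg.children.append(grow(child_diam, order + 1, rng))
    return seg

def summarize(root):
    """Collective statistics of the kind a classical analysis would report."""
    lengths, orders = [], []
    stack = [root]
    while stack:
        s = stack.pop()
        lengths.append(s.length)
        orders.append(s.order)
        stack.extend(s.children)
    return {"n_segments": len(lengths),
            "total_length_um": sum(lengths),
            "max_branch_order": max(orders)}

rng = random.Random(1)
virtual_dendrite = grow(diameter=2.0, order=0, rng=rng)
print(summarize(virtual_dendrite))
```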
Abstract:
Military doctrine is one of the conceptual components of war. Its raison d’être is that of a force multiplier: it enables a smaller force to take on and defeat a larger force in battle. This article’s departure point is the aphorism of Sir Julian Corbett, who described doctrine as ‘the soul of warfare’. The second dimension to creating a force multiplier effect is forging doctrine with an appropriate command philosophy. The challenge for commanders is how, in unique circumstances, to formulate, disseminate and apply an appropriate doctrine and combine it with a relevant command philosophy. This can only be achieved by policy-makers and senior commanders successfully answering the Clausewitzian question: what kind of conflict are they involved in? Once an answer has been provided, a synthesis of these two factors can be developed and applied. Doctrine has implications for all three levels of war. Tactically, doctrine does two things: first, it helps to create a tempo of operations; second, it develops a transitory quality that will produce operational effect and ultimately facilitate the pursuit of strategic objectives. At the tactical level doctrine’s function is to provide both training and instruction; at the operational level, instruction and understanding are the critical functions; at the strategic level, it provides understanding and direction. Using John Gooch’s six components of doctrine, it will be argued that there is a lacuna in the theory of doctrine, as these components can manifest themselves in very different ways at the three levels of war. They can in turn affect the transitory quality of tactical operations. Doctrine is pivotal to success in war. Without doctrine and the appropriate command philosophy, military operations cannot be successfully concluded against an active and determined foe.
Abstract:
Air distribution systems are among the major consumers of electrical energy in air-conditioned commercial buildings, maintaining a comfortable indoor thermal environment and air quality by supplying specified amounts of treated air to different zones. The sizes of the air distribution lines affect the energy efficiency of the distribution system. Equal friction and static regain are two well-known approaches for sizing air distribution lines, and the T and IPS methods have been developed to address the life cycle cost of air distribution systems. Hitherto, all these methods have been based on static design conditions, so the dynamic performance of the system has not yet been addressed, even though air distribution systems operate mostly under dynamic rather than static conditions. Moreover, none of the existing methods considers thermal comfort or environmental impacts. This study reviews the existing methods for sizing air distribution systems and proposes a dynamic approach for size optimisation of the air distribution lines that takes into account economic aspects, environmental impacts and technical performance. These criteria are addressed, respectively, through whole life costing analysis, life cycle assessment and deviation from the set-point temperatures of different zones. Integrating these criteria into the TRNSYS software produces a novel dynamic optimisation approach for duct sizing. Because the different criteria are integrated into a well-known performance evaluation tool, the approach can be easily adopted by designers within the time pressures of design practice. Comparison of this integrated approach with the existing methods reveals that, under the defined criteria, system performance is improved by up to 15%. The approach is a significant step towards net zero emission buildings in the future.
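The sketch below illustrates the multi-criteria selection idea in its simplest form: each candidate duct diameter is scored on normalised life cycle cost, environmental impact and a comfort penalty, and the best weighted score wins. In the study these criteria come from whole life costing, life cycle assessment and TRNSYS simulation; the closed-form expressions and weights here are placeholders.

```python
# Toy multi-criteria duct size selection.  The cost, impact and comfort
# expressions and the weights are placeholders, not the study's models.
candidate_diameters = [0.20, 0.25, 0.30, 0.35, 0.40]   # m

def life_cycle_cost(d):        # capital cost grows with size, fan energy falls
    return 800 * d + 120 / d**4

def environmental_impact(d):   # embodied plus operational impact, arbitrary units
    return 50 * d + 9 / d**4

def comfort_penalty(d):        # proxy for deviation from zone set-point temperature
    return abs(0.30 - d) * 10

weights = {"cost": 0.5, "impact": 0.3, "comfort": 0.2}

def normalise(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

costs = normalise([life_cycle_cost(d) for d in candidate_diameters])
impacts = normalise([environmental_impact(d) for d in candidate_diameters])
comfort = normalise([comfort_penalty(d) for d in candidate_diameters])

scores = [weights["cost"] * c + weights["impact"] * i + weights["comfort"] * f
          for c, i, f in zip(costs, impacts, comfort)]
best = candidate_diameters[scores.index(min(scores))]
print(f"selected duct diameter: {best:.2f} m")
```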
Abstract:
Models of root system growth emerged in the early 1970s, and were based on mathematical representations of root length distribution in soil. The last decade has seen the development of more complex architectural models and the use of computer-intensive approaches to study developmental and environmental processes in greater detail. There is a pressing need for predictive technologies that can integrate root system knowledge, scaling from molecular to ensembles of plants. This paper makes the case for more widespread use of simpler models of root systems based on continuous descriptions of their structure. A new theoretical framework is presented that describes the dynamics of root density distributions as a function of individual root developmental parameters such as rates of lateral root initiation, elongation, mortality, and gravitropism. The simulations resulting from such equations can be performed most efficiently in discretized domains that deform as a result of growth, and that can be used to model the growth of many interacting root systems. The modelling principles described help to bridge the gap between continuum and architectural approaches, and enhance our understanding of the spatial development of root systems. Our simulations suggest that root systems develop in travelling wave patterns of meristems, revealing order in otherwise spatially complex and heterogeneous systems. Such knowledge should assist physiologists and geneticists to appreciate how meristem dynamics contribute to the pattern of growth and functioning of root systems in the field.
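A much-reduced sketch of the continuous description advocated here: a one-dimensional meristem density that is carried downward at the elongation rate while lateral initiation adds meristems and mortality removes them, solved with a simple upwind finite-difference scheme. The parameter values and the reduction to one dimension are illustrative assumptions, not the paper's equations.

```python
# 1D meristem density rho(z, t): advection at the elongation rate plus net
# branching.  Upwind finite differences; all parameters are illustrative.
import numpy as np

dz, dt = 0.5, 0.1                 # cm, days
depth = np.arange(0, 60, dz)      # soil depth, cm
rho = np.zeros_like(depth)        # meristem density per cm of depth
rho[0] = 1.0                      # seed at the surface

elongation = 2.0                  # root tip elongation rate, cm/day (assumed)
branching = 0.15                  # lateral initiation rate, 1/day (assumed)
mortality = 0.02                  # meristem mortality rate, 1/day (assumed)

for step in range(int(20 / dt)):  # simulate 20 days
    advect = -elongation * np.diff(rho, prepend=rho[0]) / dz   # upwind derivative
    rho = rho + dt * (advect + (branching - mortality) * rho)

above = np.nonzero(rho > 0.01 * rho.max())[0]
print(f"meristem density front near {depth[above[-1]]:.1f} cm after 20 days")
```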
Abstract:
Relating system dynamics to the broad systems movement, the key notion is that reinforcing loops deserve no less attention than balancing loops. Three specific propositions follow. First, since reinforcing loops arise in surprising places, investigations of complex systems must consider their possible existence and potential impact. Second, because the strength of reinforcing loops can be misinferred (an example from the field of servomechanisms is included), computer simulation can be essential: be it project management, corporate growth or inventory oscillation, simulation helps to assess the consequences of reinforcing loops and the options for intervention. Third, in social systems the consequences of reinforcing loops are not inevitable; examples concerning globalization illustrate how difficult it might be to challenge assumptions of inevitability. However, system dynamics and ideas from contemporary social theory help to show that even the most complex social systems are, in principle, subject to human influence. In conclusion, by employing these ideas and attending to reinforcing as well as balancing loops, system dynamics work can improve the understanding of social systems and illuminate our choices when attempting to steer them.
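A minimal stock-and-flow sketch of the contrast drawn here: a reinforcing loop whose inflow is proportional to the stock grows without bound, while a balancing loop closes the gap to a goal. Euler integration with made-up rates; the point is only that simulating the loops makes their consequences explicit.

```python
# Reinforcing loop (growth proportional to the stock) versus balancing loop
# (gap to a goal closed over an adjustment time).  All values are illustrative.
dt, horizon = 0.25, 20.0
steps = int(horizon / dt)

stock_r, stock_b = 10.0, 10.0
growth_fraction = 0.15        # reinforcing: inflow proportional to stock (assumed)
adjustment_time = 4.0         # balancing: time over which the gap is closed (assumed)
goal = 50.0

for step in range(steps):
    stock_r += dt * (growth_fraction * stock_r)           # reinforcing loop
    stock_b += dt * ((goal - stock_b) / adjustment_time)  # balancing loop

print(f"after {horizon:g} time units: reinforcing loop -> {stock_r:.1f}, "
      f"balancing loop -> {stock_b:.1f} (goal {goal:g})")
```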
Abstract:
As electricity systems incorporate increasing levels of variable renewable generation, conventional plant will be required to operate more flexibly, with potential impacts for economic viability and reliability. Northern Ireland is pursuing an ambitious target of 40% of electricity to be supplied from renewable sources by 2020. The dominant source of this energy is anticipated to come from inherently variable wind power, one of the most mature renewable technologies. Conventional thermal generators will have a significant role to play in maintaining security of supply. However, running conventional generation more flexibly in order to cater for a wind-led regime can reduce its efficiency, as well as shortening its lifespan and increasing O&M costs. This paper examines the impacts of variable operation on existing fossil-fuel-based generators, with a particular focus on Northern Ireland. Access to plant operators and industry experts has provided insight not currently evident in the energy literature. Characteristics of plant operation and the market framework are identified that present significant challenges in moving to the proposed levels of wind penetration. Opportunities for increasing flexible operation are proposed and future research needs identified.
Abstract:
Control and optimization of flavor is the ultimate challenge for the food and flavor industry. The major route to flavor formation during thermal processing is the Maillard reaction, a complex cascade of interdependent reactions initiated by the reaction between a reducing sugar and an amino compound. The complexity of the reaction means that researchers turn to kinetic modeling in order to understand the control points of the reaction and to manipulate the flavor profile. Studies of the kinetics of flavor formation have developed over the past 30 years from single-response empirical models of binary aqueous systems to sophisticated multi-response models in food matrices, based on the underlying chemistry, with the power to predict the formation of some key aroma compounds. This paper discusses in detail the development of kinetic models of thermal generation of flavor and looks at the challenges involved in predicting flavor.
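For contrast with the multi-response models discussed, the sketch below implements the earliest kind of single-response approach: pseudo-first-order formation of one aroma compound with an Arrhenius temperature dependence, k(T) = A exp(-Ea / (R T)). All parameter values are assumed for illustration and are not fitted to any real system.

```python
# Single-response, pseudo-first-order flavor formation with Arrhenius k(T).
# All parameter values are assumed for illustration.
import math

A = 5.0e8          # pre-exponential factor, 1/min (assumed)
Ea = 90_000.0      # activation energy, J/mol (assumed)
R = 8.314          # gas constant, J/(mol K)
precursor0 = 2.0   # initial precursor concentration, mmol/kg (assumed)

def aroma_formed(temp_c, minutes):
    k = A * math.exp(-Ea / (R * (temp_c + 273.15)))
    return precursor0 * (1.0 - math.exp(-k * minutes))

for temp in (100, 120, 140):
    print(f"{temp} C, 20 min: {aroma_formed(temp, 20):.3f} mmol/kg formed")
```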