44 results for THIRD GENERATION SYSTEMS
Abstract:
India is increasingly investing in renewable technology to meet rising energy demands, with hydropower and other renewables comprising one-third of current installed capacity. Installed wind power is projected to increase five-fold by 2035 (to nearly 100 GW) under the International Energy Agency’s New Policies scenario. However, renewable electricity generation depends on the prevailing meteorology, which is strongly influenced by monsoon variability. Prosperity and widespread electrification are increasing the demand for air conditioning, especially during the warm summer. This study uses multi-decadal observations and meteorological reanalysis data to assess the impact of intraseasonal monsoon variability on the balance between electricity supply from wind power and temperature-related demand in India. Active monsoon phases are characterised by vigorous convection and heavy rainfall over central India. The resulting lower temperatures reduce cooling energy demand, while strong westerly winds yield high wind-power output. In contrast, monsoon breaks are characterised by suppressed precipitation, with higher temperatures and hence greater demand for cooling, and lower wind-power output across much of India. The opposing relationship between wind-power supply and cooling demand during active phases (low demand, high supply) and breaks (high demand, low supply) suggests that monsoon variability will tend to exacerbate fluctuations in the so-called demand-net-wind (i.e., the electrical demand that must be supplied from non-wind sources). This study may have important implications for the design of power systems and for investment decisions in conventional schedulable generation facilities (such as coal and gas) that are used to maintain the supply/demand balance. In particular, if it is assumed (as is common) that the generated wind power operates as a price-taker (i.e., wind farm operators always wish to sell their power, irrespective of price), then investors in conventional facilities will face additional weather-driven volatility through the monsoonal impact on the length and frequency of their production periods (i.e., their load-duration curves).
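As a concrete illustration of the demand-net-wind quantity defined above, the following minimal Python sketch computes it and the associated load-duration curve from hourly demand and wind-power series; the array names and numbers are illustrative stand-ins, not data from the study.

```python
import numpy as np

def demand_net_wind(demand_mw, wind_mw):
    """Demand that must be met by non-wind (schedulable) generation."""
    return demand_mw - wind_mw

def load_duration_curve(net_demand_mw):
    """Sort net demand from highest to lowest to obtain a load-duration curve."""
    return np.sort(net_demand_mw)[::-1]

# Illustrative hourly series for one monsoon season (values are made up).
hours = 24 * 120
rng = np.random.default_rng(0)
demand = 150_000 + 20_000 * rng.random(hours)   # MW, higher during hot break phases
wind = 30_000 * rng.random(hours)               # MW, higher during active phases

ldc = load_duration_curve(demand_net_wind(demand, wind))
print(f"Peak net demand: {ldc[0]:.0f} MW; hours above 160 GW: {(ldc > 160_000).sum()}")
```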
Abstract:
In this article, we examine the case of a system that cooperates with a “direct” user to plan an activity that some “indirect” user, not interacting with the system, should perform. The specific application we consider is the prescription of drugs. In this case, the direct user is the prescriber and the indirect user is the person who is responsible for performing the therapy. Relevant characteristics of the two users are represented in two user models. Explanation strategies are represented in planning operators whose preconditions encode the cognitive state of the indirect user; this allows tailoring the message to the indirect user's characteristics. Expansion of optional subgoals and selection among candidate operators are made by applying decision criteria represented as metarules, which negotiate between the direct and indirect users' views while also taking into account the context in which the explanation is provided. After the message has been generated, the direct user may ask to add or remove some items, or to change the message style. The system defends the indirect user's needs as far as possible by mentioning the rationale behind the generated message. If needed, the plan is repaired and the direct user model is revised accordingly, so that the system progressively learns to generate messages suited to the preferences of the people with whom it interacts.
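To make the operator representation concrete, a rough sketch follows of how explanation operators with user-model preconditions might look; the class names, fields and messages are hypothetical illustrations, not the paper's actual formalism.

```python
from dataclasses import dataclass, field

@dataclass
class UserModel:
    """Simplified model of the indirect user's cognitive state (illustrative fields)."""
    knows: set = field(default_factory=set)
    literacy: str = "low"   # e.g. "low" or "high"

@dataclass
class ExplanationOperator:
    """A planning operator: applicable only if its preconditions hold in the user model."""
    name: str
    preconditions: callable
    message: str

operators = [
    ExplanationOperator("explain-dose-simply",
                        lambda u: u.literacy == "low",
                        "Take one tablet with breakfast every day."),
    ExplanationOperator("explain-dose-technically",
                        lambda u: u.literacy == "high" and "dosage" in u.knows,
                        "Take 5 mg once daily, with food to reduce gastric irritation."),
]

patient = UserModel(knows={"drug-name"}, literacy="low")
plan = [op for op in operators if op.preconditions(patient)]
print([op.message for op in plan])
```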
Abstract:
The extent to which the four-dimensional variational data assimilation (4DVAR) is able to use information about the time evolution of the atmosphere to infer the vertical spatial structure of baroclinic weather systems is investigated. The singular value decomposition (SVD) of the 4DVAR observability matrix is introduced as a novel technique to examine the spatial structure of analysis increments. Specific results are illustrated using 4DVAR analyses and SVD within an idealized 2D Eady model setting. Three different aspects are investigated. The first aspect considers correcting errors that result in normal-mode growth or decay. The results show that 4DVAR performs well at correcting growing errors but not decaying errors. Although it is possible for 4DVAR to correct decaying errors, the assimilation of observations can be detrimental to a forecast because 4DVAR is likely to add growing errors instead of correcting decaying errors. The second aspect shows that the singular values of the observability matrix are a useful tool to identify the optimal spatial and temporal locations for the observations. The results show that the ability to extract the time-evolution information can be maximized by placing the observations far apart in time. The third aspect considers correcting errors that result in nonmodal rapid growth. 4DVAR is able to use the model dynamics to infer some of the vertical structure. However, the specification of the case-dependent background error variances plays a crucial role.
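As a rough illustration of the SVD diagnostic described above, the sketch below applies NumPy's SVD to a stand-in observability matrix; in the study the matrix would be built from the tangent-linear Eady model and the observation operator, whereas here it is random.

```python
import numpy as np

# Stand-in observability matrix: rows map state perturbations to the observations
# collected over the assimilation window (a real one would come from the
# tangent-linear Eady model composed with the observation operator).
rng = np.random.default_rng(1)
n_obs, n_state = 20, 50
H = rng.standard_normal((n_obs, n_state))

# Singular value decomposition: right singular vectors give the state-space
# structures best constrained by the observations; singular values measure
# how strongly each structure is observed.
U, s, Vt = np.linalg.svd(H, full_matrices=False)
print("Leading singular values:", np.round(s[:5], 2))
leading_structure = Vt[0]   # best-observed spatial structure (stand-in)
```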
Abstract:
The MarQUEST (Marine Biogeochemistry and Ecosystem Modelling Initiative in QUEST) project was established to develop improved descriptions of marine biogeochemistry, suited for the next generation of Earth system models. We review progress in these areas, providing insight into the advances that have been made and identifying the key outstanding gaps in the development of the marine component of next-generation Earth system models. The following issues are discussed, with results presented where appropriate: the choice of model structure, scaling processes from physiology to functional types, the sensitivity of ecosystem models to changes in the physical environment, the role of the coastal ocean, and new methods for the evaluation and comparison of ecosystem and biogeochemistry models. We make recommendations as to where future investment in marine ecosystem modelling should be focused, highlighting a generic software framework for model development, improved hydrodynamic models, better parameterisation of new and existing models, reanalysis tools and ensemble simulations. The final challenge is to ensure that experimental/observational scientists are stakeholders in the models and vice versa.
Abstract:
Farming systems research is a multi-disciplinary, holistic approach to solving the problems of small farms. Small and marginal farmers are the core of the Indian rural economy, constituting 80% of the total farming community but possessing only 36% of the total operational land. The declining trend of per capita land availability poses a serious challenge to the sustainability and profitability of farming. Under such conditions, it is appropriate to integrate land-based enterprises such as dairy, fishery, poultry, duckery, apiary, and field and horticultural cropping within the farm, with the objective of generating adequate income and employment for these small and marginal farmers under a set of farm constraints and varying levels of resource availability and opportunity. The integration of different farm enterprises can be achieved with the help of a linear programming model. For the current review, integrated farming systems models were developed, by way of illustration, for the marginal, small, medium and large farms of eastern India using linear programming. Risk analyses were carried out for different levels of income and enterprise combinations. The fishery enterprise was shown to be less risk-prone, whereas the crop enterprise involved greater risk. In general, the degree of risk increased with increasing level of income. With increases in farm income and risk level, resource use efficiency increased. Medium and large farms proved to be more profitable than small and marginal farms, with higher resource use efficiency and return per Indian rupee (Rs) invested. Among the different enterprises of integrated farming systems, a chain of interaction and resource flow was observed. In order to make farming profitable and to improve resource use efficiency at the farm level, the synergy among interacting components of farming systems should be exploited. In the process of technology generation, transfer and other developmental efforts at the farm level (contrary to discipline- and commodity-based approaches, which have a tendency to be piecemeal and applied in isolation), it is desirable to place a whole-farm scenario before the farmers to enhance their farm income, thereby motivating them towards more efficient and sustainable farming.
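As an illustration of the linear programming formulation mentioned above, the following sketch allocates the land of a hypothetical marginal farm among enterprises; all coefficients, constraints and enterprise choices are invented for illustration and are not the review's figures.

```python
from scipy.optimize import linprog

# Decision variables: hectares allocated to crop, fishery, dairy (fodder), poultry.
# All coefficients below are illustrative, not values from the review.
net_return = [30_000, 45_000, 40_000, 35_000]     # Rs per ha per year
labour     = [90, 60, 120, 70]                    # person-days per ha
capital    = [20_000, 35_000, 50_000, 30_000]     # Rs per ha

c = [-r for r in net_return]                      # linprog minimises, so negate returns
A_ub = [labour, capital]
b_ub = [150, 60_000]                              # labour and capital available
A_eq = [[1, 1, 1, 1]]
b_eq = [1.0]                                      # a marginal farm of 1 ha

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print("Optimal allocation (ha):", res.x.round(2), "Income (Rs):", -res.fun)
```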
Abstract:
Importance measures in reliability engineering are used to identify weak areas of a system and signify the roles of components in either causing or contributing to proper functioning of the system. Traditional importance measures for multistate systems mainly concern reliability importance of an individual component and seldom consider the utility performance of the systems. This paper extends the joint importance concepts of two components from the binary system case to the multistate system case. A joint structural importance and a joint reliability importance are defined on the basis of the performance utility of the system. The joint structural importance measures the relationship of two components when the reliabilities of components are not available. The joint reliability importance is inferred when the reliabilities of the components are given. The properties of the importance measures are also investigated. A case study for an offshore electrical power generation system is given.
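For intuition, the sketch below computes the binary-system joint reliability importance that the paper generalises to the multistate, utility-based setting; the structure function and component reliabilities are illustrative.

```python
from itertools import product

def system_reliability(p, structure, fixed=None):
    """Exact reliability of a binary coherent system by enumerating the states
    of the non-fixed components (fine for small examples)."""
    fixed = fixed or {}
    free = [i for i in range(len(p)) if i not in fixed]
    total = 0.0
    for states in product([0, 1], repeat=len(free)):
        x = [0] * len(p)
        prob = 1.0
        for i, v in fixed.items():
            x[i] = v
        for i, xi in zip(free, states):
            x[i] = xi
            prob *= p[i] if xi == 1 else 1 - p[i]
        total += prob * structure(x)
    return total

# Illustrative structure function: component 0 in series with the parallel pair (1, 2).
structure = lambda x: x[0] * (1 - (1 - x[1]) * (1 - x[2]))
p = [0.9, 0.8, 0.7]   # illustrative component reliabilities

def joint_reliability_importance(i, j):
    """JRI(i, j) = R(1_i, 1_j) - R(1_i, 0_j) - R(0_i, 1_j) + R(0_i, 0_j)."""
    R = lambda a, b: system_reliability(p, structure, {i: a, j: b})
    return R(1, 1) - R(1, 0) - R(0, 1) + R(0, 0)

print("JRI(1, 2) =", round(joint_reliability_importance(1, 2), 3))
```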
Abstract:
Quantitative control of aroma generation during the Maillard reaction presents great scientific and industrial interest. Although there have been many studies conducted in simplified model systems, the results are difficult to apply to complex food systems, where the presence of other components can have a significant impact. In this work, an aqueous extract of defatted beef liver was chosen as a simplified food matrix for studying the kinetics of the Maillard reaction. Aliquots of the extract were heated under different time and temperature conditions and analyzed for sugars, amino acids, and methylbutanals, which are important Maillard-derived aroma compounds formed in cooked meat. Multiresponse kinetic modeling, based on a simplified mechanistic pathway, gave a good fit with the experimental data, but only when additional steps were introduced to take into account the interactions of glucose and glucose-derived intermediates with protein and other amino compounds. This emphasizes the significant role of the food matrix in controlling the Maillard reaction.
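A rough sketch of the multiresponse kinetic modelling idea is given below, using a deliberately simplified pathway with a single loss step for intermediates binding to protein; the rate constants, species and "measurements" are synthetic, not the paper's data or full mechanism.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Simplified multiresponse Maillard pathway (illustrative, not the paper's full scheme):
#   glucose + amino acid --k1--> intermediate
#   intermediate         --k2--> methylbutanal
#   intermediate         --k3--> bound to protein / other amino compounds (loss term)
def rates(t, y, k1, k2, k3):
    glc, aa, inter, mba = y
    r1 = k1 * glc * aa
    return [-r1, -r1, r1 - (k2 + k3) * inter, k2 * inter]

def simulate(k, t_eval, y0):
    sol = solve_ivp(rates, (0, t_eval[-1]), y0, args=tuple(k), t_eval=t_eval)
    return sol.y

# Hypothetical measurements of glucose, amino acid and methylbutanal at five times.
t_obs = np.array([0, 10, 20, 40, 60.0])                    # minutes
y0 = [10.0, 8.0, 0.0, 0.0]                                 # mmol/kg
obs = simulate([0.003, 0.05, 0.08], t_obs, y0)[[0, 1, 3]]  # synthetic "data"

def residuals(log_k):
    pred = simulate(np.exp(log_k), t_obs, y0)[[0, 1, 3]]
    return (pred - obs).ravel()

fit = least_squares(residuals, x0=np.log([0.01, 0.01, 0.01]))
print("Fitted rate constants:", np.round(np.exp(fit.x), 4))
```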
Abstract:
It is demonstrated that distortion of the terahertz beam profile and generation of a cross-polarised component occur when the beam in terahertz time domain spectroscopy and imaging systems interacts with the sample under test. These distortions modify the detected signal, leading to spectral and image artefacts. The degree of distortion depends on the optical design of the system as well as the properties of the sample.
Abstract:
A hybridised and Knowledge-based Evolutionary Algorithm (KEA) is applied to the multi-criterion minimum spanning tree problems. Hybridisation is used across its three phases. In the first phase a deterministic single objective optimization algorithm finds the extreme points of the Pareto front. In the second phase a K-best approach finds the first neighbours of the extreme points, which serve as an elitist parent population to an evolutionary algorithm in the third phase. A knowledge-based mutation operator is applied in each generation to reproduce individuals that are at least as good as the unique parent. The advantages of KEA over previous algorithms include its speed (making it applicable to large real-world problems), its scalability to more than two criteria, and its ability to find both the supported and unsupported optimal solutions.
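As a sketch of the first, deterministic phase, the code below finds the extreme points of the Pareto front by solving a single-objective MST for each criterion; the graph, weights and the use of networkx's minimum spanning tree routine are illustrative stand-ins for the paper's own implementation.

```python
import networkx as nx

# Small bi-criteria graph: each edge has a cost and a delay (illustrative values).
edges = [
    ("a", "b", 4, 1), ("a", "c", 2, 6), ("b", "c", 3, 3),
    ("b", "d", 5, 2), ("c", "d", 1, 7),
]
G = nx.Graph()
for u, v, cost, delay in edges:
    G.add_edge(u, v, cost=cost, delay=delay)

def mst_objectives(weight_key):
    """Single-objective MST on one criterion; returns its (cost, delay) vector."""
    T = nx.minimum_spanning_tree(G, weight=weight_key)
    total = lambda k: sum(d[k] for _, _, d in T.edges(data=True))
    return total("cost"), total("delay")

# Extreme points of the Pareto front: the best tree for each individual criterion.
extremes = {k: mst_objectives(k) for k in ("cost", "delay")}
print(extremes)   # these trees seed the elitist parent population in later phases
```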
Abstract:
Techniques for the coherent generation and detection of electromagnetic radiation in the far infrared, or terahertz, region of the electromagnetic spectrum have recently developed rapidly and may soon be applied for in vivo medical imaging. Both continuous wave and pulsed imaging systems are under development, with terahertz pulsed imaging being the more common method. Typically a pump and probe technique is used, with picosecond pulses of terahertz radiation generated from femtosecond infrared laser pulses, using an antenna or nonlinear crystal. After interaction with the subject either by transmission or reflection, coherent detection is achieved when the terahertz beam is combined with the probe laser beam. Raster scanning of the subject leads to an image data set comprising a time series representing the pulse at each pixel. A set of parametric images may be calculated, mapping the values of various parameters calculated from the shape of the pulses. A safety analysis has been performed, based on current guidelines for skin exposure to radiation of wavelengths 2.6 µm–20 mm (15 GHz–115 THz), to determine the maximum permissible exposure (MPE) for such a terahertz imaging system. The international guidelines for this range of wavelengths are drawn from two U.S. standards documents. The method for this analysis was taken from the American National Standard for the Safe Use of Lasers (ANSI Z136.1), and to ensure a conservative analysis, parameters were drawn from both this standard and from the IEEE Standard for Safety Levels with Respect to Human Exposure to Radio Frequency Electromagnetic Fields (C95.1). The calculated maximum permissible average beam power was 3 mW, indicating that typical terahertz imaging systems are safe according to the current guidelines. Further developments may however result in systems that will exceed the calculated limit. Furthermore, the published MPEs for pulsed exposures are based on measurements at shorter wavelengths and with pulses of longer duration than those used in terahertz pulsed imaging systems, so the results should be treated with caution.
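For orientation, the following back-of-envelope check compares a notional system's average beam power (pulse energy times repetition rate) with the 3 mW figure quoted above; the pulse energy and repetition rate are assumed values, not measurements from any particular system.

```python
# Rough average-power check against the 3 mW limit reported in the abstract.
# The pulse energy and repetition rate below are assumed, illustrative values.
pulse_energy_J = 10e-12        # ~10 pJ per terahertz pulse (assumed)
rep_rate_Hz = 80e6             # typical femtosecond-laser repetition rate (assumed)

average_power_W = pulse_energy_J * rep_rate_Hz
mpe_average_power_W = 3e-3     # 3 mW, from the safety analysis above

print(f"Average beam power: {average_power_W * 1e3:.3f} mW "
      f"({'within' if average_power_W <= mpe_average_power_W else 'exceeds'} the MPE)")
```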
Abstract:
It is generally assumed that the variability of neuronal morphology has an important effect on both the connectivity and the activity of the nervous system, but this effect has not been thoroughly investigated. Neuroanatomical archives represent a crucial tool to explore structure-function relationships in the brain. We are developing computational tools to describe, generate, store and render large sets of three-dimensional neuronal structures in a format that is compact, quantitative, accurate and readily accessible to the neuroscientist. Single-cell neuroanatomy can be characterized quantitatively at several levels. In computer-aided neuronal tracing files, a dendritic tree is described as a series of cylinders, each represented by diameter, spatial coordinates and the connectivity to other cylinders in the tree. This ‘Cartesian’ description constitutes a completely accurate mapping of dendritic morphology but it bears little intuitive information for the neuroscientist. In contrast, a classical neuroanatomical analysis characterizes neuronal dendrites on the basis of the statistical distributions of morphological parameters, e.g. maximum branching order or bifurcation asymmetry. This description is intuitively more accessible, but it only yields information on the collective anatomy of a group of dendrites, i.e. it is not complete enough to provide a precise ‘blueprint’ of the original data. We are adopting a third, intermediate level of description, which consists of the algorithmic generation of neuronal structures within a certain morphological class based on a set of ‘fundamental’, measured parameters. This description is as intuitive as a classical neuroanatomical analysis (parameters have an intuitive interpretation), and as complete as a Cartesian file (the algorithms generate and display complete neurons). The advantages of the algorithmic description of neuronal structure are immense. If an algorithm can measure the values of a handful of parameters from an experimental database and generate virtual neurons whose anatomy is statistically indistinguishable from that of their real counterparts, a great deal of data compression and amplification can be achieved. Data compression results from the quantitative and complete description of thousands of neurons with a handful of statistical distributions of parameters. Data amplification is possible because, from a set of experimental neurons, many more virtual analogues can be generated. This approach could allow one, in principle, to create and store a neuroanatomical database containing data for an entire human brain in a personal computer. We are using two programs, L-NEURON and ARBORVITAE, to investigate systematically the potential of several different algorithms for the generation of virtual neurons. Using these programs, we have generated anatomically plausible virtual neurons for several morphological classes, including guinea pig cerebellar Purkinje cells and cat spinal cord motor neurons. These virtual neurons are stored in an online electronic archive of dendritic morphology. This process highlights the potential and the limitations of the ‘computational neuroanatomy’ strategy for neuroscience databases.
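To illustrate the algorithmic level of description, the toy sketch below grows a binary dendritic tree from a handful of sampled parameters; the parameter names, values and distributions are illustrative stand-ins and not those used by L-NEURON or ARBORVITAE.

```python
import random

# Toy version of the algorithmic description discussed above: grow a binary dendritic
# tree from a handful of sampled parameters (all values are illustrative).
random.seed(2)
PARAMS = {
    "branch_prob": 0.55,        # probability a segment bifurcates rather than terminates
    "mean_length_um": 40.0,     # mean segment length (exponentially distributed)
    "taper": 0.8,               # child diameter as a fraction of the parent diameter
    "max_order": 6,             # hard cap on branching order
}

def grow(diameter_um, order=0):
    """Recursively generate segments as (order, length, diameter) tuples."""
    length = random.expovariate(1.0 / PARAMS["mean_length_um"])
    segments = [(order, length, diameter_um)]
    if order < PARAMS["max_order"] and random.random() < PARAMS["branch_prob"]:
        child_d = diameter_um * PARAMS["taper"]
        segments += grow(child_d, order + 1) + grow(child_d, order + 1)
    return segments

tree = grow(diameter_um=2.0)
print(f"{len(tree)} segments, max branching order {max(o for o, _, _ in tree)}")
```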
Abstract:
Military doctrine is one of the conceptual components of war. Its raison d’être is that of a force multiplier: it enables a smaller force to take on and defeat a larger force in battle. This article’s departure point is the aphorism of Sir Julian Corbett, who described doctrine as ‘the soul of warfare’. The second dimension to creating a force multiplier effect is forging doctrine with an appropriate command philosophy. The challenge for commanders is how, in unique circumstances, to formulate, disseminate and apply an appropriate doctrine and combine it with a relevant command philosophy. This can only be achieved by policy-makers and senior commanders successfully answering the Clausewitzian question: what kind of conflict are they involved in? Once an answer has been provided, a synthesis of these two factors can be developed and applied. Doctrine has implications for all three levels of war. At the tactical level, doctrine does two things: first, it helps to create a tempo of operations; second, it develops a transitory quality that will produce operational effect and ultimately facilitate the pursuit of strategic objectives. Its function there is to provide both training and instruction. At the operational level, instruction and understanding are the critical functions. At the strategic level, doctrine provides understanding and direction. Using John Gooch’s six components of doctrine, it will be argued that there is a lacuna in the theory of doctrine, as these components can manifest themselves in very different ways at the three levels of war and can in turn affect the transitory quality of tactical operations. Doctrine is pivotal to success in war. Without doctrine and the appropriate command philosophy, military operations cannot be successfully concluded against an active and determined foe.
Abstract:
Air distribution systems are among the major electrical energy consumers in air-conditioned commercial buildings, maintaining a comfortable indoor thermal environment and air quality by supplying specified amounts of treated air to different zones. The sizes of the air distribution lines affect the energy efficiency of the distribution system. Equal friction and static regain are two well-known approaches for sizing air distribution lines. Out of concern for the life cycle cost of air distribution systems, the T and IPS methods were subsequently developed. Hitherto, all these methods have been based on static design conditions, so the dynamic performance of the system has not yet been addressed, even though air distribution systems mostly operate under dynamic rather than static conditions. Moreover, none of the existing methods consider thermal comfort or environmental impacts. This study reviews the existing methods for sizing air distribution systems and proposes a dynamic approach for size optimisation of the air distribution lines that takes into account optimisation criteria covering economic aspects, environmental impacts and technical performance. These criteria are addressed, respectively, through whole life costing analysis, life cycle assessment and the deviation from the set-point temperature of different zones. Integrating these criteria into the TRNSYS software produces a novel dynamic optimisation approach for duct sizing. Because the criteria are integrated into a well-known performance evaluation package, the approach could be easily adopted by designers within the busy nature of design practice. Comparison of this integrated approach with the existing methods reveals that, under the defined criteria, system performance is improved by up to 15% relative to the existing methods. The approach is therefore interpreted as a significant step towards net zero emission buildings in the future.
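As a rough illustration of combining the three criteria, the sketch below scores candidate duct-sizing options with a normalised weighted sum; the candidate data, weights and scoring scheme are invented for illustration, and the study itself couples the criteria to TRNSYS simulations rather than to a simple table like this.

```python
# Illustrative multi-criteria scoring of candidate duct-size configurations.
candidates = {
    # option: (whole-life cost in £k, LCA impact in t CO2e, mean |T - setpoint| in K)
    "equal_friction": (120.0, 45.0, 1.8),
    "static_regain":  (135.0, 48.0, 1.2),
    "dynamic_opt":    (110.0, 40.0, 0.9),
}
weights = {"cost": 0.4, "impact": 0.3, "comfort": 0.3}

def normalised_score(values):
    """Normalise each criterion by the worst candidate, then combine with weights."""
    worst = [max(v[i] for v in candidates.values()) for i in range(3)]
    cost, impact, dev = (values[i] / worst[i] for i in range(3))
    return weights["cost"] * cost + weights["impact"] * impact + weights["comfort"] * dev

best = min(candidates, key=lambda name: normalised_score(candidates[name]))
print({name: round(normalised_score(v), 3) for name, v in candidates.items()}, "->", best)
```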