964 results for Control-flow Analysis


Relevance:

90.00%

Publisher:

Abstract:

Bagasse stockpile operations have the potential to cause adverse environmental and social impacts. Dust releases can cause occupational health and safety concerns for factory workers, and dust emissions impact the surrounding community. Preliminary modelling showed that bagasse depithing would likely reduce the environmental risks, particularly dust emissions, associated with large-scale bagasse stockpiling operations. Dust emission properties were measured and used for dispersion modelling with favourable outcomes. Modelling showed a 70% reduction in peak ground-level concentrations of PM10 dust (particles with an aerodynamic diameter of less than 10 μm) from operations on depithed bagasse stockpiles compared to similar operations on stockpiles of whole bagasse. However, the costs of a depithing operation at a sugar factory were estimated at approximately $2.1 million in capital expenditure to process 100 000 t/y of bagasse, with operating costs of $200 000 p.a. The total capital cost for a 10 000 t/y operation was approximately $1.6 million. The cost of depithing based on a discounted cash flow analysis was $5.50 per tonne of bagasse for the 100 000 t/y scenario. This may make depithing prohibitively expensive in many situations if installed exclusively as a dust control measure.
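
The discounted per-tonne cost described above can be sketched as a levelized-cost calculation. The 10% discount rate and 20-year horizon below are assumptions for illustration; they are not stated in the abstract, so the result differs from the paper's $5.50/t figure.

```python
# Sketch of a levelized (discounted) cost per tonne for the 100,000 t/y plant.
# The discount rate and project life are assumed, not taken from the paper.
capex = 2_100_000          # $ capital cost for the 100,000 t/y plant
opex = 200_000             # $ per annum operating cost
tonnes = 100_000           # t/y of bagasse processed
r, years = 0.10, 20        # assumed discount rate and horizon

disc = [(1 + r) ** -t for t in range(1, years + 1)]
pv_costs = capex + opex * sum(disc)   # present value of all costs
pv_tonnes = tonnes * sum(disc)        # discounted throughput
cost_per_tonne = pv_costs / pv_tonnes
print(f"levelized cost: ${cost_per_tonne:.2f}/t")
```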

Relevance:

90.00%

Publisher:

Abstract:

Process models define allowed process execution scenarios. The models are usually depicted as directed graphs, with gateway nodes regulating the control flow routing logic and with edges specifying the execution order constraints between tasks. While arbitrarily structured control flow patterns in process models complicate model analysis, they also permit creativity and full expressiveness when capturing non-trivial process scenarios. This paper gives a classification of arbitrarily structured process models based on the hierarchical process model decomposition technique. We identify a structural class of models consisting of block structured patterns which, when combined, define complex execution scenarios spanning across the individual patterns. We show that complex behavior can be localized by examining structural relations of loops in hidden unstructured regions of control flow. The correctness of the behavior of process models within these regions can be validated in linear time. These observations allow us to suggest techniques for transforming hidden unstructured regions into block-structured ones.
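
The loop analysis sketched above starts from locating cycles in the control-flow graph. A minimal illustration is DFS back-edge detection; the toy process model and node names below are invented, not taken from the paper.

```python
# A minimal sketch: locating loops in a process model's control-flow graph
# via DFS back-edge detection. The graph below is illustrative only.
def back_edges(graph, start):
    """Return edges (u, v) that close a cycle, i.e. point back into the DFS stack."""
    found, on_stack, visited = [], set(), set()
    def dfs(u):
        visited.add(u); on_stack.add(u)
        for v in graph.get(u, []):
            if v in on_stack:
                found.append((u, v))       # edge closes a loop
            elif v not in visited:
                dfs(v)
        on_stack.remove(u)
    dfs(start)
    return found

# Toy process model: task t3 loops back to the gateway g1.
model = {"start": ["g1"], "g1": ["t1", "t2"], "t1": ["t3"],
         "t2": ["end"], "t3": ["g1"]}
print(back_edges(model, "start"))
```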

Relevance:

90.00%

Publisher:

Abstract:

In this note, we present a method to characterize the degradation in performance that arises in linear systems due to constraints imposed on the magnitude of the control signal to avoid saturation effects. We do this in the context of cheap control for tracking step signals.

Relevance:

90.00%

Publisher:

Abstract:

Weedy Sporobolus grasses have low palatability for livestock, with infestations reducing land condition and pastoral productivity. Control and containment options are available, but the cost of weed control is high relative to the extra return from livestock, thus limiting private investment. This paper outlines a process for analysing the economic consequences of alternative management options for weedy Sporobolus grasses. This process is applicable to other weeds and to other pastoral degradation or development issues. Using a case study property, three scenarios were developed. Each scenario compared two alternative management options and was analysed using discounted cash flow analysis. Two of the scenarios were based on infested properties, and one scenario was based on a currently uninfested property that was highly likely to become infested without active containment measures preventing weed seed transport and seedling establishment. The analysis highlighted why particular weedy Sporobolus grass management options may not be financially feasible for the landholder with the infestation. However, at the regional scale, the management options may be highly worthwhile due to a reduction in weed seed movement and new weed invasions. Therefore, to encourage investment by landholders in weedy Sporobolus grass management, the investment of public money on behalf of landholders with non-infested properties should be considered.
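
The scenario comparison described above rests on comparing discounted cash flows of alternative options. A minimal sketch follows; all dollar figures, the cash-flow horizon and the 8% discount rate are invented for illustration and are not the case-study values.

```python
# Illustrative discounted cash flow comparison of two hypothetical management
# options; every figure here is made up, not taken from the paper.
def npv(cashflows, r):
    """Net present value of year-1..n cash flows at discount rate r."""
    return sum(cf / (1 + r) ** t for t, cf in enumerate(cashflows, start=1))

r = 0.08
# Option A: active control -- high early cost, recovered production later.
option_a = [-50_000, -20_000, 10_000, 25_000, 30_000]
# Option B: do nothing -- land condition and returns decline over time.
option_b = [0, -5_000, -10_000, -15_000, -20_000]

better = "A" if npv(option_a, r) > npv(option_b, r) else "B"
print(f"NPV A = {npv(option_a, r):,.0f}, NPV B = {npv(option_b, r):,.0f}, prefer {better}")
```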

Relevance:

90.00%

Publisher:

Abstract:

The metabolism of an organism consists of a network of biochemical reactions that transform small molecules, or metabolites, into others in order to produce energy and building blocks for essential macromolecules. The goal of metabolic flux analysis is to uncover the rates, or fluxes, of those biochemical reactions. In a steady state, the sum of the fluxes that produce an internal metabolite is equal to the sum of the fluxes that consume the same molecule. Thus the steady state imposes linear balance constraints on the fluxes. In general, the balance constraints imposed by the steady state are not sufficient to uncover all the fluxes of a metabolic network. The fluxes through cycles and alternative pathways between the same source and target metabolites remain unknown. More information about the fluxes can be obtained from isotopic labelling experiments, where a cell population is fed with labelled nutrients, such as glucose containing 13C atoms. Labels are then transferred by biochemical reactions to other metabolites. The relative abundances of different labelling patterns in internal metabolites depend on the fluxes of the pathways producing them. Thus, the relative abundances of different labelling patterns contain information about the fluxes that cannot be uncovered from the balance constraints derived from the steady state. The field of research that estimates the fluxes utilizing the measured relative abundances of different labelling patterns induced by 13C-labelled nutrients is called 13C metabolic flux analysis. Two approaches to 13C metabolic flux analysis exist. In the optimization approach, a non-linear optimization task is constructed in which candidate fluxes are iteratively generated until they fit the measured abundances of different labelling patterns. In the direct approach, the linear balance constraints given by the steady state are augmented with linear constraints derived from the abundances of different labelling patterns of metabolites. Thus, mathematically involved non-linear optimization methods, which can get stuck in local optima, can be avoided. On the other hand, the direct approach may require more measurement data than the optimization approach to obtain the same flux information. Furthermore, the optimization framework can easily be applied regardless of the labelling measurement technology and with all network topologies. In this thesis we present a formal computational framework for direct 13C metabolic flux analysis. The aim of our study is to construct as many linear constraints on the fluxes from the 13C labelling measurements as possible, using only computational methods that avoid non-linear techniques and are independent of the type of measurement data, the labelling of external nutrients and the topology of the metabolic network. The presented framework is the first representative of the direct approach to 13C metabolic flux analysis that is free from restricting assumptions about these parameters. In our framework, measurement data is first propagated from the measured metabolites to other metabolites. The propagation is facilitated by a flow analysis of metabolite fragments in the network. Then new linear constraints on the fluxes are derived from the propagated data by applying techniques of linear algebra. Based on the results of the fragment flow analysis, we also present an experiment planning method that selects the sets of metabolites whose relative abundances of different labelling patterns are most useful for 13C metabolic flux analysis. Furthermore, we give computational tools to process raw 13C labelling data produced by tandem mass spectrometry into a form suitable for 13C metabolic flux analysis.
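
The steady-state balance constraints S v = 0 and the flux degrees of freedom they leave open can be illustrated on a toy network; the stoichiometric matrix below is invented for illustration, not taken from the thesis.

```python
# Sketch of the steady-state balance constraints S v = 0 for a toy network.
# The null space of S spans the fluxes left undetermined by the steady state
# alone (e.g. the split between two parallel pathways) -- exactly the degrees
# of freedom that 13C labelling data must pin down.
import numpy as np

# Rows: internal metabolites A, B; columns: reactions
#   v1: ->A, v2: A->B (pathway 1), v3: A->B (pathway 2), v4: B->
S = np.array([[ 1, -1, -1,  0],    # balance of A
              [ 0,  1,  1, -1]])   # balance of B

# Null space via SVD: right singular vectors beyond the rank of S.
_, sv, Vt = np.linalg.svd(S)
null_dim = S.shape[1] - int(np.sum(sv > 1e-10))
null_basis = Vt[-null_dim:].T        # columns span {v : S v = 0}
print("free flux directions:", null_dim)  # total throughput and the v2/v3 split
```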

Relevance:

90.00%

Publisher:

Abstract:

This paper describes an approach for the analysis and design of a 765 kV/400 kV EHV transmission system, a typical expansion of the Indian power grid, based on the analysis of steady-state and transient over-voltages. The approach to transmission system design is iterative in nature. The first step involves exhaustive power flow analysis, based on constraints such as right of way, the power to be transmitted, the power transfer capabilities of lines, and existing interconnecting transformer capabilities. Acceptable bus voltage profiles and satisfactory equipment loadings during all foreseeable operating conditions, for normal and contingency operation, are the guiding criteria. Critical operating strategies are also evolved in this initial design phase. With the steady-state over-voltages obtained, comprehensive dynamic and transient studies are to be carried out, including switching over-voltage studies. This paper presents steady-state and switching transient studies for two alternative typical configurations of 765 kV/400 kV systems, and the results are compared. Transient studies are carried out to obtain the peak over-voltage values of the 765 kV transmission systems, which are compared with those of alternative configurations of the existing 400 kV systems.
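
The power flow analysis step can be illustrated with a minimal DC power-flow calculation. The three-bus system, line susceptances and injections below are invented for illustration and are unrelated to the 765 kV/400 kV study.

```python
# A minimal DC power-flow sketch on a 3-bus system (all values per-unit).
import numpy as np

# Susceptances of lines (1-2), (1-3), (2-3)
b12, b13, b23 = 10.0, 8.0, 5.0
# Reduced B matrix for buses 2 and 3 (bus 1 is the slack, theta1 = 0).
B = np.array([[b12 + b23, -b23],
              [-b23, b13 + b23]])
P = np.array([-1.0, -0.5])       # net injections at buses 2, 3 (loads)

theta = np.linalg.solve(B, P)    # bus voltage angles (radians)
flow12 = b12 * (0.0 - theta[0])  # power flow on line 1-2
flow13 = b13 * (0.0 - theta[1])  # power flow on line 1-3
print("angles:", theta, "slack output:", round(flow12 + flow13, 3))
```

The slack bus output equals the total load (1.5 p.u.), a quick sanity check on the solution.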

Relevance:

90.00%

Publisher:

Abstract:

MATLAB is an array language, initially popular for rapid prototyping, but now increasingly used to develop production code for numerical and scientific applications. Typical MATLAB programs have abundant data parallelism. These programs also have control-flow-dominated scalar regions that have an impact on the program's execution time. Today's computer systems have tremendous computing power in the form of traditional CPU cores and throughput-oriented accelerators such as graphics processing units (GPUs). Thus, an approach that maps the control-flow-dominated regions to the CPU and the data-parallel regions to the GPU can significantly improve program performance. In this paper, we present the design and implementation of MEGHA, a compiler that automatically compiles MATLAB programs to enable synergistic execution on heterogeneous processors. Our solution is fully automated and does not require programmer input for identifying data-parallel regions. We propose a set of compiler optimizations tailored for MATLAB. Our compiler identifies data-parallel regions of the program and composes them into kernels. The problem of combining statements into kernels is formulated as a constrained graph clustering problem. Heuristics are presented to map identified kernels to either the CPU or the GPU so that kernel execution on the CPU and the GPU happens synergistically and the amount of data transfer needed is minimized. To ensure the required data movement for dependencies across basic blocks, we propose a data flow analysis and edge-splitting strategy. Thus our compiler automatically handles the composition of kernels, the mapping of kernels to the CPU and GPU, scheduling, and the insertion of required data transfers. The proposed compiler was implemented, and experimental evaluation using a set of MATLAB benchmarks shows that our approach achieves a geometric mean speedup of 19.8X for data-parallel benchmarks over native execution of MATLAB.
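
The kernel-composition idea can be caricatured in a few lines: consecutive element-wise array statements can be fused into one GPU kernel, while a scalar or control-flow statement closes the kernel. The tiny statement list below is invented for illustration; MEGHA's actual clustering analysis is far more involved.

```python
# Toy illustration of kernel identification: fuse runs of element-wise
# statements, letting non-parallel statements break the run.
stmts = [
    ("c", "elementwise", ["a", "b"]),   # c = a .* b
    ("d", "elementwise", ["c"]),        # d = sin(c)
    ("s", "scalar",      ["d"]),        # s = sum(d) -- stays on the CPU here
    ("e", "elementwise", ["d"]),        # e = d + 1
]

kernels, current = [], []
for target, kind, _ in stmts:
    if kind == "elementwise":
        current.append(target)          # keep fusing into the open kernel
    elif current:
        kernels.append(current)         # a non-parallel stmt closes the kernel
        current = []
if current:
    kernels.append(current)
print(kernels)
```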

Relevance:

90.00%

Publisher:

Abstract:

Compiler optimizations need precise and scalable analyses to discover program properties. We propose a partially flow-sensitive framework that tries to draw on the scalability of flow-insensitive algorithms while providing more precision at some specific program points. Provided with a set of critical nodes — basic blocks at which more precise information is desired — our partially flow-sensitive algorithm computes a reduced control-flow graph by collapsing some sets of non-critical nodes. The algorithm is more scalable than a fully flow-sensitive one because, assuming that the number of critical nodes is small, the reduced flow graph is much smaller than the original flow graph. At the same time, much more precise information is obtained at certain program points than would have been obtained from a flow-insensitive algorithm.
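
One simple way to realize the reduced control-flow graph described above is to collapse non-critical nodes that have a single predecessor and a single successor. The graph and node labels below are invented for illustration; the paper's collapsing criterion may differ.

```python
# Sketch of building a reduced CFG: collapse away non-critical nodes with one
# predecessor and one successor, keeping only the critical nodes' structure.
def reduce_cfg(succ, critical):
    succ = {u: list(vs) for u, vs in succ.items()}
    pred = {u: [] for u in succ}
    for u, vs in succ.items():
        for v in vs:
            pred[v].append(u)
    changed = True
    while changed:
        changed = False
        for n in list(succ):
            if n in critical or len(pred[n]) != 1 or len(succ[n]) != 1:
                continue
            p, s = pred[n][0], succ[n][0]
            succ[p] = [s if x == n else x for x in succ[p]]   # rewire p -> s
            pred[s] = [p if x == n else x for x in pred[s]]
            del succ[n], pred[n]
            changed = True
    return succ

cfg = {"entry": ["b1"], "b1": ["b2"], "b2": ["b3"], "b3": ["exit"], "exit": []}
print(reduce_cfg(cfg, critical={"entry", "b2", "exit"}))
```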

Relevance:

90.00%

Publisher:

Abstract:

Transaction processing is a key constituent of the IT workload of commercial enterprises (e.g., banks, insurance companies). Even today, in many large enterprises, transaction processing is done by legacy "batch" applications, which run offline and process accumulated transactions. Developers acknowledge the presence of multiple loosely coupled pieces of functionality within individual applications. Identifying such pieces of functionality (which we call "services") is desirable for the maintenance and evolution of these legacy applications. This is a hard problem, which enterprises grapple with, and one without satisfactory automated solutions. In this paper, we propose a novel static-analysis-based solution to the problem of identifying services within transaction-processing programs. We provide a formal characterization of services in terms of control-flow and data-flow properties, which is well-suited to the idioms commonly exhibited by business applications. Our technique combines program slicing with the detection of conditional code regions to identify services in accordance with our characterization. A preliminary evaluation, based on a manual analysis of three real business programs, indicates that our approach can be effective in identifying useful services from batch applications.

Relevance:

90.00%

Publisher:

Abstract:

Fast Decoupled Load Flow (FDLF) is a very popular and widely used power flow analysis method because of its simplicity and efficiency. Even though the basic FDLF algorithm is well investigated, the same is not true of the additional schemes/modifications required to obtain adjusted load flow solutions using the FDLF method. Handling generator Q limits is one such important feature needed in any practical load flow method. This paper presents a comprehensive investigation of two classes of schemes intended to handle this aspect, i.e., the bus-type switching scheme and the sensitivity scheme. We propose two new sensitivity-based schemes and assess their performance in comparison with the existing schemes. In addition, a new scheme to avoid the possibility of anomalous solutions encountered while using the conventional schemes is also proposed and evaluated. Results from extensive simulation studies are provided to highlight the strengths and weaknesses of these existing and proposed schemes, especially from the point of view of reliability.
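
The bus-type switching idea can be sketched in a few lines: after a load-flow iteration, a PV bus whose computed reactive output violates its limit is switched to a PQ bus with Q pinned at that limit. The bus records and numbers below are invented for illustration.

```python
# Toy sketch of generator Q-limit enforcement via bus-type switching.
def enforce_q_limits(buses):
    """buses: dicts with 'name', 'type', computed 'q', and 'qmin'/'qmax'."""
    switched = []
    for bus in buses:
        if bus["type"] != "PV":
            continue
        if bus["q"] > bus["qmax"]:
            bus["type"], bus["q"] = "PQ", bus["qmax"]   # pin Q at upper limit
            switched.append(bus["name"])
        elif bus["q"] < bus["qmin"]:
            bus["type"], bus["q"] = "PQ", bus["qmin"]   # pin Q at lower limit
            switched.append(bus["name"])
    return switched

buses = [
    {"name": "gen1", "type": "PV", "q": 1.4, "qmin": -0.5, "qmax": 1.0},
    {"name": "gen2", "type": "PV", "q": 0.3, "qmin": -0.5, "qmax": 1.0},
]
print(enforce_q_limits(buses))
```

In a full FDLF implementation this check would run between iterations, with switched buses eligible to revert if voltage conditions change.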

Relevance:

90.00%

Publisher:

Abstract:

Attempts to model any present or future power grid face a huge challenge because a power grid is a complex system, with feedback and multi-agent behaviors, comprising generation, distribution, storage and consumption systems and using various control and automation computing systems to manage electricity flows. Our approach to modeling is to build upon an established, tested and proven model of the low voltage electricity network, extending it to a generalized energy model. However, in order to address the crucial issues of energy efficiency, additional processes such as energy conversion and storage, and further energy carriers, such as gas and heat, besides the traditional electrical one, must be considered. Therefore a more powerful model is required, provided with enhanced nodes, or conversion points, able to deal with multidimensional flows. This article addresses the issue of modeling a local multi-carrier energy network. This problem can be considered an extension of modeling a low voltage distribution network located in an urban or rural geographic area. But instead of using an external power flow analysis package to do the power flow calculations, as is done for electric networks, in this work we integrate a multi-agent algorithm to perform the task, concurrently with the other simulation tasks, and not only for electricity but also for a number of additional energy carriers. As the model is mainly focused on system operation, generation and load models are not developed.
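
A conversion point of the kind described above can be sketched as a node that maps one carrier's inflow to outflows on other carriers. The combined heat-and-power example and its efficiencies below are invented for illustration, not taken from the article.

```python
# Sketch of an enhanced node / conversion point handling multidimensional
# flows: a CHP unit converting a gas inflow into electricity and heat.
def chp_node(gas_in, eta_el=0.35, eta_th=0.50):
    """Return (electricity_out, heat_out, losses) for a gas inflow."""
    electricity = eta_el * gas_in
    heat = eta_th * gas_in
    losses = gas_in - electricity - heat   # energy balance closes the node
    return electricity, heat, losses

el, heat, loss = chp_node(100.0)   # 100 kWh of gas in
print(el, heat, loss)
```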

Relevance:

90.00%

Publisher:

Abstract:

This paper focuses on the financial analysis involved in setting up small-scale fish farming on a homestead. About 0.5 acres of land was used for the construction of a pond, which was stocked with Clarias spp./Heterobranchus spp. and Tilapia spp. at a ratio of one to three for a period of 12 months. The land/land development cost was N26,500.00, pond construction cost N35,700.00, equipment cost N2,650.00 and stock/input requirement cost N155,727.00, while the revenue from sales was N376,000.00. A cash flow analysis was also calculated for the fish farm, giving N155,423.00 for the first-year cash flow, and profit/losses were calculated for a five-year production cycle, giving N1,036,515.00. An appreciable profit is realized from the enterprise. This type of enterprise is viable for small-scale farmers to practise and adopt, and can provide financial support for their families.
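
The first-year cash flow quoted above follows directly from the stated costs and revenue (figures in naira, as given in the abstract):

```python
# Reproducing the abstract's first-year cash flow from its stated figures.
land = 26_500.00          # land / land development
pond = 35_700.00          # pond construction
equipment = 2_650.00      # equipment
stock_inputs = 155_727.00 # stock / input requirements
revenue = 376_000.00      # revenue from sales

total_costs = land + pond + equipment + stock_inputs
first_year_cash_flow = revenue - total_costs
print(f"N{first_year_cash_flow:,.2f}")   # matches the abstract's N155,423.00
```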

Relevance:

90.00%

Publisher:

Abstract:

This paper deals with the experimental evaluation of a flow analysis system based on the integration of an under-resolved Navier-Stokes simulation with experimental measurements via a feedback mechanism (referred to as Measurement-Integrated, or MI, simulation), applied to the case of a planar turbulent co-flowing jet. The experiments are performed with an inner-to-outer-jet velocity ratio of around 2 and a Reynolds number, based on the inner-jet height, of about 10000. The measurement system is a high-speed PIV, which provides time-resolved data of the flow field on a field of view extending 20 jet heights downstream of the jet outlet. The experimental data can thus be used both to provide the feedback data for the simulations and to validate the MI-simulations over a wide region. The effect of reduced data rate and spatial extent of the feedback (i.e. measurements are not available at each simulation time step or discretization point) was investigated. At first, simulations were run with full information in order to obtain an upper limit on MI-simulation performance. The results show the potential of this methodology to reproduce first- and second-order statistics of the turbulent flow with good accuracy. Then, to deal with the reduced data, different feedback strategies were tested. It was found that for a small data-rate reduction the results are basically equivalent to the case of full-information feedback, but as the feedback data rate is reduced further the error increases and tends to be localized in regions of high turbulent activity. Moreover, the spatial distribution of the error looks qualitatively different for different feedback strategies. Feedback gain distributions calculated by optimal control theory are presented and proposed as a means of making it possible to perform MI-simulations based on localized measurements only. So far, however, we have not been able to achieve a low error between measurements and simulations using these gain distributions.
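
The feedback mechanism at the heart of MI-simulation can be caricatured on a scalar toy problem: a forcing term proportional to the measurement-simulation mismatch nudges a deliberately wrong model toward the "true" signal. The dynamics, gain and time step below are invented for illustration only.

```python
# Scalar toy of measurement-integrated simulation: explicit-Euler runs of a
# wrong model (pure decay), with and without feedback toward the true signal.
import math

dt, gain = 0.01, 5.0
sim_free = sim_mi = 0.0
err_free = err_mi = 0.0
for step in range(1, 1001):
    t = step * dt
    truth = math.sin(t)                      # stands in for PIV measurements
    sim_free += dt * (-sim_free)                           # no feedback
    sim_mi += dt * (-sim_mi + gain * (truth - sim_mi))     # feedback forcing
    err_free += abs(truth - sim_free)
    err_mi += abs(truth - sim_mi)
print("MI error lower:", err_mi < err_free)
```

Raising the gain tightens the tracking but, in a real flow solver, also amplifies measurement noise, which is why optimal-control-derived gain distributions are attractive.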