962 results for Hierarchical systems
Abstract:
An approach is presented for hierarchical control of an ammonia reactor, a key unit process in a nitrogen fertilizer complex. The aim of the control system is to ensure safe operation of the reactor around the optimal operating point in the face of process-variable disturbances and parameter variations. The four layers perform the functions of regulation, optimization, adaptation, and self-organization. The simulation for this proposed application is conducted on an AD511 hybrid computer, in which the AD5 analog processor represents the process and the PDP-11/35 digital computer implements the control laws. Simulation results relating to the different layers are presented.
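A hedged sketch of how four such layers can be nested at decreasing update rates. This is not the paper's implementation (whose control laws ran on the PDP-11/35); the toy first-order process model, gains, and update rates are illustrative assumptions:

```python
# Minimal sketch of a four-layer hierarchical controller: regulation runs
# every step, while optimization and adaptation run at progressively
# slower rates (self-organization, the fourth layer, is omitted here).
class HierarchicalController:
    def __init__(self, setpoint=1.0):
        self.setpoint = setpoint      # optimal operating point (set by layer 2)
        self.kp = 2.0                 # regulator gain (retuned by layer 3)

    def regulate(self, y):
        """Layer 1: proportional regulation around the current setpoint."""
        return self.kp * (self.setpoint - y)

    def optimize(self, profit_gradient):
        """Layer 2: nudge the setpoint along an estimated profit gradient."""
        self.setpoint += 0.01 * profit_gradient

    def adapt(self, model_error):
        """Layer 3: retune the regulator gain as process parameters drift."""
        self.kp *= 1.0 + 0.1 * model_error

y, ctl = 0.0, HierarchicalController()
for t in range(1000):
    u = ctl.regulate(y)                                   # every step
    y += 0.05 * (u - y)                                   # toy first-order process
    if t % 50 == 0:  ctl.optimize(profit_gradient=0.1)    # slower layer
    if t % 200 == 0: ctl.adapt(model_error=0.05)          # slowest layer
print(f"final output {y:.3f}, setpoint {ctl.setpoint:.3f}")
```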
Abstract:
Ramp metering (RM) is an access-control measure for motorways in which a traffic signal placed at an on-ramp regulates the rate of vehicles entering the motorway and thus preserves motorway capacity. In general, RM algorithms fall into two categories by their effective scope: local control and coordinated control. A local control algorithm determines the metering rate based on the traffic conditions on the adjacent motorway mainline and the on-ramp. Conversely, coordinated RM strategies use measurements from the entire motorway network to operate individual ramp signals for optimal performance at the network level. This study proposes a multi-hierarchical strategy for on-ramp coordination, structured in two layers. At the higher layer, a centralised predictive controller plans the coordination control within a long update interval based on the location of high-risk breakdown flow. At the lower layer, reactive controllers determine the metering rates of the ramps involved in the coordination within a short update interval. The strategy is modelled and applied to the northbound model of the Pacific Motorway in a micro-simulation platform (AIMSUN). The simulation results show that the proposed strategy effectively delays the onset of congestion and reduces total congestion with better-managed on-ramp queues.
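A hedged sketch of the two-layer structure described above, not the authors' controllers. The lower layer uses an ALINEA-style occupancy feedback law (a well-known local RM rule) as a stand-in for the paper's reactive controllers; the upper layer simply revises each ramp's target occupancy on a long interval. All parameter values are assumed:

```python
def alinea(rate_prev, occ_measured, occ_target, K_R=70.0,
           r_min=200.0, r_max=1800.0):
    """Reactive lower layer: adjust the metering rate (veh/h) each short cycle."""
    rate = rate_prev + K_R * (occ_target - occ_measured)
    return max(r_min, min(r_max, rate))

def coordinate(bottleneck_risk, base_target=0.18):
    """Predictive upper layer (long interval): tighten occupancy targets
    upstream of the predicted high-risk breakdown location."""
    return [base_target - 0.03 * risk for risk in bottleneck_risk]

targets = coordinate([0.2, 0.8, 0.5])           # one target per coordinated ramp
rates = [900.0, 900.0, 900.0]                   # current metering rates
occ = [0.15, 0.22, 0.19]                        # measured mainline occupancies
rates = [alinea(r, o, t) for r, o, t in zip(rates, occ, targets)]
print(rates)
```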
Abstract:
A learning automaton operating in a random environment updates its action probabilities on the basis of the reactions of the environment, so that asymptotically it chooses the optimal action. When the number of actions is large, the automaton becomes slow because too many updates must be made at each instant. A hierarchical system of such automata with assured ε-optimality is suggested to overcome this problem. The learning algorithm for the hierarchical system turns out to be a simple modification of the absolutely expedient algorithm known in the literature. The parameters of the algorithm at each level in the hierarchy depend only on the parameters and the action probabilities of the previous level. It follows that, to minimize the number of updates per cycle, each automaton in the hierarchy need only have two or three actions.
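An illustrative sketch of a two-level hierarchy of learning automata. Each automaton keeps only a few actions, so one cycle updates just the small automata along the chosen path instead of one large probability vector. The linear reward-inaction update below is a stand-in for the paper's modified absolutely expedient scheme; the learning rate and environment probabilities are assumed:

```python
import random

def choose(p):
    """Sample an action index from probability vector p."""
    return random.choices(range(len(p)), weights=p)[0]

def reward_inaction(p, a, beta, lr=0.05):
    """L_R-I update: move probability toward action a only on reward (beta=1)."""
    if beta == 1:
        p = [pi + lr * ((1.0 if i == a else 0.0) - pi) for i, pi in enumerate(p)]
    return p

top = [0.5, 0.5]                       # root automaton: 2 actions
leaves = [[0.5, 0.5], [0.5, 0.5]]      # one small automaton per branch
success = [[0.2, 0.4], [0.3, 0.9]]     # unknown environment reward probabilities

for _ in range(5000):
    i = choose(top)                    # path through the hierarchy ...
    j = choose(leaves[i])
    beta = 1 if random.random() < success[i][j] else 0
    top = reward_inaction(top, i, beta)            # ... and only that path
    leaves[i] = reward_inaction(leaves[i], j, beta)
print(top, leaves)                     # mass concentrates on branch 1, leaf 1
```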
Abstract:
This research examined the implementation of clinical information system technology in a large Saudi Arabian health care organisation. The research was underpinned by symbolic interactionism, and grounded theory methods informed data collection and analysis. Observations, a review of policy documents, and 38 interviews with registered nurses produced in-depth data. Analysis generated three abstracted concepts that explained how the imported technology increased practice and health care complexity rather than enhancing the quality of patient care. The core category, Disseminating Change, also depicted a hierarchical and patriarchal culture that shaped the implementation process at the levels of government, organisation, and the individual.
Abstract:
An algorithm is described for developing a hierarchy among a set of elements having certain precedence relations. This algorithm, which is based on tracing a path through the graph, is easily implemented by a computer.
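A minimal sketch of one common way to realize this idea (assumed here, not necessarily the paper's exact algorithm): trace paths through the precedence graph and place each element at a level one above its deepest predecessor. The example relation is hypothetical:

```python
from functools import lru_cache

precedes = {            # a -> b means "a must precede b" (illustrative data)
    "A": ["B", "C"], "B": ["D"], "C": ["D"], "D": [],
}

@lru_cache(maxsize=None)
def level(x):
    """Level of x = length of the longest precedence path ending at x,
    found by tracing paths back through the graph."""
    preds = [u for u, vs in precedes.items() if x in vs]
    return 0 if not preds else 1 + max(level(u) for u in preds)

hierarchy = {}
for node in precedes:
    hierarchy.setdefault(level(node), []).append(node)
print(hierarchy)   # {0: ['A'], 1: ['B', 'C'], 2: ['D']}
```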
Abstract:
Network Interfaces (NIs) are used in Multiprocessor System-on-Chips (MPSoCs) to connect CPUs to a packet-switched Network-on-Chip. In this work we introduce a new NI architecture for our hierarchical CoreVA-MPSoC. The CoreVA-MPSoC targets streaming applications in embedded systems. The main contribution of this paper is a system-level analysis of different NI configurations, considering both software and hardware costs for NoC communication. Different configurations of the NI are compared using a benchmark suite of 10 streaming applications. The best-performing NI configuration shows an average speedup of 20 for a CoreVA-MPSoC with 32 CPUs compared to a single CPU. Furthermore, we present physical implementation results using a 28 nm FD-SOI standard cell technology. A hierarchical MPSoC with 8 CPU clusters and 4 CPUs in each cluster running at 800 MHz requires an area of 4.56 mm².
Abstract:
In this paper, the control aspects of a hierarchical organization under the influence of "proportionality" policies are analyzed. Proportionality policies are those that restrict the recruitment into every level of the hierarchy (except the bottom-most, or base, level) to be in strict proportion to the promotions into that level. Both long-term and short-term control analyses are discussed. In long-term control, the specific roles of the system's parameters in controlling the shape and size of the system are analyzed, yielding suitable control strategies. In short-term control, the attainability of a target or goal structure within a specific time from a given initial structure is analyzed, yielding the required recruitment strategies. The theoretical analyses are illustrated with computational examples and with real-world data. The control of such proportionality systems is then compared with that of general systems (which do not follow such policies), with some significant conclusions. The control relations of proportionality systems are found to be simpler and more practically feasible than those of general Markov systems, which do not have such restrictions. Proportionality systems thus not only retain the flexibility of general Markov systems but also have the added advantage of simpler and more practically feasible controls; the proportionality policies hence act as an alternative and more practicable means of control. © 2004 Elsevier Inc. All rights reserved.
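A hedged numerical sketch of a graded manpower system under such a proportionality policy: recruitment into every non-base grade is a fixed fraction of the promotions flowing into that grade, and the base grade absorbs the remaining recruitment needed to hold total size. The transition matrix, wastage, and proportions are illustrative assumptions, not the paper's data:

```python
import numpy as np

P = np.array([[0.80, 0.10, 0.00],     # stay / promote probabilities per grade
              [0.00, 0.75, 0.10],
              [0.00, 0.00, 0.85]])    # row sums < 1: the remainder is wastage
alpha = np.array([0.0, 0.3, 0.3])     # recruits per promotion into grades 2, 3
n = np.array([100.0, 50.0, 20.0])     # initial grade structure

for _ in range(20):
    flows = n @ P                                  # internal flows after one step
    promotions = flows - n * P.diagonal()          # inflow promoted from below
    recruits = alpha * promotions                  # proportionality policy
    total = n.sum()
    recruits[0] += total - (flows + recruits).sum()  # base grade tops up size
    n = flows + recruits
print(np.round(n, 1))                              # limiting grade structure
```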
Abstract:
A simple ball-drop impact tester is developed for studying the dynamic response of hierarchical, complex, small-sized systems and materials. The developed algorithm and set-up have provisions for applying a programmable potential difference along the height of a test specimen during an impact loading; this enables us to conduct experiments on various materials and smart structures whose mechanical behavior is sensitive to electric field. The software-hardware system allows not only acquisition of dynamic force-time data at a very fast sampling rate (up to 2 × 10⁶ samples/s), but also application of a pre-set potential difference (up to ±10 V) across a test specimen for a duration determined by feedback from the force-time data. We illustrate the functioning of the set-up by studying the effect of electric field on the energy absorption capability of carbon nanotube foams of 5 × 5 × 1.2 mm³ size under impact conditions. © 2014 AIP Publishing LLC.
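A conceptual sketch of the feedback-triggered biasing described above. No real DAQ API is used; the synthetic force trace, threshold, and hold duration are assumptions:

```python
def force_samples():
    """Synthetic force-time trace standing in for the DAQ stream:
    quiet baseline, then an impact pulse."""
    for t in range(10000):
        yield t, (50.0 if 4000 <= t < 4200 else 0.05)

V_BIAS, THRESHOLD, HOLD = 10.0, 1.0, 500   # volts, newtons, samples
voltage, hold_left = 0.0, 0
for t, f in force_samples():
    if voltage == 0.0 and f > THRESHOLD:   # impact detected in force data
        voltage, hold_left = V_BIAS, HOLD  # apply the pre-set potential
        print(f"trigger at sample {t}: applying {voltage} V")
    elif voltage > 0.0:
        hold_left -= 1
        if hold_left == 0:                 # hold window elapsed
            voltage = 0.0
            print(f"bias released at sample {t}")
```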
Abstract:
Prediction of the queue waiting times of jobs submitted to production parallel batch systems is important for providing overall estimates to users and can also help meta-schedulers make scheduling decisions. In this work, we have developed a framework for predicting ranges of queue waiting times by multi-class classification over similar jobs in history. Our hierarchical prediction strategy first predicts the point wait time of a job using a dynamic k-Nearest Neighbor (k-NN) method. It then performs multi-class classification using Support Vector Machines (SVMs) among all the job classes. The probabilities given by the SVM for the class predicted by k-NN and its neighboring classes are used to provide a set of ranges of predicted wait times with associated probabilities. We have used these predictions and probabilities in a meta-scheduling strategy that distributes jobs to different queues/sites in a multi-queue/grid environment to minimize job wait times. Experiments with different production supercomputer job traces show that our prediction strategies give correct predictions for about 77–87% of the jobs, and also yield about 12% better accuracy than the next best existing method. Experiments with our meta-scheduling strategy, using different production and synthetic job traces for various system sizes, partitioning schemes, and workloads, show that it gives much better performance than existing scheduling policies, reducing the overall average queue waiting time of the jobs by about 47%.
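An illustrative sketch of the two-stage strategy using scikit-learn as a stand-in for the authors' framework. The synthetic features, wait-time model, and quantile-based class windows are assumptions:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((500, 3))                                 # job features (size, cores, load)
wait = 3600 * X.sum(axis=1) + rng.normal(0, 300, 500)    # synthetic wait times (s)
bins = np.quantile(wait, [0.25, 0.5, 0.75])
y_class = np.digitize(wait, bins)                        # 4 wait-time range classes

knn = KNeighborsRegressor(n_neighbors=5).fit(X, wait)    # stage 1 model
svm = SVC(probability=True).fit(X, y_class)              # stage 2 model

job = rng.random((1, 3))                                 # a new job
point = knn.predict(job)[0]                              # stage 1: point estimate
proba = svm.predict_proba(job)[0]                        # stage 2: class probabilities
k = int(np.digitize(point, bins))                        # class of the point estimate
for c in (k - 1, k, k + 1):                              # report neighbouring ranges
    if 0 <= c < len(proba):
        print(f"wait-time class {c}: P = {proba[c]:.2f}")
```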
Abstract:
We show that a film of a suspension of polymer grafted nanoparticles on a liquid substrate can be employed to create two-dimensional nanostructures with a remarkable variation in the pattern length scales. The presented experiments also reveal the emergence of concentration-dependent bimodal patterns as well as re-entrant behaviour that involves length scales due to dewetting and compositional instabilities. The experimental observations are explained through a gradient dynamics model consisting of coupled evolution equations for the height of the suspension film and the concentration of polymer. Using a Flory-Huggins free energy functional for the polymer solution, we show in a linear stability analysis that the thin film undergoes dewetting and/or compositional instabilities depending on the concentration of the polymer in the solution. We argue that the formation via 'hierarchical self-assembly' of various functional nanostructures observed in different systems can be explained as resulting from such an interplay of instabilities.
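For context, the standard Flory-Huggins free energy of mixing per lattice site, in units of k_BT, where φ is the polymer volume fraction, N the chain length, and χ the interaction parameter. The paper's functional presumably augments a form like this with gradient and wetting terms:

```latex
f(\phi) = \frac{\phi}{N}\ln\phi + (1-\phi)\ln(1-\phi) + \chi\,\phi(1-\phi)
```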
Abstract:
The dissertation studies the general area of complex networked systems that consist of interconnected, active, heterogeneous components and usually operate in uncertain environments with incomplete information. Problems associated with those systems are typically large-scale and computationally intractable, yet they are also very well-structured and have features that can be exploited by appropriate modeling and computational methods. The goal of this thesis is to develop foundational theories and tools to exploit those structures, leading to computationally efficient and distributed solutions, and to apply them to improve systems operations and architecture.
Specifically, the thesis focuses on two concrete areas. The first is to design distributed rules to manage distributed energy resources (DERs) in the power network. The power network is undergoing a fundamental transformation. The future smart grid, especially at the distribution level, will be a large-scale network of DERs, each introducing random and rapid fluctuations in power supply, demand, voltage, and frequency. These DERs provide a tremendous opportunity for sustainability, efficiency, and power reliability. However, there are daunting technical challenges in managing these DERs and optimizing their operation. The focus of this dissertation is to develop scalable, distributed, real-time control and optimization to achieve system-wide efficiency, reliability, and robustness for the future power grid. In particular, we will present how to exploit the power network structure to design efficient, distributed markets and algorithms for energy management. We will also show how to connect the algorithms with physical dynamics and existing control mechanisms for real-time control in power networks.
The second focus is to develop distributed optimization rules for general multi-agent engineering systems. A central goal in multi-agent systems is to design local control laws for the individual agents that ensure the emergent global behavior is desirable with respect to the given system-level objective. Ideally, a system designer seeks to satisfy this goal while conditioning each agent's control on the least amount of information possible. Our work focused on achieving this goal using the framework of game theory. In particular, we derived a systematic methodology for designing local agent objective functions that guarantees (i) an equivalence between the resulting game-theoretic equilibria and the system-level design objective and (ii) that the resulting game possesses an inherent structure that can be exploited for distributed learning, e.g., potential games. The control design can then be completed by applying any distributed learning algorithm that guarantees convergence to the game-theoretic equilibrium. One main advantage of this game-theoretic approach is that it provides a hierarchical decomposition between the specification of the system objective (game design) and the specific local decision rules (distributed learning algorithms). This decomposition gives the system designer tremendous flexibility to meet the design objectives and constraints inherent in a broad class of multi-agent systems. Furthermore, in many settings the resulting controllers are inherently robust to a host of uncertainties, including asynchronous clock rates, delays in information, and component failures.
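A hedged toy instance of the recipe above: give each agent a marginal-contribution ("wonderful life") utility, which makes the game a potential game whose potential is the system objective, then run a simple best-response learning rule. The coverage objective and values are illustrative assumptions:

```python
import random

resources = ["r1", "r2", "r3"]
values = {"r1": 3.0, "r2": 2.0, "r3": 1.0}
agents = [0, 1, 2]

def welfare(choice):
    """System-level objective: total value of covered resources."""
    return sum(values[r] for r in set(choice))

def utility(i, choice):
    """Agent i's marginal contribution to welfare (wonderful-life utility)."""
    others = set(choice[:i] + choice[i + 1:])
    return welfare(choice) - sum(values[r] for r in others)

random.seed(0)
choice = tuple(random.choice(resources) for _ in agents)
for _ in range(50):                      # asynchronous best-response learning
    i = random.randrange(len(agents))
    best = max(resources,
               key=lambda r: utility(i, choice[:i] + (r,) + choice[i + 1:]))
    choice = choice[:i] + (best,) + choice[i + 1:]
print(choice, welfare(choice))           # full coverage yields welfare 6.0
```

Because each agent's utility equals its marginal effect on the welfare, every unilateral improvement raises the welfare itself, so best-response dynamics climb the potential and settle at an equilibrium aligned with the system objective.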
Abstract:
Despite the complexity of biological networks, we find that certain common architectures govern network structure. These architectures impose fundamental constraints on system performance and create tradeoffs that the system must balance in the face of environmental uncertainty. This means that while a system may be optimized for a specific function through evolution, the optimal achievable state must obey these constraints. One such constraining architecture is autocatalysis, as seen in many biological networks including glycolysis and ribosomal protein synthesis. Using a minimal model, we show that ATP autocatalysis in glycolysis imposes stability and performance constraints and that the experimentally well-studied glycolytic oscillations are in fact a consequence of a tradeoff between error minimization and stability. We also show that additional complexity in the network results in increased robustness. Ribosome synthesis is also autocatalytic, since ribosomes must be used to make more ribosomal proteins; when ribosomes have higher protein content, the autocatalysis is stronger. We show that this autocatalysis destabilizes the system, slows its response, and constrains its performance. On a larger scale, the transcriptional regulation of whole organisms also follows architectural constraints, as can be seen in the differences between bacterial and yeast transcription networks. We show that the degree distribution of the bacterial transcription network follows a power law while that of the yeast network follows an exponential distribution. We then explore the evolutionary models that have previously been proposed and show that neither the preferential-linking model nor the duplication-divergence model of network evolution generates the power-law, hierarchical structure found in bacteria. However, in real biological systems, new nodes arise through both duplication and horizontal gene transfer, and we show that a biologically reasonable combination of the two mechanisms generates the desired network.
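An illustrative growth model (an assumption, not the authors' exact model) mixing the two mechanisms discussed above: with probability p_dup a new node copies a random existing node's links and keeps each with some probability (duplication-divergence), otherwise it attaches to random targets (a crude proxy for horizontal gene transfer). The degree distribution of the result can then be inspected:

```python
import random
from collections import Counter

def grow(n, p_dup=0.6, keep=0.5, m=2, seed=1):
    """Grow an undirected network by duplication-divergence mixed with
    random attachment; all parameter values are assumptions."""
    random.seed(seed)
    adj = {0: {1}, 1: {0}}
    for v in range(2, n):
        if random.random() < p_dup:                 # duplicate and diverge
            proto = random.randrange(v)
            links = {u for u in adj[proto] if random.random() < keep}
        else:                                       # random attachment
            links = set(random.sample(range(v), min(m, v)))
        links.add(random.randrange(v))              # avoid isolated nodes
        adj[v] = links
        for u in links:
            adj[u].add(v)
    return adj

adj = grow(2000)
degs = Counter(len(nbrs) for nbrs in adj.values())
for d in sorted(degs)[:10]:
    print(d, degs[d])                               # inspect the tail shape
```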