976 results for HIERARCHICAL CONTROL
Abstract:
PLANNER is a formalism for proving theorems and manipulating models in a robot. The formalism is built out of a number of problem-solving primitives together with a hierarchical multiprocess backtrack control structure. Statements can be asserted and perhaps later withdrawn as the state of the world changes. Under the backtrack control structure, the hierarchy of activations of functions previously executed is maintained so that it is possible to revert to any previous state. Thus programs can easily manipulate elaborate hypothetical tentative states. In addition, PLANNER uses multiprocessing so that there can be multiple loci of changes in state. Goals can be established and dismissed when they are satisfied. The deductive system of PLANNER is subordinate to the hierarchical control structure in order to maintain the desired degree of control. The use of a general-purpose matching language as the basis of the deductive system increases the flexibility of the system. Instead of explicitly naming procedures in calls, procedures can be invoked implicitly by patterns of what the procedure is supposed to accomplish. The language is being applied to solve problems faced by a robot, to write special-purpose routines from goal-oriented language, to express and prove properties of procedures, to abstract procedures from protocols of their actions, and as a semantic base for English.
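To make the control structure concrete, the following Python sketch (all names and the toy matcher are assumptions, not PLANNER syntax) illustrates the two ideas the abstract stresses: procedures invoked implicitly by the pattern of the goal they accomplish, and a backtrack trail that withdraws tentative assertions when a line of reasoning fails.

```python
# Minimal sketch of pattern-directed invocation with chronological backtracking,
# loosely in the spirit of PLANNER; names and structure are illustrative only.

database = set()           # current assertions about the world
theorems = []              # (goal_pattern, body) pairs, invoked by pattern

def assert_fact(fact, trail):
    """Add a fact and record it so backtracking can withdraw it."""
    if fact not in database:
        database.add(fact)
        trail.append(fact)

def undo(trail, mark):
    """Revert the database to a previous state (hierarchical backtrack)."""
    while len(trail) > mark:
        database.discard(trail.pop())

def achieve(goal, trail):
    """Try to satisfy a goal: directly, or via any theorem whose pattern matches."""
    if goal in database:
        return True
    for pattern, body in theorems:
        if pattern == goal:                  # a real matcher would unify variables
            mark = len(trail)
            if body(trail):                  # body may assert facts and post subgoals
                return True
            undo(trail, mark)                # failure: withdraw tentative assertions
    return False

# Example: a theorem invoked implicitly by the goal it accomplishes.
def clear_block_a(trail):
    if achieve(("on", "B", "table"), trail):
        assert_fact(("clear", "A"), trail)
        return True
    return False

theorems.append((("clear", "A"), clear_block_a))
database.add(("on", "B", "table"))

trail = []
print(achieve(("clear", "A"), trail))   # True; ("clear", "A") is now asserted
```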
Abstract:
Nowadays, communication environments are already characterized by a myriad of competing and complementary technologies that aim to provide a ubiquitous connectivity service. Next Generation Networks need to hide this heterogeneity by providing a new abstraction level, while simultaneously being aware of the underlying technologies in order to deliver richer service experiences to the end-user. Moreover, the increasing interest in group-based multimedia services, together with their ever-growing resource demands and network dynamics, has been boosting research towards more scalable and flexible network control approaches. The work developed in this Thesis enables such abstraction and exploits the prevailing heterogeneity in favor of context-aware network management and adaptation. In this scope, we introduce a novel hierarchical control framework with self-management capabilities that enables the concept of Abstract Multiparty Trees (AMTs) to ease the control of multiparty content distribution throughout heterogeneous networks. A thorough evaluation of the proposed multiparty transport control framework was performed in the scope of this Thesis, assessing its benefits in terms of network selection, delivery tree reconfiguration and resource savings. Moreover, we developed an analytical study to highlight the scalability of the AMT concept as well as its flexibility in large-scale networks and group sizes. To prove the feasibility and ease of deployment of the proposed control framework, we implemented a proof-of-concept demonstrator that comprises the main control procedures conceptually introduced. Its outcomes highlight the good performance of the multiparty content distribution tree control, including its local and global reconfiguration. In order to endow the AMT concept with the ability to guarantee the best service experience for the end-user, we integrate into the control framework two additional QoE enhancement approaches. The first employs the concept of Network Coding to improve the robustness of the multiparty content delivery, aiming at mitigating the impact of possible packet losses on the end-user's service perception. The second approach relies on a machine learning scheme to autonomously determine at each node the expected QoE towards a certain destination. This knowledge is then used by different QoE-aware network management schemes that, jointly, maximize the overall users' QoE. The performance and scalability of the control procedures developed, aided by the context- and QoE-aware mechanisms, show the advantages of the AMT concept and the proposed hierarchical control strategy for multiparty content distribution with enhanced service experience. Moreover, we also prove the feasibility of the solution in a practical environment, and provide future research directions that benefit the evolved control framework and make it commercially feasible.
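The abstract does not spell out the AMT data structure, so the Python sketch below is only an assumed illustration of the general idea: a hierarchical multiparty delivery tree whose nodes abstract the underlying access technology and can be re-parented locally (local reconfiguration) without rebuilding the whole tree.

```python
# Illustrative sketch of an abstract multiparty delivery tree with local
# reconfiguration; the AMT internals are not specified in the abstract,
# so every field and method here is an assumption.

class AMTNode:
    def __init__(self, name, access_tech=None):
        self.name = name
        self.access_tech = access_tech     # abstraction over heterogeneous networks
        self.parent = None
        self.children = []

    def attach(self, child):
        child.parent = self
        self.children.append(child)

    def reparent(self, new_parent):
        """Local reconfiguration: move this subtree under a better branch."""
        if self.parent is not None:
            self.parent.children.remove(self)
        new_parent.attach(self)

def deliver(root, payload, depth=0):
    """Walk the tree as a stand-in for multiparty content distribution."""
    print("  " * depth + f"{root.name} ({root.access_tech}) <- {payload}")
    for child in root.children:
        deliver(child, payload, depth + 1)

source = AMTNode("source", "core")
gw_a = AMTNode("gateway-A", "LTE"); source.attach(gw_a)
gw_b = AMTNode("gateway-B", "WiFi"); source.attach(gw_b)
user = AMTNode("user-1", "WiFi"); gw_a.attach(user)

deliver(source, "frame-0")
user.reparent(gw_b)          # e.g. context-aware network selection picked WiFi
deliver(source, "frame-1")
```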
Abstract:
A hierarchical fuzzy control scheme is applied to improve vibration suppression using an electro-mechanical system based on the lever principle. The hierarchical intelligent controller consists of a hierarchical fuzzy supervisor, one fuzzy controller and one robust controller. The supervisor combines the controllers' output signals to generate the control signal that is applied to the plant. The objective is to improve the performance of the electromechanical system, since the supervisor can take advantage of controllers based on different techniques. The robust controller design is based on a linear mathematical model. Genetic algorithms are used to tune the fuzzy controller and the supervisor, both of which are based on a non-linear mathematical model. To attest to the efficiency of the hierarchical fuzzy control scheme, digital simulations were employed. Comparisons between the optimized and non-optimized hierarchical controllers are made to demonstrate the efficiency of the genetic algorithms and the advantages of their use.
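As a rough illustration of the supervisory idea (not the controller of the paper), the Python sketch below blends the outputs of a toy fuzzy controller and a simple state-feedback stand-in for the robust controller; the membership functions, gains and blending rule are assumptions, and in the paper these elements are tuned by genetic algorithms.

```python
# Minimal sketch of the supervisor idea: blend a fuzzy controller and a robust
# (here, simple state-feedback) controller into one control signal. Membership
# functions, gains and the blending rule are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_controller(error, d_error):
    """Toy Mamdani-style controller: two rules, weighted-average defuzzification."""
    w_neg = tri(error, -1.0, -0.5, 0.0)
    w_pos = tri(error, 0.0, 0.5, 1.0)
    u_neg, u_pos = +0.8, -0.8                  # rule consequents (push error back to zero)
    total = w_neg + w_pos
    return (w_neg * u_neg + w_pos * u_pos) / total if total else 0.0

def robust_controller(error, d_error, k1=2.0, k2=0.5):
    """Controller designed on the linear model (state feedback as a stand-in)."""
    return -(k1 * error + k2 * d_error)

def supervisor(error, d_error):
    """Hierarchical supervisor: weight each controller by how large the error is."""
    alpha = min(abs(error), 1.0)               # large error -> trust the fuzzy controller more
    u_f = fuzzy_controller(error, d_error)
    u_r = robust_controller(error, d_error)
    return alpha * u_f + (1.0 - alpha) * u_r

print(supervisor(0.3, -0.1))
```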
Abstract:
This master's dissertation presents the design and construction of a terrestrial mobile robot named LOGBOT, with differential drive: two driven wheels and one free wheel to keep its structure stable relative to the surface. The robot can be controlled in telemetry and autonomous modes. In the telemetry control mode (ROV), the robot communicates with the control station by radio frequency at distances of up to one kilometre outdoors and up to one hundred metres indoors. In the autonomous control mode (AGV), the robot is able to navigate unknown indoor environments, always using the wall on its left as the reference for its trajectory. The sequence of movements executed along the trajectory is sent to the control station, which analyses the robot's performance. To carry out its tasks in autonomous mode, the robot's software relies on a reactive intelligent agent that detects features of the environment (obstacles, ends of walls, etc.) and decides which action the robot should take in order to avoid obstacles and control the speed of its wheels. The problems of odometric error and its correction based on external sensory information are duly addressed. Techniques for hierarchical control of the robot as a whole and for closed-loop control of the robot's wheel speeds are used. The results showed that the LOGBOT mobile robot is able to navigate corridor-shaped indoor environments with stability and precision (wall following).
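A minimal Python sketch of the left-wall-following decision rule is given below; the sensor layout, thresholds, gains and wheel-speed interface are assumptions, since the abstract does not describe the LOGBOT software in that detail.

```python
# Illustrative sketch of a left-wall-following reactive rule with a proportional
# correction of the wheel speeds for a differential-drive base; the sensor
# layout, thresholds and gains are assumptions, not the LOGBOT implementation.

TARGET_DIST = 0.30    # desired distance to the left wall, metres
BASE_SPEED = 0.20     # nominal wheel speed, m/s
KP = 0.8              # proportional gain on the distance error

def wall_following_step(left_dist, front_dist):
    """Return (left_wheel_speed, right_wheel_speed) for one control step."""
    if front_dist < 0.25:                 # obstacle or wall end ahead: spin right in place
        return BASE_SPEED, -BASE_SPEED
    error = TARGET_DIST - left_dist       # positive when too close to the wall
    correction = KP * error
    # too close (positive error): left wheel faster, steer away from the wall;
    # too far (negative error): right wheel faster, steer back towards it
    return BASE_SPEED + correction, BASE_SPEED - correction

print(wall_following_step(left_dist=0.40, front_dist=1.2))
```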
Abstract:
A prevalent claim is that we are in a knowledge economy. When we talk about the knowledge economy, we generally mean the concept of a "knowledge-based economy", indicating the use of knowledge and technologies to produce economic benefits. Hence knowledge is both the tool and the raw material (people's skills) for producing some kind of product or service. In this kind of environment economic organization is undergoing several changes. For example, authority relations are less important, legal and ownership-based definitions of the boundaries of the firm are becoming irrelevant, and there are few constraints on the set of coordination mechanisms. Hence what characterises a knowledge economy is the growing importance of human capital in productive processes (Foss, 2005) and the increasing knowledge intensity of jobs (Hodgson, 1999). Economic processes are also highly intertwined with social processes: they are likely to be informal and reciprocal rather than formal and negotiated. Another important point is the problem of the division of labor: as economic activity becomes mainly intellectual and requires the integration of specific and idiosyncratic skills, the task of dividing the job and assigning it to the most appropriate individuals becomes arduous, a "supervisory problem" (Hodgson, 1999) emerges, and traditional hierarchical control may prove increasingly ineffective. Not only does the specificity of know-how make it awkward to monitor the execution of tasks; more importantly, top-down integration of skills may be difficult because 'the nominal supervisors will not know the best way of doing the job – or even the precise purpose of the specialist job itself – and the worker will know better' (Hodgson, 1999). We therefore expect that the organization of the economic activity of specialists should be, at least partially, self-organized. The aim of this thesis is to bridge studies from computer science, and in particular from Peer-to-Peer (P2P) networks, to organization theories. We think that the P2P paradigm fits well with organizational problems in all those situations in which a central authority is not possible. We believe that P2P networks show a number of characteristics similar to firms working in a knowledge-based economy, and hence that the methodology used for studying P2P networks can be applied to organization studies. There are three main characteristics we think P2P networks have in common with firms involved in the knowledge economy: decentralization (in a pure P2P system every peer is an equal participant, and there is no central authority governing the actions of individual peers); cost of ownership (P2P computing implies shared ownership, reducing the cost of owning the systems and the content, and the cost of maintaining them); and self-organization (the process in a system leading to the emergence of global order within the system without the presence of another system dictating this order). These characteristics are also present in the kind of firm that we try to address, and that is why we have shifted the techniques we adopted for studies in computer science (Marcozzi et al., 2005; Hales et al., 2007 [39]) to management science.
Abstract:
Traffic control at road junctions is one of the major concerns in most metropolitan cities. Controllers of various approaches are available, and the required control action is the effective green-time assigned to each traffic stream within a traffic-light cycle. The application of fuzzy logic provides the controller with the capability to handle the uncertain nature of the system, such as drivers' behaviour and random arrivals of vehicles. When turning traffic is allowed at the junction, the number of phases in the traffic-light cycle increases. The additional input variables inevitably complicate the controller and hence slow down the decision-making process, which is critical in this real-time control problem. In this paper, a hierarchical fuzzy logic controller is proposed to tackle this traffic control problem at a 2-way road junction with turning traffic. The two levels of fuzzy logic controllers respectively devise the minimum effective green-time and fine-tune it at each phase of a traffic-light cycle. The complexity of the controller at each level is reduced with a smaller rule-set. The performance of this hierarchical controller is examined by comparison with a fixed-time controller under various traffic conditions. Substantial delay reduction has been achieved as a result, and the performance and limitations of the controller will be discussed.
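The two-level structure can be illustrated with the Python sketch below, in which a first fuzzy stage maps queue length to a minimum green time and a second stage fine-tunes it from the arrival rate; the membership functions and rule consequents are assumptions, not the rule base of the paper.

```python
# Sketch of the two-level idea only: a first fuzzy stage proposes a minimum
# effective green time from the queue length, and a second stage fine-tunes it
# from the arrival rate. Membership functions, ranges and rule bases are
# illustrative assumptions.

def tri(x, a, b, c):
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def level1_min_green(queue):
    """Fuzzy level 1: queue length (vehicles) -> minimum green time (s)."""
    w_short = tri(queue, -1, 0, 10)
    w_long = tri(queue, 5, 15, 30)
    total = w_short + w_long
    return (w_short * 10 + w_long * 40) / total if total else 10.0

def level2_fine_tune(min_green, arrival_rate):
    """Fuzzy level 2: arrival rate (veh/s) -> additive adjustment to the green time."""
    w_low = tri(arrival_rate, -0.1, 0.0, 0.3)
    w_high = tri(arrival_rate, 0.1, 0.4, 0.8)
    total = w_low + w_high
    adjust = (w_low * -5 + w_high * 10) / total if total else 0.0
    return max(5.0, min_green + adjust)

green = level2_fine_tune(level1_min_green(queue=12), arrival_rate=0.25)
print(round(green, 1), "seconds of effective green time")
```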
Abstract:
Traffic control at a road junction by a complex fuzzy logic controller is investigated. An increase in the complexity of the junction means that more input variables must be taken into account, which increases the number of fuzzy rules in the system. A hierarchical fuzzy logic controller is introduced to reduce the number of rules. Moreover, the increased complexity of the controller makes formulating the fuzzy rules difficult. A genetic-algorithm-based off-line learning algorithm is therefore employed to generate the fuzzy rules. The learning algorithm uses constant flow-rates as training sets. The system is tested with both constant and time-varying flow-rates. Simulation results show that the proposed controller produces lower average delay than a fixed-time controller under various traffic conditions.
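The off-line learning step can be pictured with the Python sketch below, which evolves the consequents of a small fuzzy rule base against a surrogate delay measure at a constant flow rate; the encoding, fitness function and GA settings are assumptions rather than the algorithm reported in the paper.

```python
# Sketch of the off-line GA idea: encode the consequents of a small fuzzy rule
# base as a chromosome and evolve them against a surrogate cost computed at a
# constant flow rate. The fitness function, encoding and GA settings are
# illustrative assumptions only.

import random

random.seed(0)
N_RULES = 4                     # one consequent (green time, s) per rule

def surrogate_delay(consequents, flow_rate=0.3):
    """Toy stand-in for the simulated average delay at a constant flow rate."""
    # pretend rule i fires with weight proportional to (i + 1) at this flow rate
    green = sum((i + 1) * g for i, g in enumerate(consequents)) / 10.0
    demand = flow_rate * 120.0            # vehicles arriving over a 120 s horizon
    return abs(demand - green) + 0.05 * green   # under- or over-allocation is penalised

def evolve(pop_size=30, generations=40):
    pop = [[random.uniform(5, 60) for _ in range(N_RULES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=surrogate_delay)
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            child = [(x + y) / 2 + random.gauss(0, 1) for x, y in zip(a, b)]
            children.append([min(60, max(5, g)) for g in child])
        pop = survivors + children
    return min(pop, key=surrogate_delay)

best = evolve()
print([round(g, 1) for g in best], "->", round(surrogate_delay(best), 2))
```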
Abstract:
The paper investigates a detailed Active Shock Control Bump (SCB) design optimisation on a Natural Laminar Flow (NLF) aerofoil, the RAE 5243, to reduce cruise drag at transonic flow conditions using Evolutionary Algorithms (EAs) coupled to a robust design approach. For the uncertain design parameters, the position of boundary-layer transition (xtr) and the coefficient of lift (Cl) are considered (250 stochastic samples in total). Two robust design methods are considered in this paper: the first uses a standard robust design method, which evaluates one design model at 250 stochastic conditions for uncertainty; the second combines the standard robust design method with the concept of hierarchical (multi-population) sampling (250, 50, 15) for uncertainty. Numerical results show that the evolutionary optimisation method coupled to uncertainty design techniques produces useful and reliable Pareto-optimal SCB shapes which have low sensitivity and high aerodynamic performance while providing significant total drag reduction. In addition, the results show the benefit of using the hierarchical robust method for detailed uncertainty design optimisation.
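The Python sketch below illustrates one plausible reading of the hierarchical (250, 50, 15) sampling idea: candidate designs are screened on the smallest sample set and only the most promising are promoted to the larger ones; the objective function, design variables and promotion rule are assumptions.

```python
# Sketch of the hierarchical (multi-population) uncertainty sampling idea:
# cheap screening of candidate designs on 15 stochastic conditions, promotion of
# the best to 50, and full robust evaluation of the few survivors on all 250.
# The objective, the design variables and the promotion thresholds are
# illustrative assumptions.

import random

random.seed(1)
FULL = [(random.uniform(0.55, 0.75), random.gauss(0.0, 1.0)) for _ in range(250)]  # (Cl, xtr noise)
MEDIUM, SMALL = FULL[:50], FULL[:15]

def drag(design, condition):
    """Toy stand-in for a CFD drag evaluation of an SCB shape at one condition."""
    bump_height, bump_pos = design
    cl, noise = condition
    return (bump_height - 0.01 * cl) ** 2 + 0.1 * (bump_pos - 0.5) ** 2 + 0.001 * abs(noise)

def robust_score(design, samples):
    """Mean drag over the uncertain conditions (a standard robust criterion)."""
    return sum(drag(design, c) for c in samples) / len(samples)

candidates = [(random.uniform(0.0, 0.02), random.uniform(0.3, 0.7)) for _ in range(60)]

# hierarchy: 15 samples -> keep 20 designs, 50 samples -> keep 5, 250 samples -> best
stage1 = sorted(candidates, key=lambda d: robust_score(d, SMALL))[:20]
stage2 = sorted(stage1, key=lambda d: robust_score(d, MEDIUM))[:5]
best = min(stage2, key=lambda d: robust_score(d, FULL))
print(best, round(robust_score(best, FULL), 6))
```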
Abstract:
Quality-oriented management systems and methods have become the dominant business and governance paradigm. From this perspective, satisfying customers' expectations by supplying reliable, good-quality products and services is the key factor for an organization and even a government. During recent decades, Statistical Quality Control (SQC) methods have been developed as the technical core of quality management and the continuous improvement philosophy, and are now applied widely to improve the quality of products and services in the industrial and business sectors. Recently, SQC tools, in particular quality control charts, have been used in healthcare surveillance. In some cases these tools have been modified and developed to better suit the characteristics and needs of the health sector. It seems that some of the work in the healthcare area has evolved independently of the development of industrial statistical process control methods. Therefore, analysing and comparing the paradigms and characteristics of quality control charts and techniques across the different sectors presents opportunities for transferring knowledge and for future development in each sector. Meanwhile, the capabilities of the Bayesian approach, particularly Bayesian hierarchical models and computational techniques in which all uncertainty is expressed as a structure of probability, facilitate decision making and cost-effectiveness analyses. Therefore, this research investigates the use of the quality improvement cycle in a health setting using clinical data from a hospital. The need for clinical data for monitoring purposes is investigated in two respects. A framework and appropriate tools from the industrial context are proposed and applied to evaluate and improve data quality in the available datasets and data flow; a data-capturing algorithm using Bayesian decision-making methods is then developed to determine an economical sample size for statistical analyses within the quality improvement cycle. Having ensured clinical data quality, some characteristics of control charts in the health context, including the necessity of monitoring attribute data and correlated quality characteristics, are considered. To this end, multivariate control charts from the industrial context are adapted to monitor radiation delivered to patients undergoing diagnostic coronary angiograms, and various risk-adjusted control charts are constructed and investigated for monitoring binary outcomes of clinical interventions as well as post-intervention survival time. Meanwhile, adoption of a Bayesian approach is proposed as a new framework for estimating the change point following a control chart's signal. This estimate aims to facilitate root-cause identification efforts in the quality improvement cycle, since it narrows the search for the potential causes of detected changes to a tighter time-frame prior to the signal. This approach enables us to obtain highly informative estimates for change point parameters, since the results are based on probability distributions. Using Bayesian hierarchical models and Markov chain Monte Carlo computational methods, Bayesian estimators of the time and magnitude of various change scenarios, including step changes, linear trends and multiple changes in a Poisson process, are developed and investigated.
The benefits of change point investigation are revisited and promoted in monitoring hospital outcomes, where the developed Bayesian estimator reports the true time of the shifts, compared with a priori known causes, detected by control charts monitoring the rate of excess usage of blood products and major adverse events during and after cardiac surgery in a local hospital. The development of the Bayesian change point estimators is then extended to healthcare surveillance of processes in which pre-intervention characteristics of patients affect the outcomes. In this setting, the Bayesian estimator is first extended to capture the patient mix (covariates) through the risk models underlying risk-adjusted control charts. Variations of the estimator are developed to estimate the true time of step changes and linear trends in the odds ratio of intensive care unit outcomes in a local hospital. Secondly, the Bayesian estimator is extended to identify the time of a shift in mean survival time after a clinical intervention which is being monitored by risk-adjusted survival time control charts. In this context, the survival time after a clinical intervention is also affected by the patient mix, and the survival function is constructed using a survival prediction model. The simulation studies undertaken in each research component and the results obtained strongly recommend the developed Bayesian estimators as an alternative for change point estimation within the quality improvement cycle in healthcare surveillance as well as in industrial and business contexts. The superiority of the proposed Bayesian framework and estimators is enhanced when the probability quantification, flexibility and generalizability of the developed models are also considered. The advantages of the Bayesian approach seen in the general context of quality control may also be extended to the industrial and business domains where quality monitoring was initially developed.
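As an illustration of the change point machinery for the simplest case described above, the Python sketch below implements a Gibbs sampler for a single step change in a Poisson process with Gamma priors on the two rates; the priors, simulated data and sampler settings are assumptions, not the models developed in the thesis.

```python
# Sketch of a Bayesian step-change-point estimator for a Poisson process
# (Gamma priors on the rates, Gibbs sampling over rates and change point);
# the priors, simulated data and sampler settings are illustrative assumptions.

import math
import random

random.seed(42)

# Simulated counts, approximately Poisson: rate ~2 before the change at t = 60, ~5 after.
n, true_tau = 100, 60
counts = [sum(random.random() < 0.02 for _ in range(100)) for _ in range(true_tau)] + \
         [sum(random.random() < 0.05 for _ in range(100)) for _ in range(n - true_tau)]

a, b = 1.0, 1.0                      # Gamma(a, b) priors on both rates
tau, lam1, lam2 = n // 2, 1.0, 1.0   # initial values
draws = []

prefix = [0]
for c in counts:
    prefix.append(prefix[-1] + c)    # prefix[t] = sum of the first t counts

for it in range(3000):
    s1, s2 = prefix[tau], prefix[n] - prefix[tau]
    # conjugate updates: Gamma(shape = a + segment sum, rate = b + segment length)
    lam1 = random.gammavariate(a + s1, 1.0 / (b + tau))
    lam2 = random.gammavariate(a + s2, 1.0 / (b + n - tau))
    # discrete full conditional of the change point
    logw = [prefix[t] * math.log(lam1) + (prefix[n] - prefix[t]) * math.log(lam2)
            - t * lam1 - (n - t) * lam2 for t in range(1, n)]
    m = max(logw)
    w = [math.exp(v - m) for v in logw]
    r, acc = random.uniform(0, sum(w)), 0.0
    for t, wt in enumerate(w, start=1):
        acc += wt
        if acc >= r:
            tau = t
            break
    if it >= 500:                    # discard burn-in
        draws.append(tau)

print("posterior mode of the change point:", max(set(draws), key=draws.count))
```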
Abstract:
Diverse morphologies of multidimensional hierarchical single-crystalline ZnO nanoarchitectures, including nanoflowers, nanobelts, and nanowires, are obtained by a simple thermal evaporation and vapour-phase transport deposition technique, placing Au-coated silicon substrates at different positions inside a furnace at process temperatures as low as 550 °C. The nucleation and growth of the ZnO nanostructures are governed by the vapour–solid mechanism, as opposed to the vapour–liquid–solid mechanism commonly reported when gold is used in the process. The morphological, structural, compositional and optical properties of the synthesized ZnO nanostructures can be effectively tailored by means of the experimental parameters, and these properties are closely related to the local growth temperature and gas-phase supersaturation at the sample position. In particular, room-temperature photoluminescence measurements reveal an intense near-band-edge ultraviolet emission at about 386 nm for nanobelts and nanoflowers, which suggests that these nanostructures are of sufficient quality for applications in, for example, optoelectronic devices.
Abstract:
Effective control of dense, high-quality carbon nanotube arrays using hierarchical multilayer catalyst patterns is demonstrated. Scanning/transmission electron microscopy, atomic force microscopy, Raman spectroscopy, and numerical simulations show that by changing the secondary and tertiary layers one can control the properties of the nanotube arrays. The arrays with the highest surface density of vertically aligned nanotubes are produced using a hierarchical stack of iron nanoparticles and alumina and silica layers differing in thickness by one order of magnitude from one another. The results are explained in terms of the catalyst structure effect on carbon diffusivity.
Abstract:
The possibility of controlling the electrical resistivity-temperature dependence of nanosized resistive components made using hierarchical multilevel arrays of self-assembled gold nanoparticles prepared by multiple deposition/annealing is demonstrated. It is experimentally shown that hierarchical three-level patterns, in which nanoparticles with sizes ranging from several nanometers to several tens of nanometers play competing roles in the electrical conductivity, exhibit sharp changes in the activation energy. These patterns can be used for precise tuning of the resistivity-temperature behavior of nanoelectronic components.
Abstract:
Extending the work presented in Prasad et al. (IEEE Proceedings on Control Theory and Applications, 147, 523-37, 2000), this paper reports a hierarchical nonlinear physical model-based control strategy to account for the problems arising from the complex dynamics of the drum level and governor valve, and demonstrates its effectiveness in plant-wide disturbance handling. The strategy incorporates a two-level control structure consisting of lower-level conventional PI regulators and a higher-level nonlinear physical model predictive controller (NPMPC), mainly for set-point manoeuvring. The lower-level PI loops help stabilise the unstable drum-boiler dynamics and allow faster governor valve action for power and grid-frequency regulation. The higher-level NPMPC provides an optimal load-demand (or set-point) transition by effective handling of plant-wide interactions and system disturbances. The strategy has been tested in a simulation of a 200-MW oil-fired power plant at Ballylumford in Northern Ireland. A novel approach is devised to test the disturbance-rejection capability in severe operating conditions. Low-frequency disturbances were created by making random changes in radiation heat flow on the boiler side, while the condenser vacuum fluctuated randomly on the turbine side. To simulate high-frequency disturbances, pulse-type load disturbances were made to strike at instants that are not an integral multiple of the NPMPC sampling period. Impressive results have been obtained during both types of system disturbance and at extremely high rates of load change, right across the operating range. These results compared favourably with those from a conventional state-space generalized predictive control (GPC) method designed under similar conditions.
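The two-level structure can be sketched as below: a lower-level PI loop regulates a fast variable while a higher-level layer selects the set-point trajectory by minimising a cost over a short prediction horizon; the toy plant, gains and cost are assumptions and bear no relation to the Ballylumford plant model.

```python
# Structural sketch of the two-level idea only: a lower-level PI loop regulates
# a fast variable, while a higher-level predictive layer picks the set-point
# trajectory by minimising a cost over a short horizon. The plant model, gains
# and cost are illustrative assumptions.

def pi_step(setpoint, y, integral, kp=1.2, ki=0.4, dt=1.0):
    """Lower-level PI regulator (one sample)."""
    error = setpoint - y
    integral += error * dt
    return kp * error + ki * integral, integral

def plant_step(y, u, dt=1.0, tau=5.0):
    """Toy first-order plant standing in for the drum/governor dynamics."""
    return y + dt * (-y + u) / tau

def upper_level_setpoint(y, target, horizon=10, candidates=(0.0, 0.5, 1.0)):
    """Higher-level layer: simulate each candidate set-point ramp rate, pick the best."""
    best_ramp, best_cost = None, float("inf")
    for ramp in candidates:
        yy, integral, cost = y, 0.0, 0.0
        for k in range(horizon):
            sp = y + ramp * (k + 1) * (1 if target > y else -1)
            sp = min(sp, target) if target > y else max(sp, target)
            u, integral = pi_step(sp, yy, integral)
            yy = plant_step(yy, u)
            cost += (yy - target) ** 2 + 0.01 * u ** 2     # tracking + control effort
        if cost < best_cost:
            best_ramp, best_cost = ramp, cost
    return best_ramp

print("chosen set-point ramp rate:", upper_level_setpoint(y=0.0, target=5.0))
```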
Abstract:
A technique is derived for solving a non-linear optimal control problem by iterating on a sequence of simplified problems in linear quadratic form. The technique is designed to achieve the correct solution of the original non-linear optimal control problem in spite of these simplifications. A mixed approach with a discrete performance index and continuous state variable system description is used as the basis of the design, and it is shown how the global problem can be decomposed into local sub-system problems and a co-ordinator within a hierarchical framework. An analysis of the optimality and convergence properties of the algorithm is presented and the effectiveness of the technique is demonstrated using a simulation example with a non-separable performance index.
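The Python sketch below illustrates the general idea on a scalar example: the nonlinear dynamics are re-frozen around the latest trajectory to obtain a time-varying linear-quadratic problem, which is solved exactly by a backward Riccati recursion, and the loop repeats until the trajectory settles; the plant, weights and stopping rule are assumptions.

```python
# Sketch of solving a nonlinear optimal control problem through a sequence of
# linear-quadratic approximations: the cubic term is refrozen at the previous
# trajectory, the resulting time-varying LQ problem is solved by a backward
# Riccati recursion, and the loop repeats until the trajectory settles.
# The plant, cost weights and stopping rule are illustrative assumptions.

N, dt = 50, 0.1
q, r, qf = 1.0, 0.1, 1.0           # state, control and terminal weights
b = dt                             # input coefficient of the discretised plant

def simulate(x0, gains):
    """Roll the true nonlinear plant forward under the feedback u = -K x."""
    xs, us, x = [x0], [], x0
    for k in range(N):
        u = -gains[k] * x
        x = x + dt * (-x ** 3 + u)
        us.append(u); xs.append(x)
    return xs, us

x_nom = [2.0] * (N + 1)            # initial nominal trajectory (constant guess)
for iteration in range(20):
    # LQ approximation: -x^3 ~ -(x_nom_k^2) * x, giving a time-varying A coefficient
    a = [1.0 - dt * x_nom[k] ** 2 for k in range(N)]
    P, gains = qf, [0.0] * N
    for k in reversed(range(N)):   # backward Riccati recursion for the LQ problem
        K = (b * P * a[k]) / (r + b * b * P)
        P = q + a[k] * P * (a[k] - b * K)
        gains[k] = K
    xs, us = simulate(2.0, gains)
    if max(abs(xn - xo) for xn, xo in zip(xs, x_nom)) < 1e-6:
        break
    x_nom = xs

print("final state after convergence:", round(xs[-1], 4))
```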