976 results for Embedded Control Architectures
Abstract:
The centralized paradigm of a single controller and a single plant upon which modern control theory is built is no longer applicable to modern cyber-physical systems of interest, such as the power grid, software defined networks or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and, perhaps not surprisingly, makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable, with the Witsenhausen counterexample being a famous instance of this phenomenon. Spurred on by this perhaps discouraging result, a concerted 40-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effect of their control actions propagates through the plant, then the resulting distributed optimal control problem admits a convex formulation.
The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to a renewed enthusiasm in the controller synthesis community, resulting in a rich set of results over the past decade. The contributions of this thesis can be seen as being a part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control, and fall under three broad categories, namely controller synthesis, architecture design and system identification.
We begin by providing two novel controller synthesis algorithms. The first is a solution to the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two player LQR state-feedback problem with varying delays. Accommodating varying delays represents an important first step in combining distributed optimal control theory with the area of Networked Control Systems that considers lossy channels in the feedback loop. Our next set of results is concerned with controller architecture design. When designing controllers for large-scale systems, the architectural aspects of the controller such as the placement of actuators, sensors, and the communication links between them can no longer be taken as given -- indeed the task of designing this architecture is now as important as the design of the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, a unifying, computationally tractable approach, based on the model matching framework and atomic norm regularization, for the simultaneous co-design of a structured optimal controller and the architecture needed to implement it. Our final result is a contribution to distributed system identification. Traditional system identification techniques such as subspace identification are not computationally scalable, and destroy rather than leverage any a priori information about the system's interconnection structure. We argue that in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end we propose a promising heuristic for identifying the dynamics of a subsystem that is still connected to a large system.
We exploit the fact that the transfer function of the local dynamics is low-order, but full-rank, while the transfer function of the global dynamics is high-order, but low-rank, to formulate this separation task as a nuclear norm minimization problem. Finally, we conclude with a brief discussion of future research directions, with a particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control and optimization in layered architectures.
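The low-rank/full-rank separation described above can be illustrated with a small numerical sketch. Singular value thresholding, the proximal operator of the nuclear norm, is the basic computational step in nuclear norm minimization; the matrix sizes, ranks, and threshold below are hypothetical toy values, not the thesis's actual formulation.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of the
    nuclear norm, the basic step in nuclear norm minimization."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Toy illustration: a rank-2 "global" component plus a small
# full-rank "local" residual (sizes and scales are hypothetical).
rng = np.random.default_rng(0)
G = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))  # low rank
L = 0.01 * rng.standard_normal((30, 30))                         # local part
H = G + L

G_hat = svt(H, tau=0.5)  # thresholding suppresses the small residual
print(np.linalg.matrix_rank(G_hat, tol=1e-6))  # -> 2
```

The thresholded estimate recovers the low-rank global component because its singular values dominate those of the small local residual.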
Abstract:
150 p.
Abstract:
Ternary CoNiP nanowire (NW) arrays have been synthesized by electrochemical deposition inside the nanochannels of an anodic aluminum oxide (AAO) template. The CoNiP NWs deposited at room temperature present soft magnetic properties, with both parallel and perpendicular coercivities less than 500 Oe. In contrast, as the electrolyte temperature (T-elc) increases from 323 to 343 K, the NWs exhibit hard magnetic properties with coercivities in the range of 1000-2500 Oe. This dramatic increase in coercivity can be attributed to domain wall pinning related to the formation of Ni and Co nanocrystallites and the increase in P content. A maximum parallel coercivity (i.e. with the applied field perpendicular to the membrane surface) as high as 2500 Oe, with a squareness ratio up to 0.8, is achieved at an electrolyte temperature of 328 K. It has been demonstrated that the parallel coercivity of CoNiP NWs can be tuned over a wide range of 200-2500 Oe by controlling the electrolyte temperature, providing an easy way to control magnetic properties and thereby facilitate their integration with magnetic micro-electromechanical systems (MEMS). (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
The evolution of the railway sector depends, to a great extent, on the deployment of advanced railway signalling systems. These signalling systems are based on communication architectures that must cope with complex electromagnetic environments. This paper is framed in the context of developing the tools needed to allow the quick deployment of these signalling systems by contributing to an easier analysis of their behaviour under the effect of electromagnetic interference. Specifically, this paper presents the modelling of the Eurobalise-train communication flow in a general purpose simulation tool. It is critical to guarantee this communication link, since any loss of communication may force the train to stop and cause availability problems. To model this communication link precisely, we used real measurements taken in a laboratory equipped with elements defined in the relevant subsets. Through the simulation study carried out, we obtained performance indicators of the physical layer such as the received power, SNR and BER. The modelling presented in this paper is a required step towards providing quality of service indicators for perturbed scenarios.
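The physical-layer indicators mentioned above can be illustrated with a minimal sketch. The Eurobalise air gap has its own modulation and coding defined in the relevant subsets; coherent BPSK over AWGN is used here purely as a stand-in to show how a BER indicator follows from an SNR figure.

```python
import math

def ber_bpsk(snr_db):
    """BER of coherent BPSK over AWGN at a given Eb/N0 in dB
    (a generic link-quality stand-in, not the Eurobalise scheme)."""
    ebn0 = 10.0 ** (snr_db / 10.0)
    return 0.5 * math.erfc(math.sqrt(ebn0))

for snr in (0, 5, 10):
    print(snr, "dB ->", ber_bpsk(snr))
```

At 0 dB the BER is roughly 8%, falling to a few errors per million bits at 10 dB, which is the kind of physical-layer indicator the simulation study reports.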
Abstract:
A modular image capture system with close integration to CCD cameras has been developed. The aim is to produce a system capable of integrating CCD sensor, image capture and image processing into a single compact unit. This close integration provides a direct mapping between CCD pixels and digital image pixels. The system has been interfaced to a digital signal processor board for the development and control of image processing tasks. These have included characterization and enhancement of noisy images from an intensified camera and measurement to subpixel resolutions. A highly compact form of the image capture system is in an advanced stage of development. This consists of a single FPGA device and a single VRAM providing a two chip image capturing system capable of being integrated into a CCD camera. A miniature compact PC has been developed using a novel modular interconnection technique, providing a processing unit in a three dimensional format highly suited to integration into a CCD camera unit. Work is under way to interface the compact capture system to the PC using this interconnection technique, combining CCD sensor, image capture and image processing into a single compact unit. ©2005 Copyright SPIE - The International Society for Optical Engineering.
Abstract:
New embedded predictive control applications call for more efficient ways of solving quadratic programs (QPs) in order to meet demanding real-time, power and cost requirements. A single precision QP-on-a-chip controller is proposed, implemented in a field-programmable gate array (FPGA) with an iterative linear solver at its core. A novel offline scaling procedure is introduced to aid the convergence of the reduced precision solver. The feasibility of the proposed approach is demonstrated with a real-time hardware-in-the-loop (HIL) experimental setup in which an ML605 FPGA board controls a nonlinear model of a Boeing 747 aircraft running on a desktop PC through an Ethernet link. Simulations show that the quality of the closed-loop control and the accuracy of individual solutions are competitive with a conventional double precision controller solving linear systems using a Riccati recursion. © 2012 IFAC.
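The abstract does not specify the offline scaling procedure; as a hedged illustration of the general idea, the sketch below applies symmetric Ruiz-style equilibration, a standard preconditioning choice, to a badly scaled matrix and checks the improvement in condition number. All matrices are toy values, not the paper's actual problem data.

```python
import numpy as np

def ruiz_scale(A, iters=10):
    """Symmetric Ruiz equilibration: repeatedly divide rows and
    columns by the square roots of their infinity norms, which
    helps an iterative solver converge in reduced precision."""
    A = A.copy()
    for _ in range(iters):
        r = np.sqrt(np.linalg.norm(A, ord=np.inf, axis=1))
        A = A / r[:, None] / r[None, :]
    return A

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))
M = M @ M.T + 6 * np.eye(6)          # well-conditioned SPD core
D = np.diag(10.0 ** np.arange(6))    # badly scaled variables
A = D @ M @ D                        # ill-conditioned input matrix

A_scaled = ruiz_scale(A)
print(np.linalg.cond(A), np.linalg.cond(A_scaled))
```

The equilibrated matrix has a condition number orders of magnitude smaller, which is what makes a single precision iterative solve viable.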
Abstract:
The solution time of the online optimization problems inherent to Model Predictive Control (MPC) can become a critical limitation when working in embedded systems. One proposed approach to reducing the solution time is to split the optimization problem into a number of reduced order problems, solve these reduced order problems in parallel, and select the solution that minimises a global cost function. This approach is known as Parallel MPC. Its potential for disturbance rejection is introduced using a simulation example. The algorithm is implemented on a linearised model of a Boeing 747-200 under nominal flight conditions and with an induced wind disturbance. Under significant output disturbances, Parallel MPC provides a significant improvement in performance when compared to Multiplexed MPC (MMPC) and Linear Quadratic Synchronous MPC (SMPC). © 2013 IEEE.
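A minimal sketch of the selection step described above, assuming a toy one-step problem in place of the actual reduced order QPs: each "reduced problem" here optimises over a single actuator, and the candidate minimising a global cost is selected. All models, weights, and values are hypothetical.

```python
import numpy as np

# Toy double-integrator-like model (hypothetical values).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B1 = np.array([[0.0], [0.1]])   # reduced problem 1: actuator 1 only
B2 = np.array([[0.2], [0.0]])   # reduced problem 2: actuator 2 only
x0 = np.array([1.0, 0.5])

def reduced_solution(B, x):
    # One-step least-squares stand-in for a reduced order QP:
    # choose u to minimise ||A x + B u||^2.
    u, *_ = np.linalg.lstsq(B, -A @ x, rcond=None)
    return u

def global_cost(B, u, x):
    # Global cost evaluated for each candidate solution.
    x_next = A @ x + B @ u
    return x_next @ x_next + 0.01 * (u @ u)

# Solve the reduced problems (in parallel, conceptually), then
# select the candidate with the smallest global cost.
candidates = [(B, reduced_solution(B, x0)) for B in (B1, B2)]
best = min(candidates, key=lambda c: global_cost(c[0], c[1], x0))
```

In this toy instance the second reduced problem wins the selection because its actuator acts on the larger state component.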
Abstract:
Copyright © 2014 John Wiley & Sons, Ltd. A field programmable gate array (FPGA) based model predictive controller for two phases of spacecraft rendezvous is presented. Linear time-varying prediction models are used to accommodate elliptical orbits, and a variable prediction horizon is used to facilitate finite time completion of the longer range manoeuvres, whilst a fixed and receding prediction horizon is used for fine-grained tracking at close range. The resulting constrained optimisation problems are solved using a primal-dual interior point algorithm. The majority of the computational demand is in solving a system of simultaneous linear equations at each iteration of this algorithm. To accelerate these operations, a custom circuit is implemented, using a combination of Mathworks HDL Coder and Xilinx System Generator for DSP, and used as a peripheral to a MicroBlaze soft-core processor on the FPGA, on which the remainder of the system is implemented. Certain logic that can be hard-coded for fixed sized problems is implemented to be configurable online, in order to accommodate the varying problem sizes associated with the variable prediction horizon. The system is demonstrated in closed-loop by linking the FPGA with a simulation of the spacecraft dynamics running in Simulink on a PC, using Ethernet. Timing comparisons indicate that the custom implementation is substantially faster than pure embedded software-based interior point methods running on the same MicroBlaze and could be competitive with a pure custom hardware implementation.
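The dominant operation noted above, solving a system of simultaneous linear equations at each interior point iteration, can be sketched as the solution of a KKT system for an equality-constrained QP. The data below are toy random values; the flight code's actual structure-exploiting solver is not reproduced here.

```python
import numpy as np

# Toy equality-constrained QP: minimise 0.5 x'Hx + g'x s.t. Ax = b.
rng = np.random.default_rng(2)
n, m = 6, 2
H = rng.standard_normal((n, n))
H = H @ H.T + n * np.eye(n)          # positive definite Hessian
A = rng.standard_normal((m, n))      # equality constraints
g = rng.standard_normal(n)
b = rng.standard_normal(m)

# Each interior point iteration reduces to a linear solve of this
# saddle-point (KKT) form: [H A'; A 0][x; v] = [-g; b].
K = np.block([[H, A.T], [A, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.concatenate([-g, b]))
x, v = sol[:n], sol[n:]
print(np.allclose(A @ x, b))   # primal feasibility of the step -> True
```

It is this dense linear solve, repeated at every iteration, that the custom circuit accelerates.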
Abstract:
A low-power design method is used in a 100 MHz embedded SRAM. The embedded SRAM, used in an FFT chip, is divided into 16 blocks. Two-level decoders are used, and only one block can be selected at a time by tristate control circuits, while the other blocks are set to stand-by. The SRAM cell has been optimized and the cell area minimized at the same time.
Abstract:
How can we ensure that knowledge embedded in a program is applied effectively? Traditionally the answer to this question has been sought in different problem solving paradigms and in different approaches to encoding and indexing knowledge. Each of these is useful with a certain variety of problem, but they all share a common problem: they become ineffective in the face of a sufficiently large knowledge base. How then can we make it possible for a system to continue to function in the face of a very large number of plausibly useful chunks of knowledge? In response to this question we propose a framework for viewing issues of knowledge indexing and retrieval, a framework that includes what appears to be a useful perspective on the concept of a strategy. We view strategies as a means of controlling invocation in situations where traditional selection mechanisms become ineffective. We examine ways to effect such control, and describe meta-rules, a means of specifying strategies which offers a number of advantages. We consider at some length how and when it is useful to reason about control, and explore the advantages meta-rules offer for doing this.
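As a toy sketch of the meta-rule idea, assuming entirely hypothetical rules: ordinary rules are candidates for invocation, and a meta-rule reorders the candidate set before one is applied, encoding a strategy about control rather than domain knowledge.

```python
# Hypothetical object-level rules; only the control metadata matters here.
rules = [
    {"name": "use_antibiotic_table", "topic": "therapy",  "cost": 3},
    {"name": "ask_culture_result",   "topic": "evidence", "cost": 1},
    {"name": "guess_organism",       "topic": "evidence", "cost": 2},
]

def meta_rule_prefer_cheap_evidence(candidates):
    """Meta-rule (a strategy about invocation, not about the domain):
    when evidence-gathering rules are available, try them before
    therapy rules, cheapest first."""
    evidence = sorted((r for r in candidates if r["topic"] == "evidence"),
                      key=lambda r: r["cost"])
    other = [r for r in candidates if r["topic"] != "evidence"]
    return evidence + other

ordered = meta_rule_prefer_cheap_evidence(rules)
print([r["name"] for r in ordered])
# -> ['ask_culture_result', 'guess_organism', 'use_antibiotic_table']
```

The point of the sketch is that the meta-rule never inspects the rules' conclusions; it only reshapes the invocation order, which is exactly the separation of control knowledge the abstract argues for.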
Abstract:
Literature on the nonprofit sector focuses on charities and their interactions with clients or governmental agencies; donors are studied less often. Studies on philanthropy do examine donors but tend to focus on microlevel factors to explain their behavior. This study, in contrast, draws on institutional theory to show that macrolevel factors affect donor behavior. It also extends the institutional framework by examining the field‐level configurations in which donors and fundraisers are embedded. Employing the case of workplace charity, this new model highlights how the composition of the organizational field structures fundraisers and donors alike, shaping fundraisers’ strategies of solicitation and, therefore, the extent of donor control.
Abstract:
In the field of embedded systems design, coprocessors play an important role as components to increase performance. Many embedded systems are built around a small General Purpose Processor (GPP). If the GPP cannot meet the performance requirements for a certain operation, a coprocessor can be included in the design. The GPP can then offload the computationally intensive operation to the coprocessor, thus increasing the performance of the overall system. A common application of coprocessors is the acceleration of cryptographic algorithms. The work presented in this thesis discusses coprocessor architectures for various cryptographic algorithms that are found in many cryptographic protocols. Their performance is then analysed on a Field Programmable Gate Array (FPGA) platform. Firstly, the acceleration of Elliptic Curve Cryptography (ECC) algorithms is investigated through the use of instruction set extensions of a GPP. The performance of these algorithms in a full hardware implementation is then investigated, and an architecture for the acceleration of the ECC based digital signature algorithm is developed. Hash functions are also an important component of a cryptographic system. The FPGA implementation of recent hash function designs from the SHA-3 competition is discussed and a fair comparison methodology for hash functions is presented. Many cryptographic protocols involve the generation of random data, for keys or nonces. This requires a True Random Number Generator (TRNG) to be present in the system. Various TRNG designs are discussed and a secure implementation, including post-processing and failure detection, is introduced. Finally, a coprocessor for the acceleration of operations at the protocol level is discussed, where a novel aspect of the design is the secure method in which private-key data is handled.
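The core ECC operation such accelerators implement can be sketched with the textbook double-and-add scalar multiplication below, over the tiny toy curve y^2 = x^3 + 2x + 2 over GF(17) — a teaching example, not a secure curve, and not the thesis's actual architecture.

```python
# Toy curve parameters (textbook example, insecure by design).
P_MOD = 17   # field prime
A_COEF = 2   # curve coefficient a in y^2 = x^3 + a*x + b

def ec_add(P, Q):
    """Affine point addition; None represents the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                                  # P + (-P) = O
    if P == Q:
        s = (3 * x1 * x1 + A_COEF) * pow(2 * y1, -1, P_MOD)
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD)
    s %= P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, P):
    """Double-and-add: the loop hardware accelerators unroll/pipeline."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R
```

For example, scalar_mult(2, (5, 1)) gives (6, 3) on this curve, and multiplying the generator (5, 1) by the group order 19 returns the point at infinity (None); signature schemes such as ECDSA build directly on this scalar multiplication.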
Abstract:
The radiative processes associated with fluorophores and other radiating systems can be profoundly modified by their interaction with nanoplasmonic structures. Extreme electromagnetic environments can be created in plasmonic nanostructures or nanocavities, such as within the nanoscale gap region between two plasmonic nanoparticles, where the illuminating optical fields and the density of radiating modes are dramatically enhanced relative to vacuum. Unraveling the various mechanisms present in such coupled systems, and their impact on spontaneous emission and other radiative phenomena, however, requires a suitably reliable and precise means of tuning the plasmon resonance of the nanostructure while simultaneously preserving the electromagnetic characteristics of the enhancement region. Here, we achieve this control using a plasmonic platform consisting of colloidally synthesized nanocubes electromagnetically coupled to a metallic film. Each nanocube resembles a nanoscale patch antenna (or nanopatch) whose plasmon resonance can be changed independent of its local field enhancement. By varying the size of the nanopatch, we tune the plasmonic resonance by ∼ 200 nm, encompassing the excitation, absorption, and emission spectra corresponding to Cy5 fluorophores embedded within the gap region between nanopatch and film. By sweeping the plasmon resonance but keeping the field enhancements roughly fixed, we demonstrate fluorescence enhancements exceeding a factor of 30,000 with detector-limited enhancements of the spontaneous emission rate by a factor of 74. The experiments are supported by finite-element simulations that reveal design rules for optimized fluorescence enhancement or large Purcell factors.
Abstract:
Embedded electronic systems in vehicles are of rapidly increasing commercial importance for the automotive industry. While current vehicular embedded systems are extremely limited and static, a more dynamically configurable system would greatly simplify the integration work and increase the quality of vehicular systems. This brings in features like separation of concerns, customised software configuration for individual vehicles, seamless connectivity, and plug-and-play capability. Furthermore, such a system can also contribute to increased dependability and resource optimization due to its inherent ability to adjust itself dynamically to changes in software, hardware resources, and environmental conditions. This paper describes the architectural approach taken by the EU research project DySCAS to achieving the goals of dynamically self-configuring automotive embedded electronic systems. The architecture solution outlined in this paper captures the application and operational contexts, expected features, middleware services, functions and behaviours, as well as the basic mechanisms and technologies. The paper also covers the architecture conceptualization by presenting the rationale concerning the architecture structuring, control principles, and deployment concept. We also present the adopted architecture V&V strategy and discuss some open issues with regard to industrial acceptance.
Abstract:
The extent and gravity of the environmental degradation of the water resources in Dhaka due to untreated industrial waste is not fully recognised in international discourse. Pollution levels affect vast numbers, but the poor and the vulnerable are the worst affected. For example, the productivity of rice, the mainstay of poor farmers, in the Dhaka watershed has declined by 40% over a period of ten years. The study found significant correlations between water pollution and diseases such as jaundice, diarrhoea and skin problems. It was reported that the cost of treatment of one episode of skin disease could be as high as 29% of the weekly earnings of some of the poorest households. The dominant approach to dealing with pollution in SMEs is technocratic. Given the magnitude of the problem, this paper argues that to control industrial pollution by SMEs and to enhance their compliance it is necessary to move from the technocratic approach to one which can also address the wider institutional and attitudinal issues. Underlying this shift is the need to adopt an appropriate methodology. Multi-stakeholder analysis enables an understanding of the actors, their influence, their capacity to participate in or oppose change, and the existing and embedded incentive structures which allow them to pursue interests that are generally detrimental to environmental good. This enabled core and supporting strategies to be developed around three types of actors in industrial pollution: (i) principal actors, who directly contribute to industrial pollution; (ii) stakeholders who exacerbate the situation; and (iii) potential actors in mitigation. Within a carrot-and-stick framework, the strategies aim to improve environmental governance and transparency, set up a package of incentives for industry, and increase public awareness.