733 results for Actuators.
Abstract:
With continuing advances in CMOS technology, feature sizes of modern silicon chipsets have shrunk drastically over the past decade. In addition to desktop and laptop processors, a vast majority of these chips are also deployed in mobile communication devices such as smartphones and tablets, where multiple radio-frequency integrated circuits (RFICs) must be integrated into one device to cater to a wide variety of applications such as Wi-Fi, Bluetooth, NFC, and wireless charging. While a small feature size enables higher integration levels, leading to billions of transistors co-existing on a single chip, it also makes these silicon ICs more susceptible to variations. A part of these variations can be attributed to the manufacturing process itself, particularly to the stringent dimensional tolerances associated with the lithographic steps in modern processes. Additionally, RF and millimeter-wave communication chipsets are subject to another type of variation caused by dynamic changes in the operating environment. Another bottleneck in the development of high-performance RF/mm-wave silicon ICs is the lack of accurate analog/high-frequency models in nanometer CMOS processes. This can be primarily attributed to the fact that most cutting-edge processes are geared towards digital system implementation, and as such there is little model-to-hardware correlation at RF frequencies.
All these issues have significantly degraded the yield of high-performance mm-wave and RF CMOS systems, which often require multiple trial-and-error based silicon validations, thereby incurring additional production costs. This dissertation proposes a low-overhead technique that attempts to counter the detrimental effects of these variations, thereby improving both the performance and the yield of chips post-fabrication in a systematic way. The key idea behind this approach is to dynamically sense the performance of the system, identify when a problem has occurred, and then actuate the system back to its desired performance level through an intelligent on-chip optimization algorithm. We term this technique self-healing, drawing inspiration from nature's own way of healing the body against adverse environmental effects. To demonstrate the efficacy of self-healing in CMOS systems, several representative examples are designed, fabricated, and measured under a variety of operating conditions.
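As a rough illustration of the sense-identify-actuate loop described above, the sketch below runs a greedy coordinate search over digital actuator codes against a sensed figure of merit. The read_sensors and set_actuators interfaces and the search itself are illustrative stand-ins assumed for this example, not the specific on-chip algorithm used in this work.

import random

def read_sensors(actuator_codes):
    """Placeholder: return a sensed figure of merit (e.g. output power minus
    a DC-power penalty) for the given actuator settings."""
    random.seed(hash(tuple(actuator_codes)))           # repeatable stand-in
    return -sum((c - 7) ** 2 for c in actuator_codes) + random.random()

def set_actuators(actuator_codes):
    """Placeholder: write digital codes (e.g. bias DACs, tuning capacitors)."""
    pass

def self_heal(n_actuators=4, code_range=range(16), sweeps=5):
    codes = [code_range[len(code_range) // 2]] * n_actuators   # mid-scale start
    best = read_sensors(codes)
    for _ in range(sweeps):                  # repeat sweeps over all actuators
        for i in range(n_actuators):         # tune one actuator at a time
            for c in code_range:
                trial = codes[:i] + [c] + codes[i + 1:]
                merit = read_sensors(trial)
                if merit > best:
                    best, codes = merit, trial
    set_actuators(codes)                     # commit the healed settings
    return codes, best

if __name__ == "__main__":
    print(self_heal())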
We demonstrate a high-power mm-wave segmented power-mixer-array based transmitter architecture that is capable of generating high-speed, non-constant-envelope modulations at higher efficiency than existing conventional designs. We then incorporate several sensors and actuators into the design and demonstrate closed-loop healing against a wide variety of non-ideal operating conditions. We also demonstrate fully integrated self-healing in the context of another mm-wave power amplifier, where measurements across several chips show significant improvements in performance as well as reduced variability in the presence of process variations, load impedance mismatch, and even catastrophic transistor failure. Finally, on the receiver side, a closed-loop self-healing phase synthesis scheme is demonstrated in conjunction with a wide-band voltage-controlled oscillator to generate phase-shifted local oscillator (LO) signals for a phased-array receiver. The system is shown to heal against non-idealities in the LO signal generation and distribution, significantly reducing phase errors across a wide range of frequencies.
Abstract:
We introduce an in vitro diagnostic magnetic biosensing platform for immunoassay and nucleic acid detection. The platform has the key characteristics of a point-of-use (POU) diagnostic: portability, low power consumption, low cost, and multiplexing capability. As a demonstration of its capabilities, we use this platform for the room-temperature, amplification-free detection of a 31 bp DNA oligomer and of interferon-gamma (a protein relevant for tuberculosis diagnosis). Reliable assay measurements down to 100 pM for the DNA and 1 pM for the protein are demonstrated. We introduce a novel "magnetic freezing" technique to eliminate baseline measurements and to enable spatial multiplexing. We have created a general protocol for adapting integrated circuit (IC) sensors to any of hundreds of commercially available immunoassay kits and custom-designed DNA sequences.
We also introduce a method for immunotherapy treatment of malignant gliomas. We utilize leukocytes that have internalized immunostimulatory nanoparticle-oligonucleotide conjugates to localize and retain immune cells near the tumor site. As a proof of principle, we develop a novel cell imaging and incubation chamber for in vitro magnetic motility experiments. We use the apparatus to demonstrate the controlled movement of magnetically loaded THP-1 leukocytes.
Finally, we introduce an IC transmitter and power amplifier (PA) that utilizes electronic digital infrastructure, sensors, and actuators to self-heal and adapt to process, dynamic, and environmental variations. Traditional IC design has achieved incredible degrees of reliability by ensuring that billions of transistors on a single IC die are all simultaneously functional. Maintaining this reliability becomes increasingly difficult as transistor sizes shrink. Self-healing can mitigate these variations.
Abstract:
The centralized paradigm of a single controller and a single plant upon which modern control theory is built is no longer applicable to modern cyber-physical systems of interest, such as the power grid, software-defined networks, or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange, and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and, perhaps not surprisingly, makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable, with the Witsenhausen counterexample being a famous instance of this phenomenon. Spurred on by this perhaps discouraging result, a concerted 40-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effect of their control actions propagates through the plant, then the resulting distributed optimal control problem admits a convex formulation.
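For reference, the quadratic invariance condition alluded to above can be stated precisely (following Rotkowitz and Lall): with G the plant and \mathcal{S} the subspace of admissible controllers, \mathcal{S} is quadratically invariant with respect to G if
\[
  K G K \in \mathcal{S} \quad \text{for all } K \in \mathcal{S},
\]
in which case K \in \mathcal{S} if and only if K(I - GK)^{-1} \in \mathcal{S}, so the set of achievable closed-loop responses under the constraint is convex.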
The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to renewed enthusiasm in the controller synthesis community, resulting in a rich set of results over the past decade. The contributions of this thesis can be seen as part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control, and fall under three broad categories: controller synthesis, architecture design, and system identification.
We begin by providing two novel controller synthesis algorithms. The first is a solution to the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two-player LQR state-feedback problem with varying delays. Accommodating varying delays represents an important first step in combining distributed optimal control theory with the area of Networked Control Systems, which considers lossy channels in the feedback loop.

Our next set of results is concerned with controller architecture design. When designing controllers for large-scale systems, the architectural aspects of the controller, such as the placement of actuators, sensors, and the communication links between them, can no longer be taken as given; indeed, the task of designing this architecture is now as important as the design of the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, a unifying and computationally tractable approach, based on the model matching framework and atomic norm regularization, for the simultaneous co-design of a structured optimal controller and the architecture needed to implement it.

Our final result is a contribution to distributed system identification. Traditional system identification techniques such as subspace identification are not computationally scalable, and they destroy rather than leverage any a priori information about the system's interconnection structure. We argue that, in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end, we propose a promising heuristic for identifying the dynamics of a subsystem that is still connected to a larger system. We exploit the fact that the transfer function of the local dynamics is low-order but full-rank, while the transfer function of the global dynamics is high-order but low-rank, to formulate this separation task as a nuclear norm minimization problem. Finally, we conclude with a brief discussion of future research directions, with particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control, and optimization in layered architectures.
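A schematic version of that nuclear norm separation, written in Python with cvxpy, is sketched below. Stacking the measured responses into a data matrix D and the regularization weight lam are illustrative assumptions for this sketch, not the exact formulation used in the thesis.

import numpy as np
import cvxpy as cp

def separate(D, lam=0.1):
    """Split D into a low-rank part L, attributed to the global dynamics,
    and a residual D - L, attributed to the local dynamics."""
    L = cp.Variable(D.shape)
    objective = cp.Minimize(cp.normNuc(L) + lam * cp.sum_squares(D - L))
    cp.Problem(objective).solve()
    return L.value, D - L.value

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    low_rank = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 30))
    D = low_rank + 0.05 * rng.standard_normal((20, 30))   # noisy observations
    L_hat, E_hat = separate(D)
    print("estimated rank of the global component:", np.linalg.matrix_rank(L_hat, tol=1e-1))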
Abstract:
This work presents the working principle and design method of a novel five-degree-of-freedom precision positioning stage. The stage uses piezoelectric ceramics as the driving elements and flexure guide mechanisms to realize the translational and rotational motions. The entire stage can be machined from a single block of metal by wire-cut electrical discharge machining, achieving monolithic fabrication and a compact structure. Stiffness calculation formulas for the guide mechanism and a design example are also given.
Abstract:
This work presents the working principle and design method of a novel five-degree-of-freedom precision positioning stage. The stage uses flexure guide mechanisms to realize the translational and rotational motions, piezoelectric ceramics as the driving elements, and external nanometer-resolution capacitive sensors as the displacement-measurement feedback elements; with a digital PID control method, nanometer-level positioning accuracy can be achieved. Stiffness calculation formulas and design examples for several types of flexure guide mechanisms are given.
Abstract:
In petawatt laser systems, the gratings used in the pulse compressor are so large that at present they can only be obtained by arraying small-aperture gratings to form a larger one, an approach referred to as grating tiling. Theory and experiments have demonstrated that the coherent addition of multiple small gratings to form a larger grating is viable; the key technology is controlling the relative position and orientation of each grating with high precision. Based on the main factors that affect the performance of grating tiling, a 5-DOF ultraprecision stage has been developed for grating-tiling experiments. The mechanism has a serial structure; its motion is guided by flexure hinges and driven by piezoelectric actuators, and its motion resolution reaches the nanometer level. To keep the mechanism stable, capacitive position sensors with nanometer accuracy are mounted on it to provide feedback signals for closed-loop control, so that, through voltage control and a digital PID algorithm, the positioning precision of the mechanism is within a few nanometers. Experimental results indicate that the performance of the mechanism meets the precision requirements of grating tiling.
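A minimal sketch of such a voltage-mode closed loop, with a digital PID law driving the piezoelectric actuator from capacitive-sensor feedback, is given below; read_capacitive_sensor and write_actuator_voltage are hypothetical hardware interfaces and the gains are illustrative, not the values tuned for the grating-tiling stage.

def read_capacitive_sensor():
    """Placeholder: return the measured displacement in nanometers."""
    return 0.0

def write_actuator_voltage(v):
    """Placeholder: apply a drive voltage to the piezo amplifier."""
    pass

def pid_position_loop(setpoint_nm, kp=0.02, ki=0.5, kd=0.0,
                      dt=1e-3, v_max=100.0, steps=1000):
    integral, prev_err, v = 0.0, 0.0, 0.0
    for _ in range(steps):
        err = setpoint_nm - read_capacitive_sensor()   # nanometer-scale error
        integral += err * dt
        derivative = (err - prev_err) / dt
        v = kp * err + ki * integral + kd * derivative
        v = max(0.0, min(v_max, v))                    # respect piezo drive range
        write_actuator_voltage(v)
        prev_err = err
    return v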
Abstract:
This paper deals with the convergence of a remote iterative learning control system subject to data dropouts. The system is composed of a set of discrete-time multiple-input multiple-output linear models, each one with its corresponding actuator device and its sensor. Each actuator applies the input signal vector to its corresponding model at the sampling instants, and the sensor measures the output signal vector. The iterative learning law is processed in a controller located far away from the models, so the control signal vector has to be transmitted from the controller to the actuators through transmission channels. The learning law uses the measurements of each model to generate the input vector to be applied to the subsequent model, so the measurements of the models also have to be transmitted from the sensors to the controller. All transmissions are subject to failures, which are described by a binary sequence taking the values 1 or 0. A dropout compensation technique is used to replace the data lost in the transmission processes. Convergence to zero of the errors between the output signal vector and a reference vector is achieved as the number of models tends to infinity.
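A schematic version of such a scheme is sketched below using a scalar plant, a P-type learning law, and a simple hold-last-received compensation rule; the plant, gains, and dropout model are illustrative assumptions, and repeated trials of one plant stand in for the paper's sequence of models.

import numpy as np

rng = np.random.default_rng(1)
A, B, C, D = 0.8, 1.0, 1.0, 1.0              # scalar plant with direct feedthrough
N, trials, gamma, p_drop = 20, 30, 0.5, 0.2  # horizon, trials, learning gain, loss prob.
y_ref = np.ones(N)                           # reference output trajectory

def run_plant(u):
    x, y = 0.0, np.zeros(N)
    for t in range(N):
        y[t] = C * x + D * u[t]
        x = A * x + B * u[t]
    return y

u = np.zeros(N)        # learned input, kept at the remote controller
y_rx = np.zeros(N)     # last measurement received by the controller
u_rx = np.zeros(N)     # last input received by the actuator
for k in range(trials):
    y = run_plant(u_rx)
    keep = rng.random(N) >= p_drop           # sensor-to-controller channel
    y_rx = np.where(keep, y, y_rx)           # lost samples: hold last received value
    u = u + gamma * (y_ref - y_rx)           # P-type iterative learning update
    keep = rng.random(N) >= p_drop           # controller-to-actuator channel
    u_rx = np.where(keep, u, u_rx)

print("max tracking error:", np.max(np.abs(y_ref - run_plant(u_rx))))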
Abstract:
[ES] In the current situation, in which companies worldwide have had to automate their processes to face the new challenges of competitiveness, the need for new technologies to innovate and redefine those processes has become evident. This project focuses on applying new technologies to a hot rolling process in order to increase the company's production capacity and quality. To this end, the plant and the process to be automated are first analyzed, the problems are identified, and the most suitable solution is studied. After selecting the solution, sensors and actuators are placed along the process according to the steps followed in manufacturing. With all of this, a control sequence has been designed so that the process runs autonomously. In addition, an algorithm is designed to control the starting of the motors, thereby reducing energy consumption. In conclusion, the aim is to improve an old production process through automation and new technologies.
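One common way to realize such a start-up algorithm is to stagger the motor starts so that their inrush currents do not overlap; the sketch below, with hypothetical start_motor and motor_at_speed PLC interfaces and illustrative timings, is offered only as a generic example of that idea, not the algorithm designed in the project.

import time

def motor_at_speed(motor_id):
    """Placeholder for a run-up feedback signal (e.g. a contactor or encoder input)."""
    return True

def start_motor(motor_id):
    """Placeholder for the PLC output that energizes one motor starter."""
    print(f"starting motor {motor_id}")

def staggered_start(motor_ids, settle_s=2.0, timeout_s=10.0):
    for m in motor_ids:
        start_motor(m)
        t0 = time.monotonic()
        while not motor_at_speed(m):          # wait for the motor to reach speed
            if time.monotonic() - t0 > timeout_s:
                raise RuntimeError(f"motor {m} failed to start")
            time.sleep(0.1)
        time.sleep(settle_s)                  # spacing between inrush peaks

if __name__ == "__main__":
    staggered_start(["roughing_mill", "finishing_mill", "run_out_table"])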
Abstract:
[Es] The main objective of this final degree project is to compute the motions required of the actuators of the platform of a parallel kinematics mechanism, in order to place the workpiece in the proper position to carry out its micro-milling. Developing the project requires programming software such as Matlab. This work arises from the need to support a larger project consisting of the design of a parallel kinematics manipulator whose joints work by elastic deformation. The goal is that, while the tool remains stationary, the micro-milling of molds for manufacturing microlenses is achieved through the motion of the manipulator. The inverse kinematics has been solved and the workspace has been computed. This document presents the tasks, budget, and risks of the project, as well as annexes including the programming code.
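As an illustration of the kind of computation involved, the sketch below solves the classical inverse kinematics of a generic six-limb parallel platform; the anchor coordinates and the prismatic-limb assumption are illustrative, and the thesis performs the analogous computation in Matlab for its own flexure-jointed mechanism.

import numpy as np

def rot_zyx(rx, ry, rz):
    """Rotation matrix from roll/pitch/yaw angles (radians)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def inverse_kinematics(base_pts, plat_pts, position, angles):
    """Return the required length of each limb for the requested platform pose."""
    R, p = rot_zyx(*angles), np.asarray(position)
    return np.linalg.norm(p + plat_pts @ R.T - base_pts, axis=1)

if __name__ == "__main__":
    theta = np.deg2rad([0, 60, 120, 180, 240, 300])
    base = np.c_[0.10 * np.cos(theta), 0.10 * np.sin(theta), np.zeros(6)]   # base anchors (m)
    plat = np.c_[0.05 * np.cos(theta), 0.05 * np.sin(theta), np.zeros(6)]   # platform anchors (m)
    print(inverse_kinematics(base, plat, [0.001, 0.0, 0.08], [0.0, 0.0, np.deg2rad(1)]))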