10 results for Control applications

in CaltechTHESIS


Relevance: 80.00%

Abstract:

In this work, the development of a probabilistic approach to robust control is motivated by structural control applications in civil engineering. Often in civil structural applications, a system's performance is specified in terms of its reliability. In addition, the model and input uncertainty for the system may be described most appropriately using probabilistic or "soft" bounds on the model and input sets. The probabilistic robust control methodology contrasts with existing H∞/μ robust control methodologies that do not use probability information for the model and input uncertainty sets, yielding only the guaranteed (i.e., "worst-case") system performance and no information about the system's probable performance, which is of interest to civil engineers.

The design objective for the probabilistic robust controller is to maximize the reliability of the uncertain structure/controller system for a probabilistically-described uncertain excitation. The robust performance is computed for a set of possible models by weighting the conditional performance probability for a particular model by the probability of that model, then integrating over the set of possible models. This integration is accomplished efficiently using an asymptotic approximation. The probable performance can be optimized numerically over the class of allowable controllers to find the optimal controller. Also, if structural response data becomes available from a controlled structure, its probable performance can easily be updated using Bayes's Theorem to update the probability distribution over the set of possible models. An updated optimal controller can then be produced, if desired, by following the original procedure. Thus, the probabilistic framework integrates system identification and robust control in a natural manner.
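
As a compact illustration of the calculation described above (the notation is our own, since the abstract does not fix symbols), the robust failure probability and its Bayesian update take the standard forms

    P(F \mid M) = \int_{\Theta} P(F \mid \theta, M)\, p(\theta \mid M)\, d\theta ,
    \qquad
    p(\theta \mid D, M) \propto p(D \mid \theta, M)\, p(\theta \mid M),

where θ ranges over the set Θ of possible models, P(F | θ, M) is the conditional failure probability for a particular model, and D is measured structural response data. The first integral is the one evaluated with the asymptotic approximation mentioned above, and the controller is chosen to minimize P(F | M).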

The probabilistic robust control methodology is applied to two systems in this thesis. The first is a high-fidelity computer model of a benchmark structural control laboratory experiment. For this application, uncertainty in the input model only is considered. The probabilistic control design minimizes the failure probability of the benchmark system while remaining robust with respect to the input model uncertainty. The performance of an optimal low-order controller compares favorably with that of higher-order controllers based on other approaches for the same benchmark system. The second application is to the Caltech Flexible Structure, which is a lightweight aluminum truss structure actuated by three voice coil actuators. A controller is designed to minimize the failure probability for a nominal model of this system. Furthermore, the method for updating the model-based performance calculation given new response data from the system is illustrated.

Relevance: 60.00%

Abstract:

This thesis presents a concept for ultra-lightweight deformable mirrors based on a thin substrate of optical surface quality coated with continuous active piezopolymer layers that provide modes of actuation and shape correction. This concept eliminates any kind of stiff backing structure for the mirror surface and exploits micro-fabrication technologies to provide a tight integration of the active materials into the mirror structure and to avoid actuator print-through effects. Proof-of-concept, 10-cm-diameter mirrors with a low areal density of about 0.5 kg/m² have been designed, built, and tested to measure their shape-correction performance and verify the models used for design. The low-cost manufacturing scheme uses replication techniques and strives to minimize residual stresses that cause the optical figure to deviate from the master mandrel. It does not require precision tolerancing, is lightweight, and is therefore potentially scalable to larger diameters for use in large, modular space telescopes. Other potential applications for such a laminate could include ground-based mirrors for solar energy collection, adaptive optics for atmospheric turbulence, laser communications, and other shape control applications.

The immediate application for these mirrors is for the Autonomous Assembly and Reconfiguration of a Space Telescope (AAReST) mission, which is a university mission under development by Caltech, the University of Surrey, and JPL. The design concept, fabrication methodology, material behaviors and measurements, mirror modeling, mounting and control electronics design, shape control experiments, predictive performance analysis, and remaining challenges are presented herein. The experiments have validated numerical models of the mirror, and the mirror models have been used within a model of the telescope in order to predict the optical performance. A demonstration of this mirror concept, along with other new telescope technologies, is planned to take place during the AAReST mission.

Relevance: 40.00%

Abstract:

In this thesis, dry chemical modification methods involving UV/ozone, oxygen plasma, and vacuum annealing treatments are explored to precisely control the wettability of CNT arrays. By varying the exposure time of these treatments, the surface concentration of oxygenated groups adsorbed on the CNT arrays can be controlled. CNT arrays with a very low amount of oxygenated groups exhibit superhydrophobic behavior. In addition to their extremely high static contact angle, they cannot be dispersed in DI water and their impedance in aqueous electrolytes is extremely high. These arrays have an extreme water repellency capability such that a water droplet will bounce off their surface upon impact, and a thin film of air forms on their surface when they are immersed in a deep pool of water. In contrast, CNT arrays with a very high surface concentration of oxygenated functional groups exhibit extremely hydrophilic behavior. In addition to their extremely low static contact angle, they can be dispersed easily in DI water and their impedance in aqueous electrolytes is extremely low. Since the bulk structure of the CNT arrays is preserved during the UV/ozone, oxygen plasma, and vacuum annealing treatments, all CNT arrays can be repeatedly switched between superhydrophilic and superhydrophobic states, as long as their O/C ratio is kept below 18%.

The effect of oxidation using UV/ozone and oxygen plasma treatments is highly reversible as long as the O/C ratio of the CNT arrays is kept below 18%. At O/C ratios higher than 18%, the effect of oxidation is no longer reversible. This irreversible oxidation is caused by irreversible changes to the CNT atomic structure during the oxidation process. During the oxidation process, CNT arrays undergo three different processes. For CNT arrays with O/C ratios lower than 40%, the oxidation process results in the functionalization of the CNT outer walls by oxygenated groups. Although this functionalization process introduces defects, vacancies, and micropore openings, the graphitic structure of the CNT is still largely intact. For CNT arrays with O/C ratios between 40% and 45%, the oxidation process results in the etching of the CNT outer walls. This etching process introduces large-scale defects and holes that are clearly visible under TEM at high magnification. Most of these holes are found to be several layers deep and, in some cases, a large portion of the CNT side walls is cut open. For CNT arrays with O/C ratios higher than 45%, the oxidation process results in the exfoliation of the CNT walls and amorphization of the remaining CNT structure. This amorphization is implied by the disappearance of the C–C sp² peak, associated with the π-bond network, in the XPS spectra.
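
A minimal sketch of the regime boundaries quoted above (the 18%, 40%, and 45% thresholds come from this abstract; the function name and return values are our own illustration):

    def classify_cnt_oxidation(o_c_percent):
        """Qualitative oxidation regime versus O/C ratio, per the thresholds quoted above."""
        reversible = o_c_percent < 18   # wettability switching stays reversible below ~18% O/C
        if o_c_percent < 40:
            regime = "outer-wall functionalization (graphitic structure largely intact)"
        elif o_c_percent <= 45:
            regime = "outer-wall etching (large-scale defects and holes)"
        else:
            regime = "exfoliation and amorphization of the remaining structure"
        return regime, reversible

    for ratio in (10, 25, 42, 50):
        print(ratio, classify_cnt_oxidation(ratio))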

The impact behavior of a water droplet impinging on superhydrophobic CNT arrays in a low viscosity regime is investigated for the first time. Here, the experimental data are presented in the form of several important impact behavior characteristics, including the critical Weber number, volume ratio, restitution coefficient, and maximum spreading diameter. Three different impact regimes are identified experimentally, and a fourth is proposed. These regimes are partitioned by three critical Weber numbers, two of which are experimentally observed. The volume ratio between the primary and the secondary droplets is found to decrease with increasing Weber number in all impact regimes other than the first one. In the first impact regime, the volume ratio is independent of Weber number since the droplet remains intact during and after the impingement. Experimental data show that the coefficient of restitution decreases with increasing Weber number in all impact regimes. The rate of decrease of the coefficient of restitution in the high Weber number regime is found to be higher than that in the low and moderate Weber number regimes. Experimental data also show that the maximum spreading factor increases with increasing Weber number in all impact regimes. The rate of increase of the maximum spreading factor in the high Weber number regime is found to be higher than that in the low and moderate Weber number regimes. Phenomenological approximations and interpretations of the experimental data, as well as brief comparisons to previously proposed scaling laws, are presented here.
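
For reference, minimal sketches of the standard definitions of these quantities (the conventions and the example numbers are ours, not values from the thesis):

    def weber_number(density, velocity, diameter, surface_tension):
        """We = rho * v**2 * D / sigma for a droplet of diameter D impacting at speed v."""
        return density * velocity**2 * diameter / surface_tension

    def restitution_coefficient(rebound_speed, impact_speed):
        """Ratio of rebound speed to impact speed of the bouncing droplet."""
        return rebound_speed / impact_speed

    def max_spreading_factor(max_spread_diameter, initial_diameter):
        """Maximum spreading diameter normalized by the initial droplet diameter."""
        return max_spread_diameter / initial_diameter

    # Example: a 2 mm water droplet impacting at 0.5 m/s (room-temperature water properties)
    print(weber_number(density=998.0, velocity=0.5, diameter=2e-3, surface_tension=0.072))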

Dry oxidation methods are used for the first time to characterize the influence of oxidation on the capacitive behavior of CNT array EDLCs. The capacitive behavior of CNT array EDLCs can be tailored by varying their oxygen content, represented by their O/C ratio. The specific capacitance of these CNT arrays increases with the increase of their oxygen content in both KOH and Et4NBF4/PC electrolytes. As a result, their gravimetric energy density increases with the increase of their oxygen content. However, their gravimetric power density decreases with the increase of their oxygen content. The optimally oxidized CNT arrays are able to withstand more than 35,000 charge/discharge cycles in Et4NBF4/PC at a current density of 5 A/g while only losing 10% of their original capacitance.
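
The gravimetric figures of merit mentioned above are conventionally estimated with the textbook relations below (a sketch with placeholder arguments, not measurements from this work):

    def gravimetric_energy_density(capacitance_F, voltage_V, mass_kg):
        """E = 1/2 * C * V**2 per unit electrode mass, in J/kg."""
        return 0.5 * capacitance_F * voltage_V**2 / mass_kg

    def gravimetric_power_density(voltage_V, esr_ohm, mass_kg):
        """One common convention: matched-load power P = V**2 / (4 * ESR) per unit mass, in W/kg."""
        return voltage_V**2 / (4.0 * esr_ohm * mass_kg)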

Relevance: 30.00%

Abstract:

In response to infection or tissue dysfunction, immune cells develop into highly heterogeneous repertoires with diverse functions. Capturing the full spectrum of these functions requires analysis of large numbers of effector molecules from single cells. However, currently only 3-5 functional proteins can be measured from single cells. We developed a single cell functional proteomics approach that integrates a microchip platform with multiplex cell purification. This approach can quantitate 20 proteins from >5,000 phenotypically pure single cells simultaneously. With a 1-million-fold miniaturization, the system can detect down to ~100 molecules and requires only ~10^4 cells. Single cell functional proteomic analysis finds broad applications in basic, translational, and clinical studies. In the three studies conducted, it yielded critical insights for understanding clinical cancer immunotherapy, the mechanism of inflammatory bowel disease (IBD), and hematopoietic stem cell (HSC) biology.

To study phenotypically defined cell populations, single cell barcode microchips were coupled with upstream multiplex cell purification based on up to 11 parameters. Statistical algorithms were developed to process and model the high dimensional readouts. This analysis evaluates rare cells and is versatile for various cells and proteins. (1) We conducted an immune monitoring study of a phase 2 cancer cellular immunotherapy clinical trial that used T-cell receptor (TCR) transgenic T cells as the major therapeutic to treat metastatic melanoma. We evaluated the functional proteome of 4 antigen-specific, phenotypically defined T cell populations from the peripheral blood of 3 patients across 8 time points. (2) Natural killer (NK) cells can play a protective role in chronic inflammation, and their surface receptor, killer immunoglobulin-like receptor (KIR), has been identified as a risk factor for IBD. We compared the functional behavior of NK cells with differential KIR expression. These NK cells were retrieved from the blood of 12 patients with different genetic backgrounds. (3) HSCs are the progenitors of immune cells and are thought to have no immediate functional capacity against pathogens. However, recent studies identified expression of Toll-like receptors (TLRs) on HSCs. We studied the functional capacity of HSCs upon TLR activation. The comparison of HSCs from wild-type mice against those from genetic knock-out mouse models elucidates the responsible signaling pathway.

In all three cases, we observed profound functional heterogeneity within phenotypically defined cells. Polyfunctional cells that conduct multiple functions also produce those proteins in large amounts, and they dominate the immune response. In the cancer immunotherapy, the strong cytotoxic and antitumor functions from transgenic TCR T cells contributed to a ~30% tumor reduction immediately after the therapy. However, this infused immune response disappeared within 2-3 weeks. Later on, some patients gained a second antitumor response, consisting of the emergence of endogenous antitumor cytotoxic T cells and their production of multiple antitumor functions. These patients showed more effective long-term tumor control. In the IBD mechanism study, we noticed that, compared with others, NK cells expressing the KIR2DL3 receptor secreted a large array of effector proteins, such as TNF-α, CCLs, and CXCLs. The functions from these cells regulated disease-contributing cells and protected host tissues. Their existence correlated with IBD disease susceptibility. In the HSC study, the HSCs exhibited functional capacity by producing TNF-α, IL-6 and GM-CSF. TLR stimulation activated NF-κB signaling in HSCs. The single cell functional proteome contains rich information that is independent of the genome and transcriptome. In all three cases, functional proteomic evaluation uncovered critical biological insights that would not be resolved otherwise. The integrated single cell functional proteomic analysis constructed a detailed kinetic picture of the immune response that took place during the clinical cancer immunotherapy. It revealed concrete functional evidence connecting genetics to IBD disease susceptibility. Further, it provided predictors that correlated with clinical responses and pathogenic outcomes.

Relevance: 30.00%

Abstract:

This thesis explores the design, construction, and applications of the optoelectronic swept-frequency laser (SFL). The optoelectronic SFL is a feedback loop designed around a swept-frequency (chirped) semiconductor laser (SCL) to control its instantaneous optical frequency, such that the chirp characteristics are determined solely by a reference electronic oscillator. The resultant system generates precisely controlled optical frequency sweeps. In particular, we focus on linear chirps because of their numerous applications. We demonstrate optoelectronic SFLs based on vertical-cavity surface-emitting lasers (VCSELs) and distributed-feedback lasers (DFBs) at wavelengths of 1550 nm and 1060 nm. We develop an iterative bias current predistortion procedure that enables SFL operation at very high chirp rates, up to 10^16 Hz/sec. We describe commercialization efforts and implementation of the predistortion algorithm in a stand-alone embedded environment, undertaken as part of our collaboration with Telaris, Inc. We demonstrate frequency-modulated continuous-wave (FMCW) ranging and three-dimensional (3-D) imaging using a 1550 nm optoelectronic SFL.

We develop the technique of multiple source FMCW (MS-FMCW) reflectometry, in which the frequency sweeps of multiple SFLs are "stitched" together in order to increase the optical bandwidth, and hence improve the axial resolution, of an FMCW ranging measurement. We demonstrate computer-aided stitching of DFB and VCSEL sweeps at 1550 nm. We also develop and demonstrate hardware stitching, which enables MS-FMCW ranging without additional signal processing. The culmination of this work is the hardware stitching of four VCSELs at 1550 nm for a total optical bandwidth of 2 THz, and a free-space axial resolution of 75 microns.
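
The quoted resolution is consistent with the standard free-space FMCW relations (a quick check of ours; the thesis conventions may differ in detail):

    C = 3.0e8  # speed of light, m/s

    def axial_resolution(bandwidth_Hz):
        """Free-space axial resolution delta_z = c / (2 * B)."""
        return C / (2.0 * bandwidth_Hz)

    def beat_frequency(chirp_rate_Hz_per_s, range_m):
        """FMCW beat frequency f_b = 2 * xi * R / c for chirp rate xi and range R."""
        return 2.0 * chirp_rate_Hz_per_s * range_m / C

    print(axial_resolution(2e12))     # 2 THz stitched bandwidth -> 7.5e-5 m = 75 microns
    print(beat_frequency(1e16, 1.0))  # 10^16 Hz/s chirp, 1 m target -> ~67 MHz beat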

We describe our work on the tomographic imaging camera (TomICam), a 3-D imaging system based on FMCW ranging that features non-mechanical acquisition of transverse pixels. Our approach uses a combination of electronically tuned optical sources and low-cost full-field detector arrays, completely eliminating the need for moving parts traditionally employed in 3-D imaging. We describe the basic TomICam principle, and demonstrate single-pixel TomICam ranging in a proof-of-concept experiment. We also discuss the application of compressive sensing (CS) to the TomICam platform, and perform a series of numerical simulations. These simulations show that tenfold compression is feasible in CS TomICam, which effectively improves the volume acquisition speed by a factor of ten.

We develop chirped-wave phase-locking techniques, and apply them to coherent beam combining (CBC) of chirped-seed amplifiers (CSAs) in a master oscillator power amplifier configuration. The precise chirp linearity of the optoelectronic SFL enables non-mechanical compensation of optical delays using acousto-optic frequency shifters, and its high chirp rate simultaneously increases the stimulated Brillouin scattering (SBS) threshold of the active fiber. We characterize a 1550 nm chirped-seed amplifier coherent-combining system. We use a chirp rate of 5*10^14 Hz/sec to increase the amplifier SBS threshold threefold, when compared to a single-frequency seed. We demonstrate efficient phase-locking and electronic beam steering of two 3 W erbium-doped fiber amplifier channels, achieving temporal phase noise levels corresponding to interferometric fringe visibilities exceeding 98%.
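
One common way to relate the quoted fringe visibility to residual phase noise, assuming Gaussian phase fluctuations (our illustration, not necessarily the exact metric used in the thesis):

    import math

    def visibility_from_phase_noise(sigma_phi_rad):
        """V = exp(-sigma_phi**2 / 2) for Gaussian residual phase noise."""
        return math.exp(-sigma_phi_rad**2 / 2.0)

    def phase_noise_from_visibility(visibility):
        return math.sqrt(-2.0 * math.log(visibility))

    print(phase_noise_from_visibility(0.98))  # ~0.20 rad residual phase noise at 98% visibility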

Relevance: 30.00%

Abstract:

This dissertation covers progress with bimetallic polymerization catalysts. The complexes we have designed were aimed at expanding the capabilities of homogeneous polymerization catalysts by taking advantage of multimetallic effects. Such effects were examined in group 4 and group 10 bimetallic complexes; proximity and steric repulsion were determined to be major factors in the effects observed.

Chapters 2 and 3 introduce the rigid p-terphenyl dinucleating framework utilized in most of this thesis. The permethylation of the central arene allows for the separation of syn and anti atropisomers of the terphenyl compounds. Kinetic studies were carried out to examine the isomerization of the dinucleating bis(salicylaldimine) ligand precursors. Metallation of the syn and anti bis(salicylaldimine)s using Ni(Me)2(tmeda) and excess pyridine afforded dinickel bisphenoxyiminato complexes with a methyl and a pyridyl ligand on each nickel. The syn and anti atropisomers of the dinickel complexes were structurally characterized and utilized in ethylene and ethylene/α-olefin polymerizations. Monometallic analogues were also synthesized and tested for polymerization activity. Ethylene polymerizations were performed in the presence of primary, secondary, and tertiary amines – additives that generally deactivate nickel polymerization catalysts. Inhibition of this deactivation was observed with the syn atropisomer of the bimetallic species, but not with the anti or monometallic analogues. A mechanism was proposed wherein steric repulsion of the substituents on proximal nickel centers disfavors simultaneous ligation of base to both of the metal centers. The bimetallic effect has been explored with respect to size and binding ability of the added base.

Chapter 4 presents the optimization of the bisphenoxyimine ligand synthesis and synthesis of syn and anti m-terphenyl analogues. Metallation with NiClMe(PMe3)2 yielded phosphine-ligated dinickel complexes, which have been structurally characterized. Ethylene/1-hexene copolymerizations in the presence of amines using Ni(COD)2 as a phosphine scavenger showed significantly improved activity relative to the pyridine-ligated analogues. Incorporation of amino olefins in copolymerizations with ethylene was accomplished, and a mechanism was proposed based on proximal effects. Copolymerization trials with a variety of amino olefins and ethylene/1-hexene/amino olefin terpolymerizations were completed.

Early transition metal complexes based on the rigid p-terphenyl framework were designed with a variety of donor sets (Chapter 5 and Appendix B). Chapter 5 details the use of syn dizirconium di[amine bis(phenolate)] complexes for isoselective 1-hexene and propylene homopolymerizations. Ligand variation and monometallic complexes were studied to determine the origin of tacticity control. A mechanistic proposal was presented based on the symmetry at zirconium and the steric effects of the proximal metal center. Appendix B covers additional studies of bimetallic early transition metal complexes based on the p-terphenyl. Dititanium, dizirconium, and asymmetric complexes with bisphenoxyiminato ligands and derivatives thereof were targeted. Progress toward the synthesis of these complexes is described along with preliminary polymerization data. 1-hexene/diene copolymerizations and attempted polymerizations in the presence of ethers and esters with the syn dizirconium di[amine bis(phenolate)] complexes demonstrate the potential for further applications of this system in catalysis.

Appendix A includes work toward palladium catalysts for insertion polymerization of polar monomers. These complexes were based on dioxime and diimine frameworks with the intent of binding Lewis acidic metals at the oxime oxygens, at pendant phenolic donors, or at pendant aminediol moieties. The synthesis and structural characterization of a number of palladium and Lewis acid complexes are presented. Due to the instability of the desired species, efforts toward their isolation proved unsuccessful, though preliminary ethylene/methyl acrylate copolymerizations using in situ activation of the palladium species were attempted.

Relevance: 30.00%

Abstract:

This thesis is motivated by safety-critical applications involving autonomous air, ground, and space vehicles carrying out complex tasks in uncertain and adversarial environments. We use temporal logic as a language to formally specify complex tasks and system properties. Temporal logic specifications generalize the classical notions of stability and reachability that are studied in the control and hybrid systems communities. Given a system model and a formal task specification, the goal is to automatically synthesize a control policy for the system that ensures that the system satisfies the specification. This thesis presents novel control policy synthesis algorithms for optimal and robust control of dynamical systems with temporal logic specifications. Furthermore, it introduces algorithms that are efficient and extend to high-dimensional dynamical systems.

The first contribution of this thesis is the generalization of a classical linear temporal logic (LTL) control synthesis approach to optimal and robust control. We show how we can extend automata-based synthesis techniques for discrete abstractions of dynamical systems to create optimal and robust controllers that are guaranteed to satisfy an LTL specification. Such optimal and robust controllers can be computed at little extra computational cost compared to computing a feasible controller.

The second contribution of this thesis addresses the scalability of control synthesis with LTL specifications. A major limitation of the standard automaton-based approach for control with LTL specifications is that the automaton might be doubly-exponential in the size of the LTL specification. We introduce a fragment of LTL for which one can compute feasible control policies in time polynomial in the size of the system and specification. Additionally, we show how to compute optimal control policies for a variety of cost functions, and identify interesting cases when this can be done in polynomial time. These techniques are particularly relevant for online control, as one can guarantee that a feasible solution can be found quickly, and then iteratively improve on the quality as time permits.

The final contribution of this thesis is a set of algorithms for computing feasible trajectories for high-dimensional, nonlinear systems with LTL specifications. These algorithms avoid a potentially computationally-expensive process of computing a discrete abstraction, and instead compute directly on the system's continuous state space. The first method uses an automaton representing the specification to directly encode a series of constrained-reachability subproblems, which can be solved in a modular fashion by using standard techniques. The second method encodes an LTL formula as mixed-integer linear programming constraints on the dynamical system. We demonstrate these approaches with numerical experiments on temporal logic motion planning problems with high-dimensional (10+ states) continuous systems.
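
As a sketch of one standard encoding of this kind (our illustration over a finite horizon t = 0, …, N; the thesis may use a different formulation), an atomic proposition p given by a linear predicate h^T x_t ≥ 0 is associated with binaries z_t^p ∈ {0, 1} through big-M constraints

    h^\top x_t \ge -M\,(1 - z^p_t), \qquad h^\top x_t \le M\, z^p_t - \epsilon,

so that z_t^p = 1 exactly when p holds at time t (up to the tolerance ε). Boolean and temporal operators then become linear constraints on these binaries, for example

    z^{\varphi \wedge \psi}_t \le z^{\varphi}_t, \quad z^{\varphi \wedge \psi}_t \le z^{\psi}_t, \quad z^{\varphi \wedge \psi}_t \ge z^{\varphi}_t + z^{\psi}_t - 1,

    \square\varphi:\ z^{\varphi}_t = 1 \ \ \forall t, \qquad \lozenge\varphi:\ \sum_{t=0}^{N} z^{\varphi}_t \ge 1,

and the specification is enforced by requiring the binary for the top-level formula to equal 1 at t = 0, alongside the mixed-integer linear constraints encoding the system dynamics and cost.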

Relevance: 30.00%

Abstract:

A neural network is a highly interconnected set of simple processors. The many connections allow information to travel rapidly through the network, and because of their simplicity, many processors in one network are feasible. Together these properties imply that we can build efficient massively parallel machines using neural networks. The primary problem is how to specify the interconnections in a neural network. The various approaches developed so far, such as the outer product rule, learning algorithms, or energy functions, suffer from the following deficiencies: long training/specification times, no guarantee of working on all inputs, and a requirement of full connectivity.

Alternatively, we discuss methods of using the topology and constraints of the problems themselves to design the topology and connections of the neural solution. We define several useful circuits, generalizations of the Winner-Take-All circuit, that allow us to incorporate constraints using feedback in a controlled manner. These circuits are proven to be stable and to converge only on valid states. We use the Hopfield electronic model since this is close to an actual implementation. We also discuss methods for incorporating these circuits into larger systems, neural and non-neural. By exploiting regularities in our definition, we can construct efficient networks. To demonstrate the methods, we look to three problems from communications. We first discuss two applications to problems from circuit switching: finding routes in large multistage switches, and the call rearrangement problem. These show both how we can use many neurons to build massively parallel machines and how the Winner-Take-All circuits can simplify our designs.
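
A minimal numerical sketch of a mutual-inhibition Winner-Take-All network with Hopfield-style continuous dynamics (the parameters, sigmoid, and integration scheme are our own choices, not the specific circuits designed in the thesis):

    import numpy as np

    def winner_take_all(inputs, inhibition=2.0, self_excitation=1.0,
                        gain=4.0, dt=0.01, steps=5000):
        """Integrate du/dt = -u + T g(u) + I; mutual inhibition drives one unit to win."""
        inputs = np.asarray(inputs, dtype=float)
        n = inputs.size
        # Connection matrix: self-excitation on the diagonal, inhibition everywhere else.
        T = self_excitation * np.eye(n) - inhibition * (np.ones((n, n)) - np.eye(n))
        u = np.zeros(n)
        for _ in range(steps):
            v = 1.0 / (1.0 + np.exp(-gain * u))   # sigmoidal unit outputs
            u += dt * (-u + T @ v + inputs)       # forward-Euler step of the dynamics
        return v

    print(winner_take_all([0.9, 1.0, 0.8]))  # the unit with the largest input saturates near 1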

Next we develop a solution to the contention arbitration problem of high-speed packet switches. We define a useful class of switching networks and then design a neural network to solve the contention arbitration problem for this class. Various aspects of the neural network/switch system are analyzed to measure the queueing performance of this method. Using the basic design, a feasible architecture for a large (1024-input) ATM packet switch is presented. Using the massive parallelism of neural networks, we can consider algorithms that were previously computationally unattainable. These now viable algorithms lead us to new perspectives on switch design.

Relevance: 30.00%

Abstract:

This thesis describes engineering applications that come from extending seismic networks into building structures. The proposed applications will benefit from the data provided by the newly developed crowd-sourced seismic networks, which are composed of low-cost accelerometers. An overview of the Community Seismic Network and its earthquake detection method is given. In the structural array components of crowd-sourced seismic networks, there may be instances in which a single seismometer is the only data source available from a building. A simple prismatic Timoshenko beam model with soil-structure interaction (SSI) is developed to approximate the mode shapes of buildings using natural frequency ratios, and a closed-form solution with complete vibration modes is derived. In addition, a new method is presented to rapidly estimate the total displacement response of a building based on limited observational data, in some cases from a single seismometer. The total response of a building is modeled as the combination of the initial vibrating motion due to an upward traveling wave and the subsequent motion of the low-frequency resonant mode response. Furthermore, the expected shaking intensities in tall buildings will be significantly different from those on the ground during earthquakes; examples are included to estimate the characteristics of shaking that can be expected in mid-rise to high-rise buildings. Finally, the development of engineering applications (e.g., human comfort prediction and automated elevator control) for earthquake early warning systems using a probabilistic framework and statistical learning techniques is addressed.
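
As a loose illustration of the single-seismometer idea (not the traveling-wave plus resonant-mode method developed in the thesis), one can isolate the fundamental-mode content of the one available record and rescale it by an assumed mode-shape ratio to approximate the resonant response at another floor; the frequency, mode-shape ratio, and synthetic record below are assumptions of ours.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def first_mode_estimate(record, fs, f1, phi_target_over_phi_sensor, half_band=0.3):
        """Band-pass the record around the fundamental frequency f1 (Hz) and rescale."""
        b, a = butter(4, [f1 * (1 - half_band), f1 * (1 + half_band)],
                      btype='bandpass', fs=fs)
        mode1_at_sensor = filtfilt(b, a, record)
        return phi_target_over_phi_sensor * mode1_at_sensor

    # Synthetic example: a 0.5 Hz fundamental mode buried in broadband noise.
    fs = 100.0
    t = np.arange(0.0, 60.0, 1.0 / fs)
    sensor_record = np.sin(2 * np.pi * 0.5 * t) + 0.3 * np.random.randn(t.size)
    upper_floor_estimate = first_mode_estimate(sensor_record, fs, f1=0.5,
                                               phi_target_over_phi_sensor=1.4)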

Relevance: 30.00%

Abstract:

Earthquake early warning (EEW) systems have been rapidly developing over the past decade. The Japan Meteorological Agency (JMA) has an EEW system that was operating during the 2011 M9 Tohoku earthquake in Japan, and this increased the awareness of EEW systems around the world. While longer-term earthquake prediction still faces many challenges before it becomes practical, the availability of short-term EEW opens up a new door for earthquake loss mitigation. After an earthquake fault begins rupturing, an EEW system utilizes the first few seconds of recorded seismic waveform data to quickly predict the hypocenter location, magnitude, origin time, and the expected shaking intensity level around the region. This early warning information is broadcast to different sites before the strong shaking arrives. The warning lead time of such a system is short, typically a few seconds to a minute or so, and the information is uncertain. These factors limit human intervention to activate mitigation actions, and they must be addressed for engineering applications of EEW. This study applies a Bayesian probabilistic approach, along with machine learning techniques and decision theories from economics, to improve different aspects of EEW operation, including extending it to engineering applications.

Existing EEW systems are often based on a deterministic approach and typically assume that only a single event occurs within a short period of time, an assumption that led to many false alarms after the Tohoku earthquake in Japan. This study develops a probability-based EEW algorithm, built on an existing deterministic model, that extends the EEW system to the case of concurrent events, which are often observed during the aftershock sequence following a large earthquake.

To overcome the challenges of uncertain information and the short lead time of EEW, this study also develops an earthquake probability-based automated decision-making (ePAD) framework to make robust decisions for EEW mitigation applications. A cost-benefit model that can capture the uncertainties in the EEW information and in the decision process is used. This approach, called Performance-Based Earthquake Early Warning, is based on the PEER Performance-Based Earthquake Engineering method. The use of surrogate models is suggested to improve computational efficiency, and new models are proposed to incorporate the influence of lead time into the cost-benefit analysis. For example, a value of information model is used to quantify the potential value of delaying the activation of a mitigation action for a possible reduction of the uncertainty in the EEW information at the next update. Two practical examples, evacuation alert and elevator control, are studied to illustrate the ePAD framework. Potential advanced EEW applications, such as the case of multiple-action decisions and the synergy of EEW and structural health monitoring systems, are also discussed.
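
A toy expected-cost comparison in the spirit of the cost-benefit framing above (the intensity forecast, fragility parameters, and costs are placeholders of ours, not values from the ePAD framework):

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    def fragility(im, median, beta):
        """Lognormal fragility: probability of the undesired outcome given intensity im."""
        return norm.cdf(np.log(im / median) / beta)

    def expected_costs(median_im, beta_im, action_cost, damage_cost,
                       frag_median, frag_beta, mitigation_factor, n=100_000):
        im = rng.lognormal(np.log(median_im), beta_im, size=n)  # uncertain intensity forecast
        p_damage = fragility(im, frag_median, frag_beta).mean()
        cost_if_acting = action_cost + mitigation_factor * p_damage * damage_cost
        cost_if_waiting = p_damage * damage_cost
        return cost_if_acting, cost_if_waiting

    act, wait = expected_costs(median_im=0.2, beta_im=0.5, action_cost=1.0,
                               damage_cost=100.0, frag_median=0.3, frag_beta=0.4,
                               mitigation_factor=0.5)
    print("trigger mitigation" if act < wait else "do not trigger", act, wait)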