5 results for automated full waveform logging system
in CaltechTHESIS
Abstract:
Earthquake early warning (EEW) systems have developed rapidly over the past decade. The Japan Meteorological Agency (JMA) operated an EEW system during the 2011 M9 Tohoku earthquake in Japan, which raised awareness of EEW systems around the world. While longer-term earthquake prediction still faces many challenges before it becomes practical, the availability of short-term EEW opens a new door for earthquake loss mitigation. After an earthquake fault begins rupturing, an EEW system uses the first few seconds of recorded seismic waveform data to quickly predict the hypocenter location, magnitude, origin time, and expected shaking intensity around the region. This early warning is broadcast to different sites before the strong shaking arrives. The warning lead time of such a system is short, typically a few seconds to a minute or so, and the information is uncertain. These factors limit human intervention in activating mitigation actions and must be addressed for engineering applications of EEW. This study applies a Bayesian probabilistic approach, along with machine learning techniques and decision theory from economics, to improve different aspects of EEW operation, including extending it to engineering applications.
Existing EEW systems are often based on a deterministic approach and frequently assume that only a single event occurs within a short period of time, an assumption that led to many false alarms after the Tohoku earthquake in Japan. This study develops a probability-based EEW algorithm on top of an existing deterministic model, extending the EEW system to handle concurrent events, which are often observed during the aftershock sequence following a large earthquake.
To overcome the uncertain information and short lead time of EEW, this study also develops an earthquake probability-based automated decision-making (ePAD) framework that makes robust decisions for EEW mitigation applications. It uses a cost-benefit model that captures the uncertainties in both the EEW information and the decision process. This approach, called Performance-Based Earthquake Early Warning, builds on the PEER Performance-Based Earthquake Engineering method. Surrogate models are suggested to improve computational efficiency, and new models are proposed to incorporate the influence of lead time into the cost-benefit analysis. For example, a value-of-information model quantifies the potential value of delaying the activation of a mitigation action in case the next update reduces the uncertainty of the EEW information. Two practical examples, evacuation alerts and elevator control, illustrate the ePAD framework. Potential advanced EEW applications, such as multiple-action decisions and the synergy of EEW with structural health monitoring systems, are also discussed.
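The core of such a cost-benefit decision rule can be sketched in a few lines. This is a minimal illustration of the general idea, not the thesis's actual ePAD formulation: the loss model, the 60% mitigation-effectiveness factor, and all numbers below are made-up assumptions.

```python
# Hypothetical sketch of an EEW cost-benefit decision rule: activate a
# mitigation action iff doing so lowers the expected total cost, given a
# discrete probability distribution over the predicted shaking intensity.
# All names and numbers here are illustrative assumptions.

def expected_cost(action_cost, intensities, probs, damage_loss, mitigated):
    """Expected total cost of an action under an uncertain intensity forecast."""
    reduction = 0.6 if mitigated else 0.0  # assumed loss-reduction factor
    exp_damage = sum(p * damage_loss(i) * (1 - reduction)
                     for i, p in zip(intensities, probs))
    return action_cost + exp_damage

def decide(intensities, probs, damage_loss, mitigation_cost):
    """Activate mitigation iff it lowers the expected cost."""
    c_act = expected_cost(mitigation_cost, intensities, probs, damage_loss, True)
    c_no = expected_cost(0.0, intensities, probs, damage_loss, False)
    return "activate" if c_act < c_no else "do nothing"

def damage(i):
    """Toy quadratic damage model on an MMI-like intensity scale."""
    return max(0.0, (i - 4) ** 2) * 10.0  # arbitrary cost units

# Uncertain EEW intensity estimate, expressed as a discrete distribution.
intensities = [4, 5, 6, 7, 8]
probs = [0.1, 0.2, 0.4, 0.2, 0.1]
print(decide(intensities, probs, damage, mitigation_cost=15.0))
```

Extending this with a value-of-information term would compare the expected cost of acting now against the expected cost of waiting for the next, possibly less uncertain, EEW update.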
Abstract:
The centralized paradigm of a single controller and a single plant upon which modern control theory is built is no longer applicable to the modern cyber-physical systems of interest, such as the power grid, software-defined networks, or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange, and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and, perhaps not surprisingly, makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable, with the Witsenhausen counterexample being a famous instance of this phenomenon. Spurred on by this perhaps discouraging result, a concerted 40-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effect of their control actions propagates through the plant, then the resulting distributed optimal control problem admits a convex formulation.
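For sparsity-constrained controllers, quadratic invariance reduces to a simple binary test on the controller and plant sparsity patterns. The sketch below follows the standard test from the literature (a pattern S is quadratically invariant under plant pattern G iff the support of S·G·S is contained in S); the patterns themselves are made up for illustration, not taken from the thesis.

```python
import numpy as np

# Binary quadratic-invariance test for sparsity constraints: the
# controller pattern S (0/1 matrix) is QI under the plant pattern G
# iff K G K stays inside S for every K with pattern S, which reduces
# to checking the support of the boolean product S @ G @ S.

def is_quadratically_invariant(S, G):
    """S, G: 0/1 numpy arrays giving controller and plant sparsity."""
    KGK = (S @ G @ S) > 0                 # support of K G K for K in pattern S
    return bool(np.all(~KGK | (S > 0)))   # that support must lie inside S

# Nested (lower-triangular) information structures are QI under
# lower-triangular plants:
S = np.tril(np.ones((3, 3), dtype=int))
G = np.tril(np.ones((3, 3), dtype=int))
print(is_quadratically_invariant(S, G))

# A fully decentralized (diagonal) controller over a dense plant is not:
S2 = np.eye(3, dtype=int)
G2 = np.ones((3, 3), dtype=int)
print(is_quadratically_invariant(S2, G2))
```

The second case fails because each sub-controller's action propagates through the dense plant to states that other sub-controllers observe, faster than any information can be exchanged under the diagonal pattern.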
The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to renewed enthusiasm in the controller synthesis community, resulting in a rich set of results over the past decade. The contributions of this thesis are part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control, and fall under three broad categories: controller synthesis, architecture design, and system identification.
We begin by providing two novel controller synthesis algorithms. The first solves the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two-player LQR state-feedback problem with varying delays. Accommodating varying delays is an important first step toward combining distributed optimal control theory with the area of Networked Control Systems, which considers lossy channels in the feedback loop.
Our next set of results concerns controller architecture design. When designing controllers for large-scale systems, architectural aspects of the controller, such as the placement of actuators and sensors and the communication links between them, can no longer be taken as given -- indeed, designing this architecture is now as important as designing the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, a unifying and computationally tractable approach, based on the model matching framework and atomic norm regularization, for the simultaneous co-design of a structured optimal controller and the architecture needed to implement it.
Our final result is a contribution to distributed system identification. Traditional system identification techniques such as subspace identification are not computationally scalable, and destroy rather than leverage a priori information about the system's interconnection structure. We argue that, in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end, we propose a promising heuristic for identifying the dynamics of a subsystem that is still connected to a large system.
We exploit the fact that the transfer function of the local dynamics is low-order, but full-rank, while the transfer function of the global dynamics is high-order, but low-rank, to formulate this separation task as a nuclear norm minimization problem. Finally, we conclude with a brief discussion of future research directions, with a particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control and optimization in layered architectures.
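A standard building block for this kind of low-rank separation is singular value thresholding, the proximal operator of the nuclear norm. The sketch below shows only that generic primitive on a synthetic matrix; it is not the thesis's identification algorithm, and the dimensions, rank, and noise level are made-up assumptions.

```python
import numpy as np

# Singular value thresholding (SVT), the proximal operator of the
# nuclear norm: solves argmin_X 0.5*||X - M||_F^2 + tau*||X||_*.
# Soft-thresholding the singular values promotes low rank, which is
# why nuclear-norm minimization recovers low-rank structure.

def svt(M, tau):
    """Shrink the singular values of M by tau, zeroing those below it."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)   # soft-threshold singular values
    return U @ np.diag(s_shrunk) @ Vt

# A rank-2 matrix plus small noise: thresholding yields a low-rank estimate.
rng = np.random.default_rng(0)
low_rank = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))
noisy = low_rank + 0.01 * rng.standard_normal((20, 20))
est = svt(noisy, tau=0.5)
print(np.linalg.matrix_rank(est, tol=1e-6))
```

In the separation problem described above, an operator like this would act on the high-order, low-rank global component while a separate penalty keeps the low-order local transfer function full-rank.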
Abstract:
Adaptive optics (AO) corrects distortions created by atmospheric turbulence and delivers diffraction-limited images on ground-based telescopes. The vastly improved spatial resolution and sensitivity have been used to study everything from the magnetic fields of sunspots to the internal dynamics of high-redshift galaxies. This thesis, on AO science from small and large telescopes, is divided into two parts: Robo-AO and magnetar kinematics.
In the first part, I discuss the construction and performance of the world's first fully autonomous visible-light AO system, Robo-AO, at the Palomar 60-inch telescope. Robo-AO operates extremely efficiently, with an overhead of < 50 s, typically observing about 22 targets every hour. We have performed large AO programs, observing a total of over 7,500 targets since May 2012. In the visible band, the images have a Strehl ratio of about 10% and achieve a contrast of up to 6 magnitudes at a separation of 1′′. The full width at half maximum achieved is 110–130 milli-arcseconds. I describe how Robo-AO is used to constrain the evolutionary models of low-mass pre-main-sequence stars by measuring resolved spectral energy distributions of stellar multiples in the visible band, more than doubling the current sample. I conclude this part with a discussion of possible future improvements to the Robo-AO system.
In the second part, I describe a study of magnetar kinematics using high-resolution near-infrared (NIR) AO imaging from the 10-meter Keck II telescope. Measuring the proper motions of five magnetars with a precision of up to 0.7 milli-arcseconds/yr, we have more than tripled the previously known sample of magnetar proper motions and shown that magnetar kinematics are equivalent to those of radio pulsars. We conclusively showed that SGR 1900+14 and SGR 1806-20 were ejected from the stellar clusters with which they were traditionally associated. The inferred kinematic ages of these two magnetars are 6 ± 1.8 kyr and 650 ± 300 yr, respectively. These ages are a factor of three to four greater than their respective characteristic ages. The calculated braking index is close to unity, as compared to three for the vacuum dipole model and the 2.5–2.8 measured for young pulsars. I conclude this section by describing a search for NIR counterparts of new magnetars and the promise of future polarimetric investigation of magnetars' NIR emission mechanism.
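The link between an age discrepancy and a braking index can be made explicit with textbook spin-down algebra. This is a rough illustrative calculation, assuming a birth spin period much shorter than the current one, and is not the thesis's detailed analysis.

```python
# Standard pulsar spin-down relations, assuming birth period << current
# period (an idealization, not a result from the thesis):
#
#   characteristic age: tau_c = P / (2 * Pdot)   (defined assuming n = 3)
#   true age:           t     = P / ((n - 1) * Pdot) = 2 * tau_c / (n - 1)
#   =>                  n     = 1 + 2 * tau_c / t

def braking_index(age_ratio):
    """Braking index implied by a true-to-characteristic age ratio t/tau_c."""
    return 1.0 + 2.0 / age_ratio

# Kinematic ages three to four times the characteristic age imply braking
# indices well below the vacuum-dipole value of 3:
for ratio in (3.0, 4.0):
    print(round(braking_index(ratio), 2))
```

Under these idealized assumptions, age ratios of three to four give braking indices of roughly 1.5–1.7, qualitatively consistent with the "close to unity" result quoted above.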
Abstract:
A study of human eye movements was made in order to elucidate the nature of the control mechanism in the binocular oculomotor system.
We first examined spontaneous eye movements during monocular and binocular fixation in order to determine the corrective roles of flicks and drifts. It was found that both types of motion correct fixational errors, although flicks are somewhat more active in this respect. Vergence error is a stimulus for correction by drifts but not by flicks, while binocular vertical discrepancy of the visual axes does not trigger corrective movements.
Second, we investigated the non-linearities of the oculomotor system by examining eye movement responses to point targets moving in two dimensions in a subjectively unpredictable manner. Such motions consisted of band-limited Gaussian random motion and of the sum of several non-integrally related sinusoids. We found that there is no direct relationship between the phase and the gain of the oculomotor system. The delay of eye movements relative to target motion is determined by the necessity of generating a minimum afferent (input) signal at the retina in order to trigger corrective eye movements. The amplitude of the response is a function of the biological constraints of the efferent (output) portion of the system: for target motions of narrow bandwidth, the system responds preferentially to the highest frequency; for large-bandwidth motions, the system distributes the available energy equally over all frequencies.
Third, the power spectra of spontaneous eye movements were compared with the spectra of tracking eye movements for Gaussian random target motions of varying bandwidths. It was found that there is essentially no difference among the various curves. The oculomotor system tracks a target not by increasing the mean rate of impulses along the motoneurons of the extra-ocular muscles, but rather by coordinating those spontaneous impulses which propagate along the motoneurons during stationary fixation. Thus, the system operates at full output at all times.
Fourth, we examined the relative magnitude and phase of motions of the left and the right visual axes during monocular and binocular viewing. We found that the two visual axes move vertically in perfect synchronization at all frequencies for any viewing condition. This is not true for horizontal motions: the amount of vergence noise is highest for stationary fixation and diminishes for tracking tasks as the bandwidth of the target motion increases. Furthermore, movements of the occluded eye are larger than those of the seeing eye in monocular viewing. This effect is more pronounced for horizontal motions, for stationary fixation, and for lower frequencies.
Finally, we have related our findings to previously known facts about the pertinent nerve pathways in order to postulate a model for the neurological binocular control of the visual axes.
Abstract:
Cancer chemotherapy has advanced from highly toxic drugs to more targeted treatments over the last 70 years. Chapter 1 opens with an introduction to targeted therapy for cancer and discusses the benefits of using a nanoparticle to deliver therapeutics. We then turn to siRNA in particular, and why it would be advantageous as a therapy. siRNA delivery faces specific challenges, such as nuclease degradation, rapid clearance from circulation, the need to enter cells, and reaching the cytosol. We propose the development of a nanoparticle delivery system that tackles these challenges so that siRNA can be effective.
Chapter 2 of this thesis discusses the synthesis and analysis of a cationic mucic acid polymer (cMAP) that condenses siRNA to form a nanoparticle. Various methods of adding polyethylene glycol (PEG) to stabilize the nanoparticle in physiologic solutions are explored, including binding a boronic acid to the diols on mucic acid, forming a copolymer of cMAP with PEG, and creating a triblock with mPEG on both ends of cMAP. The goal of these pegylation strategies was to increase the circulation time of the siRNA nanoparticle in the bloodstream, allowing more of the nanoparticle to reach tumor tissue via the enhanced permeation and retention effect. We found that the triblock mPEG-cMAP-PEGm polymer condensed siRNA into very stable 30–40 nm particles that circulated the longest: almost 10% of the formulation remained in the bloodstream of mice 1 h after intravenous injection.
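As a back-of-the-envelope illustration of what "~10% remaining at 1 h" implies, one can convert a single (fraction, time) point into a circulation half-life under the crude assumption of single-exponential clearance. Real nanoparticle pharmacokinetics is usually multi-phasic, so this is only a rough order-of-magnitude sketch, not a result from the thesis.

```python
import math

# Half-life implied by one measured (fraction remaining, elapsed time)
# point, assuming simple first-order clearance: f(t) = exp(-k * t).
# This single-exponential model is an illustrative assumption.

def half_life_h(fraction_remaining, elapsed_h):
    """Half-life in hours from exp(-k t) decay through one data point."""
    k = -math.log(fraction_remaining) / elapsed_h  # clearance rate, 1/h
    return math.log(2) / k

t_half = half_life_h(0.10, 1.0)
print(round(t_half * 60), "minutes")
```

Under this toy model, 10% remaining at 1 h corresponds to a half-life of roughly 18 minutes, which conveys why "almost 10% at 1 h" counts as long circulation for a condensed siRNA particle.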
Chapter 3 explores the use of an antibody as a targeting agent for nanoparticles. Some antibodies of the IgG1 subtype can recruit natural killer cells that effect antibody-dependent cellular cytotoxicity (ADCC), killing the targeted cell to which the antibody is bound. There is evidence that the ADCC effect is retained in antibody-drug conjugates, so we asked whether it is also preserved when the antibody is bound to a nanoparticle, a much larger and more complex entity. We used two antibodies against the epidermal growth factor receptor with similar binding and pharmacokinetics, cetuximab and panitumumab, which differ in that cetuximab is an IgG1 and panitumumab is an IgG2 (which does not cause ADCC). Although a natural killer cell culture model showed that gold nanoparticles with a full-antibody targeting agent can elicit target cell lysis, we found that this effect was not preserved in vivo. Whether this is because the antibody is not accessible to immune cells or because natural killer cells are inactivated in a tumor xenograft remains unknown. Using a full antibody may still have value if immune functions that are intact in vitro are altered in a complex in vivo environment, so the value of a full antibody as a targeting agent, versus an antibody fragment or a protein such as transferrin, remains open to further exploration.
In Chapter 4, nanoparticle targeting and endosomal escape are further discussed with respect to the cMAP nanoparticle system. A diboronic acid entity, which binds cMAP an order of magnitude more strongly than boronic acid does, owing to the vicinal diols in mucic acid, was synthesized, attached to 5 kDa or 10 kDa PEG, and conjugated to either transferrin or cetuximab. A histidine was incorporated into the triblock polymer between cMAP and the PEG blocks to allow for siRNA endosomal escape. Nanoparticle size remained 30–40 nm, with a slightly negative zeta potential (ca. −3 mV), both with the histidine-containing triblock polymer and when targeting agents were added. Greater mRNA knockdown was seen with the endosomal escape mechanism than without, and the nanoparticle formulations were able to knock down the targeted mRNA in vitro. In vivo, mixed effects suggesting function were observed.
Chapter 5 summarizes the project and provides an outlook on siRNA delivery as well as targeted combination therapies for the future of personalized medicine in cancer treatment.