8 results for robot sensing systems

in CaltechTHESIS


Relevance:

80.00%

Publisher:

Abstract:

This thesis presents theories, analyses, and algorithms for detecting and estimating parameters of geospatial events with today's large, noisy sensor networks. A geospatial event is initiated by a significant change in the state of points in a region in a 3-D space over an interval of time. After the event is initiated, it may change the state of points over larger regions and longer periods of time. Networked sensing is a typical approach for geospatial event detection. In contrast to traditional sensor networks comprised of a small number of high-quality (and expensive) sensors, trends in personal computing devices and consumer electronics have made it possible to build large, dense networks at a low cost. The changes in sensor capability, network composition, and system constraints call for new models and algorithms suited to the opportunities and challenges of the new generation of sensor networks. This thesis offers a single unifying model and a Bayesian framework for analyzing different types of geospatial events in such noisy sensor networks. It presents algorithms and theories for estimating the speed and accuracy of detecting geospatial events as a function of parameters from both the underlying geospatial system and the sensor network. Furthermore, the thesis addresses network scalability issues by presenting rigorous scalable algorithms for data aggregation for detection. These studies provide insights into the design of networked sensing systems for detecting geospatial events. In addition to providing an overarching framework, this thesis presents theories and experimental results for two very different geospatial problems: detecting earthquakes and hazardous radiation. The general framework is applied to these specific problems, and predictions based on the theories are validated against measurements of systems in the laboratory and in the field.
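The Bayesian aggregation idea can be sketched with a toy model. Assuming each sensor independently fires with probability `p_detect` during an event and `p_false` otherwise (the names and numbers here are illustrative, not taken from the thesis), the network's log-odds simply accumulate one term per reading:

```python
import math

def event_posterior(readings, p_detect=0.6, p_false=0.4, prior=0.01):
    """Posterior probability that a geospatial event is underway, given
    independent binary readings (1 = sensor fired) from cheap, noisy sensors.
    Toy model: parameter values are illustrative, not from the thesis."""
    log_odds = math.log(prior / (1 - prior))
    for fired in readings:
        if fired:
            # weak evidence for the event
            log_odds += math.log(p_detect / p_false)
        else:
            # weak evidence against the event
            log_odds += math.log((1 - p_detect) / (1 - p_false))
    return 1.0 / (1.0 + math.exp(-log_odds))
```

Even though each sensor is barely better than chance here (60% vs. 40%), thirty firing sensors push the posterior above 0.99 — one sense in which a large, dense network of low-quality sensors can substitute for a few high-quality ones.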

Relevance:

80.00%

Publisher:

Abstract:

This thesis has two basic themes: the investigation of new experiments which can be used to test relativistic gravity, and the investigation of new technologies and new experimental techniques which can be applied to make gravitational wave astronomy a reality.

Advancing technology will soon make possible a new class of gravitation experiments: pure laboratory experiments with laboratory sources of non-Newtonian gravity and laboratory detectors. The key advance in technology is the development of resonant sensing systems with very low levels of dissipation. Chapter 1 considers three such systems (torque balances, dielectric monocrystals, and superconducting microwave resonators), and it proposes eight laboratory experiments which use these systems as detectors. For each experiment it describes the dominant sources of noise and the technology required.

The coupled electro-mechanical system consisting of a microwave cavity and its walls can serve as a gravitational radiation detector. A gravitational wave interacts with the walls, and the resulting motion induces transitions from a highly excited cavity mode to a nearly unexcited mode. Chapter 2 describes briefly a formalism for analyzing such a detector, and it proposes a particular design.

The monitoring of a quantum mechanical harmonic oscillator on which a classical force acts is important in a variety of high-precision experiments, such as the attempt to detect gravitational radiation. Chapter 3 reviews the standard techniques for monitoring the oscillator, and it introduces a new technique which, in principle, can determine the details of the force with arbitrary accuracy, despite the quantum properties of the oscillator.

The standard method for monitoring the oscillator is the "amplitude-and-phase" method (position or momentum transducer with output fed through a linear amplifier). The accuracy obtainable by this method is limited by the uncertainty principle. To do better requires a measurement of the type which Braginsky has called "quantum nondemolition." A well-known quantum nondemolition technique is "quantum counting," which can detect an arbitrarily weak force, but which cannot provide good accuracy in determining its precise time-dependence. Chapter 3 considers extensively a new type of quantum nondemolition measurement - a "back-action-evading" measurement of the real part X1 (or the imaginary part X2) of the oscillator's complex amplitude. In principle X1 can be measured arbitrarily quickly and arbitrarily accurately, and a sequence of such measurements can lead to an arbitrarily accurate monitoring of the classical force.

Chapter 3 describes explicit gedanken experiments which demonstrate that X1 can be measured arbitrarily quickly and arbitrarily accurately, it considers approximate back-action-evading measurements, and it develops a theory of quantum nondemolition measurement for arbitrary quantum mechanical systems.
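In the usual quadrature formalism (a textbook convention, stated here up to sign choices rather than reproduced from the thesis), the two components of the complex amplitude are

```latex
\hat{X}_1 = \hat{x}\cos\omega t + \frac{\hat{p}}{m\omega}\sin\omega t,
\qquad
\hat{X}_2 = \hat{x}\sin\omega t - \frac{\hat{p}}{m\omega}\cos\omega t .
```

Both are constants of the free motion, but they do not commute, so $\Delta X_1\,\Delta X_2 \ge \hbar/(2m\omega)$. A back-action-evading measurement couples the meter to X1 alone, dumping all measurement back-action into the unmonitored X2 — which is why X1 can, in principle, be tracked arbitrarily quickly and accurately.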

In Rosen's "bimetric" theory of gravity the (local) speed of gravitational radiation vg is determined by the combined effects of cosmological boundary values and nearby concentrations of matter. It is possible for vg to be less than the speed of light. Chapter 4 shows that emission of gravitational radiation prevents particles of nonzero rest mass from exceeding the speed of gravitational radiation. Observations of relativistic particles place limits on vg and the cosmological boundary values today, and observations of synchrotron radiation from compact radio sources place limits on the cosmological boundary values in the past.

Relevance:

30.00%

Publisher:

Abstract:

Cyber-physical systems integrate computation, networking, and physical processes. Substantial research challenges exist in the design and verification of such large-scale, distributed sensing, actuation, and control systems. Rapidly improving technology and recent advances in control theory, networked systems, and computer science give us the opportunity to drastically improve our approach to integrated flow of information and cooperative behavior. Current systems rely on text-based specifications and manual design. Using new technology advances, we can create easier, more efficient, and cheaper ways of developing these control systems. This thesis will focus on design considerations for system topologies, ways to formally and automatically specify requirements, and methods to synthesize reactive control protocols, all within the context of an aircraft electric power system as a representative application area.

This thesis consists of three complementary parts: synthesis, specification, and design. The first section focuses on the synthesis of central and distributed reactive controllers for an aircraft electric power system. This approach incorporates methodologies from computer science and control. The resulting controllers are correct by construction with respect to system requirements, which are formulated using the specification language of linear temporal logic (LTL). The second section addresses how to formally specify requirements and introduces a domain-specific language for electric power systems. A software tool automatically converts high-level requirements into LTL and synthesizes a controller.
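As a cartoon of what "correct by construction" means for a safety requirement, consider a two-generator, one-bus toy model (the state names and the spec are invented for illustration; the thesis's specification language and synthesis tools are far richer). The LTL safety spec G !(gen1 & gen2) — never parallel the two generators on the bus — can be enforced by filtering out any contactor action whose successor state violates the invariant:

```python
# Toy abstraction of an aircraft electric power system: two generator
# contactors feeding one bus.  State = (gen1_closed, gen2_closed).
# Invented example for illustration only.
ACTIONS = ["open1", "close1", "open2", "close2", "noop"]

def step(state, action):
    """Deterministic successor state under a contactor command."""
    g1, g2 = state
    if action == "open1":  g1 = False
    if action == "close1": g1 = True
    if action == "open2":  g2 = False
    if action == "close2": g2 = True
    return (g1, g2)

def safe(state):
    # Safety spec G !(gen1 & gen2): never parallel both generators.
    return not (state[0] and state[1])

def allowed(state):
    """Actions whose successor still satisfies the invariant: a one-step
    'correct by construction' filter, the simplest shadow of reactive
    synthesis for a pure invariant specification."""
    return [a for a in ACTIONS if safe(step(state, a))]
```

From state (True, False) the controller may open either contactor but may not issue close2. For a pure invariant this greedy filter already yields a maximally permissive safe controller; liveness parts of an LTL spec additionally require solving a two-player game against the environment.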

The final sections focus on design space exploration. A design methodology is proposed that uses mixed-integer linear programming to obtain candidate topologies, which are then used to synthesize controllers. The discrete-time control logic is then verified in real time by two methods: hardware and simulation. Finally, the problem of partial observability and dynamic state estimation is explored. Given a fixed placement of sensors on an electric power system, measurements from these sensors can be used in conjunction with control logic to infer the state of the system.

Relevance:

30.00%

Publisher:

Abstract:

With the size of transistors approaching the sub-nanometer scale and Si-based photonics pinned at the micrometer scale due to the diffraction limit of light, we are unable to easily integrate the high transfer speeds of this comparably bulky technology with the increasingly smaller architecture of state-of-the-art processors. However, we find that we can bridge the gap between these two technologies by directly coupling electrons to photons through the use of dispersive metals in optics. Doing so allows us to access the surface electromagnetic wave excitations that arise at a metal/dielectric interface, a feature which both confines and enhances light in subwavelength dimensions - two promising characteristics for the development of integrated chip technology. This platform is known as plasmonics, and it allows us to design a broad range of complex metal/dielectric systems, all having different nanophotonic responses, but all originating from our ability to engineer the system surface plasmon resonances and interactions. In this thesis, we demonstrate how plasmonics can be used to develop coupled metal-dielectric systems to function as tunable plasmonic hole array color filters for CMOS image sensing, visible metamaterials composed of coupled negative-index plasmonic coaxial waveguides, and programmable plasmonic waveguide network systems to serve as color routers and logic devices at telecommunication wavelengths.
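The confinement-and-enhancement claim follows from the textbook surface-plasmon-polariton dispersion relation at a flat metal/dielectric interface (a standard result, not taken from the thesis):

```latex
k_{\mathrm{SPP}} \;=\; \frac{\omega}{c}\,
\sqrt{\frac{\varepsilon_m \varepsilon_d}{\varepsilon_m + \varepsilon_d}}
```

With $\operatorname{Re}\varepsilon_m < -\varepsilon_d$ (a dispersive metal below its plasma frequency), the square root exceeds $\sqrt{\varepsilon_d}$, so $k_{\mathrm{SPP}}$ is larger than the wavevector of light in the bulk dielectric: the mode is bound to the interface with an effective wavelength shorter than the diffraction limit allows in free propagation.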

Relevance:

30.00%

Publisher:

Abstract:

A central objective in signal processing is to infer meaningful information from a set of measurements or data. While most signal models have an overdetermined structure (the number of unknowns is less than the number of equations), traditionally very few statistical estimation problems have considered a data model which is underdetermined (the number of unknowns is more than the number of equations). However, in recent times, an explosion of theoretical and computational methods has been developed primarily to study underdetermined systems by imposing sparsity on the unknown variables. This is motivated by the observation that in spite of the huge volume of data that arises in sensor networks, genomics, imaging, particle physics, web search, etc., its information content is often much smaller than the number of raw measurements. This has given rise to the possibility of reducing the number of measurements by downsampling the data, which automatically gives rise to underdetermined systems.

In this thesis, we provide new directions for estimation in an underdetermined system, both for a class of parameter estimation problems and also for the problem of sparse recovery in compressive sensing. There are two main contributions of the thesis: design of new sampling and statistical estimation algorithms for array processing, and development of improved guarantees for sparse reconstruction by introducing a statistical framework to the recovery problem.

We consider underdetermined observation models in array processing where the number of unknown sources simultaneously received by the array can be considerably larger than the number of physical sensors. We study new sparse spatial sampling schemes (array geometries) as well as propose new recovery algorithms that can exploit priors on the unknown signals and unambiguously identify all the sources. The proposed sampling structure is generic enough to be extended to multiple dimensions as well as to exploit different kinds of priors in the model such as correlation, higher order moments, etc.

Recognizing the role of correlation priors and suitable sampling schemes for underdetermined estimation in array processing, we introduce a correlation-aware framework for recovering sparse support in compressive sensing. We show that it is possible to strictly increase the size of the recoverable sparse support using this framework provided the measurement matrix is suitably designed. The proposed nested and coprime arrays are shown to be appropriate candidates in this regard. We also provide new guarantees for convex and greedy formulations of the support recovery problem and demonstrate that it is possible to strictly improve upon existing guarantees.
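The coprime-array idea can be illustrated in a few lines (this is the textbook construction; the exact geometries and recovery algorithms in the thesis are more general). With M and N coprime, roughly M + 2N physical sensors generate on the order of MN distinct correlation lags, which is what lets a correlation-aware method identify more sources than sensors:

```python
def coprime_array(M, N):
    """Sensor positions (in half-wavelength units) of a coprime array:
    one subarray at multiples of M, one at multiples of N, with M, N
    coprime.  Illustrative textbook construction."""
    return sorted(set(range(0, M * N, M)) | set(range(0, 2 * M * N, N)))

def difference_coarray(positions):
    """Distinct lags n1 - n2: the virtual array 'seen' by second-order
    (correlation-aware) processing."""
    return sorted({a - b for a in positions for b in positions})

physical = coprime_array(3, 5)          # 10 physical sensor positions
virtual = difference_coarray(physical)  # many more virtual positions
```

Here 10 physical sensors yield 43 distinct lags, including every lag from -17 to 17 contiguously — the extra virtual apertures are what underdetermined source identification exploits.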

This new paradigm of underdetermined estimation that explicitly establishes the fundamental interplay between sampling, statistical priors and the underlying sparsity, leads to exciting future research directions in a variety of application areas, and also gives rise to new questions that can lead to stand-alone theoretical results in their own right.

Relevance:

30.00%

Publisher:

Abstract:

Modern robots are increasingly expected to function in uncertain and dynamically challenging environments, often in proximity with humans. In addition, wide-scale adoption of robots requires on-the-fly adaptability of software for diverse applications. These requirements strongly suggest the need to adopt formal representations of high-level goals and safety specifications, especially as temporal logic formulas. This approach allows for the use of formal verification techniques for controller synthesis that can give guarantees for safety and performance. Robots operating in unstructured environments also face limited sensing capability. Correctly inferring a robot's progress toward a high-level goal can be challenging.

This thesis develops new algorithms for synthesizing discrete controllers in partially known environments under specifications represented as linear temporal logic (LTL) formulas. It is inspired by recent developments in finite abstraction techniques for hybrid systems and motion planning problems. The robot and its environment are assumed to have a finite abstraction as a Partially Observable Markov Decision Process (POMDP), which is a powerful model class capable of representing a wide variety of problems. However, synthesizing controllers that satisfy LTL goals over POMDPs is a challenging problem which has received only limited attention.

This thesis proposes tractable, approximate algorithms for the control synthesis problem using Finite State Controllers (FSCs). The use of FSCs to control finite POMDPs allows the closed system to be analyzed as a finite global Markov chain. The thesis explicitly shows how transient and steady-state behavior of the global Markov chains can be related to two different criteria with respect to satisfaction of LTL formulas. First, the maximization of the probability of LTL satisfaction is related to an optimization problem over a parametrization of the FSC. Analytic computation of gradients is derived, which allows the use of first-order optimization techniques.
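The composition of a POMDP with an FSC into a global Markov chain can be sketched as follows. For simplicity the controller here is deterministic (the thesis parametrizes stochastic FSCs) and the model is a made-up two-state example:

```python
import itertools

def global_chain(T, Obs, act, nxt, S, O, G):
    """Closed-loop Markov chain over (state, memory-node) pairs obtained
    by composing a POMDP with a deterministic finite-state controller.
    T[s][a][s2]: transition probs, Obs[s][o]: observation probs,
    act(g, o): action chosen at node g on observation o,
    nxt(g, o): successor memory node.  Illustrative sketch only."""
    idx = {sg: i for i, sg in enumerate(itertools.product(S, G))}
    n = len(idx)
    P = [[0.0] * n for _ in range(n)]
    for (s, g), i in idx.items():
        for o in O:                      # marginalize over observations
            a, g2 = act(g, o), nxt(g, o)
            for s2 in S:
                P[i][idx[(s2, g2)]] += Obs[s][o] * T[s][a][s2]
    return P

# Tiny made-up instance: 2 states, 2 actions, 2 observations, 2 FSC nodes.
T = [[[0.9, 0.1], [0.5, 0.5]],
     [[0.2, 0.8], [0.5, 0.5]]]
Obs = [[0.8, 0.2], [0.3, 0.7]]
P = global_chain(T, Obs, act=lambda g, o: o, nxt=lambda g, o: o,
                 S=[0, 1], O=[0, 1], G=[0, 1])
```

The transient and steady-state behavior of P is then what the two LTL-satisfaction criteria described above are phrased over.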

The second criterion encourages rapid and frequent visits to a restricted set of states over infinite executions. It is formulated as a constrained optimization problem with a discounted long term reward objective by the novel utilization of a fundamental equation for Markov chains - the Poisson equation. A new constrained policy iteration technique is proposed to solve the resulting dynamic program, which also provides a way to escape local maxima.

The algorithms proposed in the thesis are applied to the task planning and execution challenges faced during the DARPA Autonomous Robotic Manipulation - Software challenge.

Relevance:

30.00%

Publisher:

Abstract:

Light has long been used for the precise measurement of moving bodies, but the burgeoning field of optomechanics is concerned with the interaction of light and matter in a regime where the typically weak radiation pressure force of light is able to push back on the moving object. This field began with the realization in the late 1960s that the momentum imparted by a recoiling photon on a mirror would place fundamental limits on the smallest measurable displacement of that mirror. This coupling between the frequency of light and the motion of a mechanical object does much more than simply add noise, however. It has been used to cool objects to their quantum ground state, demonstrate electromagnetically induced transparency, and modify the damping and spring constant of the resonator. Amazingly, these radiation pressure effects have now been demonstrated in systems spanning 18 orders of magnitude in mass (kg to fg).

In this work we will focus on three diverse experiments in three different optomechanical devices which span the fields of inertial sensors, closed-loop feedback, and nonlinear dynamics. The mechanical elements presented cover 6 orders of magnitude in mass (ng to fg), but they all employ nano-scale photonic crystals to trap light and resonantly enhance the light-matter interaction. In the first experiment we take advantage of the sub-femtometer displacement resolution of our photonic crystals to demonstrate a sensitive chip-scale optical accelerometer with a kHz-frequency mechanical resonator. This sensor has a noise density of approximately 10 micro-g/rt-Hz over a usable bandwidth of approximately 20 kHz and we demonstrate at least 50 dB of linear dynamic sensor range. We also discuss methods to further improve the performance of this device by a factor of 10.
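The quoted numbers imply a simple back-of-envelope resolution figure (this arithmetic is an illustration assuming the noise is white across the band, not a calculation from the thesis):

```python
import math

noise_density_g = 10e-6   # ~10 micro-g / sqrt(Hz), quoted above
bandwidth_hz = 20e3       # ~20 kHz usable bandwidth, quoted above

# RMS acceleration noise integrated over the full usable band:
rms_noise_g = noise_density_g * math.sqrt(bandwidth_hz)   # ~1.4 milli-g
```

So wideband transients are resolvable at roughly the milli-g level, while averaging into a 1 Hz bin recovers the micro-g-scale noise density.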

In the second experiment, we used a closed-loop measurement and feedback system to damp and cool a room-temperature MHz-frequency mechanical oscillator from a phonon occupation of 6.5 million down to just 66. At the time of the experiment, this represented a world-record result for the laser cooling of a macroscopic mechanical element without the aid of cryogenic pre-cooling. Furthermore, this closed-loop damping yields a high-resolution force sensor with a practical bandwidth of 200 kHz, and the method has applications to other optomechanical sensors.

The final experiment contains results from a GHz-frequency mechanical resonator in a regime where the nonlinearity of the radiation-pressure interaction dominates the system dynamics. In this device we show self-oscillations of the mechanical element that are driven by multi-photon-phonon scattering. Control of the system allows us to initialize the mechanical oscillator into a stable high-amplitude attractor which would otherwise be inaccessible. To provide context, we begin this work by first presenting an intuitive overview of optomechanical systems and then providing an extended discussion of the principles underlying the design and fabrication of our optomechanical devices.

Relevance:

30.00%

Publisher:

Abstract:

Controlling iron distribution is important for all organisms, and is key in bacterial pathogenesis. It has long been understood that cystic fibrosis (CF) patient sputum contains elevated iron concentrations. However, anaerobic bacteria have been isolated from CF sputum, and hypoxic zones in sputum have been measured. Because ferrous iron [Fe(II)] is stable in reducing, acidic conditions, it could exist in the CF lung. I show that a two-component system, BqsRS, specifically responds to Fe(II) in the CF pathogen Pseudomonas aeruginosa. Concurrently, a clinical study found that Fe(II) is present in CF sputum at all stages of lung function decline. Fe(II), not Fe(III), correlates with patients in the most severe disease state. Furthermore, transcripts of the newly identified BqsRS were detected in sputum.

Two-component systems are the primary means by which bacteria sense and respond to their extracellular environment. A typical two-component system contains a sensor histidine kinase, which upon activation phosphorylates a response regulator that then acts as a transcription factor to elicit a cellular response to stimuli. To explore the mechanism of BqsRS, I describe the Fe(II)-sensing RExxE motif in the sensor BqsS and determine the consensus DNA sequence BqsR binds. With the BqsR binding sequence, I identify novel regulon members through bioinformatic and molecular biology techniques. From the predicted function of new BqsR regulon members, I find that Fe(II) elicits a response that globally protects the cells against cationic stressors, including clinically relevant antibiotics.

Subsequently, I use BqsR as a case study to determine whether promoter outputs can be accurately predicted based only on a deep understanding of a transcriptional activator's operator, or whether a broader regulatory context is required for accurate predictions at all genomic loci. This work highlights the importance of Fe(II) as a (micro)environmental factor, even in conditions typically thought of as aerobic. Since the presence of Fe(II) can alter P. aeruginosa's antibiotic susceptibility, combining the current strategy of targeting Fe(III) with a new approach targeting Fe(II) may help eradicate infections in the CF lung in the future.