13 results for Filters and filtration

in CaltechTHESIS


Relevance:

90.00%

Abstract:

The primary focus of this thesis is on the interplay of descriptive set theory and the ergodic theory of group actions. This incorporates the study of turbulence and Borel reducibility on the one hand, and the theory of orbit equivalence and weak equivalence on the other. Chapter 2 is joint work with Clinton Conley and Alexander Kechris; we study measurable graph combinatorial invariants of group actions and employ the ultraproduct construction as a way of constructing various measure preserving actions with desirable properties. Chapter 3 is joint work with Lewis Bowen; we study the property MD of residually finite groups, and we prove a conjecture of Kechris by showing that under general hypotheses property MD is inherited by a group from one of its co-amenable subgroups. Chapter 4 is a study of weak equivalence. One of the main results answers a question of Abért and Elek by showing that within any free weak equivalence class the isomorphism relation does not admit classification by countable structures. The proof relies on affirming a conjecture of Ioana by showing that the product of a free action with a Bernoulli shift is weakly equivalent to the original action. Chapter 5 studies the relationship between mixing and freeness properties of measure preserving actions. Chapter 6 studies how approximation properties of ergodic actions and unitary representations are reflected group theoretically and also operator algebraically via a group's reduced C*-algebra. Chapter 7 is an appendix which includes various results on mixing via filters and on Gaussian actions.

Relevance:

90.00%

Abstract:

The Laser Interferometer Gravitational-Wave Observatory (LIGO) consists of two complex large-scale laser interferometers designed for direct detection of gravitational waves from distant astrophysical sources in the frequency range 10 Hz - 5 kHz. Direct detection of space-time ripples will support Einstein's general theory of relativity and provide invaluable information and new insight into the physics of the Universe.

The initial phase of LIGO started in 2002, and since then data were collected during six science runs. Instrument sensitivity improved from run to run due to the efforts of the commissioning team. Initial LIGO reached its design sensitivity during the last science run, which ended in October 2010.

In parallel with commissioning and data analysis on the initial detector, the LIGO group worked on research and development of the next generation of detectors. A major instrument upgrade from initial to Advanced LIGO started in 2010 and lasted until 2014.

This thesis describes the results of commissioning work done at the LIGO Livingston site from 2013 until 2015, in parallel with and after the installation of the instrument. It also discusses new techniques and tools developed at the 40m prototype, including adaptive filtering, estimation of quantization noise in digital filters, and the design of isolation kits for ground seismometers.

The first part of this thesis is devoted to the description of methods for bringing the interferometer to the linear regime, where collection of data becomes possible. The states of the longitudinal and angular controls of the interferometer degrees of freedom during the lock acquisition process and in the low-noise configuration are discussed in detail.

Once the interferometer is locked and transitioned to the low-noise regime, the instrument produces astrophysical data that must be calibrated to units of meters or strain. The second part of this thesis describes the online calibration technique set up at both observatories to monitor the quality of the collected data in real time. A sensitivity analysis was done to understand and eliminate noise sources of the instrument.

Coupling of noise sources to the gravitational wave channel can be reduced if robust feedforward and optimal feedback control loops are implemented. The last part of this thesis describes static and adaptive feedforward noise cancellation techniques applied to the Advanced LIGO interferometers and tested at the 40m prototype. Applications of optimal time-domain feedback control techniques and estimators to aLIGO control loops are also discussed.
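Adaptive feedforward cancellation of the kind mentioned above is commonly built on an LMS-style filter. The sketch below is a generic illustration with synthetic signals (the witness channel, coupling path, and parameters are all invented here, not the aLIGO implementation): it identifies an unknown linear coupling from a witness sensor and subtracts it from the target channel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a witness sensor (e.g. a seismometer) measures ground
# motion that couples into the target channel through an unknown 8-tap FIR
# path. LMS adapts a matching FIR filter to cancel the coupling.
n_taps, n_samples, mu = 8, 20000, 0.01
true_path = rng.normal(size=n_taps)                   # unknown coupling
witness = rng.normal(size=n_samples)                  # witness time series
target = np.convolve(witness, true_path)[:n_samples]  # coupled noise only

w = np.zeros(n_taps)                                  # adaptive weights
residual = np.zeros(n_samples)
for n in range(n_taps, n_samples):
    x = witness[n - n_taps + 1:n + 1][::-1]           # newest sample first
    e = target[n] - w @ x                             # cancellation residual
    w += 2 * mu * e * x                               # LMS weight update
    residual[n] = e

# After convergence the residual power sits far below the raw noise power.
raw_power = np.mean(target[-5000:] ** 2)
res_power = np.mean(residual[-5000:] ** 2)
```

In practice the witness would be a real auxiliary channel and the filter would run with frequency-dependent weighting; this sketch shows only the core update.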

Commissioning work is still ongoing at the sites. The first science run of Advanced LIGO is planned for September 2015 and will last for 3-4 months. This run will be followed by a set of small instrument upgrades installed on a timescale of a few months. The second science run will start in spring 2016 and last for about 6 months. Since the current sensitivity of Advanced LIGO is already more than a factor of 3 higher than that of the initial detectors and keeps improving on a monthly basis, the upcoming science runs have a good chance of making the first direct detection of gravitational waves.

Relevance:

80.00%

Abstract:

A comprehensive study was made of the flocculation of dispersed E. coli bacterial cells by the cationic polymer polyethyleneimine (PEI). The three objectives of this study were to determine the primary mechanism involved in the flocculation of a colloid with an oppositely charged polymer, to determine quantitative correlations between four commonly-used measurements of the extent of flocculation, and to record the effect of varying selected system parameters on the degree of flocculation. The quantitative relationships derived for the four measurements of the extent of flocculation should be of direct assistance to the sanitary engineer in evaluating the effectiveness of specific coagulation processes.

A review of prior statistical mechanical treatments of adsorbed polymer configuration revealed that at low degrees of surface site coverage, an oppositely-charged polymer molecule is strongly adsorbed to the colloidal surface, with only short loops or end sequences extending into the solution phase. Even for high molecular weight PEI species, these extensions from the surface are theorized to be less than 50 Å in length. Although the radii of gyration of the five PEI species investigated were found to be large enough to form interparticle bridges, the low surface site coverage at optimum flocculation doses indicates that the predominant mechanism of flocculation is adsorption coagulation.

The effectiveness of the high-molecular weight PEI species in producing rapid flocculation at small doses is attributed to the formation of a charge mosaic on the oppositely-charged E. coli surfaces. The large adsorbed PEI molecules not only neutralize the surface charge at the adsorption sites, but also cause charge reversal with excess cationic segments. The alignment of these positive surface patches with negative patches on approaching cells results in strong electrostatic attraction in addition to a reduction of the double-layer interaction energies. The comparative ineffectiveness of low-molecular weight PEI species in producing E. coli flocculation is caused by the size of the individual molecules, which is insufficient to both neutralize and reverse the negative E. coli surface charge. Consequently, coagulation produced by low molecular weight species is attributed solely to the reduction of double-layer interaction energies via adsorption.

Electrophoretic mobility experiments supported the above conclusions, since only the high-molecular weight species were able to reverse the mobility of the E. coli cells. In addition, electron microscope examination of the seam of agglutination between E. coli cells flocculated by PEI revealed tightly-bound cells, with intercellular separation distances of less than 100-200 Å in most instances. This intercellular separation is partially due to cell shrinkage during preparation of the electron micrographs.

The extent of flocculation was measured as a function of PEI molecular weight, PEI dose, and the intensity of reactor chamber mixing. Neither the intensity of mixing, within the common treatment practice limits, nor the time of mixing for up to four hours appeared to play any significant role in either the size or number of E. coli aggregates formed. The extent of flocculation was highly molecular weight dependent: the high-molecular-weight PEI species produced the larger aggregates, the greater turbidity reductions, and the higher filtration flow rates. The PEI dose required for optimum flocculation decreased as the species molecular weight increased. At large doses of high-molecular-weight species, redispersion of the macroflocs occurred, caused by excess adsorption of cationic molecules. The excess adsorption reversed the surface charge on the E. coli cells, as recorded by electrophoretic mobility measurements.

Successful quantitative comparisons were made between changes in suspension turbidity with flocculation and corresponding changes in aggregate size distribution. E. coli aggregates were treated as coalesced spheres, with Mie scattering coefficients determined for spheres in the anomalous diffraction regime. Good quantitative comparisons were also found to exist between the reduction in refiltration time and the reduction of the total colloid surface area caused by flocculation. As with turbidity measurements, a coalesced sphere model was used since the equivalent spherical volume is the only information available from the Coulter particle counter. However, the coalesced sphere model was not applicable to electrophoretic mobility measurements. The aggregates produced at each PEI dose moved at approximately the same velocity, almost independently of particle size.
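The coalesced-sphere idealization above has a simple quantitative consequence that can be sketched directly (the particle count and radius below are illustrative, not data from the thesis): when n primary particles merge into one sphere of equal total volume, the total surface area falls by the factor n^(-1/3).

```python
import math

def coalesced_sphere_area(n_primary: int, r_primary: float) -> float:
    """Surface area of one sphere holding the volume of n primary spheres.

    This mirrors the coalesced-sphere model above: the Coulter counter
    reports only the equivalent spherical volume of an aggregate, so its
    surface is modeled as that of a single sphere of the same volume.
    """
    v_total = n_primary * (4.0 / 3.0) * math.pi * r_primary ** 3
    r_eq = (3.0 * v_total / (4.0 * math.pi)) ** (1.0 / 3.0)
    return 4.0 * math.pi * r_eq ** 2

# Hypothetical example: 1000 cells of radius 0.5 um coalesce into one aggregate.
n, r = 1000, 0.5
area_dispersed = n * 4.0 * math.pi * r ** 2      # total area before flocculation
area_flocced = coalesced_sphere_area(n, r)       # area of the coalesced sphere
reduction = 1.0 - area_flocced / area_dispersed  # fractional area reduction
```

For 1000 primary particles this is a 90% reduction in total surface area, which is the kind of change the refiltration-time comparison tracks.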

PEl was found to be an effective flocculant of E. coli cells at weight ratios of 1 mg PEl: 100 mg E. coli. While PEl itself is toxic to E.coli at these levels, similar cationic polymers could be effectively applied to water and wastewater treatment facilities to enhance sedimentation and filtration characteristics.

Relevance:

80.00%

Abstract:

A substantial amount of important scientific information is contained within astronomical data at the submillimeter and far-infrared (FIR) wavelengths, including information regarding dusty galaxies, galaxy clusters, and star-forming regions; however, these wavelengths are among the least-explored fields in astronomy because of the technological difficulties involved in such research. Over the past 20 years, considerable efforts have been devoted to developing submillimeter- and millimeter-wavelength astronomical instruments and telescopes.

The number of detectors is an important property of such instruments and is the subject of the current study. Future telescopes will require as many as hundreds of thousands of detectors to meet the necessary requirements in terms of the field of view, scan speed, and resolution. A large pixel count is one benefit of the development of multiplexable detectors that use kinetic inductance detector (KID) technology.

This dissertation presents the development of a KID-based instrument including a portion of the millimeter-wave bandpass filters and all aspects of the readout electronics, which together enabled one of the largest detector counts achieved to date in submillimeter-/millimeter-wavelength imaging arrays: a total of 2304 detectors. The work presented in this dissertation has been implemented in the MUltiwavelength Submillimeter Inductance Camera (MUSIC), a new instrument for the Caltech Submillimeter Observatory (CSO).

Relevance:

30.00%

Abstract:

This thesis presents a novel framework for state estimation in the context of robotic grasping and manipulation. The overall estimation approach is based on fusing various visual cues for manipulator tracking, namely appearance and feature-based, shape-based, and silhouette-based visual cues. Similarly, a framework is developed to fuse the above visual cues, but also kinesthetic cues such as force-torque and tactile measurements, for in-hand object pose estimation. The cues are extracted from multiple sensor modalities and are fused in a variety of Kalman filters.
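Fusing cues with different noise levels in a Kalman filter can be illustrated with a toy scalar example (all numbers are invented; the thesis fuses full visual and kinesthetic cues in multivariate filters, not these scalars). A precise cue pulls the estimate harder and shrinks the variance more:

```python
# Minimal sketch (not the thesis implementation): fuse two sensing cues --
# say a vision-based and a tactile-based measurement of a 1-D object
# position -- with sequential scalar Kalman measurement updates.
x, P = 0.0, 1.0  # prior state estimate and variance (illustrative)

def kalman_update(x, P, z, R):
    """Scalar Kalman measurement update with measurement z, noise variance R."""
    K = P / (P + R)                       # Kalman gain
    return x + K * (z - x), (1.0 - K) * P

# Vision cue: informative but relatively noisy -> moderate variance.
x, P = kalman_update(x, P, z=0.52, R=0.25)
# Tactile cue: a contact event localizes the surface tightly -> small variance.
x, P = kalman_update(x, P, z=0.48, R=0.01)
```

After both updates the estimate sits close to the low-noise tactile measurement, and the posterior variance is far below either cue's individual noise, which is the point of the fusion.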

A hybrid estimator is developed to estimate both a continuous state (robot and object states) and discrete states, called contact modes, which specify how each finger contacts a particular object surface. A static multiple model estimator is used to compute and maintain this mode probability. The thesis also develops an estimation framework for estimating model parameters associated with object grasping. Dual and joint state-parameter estimation is explored for parameter estimation of a grasped object's mass and center of mass. Experimental results demonstrate simultaneous object localization and center of mass estimation.

Dual-arm estimation is developed for two-arm robotic manipulation tasks. Two types of filters are explored; the first is an augmented filter that contains both arms in the state vector, while the second runs two filters in parallel, one for each arm. These two frameworks and their performance are compared in a dual-arm task of removing a wheel from a hub.

This thesis also presents a new method for action selection involving touch. This next best touch method selects an available action for interacting with an object that will yield the most information. The algorithm employs information theory to compute an information gain metric that is based on a probabilistic belief suitable for the task. An estimation framework is used to maintain this belief over time. Kinesthetic measurements such as contact and tactile measurements are used to update the state belief after every interactive action. Simulation and experimental results are demonstrated using next best touch for object localization, specifically a door handle on a door. The next best touch theory is extended for model parameter determination. Since many objects within a particular object category share the same rough shape, principal component analysis may be used to parametrize the object mesh models. These parameters can be estimated using the action selection technique that selects the touching action that best localizes and estimates these parameters. Simulation results are then presented involving localizing and determining a parameter of a screwdriver.
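The information gain metric described above can be sketched for a discrete belief. The example below is a hypothetical stand-in (the states, the two candidate touches, and their measurement likelihoods are invented): each candidate action has a likelihood of producing each measurement outcome, and the selected action maximizes the expected reduction in belief entropy.

```python
import math

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def expected_info_gain(belief, likelihood):
    """Expected entropy reduction of `belief` for one candidate touch.

    likelihood[z][s] = P(measurement z | state s) for that action.
    """
    h_prior = entropy(belief)
    gain = 0.0
    for row in likelihood:                       # each possible outcome z
        p_z = sum(l * b for l, b in zip(row, belief))
        if p_z == 0:
            continue
        posterior = [l * b / p_z for l, b in zip(row, belief)]
        gain += p_z * (h_prior - entropy(posterior))
    return gain

# Hypothetical: uniform belief over 4 candidate handle positions. Action A
# mostly distinguishes position 0 from the rest; action B splits the
# positions into two equally likely pairs.
belief = [0.25] * 4
action_a = [[0.9, 0.1, 0.1, 0.1],   # z = "contact"
            [0.1, 0.9, 0.9, 0.9]]   # z = "no contact"
action_b = [[0.9, 0.9, 0.1, 0.1],
            [0.1, 0.1, 0.9, 0.9]]
gains = {"A": expected_info_gain(belief, action_a),
         "B": expected_info_gain(belief, action_b)}
best = max(gains, key=gains.get)    # the "next best touch"
```

Here the even split of action B removes more expected uncertainty from the uniform belief than the one-vs-rest test of action A, so B is selected.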

Lastly, the next best touch theory is further extended to model classes. Instead of estimating parameters, object class determination is incorporated into the information gain metric calculation. The best touching action is selected in order to best discern between the possible model classes. Simulation results are presented to validate the theory.

Relevance:

30.00%

Abstract:

This thesis is concerned with spatial filtering. What is its utility in tone reproduction? Does it exist in vision, and if so, what constraints does it impose on the nervous system?

Tone reproduction is just the art and science of taking a picture and then displaying it. The sensors available to capture an image have a greater dynamic range than the media that may be used to display it. Conventionally, spatial filtering is used to boost contrast; it ameliorates the loss of contrast that results when the sensor signal range is scaled down to fit the display range. In this thesis, a type of nonlinear spatial filtering is discussed that results in direct range reduction without range scaling. This filtering process is instantiated in a real-time image processor built using analog CMOS VLSI.
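One generic nonlinear spatial filter in this spirit, though not necessarily the operator implemented in the analog CMOS processor, divides each pixel by a local mean: a scene spanning a 100:1 range maps into a narrow output range while local contrast survives. The scene and parameters below are invented for illustration.

```python
import numpy as np

def local_normalize(img, size=9, eps=1e-3):
    """Divide each pixel by its local mean: a simple nonlinear spatial
    filter that compresses dynamic range while preserving local contrast.

    Generic illustration of nonlinear range reduction, not the specific
    analog-VLSI operator from the thesis.
    """
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    local_mean = np.zeros_like(img, dtype=float)
    for i in range(h):                 # box-filter local mean
        for j in range(w):
            local_mean[i, j] = padded[i:i + size, j:j + size].mean()
    return img / (local_mean + eps)

# Synthetic high-dynamic-range scene: a dim region beside a 100x brighter
# one, each carrying small local texture.
rng = np.random.default_rng(1)
scene = np.where(np.arange(64)[None, :] < 32, 1.0, 100.0) * np.ones((64, 64))
scene *= 1.0 + 0.1 * rng.standard_normal((64, 64))
out = local_normalize(scene)

# Compare region brightness ratios away from the boundary.
ratio_in = scene[:, 40:].mean() / scene[:, :24].mean()   # ~100 before
ratio_out = out[:, 40:].mean() / out[:, :24].mean()      # ~1 after
```

Both regions land near unit mean in the output, so the display range needed is set by the local texture, not by the global illumination ratio.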

Spatial filtering must be applied with care in both artificial and natural vision systems. It is argued that the nervous system does not simply filter linearly across an image. Rather, the way that we see things implies that the nervous system filters nonlinearly. Further, many models for color vision include a high-pass filtering step in which the DC information is lost. A real-time study of filtering in color space leads to the conclusion that the nervous system is not that simple, and that it maintains DC information by referencing to white.

Relevance:

30.00%

Abstract:

With the size of transistors approaching the sub-nanometer scale and Si-based photonics pinned at the micrometer scale due to the diffraction limit of light, we are unable to easily integrate the high transfer speeds of this comparably bulky technology with the increasingly smaller architecture of state-of-the-art processors. However, we find that we can bridge the gap between these two technologies by directly coupling electrons to photons through the use of dispersive metals in optics. Doing so allows us to access the surface electromagnetic wave excitations that arise at a metal/dielectric interface, a feature which both confines and enhances light in subwavelength dimensions - two promising characteristics for the development of integrated chip technology. This platform is known as plasmonics, and it allows us to design a broad range of complex metal/dielectric systems, all having different nanophotonic responses, but all originating from our ability to engineer the system surface plasmon resonances and interactions. In this thesis, we demonstrate how plasmonics can be used to develop coupled metal-dielectric systems to function as tunable plasmonic hole array color filters for CMOS image sensing, visible metamaterials composed of coupled negative-index plasmonic coaxial waveguides, and programmable plasmonic waveguide network systems to serve as color routers and logic devices at telecommunication wavelengths.

Relevance:

30.00%

Abstract:

This study addresses the problem of obtaining reliable velocities and displacements from accelerograms, a concern which often arises in earthquake engineering. A closed-form acceleration expression with random parameters is developed to test any strong-motion accelerogram processing method. Integration of this analytical time history yields the exact velocities, displacements and Fourier spectra. Noise and truncation can also be added. A two-step testing procedure is proposed and the original Volume II routine is used as an illustration. The main sources of error are identified and discussed. Although these errors may be reduced, it is impossible to extract the true time histories from an analog or digital accelerogram because of the uncertain noise level and missing data. Based on these uncertainties, a probabilistic approach is proposed as a new accelerogram processing method. A most probable record is presented as well as a reliability interval which reflects the level of error-uncertainty introduced by the recording and digitization process. The data is processed in the frequency domain, under assumptions governing either the initial value or the temporal mean of the time histories. This new processing approach is tested on synthetic records. It induces little error and the digitization noise is adequately bounded. Filtering is intended to be kept to a minimum and two optimal error-reduction methods are proposed. The "noise filters" reduce the noise level at each harmonic of the spectrum as a function of the signal-to-noise ratio. However, the correction at low frequencies is not sufficient to significantly reduce the drifts in the integrated time histories. The "spectral substitution method" uses optimization techniques to fit spectral models of near-field, far-field or structural motions to the amplitude spectrum of the measured data. 
The extremes of the spectrum of the recorded data where noise and error prevail are then partly altered, but not removed, and statistical criteria provide the choice of the appropriate cutoff frequencies. This correction method has been applied to existing strong-motion far-field, near-field and structural data with promising results. Since this correction method maintains the whole frequency range of the record, it should prove to be very useful in studying the long-period dynamics of local geology and structures.
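The per-harmonic "noise filter" idea can be sketched with a Wiener-type attenuation factor snr/(1+snr) applied to each Fourier coefficient. Everything below (the synthetic record, the noise level, and the assumed signal spectrum) is illustrative, not the thesis's calibrated models.

```python
import numpy as np

# Synthetic accelerogram-like record: a 1.5 Hz signal in white noise.
fs, n = 50.0, 1000                       # 50 Hz sampling, 20 s record
t = np.arange(n) / fs
signal = np.sin(2 * np.pi * 1.5 * t)     # assumed "true" ground motion
rng = np.random.default_rng(2)
noise = 0.3 * rng.standard_normal(n)     # assumed digitization noise
record = signal + noise

# Attenuate each harmonic by snr / (1 + snr), where snr is the assumed
# signal-to-noise power ratio at that frequency.
spec = np.fft.rfft(record)
freqs = np.fft.rfftfreq(n, d=1 / fs)
noise_level = 0.3 ** 2 / 2                                   # assumed flat
sig_level = np.where(np.abs(freqs - 1.5) < 0.1, 0.5, 1e-6)   # assumed spectrum
snr = sig_level / noise_level
filtered = np.fft.irfft(spec * snr / (1.0 + snr), n)

err_raw = np.mean((record - signal) ** 2)
err_filt = np.mean((filtered - signal) ** 2)
```

With the signal model assumed known, out-of-band harmonics are suppressed almost entirely and the in-band ones only lightly, so the mean squared error drops well below that of the raw record; with a misjudged noise level the correction would be correspondingly weaker, which is the low-frequency limitation noted above.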

Relevance:

30.00%

Abstract:

This work is concerned with a general analysis of wave interactions in periodic structures and particularly periodic thin film dielectric waveguides.

The electromagnetic wave propagation in an asymmetric dielectric waveguide with a periodically perturbed surface is analyzed in terms of a Floquet mode solution. First order approximate analytical expressions for the space harmonics are obtained. The solution is used to analyze various applications: (1) phase matched second harmonic generation in periodically perturbed optical waveguides; (2) grating couplers and thin film filters; (3) Bragg reflection devices; (4) the calculation of the traveling wave interaction impedance for solid state and vacuum tube optical traveling wave amplifiers which utilize periodic dielectric waveguides. Some of these applications are of interest in the field of integrated optics.
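The Floquet expansion underlying such an analysis has a standard form, sketched here with generic symbols (not necessarily the thesis's notation): a mode of a waveguide with period $\Lambda$ is a superposition of space harmonics,

```latex
E(x, z) = \sum_{m=-\infty}^{\infty} a_m(x)\, e^{-i \beta_m z},
\qquad
\beta_m = \beta_0 + \frac{2\pi m}{\Lambda}.
```

Phase-matched interactions then correspond to matching one of the $\beta_m$: for example, second-harmonic generation is phase matched when $\beta^{(2\omega)} = 2\beta^{(\omega)} + 2\pi m/\Lambda$ for some integer $m$, and Bragg reflection occurs when $\beta_m \approx -\beta_0$.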

A special emphasis is put on the analysis of traveling wave interaction between electrons and electromagnetic waves in various operation regimes. Interactions with a finite-temperature electron beam in the collision-dominated, collisionless, and quantum regimes are analyzed in detail, assuming a one-dimensional model and longitudinal coupling.

The analysis is used to examine the possibility of solid state traveling wave devices (amplifiers, modulators), and some monolithic structures of these devices are suggested, designed to operate in the submillimeter-far infrared frequency regime. The estimates of attainable traveling wave interaction gain are quite low (on the order of a few inverse centimeters). However, the possibility of attaining net gain with different materials, structures, and operating conditions is not ruled out.

The developed model is used to discuss the possibility and the theoretical limitations of high frequency (optical) operation of vacuum electron beam tubes, and the relations to other electron-electromagnetic wave interaction effects (Smith-Purcell and Cerenkov radiation and the free electron laser) are pointed out. Finally, the case where the periodic structure is the natural crystal lattice is briefly discussed. The longitudinal component of optical space harmonics in the crystal is calculated and found to be of the order of magnitude of the macroscopic wave, and some comments are made on the possibility of coherent bremsstrahlung and distributed feedback lasers in single crystals.

Relevance:

30.00%

Abstract:

The first part of this work describes the uses of aperiodic structures in optics and integrated optics. In particular, devices are designed, fabricated, tested and analyzed which make use of a chirped grating corrugation on the surface of a dielectric waveguide. These structures can be used as input-output couplers, multiplexers and demultiplexers, and broad band filters.

Next, a theoretical analysis is made of the effects of a random statistical variation in the thicknesses of layers in a dielectric mirror on its reflectivity properties. Unlike the intentional aperiodicity introduced in the chirped gratings, the aperiodicity in the Bragg reflector mirrors is unintentional and is present to some extent in all devices made. The analysis involved in studying these problems relies heavily on the coupled mode formalism. The results are compared with computer experiments, as well as tests of actual mirrors.
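The effect of random layer-thickness errors on a Bragg mirror can be sketched with a standard normal-incidence transfer-matrix calculation; the indices, pair count, and 5% error level below are illustrative assumptions, not the thesis's devices.

```python
import numpy as np

def stack_reflectance(thicknesses, indices, lam=1.0, n0=1.0, ns=1.52):
    """Normal-incidence reflectance of a multilayer via 2x2 transfer matrices."""
    m = np.eye(2, dtype=complex)
    for d, n in zip(thicknesses, indices):
        delta = 2 * np.pi * n * d / lam          # layer phase thickness
        m = m @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    b, c = m @ np.array([1.0, ns])               # characteristic fields
    r = (n0 * b - c) / (n0 * b + c)              # amplitude reflection
    return abs(r) ** 2

# Hypothetical quarter-wave Bragg mirror: 10 high/low index pairs at lam = 1.
n_hi, n_lo, pairs = 2.3, 1.45, 10
indices = [n_hi, n_lo] * pairs
nominal = [0.25 / n for n in indices]            # quarter-wave thicknesses
r_ideal = stack_reflectance(nominal, indices)

# Monte Carlo: 5% rms random thickness error in every layer, as a toy model
# of the unintentional aperiodicity discussed above.
rng = np.random.default_rng(3)
r_perturbed = np.mean([
    stack_reflectance(
        np.array(nominal) * (1 + 0.05 * rng.standard_normal(len(nominal))),
        indices)
    for _ in range(200)
])
```

Since the exact quarter-wave stack maximizes reflectance at the design wavelength, every random realization reflects slightly less, and the Monte Carlo average quantifies the expected degradation.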

The second part of this work describes a novel method for confining light in the transverse direction in an injection laser. These so-called transverse Bragg reflector lasers confine light normal to the junction plane in the active region, through reflection from an adjacent layered medium. Thus, in principle, it is possible to guide light in a dielectric layer whose index is lower than that of the surrounding material. The design, theory and testing of these diode lasers are discussed.

Relevance:

30.00%

Abstract:

The first part of this thesis combines Bolocam observations of the thermal Sunyaev-Zel’dovich (SZ) effect at 140 GHz with X-ray observations from Chandra, strong lensing data from the Hubble Space Telescope (HST), and weak lensing data from HST and Subaru to constrain parametric models for the distribution of dark and baryonic matter in a sample of six massive, dynamically relaxed galaxy clusters. For five of the six clusters, the full multiwavelength dataset is well described by a relatively simple model that assumes spherical symmetry, hydrostatic equilibrium, and entirely thermal pressure support. The multiwavelength analysis yields considerably better constraints on the total mass and concentration compared to analysis of any one dataset individually. The subsample of five galaxy clusters is used to place an upper limit on the fraction of pressure support in the intracluster medium (ICM) due to nonthermal processes, such as turbulent and bulk flow of the gas. We constrain the nonthermal pressure fraction at r500c to be less than 0.11 at 95% confidence, where r500c refers to the radius at which the average enclosed density is 500 times the critical density of the Universe. This is in tension with state-of-the-art hydrodynamical simulations, which predict a nonthermal pressure fraction of approximately 0.25 at r500c for the clusters in this sample.

The second part of this thesis focuses on the characterization of the Multiwavelength Sub/millimeter Inductance Camera (MUSIC), a photometric imaging camera that was commissioned at the Caltech Submillimeter Observatory (CSO) in 2012. MUSIC is designed to have a 14 arcminute, diffraction-limited field of view populated with 576 spatial pixels that are simultaneously sensitive to four bands at 150, 220, 290, and 350 GHz. It is well-suited for studies of dusty star forming galaxies, galaxy clusters via the SZ Effect, and galactic star formation. MUSIC employs a number of novel detector technologies: broadband phased-arrays of slot dipole antennas for beam formation, on-chip lumped element filters for band definition, and Microwave Kinetic Inductance Detectors (MKIDs) for transduction of incoming light to electric signal. MKIDs are superconducting micro-resonators coupled to a feedline. Incoming light breaks apart Cooper pairs in the superconductor, causing a change in the quality factor and frequency of the resonator. This is read out as amplitude and phase modulation of a microwave probe signal centered on the resonant frequency. By tuning each resonator to a slightly different frequency and sending out a superposition of probe signals, hundreds of detectors can be read out on a single feedline. This natural capability for large scale, frequency domain multiplexing combined with relatively simple fabrication makes MKIDs a promising low temperature detector for future kilopixel sub/millimeter instruments. There is also considerable interest in using MKIDs for optical through near-infrared spectrophotometry due to their fast microsecond response time and modest energy resolution. In order to optimize the MKID design to obtain suitable performance for any particular application, it is critical to have a well-understood physical model for the detectors and the sources of noise to which they are susceptible. 
MUSIC has collected many hours of on-sky data with over 1000 MKIDs. This work studies the performance of the detectors in the context of one such physical model. Chapter 2 describes the theoretical model for the responsivity and noise of MKIDs. Chapter 3 outlines the set of measurements used to calibrate this model for the MUSIC detectors. Chapter 4 presents the resulting estimates of the spectral response, optical efficiency, and on-sky loading. The measured detector response to Uranus is compared to the calibrated model prediction in order to determine how well the model describes the propagation of signal through the full instrument. Chapter 5 examines the noise present in the detector timestreams during recent science observations. Noise due to fluctuations in atmospheric emission dominates at long timescales (frequencies below 0.5 Hz). Fluctuations in the amplitude and phase of the microwave probe signal due to the readout electronics contribute significant 1/f and drift-type noise at shorter timescales. The atmospheric noise is removed by creating a template for the fluctuations in atmospheric emission from weighted averages of the detector timestreams. The electronics noise is removed by using probe signals centered off-resonance to construct templates for the amplitude and phase fluctuations. The algorithms that perform the atmospheric and electronic noise removal are described. After removal, we find good agreement between the observed residual noise and our expectation for intrinsic detector noise over a significant fraction of the signal bandwidth.
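The atmospheric-template removal described above can be sketched in a few lines. Everything here is a toy model (detector count, per-detector couplings, and noise levels are invented, and a plain rather than weighted average is used): each detector sees a common atmospheric fluctuation with its own coupling coefficient plus independent noise, a template is formed by averaging the timestreams, and the template is fit out of each detector by least squares.

```python
import numpy as np

rng = np.random.default_rng(4)
n_det, n_samp = 100, 5000
# Slow random-walk drift standing in for atmospheric emission fluctuations.
atmosphere = np.cumsum(rng.standard_normal(n_samp)) * 0.05
gains = 1.0 + 0.2 * rng.standard_normal(n_det)   # per-detector couplings
streams = (gains[:, None] * atmosphere
           + 0.1 * rng.standard_normal((n_det, n_samp)))  # + detector noise

template = streams.mean(axis=0)                  # common-mode estimate
# Least-squares coupling of each detector timestream to the template.
coeffs = streams @ template / (template @ template)
cleaned = streams - coeffs[:, None] * template   # subtract the fitted template

rms_before = np.sqrt(np.mean(streams ** 2))
rms_after = np.sqrt(np.mean(cleaned ** 2))
```

After subtraction the residual rms approaches the independent detector-noise floor, which is the behavior reported above for the real pipeline over a significant fraction of the signal bandwidth.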

Relevance:

30.00%

Abstract:

Huntington’s disease (HD) is a fatal autosomal dominant neurodegenerative disease. HD has no cure, and patients pass away 10-20 years after the onset of symptoms. The causal mutation for HD is a trinucleotide repeat expansion in exon 1 of the huntingtin gene that leads to a polyglutamine (polyQ) repeat expansion in the N-terminal region of the huntingtin protein. Interestingly, there is a threshold of 37 polyQ repeats below which little or no disease exists and above which patients invariably show symptoms of HD. The huntingtin protein is a 350 kDa protein with unclear function. As the polyQ stretch expands, the protein's propensity to aggregate increases. Models for polyQ toxicity include formation of aggregates that recruit and sequester essential cellular proteins, or altered function producing improper interactions between mutant huntingtin and other proteins. In both models, soluble expanded polyQ may be an intermediate state that can be targeted by potential therapeutics.

In the first study described herein, the conformation of soluble, expanded polyQ was determined to be linear and extended using equilibrium gel filtration and small-angle X-ray scattering. While attempts to purify and crystallize domains of the huntingtin protein were unsuccessful, the aggregation of huntingtin exon 1 was investigated using other biochemical techniques including dynamic light scattering, turbidity analysis, Congo red staining, and thioflavin T fluorescence. Chapter 4 describes crystallization experiments sent to the International Space Station and determination of the X-ray crystal structure of the anti-polyQ Fab MW1. In the final study, multimeric fibronectin type III (FN3) domain proteins were engineered to bind with high avidity to expanded polyQ tracts in mutant huntingtin exon 1. Surface plasmon resonance was used to observe binding of monomeric and multimeric FN3 proteins with huntingtin.

Relevance:

30.00%

Abstract:

A model for some of the many physical-chemical and biological processes in intermittent sand filtration of wastewaters is described and an expression for oxygen transfer is formulated.

The model assumes that aerobic bacterial activity within the sand or soil matrix is limited, mostly by oxygen deficiency, while the surface is ponded with wastewater. Atmospheric oxygen reenters into the soil after infiltration ends. Aerobic activity is resumed, but the extent of penetration of oxygen is limited and some depths may be always anaerobic. These assumptions lead to the conclusion that the percolate shows large variations with respect to the concentration of certain contaminants, with some portions showing little change in a specific contaminant. Analyses of soil moisture in field studies and of effluent from laboratory sand columns substantiated the model.

The oxygen content of the system at sufficiently long times after addition of wastes can be described by a quasi-steady-state diffusion equation including a term for an oxygen sink. Measurements of oxygen content during laboratory and field studies show that the oxygen profile changes only slightly up to two days after the quasi-steady state is attained.
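The quasi-steady-state balance described above admits a simple closed form under additional assumptions, stated here only for orientation (one-dimensional diffusion, a constant zero-order sink $S$, diffusivity $D$, and surface concentration $C_0$; this is a textbook idealization, not necessarily the thesis's exact formulation):

```latex
% Quasi-steady oxygen balance with a zero-order sink S:
D \frac{d^2 C}{dz^2} = S, \qquad C(0) = C_0 .
% Requiring C and dC/dz to vanish at the penetration depth L gives
C(z) = C_0 \left(1 - \frac{z}{L}\right)^2, \qquad
L = \sqrt{\frac{2 D C_0}{S}} .
```

Oxygen then never reaches depths $z > L$, consistent with the model's conclusion that some depths may be always anaerobic.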

These hypotheses and their experimental verification can be applied to the operation of existing facilities and to the interpretation of data from pilot-plant studies.