Abstract:
Forced vibration field tests and finite element studies have been conducted on Morrow Point (arch) Dam to investigate dynamic dam-water interaction and water compressibility. The design of the data acquisition system incorporates several special features to retrieve both the amplitude and phase of the response in a low signal-to-noise environment. These features contributed to the success of the experimental program, which, for the first time, produced field evidence of water compressibility; this effect appears to play a significant role only in the symmetric response of Morrow Point Dam in the frequency range examined. In the accompanying analysis, frequency response curves for measured accelerations and water pressures, as well as their resonating shapes, are compared to predictions from the current state-of-the-art finite element model with water compressibility both included and neglected. Calibration of the numerical model employs the antisymmetric response data, since they are only slightly affected by water compressibility; after calibration, good agreement with the data is obtained whether or not water compressibility is included. In the effort to reproduce the symmetric response data, on which water compressibility has a significant influence, the calibrated model shows better correlation when water compressibility is included, but the agreement is still inadequate. Similar results occur using data obtained previously by others at a low water level. A successful isolation of the fundamental water resonance from the experimental data shows significantly different features from those of the numerical water model, indicating possible inaccuracy in the assumed geometry and/or boundary conditions for the reservoir. However, the investigation does suggest possible directions in which the numerical model can be improved.
Abstract:
This thesis presents a novel framework for state estimation in the context of robotic grasping and manipulation. The overall estimation approach is based on fusing multiple visual cues for manipulator tracking, namely appearance- and feature-based, shape-based, and silhouette-based cues. A similar framework is developed that fuses these visual cues together with kinesthetic cues, such as force-torque and tactile measurements, for in-hand object pose estimation. The cues are extracted from multiple sensor modalities and are fused in a variety of Kalman filters.
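To make the fusion step concrete, here is a minimal sketch of a sequential Kalman measurement update over two cue types; the linear measurement models, noise covariances, and cue values are illustrative assumptions, not the filters used in the thesis.

```python
import numpy as np

# Minimal linear Kalman measurement update fusing two cue types
# (e.g., a shape-based and a silhouette-based position measurement).
# All models here are illustrative, not those from the thesis.

def kalman_update(x, P, z, H, R):
    """Standard Kalman measurement update."""
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)              # state correction
    P = (np.eye(len(x)) - K @ H) @ P     # covariance correction
    return x, P

x = np.zeros(2)          # state: 2-D position
P = np.eye(2)            # initial uncertainty
H = np.eye(2)            # both cues observe position directly

z_shape = np.array([0.9, 2.1])       # shape-based cue (assumed value)
z_silhouette = np.array([1.1, 1.9])  # silhouette-based cue (assumed value)

# Sequentially fuse each cue with its own noise covariance.
x, P = kalman_update(x, P, z_shape, H, 0.2 * np.eye(2))
x, P = kalman_update(x, P, z_silhouette, H, 0.5 * np.eye(2))
print(x, np.diag(P))
```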
A hybrid estimator is developed to estimate both continuous states (the robot and object states) and discrete states, called contact modes, which specify how each finger contacts a particular object surface. A static multiple model estimator is used to compute and maintain the mode probabilities. The thesis also develops a framework for estimating model parameters associated with object grasping. Dual and joint state-parameter estimation is explored for estimating a grasped object's mass and center of mass. Experimental results demonstrate simultaneous object localization and center of mass estimation.
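A minimal sketch of the discrete half of such a hybrid estimator, assuming made-up likelihood values: a static multiple model estimator maintaining contact-mode probabilities by Bayes' rule.

```python
import numpy as np

# Static multiple model estimation: maintain a probability over discrete
# contact modes via Bayes' rule. Mode names and likelihoods are illustrative.

modes = ["no_contact", "face_A", "face_B"]
p = np.full(3, 1.0 / 3.0)                 # uniform prior over modes

def update_mode_probability(p, likelihoods):
    """One Bayes update: p(mode|z) is proportional to p(z|mode) p(mode)."""
    posterior = p * likelihoods
    return posterior / posterior.sum()

# Likelihood of the latest force-torque measurement under each mode,
# e.g. evaluated from each mode-conditioned filter's innovation.
likelihoods = np.array([0.05, 0.60, 0.35])
p = update_mode_probability(p, likelihoods)
print(dict(zip(modes, np.round(p, 3))))
```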
Dual-arm estimation is developed for two-arm robotic manipulation tasks. Two types of filters are explored: the first is an augmented filter that contains both arms in the state vector, while the second runs two filters in parallel, one for each arm. The two frameworks and their performance are compared in a dual-arm task of removing a wheel from a hub.
This thesis also presents a new method for action selection involving touch. This next-best-touch method selects, from the available actions for interacting with an object, the one expected to gain the most information. The algorithm employs information theory to compute an information gain metric based on a probabilistic belief suitable for the task. An estimation framework is used to maintain this belief over time. Kinesthetic measurements, such as contact and tactile measurements, are used to update the state belief after every interactive action. Simulation and experimental results demonstrate next best touch for object localization, specifically localizing a door handle on a door. The next-best-touch theory is then extended to model parameter determination. Since many objects within a particular object category share the same rough shape, principal component analysis may be used to parametrize the object mesh models. These parameters can be estimated using the action selection technique, which chooses the touching action that best both localizes the object and estimates its parameters. Simulation results are presented for localizing a screwdriver and determining one of its parameters.
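A toy version of the action selection rule, under assumed grid, belief, and contact sensor models (all hypothetical): each candidate touch is scored by the expected reduction in belief entropy, and the highest-scoring touch is selected.

```python
import numpy as np

# "Next best touch" in miniature: over a discrete belief about an object's
# 1-D position, pick the touch location whose binary contact outcome gives
# the largest expected information gain (entropy reduction).

grid = np.linspace(0.0, 1.0, 101)           # candidate object positions
belief = np.exp(-0.5 * ((grid - 0.4) / 0.15) ** 2)
belief /= belief.sum()

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def p_contact(action, positions, width=0.05):
    """Probability a touch at `action` contacts an object at each position."""
    return (np.abs(positions - action) < width).astype(float)

def expected_info_gain(action, belief):
    like_hit = p_contact(action, grid)
    p_hit = np.sum(like_hit * belief)       # marginal contact probability
    gain = entropy(belief)
    for like, p_out in ((like_hit, p_hit), (1 - like_hit, 1 - p_hit)):
        if p_out > 0:
            post = like * belief / p_out
            gain -= p_out * entropy(post)   # subtract expected posterior entropy
    return gain

actions = np.linspace(0.0, 1.0, 21)
best = max(actions, key=lambda a: expected_info_gain(a, belief))
print("best touch location:", best)
```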
Lastly, the next-best-touch theory is further extended to model classes. Instead of estimating parameters, object class determination is incorporated into the information gain metric, and the touching action that best discriminates among the possible model classes is selected. Simulation results are presented to validate the theory.
Abstract:
Light microscopy has been one of the most common tools in biological research because of its high resolution and the non-invasive nature of light. Due to its high sensitivity and specificity, fluorescence is one of the most important readout modes of light microscopy. This thesis presents two new fluorescence microscopic imaging techniques: fluorescence optofluidic microscopy and fluorescence Talbot microscopy. The designs of the two systems are fundamentally different from conventional microscopy, which makes compact and portable devices possible. The components of the devices are suitable for mass production, making microscopic imaging systems more affordable for biological research and clinical diagnostics.
Fluorescence optofluidic microscopy (FOFM) is capable of imaging fluorescent samples in fluid media. The FOFM employs an array of Fresnel zone plates (FZP) to generate an array of focused light spots within a microfluidic channel. As a sample flows through the channel and across the array of focused light spots, a filter-coated CMOS sensor collects the fluorescence emissions. The collected data can then be processed to render a fluorescence microscopic image. The resolution, which is determined by the focused light spot size, is experimentally measured to be 0.65 μm.
Fluorescence Talbot microscopy (FTM) is a fluorescence chip-scale microscopy technique that enables large field-of-view (FOV), high-resolution imaging. The FTM method utilizes the Talbot effect to project a grid of focused excitation light spots onto the sample, which is placed on a filter-coated CMOS sensor chip. The fluorescence emissions associated with each focal spot are collected by the sensor chip and composed into a sparsely sampled fluorescence image. By raster scanning the Talbot focal spot grid across the sample and collecting a sequence of sparse images, a filled-in high-resolution fluorescence image can be reconstructed. In contrast to a conventional microscope, collection efficiency, resolution, and FOV are not tied to one another in this technique, and the FOV of FTM is directly scalable. Our FTM prototype has demonstrated a resolution of 1.2 μm and a collection efficiency equivalent to a conventional microscope objective with a 0.70 N.A. The FOV is 3.9 mm × 3.5 mm, which is 100 times larger than that of a 20X/0.40 N.A. conventional microscope objective. Due to its large FOV, high collection efficiency, compactness, and potential for integration with other on-chip devices, FTM is suitable for diverse applications such as point-of-care diagnostics, large-scale functional screens, and long-term automated imaging.
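The raster-scan reconstruction can be illustrated geometrically: each scan position of the focal-spot grid samples the object at one sub-pitch offset, and the sparse images tile into the full image. The sketch below uses an idealized, noiseless sampling model with made-up dimensions.

```python
import numpy as np

# Sketch of the FTM reconstruction idea: a grid of focal spots samples the
# object sparsely; raster-scanning the grid one sub-pitch step at a time
# fills in the full-resolution image. Dimensions are illustrative.

pitch = 10                                  # focal-spot pitch in pixels
obj = np.random.rand(100, 100)              # stand-in fluorescence map

recon = np.zeros_like(obj)
for dy in range(pitch):                     # raster scan of the spot grid
    for dx in range(pitch):
        # Each scan step excites one pixel per spot; the sensor records the
        # emission associated with each focal spot as one sparse image.
        recon[dy::pitch, dx::pitch] = obj[dy::pitch, dx::pitch]

assert np.allclose(recon, obj)              # every pixel visited exactly once
```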
Liquid silicate equation of state: using shock waves to understand the properties of the deep Earth
Abstract:
The equations of state (EOS) of several geologically important silicate liquids have been constrained via preheated shock wave techniques. Results on molten Fe2SiO4 (fayalite), Mg2SiO4 (forsterite), CaFeSi2O6 (hedenbergite), an equimolar mixture of CaAl2Si2O8-CaFeSi2O6 (anorthite-hedenbergite), and an equimolar mixture of CaAl2Si2O8-CaFeSi2O6-CaMgSi2O6 (anorthite-hedenbergite-diopside) are presented. This work represents the first direct EOS measurements of an iron-bearing liquid or of a forsterite liquid at pressures relevant to the deep Earth (> 135 GPa). Additionally, revised EOS for molten CaMgSi2O6 (diopside), CaAl2Si2O8 (anorthite), and MgSiO3 (enstatite), which were previously determined by shock wave methods, are presented.
The liquid EOS are incorporated into a model that employs linear mixing of volumes to determine the density of compositionally intermediate liquids in the CaO-MgO-Al2O3-SiO2-FeO major element space. Liquid volumes are calculated for the temperature and pressure conditions currently present at the core-mantle boundary, and for those that may have occurred during differentiation of a fully molten mantle magma ocean.
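The linear mixing calculation itself is simple; a sketch follows, with placeholder molar volumes (the fitted EOS values in this work are functions of pressure and temperature) and standard molar masses.

```python
# Linear mixing of volumes: the molar volume of an intermediate liquid is
# taken as the mole-fraction-weighted sum of end-member volumes at the same
# P and T. Volumes below are placeholders, not the fitted EOS results.

V = {"anorthite": 100.0, "hedenbergite": 135.0, "diopside": 66.0}   # cm^3/mol, placeholder
M = {"anorthite": 278.21, "hedenbergite": 248.10, "diopside": 216.55}  # g/mol

def mixture_density(x):
    """Density (g/cm^3) of a liquid mixture from mole fractions x."""
    V_mix = sum(x[c] * V[c] for c in x)      # ideal (linear) volume of mixing
    M_mix = sum(x[c] * M[c] for c in x)
    return M_mix / V_mix

x = {"anorthite": 1/3, "hedenbergite": 1/3, "diopside": 1/3}
print(mixture_density(x))
```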
The most significant implications of our results include: (1) a magma ocean of either chondrite or peridotite composition is less dense than its first crystallizing solid, which is not conducive to the formation of a basal mantle magma ocean, (2) the ambient mantle cannot produce a partial melt and an equilibrium residue sufficiently dense to form an ultralow velocity zone mush, and (3) due to the compositional dependence of Fe
Abstract:
Technology scaling has enabled drastic growth in the computational and storage capacity of integrated circuits (ICs). This constant growth drives an increasing demand for high-bandwidth communication between and within ICs. In this dissertation we focus on low-power solutions that address this demand. We divide communication links into three subcategories depending on the communication distance; each category has a different set of challenges and requirements and is affected by CMOS technology scaling in a different manner. We start with short-range chip-to-chip links for board-level communication. Next, we discuss board-to-board links, which demand a longer communication range. Finally, we discuss on-chip links with communication ranges of a few millimeters.
Electrical signaling is a natural choice for chip-to-chip communication due to efficient integration and low cost. IO data rates have increased to the point where electrical signaling is now limited by the channel bandwidth. In order to achieve multi-Gb/s data rates, complex designs that equalize the channel are necessary. In addition, a high level of parallelism is central to sustaining bandwidth growth. Decision feedback equalization (DFE) is one of the most commonly employed techniques to overcome the limited bandwidth of electrical channels. A linear, low-power summer is the central block of a DFE. Conventional approaches employ current-mode techniques to implement the summer, which consume considerable power. In order to achieve low-power operation we propose performing the summation in the charge domain. This approach enables a low-power and compact realization of the DFE as well as crosstalk cancellation. A prototype receiver was fabricated in 45nm SOI CMOS to validate the functionality of the proposed technique and was tested over channels with different levels of loss and coupling. Measurement results show that the receiver can equalize channels with up to 21dB of loss while consuming about 7.5mW from a 1.2V supply. We also introduce a compact, low-power transmitter employing passive equalization. The efficacy of the proposed technique is demonstrated through implementation of a prototype in 65nm CMOS. The design achieves up to 20Gb/s data rate while consuming less than 10mW.
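To make the equalization step concrete, here is a behavioral sketch of a 2-tap DFE in which plain floating-point arithmetic stands in for the charge-domain summer; the channel taps and data are assumed for illustration.

```python
import numpy as np

# Behavioral sketch of a 2-tap decision feedback equalizer (DFE): past
# decisions, scaled by tap weights, are subtracted from the incoming sample
# before slicing.

def dfe(samples, taps):
    decisions = []
    for y in samples:
        # subtract post-cursor ISI estimated from previous decisions
        for k, h in enumerate(taps):
            if k < len(decisions):
                y -= h * decisions[-1 - k]
        decisions.append(1.0 if y > 0 else -1.0)   # binary slicer
    return decisions

# channel with post-cursor ISI h = [1.0, 0.4, 0.2] acting on +/-1 bits
bits = np.sign(np.random.randn(1000))
rx = np.convolve(bits, [1.0, 0.4, 0.2])[: len(bits)]
out = dfe(rx, taps=[0.4, 0.2])
print("bit errors:", int(np.sum(np.array(out) != bits)))
```

With a noiseless channel and correct past decisions, the feedback cancels the post-cursor interference exactly, so the sketch reports zero bit errors.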
An alternative to electrical signaling is to employ optical signaling for chip-to-chip interconnections, which offers low channel loss and crosstalk while providing high communication bandwidth. In this work we demonstrate the possibility of building compact and low-power optical receivers. A novel RC front-end is proposed that combines dynamic offset modulation and double-sampling techniques to eliminate the need for a short time constant at the input of the receiver. Unlike conventional designs, this receiver does not require a high-gain stage that runs at the data rate, making it suitable for low-power implementations. In addition, it allows time-division multiplexing to support very high data rates. A prototype was implemented in 65nm CMOS and achieved up to 24Gb/s with less than 0.4pJ/b power efficiency per channel. As the proposed design mainly employs digital blocks, it benefits greatly from technology scaling in terms of power and area savings.
As technology scales, the number of transistors on a chip grows, necessitating a corresponding increase in the bandwidth of the on-chip wires. In this dissertation, we take a close look at wire scaling and investigate its effect on wire performance metrics. We explore a novel on-chip communication link based on a double-sampling architecture and dynamic offset modulation technique that enables low power consumption and high data rates while achieving high bandwidth density in 28nm CMOS technology. The functionality of the link is demonstrated using minimum-pitch on-chip wires of different lengths. Measurement results show that the link achieves up to 20Gb/s of data rate (12.5Gb/s/μm) with better than 136fJ/b of power efficiency.
Abstract:
The work presented in this thesis revolves around erasure correction coding, as applied to distributed data storage and real-time streaming communications.
First, we examine the problem of allocating a given storage budget over a set of nodes for maximum reliability. The objective is to find an allocation of the budget that maximizes the probability of successful recovery by a data collector accessing a random subset of the nodes. This optimization problem is challenging in general because of its combinatorial nature, despite its simple formulation. We study several variations of the problem, assuming different allocation models and access models, and determine the optimal allocation and the optimal symmetric allocation (in which all nonempty nodes store the same amount of data) for a variety of cases. Although the optimal allocation can have nonintuitive structure and can be difficult to find in general, our results suggest that, as a simple heuristic, reliable storage can be achieved by spreading the budget maximally over all nodes when the budget is large, and spreading it minimally over a few nodes when it is small. Coding would therefore be beneficial in the former case, while uncoded replication would suffice in the latter case.
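A Monte Carlo sketch of the underlying trade-off, with assumed parameters (budget, access-set size, and success threshold are illustrative): symmetric allocations over different numbers of nodes can be compared directly by estimated recovery probability.

```python
import numpy as np

# Recovery probability for a symmetric allocation: a budget is spread evenly
# over m of n nodes, the collector accesses a random r-subset, and recovery
# succeeds if the accessed amount reaches the threshold 1 (one file's worth
# of coded data). Parameters are illustrative, not a case from the thesis.

rng = np.random.default_rng(0)

def recovery_probability(n, m, r, budget=2.0, trials=100_000):
    alloc = np.zeros(n)
    alloc[:m] = budget / m                   # spread budget over m nodes
    hits = 0
    for _ in range(trials):
        subset = rng.choice(n, size=r, replace=False)
        hits += alloc[subset].sum() >= 1.0 - 1e-12
    return hits / trials

# With this relatively small budget, minimal spreading wins: at m = 10 the
# r = 4 accessed nodes can never hold a full file's worth of data.
n, r = 10, 4
for m in (2, 4, 6, 10):
    print(m, recovery_probability(n, m, r))
```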
Second, we study how distributed storage allocations affect the recovery delay in a mobile setting. Specifically, two recovery delay optimization problems are considered for a network of mobile storage nodes: the maximization of the probability of successful recovery by a given deadline, and the minimization of the expected recovery delay. We show that the first problem is closely related to the earlier allocation problem, and solve the second problem completely for the case of symmetric allocations. It turns out that the optimal allocations for the two problems can be quite different. In a simulation study, we evaluated the performance of a simple data dissemination and storage protocol for mobile delay-tolerant networks, and observed that the choice of allocation can have a significant impact on the recovery delay under a variety of scenarios.
Third, we consider a real-time streaming system where messages created at regular time intervals at a source are encoded for transmission to a receiver over a packet erasure link; the receiver must subsequently decode each message within a given delay from its creation time. For erasure models that limit the number of erasures per coding window or per sliding window, and for models with erasure bursts whose maximum length is sufficiently short or long, we show that a time-invariant intrasession code asymptotically achieves the maximum message size among all codes that allow decoding under all admissible erasure patterns. For the bursty erasure model, we also show that diagonally interleaved codes derived from specific systematic block codes are asymptotically optimal over all codes in certain cases. We also study an i.i.d. erasure model in which each transmitted packet is erased independently with the same probability; the objective is to maximize the decoding probability for a given message size. We derive an upper bound on the decoding probability for any time-invariant code, and show that the gap between this bound and the performance of a family of time-invariant intrasession codes is small when the message size and packet erasure probability are small. In a simulation study, these codes performed well against a family of random time-invariant convolutional codes under a number of scenarios.
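As a simplified worked example for the i.i.d. model, suppose (an idealized MDS-style assumption, cruder than the intrasession codes analyzed here) that any k of the d packets in a decoding window suffice to decode; the decoding probability is then a binomial tail.

```python
from math import comb

# Simplified i.i.d. erasure example: a message coded across d packets so
# that any k suffice to decode; each packet is erased independently with
# probability e. Parameters are illustrative.

def decoding_probability(d, k, e):
    """P[at least k of d packets survive], erasure probability e."""
    return sum(comb(d, r) * (1 - e) ** r * e ** (d - r)
               for r in range(k, d + 1))

print(decoding_probability(d=8, k=6, e=0.05))  # larger message => larger k
```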
Finally, we consider the joint problems of routing and caching for named data networking. We propose a backpressure-based policy that employs virtual interest packets to make routing and caching decisions. In a packet-level simulation, the proposed policy outperformed a basic protocol that combines shortest-path routing with least-recently-used (LRU) cache replacement.
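A bare sketch of a backpressure-style forwarding decision (hypothetical backlog values; the actual virtual-interest policy in this work also couples these counts to caching decisions):

```python
import numpy as np

# Toy backpressure decision over one link: among data objects, forward the
# one with the largest positive virtual-interest backlog differential
# between the link's two endpoints.

V_a = np.array([5.0, 2.0, 7.0])   # virtual interest backlogs at node a, per object
V_b = np.array([1.0, 4.0, 6.0])   # backlogs at neighbor b

diff = V_a - V_b
k_star = int(np.argmax(diff))
if diff[k_star] > 0:
    print(f"forward object {k_star} over link (a, b)")
else:
    print("link stays idle")
```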
Abstract:
Sources and effects of astrophysical gravitational radiation are explained briefly to motivate discussion of the Caltech 40 meter antenna, which employs laser interferometry to monitor proper distances between inertial test masses. Practical considerations in construction of the apparatus are described. Redesign of test mass systems has resulted in a reduction of noise from internal mass vibrations by up to two orders of magnitude at some frequencies. A laser frequency stabilization system was developed which corrects the frequency of an argon ion laser to a residual fluctuation level bounded by the spectral density √s_v(f) ≤ 60 µHz/√Hz at fluctuation frequencies near 1.2 kHz. These and other improvements have contributed to reducing the spectral density of equivalent gravitational wave strain noise to √s_h(f) ≈ 10^(-19)/√Hz at these frequencies.
Finally, observations made with the antenna in February and March of 1987 are described. Kilohertz-band gravitational waves produced by the remnant of the recent supernova are shown to be theoretically unlikely at the strength required for confident detection in this antenna (then operating at poorer sensitivity than that quoted above). A search for periodic waves in the recorded data, comprising Fourier analysis of four 10^5-second samples of the antenna strain signal, was used to place new upper limits on periodic gravitational radiation at frequencies between 305 Hz and 5 kHz. In particular, continuous waves of any polarization are ruled out above strain amplitudes of 1.2 × 10^(-18) R.M.S. for waves emanating from the direction of the supernova, and 6.2 × 10^(-19) R.M.S. for waves emanating from the galactic center, between 1.5 and 4 kilohertz. Between 305 Hz and 5 kHz no strains greater than 1.2 × 10^(-17) R.M.S. were detected from either direction. Limitations of the analysis and potential improvements are discussed, as are prospects for future searches.
Abstract:
This work seeks to understand past and present surface conditions on the Moon using two different but complementary approaches: topographic analysis using high-resolution elevation data from recent spacecraft missions, and forward modeling of the dominant agent of lunar surface modification, impact cratering. The first investigation focuses on the global surface roughness of the Moon, using a variety of statistical parameters to explore slopes at different scales and their relation to competing geological processes. We find that highlands topography behaves as a nearly self-similar fractal system on scales of order 100 meters, with a distinct change in this behavior above and below approximately 1 km. Chapter 2 focuses this analysis on two localized regions: the lunar south pole, including Shackleton crater, and the large mare-filled basins on the nearside of the Moon. In particular, we find that differential slope, a statistical measure of roughness related to the curvature of a topographic profile, is extremely useful in distinguishing between geologic units. Chapter 3 introduces a numerical model that simulates a cratered terrain by emplacing features of characteristic shape geometrically, allowing both the topography and surviving rim fragments to be tracked over time. The power spectral density of cratered terrains is estimated numerically from model results and benchmarked against a one-dimensional analytic model. The power spectral slope is observed to vary predictably with the size-frequency distribution of craters, as well as with crater shape. The final chapter employs the rim-tracking feature of the cratered terrain model to analyze the evolving size-frequency distribution of craters under different criteria for identifying "visible" craters from surviving rim fragments. A geometric bias exists that systematically overcounts large or small craters, depending on the rim fraction required to count a given feature as visible or erased.
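A short sketch of the spectral-slope measurement, applied here to a synthetic one-dimensional profile with a known red spectrum rather than to model terrain output:

```python
import numpy as np

# Estimate the power spectral slope of a 1-D topographic profile with a
# periodogram. The synthetic profile below is built to have PSD ~ f^(-2).

rng = np.random.default_rng(1)
n = 4096
freqs = np.fft.rfftfreq(n, d=1.0)[1:]

# shape white noise in Fourier space: amplitude ~ f^-1 => power ~ f^-2
spectrum = rng.standard_normal(len(freqs)) + 1j * rng.standard_normal(len(freqs))
spectrum *= freqs ** (-1.0)
profile = np.fft.irfft(np.concatenate(([0], spectrum)), n)

psd = np.abs(np.fft.rfft(profile))[1:] ** 2        # periodogram
slope, _ = np.polyfit(np.log(freqs), np.log(psd), 1)
print("fitted spectral slope:", slope)             # should be near -2
```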
Abstract:
The 1994 Mw 6.7 Northridge and 1995 Mw 6.9 Kobe earthquakes exposed an unexpected flaw in steel moment-frame buildings: the commonly used welded unreinforced flange, bolted web connections were observed to experience brittle fractures in a number of buildings, even at low levels of seismic demand. A majority of these buildings have not been retrofitted and may be susceptible to structural collapse in a major earthquake.
This dissertation presents a case study of retrofitting a 20-story pre-Northridge steel moment-frame building. Twelve retrofit schemes are developed that span a range of degrees of intervention. Three retrofitting techniques are considered: upgrading the brittle beam-to-column moment resisting connections, and implementing either conventional or buckling-restrained brace elements within the existing moment-frame bays. The retrofit schemes include some that are designed to the basic safety objective of ASCE-41, Seismic Rehabilitation of Existing Buildings.
Detailed finite element models of the baseline building and the retrofit schemes are constructed. The models account for brittle beam-to-column moment resisting connection fractures, column splice fractures, column baseplate fractures, accidental contributions from "simple" non-moment resisting beam-to-column connections to the lateral force-resisting system, and composite action of beams with the overlying floor system. In addition, foundation interaction is included through nonlinear translational springs underneath basement columns.
To investigate the effectiveness of the retrofit schemes, the building models are analyzed under ground motions from three large-magnitude simulated earthquakes that cause intense shaking in the greater Los Angeles metropolitan area, and under recorded ground motions from actual earthquakes. Retrofit schemes that convert the existing moment-frames into braced-frames by implementing either conventional or buckling-restrained braces are found to be effective in limiting structural damage and mitigating structural collapse. In the three simulated earthquakes, a 20% chance of simulated collapse is reached at a peak ground velocity (PGV) of around 0.6 m/s for the baseline model, but at a PGV of around 1.8 m/s for some of the retrofit schemes. However, conventional braces are observed to deteriorate rapidly; hence, if a braced-frame that employs conventional braces survives a large earthquake, it is questionable how much service the braces can provide in potential aftershocks.
Abstract:
High-resolution orbital and in situ observations of the Martian surface acquired during the past two decades provide the opportunity to study the rock record of Mars at an unprecedented level of detail. This dissertation consists of four studies whose common goal is to establish new standards for the quantitative analysis of visible and near-infrared data from the surface of Mars. Through the compilation of global image inventories, application of stratigraphic and sedimentologic statistical methods, and use of laboratory analogs, this dissertation provides insight into the history of past depositional and diagenetic processes on Mars. The first study presents a global inventory of stratified deposits observed in images from the High Resolution Imaging Science Experiment (HiRISE) camera on board the Mars Reconnaissance Orbiter. This work uses the widespread coverage of high-resolution orbital images to make global-scale observations about the processes controlling sediment transport and deposition on Mars. The next chapter presents a study of bed thickness distributions in Martian sedimentary deposits, showing how statistical methods can be used to establish quantitative criteria for evaluating the depositional history of stratified deposits observed in orbital images. The third study tests the ability of spectral mixing models to obtain quantitative mineral abundances from near-infrared reflectance spectra of laboratory mixtures of clays and sulfates, for application to the analysis of orbital spectra of sedimentary deposits on Mars. The final study employs a statistical analysis of the size, shape, and distribution of nodules observed by the Mars Science Laboratory Curiosity rover team in the Sheepbed mudstone at Yellowknife Bay in Gale crater. This analysis is used to evaluate hypotheses for nodule formation and to gain insight into the diagenetic history of an ancient habitable environment on Mars.
Abstract:
We present a complete system for Spectral Cauchy characteristic extraction (Spectral CCE). Implemented in C++ within the Spectral Einstein Code (SpEC), the method employs numerous innovative algorithms to efficiently calculate the Bondi strain, news, and flux.
Spectral CCE was envisioned to ensure physically accurate gravitational waveforms computed for the Laser Interferometer Gravitational-Wave Observatory (LIGO) and similar experiments, while working toward a template bank of more than a thousand waveforms spanning the binary black hole (BBH) problem's seven-dimensional parameter space.
The Bondi strain, news, and flux are physical quantities central to efforts to understand and detect astrophysical gravitational wave sources within the Simulating eXtreme Spacetimes (SXS) collaboration, with the ultimate aim of providing the first strong-field probe of the Einstein field equations.
In a series of included papers, we demonstrate stability, convergence, and gauge invariance. We also demonstrate agreement between Spectral CCE and the legacy Pitt null code, while achieving a factor of 200 improvement in computational efficiency.
Spectral CCE represents a significant computational advance. It is the foundation upon which further capability will be built, specifically enabling the complete calculation of junk-free, gauge-free, and physically valid waveform data on the fly within SpEC.
Abstract:
The early stage of laminar-turbulent transition in a hypervelocity boundary layer is studied using a combination of modal linear stability analysis, transient growth analysis, and direct numerical simulation. Modal stability analysis is used to clarify the behavior of first and second mode instabilities on flat plates and sharp cones for a wide range of high enthalpy flow conditions relevant to experiments in impulse facilities. Vibrational nonequilibrium is included in this analysis, its influence on the stability properties is investigated, and simple models for predicting when it is important are described.
Transient growth analysis is used to determine the optimal initial conditions that lead to the largest possible energy amplification within the flow. Such analysis is performed for both spatially and temporally evolving disturbances. The analysis again targets flows that have large stagnation enthalpy, such as those found in shock tunnels, expansion tubes, and atmospheric flight at high Mach numbers, and clarifies the effects of Mach number and wall temperature on the amplification achieved. Direct comparisons between modal and non-modal growth are made to determine the relative importance of these mechanisms under different flow regimes.
Conventional stability analysis employs the assumption that disturbances evolve with either a fixed frequency (spatial analysis) or a fixed wavenumber (temporal analysis). Direct numerical simulations are employed to relax these assumptions and investigate the downstream propagation of wave packets that are localized in space and time, and hence contain a distribution of frequencies and wavenumbers. Such wave packets are commonly observed in experiments and hence their amplification is highly relevant to boundary layer transition prediction. It is demonstrated that such localized wave packets experience much less growth than is predicted by spatial stability analysis, and therefore it is essential that the bandwidth of localized noise sources that excite the instability be taken into account in making transition estimates. A simple model based on linear stability theory is also developed which yields comparable results with an enormous reduction in computational expense. This enables the amplification of finite-width wave packets to be taken into account in transition prediction.
Abstract:
Ordered granular systems have been a subject of active research for decades. Due to their rich dynamic response and nonlinearity, ordered granular systems have been suggested for several applications, such as solitary wave focusing, acoustic signal manipulation, and vibration absorption. Most fundamental research on ordered granular systems has focused on macro-scale examples, yet most engineering applications require these systems to operate at much smaller scales. Very little is known about the response of micro-scale granular systems, primarily because of the difficulty of realizing reliable, quantitative experiments, which stems from the discrete nature of granular materials and their highly nonlinear inter-particle contact forces.
In this work, we investigate the physics of ordered micro-granular systems using an innovative experimental platform that allows us to assemble, excite, and characterize ordered micro-granular systems. The platform employs a laser system to deliver impulses with controlled momentum and incorporates non-contact measurement apparatuses to detect the particles’ displacement and velocity. We demonstrated the capability of the laser system to excite systems of dry (stainless steel particles of radius 150 micrometers) and wet (silica particles of radius 3.69 micrometers, immersed in fluid) micro-particles, and then analyzed the stress propagation through these systems.
We derived the equations of motion governing the dynamic response of dry and wet particles on a substrate, and validated them in experiments. We then measured the losses in these systems and characterized the collision and friction between two micro-particles. We studied wave propagation in one-dimensional dry chains of micro-particles as well as in two-dimensional colloidal systems immersed in fluid, and investigated the influence of defects on wave propagation in the one-dimensional systems. Finally, we characterized the wave attenuation and its relation to the viscosity of the surrounding fluid, and performed computer simulations to establish a model that captures the observed response.
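A minimal numerical sketch of the dry one-dimensional case, assuming nondimensional Hertzian contacts and neglecting the substrate interaction and dissipation treated in this work:

```python
import numpy as np
from scipy.integrate import solve_ivp

# 1-D chain of identical elastic spheres with Hertzian contacts
# (F = A * overlap^1.5), struck at one end by an impulse. Parameters are
# arbitrary nondimensional values, not the experimental ones.

N, A, m = 20, 1.0, 1.0

def rhs(t, y):
    x, v = y[:N], y[N:]
    overlap = np.maximum(x[:-1] - x[1:], 0.0)   # nonzero only in compression
    f = A * overlap ** 1.5                       # Hertzian contact force
    acc = np.zeros(N)
    acc[:-1] -= f / m                            # reaction on left particle
    acc[1:] += f / m                             # push on right particle
    return np.concatenate([v, acc])

y0 = np.zeros(2 * N)
y0[N] = 1.0                                      # impulse: first particle moving
sol = solve_ivp(rhs, (0.0, 50.0), y0, max_step=0.01)
print("peak velocity of last particle:", sol.y[-1].max())
```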
The findings of the study offer the first systematic experimental and numerical analysis of wave propagation through ordered systems of micro-particles. The experimental system designed in this work provides the necessary tools for further fundamental studies of wave propagation in both granular and colloidal systems.
Abstract:
The first part of this thesis combines Bolocam observations of the thermal Sunyaev-Zel’dovich (SZ) effect at 140 GHz with X-ray observations from Chandra, strong lensing data from the Hubble Space Telescope (HST), and weak lensing data from HST and Subaru to constrain parametric models for the distribution of dark and baryonic matter in a sample of six massive, dynamically relaxed galaxy clusters. For five of the six clusters, the full multiwavelength dataset is well described by a relatively simple model that assumes spherical symmetry, hydrostatic equilibrium, and entirely thermal pressure support. The multiwavelength analysis yields considerably better constraints on the total mass and concentration than analysis of any one dataset individually. The subsample of five galaxy clusters is used to place an upper limit on the fraction of pressure support in the intracluster medium (ICM) due to nonthermal processes, such as turbulent and bulk flow of the gas. We constrain the nonthermal pressure fraction at r500c to be less than 0.11 at 95% confidence, where r500c refers to the radius at which the average enclosed density is 500 times the critical density of the Universe. This result is in tension with state-of-the-art hydrodynamical simulations, which predict a nonthermal pressure fraction of approximately 0.25 at r500c for the clusters in this sample.
The second part of this thesis focuses on the characterization of the Multiwavelength Sub/millimeter Inductance Camera (MUSIC), a photometric imaging camera that was commissioned at the Caltech Submillimeter Observatory (CSO) in 2012. MUSIC is designed to have a 14 arcminute, diffraction-limited field of view populated with 576 spatial pixels that are simultaneously sensitive to four bands at 150, 220, 290, and 350 GHz. It is well suited for studies of dusty star forming galaxies, galaxy clusters via the SZ effect, and galactic star formation. MUSIC employs a number of novel detector technologies: broadband phased arrays of slot dipole antennas for beam formation, on-chip lumped element filters for band definition, and Microwave Kinetic Inductance Detectors (MKIDs) for transduction of incoming light into electrical signals. MKIDs are superconducting micro-resonators coupled to a feedline. Incoming light breaks apart Cooper pairs in the superconductor, causing a change in the quality factor and resonant frequency of the resonator, which is read out as amplitude and phase modulation of a microwave probe signal centered on the resonant frequency. By tuning each resonator to a slightly different frequency and sending out a superposition of probe signals, hundreds of detectors can be read out on a single feedline. This natural capability for large-scale, frequency-domain multiplexing, combined with relatively simple fabrication, makes MKIDs a promising low-temperature detector for future kilopixel sub/millimeter instruments. There is also considerable interest in using MKIDs for optical through near-infrared spectrophotometry due to their fast microsecond response time and modest energy resolution. In order to optimize the MKID design to obtain suitable performance for any particular application, it is critical to have a well-understood physical model for the detectors and the sources of noise to which they are susceptible.

MUSIC has collected many hours of on-sky data with over 1000 MKIDs. This work studies the performance of the detectors in the context of one such physical model. Chapter 2 describes the theoretical model for the responsivity and noise of MKIDs. Chapter 3 outlines the set of measurements used to calibrate this model for the MUSIC detectors. Chapter 4 presents the resulting estimates of the spectral response, optical efficiency, and on-sky loading; the measured detector response to Uranus is compared to the calibrated model prediction in order to determine how well the model describes the propagation of signal through the full instrument. Chapter 5 examines the noise present in the detector timestreams during recent science observations. Noise due to fluctuations in atmospheric emission dominates at long timescales (frequencies below 0.5 Hz), while fluctuations in the amplitude and phase of the microwave probe signal due to the readout electronics contribute significant 1/f and drift-type noise at shorter timescales. The atmospheric noise is removed by creating a template for the fluctuations in atmospheric emission from weighted averages of the detector timestreams; the electronics noise is removed by using probe signals centered off resonance to construct templates for the amplitude and phase fluctuations. The algorithms that perform the atmospheric and electronic noise removal are described. After removal, we find good agreement between the observed residual noise and our expectation for intrinsic detector noise over a significant fraction of the signal bandwidth.
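A schematic of the atmospheric-template removal step, using synthetic timestreams in place of MUSIC data (gains, noise levels, and weights are illustrative):

```python
import numpy as np

# Template-based common-mode removal: build an atmospheric template from a
# weighted average of detector timestreams, then subtract each detector's
# best-fit scaling of that template.

rng = np.random.default_rng(2)
n_det, n_samp = 100, 2000

atmosphere = np.cumsum(rng.standard_normal(n_samp)) * 0.05   # slow common drift
gains = 1.0 + 0.1 * rng.standard_normal(n_det)               # per-detector response
data = gains[:, None] * atmosphere + 0.1 * rng.standard_normal((n_det, n_samp))

weights = 1.0 / data.var(axis=1)                 # inverse-variance weights
template = np.average(data, axis=0, weights=weights)

# least-squares coefficient of the template for each detector, then subtract
coeff = data @ template / (template @ template)
cleaned = data - coeff[:, None] * template
print("variance before/after:", data.var(), cleaned.var())
```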
Abstract:
The sun has the potential to power the Earth's total energy needs, but electricity from solar power still constitutes an extremely small fraction of our power generation because of its high cost relative to traditional energy sources. The cost of solar must therefore be reduced to realize a more sustainable future, which can be achieved by significantly increasing the efficiency of modules that convert solar radiation to electricity. In this thesis, we consider several strategies to improve the device and photonic design of solar modules to achieve record, ultrahigh (> 50%) module efficiencies. First, we investigate the potential of a new passivation treatment, trioctylphosphine sulfide (TOP:S), to increase the performance of small GaAs solar cells for cheaper and more durable modules. We show that small cells (mm²), which currently suffer a significant efficiency decrease (~ 5%) compared to larger cells (cm²) because of their higher fraction of recombination-active sidewall surface, can achieve significantly higher efficiencies with effective passivation of the sidewalls. We experimentally validate the passivation qualities of the TOP:S treatment through four independent studies and show that this facile treatment can enable efficient small devices. Then, we discuss our efforts toward the design and prototyping of a spectrum-splitting module that employs optical elements to divide the incident spectrum into different color bands, which allows for higher efficiencies than traditional methods. We present a design, the polyhedral specular reflector, that has the potential for > 50% module efficiencies even with realistic losses from combined optics, cell, and electrical models. Prototyping of one of these designs using glass concentrators yields an optical module whose combined spectrum splitting and concentration should correspond to a record module efficiency of 42%. Finally, we consider how the manipulation of radiatively emitted photons from subcells in multijunction architectures can be used to achieve even higher efficiencies than previously thought, motivating the joint optimization of incident and radiatively emitted photons in future high-efficiency designs. In this thesis work, we explore novel device and photonic designs that represent a significant departure from current solar cell manufacturing techniques and ultimately show the potential for much higher solar cell efficiencies.