13 results for Factorial experiment designs
in CaltechTHESIS
Abstract:
A novel spectroscopy of trapped ions is proposed which will bring single-ion detection sensitivity to the observation of magnetic resonance spectra. The approaches developed here are aimed at resolving one of the fundamental problems of molecular spectroscopy, the apparent incompatibility in existing techniques between high information content (and therefore good species discrimination) and high sensitivity. Methods for studying both electron spin resonance (ESR) and nuclear magnetic resonance (NMR) are designed. They assume established methods for trapping ions in high magnetic field and observing the trapping frequencies with high resolution (<1 Hz) and sensitivity (single ion) by electrical means. The introduction of a magnetic bottle field gradient couples the spin and spatial motions together and leads to a small spin-dependent force on the ion, which has been exploited by Dehmelt to observe directly the perturbation of the ground-state electron's axial frequency by its spin magnetic moment.
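To make that coupling concrete, the textbook estimate runs as follows (a sketch under the standard assumption of a quadratic bottle field, not the dissertation's full treatment): with B_z(z) = B_0 + B_2 z^2 and magnetic energy U = -μ_z B_z(z), the axial equation of motion becomes

$$m\ddot{z} = -m\omega_z^2 z + 2\mu_z B_2 z \quad\Longrightarrow\quad \omega_z' \approx \omega_z - \frac{\mu_z B_2}{m\,\omega_z},$$

so a spin flip (which changes μ_z by 2μ) shifts the axial frequency by roughly 2μB_2/(mω_z). This shift is tiny for nuclear moments and heavy ions, which is precisely the difficulty the innovations below address.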
A series of fundamental innovations is described in order to extend magnetic resonance to the higher masses of molecular ions (100 amu ≈ 2x10^5 electron masses) and smaller magnetic moments (nuclear moments ≈ 10^(-3) of the electron moment). First, it is demonstrated how time-domain trapping frequency observations before and after magnetic resonance can be used to make cooling of the particle to its ground state unnecessary. Second, adiabatic cycling of the magnetic bottle off between detection periods is shown to be practical and to allow high-resolution magnetic resonance to be encoded pointwise as the presence or absence of trapping frequency shifts. Third, methods of inducing spin-dependent work on the ion orbits with magnetic field gradients and Larmor frequency irradiation are proposed which greatly amplify the attainable shifts in trapping frequency.
The dissertation explores the basic concepts behind ion trapping, adopting a variety of classical, semiclassical, numerical, and quantum mechanical approaches to derive spin-dependent effects, design experimental sequences, and corroborate results from one approach with those from another. The first proposal presented builds on Dehmelt's experiment by combining a "before and after" detection sequence with novel signal processing to reveal ESR spectra. A more powerful technique for ESR is then designed which uses axially synchronized spin transitions to perform spin-dependent work in the presence of a magnetic bottle, which also converts axial amplitude changes into cyclotron frequency shifts. A third use of the magnetic bottle is to selectively trap ions with small initial kinetic energy. A dechirping algorithm corrects for undesired frequency shifts associated with damping by the measurement process.
The most general approach presented is spin-locked internally resonant ion cyclotron excitation, a true continuous Stern-Gerlach effect. A magnetic field gradient modulated at both the Larmor and cyclotron frequencies is devised which leads to cyclotron acceleration proportional to the transverse magnetic moment of a coherent state of the particle and radiation field. A preferred method of using this to observe NMR as an axial frequency shift is described in detail. In the course of this derivation, a new quantum mechanical description of ion cyclotron resonance is presented which is easily combined with spin degrees of freedom to provide a full description of the proposals.
Practical, technical, and experimental issues surrounding the feasibility of the proposals are addressed throughout the dissertation. Numerical ion trajectory simulations and analytical models are used to predict the effectiveness of the new designs as well as their sensitivity and resolution. These checks on the methods proposed provide convincing evidence of their promise in extending the wealth of magnetic resonance information to the study of collisionless ions via single-ion spectroscopy.
Abstract:
Computer science and electrical engineering have been the great success story of the twentieth century. The neat modularity and mapping of a language onto circuits has led to robots on Mars, desktop computers and smartphones. But these devices are not yet able to do some of the things that life takes for granted: repair a scratch, reproduce, regenerate, or grow exponentially fast, all while remaining functional.
This thesis explores and develops algorithms, molecular implementations, and theoretical proofs in the context of “active self-assembly” of molecular systems. The long-term vision of active self-assembly is the theoretical and physical implementation of materials that are composed of reconfigurable units with the programmability and adaptability of biology’s numerous molecular machines. En route to this goal, we must first find a way to overcome the memory limitations of molecular systems, and to discover the limits of complexity that can be achieved with individual molecules.
One of the main thrusts in molecular programming is to use computer science as a tool for figuring out what can be achieved. While molecular systems that are Turing-complete have been demonstrated [Winfree, 1996], these systems still cannot achieve some of the feats biology has achieved.
One might think that because a system is Turing-complete, capable of computing “anything,” it can do any arbitrary task. But while it can simulate any digital computational problem, there are many behaviors that are not “computations” in a classical sense, and cannot be directly implemented. Examples include exponential growth and molecular motion relative to a surface.
Passive self-assembly systems cannot implement these behaviors because (a) molecular motion relative to a surface requires a source of fuel that is external to the system, and (b) passive systems are too slow to assemble exponentially-fast-growing structures. We call these behaviors “energetically incomplete” programmable behaviors. This class of behaviors includes any behavior where a passive physical system simply does not have enough physical energy to perform the specified tasks in the requisite amount of time.
As we will demonstrate and prove, a sufficiently expressive implementation of an “active” molecular self-assembly approach can achieve these behaviors. Using an external source of fuel solves part of the problem, so the system is not “energetically incomplete.” But the programmable system also needs to have sufficient expressive power to achieve the specified behaviors. Perhaps surprisingly, some of these systems do not even require Turing completeness to be sufficiently expressive.
Building on a large variety of work by other scientists in the fields of DNA nanotechnology, chemistry and reconfigurable robotics, this thesis introduces several research contributions in the context of active self-assembly.
We show that simple primitives such as insertion and deletion are able to generate complex and interesting results such as the growth of a linear polymer in logarithmic time and the ability of a linear polymer to treadmill. To this end we developed a formal model for active self-assembly that is directly implementable with DNA molecules. We show that this model is computationally equivalent to a machine that generates languages stronger than the regular languages and, at most, as strong as the context-free grammars. This is a great advance in the theory of active self-assembly, as prior models were either entirely theoretical or only implementable in the context of macro-scale robotics.
We developed a chain reaction method for the autonomous exponential growth of a linear DNA polymer. Our method is based on the insertion of molecules into the assembly, which generates two new insertion sites for every initial one employed. The building of a line in logarithmic time is a first step toward building a shape in logarithmic time. We demonstrate the first construction of a synthetic linear polymer that grows exponentially fast via insertion. We show that monomer molecules are converted into the polymer in logarithmic time via spectrofluorimetry and gel electrophoresis experiments. We also demonstrate the division of these polymers via the addition of a single DNA complex that competes with the insertion mechanism. This shows the growth of a population of polymers in logarithmic time. We characterize the DNA insertion mechanism that we utilize in Chapter 4. We experimentally demonstrate that we can control the kinetics of this reaction over at least seven orders of magnitude, by programming the sequences of DNA that initiate the reaction.
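To see why insertion yields logarithmic-time growth, consider the following toy synchronous model (an illustration only, not the molecular mechanism of Chapter 4). Every insertion event exposes two new insertion sites, so both the number of sites and the polymer length roughly double each round:

```python
# Toy model of insertion-based growth: a polymer starts as a single unit with
# one insertion site; every insertion consumes a site and exposes two new ones.
# Length doubles each synchronous round, so length N is reached in ~log2(N)
# rounds, versus ~N steps for growth by end-addition.

def rounds_to_length(target_length: int) -> int:
    """Number of synchronous insertion rounds to reach target_length."""
    length, sites, rounds = 1, 1, 0
    while length < target_length:
        length += sites   # every open site accepts one monomer insertion
        sites *= 2        # each insertion exposes two new insertion sites
        rounds += 1
    return rounds

for n in (10, 1_000, 1_000_000):
    print(n, rounds_to_length(n))   # 10 -> 4, 1000 -> 10, 1000000 -> 20 rounds
```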
In addition, we review co-authored work on programming molecular robots using prescriptive landscapes of DNA origami; this was the first microscopic demonstration of programming a molecular robot to walk on a 2-dimensional surface. We developed a snapshot method for imaging these random walking molecular robots and a CAPTCHA-like analysis method for difficult-to-interpret imaging data.
Abstract:
Signal processing techniques play important roles in the design of digital communication systems. These include information manipulation, transmitter signal processing, channel estimation, channel equalization and receiver signal processing. By interacting with communication theory and system implementation technologies, signal processing specialists develop efficient schemes for various communication problems by wisely exploiting mathematical tools such as analysis, probability theory, matrix theory, and optimization theory. In recent years, researchers realized that multiple-input multiple-output (MIMO) channel models are applicable to a wide range of different physical communications channels. Using elegant matrix-vector notation, many MIMO transceiver (including the precoder and equalizer) design problems can be solved by matrix and optimization theory. Furthermore, researchers showed that majorization theory and matrix decompositions, such as the singular value decomposition (SVD), geometric mean decomposition (GMD) and generalized triangular decomposition (GTD), provide unified frameworks for solving many of the point-to-point MIMO transceiver design problems.
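As a concrete illustration of how such a decomposition yields a transceiver, here is a minimal numpy sketch of the classical SVD scheme (channel and data are invented placeholders; the GMD/GTD designs discussed below refine this idea):

```python
# SVD transceiver sketch: for y = Hx + n, precoding with V and equalizing with
# U^H (where H = U diag(s) V^H) turns the MIMO channel into parallel scalar
# subchannels with gains s_i.
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, s, Vh = np.linalg.svd(H)                      # H = U @ np.diag(s) @ Vh

data = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=4)  # QPSK-like
x = Vh.conj().T @ data                           # precode with V
z = U.conj().T @ (H @ x)                         # pass through H, equalize

print(np.allclose(z, s * data))                  # True: parallel subchannels
```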
In this thesis, we consider the transceiver design problems for linear time invariant (LTI) flat MIMO channels, linear time-varying narrowband MIMO channels, flat MIMO broadcast channels, and doubly selective scalar channels. Additionally, the channel estimation problem is also considered. The main contributions of this dissertation are the development of new matrix decompositions, and the uses of the matrix decompositions and majorization theory toward the practical transmit-receive scheme designs for transceiver optimization problems. Elegant solutions are obtained, novel transceiver structures are developed, ingenious algorithms are proposed, and performance analyses are derived.
The first part of the thesis focuses on transceiver design for LTI flat MIMO channels. We propose a novel matrix decomposition which decomposes a complex matrix as a product of several sets of semi-unitary matrices and upper triangular matrices in an iterative manner. The complexity of the new decomposition, the generalized geometric mean decomposition (GGMD), is always less than or equal to that of the geometric mean decomposition (GMD). The optimal GGMD parameters which yield the minimal complexity are derived. Based on the channel state information (CSI) at both the transmitter (CSIT) and receiver (CSIR), GGMD is used to design a butterfly-structured decision feedback equalizer (DFE) MIMO transceiver which achieves the minimum average mean square error (MSE) under the total transmit power constraint. A novel iterative detection algorithm for the receiver is also proposed. For application to cyclic prefix (CP) systems, in which the SVD of the equivalent channel matrix can be easily computed, the proposed GGMD transceiver has a K/log_2(K)-fold complexity advantage over the GMD transceiver, where K is the number of data symbols per data block and is a power of 2. The performance analysis shows that the GGMD DFE transceiver can convert a MIMO channel into a set of parallel subchannels with the same bias and signal-to-interference-plus-noise ratios (SINRs). Hence, the average bit error rate (BER) is automatically minimized without the need for bit allocation. Moreover, the proposed transceiver can achieve the channel capacity simply by applying independent scalar Gaussian codes of the same rate to the subchannels.
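As a back-of-envelope illustration of the quoted K/log_2(K) factor (assuming, purely for illustration, per-block costs scaling as K^2 for the GMD design and K·log_2(K) for GGMD; the thesis's actual operation counts are more detailed):

```python
# Illustrative cost-model arithmetic only: with assumed per-block costs
# c*K^2 (GMD) and c*K*log2(K) (GGMD), the advantage ratio is K/log2(K).
import math

for K in (64, 256, 1024):                 # data symbols per block, powers of 2
    ratio = (K * K) / (K * math.log2(K))  # assumed GMD cost / assumed GGMD cost
    print(K, ratio, K / math.log2(K))     # the two expressions coincide
```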
In the second part of the thesis, we focus on MIMO transceiver design for slowly time-varying MIMO channels with the zero-forcing or MMSE criterion. Even though the GGMD/GMD DFE transceivers work for slowly time-varying MIMO channels by exploiting the instantaneous CSI at both ends, their performance is by no means optimal, since the temporal diversity of the time-varying channels is not exploited. Based on the GTD, we develop the space-time GTD (ST-GTD) for the decomposition of linear time-varying flat MIMO channels. Under the assumption that CSIT, CSIR and channel prediction are available, we use the proposed ST-GTD to develop space-time geometric mean decomposition (ST-GMD) DFE transceivers under the zero-forcing or MMSE criterion. Under perfect channel prediction, the new system minimizes both the average MSE at the detector in each space-time (ST) block (which consists of several coherence blocks), and the average per-ST-block BER in the moderately high SNR region. Moreover, the ST-GMD DFE transceiver designed under the MMSE criterion maximizes the Gaussian mutual information over the equivalent channel seen by each ST-block. In general, the newly proposed transceivers perform better than the GGMD-based systems since the superimposed temporal precoder is able to exploit the temporal diversity of time-varying channels. For practical applications, a novel ST-GTD based system which does not require channel prediction but shares the same asymptotic BER performance with the ST-GMD DFE transceiver is also proposed.
The third part of the thesis considers two quality of service (QoS) transceiver design problems for flat MIMO broadcast channels. The first one is the power minimization problem (min-power) with a total bitrate constraint and per-stream BER constraints. The second problem is the rate maximization problem (max-rate) with a total transmit power constraint and per-stream BER constraints. Exploiting a particular class of joint triangularization (JT), we are able to jointly optimize the bit allocation and the broadcast DFE transceiver for the min-power and max-rate problems. The resulting optimal designs are called the minimum power JT broadcast DFE transceiver (MPJT) and the maximum rate JT broadcast DFE transceiver (MRJT), respectively. In addition to the optimal designs, two suboptimal designs based on QR decomposition are proposed. They are realizable for an arbitrary number of users.
Finally, we investigate the design of a discrete Fourier transform (DFT) modulated filterbank transceiver (DFT-FBT) for linear time-varying (LTV) scalar channels. For both known LTV channels and channels known only through wide-sense stationary uncorrelated scattering (WSSUS) statistics, we show how to optimize the transmitting and receiving prototypes of a DFT-FBT such that the SINR at the receiver is maximized. Also, a novel pilot-aided subspace channel estimation algorithm is proposed for orthogonal frequency division multiplexing (OFDM) systems with quasi-stationary multi-path Rayleigh fading channels. Using the concept of a difference co-array, the new technique can construct M^2 co-pilots from M physical pilot tones with alternating pilot placement. Subspace methods, such as MUSIC and ESPRIT, can be used to estimate the multipath delays, and the number of identifiable paths is theoretically up to O(M^2). With the delay information, an MMSE estimator of the channel frequency response is derived. Simulations show that the proposed method outperforms the conventional subspace channel estimator when the number of multipaths is greater than or equal to the number of physical pilots minus one.
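The co-pilot counting works like a difference co-array: from M physical pilot positions, form all pairwise differences; with a suitable sparse placement the number of distinct differences grows as O(M^2). A minimal sketch (pilot positions invented for illustration):

```python
# Difference co-array sketch: M physical pilots at positions p yield the set
# {p_i - p_j}, whose size can approach M*(M-1)+1 distinct lags for sparse
# placements -- the O(M^2) "co-pilots" exploited by the subspace estimator.
pilots = [0, 1, 4, 9, 11]                     # M = 5 illustrative positions
lags = {a - b for a in pilots for b in pilots}
print(len(pilots), len(lags))                 # 5 pilots -> 21 distinct lags
```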
Abstract:
The solar resource is the most abundant renewable resource on earth, yet it is currently exploited with relatively low efficiencies. To make solar energy more affordable, we can either reduce the cost of the cell or increase the efficiency with a similar cost cell. In this thesis, we consider several different optical approaches to achieve these goals. First, we consider a ray optical model for light trapping in silicon microwires. With this approach, much less material can be used, allowing for a cost savings. We next focus on reducing the escape of radiatively emitted and scattered light from the solar cell. With this angle restriction approach, light can only enter and escape the cell near normal incidence, allowing for thinner cells and higher efficiencies. In Auger-limited GaAs, we find that efficiencies greater than 38% may be achievable, a significant improvement over the current world record. To experimentally validate these results, we use a Bragg stack to restrict the angles of emitted light. Our measurements show an increase in voltage and a decrease in dark current, as less radiatively emitted light escapes. While the results in GaAs are interesting as a proof of concept, GaAs solar cells are not currently made on the production scale for terrestrial photovoltaic applications. We therefore explore the application of angle restriction to silicon solar cells. While our calculations show that Auger-limited cells give efficiency increases of up to 3% absolute, we also find that current amorphous silicon/crystalline silicon heterojunction with intrinsic thin layer (HIT) cells give significant efficiency gains with angle restriction of up to 1% absolute. Thus, angle restriction has the potential for unprecedented one-sun efficiencies in GaAs, but may also be applicable to current silicon solar cell technology. Finally, we consider spectrum splitting, where optics direct light in different wavelength bands to solar cells with band gaps tuned to those wavelengths. This approach has the potential for very high efficiencies and excellent annual power production. Using a light-trapping filtered concentrator approach, we design filter elements and find an optimal design. Thus, this thesis explores silicon microwires, angle restriction, and spectral splitting as different optical approaches for improving the cost and efficiency of solar cells.
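For context, the generic detailed-balance arithmetic behind angle restriction (a standard estimate, not the thesis's device-specific calculation): restricting emission to a cone of half-angle θ suppresses the radiative dark current by a factor of sin²θ, so the open-circuit voltage ideally rises by

$$\Delta V_{oc} \approx \frac{kT}{q}\,\ln\!\frac{1}{\sin^2\theta},$$

while the ray-optic light-trapping limit increases from 4n² to 4n²/sin²θ, which is why angle restriction also permits thinner cells.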
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests: Bayesian Rapid Optimal Adaptive Designs (BROAD), which sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that surprisingly these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD which leads to orders-of-magnitude speedup over other methods.
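A toy version of the adaptive loop just described (plain information gain stands in for the EC2 objective, and the theories, tests, and likelihoods are invented placeholders):

```python
# Toy adaptive-design loop: maintain a posterior over candidate theories and
# greedily pick the test with the largest expected reduction in entropy.
# BROAD replaces this scoring function with the EC2 objective; information
# gain is used here only as a simple stand-in.
import math

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def expected_info_gains(prior, likelihood):
    """likelihood[h][t] = P(subject picks option A | theory h, test t)."""
    n_tests = len(next(iter(likelihood.values())))
    gains = []
    for t in range(n_tests):
        gain = entropy(prior.values())
        for picks_a in (True, False):            # two possible responses
            p_resp = sum(prior[h] * (likelihood[h][t] if picks_a
                                     else 1 - likelihood[h][t]) for h in prior)
            if p_resp > 0:
                post = [prior[h] * (likelihood[h][t] if picks_a
                                    else 1 - likelihood[h][t]) / p_resp
                        for h in prior]
                gain -= p_resp * entropy(post)   # expected posterior entropy
        gains.append(gain)
    return gains

prior = {"expected_value": 0.5, "prospect_theory": 0.5}
likelihood = {"expected_value": [0.9, 0.5], "prospect_theory": [0.1, 0.5]}
print(expected_info_gains(prior, likelihood))  # test 0 is far more informative
```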
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment, and sequentially presented choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we find no signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting; hyperbolic discounting; the "present bias" models, namely quasi-hyperbolic (α, β) discounting and fixed-cost discounting; and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for the present bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
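One standard calculation in this spirit (an illustration of how randomness in effective time or rate produces hyperbolic-like behavior, not the thesis's proof): if the exponential discount rate r is uncertain with a Gamma(α, β) distribution, the averaged discount function is

$$D(t)=\int_0^{\infty} e^{-rt}\,\frac{\beta^{\alpha} r^{\alpha-1} e^{-\beta r}}{\Gamma(\alpha)}\,dr=\left(1+\frac{t}{\beta}\right)^{-\alpha},$$

exactly the generalized-hyperbolic form, and any such mixture exhibits the temporal choice inconsistency noted above.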
We also test the predictions of behavioural theories in the "wild". We pay attention to prospect theory, which emerged as the dominant theory in our lab experiments of risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from those the standard rational model predicts. Specifically, loss aversion predicts that when an item is being offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute would increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
In future work, BROAD can be widely applicable for testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
Bulk n-InSb is investigated as a heterodyne detector for the submillimeter wavelength region. Two modes of operation are investigated: (1) the Rollin or hot electron bolometer mode (zero magnetic field), and (2) the Putley mode (quantizing magnetic field). The highlight of the thesis work is the pioneering demonstration of the Putley mode mixer at several frequencies. For example, a double-sideband system noise temperature of about 510K was obtained using an 812 GHz methanol laser for the local oscillator. This performance is at least a factor of 10 more sensitive than any other performance reported to date at the same frequency. In addition, the Putley mode mixer achieved system noise temperatures of 250K at 492 GHz and 350K at 625 GHz. The 492 GHz performance is about 50% better and the 625 GHz about 100% better than the previous best performances established by the Rollin-mode mixer. To achieve these results, it was necessary to design a totally new ultra-low-noise, room-temperature preamp to handle the higher source impedance imposed by Putley mode operation. This preamp has considerably less input capacitance than comparably noisy ambient designs.
In addition to advancing receiver technology, this thesis also presents several novel results regarding the physics of n-InSb at low temperatures. A Fourier transform spectrometer was constructed and used to measure the submillimeter wave absorption coefficient of relatively pure material at liquid helium temperatures and in zero magnetic field. Below 4.2K, the absorption coefficient was found to decrease with frequency much faster than predicted by Drudian theory. Much better agreement with experiment was obtained using a quantum theory based on inverse Bremsstrahlung in a solid. Also, the noise of the Rollin-mode detector at 4.2K was accurately measured and compared with theory. The power spectrum is found to be well fit by a recent theory of non-equilibrium noise due to Mather. Surprisingly, when biased for optimum detector performance, high-purity InSb cooled to liquid helium temperatures generates less noise than that predicted by simple non-equilibrium Johnson noise theory alone. This explains in part the excellent performance of the Rollin-mode detector in the millimeter wavelength region.
Again using the Fourier transform spectrometer, spectra are obtained of the responsivity and direct detection NEP as a function of magnetic field in the range 20-110 cm^(-1). The results show a discernible peak in the detector response at the conduction electron cyclotron resonance frequency for magnetic fields as low as 3 kG at bath temperatures of 2.0K. The spectra also display the well-known peak due to the cyclotron resonance of electrons bound to impurity states. The magnitude of the responsivity at both peaks is roughly constant with magnetic field and is comparable to the low frequency Rollin-mode response. The NEP at the peaks is found to be much better than previous values at the same frequency and comparable to the best long wavelength results previously reported. For example, a value NEP = 4.5x10^(-13) W/Hz^(1/2) is measured at 4.2K, 6 kG and 40 cm^(-1). Study of the responsivity under conditions of impact ionization showed a dramatic disappearance of the impurity electron resonance while the conduction electron resonance remained constant. This observation offers the first concrete evidence that the mobility of an electron in the N=0 and N=1 Landau levels is different. Finally, these direct detection experiments indicate that the excellent heterodyne performance achieved at 812 GHz should be attainable up to frequencies of at least 1200 GHz.
Abstract:
The intensities and relative abundances of galactic cosmic ray protons and antiprotons have been measured with the Isotope Matter Antimatter Experiment (IMAX), a balloon-borne magnet spectrometer. The IMAX payload had a successful flight from Lynn Lake, Manitoba, Canada on July 16, 1992. Particles detected by IMAX were identified by mass and charge via the Cherenkov-rigidity and TOF-rigidity techniques, with a measured rms mass resolution ≤0.2 amu for Z=1 particles.
Cosmic ray antiprotons are of interest because they can be produced by the interactions of high energy protons and heavier nuclei with the interstellar medium as well as by more exotic sources. Previous cosmic ray antiproton experiments have reported an excess of antiprotons over that expected solely from cosmic ray interactions.
Analysis of the flight data has yielded 124405 protons and 3 antiprotons in the energy range 0.19-0.97 GeV at the instrument, 140617 protons and 8 antiprotons in the energy range 0.97-2.58 GeV, and 22524 protons and 5 antiprotons in the energy range 2.58-3.08 GeV. These measurements are a statistical improvement over previous antiproton measurements, and they demonstrate improved separation of antiprotons from the more abundant fluxes of protons, electrons, and other cosmic ray species.
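For scale, the uncorrected ratios implied directly by these raw counts are

$$\frac{3}{124405}\approx 2.4\times 10^{-5},\qquad \frac{8}{140617}\approx 5.7\times 10^{-5},\qquad \frac{5}{22524}\approx 2.2\times 10^{-4},$$

which the background and loss corrections below adjust only modestly.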
When these results are corrected for instrumental and atmospheric background and losses, the ratios at the top of the atmosphere are p̄/p = 3.21(+3.49, -1.97)x10^(-5) in the energy range 0.25-1.00 GeV, p̄/p = 5.38(+3.48, -2.45)x10^(-5) in the energy range 1.00-2.61 GeV, and p̄/p = 2.05(+1.79, -1.15)x10^(-4) in the energy range 2.61-3.11 GeV. The corresponding antiproton intensities, also corrected to the top of the atmosphere, are 2.3(+2.5, -1.4)x10^(-2) (m^2 s sr GeV)^(-1), 2.1(+1.4, -1.0)x10^(-2) (m^2 s sr GeV)^(-1), and 4.3(+3.7, -2.4)x10^(-2) (m^2 s sr GeV)^(-1) for the same energy ranges.
The IMAX antiproton fluxes and antiproton/proton ratios are compared with recent Standard Leaky Box Model (SLBM) calculations of the cosmic ray antiproton abundance. According to this model, cosmic ray antiprotons are secondary cosmic rays arising solely from the interaction of high energy cosmic rays with the interstellar medium. The effects of solar modulation of protons and antiprotons are also calculated, showing that the antiproton/proton ratio can vary by as much as an order of magnitude over the solar cycle. When solar modulation is taken into account, the IMAX antiproton measurements are found to be consistent with the most recent calculations of the SLBM. No evidence is found in the IMAX data for excess antiprotons arising from the decay of galactic dark matter, which had been suggested as an interpretation of earlier measurements. Furthermore, the consistency of the current results with the SLBM calculations suggests that the mean antiproton lifetime is at least as large as the cosmic ray storage time in the galaxy (~10^7 yr, based on measurements of cosmic ray ^(10)Be). Recent measurements by two other experiments are consistent with this interpretation of the IMAX antiproton results.
Abstract:
Over the last several decades there have been significant advances in the study and understanding of light behavior in nanoscale geometries. Entire fields such as those based on photonic crystals, plasmonics and metamaterials have been developed, accelerating the growth of knowledge related to nanoscale light manipulation. Coupled with recent interest in cheap, reliable renewable energy, a new field has blossomed, that of nanophotonic solar cells.
In this thesis, we examine important properties of thin-film solar cells from a nanophotonics perspective. We identify key differences between nanophotonic devices and traditional, thick solar cells. We propose a new way of understanding and describing limits to light trapping and show that certain nanophotonic solar cell designs can have light trapping limits above the so-called ray-optic or ergodic limit. We propose that a necessary requirement for exceeding the traditional light trapping limit is that the active region of the solar cell possess a local density of optical states (LDOS) higher than that of the corresponding bulk material. Additionally, we show that beyond having an increased density of states, the absorber must have an appropriate incoupling mechanism to transfer light from free space into the optical modes of the device. We outline a portfolio of new solar cell designs that have the potential to exceed the traditional light trapping limit and numerically validate our predictions for select cases.
We emphasize the importance of thinking about light trapping in terms of maximizing the optical modes of the device and efficiently coupling light into them from free space. To further explore these two concepts, we optimize patterns of superlattices of air holes in thin slabs of Si and show that by adding a roughened incoupling layer the total absorbed current can be increased synergistically. We suggest that the addition of a random scattering surface to a periodic patterning can increase incoupling by lifting the constraint of selective mode occupation associated with periodic systems.
Lastly, through experiment and simulation, we investigate a potential high-efficiency solar cell architecture that can be improved with the nanophotonic light trapping concepts described in this thesis. Optically thin GaAs solar cells are prepared by the epitaxial liftoff process, in which the cells are removed from their growth substrate and a metallic back reflector is added. A process for depositing large-area nanopatterns on the surface of the cells is developed using nanoimprint lithography and implemented on the thin GaAs cells.
Abstract:
This thesis describes investigations of two classes of laboratory plasmas with rather different properties: partially ionized low pressure radiofrequency (RF) discharges, and fully ionized high density magnetohydrodynamically (MHD)-driven jets. An RF pre-ionization system was developed to enable neutral gas breakdown at lower pressures and create hotter, faster jets in the Caltech MHD-Driven Jet Experiment. The RF plasma source used a custom pulsed 3 kW 13.56 MHz RF power amplifier that was powered by AA batteries, allowing it to safely float at 4-6 kV with the cathode of the jet experiment. The argon RF discharge equilibrium and transport properties were analyzed, and novel jet dynamics were observed.
Although the RF plasma source was conceived as a wave-heated helicon source, scaling measurements and numerical modeling showed that inductive coupling was the dominant energy input mechanism. A one-dimensional time-dependent fluid model was developed to quantitatively explain the expansion of the pre-ionized plasma into the jet experiment chamber. The plasma transitioned from an ionizing phase with depressed neutral emission to a recombining phase with enhanced emission during the course of the experiment, causing fast camera images to be a poor indicator of the density distribution. Under certain conditions, the total visible and infrared brightness and the downstream ion density both increased after the RF power was turned off. The time-dependent emission patterns were used for an indirect measurement of the neutral gas pressure.
The low-mass jets formed with the aid of the pre-ionization system were extremely narrow and collimated near the electrodes, with peak density exceeding that of jets created without pre-ionization. The initial neutral gas distribution prior to plasma breakdown was found to be critical in determining the ultimate jet structure. The visible radius of the dense central jet column was several times narrower than the axial current channel radius, suggesting that the outer portion of the jet must have been force free, with the current parallel to the magnetic field. The studies of non-equilibrium flows and plasma self-organization being carried out at Caltech are relevant to astrophysical jets and fusion energy research.
Abstract:
We carried out quantum mechanics (QM) studies aimed at improving the performance of hydrogen fuel cells. This led to predictions of improved materials, some of which were subsequently validated with experiments by our collaborators.
In part I, the challenge was to find a replacement for the Pt cathode that would lead to improved performance for the Oxygen Reduction Reaction (ORR) while remaining stable under operational conditions and decreasing cost. Our design strategy was to find an alloy with composition Pt3M that would lead to surface segregation such that the top layer would be pure Pt, with the second and subsequent layers richer in M. Under operating conditions we expect the surface to have significant O and/or OH chemisorbed on the surface, and hence we searched for M that would remain segregated under these conditions. Using QM we examined surface segregation for 28 Pt3M alloys, where M is a transition metal. We found that only Pt3Os and Pt3Ir showed significant surface segregation when O and OH are chemisorbed on the catalyst surfaces. This result indicates that Pt3Os and Pt3Ir favor formation of a Pt-skin surface layer structure that would resist acidic electrolyte corrosion in fuel cell operating environments. We chose to focus on Os because the Pt-Ir phase diagram indicated that Pt-Ir cannot form a homogeneous alloy at lower temperatures. To determine the performance for ORR, we used QM to examine all intermediates, reaction pathways, and reaction barriers involved in the processes by which protons from the anode reactions react with O2 to form H2O. These QM calculations used our Poisson-Boltzmann implicit solvation model to include the effects of the solvent (water with dielectric constant 78, at pH 7 and 298K). We found that the rate-determining step (RDS) was the Oad hydration reaction (Oad + H2Oad -> OHad + OHad) in both cases, but that the barrier of 0.50 eV for pure Pt is reduced to 0.48 eV for Pt3Os, which at 80 degrees C would increase the rate by 218%. We collaborated with Pu-Wei Wu's group to carry out experiments, which found that the Pt2Os catalyst treated by the dealloying process showed two-fold higher activity at 25 degrees C than pure Pt and that the alloy had 272% improved stability, validating our theoretical predictions.
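Assuming equal Arrhenius prefactors (a simplification for illustration), the rate enhancement implied by lowering the barrier from 0.50 to 0.48 eV is

$$\frac{k_{\mathrm{Pt_3Os}}}{k_{\mathrm{Pt}}}=\exp\!\left(\frac{0.50-0.48\ \mathrm{eV}}{k_B T}\right)=\exp\!\left(\frac{0.02\ \mathrm{eV}}{k_B T}\right)\approx 2$$

at fuel-cell operating temperatures (k_B T ≈ 0.026-0.030 eV), consistent with the roughly two-fold experimental activity gain.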
We also carried out similar QM studies followed by experimental validation for the Os/Pt core-shell catalyst fabricated by the underpotential deposition (UPD) method. The QM results indicated that the RDS for ORR is a compromise between the OOH formation step (0.37 eV for Pt, 0.23 eV for the Pt2ML/Os core-shell) and the H2O formation step (0.32 eV for Pt, 0.22 eV for the Pt2ML/Os core-shell). We found that Pt2ML/Os has the highest activity (compared to pure Pt and to the Pt3Os alloy) because the 0.37 eV barrier decreases to 0.23 eV. To understand what aspects of the core-shell structure lead to this improved performance, we considered the effect on ORR of compressing the alloy slab to the dimensions of pure Pt. This had little effect, however, leaving the RDS barrier at 0.37 eV. This shows that the ligand effect (the electronic structure modification resulting from the Os substrate) plays a more important role than the strain effect, and is responsible for the improved activity of the core-shell catalyst. Experimental materials characterization confirms the core-shell structure of our catalyst. Electrochemical experiments for Pt2ML/Os/C showed 3.5 to 5 times better ORR activity at 0.9V (vs. NHE) in 0.1M HClO4 solution at 25 degrees C compared to commercially available Pt/C. The excellent correlation between the experimental half-wave potential and the OH binding energies and RDS barriers validates the feasibility of predicting catalyst activity using QM calculations and a simple Langmuir-Hinshelwood model.
In part II, we used QM calculations to study methane steam reforming on Ni-alloy catalyst surfaces for solid oxide fuel cell (SOFC) applications. SOFCs have wide fuel adaptability, but coking and sulfur poisoning reduce their stability. Experimental results suggested that the Ni4Fe alloy improves both activity and stability compared to pure Ni. To understand the atomistic origin of this, we carried out QM calculations on surface segregation and found that the most stable configuration for Ni4Fe has an Fe atom distribution of (0%, 50%, 25%, 25%, 0%) starting at the bottom layer. We calculated the binding of C atoms on the Ni4Fe surface to be 142.9 kcal/mol, which is about 10 kcal/mol weaker than on the pure Ni surface. This weaker C binding energy is expected to make coke formation less favorable, explaining why Ni4Fe has better coking resistance. This result confirms the experimental observation. The reaction energy barriers for CHx decomposition and C binding on various alloy surfaces, Ni4X (X = Fe, Co, Mn, and Mo), showed that Ni4Fe, Ni4Co, and Ni4Mn all have better coking resistance than pure Ni, but that only Ni4Fe and Ni4Mn have (slightly) improved activity compared to pure Ni.
In part III, we used QM to examine proton transport in doped perovskite ceramics. Here we used a 2x2x2 supercell of perovskite with composition Ba8X7M1(OH)1O23, where X = Ce or Zr and M = Y, Gd, or Dy. Thus in each case a 4+ X is replaced by a 3+ M plus a proton on one O. We predicted the barriers for proton diffusion, allowing both intra-octahedron and inter-octahedra proton transfer. Without any restriction, we observed only inter-octahedra proton transfer, with an energy barrier similar to previous computational work but 0.2 eV higher than the experimental result for Y-doped zirconate. When we imposed the restriction that the O_donor-O_acceptor atoms be kept at fixed distances, we found that the barrier differences between cerates/zirconates with various dopants are only 0.02-0.03 eV. To fully address performance one would need to examine proton transfer at grain boundaries, which will require larger-scale ReaxFF reactive dynamics for systems with millions of atoms. The QM calculations used here will be used to train the ReaxFF force field.
Abstract:
Combinatorial configurations known as t-designs are studied. These are pairs ⟨B, ∏⟩, where each element of B is a k-subset of ∏, and each t-subset of ∏ occurs in exactly λ elements of B, for some fixed integers k and λ. A theory of internal structure of t-designs is developed, and it is shown that any t-design can be decomposed in a natural fashion into a sequence of “simple” subdesigns. The theory is quite similar to the analysis of a group with respect to its normal subgroups, quotient groups, and homomorphisms. The analogous concepts of normal subdesigns, quotient designs, and design homomorphisms are all defined and used.
This structure theory is then applied to the class of t-designs whose automorphism groups are transitive on sets of t points. It is shown that if G is a permutation group transitive on sets of t letters and φ is any set of letters, then the images of φ under G form a t-design whose parameters may be calculated from the group G. Such groups are discussed, especially for the case t = 2, and the normal structure of such designs is considered. Theorem 2.2.12 gives necessary and sufficient conditions for a t-design to be simple, purely in terms of the automorphism group of the design. Some constructions are given.
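The parameter calculation rests on a standard double count (stated here for the orbit construction, with v = |∏| and b the number of blocks, i.e., the orbit length of φ under G): counting pairs of a t-subset and a block containing it in two ways gives

$$\lambda \binom{v}{t} = b \binom{k}{t},$$

so λ is determined once the orbit length of φ is known.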
Finally, 2-designs with k = 3 and λ = 2 are considered in detail. These designs are first considered in general, with examples illustrating some of the configurations which can arise. Then an attempt is made to classify all such designs with an automorphism group transitive on pairs of points. Many cases are eliminated or reduced to combinations of Steiner triple systems. In the remaining cases, the simple designs are determined to consist of one infinite class and one exceptional case.
Abstract:
This thesis deals with two problems. The first is the determination of λ-designs, combinatorial configurations which are essentially symmetric block designs with the condition that every subset have the same cardinality removed. We construct an infinite family of such designs from symmetric block designs and obtain some basic results about their structure. These results enable us to solve the problem for λ = 3 and λ = 4. The second problem deals with configurations related to both λ-designs and (v, k, λ)-configurations. We have n-1 k-subsets S_1, ..., S_(n-1) of {1, 2, ..., n} such that S_i ∩ S_j is a λ-set for i ≠ j. We obtain specifically the replication numbers of such a design in terms of n, k, and λ, with one exceptional class which we determine explicitly. In certain special cases we settle the problem entirely.
Abstract:
Part 1. Many interesting visual and mechanical phenomena occur in the critical region of fluids, both for the gas-liquid and liquid-liquid transitions. The precise thermodynamic and transport behavior here has some broad consequences for the molecular theory of liquids. Previous studies in this laboratory on a liquid-liquid critical mixture via ultrasonics supported a basically classical analysis of fluid behavior by M. Fixman (e.g., the free energy is assumed analytic in intensive variables in the thermodynamics)--at least when the fluid is not too close to critical. A breakdown in classical concepts is evidenced close to critical, in some well-defined ways. We have studied herein a liquid-liquid critical system of complementary nature (possessing a lower critical mixing or consolute temperature) to all previous mixtures, to look for new qualitative critical behavior. We did not find such new behavior in the ultrasonic absorption ascribable to the critical fluctuations, but we did find extra absorption due to chemical processes (yet these are related to the mixing behavior generating the lower consolute point). We rederived, corrected, and extended Fixman's analysis to interpret our experimental results in these more complex circumstances. The entire account of theory and experiment is prefaced by an extensive introduction recounting the general status of liquid state theory. The introduction provides a context for our present work, and also points out problems deserving attention. Interest in these problems was stimulated by this work but also by work in Part 3.
Part 2. Among variational theories of electronic structure, the Hartree-Fock theory has proved particularly valuable for a practical understanding of such properties as chemical binding, electric multipole moments, and X-ray scattering intensity. It also provides the most tractable method of calculating first-order properties under external or internal one-electron perturbations, either developed explicitly in orders of perturbation theory or in the fully self-consistent method. The accuracy and consistency of first-order properties are poorer than those of zero-order properties, but this is most often due to the use of explicit approximations in solving the perturbed equations, or to inadequacy of the variational basis in size or composition. We have calculated the electric polarizabilities of H2, He, Li, Be, LiH, and N2 by Hartree-Fock theory, using exact perturbation theory or the fully self-consistent method, as dictated by convenience. By careful studies on total basis set composition, we obtained good approximations to limiting Hartree-Fock values of polarizabilities with bases of reasonable size. The values for all species, and for each direction in the molecular cases, are within 8% of experiment, or of best theoretical values in the absence of the former. Our results support the use of unadorned Hartree-Fock theory for static polarizabilities needed in interpreting electron-molecule scattering data, collision-induced light scattering experiments, and other phenomena involving experimentally inaccessible polarizabilities.
Part 3. Numerical integration of the close-coupled scattering equations has been carried out to obtain vibrational transition probabilities for some models of the electronically adiabatic H2-H2 collision. All the models use a Lennard-Jones interaction potential between nearest atoms in the collision partners. We have analyzed the results for some insight into the vibrational excitation process in its dependence on the energy of collision, the nature of the vibrational binding potential, and other factors. We conclude also that replacement of earlier, simpler models of the interaction potential by the Lennard-Jones form adds very little realism for all the complication it introduces. A brief introduction precedes the presentation of our work and places it in the context of attempts to understand the collisional activation process in chemical reactions as well as some other chemical dynamics.