979 results for Factorial experiment designs.
Abstract:
The solar resource is the most abundant renewable resource on earth, yet it is currently exploited with relatively low efficiencies. To make solar energy more affordable, we can either reduce the cost of the cell or increase the efficiency at a similar cell cost. In this thesis, we consider several different optical approaches to achieve these goals. First, we consider a ray optical model for light trapping in silicon microwires. With this approach, much less material can be used, allowing for cost savings. We next focus on reducing the escape of radiatively emitted and scattered light from the solar cell. With this angle restriction approach, light can only enter and escape the cell near normal incidence, allowing for thinner cells and higher efficiencies. In Auger-limited GaAs, we find that efficiencies greater than 38% may be achievable, a significant improvement over the current world record. To experimentally validate these results, we use a Bragg stack to restrict the angles of emitted light. Our measurements show an increase in voltage and a decrease in dark current, as less radiatively emitted light escapes. While the results in GaAs are interesting as a proof of concept, GaAs solar cells are not currently made at production scale for terrestrial photovoltaic applications. We therefore explore the application of angle restriction to silicon solar cells. While our calculations show that Auger-limited cells give efficiency increases of up to 3% absolute, we also find that current amorphous silicon-crystalline silicon heterojunction with intrinsic thin layer (HIT) cells give significant efficiency gains of up to 1% absolute with angle restriction. Thus, angle restriction has the potential for unprecedented one-sun efficiencies in GaAs, but may also be applicable to current silicon solar cell technology. Finally, we consider spectrum splitting, where optics direct light in different wavelength bands to solar cells with band gaps tuned to those wavelengths. This approach has the potential for very high efficiencies and excellent annual power production. Using a light-trapping filtered concentrator approach, we design filter elements and find an optimal design. Thus, this thesis explores silicon microwires, angle restriction, and spectrum splitting as different optical approaches for improving the cost and efficiency of solar cells.
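As a rough illustration of why angle restriction raises the voltage, the sketch below evaluates the standard radiative-limit estimate (a minimal sketch, not the thesis's full detailed-balance calculation): restricting emission to a cone of half-angle θ scales the radiative dark current by sin²θ, so the open-circuit voltage rises by roughly (kT/q)·ln(1/sin²θ).

```python
import numpy as np

k_B = 1.380649e-23   # Boltzmann constant, J/K
q = 1.602176634e-19  # elementary charge, C
T = 300.0            # cell temperature, K

def delta_voc(theta_deg):
    """Approximate open-circuit voltage gain (V) from restricting emission
    to a cone of half-angle theta_deg, in the radiative limit."""
    sin2 = np.sin(np.radians(theta_deg)) ** 2
    return (k_B * T / q) * np.log(1.0 / sin2)

# 90 deg = no restriction; 0.27 deg is roughly the half-angle of the solar disk
for theta in (90.0, 10.0, 0.27):
    print(f"theta = {theta:5.2f} deg  ->  delta Voc ~ {1e3 * delta_voc(theta):6.1f} mV")
```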
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad-hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn inform the next test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
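To make the adaptive test-selection idea concrete, the toy sketch below implements a one-step greedy loop with an EC2-style objective (expected residual "edge weight" between competing theories). It is a simplified, hypothetical illustration, not the authors' BROAD implementation; the prediction table, the noise structure, and the single-step lookahead are all stand-in assumptions.

```python
import numpy as np

# Hypothetical toy setup: each "theory" predicts, for each candidate test,
# the probability that a subject chooses option A (responses are noisy).
rng = np.random.default_rng(0)
n_theories, n_tests = 4, 50
pred = rng.uniform(0.05, 0.95, size=(n_theories, n_tests))  # P(choose A | theory, test)
post = np.full(n_theories, 1.0 / n_theories)                # posterior over theories

def ec2_edge_weight(p):
    """EC2-style objective with one theory per equivalence class:
    total weight of edges between distinct theories, sum_{i<j} p_i * p_j."""
    return 0.5 * (1.0 - np.sum(p ** 2))

def expected_edge_weight_after(test, p):
    """Expected remaining edge weight if this test is run (greedy lookahead)."""
    p_a = np.dot(p, pred[:, test])            # marginal P(answer = A)
    total = 0.0
    for ans_prob, lik in ((p_a, pred[:, test]), (1 - p_a, 1 - pred[:, test])):
        if ans_prob <= 0:
            continue
        p_new = p * lik
        p_new /= p_new.sum()
        total += ans_prob * ec2_edge_weight(p_new)
    return total

true_theory = 2
for step in range(15):
    best = min(range(n_tests), key=lambda t: expected_edge_weight_after(t, post))
    answer_a = rng.random() < pred[true_theory, best]        # simulated subject response
    lik = pred[:, best] if answer_a else 1 - pred[:, best]
    post = post * lik
    post /= post.sum()

print("posterior over theories:", np.round(post, 3))
```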
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types shows that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
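For concreteness, the functions below give common textbook forms of the competing discount functions (a hedged sketch; the thesis's exact (α, β) parameterization and the fixed-cost model may differ), together with a hypothetical choice the models would evaluate differently.

```python
def exponential(t, delta=0.99):
    """Exponential discounting: delta**t."""
    return delta ** t

def hyperbolic(t, k=0.05):
    """Simple hyperbolic discounting: 1 / (1 + k*t)."""
    return 1.0 / (1.0 + k * t)

def quasi_hyperbolic(t, beta=0.7, delta=0.99):
    """Quasi-hyperbolic ('present bias'): 1 at t = 0, beta * delta**t afterwards."""
    return 1.0 if t == 0 else beta * delta ** t

def generalized_hyperbolic(t, alpha=1.0, beta=0.05):
    """Generalized hyperbolic: (1 + alpha*t) ** (-beta / alpha)."""
    return (1.0 + alpha * t) ** (-beta / alpha)

# Hypothetical test: $50 today versus $80 in 30 days.
for name, d in [("exponential", exponential), ("hyperbolic", hyperbolic),
                ("quasi-hyperbolic", quasi_hyperbolic),
                ("generalized hyperbolic", generalized_hyperbolic)]:
    now, later = 50 * d(0), 80 * d(30)
    print(f"{name:22s}: {now:6.2f} vs {later:6.2f} -> chooses {'now' if now > later else 'later'}")
```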
In the models above, the passage of time is treated as linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
We also test the predictions of behavioural theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute increases excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
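A minimal sketch of the kind of model this implies is shown below: a multinomial-logit demand model whose price term is reference-dependent and penalizes losses (prices above the reference) more than it rewards equivalent gains. The functional form, the loss-aversion coefficient of 2.25 (a commonly cited Tversky-Kahneman value), and the example numbers are illustrative assumptions, not the estimates obtained from the retailer data.

```python
import numpy as np

def loss_averse_price_utility(price, ref_price, alpha=1.0, lam=2.25):
    """Reference-dependent price utility: gains (price below the reference) count
    with weight alpha, losses (price above the reference) with weight lam * alpha."""
    gain = max(ref_price - price, 0.0)
    loss = max(price - ref_price, 0.0)
    return alpha * gain - lam * alpha * loss

def choice_probabilities(prices, ref_prices, base_utils):
    """Multinomial-logit choice probabilities with the loss-averse price term."""
    u = np.array([b + loss_averse_price_utility(p, r)
                  for p, r, b in zip(prices, ref_prices, base_utils)])
    e = np.exp(u - u.max())                 # subtract max for numerical stability
    return e / e.sum()

# Hypothetical pair of close substitutes: item 0 just lost its promotional discount
# (its price now sits above the reference formed during the promotion), item 1 did not.
print(choice_probabilities(prices=[10.0, 9.5], ref_prices=[8.0, 9.5], base_utils=[0.5, 0.0]))
```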
In future work, BROAD could be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, could be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
Bulk n-InSb is investigated as a heterodyne detector for the submillimeter wavelength region. Two modes of operation are investigated: (1) the Rollin or hot-electron bolometer mode (zero magnetic field), and (2) the Putley mode (quantizing magnetic field). The highlight of the thesis work is the pioneering demonstration of the Putley-mode mixer at several frequencies. For example, a double-sideband system noise temperature of about 510 K was obtained using an 812 GHz methanol laser for the local oscillator. This performance is at least a factor of 10 more sensitive than any other performance reported to date at the same frequency. In addition, the Putley-mode mixer achieved system noise temperatures of 250 K at 492 GHz and 350 K at 625 GHz. The 492 GHz performance is about 50% better and the 625 GHz performance about 100% better than the previous best performances established by the Rollin-mode mixer. To achieve these results, it was necessary to design a totally new ultra-low-noise, room-temperature preamp to handle the higher source impedance imposed by Putley-mode operation. This preamp has considerably less input capacitance than comparably noisy ambient-temperature designs.
In addition to advancing receiver technology, this thesis also presents several novel results regarding the physics of n-InSb at low temperatures. A Fourier transform spectrometer was constructed and used to measure the submillimeter-wave absorption coefficient of relatively pure material at liquid helium temperatures and in zero magnetic field. Below 4.2 K, the absorption coefficient was found to decrease with frequency much faster than predicted by Drudian theory. Much better agreement with experiment was obtained using a quantum theory based on inverse Bremsstrahlung in a solid. Also, the noise of the Rollin-mode detector at 4.2 K was accurately measured and compared with theory. The power spectrum is found to be well fit by a recent theory of non-equilibrium noise due to Mather. Surprisingly, when biased for optimum detector performance, high-purity InSb cooled to liquid helium temperatures generates less noise than that predicted by simple non-equilibrium Johnson noise theory alone. This explains in part the excellent performance of the Rollin-mode detector in the millimeter wavelength region.
Again using the Fourier transform spectrometer, spectra are obtained of the responsivity and direct-detection NEP as a function of magnetic field in the range 20-110 cm^(-1). The results show a discernible peak in the detector response at the conduction-electron cyclotron resonance frequency for magnetic fields as low as 3 kG at bath temperatures of 2.0 K. The spectra also display the well-known peak due to the cyclotron resonance of electrons bound to impurity states. The magnitude of the responsivity at both peaks is roughly constant with magnetic field and is comparable to the low-frequency Rollin-mode response. The NEP at the peaks is found to be much better than previous values at the same frequency and comparable to the best long-wavelength results previously reported. For example, a value NEP = 4.5x10^(-13) W/Hz^(1/2) is measured at 4.2 K, 6 kG, and 40 cm^(-1). Study of the responsivity under conditions of impact ionization showed a dramatic disappearance of the impurity-electron resonance while the conduction-electron resonance remained constant. This observation offers the first concrete evidence that the mobility of an electron in the N=0 and N=1 Landau levels is different. Finally, these direct-detection experiments indicate that the excellent heterodyne performance achieved at 812 GHz should be attainable up to frequencies of at least 1200 GHz.
Abstract:
The intensities and relative abundances of galactic cosmic ray protons and antiprotons have been measured with the Isotope Matter Antimatter Experiment (IMAX), a balloon-borne magnet spectrometer. The IMAX payload had a successful flight from Lynn Lake, Manitoba, Canada on July 16, 1992. Particles detected by IMAX were identified by mass and charge via the Cherenkov-rigidity and TOF-rigidity techniques, with measured rms mass resolution ≤0.2 amu for Z=1 particles.
Cosmic ray antiprotons are of interest because they can be produced by the interactions of high energy protons and heavier nuclei with the interstellar medium as well as by more exotic sources. Previous cosmic ray antiproton experiments have reported an excess of antiprotons over that expected solely from cosmic ray interactions.
Analysis of the flight data has yielded 124405 protons and 3 antiprotons in the energy range 0.19-0.97 GeV at the instrument, 140617 protons and 8 antiprotons in the energy range 0.97-2.58 GeV, and 22524 protons and 5 antiprotons in the energy range 2.58-3.08 GeV. These measurements are a statistical improvement over previous antiproton measurements, and they demonstrate improved separation of antiprotons from the more abundant fluxes of protons, electrons, and other cosmic ray species.
When these results are corrected for instrumental and atmospheric background and losses, the antiproton/proton ratios at the top of the atmosphere are p̄/p = 3.21(+3.49, -1.97)x10^(-5) in the energy range 0.25-1.00 GeV, p̄/p = 5.38(+3.48, -2.45)x10^(-5) in the energy range 1.00-2.61 GeV, and p̄/p = 2.05(+1.79, -1.15)x10^(-4) in the energy range 2.61-3.11 GeV. The corresponding antiproton intensities, also corrected to the top of the atmosphere, are 2.3(+2.5, -1.4)x10^(-2) (m^2 s sr GeV)^(-1), 2.1(+1.4, -1.0)x10^(-2) (m^2 s sr GeV)^(-1), and 4.3(+3.7, -2.4)x10^(-2) (m^2 s sr GeV)^(-1) for the same energy ranges.
The IMAX antiproton fluxes and antiproton/proton ratios are compared with recent Standard Leaky Box Model (SLBM) calculations of the cosmic ray antiproton abundance. According to this model, cosmic ray antiprotons are secondary cosmic rays arising solely from the interaction of high energy cosmic rays with the interstellar medium. The effects of solar modulation of protons and antiprotons are also calculated, showing that the antiproton/proton ratio can vary by as much as an order of magnitude over the solar cycle. When solar modulation is taken into account, the IMAX antiproton measurements are found to be consistent with the most recent calculations of the SLBM. No evidence is found in the IMAX data for excess antiprotons arising from the decay of galactic dark matter, which had been suggested as an interpretation of earlier measurements. Furthermore, the consistency of the current results with the SLBM calculations suggests that the mean antiproton lifetime is at least as large as the cosmic ray storage time in the galaxy (~10^7 yr, based on measurements of cosmic ray ^(10)Be). Recent measurements by two other experiments are consistent with this interpretation of the IMAX antiproton results.
Abstract:
Over the last several decades there have been significant advances in the study and understanding of light behavior in nanoscale geometries. Entire fields such as those based on photonic crystals, plasmonics, and metamaterials have developed, accelerating the growth of knowledge related to nanoscale light manipulation. Coupled with recent interest in cheap, reliable renewable energy, this progress has given rise to a new field: nanophotonic solar cells.
In this thesis, we examine important properties of thin-film solar cells from a nanophotonics perspective. We identify key differences between nanophotonic devices and traditional, thick solar cells. We propose a new way of understanding and describing limits to light trapping and show that certain nanophotonic solar cell designs can have light-trapping limits above the so-called ray-optic or ergodic limit. We propose that a necessary prerequisite for exceeding the traditional light-trapping limit is that the active region of the solar cell possess a local density of optical states (LDOS) higher than that of the corresponding bulk material. We further show that, in addition to having an increased density of states, the absorber must have an appropriate incoupling mechanism to transfer light from free space into the optical modes of the device. We outline a portfolio of new solar cell designs that have the potential to exceed the traditional light-trapping limit and numerically validate our predictions for select cases.
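For reference, the ray-optic (ergodic) limit mentioned above is the standard Yablonovitch absorption-enhancement bound, quoted here for context rather than derived in the abstract:

$$ F_{\max} = 4n^2, \qquad F_{\max}(\theta) = \frac{4n^2}{\sin^2\theta}, $$

where n is the refractive index of the absorber, θ is the acceptance half-angle, and the bare 4n² value corresponds to acceptance over the full hemisphere.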
We emphasize the importance of thinking about light trapping in terms of maximizing the optical modes of the device and efficiently coupling light into them from free space. To further explore these two concepts, we optimize patterns of superlattices of air holes in thin slabs of Si and show that by adding a roughened incoupling layer the total absorbed current can be increased synergistically. We suggest that the addition of a random scattering surface to a periodic patterning can increase incoupling by lifting the constraint of selective mode occupation associated with periodic systems.
Lastly, through experiment and simulation, we investigate a potential high-efficiency solar cell architecture that can be improved with the nanophotonic light-trapping concepts described in this thesis. Optically thin GaAs solar cells are prepared by the epitaxial liftoff process, removing them from their growth substrate and adding a metallic back reflector. A process for depositing large-area nanopatterns on the surface of the cells is developed using nanoimprint lithography and implemented on the thin GaAs cells.
Abstract:
This thesis describes investigations of two classes of laboratory plasmas with rather different properties: partially ionized low pressure radiofrequency (RF) discharges, and fully ionized high density magnetohydrodynamically (MHD)-driven jets. An RF pre-ionization system was developed to enable neutral gas breakdown at lower pressures and create hotter, faster jets in the Caltech MHD-Driven Jet Experiment. The RF plasma source used a custom pulsed 3 kW 13.56 MHz RF power amplifier that was powered by AA batteries, allowing it to safely float at 4-6 kV with the cathode of the jet experiment. The argon RF discharge equilibrium and transport properties were analyzed, and novel jet dynamics were observed.
Although the RF plasma source was conceived as a wave-heated helicon source, scaling measurements and numerical modeling showed that inductive coupling was the dominant energy input mechanism. A one-dimensional time-dependent fluid model was developed to quantitatively explain the expansion of the pre-ionized plasma into the jet experiment chamber. The plasma transitioned from an ionizing phase with depressed neutral emission to a recombining phase with enhanced emission during the course of the experiment, causing fast camera images to be a poor indicator of the density distribution. Under certain conditions, the total visible and infrared brightness and the downstream ion density both increased after the RF power was turned off. The time-dependent emission patterns were used for an indirect measurement of the neutral gas pressure.
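For context, a one-dimensional, time-dependent fluid description of this kind typically evolves equations of the following generic form (the thesis's actual model terms and closures may differ):

$$ \frac{\partial n}{\partial t} + \frac{\partial (n u)}{\partial x} = S_{\mathrm{ion}} - S_{\mathrm{rec}}, \qquad m n\left(\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x}\right) = -\frac{\partial p}{\partial x} - m n \nu\, u, $$

with n the plasma density, u the flow velocity, p the pressure, S_ion and S_rec the ionization and recombination sources, and ν an effective collision frequency.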
The low-mass jets formed with the aid of the pre-ionization system were extremely narrow and collimated near the electrodes, with peak density exceeding that of jets created without pre-ionization. The initial neutral gas distribution prior to plasma breakdown was found to be critical in determining the ultimate jet structure. The visible radius of the dense central jet column was several times narrower than the axial current channel radius, suggesting that the outer portion of the jet must have been force-free, with the current parallel to the magnetic field. The studies of non-equilibrium flows and plasma self-organization being carried out at Caltech are relevant to astrophysical jets and fusion energy research.
Abstract:
We carried out quantum mechanics (QM) studies aimed at improving the performance of hydrogen fuel cells. This led to predictions of improved materials, some of which were subsequently validated with experiments by our collaborators.
In part I, the challenge was to find a replacement for the Pt cathode that would lead to improved performance for the Oxygen Reduction Reaction (ORR) while remaining stable under operational conditions and decreasing cost. Our design strategy was to find an alloy with composition Pt3M that would lead to surface segregation such that the top layer would be pure Pt, with the second and subsequent layers richer in M. Under operating conditions we expect the surface to have significant O and/or OH chemisorbed on it, and hence we searched for M that would remain segregated under these conditions. Using QM we examined surface segregation for 28 Pt3M alloys, where M is a transition metal. We found that only Pt3Os and Pt3Ir showed significant surface segregation when O and OH are chemisorbed on the catalyst surfaces. This result indicates that Pt3Os and Pt3Ir favor formation of a Pt-skin surface layer structure that would resist acidic electrolyte corrosion under fuel cell operating conditions. We chose to focus on Os because the Pt-Ir phase diagram indicated that Pt-Ir could not form a homogeneous alloy at low temperatures. To determine the performance for the ORR, we used QM to examine all intermediates, reaction pathways, and reaction barriers involved in the processes by which protons from the anode reactions react with O2 to form H2O. These QM calculations used our Poisson-Boltzmann implicit solvation model to include the effects of the solvent (water with dielectric constant 78, pH 7, at 298 K). We found that the rate-determining step (RDS) was the Oad hydration reaction (Oad + H2Oad -> OHad + OHad) in both cases, but that the 0.50 eV barrier for pure Pt is reduced to 0.48 eV for Pt3Os, which at 80 degrees C would increase the rate by 218%. We collaborated with Pu-Wei Wu's group to carry out experiments, which found that the dealloying-treated Pt2Os catalyst showed two-fold higher activity at 25 degrees C than pure Pt and that the alloy had 272% improved stability, validating our theoretical predictions.
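The link between the lowered barrier and the quoted rate enhancement is the usual Arrhenius relation (stated here for context; the specific percentage above is taken from the thesis and assumes comparable pre-exponential factors for the two surfaces):

$$ \frac{k_{\mathrm{Pt_3Os}}}{k_{\mathrm{Pt}}} = \exp\!\left(\frac{E_a^{\mathrm{Pt}} - E_a^{\mathrm{Pt_3Os}}}{k_B T}\right). $$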
We also carried out similar QM studies followed by experimental validation for the Os/Pt core-shell catalyst fabricated by the underpotential deposition (UPD) method. The QM results indicated that the RDS for the ORR is a compromise between the OOH formation step (0.37 eV for Pt, 0.23 eV for the Pt2ML/Os core-shell) and the H2O formation step (0.32 eV for Pt, 0.22 eV for the Pt2ML/Os core-shell). We found that Pt2ML/Os has the highest activity (compared to pure Pt and to the Pt3Os alloy) because the 0.37 eV barrier decreases to 0.23 eV. To understand what aspects of the core-shell structure lead to this improved performance, we considered the effect on the ORR of compressing the alloy slab to the dimensions of pure Pt. However, this had little effect, with the RDS barrier remaining at 0.37 eV. This shows that the ligand effect (the electronic structure modification resulting from the Os substrate) plays a more important role than the strain effect and is responsible for the improved activity of the core-shell catalyst. Experimental materials characterization confirms the core-shell structure of our catalyst. Electrochemical experiments on Pt2ML/Os/C showed 3.5 to 5 times better ORR activity at 0.9 V (vs. NHE) in 0.1 M HClO4 solution at 25 degrees C compared to commercially available Pt/C. The excellent correlation between the experimental half-wave potential and the OH binding energies and RDS barriers validates the feasibility of predicting catalyst activity using QM calculations and a simple Langmuir-Hinshelwood model.
In part II, we used QM calculations to study methane steam reforming on Ni-alloy catalyst surfaces for solid oxide fuel cell (SOFC) applications. SOFCs have wide fuel adaptability, but coking and sulfur poisoning reduce their stability. Experimental results suggested that the Ni4Fe alloy improves both activity and stability compared to pure Ni. To understand the atomistic origin of this, we carried out QM calculations on surface segregation and found that the most stable configuration for Ni4Fe has an Fe atom distribution of (0%, 50%, 25%, 25%, 0%) starting at the bottom layer. We calculated the binding of C atoms on the Ni4Fe surface to be 142.9 kcal/mol, which is about 10 kcal/mol weaker than on the pure Ni surface. This weaker C binding energy is expected to make coke formation less favorable, explaining why Ni4Fe has better coking resistance. This result confirms the experimental observation. The reaction energy barriers for CHx decomposition and C binding on various alloy surfaces, Ni4X (X = Fe, Co, Mn, and Mo), showed that Ni4Fe, Ni4Co, and Ni4Mn all have better coking resistance than pure Ni, but that only Ni4Fe and Ni4Mn have (slightly) improved activity compared to pure Ni.
In part III, we used QM to examine proton transport in doped perovskite ceramics. Here we used a 2x2x2 supercell of perovskite with composition Ba8X7M1(OH)1O23, where X = Ce or Zr and M = Y, Gd, or Dy. Thus in each case a 4+ X is replaced by a 3+ M plus a proton on one O. We predicted the barriers for proton diffusion, allowing for both intra-octahedron and inter-octahedron proton transfer. Without any restriction, we observed only inter-octahedron proton transfer, with an energy barrier similar to previous computational work but 0.2 eV higher than the experimental result for Y-doped zirconate. When we imposed the restriction that the donor and acceptor O atoms be kept at fixed distances, we found that the barrier differences between cerates/zirconates with various dopants are only 0.02-0.03 eV. To fully address performance, one would need to examine proton transfer at grain boundaries, which will require larger-scale ReaxFF reactive dynamics for systems with millions of atoms. The QM calculations used here will be used to train the ReaxFF force field.
Abstract:
The authors have demonstrated the principle of a novel optical multichannel-scale range-tunable Fourier-transforming system. The experimental results show good agreement with the theoretical analysis.
Abstract:
Combinatorial configurations known as t-designs are studied. These are pairs ⟨B, Π⟩, where each element of B is a k-subset of Π and each t-subset of Π occurs in exactly λ elements of B, for some fixed integers k and λ. A theory of the internal structure of t-designs is developed, and it is shown that any t-design can be decomposed in a natural fashion into a sequence of "simple" subdesigns. The theory is quite similar to the analysis of a group with respect to its normal subgroups, quotient groups, and homomorphisms. The analogous concepts of normal subdesigns, quotient designs, and design homomorphisms are all defined and used.
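For reference, the standard parameter identity for a t-(v, k, λ) design with v = |Π| (a well-known fact quoted here for context) is that every i-subset of points, for i ≤ t, lies in exactly

$$ \lambda_i = \lambda\,\frac{\binom{v-i}{t-i}}{\binom{k-i}{t-i}} $$

blocks; in particular, the total number of blocks is b = λ₀ = λ·C(v, t)/C(k, t).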
This structure theory is then applied to the class of t-designs whose automorphism groups are transitive on sets of t points. It is shown that if G is a permutation group transitive on sets of t letters and φ is any set of letters, then the images of φ under G form a t-design whose parameters may be calculated from the group G. Such groups are discussed, especially for the case t = 2, and the normal structure of such designs is considered. Theorem 2.2.12 gives necessary and sufficient conditions for a t-design to be simple, purely in terms of the automorphism group of the design. Some constructions are given.
Finally, 2-designs with k = 3 and λ = 2 are considered in detail. These designs are first considered in general, with examples illustrating some of the configurations which can arise. Then an attempt is made to classify all such designs with an automorphism group transitive on pairs of points. Many cases are eliminated or reduced to combinations of Steiner triple systems. In the remaining cases, the simple designs are determined to consist of one infinite class and one exceptional case.