3 results for Solomon, Alisa

in CaltechTHESIS


Relevance:

10.00%

Abstract:

Future fossil-fuel scarcity and environmental degradation have demonstrated the need for renewable, low-carbon sources of energy to power an increasingly industrialized world. Solar energy, with its effectively unlimited supply, is an extraordinary resource that should not go unused. With current materials, however, adoption is limited by cost, so a shift to cheaper absorbers is needed before solar technology can be embraced at scale. Cuprous oxide (Cu2O) is a promising earth-abundant material that can serve as an alternative to traditional thin-film photovoltaic materials such as CIGS and CdTe. We have prepared Cu2O bulk substrates by the thermal oxidation of copper foils, as well as Cu2O thin films deposited via plasma-assisted molecular beam epitaxy. Preliminary Hall measurements indicated that Cu2O would need to be doped extrinsically, a conclusion further confirmed by simulations of ZnO/Cu2O heterojunctions. A cyclic interdependence between defect concentration, minority-carrier lifetime, film thickness, and carrier concentration manifests itself as a primary reason why efficiencies greater than 4% have yet to be realized. Our growth methodology for thin-film heterostructures allows precise control of the number of defects incorporated into the film during both equilibrium and nonequilibrium growth. We also report the process flow, device design, and fabrication techniques used to create a device. A typical device without any optimizations exhibited open-circuit voltage (Voc) values in excess of 500 mV, nearly 18% greater than previous solid-state devices.

Relevance:

10.00%

Abstract:

The isotopic composition of the enhanced low energy nitrogen and oxygen cosmic rays can provide information regarding the source of these particles. Using the Caltech Electron/Isotope Spectrometer aboard the IMP-7 satellite, a measurement of this isotopic composition was made. To determine the isotope response of the instrument, a calibration was performed, and it was determined that the standard range-energy tables were inadequate to calculate the isotope response. From the calibration, corrections to the standard range-energy tables were obtained which can be used to calculate the isotope response of this and similar instruments.

The low energy nitrogen and oxygen cosmic rays were determined to be primarily ^(14)N and ^(16)O. Upper limits were obtained for the abundances of the other stable nitrogen and oxygen isotopes. To the 84% confidence level the isotopic abundances are: ^(15)N/N ≤ 0.26 (5.6 - 12.7 MeV/nucleon), ^(17)O/O ≤ 0.13 (7.0 - 11.8 MeV/nucleon), ^(18)O/O ≤ 0.12 (7.0 - 11.2 MeV/nucleon). The nitrogen composition differs from higher energy measurements, which indicate that ^(15)N, which is thought to be secondary, is the dominant isotope. This implies that the low energy enhanced cosmic rays are not part of the same population as the higher energy cosmic rays and that they have not passed through enough material to produce a large fraction of ^(15)N. The isotopic composition of the low energy enhanced nitrogen and oxygen is consistent with the local acceleration theory of Fisk, Kozlovsky, and Ramaty, in which interstellar material is accelerated to several MeV/nucleon. If, on the other hand, the low energy nitrogen and oxygen result from nucleosynthesis in a galactic source, then the nucleosynthesis processes which produce an enhancement of nitrogen and oxygen and a depletion of carbon are restricted to producing predominantly ^(14)N and ^(16)O.

Relevance:

10.00%

Abstract:

The study of codes, classically motivated by the need to communicate information reliably in the presence of error, has found new life in fields as diverse as network communication and the distributed storage of data, and even has connections to the design of linear measurements used in compressive sensing. But in every context, a code typically exploits the algebraic or geometric structure underlying an application. In this thesis, we examine several problems in coding theory and try to gain some insight into the algebraic structure behind them.

The first is the study of the entropy region - the space of all possible vectors of joint entropies which can arise from a set of discrete random variables. Understanding this region is essentially the key to optimizing network codes for a given network. To this end, we employ a group-theoretic method of constructing random variables producing so-called "group-characterizable" entropy vectors, which are capable of approximating any point in the entropy region. We show how small groups can be used to produce entropy vectors which violate the Ingleton inequality, a fundamental bound on entropy vectors arising from the random variables involved in linear network codes. We discuss the suitability of these groups to design codes for networks which could potentially outperform linear coding.
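To make the Ingleton inequality concrete, the following sketch computes the entropy vector of four dependent binary random variables and checks the inequality numerically. The particular variables (two independent fair bits plus their XOR and AND) are a toy choice for illustration, not an example from the thesis:

```python
import itertools
import math
from collections import defaultdict

def subset_entropy(pmf, idx):
    """Marginal joint entropy H(X_A) in bits, where A is given by index tuple idx."""
    marg = defaultdict(float)
    for outcome, p in pmf.items():
        marg[tuple(outcome[i] for i in idx)] += p
    return -sum(p * math.log2(p) for p in marg.values() if p > 0)

# Joint pmf of (X1, X2, X3, X4): X1, X2 independent fair bits,
# X3 = X1 XOR X2, X4 = X1 AND X2 (a toy example, not from the thesis).
pmf = {}
for x1, x2 in itertools.product([0, 1], repeat=2):
    pmf[(x1, x2, x1 ^ x2, x1 & x2)] = 0.25

# Entropy vector: one entry per nonempty subset of the four variables.
h = {A: subset_entropy(pmf, A)
     for r in range(1, 5)
     for A in itertools.combinations(range(4), r)}

# Ingleton inequality with distinguished pair (X1, X2):
# h(12)+h(13)+h(14)+h(23)+h(24) >= h(1)+h(2)+h(34)+h(123)+h(124)
lhs = h[(0, 1)] + h[(0, 2)] + h[(0, 3)] + h[(1, 2)] + h[(1, 3)]
rhs = h[(0,)] + h[(1,)] + h[(2, 3)] + h[(0, 1, 2)] + h[(0, 1, 3)]
print(lhs >= rhs - 1e-9)  # True: this entropy vector satisfies Ingleton
```

Since XOR and AND are functions of the two bits, this vector satisfies Ingleton (here lhs = 9.0 bits versus rhs = 7.5 bits); entropy vectors from the non-abelian group constructions mentioned above are what make violations possible.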

The second topic we discuss is the design of frames with low coherence, closely related to finding spherical codes in which the codewords are unit vectors spaced out around the unit sphere so as to minimize the magnitudes of their mutual inner products. We show how to build frames by selecting a cleverly chosen set of representations of a finite group to produce a "group code" as described by Slepian decades ago. We go on to reinterpret our method as selecting a subset of rows of a group Fourier matrix, allowing us to study and bound our frames' coherences using character theory. We discuss the usefulness of our frames in sparse signal recovery using linear measurements.
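A small sketch of the "subset of rows of a group Fourier matrix" idea for the cyclic group: taking rows {0, 1, 3} of the 7-point DFT matrix (a difference set modulo 7, an assumed example rather than a construction from the thesis) and normalizing the columns yields 7 unit vectors in C^3 whose pairwise inner products all have the same magnitude:

```python
import cmath
import itertools
import math

def harmonic_frame(rows, n):
    """Columns of the n x n DFT matrix restricted to `rows`, scaled to unit norm."""
    m = len(rows)
    w = cmath.exp(-2j * cmath.pi / n)
    return [[w ** (r * k) / math.sqrt(m) for r in rows] for k in range(n)]

def coherence(frame):
    """Largest magnitude of an inner product between two distinct frame vectors."""
    mu = 0.0
    for f, g in itertools.combinations(frame, 2):
        ip = sum(a * b.conjugate() for a, b in zip(f, g))
        mu = max(mu, abs(ip))
    return mu

# Rows {0, 1, 3} form a (7, 3, 1) difference set mod 7, so the
# resulting 7 vectors in C^3 are equiangular (assumed illustrative choice).
frame = harmonic_frame([0, 1, 3], 7)
print(round(coherence(frame), 4))  # 0.4714, i.e. sqrt(2)/3
```

The computed coherence sqrt(2)/3 matches the Welch lower bound sqrt((n-m)/(m(n-1))) for n = 7 vectors in m = 3 dimensions, so this particular row choice is optimal; character sums are what make such coherences computable in closed form.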

The final problem we investigate is that of coding with constraints, most recently motivated by the demand for ways to encode large amounts of data using error-correcting codes so that any small loss can be recovered from a small set of surviving data. Most often, this involves using a systematic linear error-correcting code in which each parity symbol is constrained to be a function of some subset of the message symbols. We derive bounds on the minimum distance of such a code based on its constraints, and characterize when these bounds can be achieved using subcodes of Reed-Solomon codes.
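The flavor of these constraint-based distance bounds can be seen in a toy binary code (invented for illustration; the thesis works with Reed-Solomon subcodes over larger fields): when each parity symbol may only touch a prescribed subset of message symbols, the minimum distance can drop below what an unconstrained code of the same length achieves.

```python
import itertools

def encode(m):
    """Systematic encoder with constrained parities (toy example):
    p1 may use only {m1, m2}, p2 only {m3, m4}, p3 all four."""
    m1, m2, m3, m4 = m
    return [m1, m2, m3, m4,
            m1 ^ m2, m3 ^ m4, m1 ^ m2 ^ m3 ^ m4]

def min_distance(enc, k):
    """For a linear code, d_min equals the minimum Hamming weight
    over all nonzero messages, found here by brute force."""
    return min(sum(enc(m))
               for m in itertools.product([0, 1], repeat=k)
               if any(m))

print(min_distance(encode, 4))  # 2
```

An unconstrained (7, 4) binary code (the Hamming code) reaches distance 3, but here the message (1, 1, 0, 0) is invisible to every parity, forcing d = 2; bounding this loss in terms of the constraint structure, and achieving the bound with Reed-Solomon subcodes, is the question studied above.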