5 results for linear codes, cyclic codes, Reed-Solomon code, Gröbner bases, key solution

in CaltechTHESIS


Relevance:

100.00%

Publisher:

Abstract:

The study of codes, classically motivated by the need to communicate information reliably in the presence of errors, has found new life in fields as diverse as network communication and the distributed storage of data, and even has connections to the design of linear measurements used in compressive sensing. In every context, however, designing a code typically involves exploiting the algebraic or geometric structure underlying the application. In this thesis, we examine several problems in coding theory and try to gain some insight into the algebraic structure behind them.

The first is the study of the entropy region - the space of all possible vectors of joint entropies which can arise from a set of discrete random variables. Understanding this region is essentially the key to optimizing network codes for a given network. To this end, we employ a group-theoretic method of constructing random variables producing so-called "group-characterizable" entropy vectors, which are capable of approximating any point in the entropy region. We show how small groups can be used to produce entropy vectors which violate the Ingleton inequality, a fundamental bound on entropy vectors arising from the random variables involved in linear network codes. We discuss the suitability of these groups to design codes for networks which could potentially outperform linear coding.
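For reference, the Ingleton inequality mentioned above has a standard statement in terms of mutual informations: every entropy vector arising from a linear network code (more generally, from ranks of subspaces) must satisfy, for the four variables involved,

```latex
% Ingleton inequality in mutual-information form. Any entropy vector
% violating it cannot come from a linear code, which is why violating
% group-characterizable vectors point toward nonlinear coding gains.
I(X_1;X_2) \;\le\; I(X_1;X_2\mid X_3) + I(X_1;X_2\mid X_4) + I(X_3;X_4)
```

Group-characterizable vectors that violate this bound therefore certify regions of the entropy space unreachable by linear codes.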

The second topic we discuss is the design of frames with low coherence, closely related to finding spherical codes in which the codewords are unit vectors spaced out around the unit sphere so as to minimize the magnitudes of their mutual inner products. We show how to build frames by selecting a cleverly chosen set of representations of a finite group to produce a "group code" as described by Slepian decades ago. We go on to reinterpret our method as selecting a subset of rows of a group Fourier matrix, allowing us to study and bound our frames' coherences using character theory. We discuss the usefulness of our frames in sparse signal recovery using linear measurements.
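For the simplest group, the cyclic group Z_n, the group Fourier matrix is the ordinary DFT matrix, and one classical way to pick rows is by a difference set. The sketch below is only an illustration of this row-selection idea (not the thesis's construction): it builds a "harmonic" frame from rows {1, 2, 4} of the 7-point DFT, a perfect difference set mod 7, and computes the coherence directly.

```python
import cmath

def harmonic_frame(n, rows):
    """n unit vectors in C^len(rows): columns of the chosen DFT rows."""
    k = len(rows)
    w = 2j * cmath.pi / n
    return [[cmath.exp(w * s * j) / k ** 0.5 for s in rows] for j in range(n)]

def coherence(frame):
    """Largest |<f_i, f_j>| over distinct pairs of frame vectors."""
    mu = 0.0
    for i, fi in enumerate(frame):
        for fj in frame[i + 1:]:
            ip = abs(sum(a * b.conjugate() for a, b in zip(fi, fj)))
            mu = max(mu, ip)
    return mu

# Rows {1, 2, 4} form a (7, 3, 1) difference set mod 7, so all pairwise
# inner products have magnitude sqrt(3 - 1)/3 = sqrt(2)/3: an equiangular
# frame of 7 vectors in C^3 meeting the Welch lower bound on coherence.
frame = harmonic_frame(7, [1, 2, 4])
print(coherence(frame))  # ~0.4714, i.e. sqrt(2)/3
```

Low coherence is exactly the property that makes such frames useful as measurement matrices in sparse recovery.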

The final problem we investigate is that of coding with constraints, most recently motivated by the demand for ways to encode large amounts of data using error-correcting codes so that any small loss can be recovered from a small set of surviving data. Most often, this involves using a systematic linear error-correcting code in which each parity symbol is constrained to be a function of some subset of the message symbols. We derive bounds on the minimum distance of such a code based on its constraints, and characterize when these bounds can be achieved using subcodes of Reed-Solomon codes.
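The Reed-Solomon codes mentioned above are the classical example of codes meeting the Singleton bound, with minimum distance exactly n - k + 1: a nonzero message polynomial of degree below k has fewer than k roots, so every nonzero codeword has at least n - k + 1 nonzero evaluations. A small self-contained sketch over GF(7) (illustrative only, not the thesis's constrained construction) checks this exhaustively:

```python
# Toy Reed-Solomon code over GF(p): codewords are evaluations of
# message polynomials of degree < k at n distinct field points.
p, n, k = 7, 6, 3
points = list(range(1, n + 1))  # 1..6 are distinct mod 7

def poly_eval(coeffs, x, p):
    """Horner evaluation mod p; coeffs[0] is the constant term."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def rs_encode(msg):
    assert len(msg) == k
    return [poly_eval(msg, x, p) for x in points]

def weight(cw):
    return sum(1 for c in cw if c != 0)

# A degree-<k polynomial has at most k-1 roots, so every nonzero
# codeword has weight >= n - k + 1 = 4; the exhaustive minimum agrees.
dmin = min(weight(rs_encode([a, b, c]))
           for a in range(p) for b in range(p) for c in range(p)
           if (a, b, c) != (0, 0, 0))
print(dmin)  # → 4, i.e. n - k + 1
```

Constraining parity symbols to depend on subsets of message symbols, as in the thesis, can only lower this distance, which is what the derived bounds quantify.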

Relevance:

20.00%

Publisher:

Abstract:

Proper encoding of transmitted information can improve the performance of a communication system. To recover the information at the receiver, it is necessary to decode the received signal. For many codes, the complexity and slowness of the decoder are so severe that the code is not feasible for practical use. This thesis considers the decoding problem for one such class of codes, the comma-free codes related to the first-order Reed-Muller codes.

A factorization of the code matrix is found which leads to a simple, fast, minimum-memory decoder. The decoder is modular, and only n modules are needed to decode a code of length 2^n. The relevant factorization is extended to any code defined by a sequence of Kronecker products.
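The flavor of such a Kronecker factorization can be seen in the standard fast Hadamard transform used to decode first-order Reed-Muller codes (the "Green machine"): writing H_{2^n} as a Kronecker product of n copies of H_2 turns one big matrix-vector product into n cheap butterfly stages, one per factor. A minimal sketch of that standard construction (the thesis decoder's details may differ, and the received word below is hypothetical):

```python
def fast_hadamard(v):
    """Fast Hadamard transform of a length-2**n list in n butterfly
    stages, one stage per H_2 factor of H_{2^n} = H_2 (x) ... (x) H_2."""
    v = list(v)
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            for j in range(i, i + h):
                v[j], v[j + h] = v[j] + v[j + h], v[j] - v[j + h]
        h *= 2
    return v

# Decoding sketch: map received bits to +/-1 and transform; the
# largest-magnitude coordinate identifies the closest Walsh codeword.
received = [1, -1, 1, -1, 1, -1, 1, 1]  # Walsh row 1 with one flip
spectrum = fast_hadamard(received)
print(spectrum)  # → [2, 6, -2, 2, -2, 2, 2, -2]; peak at index 1
```

Each stage touches every coordinate once, so decoding a length-2^n code costs n·2^n additions instead of the 4^n of a direct correlation.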

The problem of monitoring the correct synchronization position is also considered. A general answer seems to depend upon more detailed knowledge of the structure of comma-free codes. However, a technique is presented which gives useful results in many specific cases.

Relevance:

10.00%

Publisher:

Abstract:

Future fossil fuel scarcity and environmental degradation have demonstrated the need for renewable, low-carbon sources of energy to power an increasingly industrialized world. Solar energy, with its effectively inexhaustible supply, is an extraordinary resource that should not go unused. With current materials, however, adoption is limited by cost, so a paradigm shift must occur before solar technology is embraced universally. Cuprous oxide (Cu2O) is a promising earth-abundant material that can be a strong alternative to traditional thin-film photovoltaic materials such as CIGS and CdTe. We have prepared Cu2O bulk substrates by the thermal oxidation of copper foils, as well as Cu2O thin films deposited via plasma-assisted molecular beam epitaxy. From preliminary Hall measurements it was determined that Cu2O would need to be doped extrinsically. This was further confirmed by simulations of ZnO/Cu2O heterojunctions. A cyclic interdependence between defect concentration, minority carrier lifetime, film thickness, and carrier concentration manifests itself as a primary reason why efficiencies greater than 4% have yet to be realized. Our growth methodology for the thin-film heterostructures allows precise control of the number of defects incorporated into the film during both equilibrium and nonequilibrium growth. We also report the process flow, device design, and fabrication techniques used to create a device. A typical device without any optimizations exhibited open-circuit voltage (Voc) values in excess of 500 mV, nearly 18% greater than previous solid-state devices.

Relevance:

10.00%

Publisher:

Abstract:

Part I.

In recent years, backscattering spectrometry has become an important tool for the analysis of thin films. An inherent limitation, though, is the loss of depth resolution due to energy straggling of the beam. To investigate this, energy straggling of 4He has been measured in thin films of Ni, Al, Au, and Pt. Straggling is roughly proportional to the square root of thickness, appears to have a slight energy dependence, and generally decreases with decreasing atomic number of the absorber. The results are compared with predictions of theory and with previous measurements. While the Ni measurements are in fair agreement with Bohr's theory, the Al measurements are 30% above and the Au measurements 40% below the predicted values. The Au and Pt measurements give straggling values which are close to one another.
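For reference, the comparison theory here is Bohr's classical estimate of the straggling variance, which already exhibits the two trends reported above, growth with the square root of thickness and with the atomic number of the absorber:

```latex
% Bohr's estimate (Gaussian units) for the energy-straggling variance of
% a heavy charged particle of charge Z_1 e traversing a thickness
% \Delta x of a material with atomic density N and atomic number Z_2:
\Omega_B^2 = 4\pi Z_1^2 e^4 N Z_2 \,\Delta x
```

Since the variance is linear in both Z_2 and Δx, the width Ω_B scales as the square root of thickness and decreases for lighter absorbers, consistent with the measurements.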

Part II.

MeV backscattering spectrometry and X-ray diffraction are used to investigate the behavior of sputter-deposited Ti-W mixed films on Si substrates. During vacuum anneals at temperatures near 700°C for several hours, the metallization layer reacts with the substrate. Backscattering analysis shows that the resulting compound layer is uniform in composition and contains Ti, W, and Si. The Ti:W ratio in the compound corresponds to that of the deposited metal film. X-ray analyses with Reed and Guinier cameras reveal the presence of the ternary Ti_xW_(1-x)Si_2 compound. Its composition is unaffected by oxygen contamination during annealing, but the reaction rate is affected. The rate measured on samples with about 15% oxygen contamination after annealing is linear, of the order of 0.5 Å per second at 725°C, and depends on the crystallographic orientation of the substrate and the dc bias during sputter deposition of the Ti-W film.

Au layers of about 1000 Å thickness were deposited onto unreacted Ti-W films on Si. When annealed at 400°C, these samples underwent a color change, and SEM micrographs showed that an intricate pattern of fissures, typically 3 µm wide, had evolved. Analysis by electron microprobe revealed that Au had segregated preferentially into the fissures. This result suggests that Ti-W is not a barrier to Au-Si intermixing at 400°C.

Relevance:

10.00%

Publisher:

Abstract:

The isotopic composition of the enhanced low energy nitrogen and oxygen cosmic rays can provide information regarding the source of these particles. Using the Caltech Electron/Isotope Spectrometer aboard the IMP-7 satellite, a measurement of this isotopic composition was made. To determine the isotope response of the instrument, a calibration was performed, and it was determined that the standard range-energy tables were inadequate to calculate the isotope response. From the calibration, corrections to the standard range-energy tables were obtained which can be used to calculate the isotope response of this and similar instruments.

The low energy nitrogen and oxygen cosmic rays were determined to be primarily ^(14)N and ^(16)O. Upper limits were obtained for the abundances of the other stable nitrogen and oxygen isotopes. At the 84% confidence level the isotopic abundances are: ^(15)N/N ≤ 0.26 (5.6 - 12.7 MeV/nucleon), ^(17)O/O ≤ 0.13 (7.0 - 11.8 MeV/nucleon), ^(18)O/O ≤ 0.12 (7.0 - 11.2 MeV/nucleon). The nitrogen composition differs from higher energy measurements, which indicate that ^(15)N, thought to be secondary, is the dominant isotope. This implies that the low energy enhanced cosmic rays are not part of the same population as the higher energy cosmic rays and that they have not passed through enough material to produce a large fraction of ^(15)N. The isotopic composition of the low energy enhanced nitrogen and oxygen is consistent with the local acceleration theory of Fisk, Kozlovsky, and Ramaty, in which interstellar material is accelerated to several MeV/nucleon. If, on the other hand, the low energy nitrogen and oxygen result from nucleosynthesis in a galactic source, then the nucleosynthesis processes which produce an enhancement of nitrogen and oxygen and a depletion of carbon are restricted to producing predominantly ^(14)N and ^(16)O.