8 results for Renilla reniformis luciferase vectors

in CaltechTHESIS


Relevance:

20.00%

Publisher:

Abstract:

The study of codes, classically motivated by the need to communicate information reliably in the presence of error, has found new life in fields as diverse as network communication and distributed data storage, and even has connections to the design of linear measurements used in compressive sensing. In all of these contexts, a code typically exploits the algebraic or geometric structure underlying an application. In this thesis, we examine several problems in coding theory and try to gain some insight into the algebraic structure behind them.

The first is the study of the entropy region - the space of all possible vectors of joint entropies which can arise from a set of discrete random variables. Understanding this region is essentially the key to optimizing network codes for a given network. To this end, we employ a group-theoretic method of constructing random variables producing so-called "group-characterizable" entropy vectors, which are capable of approximating any point in the entropy region. We show how small groups can be used to produce entropy vectors which violate the Ingleton inequality, a fundamental bound on entropy vectors arising from the random variables involved in linear network codes. We discuss the suitability of these groups to design codes for networks which could potentially outperform linear coding.
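The Ingleton test referred to above is easy to state in code. A minimal sketch (all names and the example distribution are illustrative, not from the thesis) that evaluates the Ingleton expression for four discrete random variables given their joint distribution; the expression is nonnegative for the entropy vectors of linear network codes:

```python
# Sketch: evaluate the Ingleton expression for four discrete random
# variables (X1..X4) given their joint distribution as a dict
# mapping outcome tuples to probabilities.
import itertools
import math

def entropy(joint, subset):
    """Shannon entropy (bits) of the marginal on the given coordinate subset."""
    marginal = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in subset)
        marginal[key] = marginal.get(key, 0.0) + p
    return -sum(p * math.log2(p) for p in marginal.values() if p > 0)

def ingleton_gap(joint):
    """Ingleton expression, written in joint entropies (1-based indices):
    H(12)+H(13)+H(14)+H(23)+H(24) - H(1)-H(2)-H(34)-H(123)-H(124).
    Nonnegative for linear codes; a violating entropy vector makes it < 0.
    (The code below uses 0-based coordinate positions.)"""
    H = lambda *s: entropy(joint, s)
    return (H(0, 1) + H(0, 2) + H(0, 3) + H(1, 2) + H(1, 3)
            - H(0) - H(1) - H(2, 3) - H(0, 1, 2) - H(0, 1, 3))

# Example: four i.i.d. fair bits -- entropies behave linearly, so the gap is 0.
joint = {bits: 1 / 16 for bits in itertools.product((0, 1), repeat=4)}
print(ingleton_gap(joint))
```

Searching for distributions (or groups, via the group-characterizable construction) that drive this expression negative is exactly the kind of violation discussed above.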

The second topic we discuss is the design of frames with low coherence, closely related to finding spherical codes in which the codewords are unit vectors spaced out around the unit sphere so as to minimize the magnitudes of their mutual inner products. We show how to build frames by selecting a cleverly chosen set of representations of a finite group to produce a "group code" as described by Slepian decades ago. We go on to reinterpret our method as selecting a subset of rows of a group Fourier matrix, allowing us to study and bound our frames' coherences using character theory. We discuss the usefulness of our frames in sparse signal recovery using linear measurements.
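The coherence being minimized can be computed directly from the frame's Gram matrix. A minimal sketch (the 2×4 harmonic frame below is a standard toy example built from DFT rows, not a frame from the thesis):

```python
# Sketch: mutual coherence of a unit-norm frame -- the largest magnitude
# inner product between distinct (normalized) frame vectors.
import numpy as np

def coherence(F):
    """F: d x n matrix whose columns are the frame vectors."""
    U = F / np.linalg.norm(F, axis=0)   # normalize each column
    G = np.abs(U.conj().T @ U)          # magnitudes of all inner products
    np.fill_diagonal(G, 0.0)            # ignore self inner products
    return G.max()

# Toy example: keeping d rows of an n x n DFT matrix gives a d x n
# "harmonic frame" -- a simple instance of selecting rows of a group
# Fourier matrix (here the group is the cyclic group Z/4).
n, d = 4, 2
dft = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)
F = dft[:d, :]
print(coherence(F))
```

For nonabelian groups the same row-selection idea applies to the group Fourier matrix built from irreducible representations, which is where the character-theoretic coherence bounds enter.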

The final problem we investigate is that of coding with constraints, most recently motivated by the demand for ways to encode large amounts of data using error-correcting codes so that any small loss can be recovered from a small set of surviving data. Most often, this involves using a systematic linear error-correcting code in which each parity symbol is constrained to be a function of some subset of the message symbols. We derive bounds on the minimum distance of such a code based on its constraints, and characterize when these bounds can be achieved using subcodes of Reed-Solomon codes.
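As an illustration of the constrained-coding setup described here, the following sketch brute-forces the minimum distance of a toy systematic binary code in which each parity symbol is restricted to a subset of the message symbols (the constraint sets are made up for illustration; the thesis works with general linear codes and Reed-Solomon subcodes):

```python
# Sketch: brute-force minimum distance of a small systematic binary code
# whose parity symbols are each constrained to a subset of message symbols.
import itertools

def min_distance(k, parity_sets):
    """Each parity set lists the message positions that parity may depend on;
    here each parity is simply the XOR of its allowed positions."""
    best = None
    for msg in itertools.product((0, 1), repeat=k):
        if not any(msg):
            continue                        # skip the zero codeword
        parities = [sum(msg[i] for i in s) % 2 for s in parity_sets]
        weight = sum(msg) + sum(parities)   # Hamming weight of the codeword
        best = weight if best is None else min(best, weight)
    return best                             # min weight = min distance (linear code)

# k = 3 message bits; parities constrained to subsets {0,1}, {1,2}, {0,1,2}.
print(min_distance(3, [(0, 1), (1, 2), (0, 1, 2)]))
```

Bounds of the kind derived in the thesis predict, from the constraint sets alone, how large this minimum distance can possibly be.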

Relevance:

10.00%

Publisher:

Abstract:

We classify the genuine ordinary mod p representations of the metaplectic group SL(2,F)-tilde, where F is a p-adic field, and compute its genuine mod p spherical and Iwahori Hecke algebras. The motivation is an interest in a possible correspondence between genuine mod p representations of SL(2,F)-tilde and mod p representations of the dual group PGL(2,F), so we also compare the two Hecke algebras to the mod p spherical and Iwahori Hecke algebras of PGL(2,F). We show that the genuine mod p spherical Hecke algebra of SL(2,F)-tilde is isomorphic to the mod p spherical Hecke algebra of PGL(2,F), and that one can choose an isomorphism which is compatible with a natural, though partial, correspondence of unramified ordinary representations via the Hecke action on their spherical vectors. We then show that the genuine mod p Iwahori Hecke algebra of SL(2,F)-tilde is a subquotient of the mod p Iwahori Hecke algebra of PGL(2,F), but that the two algebras are not isomorphic. This is in contrast to the situation in characteristic 0, where by work of Savin one can recover the local Shimura correspondence for representations generated by their Iwahori fixed vectors from an isomorphism of Iwahori Hecke algebras.

Relevance:

10.00%

Publisher:

Abstract:

Large quantities of teleseismic short-period seismograms recorded at SCARLET provide travel time, apparent velocity and waveform data for study of upper mantle compressional velocity structure. Relative array analysis of arrival times from distant (30° < Δ < 95°) earthquakes at all azimuths constrains lateral velocity variations beneath southern California. We compare dT/dΔ, back azimuth, and averaged arrival time estimates from the entire network for 154 events to the same parameters derived from small subsets of SCARLET. Patterns of mislocation vectors for over 100 overlapping subarrays delimit the spatial extent of an east-west striking, high-velocity anomaly beneath the Transverse Ranges. Thin lens analysis of the averaged arrival time differences, called 'net delay' data, requires the mean depth of the corresponding lens to be more than 100 km. Our results are consistent with the PKP-delay times of Hadley and Kanamori (1977), who first proposed the high-velocity feature, but we place the anomalous material at substantially greater depths than their 40-100 km estimate.

Detailed analysis of travel time, ray parameter and waveform data from 29 events occurring in the distance range 9° to 40° reveals the upper mantle structure beneath an oceanic ridge to depths of over 900 km. More than 1400 digital seismograms from earthquakes in Mexico and Central America yield 1753 travel times and 58 dT/dΔ measurements as well as high-quality, stable waveforms for investigation of the deep structure of the Gulf of California. The result of a travel time inversion with the tau method (Bessonova et al., 1976) is adjusted to fit the p(Δ) data, then further refined by incorporation of relative amplitude information through synthetic seismogram modeling, yielding the final model, GCA. The application of a modified wave field continuation method (Clayton and McMechan, 1981) to the data confirms that GCA is consistent with the entire data set and also provides an estimate of the data resolution in velocity-depth space. We discover that the upper mantle under this spreading center has anomalously slow velocities to depths of 350 km, and place new constraints on the shape of the 660 km discontinuity.

Seismograms from 22 earthquakes along the northeast Pacific rim recorded in southern California form the data set for a comparative investigation of the upper mantle beneath the Cascade Ranges-Juan de Fuca region, an ocean-continent transition. These data consist of 853 seismograms (6° < Δ < 42°) which produce 1068 travel times and 40 ray parameter estimates. We use the spreading center model initially in synthetic seismogram modeling, and perturb GCA until the Cascade Ranges data are matched. Wave field continuation of both data sets with a common reference model confirms that real differences exist between the two suites of seismograms, implying lateral variation in the upper mantle. The ocean-continent transition model, CJF, features velocities from 200 to 350 km that are intermediate between GCA and T7 (Burdick and Helmberger, 1978), a model for the inland western United States. Models of continental shield regions (e.g., King and Calcagnile, 1976) have higher velocities in this depth range, but all four model types are similar below 400 km. This variation in rate of velocity increase with tectonic regime suggests an inverse relationship between velocity gradient and lithospheric age above 400 km depth.

Relevance:

10.00%

Publisher:

Abstract:

In this thesis I apply paleomagnetic techniques to paleoseismological problems. I investigate the use of secular-variation magnetostratigraphy to date prehistoric earthquakes; I identify liquefaction remanent magnetization (LRM); and I quantify coseismic deformation within a fault zone by measuring the rotation of paleomagnetic vectors.

In Chapter 2 I construct a secular-variation reference curve for southern California. For this curve I measure three new well-constrained paleomagnetic directions: two from the Pallett Creek paleoseismological site at A.D. 1397-1480 and A.D. 1465-1495, and one from Panum Crater at A.D. 1325-1365. To these three directions I add the best nine data points from the Sternberg secular-variation curve, five data points from Champion, and one point from the A.D. 1480 eruption of Mt. St. Helens. I derive the non-dipole-field error introduced into these data by the geographic correction to southern California. Combining these yields a secular variation curve for southern California covering the period A.D. 670 to 1910, with the best coverage in the range A.D. 1064 to 1505.

In Chapter 3 I apply this curve to a problem in southern California. Two paleoseismological sites in the Salton trough of southern California have sediments deposited by prehistoric Lake Cahuilla. At the Salt Creek site I sampled sediments from three different lakes, and at the Indio site I sampled sediments from four different lakes. Based upon the coinciding paleomagnetic directions I correlate the oldest lake sampled at Salt Creek with the oldest lake sampled at Indio. Furthermore, the penultimate lake at Indio does not appear to be present at Salt Creek. Using the secular variation curve I can assign the lakes at Salt Creek to broad age ranges of A.D. 800 to 1100, A.D. 1100 to 1300, and A.D. 1300 to 1500. This example demonstrates the large uncertainties in the secular variation curve and the need to construct curves from a limited geographical area.

Chapter 4 demonstrates that seismically induced liquefaction can cause resetting of detrital remanent magnetization and acquisition of a liquefaction remanent magnetization (LRM). I sampled three different liquefaction features: a sandbody formed in the Elsinore fault zone, diapirs from sediments of Mono Lake, and a sandblow in these same sediments. In every case the liquefaction features showed stable magnetization despite substantial physical disruption. In addition, in the case of the sandblow and the sandbody, the intensity of the natural remanent magnetization increased by up to an order of magnitude.

In Chapter 5 I apply paleomagnetics to measuring the tectonic rotations in a 52 meter long transect across the San Andreas fault zone at the Pallett Creek paleoseismological site. This site has presented a significant problem because the brittle long-term average slip-rate across the fault is significantly less than the slip-rate from other nearby sites. I find sections adjacent to the fault with tectonic rotations of up to 30°. If interpreted as block rotations, the non-brittle offset was 14.0 (+2.8, −2.1) meters in the last three earthquakes and 8.5 (+1.0, −0.9) meters in the last two. Combined with the brittle offset in these events, the last three events all had about 6 meters of total fault offset, even though the intervals between them were markedly different.

In Appendix 1 I present a detailed description of my standard sampling and demagnetization procedure.

In Appendix 2 I present a detailed discussion of the study at Panum Crater that yielded the well-constrained paleomagnetic direction used in developing the secular variation curve in Chapter 2. In addition, from sampling two distinctly different clast types in a block-and-ash flow deposit from Panum Crater, I find that this flow had a complex emplacement and cooling history. Angular, glassy "lithic" blocks were emplaced at temperatures above 600° C. Some of these had cooled nearly completely, whereas others had cooled only to 450° C, when settling in the flow rotated the blocks slightly. The partially cooled blocks then finished cooling without further settling. Highly vesicular, breadcrusted pumiceous clasts had not yet cooled below 600° C at the time of these rotations, because they show a stable, well-clustered, unidirectional magnetic vector.

Relevance:

10.00%

Publisher:

Abstract:

The visual system is a remarkable platform that evolved to solve difficult computational problems such as detection, recognition, and classification of objects. Of great interest is the face-processing network, a sub-system buried deep in the temporal lobe dedicated to analyzing a specific type of object: faces. In this thesis, I focus on the problem of face detection by the face-processing network. Insights obtained from years of developing computer-vision algorithms for this task suggest that it may be efficiently and effectively solved by detection and integration of local contrast features. Does the brain use a similar strategy? To answer this question, I embark on a journey that takes me through the development and optimization of dedicated tools for targeting and perturbing deep brain structures. Data collected using MR-guided electrophysiology in early face-processing regions showed strong selectivity for contrast features, similar to the ones used by artificial systems. While individual cells were tuned for only a small subset of features, the population as a whole encoded the full spectrum of features predictive of the presence of a face in an image. Together with additional evidence, my results suggest a possible computational mechanism for face detection in early face-processing regions. To move from correlation to causation, I focus on adopting an emergent technology for perturbing brain activity using light: optogenetics. While this technique has the potential to overcome problems associated with the de facto method of brain stimulation (electrical microstimulation), many open questions remain about its applicability and effectiveness for perturbing the non-human primate (NHP) brain.

In a set of experiments, I use viral vectors to deliver genetically encoded optogenetic constructs to the frontal eye field and face-selective regions in NHP and examine their effects side-by-side with electrical microstimulation to assess their effectiveness in perturbing neural activity as well as behavior. Results suggest that cells are robustly and strongly modulated upon light delivery and that such perturbation can modulate and even initiate motor behavior, thus paving the way for future explorations that may apply these tools to study connectivity and information flow in the face-processing network.
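At their simplest, local contrast features of the kind mentioned above are comparisons of mean luminance between face regions. A toy sketch (the synthetic image and region boxes are invented for illustration, not taken from the thesis):

```python
# Sketch: a single local contrast feature -- the difference in mean
# luminance between two rectangular face regions.
import numpy as np

def contrast_feature(img, region_a, region_b):
    """Each region is (row0, row1, col0, col1); returns mean(A) - mean(B)."""
    r0, r1, c0, c1 = region_a
    a = img[r0:r1, c0:c1].mean()
    r0, r1, c0, c1 = region_b
    b = img[r0:r1, c0:c1].mean()
    return a - b

# Toy "face": bright forehead (top half), darker eye band below it.
img = np.ones((20, 20))
img[10:14, :] = 0.2
forehead, eyes = (0, 10, 0, 20), (10, 14, 0, 20)
print(contrast_feature(img, forehead, eyes))   # positive: forehead brighter
```

A detector would integrate many such pairwise comparisons; the population coding result above suggests face cells collectively span such a feature set.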

Relevance:

10.00%

Publisher:

Abstract:

Insect vector-borne diseases, such as malaria and dengue fever (both spread by mosquito vectors), continue to significantly impact health worldwide, despite the efforts put forth to eradicate them. Suppression strategies utilizing genetically modified disease-refractory insects have surfaced as an attractive means of disease control, and progress has been made on engineering disease-resistant insect vectors. However, laboratory-engineered disease-refractory genes would probably not spread in the wild, and would most likely need to be linked to a gene drive system in order to proliferate in native insect populations. Underdominant systems such as translocations and engineered underdominance have been proposed as potential mechanisms for spreading disease-refractory genes. Not only do these threshold-dependent systems have certain advantages over other potential gene drive mechanisms, such as localization of gene drive and removability, but extreme engineered underdominance can also be used to bring about reproductive isolation, which may be of interest in controlling the spread of GMO crops. Proof-of-principle establishment of such drive mechanisms in a well-understood and studied insect, such as Drosophila melanogaster, is essential before more applied systems can be developed for the less-characterized vector species of interest, such as mosquitoes. This work details the development of several distinct types of engineered underdominance and of translocations in Drosophila, including ones capable of bringing about reproductive isolation and population replacement, as a proof-of-concept study that can inform efforts to construct such systems in insect disease vectors.

Relevance:

10.00%

Publisher:

Abstract:

Over the past few decades, ferromagnetic spinwave resonance in magnetic thin films has been used as a tool for studying the properties of magnetic materials. A full understanding of the boundary conditions at the surface of the magnetic material is extremely important. Such an understanding has been the general objective of this thesis. The approach has been to investigate various hypotheses of the surface condition and to compare the results of these models with experimental data. The conclusion is that the boundary conditions are largely due to thin surface regions with magnetic properties different from the bulk. In the calculations these regions were usually approximated by uniform surface layers; the spins were otherwise unconstrained except by the same mechanisms that exist in the bulk (i.e., no special "pinning" at the surface atomic layer is assumed). The variation of the ferromagnetic spinwave resonance spectra in YIG films with frequency, temperature, annealing, and orientation of applied field provided an excellent experimental basis for the study.

This thesis can be divided into two parts. The first part is ferromagnetic resonance theory; the second part is the comparison of calculated with experimental data in YIG films. Both are essential in understanding the conclusion that surface regions with properties different from the bulk are responsible for the resonance phenomena associated with boundary conditions.

The theoretical calculations have been made by finding the wave vectors characteristic of the magnetic fields inside the magnetic medium, and then combining the fields associated with these wave vectors in superposition to match the specified boundary conditions. In addition to magnetic boundary conditions required for the surface layer model, two phenomenological magnetic boundary conditions are discussed in detail. The wave vectors are easily found by combining the Landau-Lifshitz equations with Maxwell's equations. Mode positions are most easily predicted from the magnetic wave vectors obtained by neglecting damping, conductivity, and the displacement current. For an insulator where the driving field is nearly uniform throughout the sample, these approximations permit a simple yet accurate calculation of the mode intensities. For metal films this calculation may be inaccurate but the mode positions are still accurately described. The techniques necessary for calculating the power absorbed by the film under a specific excitation including the effects of conductivity, displacement current and damping are also presented.
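The undamped ingredients described above can be written compactly. A hedged sketch in standard form (with exchange constant $A$ and saturation magnetization $M_s$; the simple pinned-mode condition $k_n \approx n\pi/L$ below is the idealization that the thesis's surface-layer model refines, and the thesis treats the general damped, conducting case):

```latex
% Undamped Landau-Lifshitz equation with an exchange effective field,
% and the resulting standing spin-wave resonance fields for a film of
% thickness L magnetized normal to its plane.
\begin{align}
  \frac{\partial \mathbf{M}}{\partial t}
    &= -\gamma\, \mathbf{M} \times \mathbf{H}_{\mathrm{eff}},
    \qquad
    \mathbf{H}_{\mathrm{eff}} = \mathbf{H} + \frac{2A}{M_s^{2}}\,\nabla^{2}\mathbf{M}, \\
  H_n &\approx \frac{\omega}{\gamma} + 4\pi M_s
        - \frac{2A}{M_s}\left(\frac{n\pi}{L}\right)^{2}.
\end{align}
```

Mode positions follow from the allowed wave vectors $k_n$, which is why the choice of surface boundary condition (pinned, unpinned, or a surface layer with different properties) controls the observed spectrum.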

In the second part of the thesis the properties of magnetic garnet materials are summarized and the properties believed associated with the two surface regions of a YIG film are presented. Finally, the experimental data and calculated data for the surface layer model and other proposed models are compared. The conclusion of this study is that the remarkable variety of spinwave spectra that arises from various preparation techniques and subsequent treatments can be explained by surface regions with magnetic properties different from the bulk.

Relevance:

10.00%

Publisher:

Abstract:

Network information theory and channels with memory are two important but difficult frontiers of information theory. In this two-part dissertation, we study these two areas, one in each part. In the first, we study the so-called entropy vectors via finite group theory, and the network codes constructed from finite groups. In particular, we identify the smallest finite group that violates the Ingleton inequality, an inequality respected by all linear network codes but not satisfied by all entropy vectors. Based on the analysis of this group we generalize it to several families of Ingleton-violating groups, which may be used to design good network codes. To that end, we study the network codes constructed with finite groups, and show in particular that linear network codes are embedded in the group network codes constructed with these Ingleton-violating families. Furthermore, such codes are strictly more powerful than linear network codes, as they are able to violate the Ingleton inequality while linear network codes cannot. In the second part, we study the impact of memory on channel capacity through a novel communication system: the energy harvesting channel. Unlike in traditional communication systems, the transmitter of an energy harvesting channel is powered by an exogenous energy harvesting device and a finite-sized battery. As a consequence, at each channel use the system can transmit only a symbol whose energy consumption is no more than the energy currently available. This new type of power supply introduces an unprecedented input constraint for the channel, one that is random, instantaneous, and has memory. Furthermore, the energy harvesting process is naturally observed causally at the transmitter, but no such information is provided to the receiver. Both of these features pose great challenges for the analysis of the channel capacity.

In this work we use techniques from channels with side information and from finite state channels to obtain lower and upper bounds on the capacity of the energy harvesting channel. In particular, we study the stationarity and ergodicity conditions of a surrogate channel to compute and optimize the achievable rates for the original channel. In addition, for practical code design we study the pairwise error probabilities of the input sequences.
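The instantaneous battery constraint described above is easy to make concrete. A minimal simulation sketch (the greedy policy, alphabet, and energy costs are illustrative choices, not from the dissertation):

```python
# Toy energy harvesting transmitter: at each channel use, harvested energy
# arrives, the battery is capped at its capacity, and only symbols whose
# energy cost does not exceed the stored energy may be sent.
def simulate(T, battery_size, harvest, symbols):
    """symbols: dict mapping channel symbol -> energy cost.
    harvest(): energy harvested in one step. Greedy policy: always send
    the affordable symbol with the highest energy cost."""
    battery, sent = 0.0, []
    for _ in range(T):
        battery = min(battery_size, battery + harvest())    # energy arrives first
        affordable = [s for s, e in symbols.items() if e <= battery]
        s = max(affordable, key=symbols.get)                # greedy choice
        battery -= symbols[s]
        sent.append(s)
    return sent

# Deterministic unit harvest, binary on-off alphabet: the costly symbol
# is affordable at every channel use.
seq = simulate(T=10, battery_size=2.0, harvest=lambda: 1.0,
               symbols={0: 0.0, 1: 1.0})
print(seq)
```

With a smaller harvest rate (e.g. 0.5 per step) the same policy is forced to alternate cheap and costly symbols, which is the memory in the input constraint: what can be sent now depends on the entire past of harvests and transmissions.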