28 results for Additional experiments

in CaltechTHESIS


Relevance: 30.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in different contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn determine the next most informative test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise, or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
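To make the adaptive loop concrete, the following is a minimal Python sketch of a BROAD-style design under simplifying assumptions: a small discrete hypothesis set, a known subject error rate, and a reduced EC2-like score in which "edges" connect hypotheses from different theory classes and a test is scored by the expected posterior mass of the edges it cuts. The names and structure are illustrative, not the code used in the thesis.

import itertools
import numpy as np

def ec2_style_score(t, predictions, posterior, theory_of):
    """Expected posterior mass of hypothesis-pair 'edges' cut by test t.

    predictions[h, t] is the choice hypothesis h predicts on test t; an
    edge joins hypotheses from different theory classes and carries
    weight posterior[i] * posterior[j].  (Noiseless approximation.)
    """
    score = 0.0
    for outcome in np.unique(predictions[:, t]):
        consistent = predictions[:, t] == outcome
        p_outcome = posterior[consistent].sum()
        cut = sum(posterior[i] * posterior[j]
                  for i, j in itertools.combinations(range(len(posterior)), 2)
                  if theory_of[i] != theory_of[j]
                  and not (consistent[i] and consistent[j]))
        score += p_outcome * cut
    return score

def run_broad_style(predictions, theory_of, subject, n_rounds, error_rate=0.05):
    """Greedy adaptive test selection with a noisy Bayesian posterior update."""
    n_hyp, n_tests = predictions.shape
    posterior = np.full(n_hyp, 1.0 / n_hyp)
    asked = set()
    for _ in range(n_rounds):
        scores = [ec2_style_score(t, predictions, posterior, theory_of)
                  if t not in asked else -np.inf for t in range(n_tests)]
        t = int(np.argmax(scores))
        asked.add(t)
        response = subject(t)                        # observed choice on test t
        agrees = predictions[:, t] == response
        likelihood = np.where(agrees, 1.0 - error_rate, error_rate)
        posterior = posterior * likelihood
        posterior /= posterior.sum()
    return posterior

# Toy usage: 4 hypotheses (2 per theory class), 6 candidate binary-choice tests.
rng = np.random.default_rng(0)
preds = rng.integers(0, 2, size=(4, 6))
post = run_broad_style(preds, theory_of=[0, 0, 1, 1],
                       subject=lambda t: preds[2, t], n_rounds=4)

The accelerated greedy variant mentioned above would additionally exploit lazy evaluation of these scores, which adaptive submodularity makes safe.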

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out both because it is infeasible in practice and because we do not find any signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models (quasi-hyperbolic (α, β) discounting and fixed cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.

In these models, the passage of time is linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
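A heuristic way to see why a stochastic subjective clock slows effective discounting (this is an illustration via Jensen's inequality, not the positive-dependence argument proved in the thesis): if subjective time T_t satisfies E[T_t] = t and the decision maker discounts exponentially in subjective time, then convexity of x ↦ e^(-rx) gives

D(t) = E[e^(-r T_t)] ≥ e^(-r E[T_t]) = e^(-rt),

so delays are discounted less severely than under objective exponential discounting; the more dispersed T_t becomes at long horizons (as when its increments are positively dependent), the larger the gap, which is the qualitative signature of hyperbolic-like discounting and the associated preference reversals.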

We also test the predictions of behavioural theories in the "wild". We pay particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than its price elasticity alone would explain. Even more importantly, when the item is no longer discounted, demand for its close substitute increases excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion, and strategies for competitive pricing.
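As an illustration of the kind of reference-dependent discrete choice model this describes, the following Python sketch embeds a loss-averse gain/loss term in a standard logit; the functional form, parameter values, and prices are hypothetical placeholders, not the model estimated in the thesis.

import numpy as np

def choice_probabilities(prices, ref_prices, beta=1.0, eta=0.5, lam=2.25):
    """Multinomial logit with a reference-dependent gain/loss term.

    Utility of item j: -beta * p_j + eta * g(ref_j - p_j), where losses
    (prices above the reference) are weighted lam times more than gains.
    """
    diff = ref_prices - prices                     # > 0: cheaper than expected
    gain_loss = np.where(diff >= 0, diff, lam * diff)
    utility = -beta * prices + eta * gain_loss
    expu = np.exp(utility - utility.max())
    return expu / expu.sum()

# Two close substitutes, both with an expected (reference) price of 10.
ref = np.array([10.0, 10.0])
print(choice_probabilities(np.array([8.0, 10.0]), ref))    # item 1 on discount
# After the discount ends, a reference price adapted to 8 makes the full
# price feel like a loss, shifting demand excessively to the substitute.
print(choice_probabilities(np.array([10.0, 10.0]), np.array([8.0, 10.0])))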

In future work, BROAD can be applied widely to test different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance: 20.00%

Abstract:

In this thesis I investigate some aspects of the thermal budget of pahoehoe lava flows. This is done with a combination of general field observations, quantitative modeling, and specific field experiments. The results of this work apply to pahoehoe flows in general, even though the vast bulk of the work has been conducted on the lavas formed by the Pu'u 'O'o - Kupaianaha eruption of Kilauea Volcano on Hawai'i. The field observations rely heavily on discussions with the staff of the United States Geological Survey's Hawaiian Volcano Observatory (HVO), under whom I labored repeatedly in 1991-1993 for periods totaling about 10 months.

The quantitative models I have constructed are based on the physical processes observed by others and myself to be active on pahoehoe lava flows. By building up these models from the basic physical principles involved, this work avoids many of the pitfalls of earlier attempts to fit field observations with "intuitively appropriate" mathematical expressions. Unlike many earlier works, my model results can be analyzed in terms of the interactions between the different physical processes. I constructed models to: (1) describe the initial cooling of small pahoehoe flow lobes and (2) understand the thermal budget of lava tubes.
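As a flavour of the physical ingredients such a lobe-cooling model is built from, here is a minimal Python sketch of a surface energy balance with radiative and wind-driven convective heat loss. The lumped treatment of a thin surface layer, the neglect of conduction from the hot interior, and all parameter values are illustrative assumptions rather than the model developed in this thesis.

import numpy as np

SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

def cool_lobe_surface(t_end=600.0, dt=0.1, T0=1400.0, T_air=300.0,
                      eps=0.95, h_wind=50.0, rho=2600.0, cp=1000.0, d=0.02):
    """Lumped-capacitance cooling of a thin (d metres) surface layer.

    Heat is lost by thermal radiation and by wind-driven convection;
    conduction of heat from the hot lobe interior is ignored, so this
    deliberately over-predicts how fast the surface cools.  All values
    are order-of-magnitude placeholders.
    """
    n = int(t_end / dt)
    T = np.empty(n)
    T[0] = T0
    for i in range(1, n):
        q_rad = eps * SIGMA * (T[i-1]**4 - T_air**4)   # W m^-2
        q_conv = h_wind * (T[i-1] - T_air)             # W m^-2
        T[i] = T[i-1] - dt * (q_rad + q_conv) / (rho * cp * d)
    return T

surface_T = cool_lobe_surface()
print(f"surface temperature after 10 minutes: {surface_T[-1]:.0f} K")

A realistic model of the kind described above must additionally track crust growth and conductive heat supply from the molten interior, which is where the field measurements listed below come in.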

The field experiments were designed either to validate model results or to constrain key input parameters. In support of the cooling model for pahoehoe flow lobes, attempts were made to measure: (1) the cooling within the flow lobes, (2) the amount of heat transported away from the lava by wind, and (3) the growth of the crust on the lobes. Field data collected by Jones [1992], Hon et al. [1994b], and Denlinger [Keszthelyi and Denlinger, in prep.] were also particularly useful in constraining my cooling model for flow lobes. Most of the field observations I have used to constrain the thermal budget of lava tubes were collected by HVO (geological and geophysical monitoring) and the Jet Propulsion Laboratory (airborne infrared imagery [Realmuto et al., 1992]). I was able to assist HVO with part of their lava tube monitoring program and also to collect helicopter-borne and ground-based IR video in collaboration with JPL [Keszthelyi et al., 1993].

The most significant results of this work are (1) the quantitative demonstration that the emplacement of pahoehoe and 'a'a flows is fundamentally different, (2) confirmation that even the longest lava flows observed in our Solar System could have formed as low-effusion-rate, tube-fed pahoehoe flows, and (3) the recognition that the atmosphere plays a very important role throughout the cooling history of pahoehoe lava flows. In addition to answering specific questions about the thermal budget of tube-fed pahoehoe lava flows, this thesis has led to some additional, more general, insights into the emplacement of these lava flows. This general understanding of the tube-fed pahoehoe lava flow as a system has suggested foci for future research in this part of physical volcanology.

Relevance: 20.00%

Abstract:

Oligonucleotide-directed triple helix formation is one of the most versatile methods for the sequence specific recognition of double helical DNA. Chapter 2 describes affinity cleaving experiments carried out to assess the recognition potential for purine-rich oligonucleotides via the formation of triple helices. Purine-rich oligodeoxyribonucleotides were shown to bind specifically to purine tracts of double helical DNA in the major groove antiparallel to the purine strand of the duplex. Specificity was derived from the formation of reverse Hoogsteen G•GC, A•AT and T•AT triplets and binding was limited to mostly purine tracts. This triple helical structure was stabilized by multivalent cations, destabilized by high concentrations of monovalent cations and was insensitive to pH. A single mismatched base triplet was shown to destabilize a 15 mer triple helix by 1.0 kcal/mole at 25°C. In addition, stability appeared to be correlated to the number of G•GC triplets formed in the triple helix. This structure provides an additional framework as a basis for the design of new sequence specific DNA binding molecules.

In work described in Chapter 3, the triplet specificities and required strand orientations of two classes of DNA triple helices were combined to target double helical sequences containing all four base pairs by alternate strand triple helix formation. This allowed for the use of oligonucleotides containing only natural 3'-5' phosphodiester linkages to simultaneously bind both strands of double helical DNA in the major groove. The stabilities and structures of these alternate strand triple helices depended on whether the binding site sequence was 5'-(purine)_m (pyrimidine)_n-3' or 5'- (pyrimidine)_m (purine)_n-3'.

In Chapter 4, the ability of oligonucleotide-cerium(III) chelates to direct the transesterification of RNA was investigated. Procedures were developed for the modification of DNA and RNA oligonucleotides with a hexadentate Schiff-base macrocyclic cerium(III) complex. In addition, oligoribonucleotides modified by covalent attachment of the metal complex through two different linker structures were prepared. The ability of these structures to direct transesterification to specific RNA phosphodiesters was assessed by gel electrophoresis. No reproducible cleavage of the RNA strand consistent with transesterification could be detected in any of these experiments.

Relevance: 20.00%

Abstract:

In Part I of this thesis, a new magnetic spectrometer experiment which measured the β spectrum of ^(35)S is described. New limits on heavy neutrino emission in nuclear β decay were set, for a heavy neutrino mass range between 12 and 22 keV. In particular, this measurement rejects the hypothesis that a 17 keV neutrino is emitted, with sin^2 θ = 0.0085, at the 6σ statistical level. In addition, an auxiliary experiment was performed, in which an artificial kink was induced in the β spectrum by means of an absorber foil which masked a fraction of the source area. In this measurement, the sensitivity of the magnetic spectrometer to the spectral features of heavy neutrino emission was demonstrated.

In Part II, a measurement of the neutron spallation yield and multiplicity by the Cosmic-ray Underground Background Experiment is described. The production of fast neutrons by muons was investigated at an underground depth of 20 meters water equivalent, with a 200 liter detector filled with 0.09% Gd-loaded liquid scintillator. We measured a neutron production yield of (3.4 ± 0.7) x 10^(-5) neutrons per muon-g/cm^2, in agreement with other experiments. A single-to-double neutron multiplicity ratio of 4:1 was observed. In addition, stopped π^+ decays to µ^+ and then to e^+ were observed, as was the associated production of pions and neutrons by muon spallation interactions. It was seen that practically all of the π^+ produced by muons were also accompanied by at least one neutron. These measurements serve as the basis for neutron background estimates for the San Onofre neutrino detector.

Relevance: 20.00%

Abstract:

A novel Ca^(2+)-binding protein with Mr of 23 K (designated p23) has been identified in avian erythrocytes and thrombocytes. p23 localizes to the marginal bands (MBs), centrosomes and discrete sites around the nuclear membrane in mature avian erythrocytes. p23 appears to bind Ca^(2+) directly and its interaction with subcellular organelles seems to be modulated by intracellular [Ca^(2+)]. However, its unique protein sequence lacks any known Ca^(2+)-binding motif. Developmental analysis reveals that p23 association with its target structures occurs only at very late stages of bone marrow definitive erythropoiesis. In primitive erythroid cells, p23 distributes diffusely in the cytoplasm and lacks any distinct localization. It is postulated that p23 association with subcellular structures may be induced in part by decreased intracellular [Ca^(2+)]. In vitro and in vivo experiments indicate that p23 does not appear to act as a classical microtubule-associated protein (MAP), but p23 homologues appear to be expressed in MB-containing cells of a variety of species from different vertebrate classes. It has been hypothesized that p23 may play a regulatory role in MB stabilization in a Ca^(2+)-dependent manner.

Binucleated (bnbn) turkey erythrocytes were found to express a truncated p23 variant (designated p21) with subcellular localization identical to that of p23, except that immunostaining reveals the presence of multiple centrosomes in bnbn cells. The p21 sequence has a 62 amino acid deletion at the C-terminus and must therefore have an additional ~40 amino acids at the N-terminus. In addition, p21 seems to have lost the ability to bind Ca^(2+) and its supramolecular interactions are not modulated by intracellular [Ca^(2+)]. These apparent differences between p23 and p21 raised the possibility that the p23/p21 allelism could correspond to the Bn/bn genotype. However, genetic analysis suggested that p23/p21 allelism had no absolute correlation with the Bn/bn genotype.

Relevance: 20.00%

Abstract:

Seismic structure above and below the core-mantle boundary (CMB) has been studied through use of travel time and waveform analyses of several different seismic wave groups. Anomalous systematic trends in observables document mantle heterogeneity on both large and small scales. Analog and digital data have been utilized, and in many cases the analog data have been optically scanned and digitized prior to analysis.

Differential travel times of S - SKS are shown to be an excellent diagnostic of anomalous lower mantle shear velocity (V_s) structure. Wavepath geometries beneath the central Pacific exhibit large S - SKS travel time residuals (up to 10 sec), and are consistent with a large-scale O(1000 km) slower-than-average V_s region (≥3%). S - SKS times for paths traversing this region exhibit smaller-scale patterns and trends O(100 km) indicating V_s perturbations on many scale lengths. These times are compared to predictions of three tomographically derived aspherical models: MDLSH of Tanimoto [1990], model SH12_WM13 of Su et al. [1992], and model SH.10c.17 of Masters et al. [1992]. Qualitative agreement between the tomographic model predictions and observations is encouraging, varying from fair to good. However, inconsistencies are present and suggest anomalies in the lower mantle of scale length smaller than the present 2000+ km scale resolution of tomographic models. 2-D wave propagation experiments show the importance of inhomogeneous raypaths when considering lateral heterogeneities in the lowermost mantle.

A dataset of waveforms and differential travel times of S, ScS, and the arrival from the D" layer, Scd, provides evidence for a laterally varying V_s discontinuity at the base of the mantle. Two different localized D" regions beneath the central Pacific have been investigated. Predictions from a model having a V_s discontinuity 180 km above the CMB agree well with observations for an eastern mid-Pacific CMB region. This thickness differs from V_s discontinuity thicknesses found in other regions, such as a localized region beneath the western Pacific, which average near 280 km. The "sharpness" of the V_s jump at the top of D", i.e., the depth range over which the V_s increase occurs, is not resolved by our data, and our data can in fact be modeled equally well by a lower mantle with the increase in V_s at the top of D" occurring over a 100 km depth range. It is difficult at present to correlate D" thicknesses from this study to overall lower mantle heterogeneity, due to uncertainties in the 3-D models, as well as poor coverage in maps of D" discontinuity thicknesses.

P-wave velocity structure (V_p) at the base of the mantle is explored using the seismic phases SKS and SP_dKS. SP_dKS is formed when SKS waves at distances around 107° are incident upon the CMB with a slowness that allows for coupling with diffracted P-waves at the base of the mantle. The P-wave diffraction occurs at both the SKS entrance and exit locations of the outer core. SP_dKS arrives slightly later in time than SKS, having a wave path through the mantle and core very close to SKS. The difference time between SKS and SP_dKS strongly depends on V_p at the base of the mantle near the SKS core entrance and exit points. Observations from deep focus Fiji-Tonga events recorded by North American stations, and South American events recorded by European and Eurasian stations exhibit anomalously large SP_dKS - SKS difference times. SKS and the later arriving SP_dKS phase are separated by several seconds more than predictions made by 1-D reference models, such as the global average PREM [Dziewonski and Anderson, 1981] model. Models having a pronounced low-velocity zone (5%) in V_p in the bottom 50-100 km of the mantle predict the size of the observed SP_dKS - SKS anomalies. Raypath perturbations from lower mantle V_s structure may also be contributing to the observed anomalies.

Outer core structure is investigated using the family of SmKS (m=2,3,4) seismic waves. SmKS are waves that travel as S-waves in the mantle and P-waves in the core, reflecting (m-1) times on the underside of the CMB, and are well suited for constraining outermost core V_p structure. This is due to the closeness of their mantle paths and to the shallow depth range these waves travel in the outermost core. S3KS - S2KS and S4KS - S3KS differential travel times were measured using the cross-correlation method and compared to those from reflectivity synthetics created from core models of past studies. High quality recordings from a deep focus Java Sea event which sample the outer core beneath the northern Pacific, the Arctic, and northwestern North America (spanning 1/8th of the core's surface area) have SmKS wavepaths that traverse regions where lower mantle heterogeneity is predicted to be small, and are well modeled by the PREM core model, with possibly a small V_p decrease (1.5%) in the outermost 50 km of the core. Such a reduction implies chemical stratification in this 50 km zone, though this model feature is not uniquely resolved. Data having wave paths through areas of known D" heterogeneity (±2% and greater), such as the source-side of SmKS lower mantle paths from Fiji-Tonga to Eurasia and Africa, exhibit systematic SmKS differential time anomalies of up to several seconds. 2-D wave propagation experiments demonstrate how large scale lower mantle velocity perturbations can explain the long wavelength behavior of such anomalous SmKS times. When improperly accounted for, lower mantle heterogeneity maps directly into core structure. Raypath departures from those predicted by homogeneous models play an important role in producing SmKS anomalies. The existence of outermost core heterogeneity is difficult to resolve at present due to uncertainties in global lower mantle structure. Resolving a one-dimensional chemically stratified outermost core also remains difficult due to the same uncertainties. Restricting study to higher multiples of SmKS (m=2,3,4) can help reduce the effect of mantle heterogeneity due to the closeness of the mantle legs of the wavepaths. SmKS waves are ideal in providing additional information on the details of lower mantle heterogeneity.
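For concreteness, a minimal Python sketch of a cross-correlation differential time measurement of the kind used for the SmKS pairs; the synthetic wavelet and the parabolic sub-sample refinement are illustrative, not the processing used in this study.

import numpy as np

def differential_time(trace_a, trace_b, dt):
    """Time (s) by which the signal in trace_a arrives later than in trace_b.

    Both traces are assumed to be windowed around the phases of interest
    and sampled at the same interval dt.  A parabolic fit around the
    cross-correlation peak gives sub-sample precision.
    """
    cc = np.correlate(trace_a, trace_b, mode="full")
    imax = int(np.argmax(cc))
    shift = imax - (len(trace_b) - 1)
    if 0 < imax < len(cc) - 1:                    # parabolic refinement
        y0, y1, y2 = cc[imax-1], cc[imax], cc[imax+1]
        shift += 0.5 * (y0 - y2) / (y0 - 2*y1 + y2)
    return shift * dt

# Toy check: a Ricker-like pulse and a copy delayed by 2.35 s.
dt = 0.05
t = np.arange(0.0, 60.0, dt)
def ricker(t0, width=4.0):
    arg = (np.pi * (t - t0) / width) ** 2
    return (1 - 2 * arg) * np.exp(-arg)

print(differential_time(ricker(22.35), ricker(20.0), dt))   # ~ 2.35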

Relevance: 20.00%

Abstract:

Recent observations of the temperature anisotropies of the cosmic microwave background (CMB) favor an inflationary paradigm in which the scale factor of the universe inflated by many orders of magnitude at some very early time. Such a scenario would produce the observed large-scale isotropy and homogeneity of the universe, as well as the scale-invariant perturbations responsible for the observed (10 parts per million) anisotropies in the CMB. An inflationary epoch is also theorized to produce a background of gravitational waves (or tensor perturbations), the effects of which can be observed in the polarization of the CMB. The E-mode (or parity-even) polarization of the CMB, which is produced by scalar perturbations, has now been measured with high significance. In contrast, the B-mode (or parity-odd) polarization, which is sourced by tensor perturbations, has yet to be observed. A detection of the B-mode polarization of the CMB would provide strong evidence for an inflationary epoch early in the universe’s history.

In this work, we explore experimental techniques and analysis methods used to probe the B-mode polarization of the CMB. These experimental techniques have been used to build the Bicep2 telescope, which was deployed to the South Pole in 2009. After three years of observations, Bicep2 has acquired one of the deepest observations of the degree-scale polarization of the CMB to date. Similarly, this work describes analysis methods developed for the Bicep1 three-year data analysis, which includes the full data set acquired by Bicep1. This analysis has produced the tightest constraint on the B-mode polarization of the CMB to date, corresponding to a tensor-to-scalar ratio estimate of r = 0.04±0.32, or a Bayesian 95% credible interval of r < 0.70. These analysis methods, in addition to producing this new constraint, are directly applicable to future analyses of Bicep2 data. Taken together, the experimental techniques and analysis methods described herein promise to open a new observational window into the inflationary epoch and the initial conditions of our universe.
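Purely as an arithmetic illustration of how a point estimate with an error bar relates to a one-sided bound (assuming a Gaussian likelihood in r truncated to r ≥ 0 with a flat prior; the published limit comes from the full likelihood analysis, so the numbers do not agree exactly), a short Python check:

from scipy import stats

r_hat, sigma = 0.04, 0.32                     # estimate and 1-sigma error quoted above
posterior = stats.truncnorm((0.0 - r_hat) / sigma, float("inf"),
                            loc=r_hat, scale=sigma)
print(posterior.ppf(0.95))                    # ~0.66, the same ballpark as r < 0.70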

Relevance: 20.00%

Abstract:

The sea urchin embryonic skeleton, or spicule, is deposited by mesenchymal progeny of four precursor cells, the micromeres, which are determined to the skeletogenic pathway by a process known as cytoplasmic localization. A gene encoding one of the major products of the skeletogenic mesenchyme, a prominent 50 kD protein of the spicule matrix, has been characterized in detail. cDNA clones were first isolated by antibody screening of a phage expression library, followed by isolation of homologous genomic clones. The gene, known as SM50, is single copy in the sea urchin genome, is divided into two exons of 213 and 1682 bp, and is expressed only in skeletogenic cells. Transcripts are first detectable at the 120 cell stage, shortly after the segregation of the skeletogenic precursors from the rest of the embryo. The SM50 open reading frame begins within the first exon, is 450 amino acids in length, and contains a loosely repeated 13 amino acid motif rich in acidic residues which accounts for 45% of the protein and which is possibly involved in interaction with the mineral phase of the spicule.

The important cis-acting regions of the SM50 gene necessary for proper regulation of expression were identified by gene transfer experiments. A 562 bp promoter fragment, containing 438 bp of 5' promoter sequence and 124 bp of the SM50 first exon (including the SM50 initiation codon), was both necessary and sufficient to direct high levels of expression of the bacterial chloramphenicol acetyltransferase (CAT) reporter gene specifically in the skeletogenic cells. Removal of promoter sequences between positions -2200 and -438, and of transcribed regions downstream of +124 (including the SM50 intron), had no effect on the spatial or transcriptional activity of the transgenes.

Regulatory proteins that interact with the SM50 promoter were identified by the gel retardation assay, using bulk embryo mesenchyme blastula stage nuclear proteins. Five protein binding sites were identified and mapped to various degrees of resolution. Two sites are homologous, may be enhancer elements, and at least one is required for expression. Two additional sites are also present in the promoter of the aboral ectoderm specific cytoskeletal actin gene CyIIIa; one of these is a CCAAT element, the other a putative repressor element. The fifth site overlaps the binding site of the putative repressor and may function as a positive regulator by interfering with binding of the repressor. All of the proteins are detectable in nuclear extracts prepared from 64 cell stage embryos, a stage just before expression of SM50 is initiated, as well as from blastula and gastrula stage; the putative enhancer binding protein may be maternal as well.

Relevance: 20.00%

Abstract:

Seismic reflection methods have been extensively used to probe the Earth's crust and suggest the nature of its formative processes. The analysis of multi-offset seismic reflection data extends the technique from a reconnaissance method to a powerful scientific tool that can be applied to test specific hypotheses. The treatment of reflections at multiple offsets becomes tractable if the assumptions of high-frequency rays are valid for the problem being considered. Their validity can be tested by applying the methods of analysis to full wave synthetics.

Three studies illustrate the application of these principles to investigations of the nature of the crust in southern California. A survey shot by the COCORP consortium in 1977 across the San Andreas fault near Parkfield revealed events in the record sections whose arrival time decreased with offset. The reflectors generating these events are imaged using a multi-offset three-dimensional Kirchhoff migration. Migrations of full wave acoustic synthetics having the same limitations in geometric coverage as the field survey demonstrate the utility of this back projection process for imaging. The migrated depth sections show the locations of the major physical boundaries of the San Andreas fault zone. The zone is bounded on the southwest by a near-vertical fault juxtaposing a Tertiary sedimentary section against uplifted crystalline rocks of the fault zone block. On the northeast, the fault zone is bounded by a fault dipping into the San Andreas, which includes slices of serpentinized ultramafics, intersecting it at 3 km depth. These interpretations can be made despite complications introduced by lateral heterogeneities.

In 1985 the Calcrust consortium designed a survey in the eastern Mojave desert to image structures in both the shallow and the deep crust. Preliminary field experiments showed that the major geophysical acquisition problem to be solved was the poor penetration of seismic energy through a low-velocity surface layer. Its effects could be mitigated through special acquisition and processing techniques. Data obtained from industry showed that quality data could be obtained from areas having a deeper, older sedimentary cover, causing a re-definition of the geologic objectives. Long offset stationary arrays were designed to provide reversed, wider angle coverage of the deep crust over parts of the survey. The preliminary field tests and constant monitoring of data quality and parameter adjustment allowed 108 km of excellent crustal data to be obtained.

This dataset, along with two others from the central and western Mojave, was used to constrain rock properties and the physical condition of the crust. The multi-offset analysis proceeded in two steps. First, an increase in reflection peak frequency with offset is indicative of a thinly layered reflector. The thickness and velocity contrast of the layering can be calculated from the spectral dispersion, to discriminate between structures resulting from broad scale or local effects. Second, the amplitude effects at different offsets of P-P scattering from weak elastic heterogeneities indicate whether the signs of the changes in density, rigidity, and Lamé's parameter at the reflector agree or are opposed. The effects of reflection generation and propagation in a heterogeneous, anisotropic crust were contained by the design of the experiment and the simplicity of the observed amplitude and frequency trends. Multi-offset spectra and amplitude trend stacks of the three Mojave Desert datasets suggest that the most reflective structures in the middle crust are strong Poisson's ratio (σ) contrasts. Porous zones or the juxtaposition of units of mutually distant origin are indicated. Heterogeneities in σ increase towards the top of a basal crustal zone at ~22 km depth. The transitions to the basal zone and to the mantle include increases in σ. The Moho itself includes ~400 m of layering having a velocity higher than that of the uppermost mantle. The Moho maintains the same configuration across the Mojave despite 5 km of crustal thinning near the Colorado River. This indicates that Miocene extension there either thinned just the basal zone, or that the basal zone developed regionally after the extensional event.

Relevance: 20.00%

Abstract:

This work is divided into two independent papers.

PAPER 1.

Spall velocities were measured for nine experimental impacts into San Marcos gabbro targets. Impact velocities ranged from 1 to 6.5 km/sec. Projectiles were iron, aluminum, lead, and basalt of varying sizes. The projectile masses ranged from a 4 g lead bullet to a 0.04 g aluminum sphere. The velocities of fragments were measured from high-speed films taken of the events. The maximum spall velocity observed was 30 m/sec, or 0.56 percent of the 5.4 km/sec impact velocity. The measured velocities were compared to the spall velocities predicted by the spallation model of Melosh (1984). The compatibility between the spallation model for large planetary impacts and the results of these small-scale experiments is considered in detail.

The targets were also bisected to observe the pattern of internal fractures. A series of fractures was observed whose locations coincided with the boundary between rock subjected to the peak shock compression and a theoretical "near surface zone" predicted by the spallation model. Thus, between this boundary and the free surface, the target material should receive reduced levels of compressive stress as compared to the more highly shocked region below.

PAPER 2.

Carbonate samples from the nuclear explosion crater, OAK, and a terrestrial impact crater, Meteor Crater, were analyzed for shock damage using electron paramagnetic resonance (EPR). The first series of samples for OAK Crater was obtained from six boreholes within the crater, and the second series consisted of ejecta samples recovered from the crater floor. The degree of shock damage in the carbonate material was assessed by comparing the sample spectra to spectra of Solenhofen limestone, which had been shocked to known pressures.

The results of the OAK borehole analysis have identified a thin zone of highly shocked carbonate material underneath the crater floor. This zone has a maximum depth of approximately 200 ft below sea floor at the ground zero borehole and decreases in depth towards the crater rim. A layer of highly shocked material is also found on the surface in the vicinity of the reference borehole, located outside the crater. This material could represent a fallout layer. The ejecta samples have experienced a range of shock pressures.

It was also demonstrated that the EPR technique is feasible for the study of terrestrial impact craters formed in carbonate bedrock. The results for the Meteor Crater analysis suggest a slight degree of shock damage present in the β member of the Kaibab Formation exposed in the crater walls.

Relevance: 20.00%

Abstract:

Sources and effects of astrophysical gravitational radiation are explained briefly to motivate discussion of the Caltech 40 meter antenna, which employs laser interferometry to monitor proper distances between inertial test masses. Practical considerations in construction of the apparatus are described. Redesign of test mass systems has resulted in a reduction of noise from internal mass vibrations by up to two orders of magnitude at some frequencies. A laser frequency stabilization system was developed which corrects the frequency of an argon ion laser to a residual fluctuation level bounded by the spectral density √s_v(f) ≤ 60 µHz/√Hz, at fluctuation frequencies near 1.2 kHz. These and other improvements have contributed to reducing the spectral density of equivalent gravitational wave strain noise to √s_h(f) ≈ 10^(-19)/√Hz at these frequencies.

Finally, observations made with the antenna in February and March of 1987 are described. Kilohertz-band gravitational waves produced by the remnant of the recent supernova are shown to be theoretically unlikely at the strength required for confident detection in this antenna (then operating at poorer sensitivity than that quoted above). A search for periodic waves in the recorded data, comprising Fourier analysis of four 105-second samples of the antenna strain signal, was used to place new upper limits on periodic gravitational radiation at frequencies between 305 Hz and 5 kHz. In particular, continuous waves of any polarization are ruled out above strain amplitudes of 1.2 x 10^(-18) R.M.S. for waves emanating from the direction of the supernova, and 6.2 x 10^(-19) R.M.S. for waves emanating from the galactic center, between 1.5 and 4 kilohertz. Between 305 Hz and 5 kHz no strains greater than 1.2 x 10^(-17) R.M.S. were detected from either direction. Limitations of the analysis and potential improvements are discussed, as are prospects for future searches.
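As a sketch of the core numerical step in such a periodic-wave search, the following Python fragment scans an FFT of a strain record for the strongest spectral line in a band and converts it to an equivalent r.m.s. sinusoid amplitude; the sampling rate, the synthetic data, and the omission of windowing, Doppler corrections, and statistical thresholds are illustrative simplifications, not the analysis performed on the 1987 data.

import numpy as np

def strongest_line(strain, fs, f_lo, f_hi):
    """Frequency and r.m.s. amplitude of the strongest spectral line in a band.

    For a sinusoid of peak amplitude A landing on an FFT bin, |X_k| ~ A*N/2,
    so A ~ 2*|X_k|/N and the r.m.s. amplitude is A/sqrt(2).
    """
    n = len(strain)
    spectrum = np.fft.rfft(strain)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    k = int(np.argmax(np.abs(spectrum[band])))
    amp_peak = 2.0 * np.abs(spectrum[band][k]) / n
    return float(freqs[band][k]), amp_peak / np.sqrt(2.0)

fs = 12000.0                                      # hypothetical sampling rate, Hz
t = np.arange(0.0, 10.0, 1.0 / fs)
strain = 1e-18 * np.sin(2 * np.pi * 1234.5 * t) + 1e-19 * np.random.randn(len(t))
print(strongest_line(strain, fs, 305.0, 5000.0))  # ~ (1234.5, 7e-19)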

Relevance: 20.00%

Abstract:

The negative impacts of ambient aerosol particles, or particulate matter (PM), on human health and climate are well recognized. However, owing to the complexity of aerosol particle formation and chemical evolution, emissions control strategies remain difficult to develop in a cost effective manner. In this work, three studies are presented to address several key issues currently stymieing California's efforts to continue improving its air quality.

Gas-phase organic mass (GPOM) and CO emission factors are used in conjunction with measured enhancements in oxygenated organic aerosol (OOA) relative to CO to quantify the significant lack of closure between expected and observed organic aerosol concentrations attributable to fossil-fuel emissions. Two possible conclusions emerge from the analysis to yield consistency with the ambient organic data: (1) vehicular emissions are not a dominant source of anthropogenic fossil SOA in the Los Angeles Basin, or (2) the ambient SOA mass yields used to determine the SOA formation potential of vehicular emissions are substantially higher than those derived from laboratory chamber studies. Additional laboratory chamber studies confirm that, owing to vapor-phase wall loss, the SOA mass yields currently used in virtually all 3D chemical transport models are biased low by as much as a factor of 4. Furthermore, predictions from the Statistical Oxidation Model suggest that this bias could be as high as a factor of 8 if the influence of the chamber walls could be removed entirely.
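The closure test itself is simple arithmetic: multiply the vehicular gas-phase organic emission per unit of co-emitted CO by an assumed SOA mass yield and compare with the measured OOA enhancement per unit CO. A Python sketch with purely hypothetical placeholder numbers (not the values used in the study):

def predicted_ooa_per_co(gpom_per_co, soa_mass_yield):
    """Predicted fossil SOA enhancement per unit CO enhancement.

    gpom_per_co    : gas-phase organic mass emitted per ppm of CO (ug m-3 ppm-1).
    soa_mass_yield : fraction of that mass assumed to convert to SOA.
    """
    return gpom_per_co * soa_mass_yield

predicted = predicted_ooa_per_co(gpom_per_co=80.0, soa_mass_yield=0.05)  # hypothetical
observed = 30.0    # hypothetical measured delta-OOA / delta-CO, ug m-3 ppm-1
print(f"predicted {predicted:.1f} vs observed {observed:.1f} ug m-3 ppm-1; "
      f"closing the gap requires yields ~{observed / predicted:.0f}x larger")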

Once vapor-phase wall loss has been accounted for in a new suite of laboratory chamber experiments, the SOA parameterizations within atmospheric chemical transport models should also be updated. To address the numerical challenges of implementing the next generation of SOA models in atmospheric chemical transport models, a novel mathematical framework, termed the Moment Method, is designed and presented. Assessment of the Moment Method's strengths and weaknesses provides valuable insight that can guide future development of SOA modules for atmospheric CTMs.

Finally, regional inorganic aerosol formation and evolution is investigated via detailed comparison of predictions from the Community Multiscale Air Quality (CMAQ version 4.7.1) model against a suite of airborne and ground-based meteorological measurements, gas- and aerosol-phase inorganic measurements, and black carbon (BC) measurements over Southern California during the CalNex field campaign in May/June 2010. Results suggest that continuing to target sulfur emissions in the hope of reducing ambient PM concentrations may not be the most effective strategy for Southern California. Instead, targeting dairy emissions is likely to be an effective strategy for substantially reducing ammonium nitrate concentrations in the eastern part of the Los Angeles Basin.

Relevance: 20.00%

Abstract:

This thesis describes studies surrounding a ligand-gated ion channel (LGIC): the serotonin type 3A receptor (5-HT3AR). Structure-function experiments using unnatural amino acid mutagenesis are described, as well as experiments on the methodology of unnatural amino acid mutagenesis. Chapter 1 introduces LGICs, experimental methods, and an overview of the unnatural amino acid mutagenesis.

In Chapter 2, the binding orientation of the clinically available drugs ondansetron and granisetron within 5-HT3A is determined through a combination of unnatural amino acid mutagenesis and an inhibition based assay. A cation-π interaction is found for both ondansetron and granisetron with a specific tryptophan residue (Trp183, TrpB) of the mouse 5-HT3AR, which establishes a binding orientation for these drugs.

In Chapter 3, further studies of ondansetron and granisetron with the 5-HT3AR were performed. The primary determinant of binding for these drugs was found not to include interactions with a specific tyrosine residue (Tyr234, TyrC2). In completing these studies, evidence supporting a cation-π interaction between a synthetic agonist, meta-chlorophenylbiguanide, and TyrC2 was found.

In Chapter 4, a direct chemical acylation strategy was implemented to prepare full-length suppressor tRNA mediated by lanthanum(III) and amino acid phosphate esters. The derived aminoacyl-tRNA is shown to be translationally competent in Xenopus oocytes.

Appendix A.1 gives details of a pharmacological method for determining the equilibrium dissociation constant, K_B, of a competitive antagonist with a receptor, known as Schild analysis. Appendix A.2 describes an examination of the inhibitory activity of new chemical analogs of the 5-HT3A antagonist ondansetron. Appendix A.3 reports an organic synthesis of an intermediate for a new unnatural amino acid. Appendix A.4 covers an additional methodological examination for the preparation of aminoacyl-tRNA.
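The Schild analysis mentioned in Appendix A.1 rests on the standard competitive-antagonism relation, dose ratio DR = 1 + [B]/K_B. A minimal Python illustration with hypothetical numbers (not data from this work):

def schild_kb(dose_ratio, antagonist_conc_nM):
    """Equilibrium dissociation constant from the Schild relation DR = 1 + [B]/K_B."""
    return antagonist_conc_nM / (dose_ratio - 1.0)

# Hypothetical example: 1 nM antagonist shifts the agonist EC50 11-fold.
print(schild_kb(dose_ratio=11.0, antagonist_conc_nM=1.0))   # K_B = 0.1 nM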

Relevance: 20.00%

Abstract:

The main focus of this thesis is the use of high-throughput sequencing technologies in functional genomics (in particular in the form of ChIP-seq, chromatin immunoprecipitation coupled with sequencing, and RNA-seq) and the study of the structure and regulation of transcriptomes. Some parts of it are of a more methodological nature while others describe the application of these functional genomic tools to address various biological problems. A significant part of the research presented here was conducted as part of the ENCODE (ENCyclopedia Of DNA Elements) Project.

The first part of the thesis focuses on the structure and diversity of the human transcriptome. Chapter 1 contains an analysis of the diversity of the human polyadenylated transcriptome based on RNA-seq data generated for the ENCODE Project. Chapter 2 presents a simulation-based examination of the performance of some of the most popular computational tools used to assemble and quantify transcriptomes. Chapter 3 includes a study of variation in gene expression, alternative splicing and allelic expression bias at the single-cell level and on a genome-wide scale in human lymphoblastoid cells; it also brings forward a number of methodological considerations critical to the practice of single-cell RNA-seq measurements.

The second part presents several studies applying functional genomic tools to the study of the regulatory biology of organellar genomes, primarily in mammals but also in plants. Chapter 5 contains an analysis of the occupancy of the human mitochondrial genome by TFAM, an important structural and regulatory protein in mitochondria, using ChIP-seq. In Chapter 6, the mitochondrial DNA occupancy of the TFB2M transcriptional regulator, the MTERF termination factor, and the mitochondrial RNA and DNA polymerases is characterized. Chapter 7 consists of an investigation into the curious phenomenon of the physical association of nuclear transcription factors with mitochondrial DNA, based on the diverse collections of transcription factor ChIP-seq datasets generated by the ENCODE, mouseENCODE and modENCODE consortia. In Chapter 8 this line of research is further extended to existing publicly available ChIP-seq datasets in plants and their mitochondrial and plastid genomes.

The third part is dedicated to the analytical and experimental practice of ChIP-seq. As part of the ENCODE Project, a set of metrics for assessing the quality of ChIP-seq experiments was developed, and the results of this activity are presented in Chapter 9. These metrics were later used to carry out a global analysis of ChIP-seq quality in the published literature (Chapter 10). In Chapter 11, the development and initial application of an automated robotic ChIP-seq pipeline (in which these metrics also played a major role) are presented.
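One of the simpler quality measures in this family is the fraction of reads falling within called peak regions (FRiP). A minimal Python sketch, assuming reads and non-overlapping peaks on a single chromosome are already available as coordinates; this is an illustration, not the ENCODE pipeline code.

import bisect

def frip(read_positions, peaks):
    """Fraction of reads whose position falls inside any peak interval.

    read_positions : list of read start coordinates.
    peaks          : list of (start, end) tuples, assumed non-overlapping
                     and on the same chromosome.
    """
    starts = sorted(s for s, _ in peaks)
    ends = {s: e for s, e in peaks}
    in_peak = 0
    for pos in read_positions:
        i = bisect.bisect_right(starts, pos) - 1      # rightmost peak starting <= pos
        if i >= 0 and pos < ends[starts[i]]:
            in_peak += 1
    return in_peak / len(read_positions) if read_positions else 0.0

reads = [100, 150, 480, 900, 1210, 2500]
peaks = [(90, 200), (1200, 1300)]
print(frip(reads, peaks))   # 3 of 6 reads fall in peaks -> 0.5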

The fourth part presents the results of some additional projects the author has been involved in, including the study of the role of the Piwi protein in the transcriptional regulation of transposon expression in Drosophila (Chapter 12), and the use of single-cell RNA-seq to characterize the heterogeneity of gene expression during cellular reprogramming (Chapter 13).

The last part of the thesis provides a review of the results of the ENCODE Project and the interpretation of the complexity of the biochemical activity exhibited by mammalian genomes that they have revealed (Chapters 15 and 16), an overview of the technical developments expected in the near future and their impact on the field of functional genomics (Chapter 14), and a discussion of some so far insufficiently explored research areas, the future study of which will, in the opinion of the author, provide deep insights into many fundamental but not yet completely answered questions about the transcriptional biology of eukaryotes and its regulation.

Relevance: 20.00%

Abstract:

The visual system is a remarkable platform that evolved to solve difficult computational problems such as detection, recognition, and classification of objects. Of great interest is the face-processing network, a sub-system buried deep in the temporal lobe and dedicated to analyzing a specific type of object: faces. In this thesis, I focus on the problem of face detection by the face-processing network. Insights obtained from years of developing computer-vision algorithms to solve this task have suggested that it may be solved efficiently and effectively by detection and integration of local contrast features. Does the brain use a similar strategy?

To answer this question, I embark on a journey that takes me through the development and optimization of dedicated tools for targeting and perturbing deep brain structures. Data collected using MR-guided electrophysiology in early face-processing regions were found to show strong selectivity for contrast features, similar to the ones used by artificial systems. While individual cells were tuned for only a small subset of features, the population as a whole encoded the full spectrum of features that are predictive of the presence of a face in an image. Together with additional evidence, my results suggest a possible computational mechanism for face detection in early face-processing regions.

To move from correlation to causation, I focus on adopting an emerging technology for perturbing brain activity using light: optogenetics. While this technique has the potential to overcome problems associated with the de facto standard for brain stimulation (electrical microstimulation), many open questions remain about its applicability and effectiveness for perturbing the non-human primate (NHP) brain. In a set of experiments, I use viral vectors to deliver genetically encoded optogenetic constructs to the frontal eye field and face-selective regions in NHP and examine their effects side-by-side with electrical microstimulation to assess their effectiveness in perturbing neural activity as well as behavior. Results suggest that cells are robustly and strongly modulated upon light delivery and that such perturbation can modulate and even initiate motor behavior, thus paving the way for future explorations that may apply these tools to study connectivity and information flow in the face-processing network.
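For context, here is a minimal Python sketch of the computer-vision idea referred to above, in the spirit of ratio-template contrast-feature detection: compare mean luminances of coarse face regions and count how many pairwise contrast polarities match a template. The region layout, template pairs, and threshold are illustrative assumptions, not the stimuli or features used in the experiments.

import numpy as np

# Coarse regions of a face-sized patch (row slice, col slice), illustrative only.
REGIONS = {
    "left_eye":  (slice(2, 5), slice(1, 4)),
    "right_eye": (slice(2, 5), slice(6, 9)),
    "forehead":  (slice(0, 2), slice(2, 8)),
    "nose":      (slice(4, 7), slice(4, 6)),
    "mouth":     (slice(7, 9), slice(3, 7)),
}

# Template of expected contrast polarities: (darker_region, brighter_region).
TEMPLATE = [
    ("left_eye", "forehead"), ("right_eye", "forehead"),
    ("left_eye", "nose"), ("right_eye", "nose"),
    ("mouth", "nose"),
]

def contrast_features(patch):
    """Signs of pairwise luminance differences between coarse regions."""
    means = {name: patch[sl].mean() for name, sl in REGIONS.items()}
    return [means[dark] < means[bright] for dark, bright in TEMPLATE]

def looks_like_face(patch, min_matches=4):
    """Declare a face when enough contrast polarities match the template."""
    return sum(contrast_features(patch)) >= min_matches

patch = np.full((10, 10), 180.0)        # bright skin-like background
patch[2:5, 1:4] = patch[2:5, 6:9] = 60  # dark eye regions
patch[7:9, 3:7] = 90                    # dark mouth
print(looks_like_face(patch))           # True for this cartoon "face"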