24 results for Exponential Sorting

in CaltechTHESIS


Relevance: 20.00%

Abstract:

This thesis presents the development of chip-based technology for informative in vitro cancer diagnostics. In the first part of this thesis, I present my contributions to the development of a technology called "Nucleic Acid Cell Sorting (NACS)", based on microarrays composed of nucleic acid encoded peptide major histocompatibility complexes (p/MHC), together with the experimental and theoretical methods to detect and analyze proteins secreted from single cells or small numbers of cells.

Secondly, a novel portable platform for imaging cellular metabolism with radio probes is presented. A microfluidic chip, the so-called "Radiopharmaceutical Imaging Chip" (RIMChip), combined with a beta-particle imaging camera, is developed to visualize the uptake of radio probes in small numbers of cells. Its design allows robust and user-friendly execution of sensitive and quantitative radio assays. The performance of this platform is validated with adherent and suspension cancer cell lines. The platform is then applied to study the metabolic response of cancer cells to drug treatment. In both mouse lymphoma and human glioblastoma cell lines, metabolic responses to drug exposure are observed within a short time (~1 hour) and correlate with cell-cycle arrest or with changes in receptor tyrosine kinase signaling.

The last parts of this thesis summarize ongoing projects: the development of a new agent as an in vivo imaging probe for c-MET, and quantitative monitoring of the glycolytic metabolism of primary glioblastoma cells. To develop a new agent for c-MET imaging, the one-bead-one-compound combinatorial library method is used, coupled with iterative screening. The performance of the agent is quantitatively validated with cell-based fluorescent assays. For the metabolic monitoring of primary glioblastoma cells by RIMChip, cells were sorted according to their oncoprotein expression levels, or treated with different drugs, to study the metabolic heterogeneity of cancer cells and the metabolic response of glioblastoma cells to drug treatment, respectively.

Relevance: 10.00%

Abstract:

The brain is perhaps the most complex system to have ever been subjected to rigorous scientific investigation. The scale is staggering: over 10^11 neurons, each making an average of 10^3 synapses, with computation occurring on scales ranging from a single dendritic spine, to an entire cortical area. Slowly, we are beginning to acquire experimental tools that can gather the massive amounts of data needed to characterize this system. However, to understand and interpret these data will also require substantial strides in inferential and statistical techniques. This dissertation attempts to meet this need, extending and applying the modern tools of latent variable modeling to problems in neural data analysis.

It is divided into two parts. The first begins with an exposition of the general techniques of latent variable modeling. A new, extremely general optimization algorithm, called Relaxation Expectation Maximization (REM), is proposed that may be used to learn the optimal parameter values of arbitrary latent variable models. This algorithm appears to alleviate the common problem of convergence to local, sub-optimal likelihood maxima. REM leads to a natural framework for model size selection; in combination with standard model selection techniques, the quality of fits may be further improved, while the appropriate model size is automatically and efficiently determined. Next, a new latent variable model, the mixture of sparse hidden Markov models, is introduced, and approximate inference and learning algorithms are derived for it. This model is applied in the second part of the thesis.
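
As a concrete baseline for the kind of latent variable learning that REM builds on, here is a minimal sketch of standard expectation maximization for a two-component Gaussian mixture (the data, initialization, and iteration count are illustrative assumptions; this is not the thesis's REM algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D data drawn from two Gaussians (the latent assignment is hidden).
x = np.concatenate([rng.normal(-2.0, 0.7, 400), rng.normal(1.5, 1.0, 600)])

# Initial parameter guesses: mixing weights, means, variances.
w = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])

for _ in range(200):
    # E-step: responsibility of each component for each data point.
    dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    r = w * dens
    r /= r.sum(axis=1, keepdims=True)

    # M-step: re-estimate parameters from the weighted sufficient statistics.
    n_k = r.sum(axis=0)
    w = n_k / x.size
    mu = (r * x[:, None]).sum(axis=0) / n_k
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k

print("weights:", w.round(3), "means:", mu.round(3), "variances:", var.round(3))
```

REM's contribution, per the text above, is to avoid the local maxima that plague exactly this kind of fixed-point iteration.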

The second part brings the technology of part I to bear on two important problems in experimental neuroscience. The first is known as spike sorting; this is the problem of separating the spikes from different neurons embedded within an extracellular recording. The dissertation offers the first thorough statistical analysis of this problem, which then yields the first powerful probabilistic solution. The second problem addressed is that of characterizing the distribution of spike trains recorded from the same neuron under identical experimental conditions. A latent variable model is proposed. Inference and learning in this model leads to new principled algorithms for smoothing and clustering of spike data.

Relevance: 10.00%

Abstract:

We consider the following singularly perturbed linear two-point boundary-value problem:

Ly(x) ≡ Ω(ε) D_x y(x) − A(x,ε) y(x) = f(x,ε),   0 ≤ x ≤ 1,   (1a)

By ≡ L(ε) y(0) + R(ε) y(1) = g(ε).   (1b)

Here Ω(ε) is a diagonal matrix whose first m diagonal elements are 1 and whose last m diagonal elements are ε. Aside from reasonable continuity conditions placed on A, L, R, f, and g, we assume the lower right m×m principal submatrix of A has no eigenvalues whose real part is zero. Under these assumptions a constructive technique is used to derive sufficient conditions for the existence of a unique solution of (1). These sufficient conditions are used to define when (1) is a regular problem. It is then shown that as ε → 0⁺ the solution of a regular problem exists and converges on every closed subinterval of (0,1) to a solution of the reduced problem. The reduced problem consists of the differential equation obtained by formally setting ε equal to zero in (1a), together with initial conditions obtained from the boundary conditions (1b). Several examples of regular problems are also considered.

A similar technique is used to derive the properties of the solution of a particular difference scheme used to approximate (1). Under restrictions on the boundary conditions (1b), it is shown that for stepsizes much larger than ε the solution of the difference scheme, when applied to a regular problem, accurately represents the solution of the reduced problem.
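
As a numerical illustration of this convergence (not the thesis's constructive technique or difference scheme), the sketch below solves a toy 2×2 instance of (1) with scipy's collocation solver and compares the interior solution against the reduced problem; the coefficients, boundary conditions, and value of ε are assumptions chosen so the fast component has a boundary layer at x = 0:

```python
import numpy as np
from scipy.integrate import solve_bvp

eps = 1e-2  # singular perturbation parameter (smaller values sharpen the layer)

def rhs(x, y):
    # Slow component:       y0' = -y0 + y1
    # Fast component: eps * y1' = -y1 + x   (lower-right block of A is -1, Re != 0)
    return np.vstack([-y[0] + y[1], (-y[1] + x) / eps])

def bc(ya, yb):
    # Boundary conditions: y0(1) = 0 (slow), y1(0) = 2 (fast; layer at x = 0).
    return np.array([yb[0], ya[1] - 2.0])

x = np.linspace(0.0, 1.0, 400)
sol = solve_bvp(rhs, bc, x, np.zeros((2, x.size)), max_nodes=100000, tol=1e-6)

# Reduced problem (set eps = 0): y1 = x, then y0' = -y0 + x with y0(1) = 0,
# which gives y0(x) = x - 1 exactly.
for xi in (0.25, 0.5, 0.75):
    y0, y1 = sol.sol(xi)
    print(f"x={xi}: full=({y0:+.4f}, {y1:+.4f})  reduced=({xi - 1:+.4f}, {xi:+.4f})")
```

On interior points the full and reduced solutions agree to O(ε), consistent with the convergence statement above.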

Furthermore, the existence of a similarity transformation which block-diagonalizes a matrix is established, as are exponential bounds on certain fundamental solution matrices associated with problem (1).

Relevance: 10.00%

Abstract:

In the first part of this thesis a study of the effect of the longitudinal distribution of optical intensity and electron density on the static and dynamic behavior of semiconductor lasers is performed. A static model for above threshold operation of a single mode laser, consisting of multiple active and passive sections, is developed by calculating the longitudinal optical intensity distribution and electron density distribution in a self-consistent manner. Feedback from an index and gain Bragg grating is included, as well as feedback from discrete reflections at interfaces and facets. Longitudinal spatial holeburning is analyzed by including the dependence of the gain and the refractive index on the electron density. The mechanisms of spatial holeburning in quarter wave shifted DFB lasers are analyzed. A new laser structure with a uniform optical intensity distribution is introduced and an implementation is simulated, resulting in a large reduction of the longitudinal spatial holeburning effect.

A dynamic small-signal model is then developed by including the optical intensity and electron density distribution, as well as the dependence of the grating coupling coefficients on the electron density. Expressions are derived for the intensity and frequency noise spectrum, the spontaneous emission rate into the lasing mode, the linewidth enhancement factor, and the AM and FM modulation response. Different chirp components are identified in the FM response, and a new adiabatic chirp component is discovered. This new adiabatic chirp component is caused by the nonuniform longitudinal distributions, and is found to dominate at low frequencies. Distributed feedback lasers with partial gain coupling are analyzed, and it is shown how the dependence of the grating coupling coefficients on the electron density can result in an enhancement of the differential gain with an associated enhancement in modulation bandwidth and a reduction in chirp.
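
For orientation, the familiar single-mode rate-equation result gives the small-signal AM response as H(f) = f_r² / (f_r² − f² + j·f·γ/2π), with relaxation-oscillation frequency f_r and damping rate γ; the thesis's model generalizes this baseline by including the longitudinal distributions. A minimal sketch with assumed parameter values:

```python
import numpy as np

f_r = 5e9      # relaxation-oscillation frequency, Hz (assumed)
gamma = 2e10   # damping rate, 1/s (assumed)

f = np.linspace(1e8, 2e10, 4000)
H = f_r**2 / (f_r**2 - f**2 + 1j * f * gamma / (2 * np.pi))
resp_db = 20 * np.log10(np.abs(H))

# Peak (relaxation-oscillation resonance) and -3 dB modulation bandwidth.
f_3db = f[np.nonzero(resp_db < -3.0)[0][0]]
print(f"peak response {resp_db.max():+.2f} dB, f_3dB ~ {f_3db / 1e9:.2f} GHz")
```

The enhancement of differential gain described above raises f_r and hence this bandwidth.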

In the second part, the spectral characteristics of a passively mode-locked two-section multiple quantum well laser coupled to an external cavity are studied. Broad-band wavelength tuning using an external grating is demonstrated for the first time in passively mode-locked semiconductor lasers. A record tuning range of 26 nm is measured, with pulse widths of typically a few picoseconds and time-bandwidth products of more than 10 times the transform limit. It is then demonstrated that these large time-bandwidth products are due to a strong linear upchirp, by performing pulse compression by a factor of 15 to record pulse widths as low as 320 fs.

A model for pulse propagation through a saturable medium with self-phase-modulation, due to the α-parameter, is developed for quantum well material, including the frequency dependence of the gain medium. This model is used to simulate two-section devices coupled to an external cavity. When no self-phase-modulation is present, it is found that the pulses are asymmetric with a sharper rising edge, that the pulse tails have an exponential behavior, and that the transform limit is 0.3. Inclusion of self-phase-modulation results in a linear upchirp imprinted on the pulse after each round-trip. This linear upchirp is due to a combination of self-phase-modulation in a gain section and absorption of the leading edge of the pulse in the saturable absorber.

Relevance: 10.00%

Abstract:

Computer science and electrical engineering have been the great success story of the twentieth century. The neat modularity and mapping of a language onto circuits has led to robots on Mars, desktop computers and smartphones. But these devices are not yet able to do some of the things that life takes for granted: repair a scratch, reproduce, regenerate, or grow exponentially fast, all while remaining functional.

This thesis explores and develops algorithms, molecular implementations, and theoretical proofs in the context of “active self-assembly” of molecular systems. The long-term vision of active self-assembly is the theoretical and physical implementation of materials that are composed of reconfigurable units with the programmability and adaptability of biology’s numerous molecular machines. En route to this goal, we must first find a way to overcome the memory limitations of molecular systems, and to discover the limits of complexity that can be achieved with individual molecules.

One of the main thrusts in molecular programming is to use computer science as a tool for figuring out what can be achieved. While molecular systems that are Turing-complete have been demonstrated [Winfree, 1996], these systems still cannot achieve some of the feats biology has achieved.

One might think that because a system is Turing-complete, capable of computing “anything,” that it can do any arbitrary task. But while it can simulate any digital computational problem, there are many behaviors that are not “computations” in a classical sense, and cannot be directly implemented. Examples include exponential growth and molecular motion relative to a surface.

Passive self-assembly systems cannot implement these behaviors because (a) molecular motion relative to a surface requires a source of fuel that is external to the system, and (b) passive systems are too slow to assemble exponentially-fast-growing structures. We call these behaviors “energetically incomplete” programmable behaviors. This class of behaviors includes any behavior where a passive physical system simply does not have enough physical energy to perform the specified tasks in the requisite amount of time.

As we will demonstrate and prove, a sufficiently expressive implementation of an "active" molecular self-assembly approach can achieve these behaviors. Using an external source of fuel solves part of the problem, so the system is not "energetically incomplete." But the programmable system also needs to have sufficient expressive power to achieve the specified behaviors. Perhaps surprisingly, some of these systems do not even require Turing completeness to be sufficiently expressive.

Building on a large variety of work by other scientists in the fields of DNA nanotechnology, chemistry and reconfigurable robotics, this thesis introduces several research contributions in the context of active self-assembly.

We show that simple primitives such as insertion and deletion are able to generate complex and interesting results, such as the growth of a linear polymer in logarithmic time and the ability of a linear polymer to treadmill. To this end we developed a formal model for active self-assembly that is directly implementable with DNA molecules. We show that this model is computationally equivalent to a machine capable of producing languages stronger than regular languages and, at most, as strong as context-free grammars. This is a great advance in the theory of active self-assembly, as prior models were either entirely theoretical or only implementable in the context of macro-scale robotics.

We developed a chain reaction method for the autonomous exponential growth of a linear DNA polymer. Our method is based on the insertion of molecules into the assembly, which generates two new insertion sites for every initial one employed. The building of a line in logarithmic time is a first step toward building a shape in logarithmic time. We demonstrate the first construction of a synthetic linear polymer that grows exponentially fast via insertion, and show by spectrofluorimetry and gel electrophoresis experiments that monomer molecules are converted into the polymer in logarithmic time. We also demonstrate the division of these polymers via the addition of a single DNA complex that competes with the insertion mechanism. This shows the growth of a population of polymers in logarithmic time. We characterize the DNA insertion mechanism that we utilize in Chapter 4, and experimentally demonstrate that we can control the kinetics of this reaction over at least seven orders of magnitude by programming the sequences of DNA that initiate the reaction.
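
The logarithmic-time growth claim can be seen in a toy bookkeeping model: if every insertion event consumes one active site and exposes two new ones, the number of sites, and hence the polymer length, roughly doubles each round. A sketch (the seed size and deterministic doubling are idealizing assumptions; the real kinetics are stochastic):

```python
# Toy bookkeeping model of insertion-driven growth (idealized and deterministic):
# every insertion consumes one active site and exposes two new ones.
length = 1      # monomers in the polymer (assumed seed size)
sites = 1       # active insertion sites
rounds = 0
target = 1_000_000

while length < target:
    length += sites   # one monomer inserted at every active site this round
    sites *= 2        # each insertion creates two new insertion sites
    rounds += 1

print(f"reached length {length} in {rounds} rounds (~log2({target}) = {target.bit_length()})")
```

Reaching length n thus takes O(log n) rounds, in contrast to the linear time of end-on (passive) monomer addition.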

In addition, we review co-authored work on programming molecular robots using prescriptive landscapes of DNA origami; this was the first microscopic demonstration of programming a molecular robot to walk on a 2-dimensional surface. We developed a snapshot method for imaging these random walking molecular robots and a CAPTCHA-like analysis method for difficult-to-interpret imaging data.

Relevance: 10.00%

Abstract:

Despite the complexity of biological networks, we find that certain common architectures govern network structure. These architectures impose fundamental constraints on system performance and create tradeoffs that the system must balance in the face of environmental uncertainty. This means that while a system may be optimized for a specific function through evolution, the optimal achievable state must respect these constraints. One such constraining architecture is autocatalysis, seen in many biological networks including glycolysis and ribosomal protein synthesis. Using a minimal model, we show that ATP autocatalysis in glycolysis imposes stability and performance constraints, and that the experimentally well-studied glycolytic oscillations are in fact a consequence of a tradeoff between error minimization and stability. We also show that additional complexity in the network results in increased robustness. Ribosome synthesis is also autocatalytic: ribosomes must be used to make more ribosomal proteins, and the higher the protein content of the ribosome, the stronger the autocatalysis. We show that this autocatalysis destabilizes the system, slows its response, and constrains its performance. On a larger scale, the transcriptional regulation of whole organisms also follows architectural constraints, visible in the differences between bacterial and yeast transcription networks. We show that the degree distribution of the bacterial transcription network follows a power law, while that of the yeast network follows an exponential distribution. We then explore previously proposed evolutionary models and show that neither the preferential linking model nor the duplication-divergence model of network evolution generates the power-law, hierarchical structure found in bacteria. In real biological systems, however, new nodes arise through both duplication and horizontal gene transfer, and we show that a biologically reasonable combination of the two mechanisms generates the observed network structure.
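
To illustrate the power-law versus exponential distinction, the sketch below fits straight lines to a degree sequence's survival function in log-log coordinates (power law) and semi-log coordinates (exponential); the degree sequences themselves are synthetic stand-ins, not the transcription-network data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed stand-in degree sequences: a heavy-tailed one ("bacteria-like") and a
# light-tailed one ("yeast-like"); real data would come from the networks.
deg_bact = rng.zipf(2.1, 3000)
deg_yeast = rng.geometric(0.25, 3000)

def tail_fits(deg):
    """R^2 of straight-line fits to the log survival function in log-log
    coordinates (power law) and semi-log coordinates (exponential)."""
    k = np.unique(deg).astype(float)
    surv = np.array([(deg >= kk).mean() for kk in k])
    r2 = {}
    for name, xs in (("power-law", np.log(k)), ("exponential", k)):
        A = np.vstack([xs, np.ones_like(xs)]).T
        _, res, *_ = np.linalg.lstsq(A, np.log(surv), rcond=None)
        ss_tot = np.sum((np.log(surv) - np.log(surv).mean()) ** 2)
        r2[name] = round(1 - res[0] / ss_tot, 3) if res.size else 1.0
    return r2

print("bacteria-like:", tail_fits(deg_bact))
print("yeast-like   :", tail_fits(deg_yeast))
```

The heavy-tailed sequence fits the log-log line markedly better, the light-tailed one the semi-log line, mirroring the bacteria/yeast contrast described above.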

Relevance: 10.00%

Abstract:

Inspired by key experimental and analytical results regarding Shape Memory Alloys (SMAs), we propose a modelling framework to explore the interplay between martensitic phase transformations and plastic slip in polycrystalline materials, with an eye towards computational efficiency. The resulting framework uses a convexified potential for the internal energy density to capture the stored energy associated with transformation at the meso-scale, and introduces kinetic potentials to govern the evolution of transformation and plastic slip. The framework is novel in the way it treats plasticity on par with transformation.

We implement the framework in the setting of anti-plane shear, using a staggered implicit/explicit update: we first use a Fast Fourier Transform (FFT) solver based on an Augmented Lagrangian formulation to implicitly solve for the full-field displacements of a simulated polycrystal, then explicitly update the volume fraction of martensite and the plastic slip using their respective stick-slip type kinetic laws. We observe that, even in this simple setting with an idealized material comprising four martensitic variants and four slip systems, the model recovers a rich variety of SMA-type behaviors. We use this model to gain insight into the isothermal behavior of stress-stabilized martensite, examining the effects of the relative plastic yield strength, the memory of deformation history under non-proportional loading, and several other factors.
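
A zero-dimensional caricature of the stick-slip kinetic law (decoupled from the FFT elasticity solve, with all material constants assumed) shows how a threshold on the driving force produces hysteresis in the martensite volume fraction over a strain cycle:

```python
import numpy as np

# All constants assumed for illustration; the thesis couples this law to a
# full-field FFT elasticity solve rather than a single material point.
G, eT = 10.0, 0.05    # shear modulus, transformation strain
fc, eta = 0.01, 0.05  # critical driving force, kinetic viscosity
dt = 1e-3

t = np.arange(0.0, 4.0, dt)
strain = 0.08 * np.sin(2 * np.pi * 0.5 * t)  # prescribed slow strain cycle
lam = 0.0                                    # martensite volume fraction
stress_hist = []

for e in strain:
    stress = G * (e - lam * eT)
    f = stress * eT              # driving force conjugate to lam (chemical term omitted)
    # Stick-slip: lam is frozen until |f| exceeds fc, then flows viscously.
    if f > fc:
        lam += dt * (f - fc) / eta
    elif f < -fc:
        lam += dt * (f + fc) / eta
    lam = min(max(lam, 0.0), 1.0)  # volume fraction constrained to [0, 1]
    stress_hist.append(stress)

print(f"stress range [{min(stress_hist):+.3f}, {max(stress_hist):+.3f}], final lam = {lam:.3f}")
```

Because lam evolves only above the threshold, the loading and unloading branches differ, the flag-shaped hysteresis characteristic of SMA behavior.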

We extend the framework to the generalized 3-D setting, for which the convexified potential is a lower bound on the actual internal energy, and show that the fully implicit discrete-time formulation of the framework is governed by a variational principle for mechanical equilibrium. We further propose an extension of the method to finite deformations via an exponential mapping. We implement the generalized framework using an existing Optimal Transport Mesh-free (OTM) solver. We then model the α-γ and α-ε transformations in pure iron, with an initial attempt in the latter to account for twinning in the parent phase. We demonstrate the scalability of the framework to large-scale computing by simulating Taylor impact experiments, observing nearly linear (ideal) speed-up through 256 MPI tasks. Finally, we present preliminary results of a simulated Split-Hopkinson Pressure Bar (SHPB) experiment using the α-ε model.

Relevance: 10.00%

Abstract:

This thesis is motivated by safety-critical applications involving autonomous air, ground, and space vehicles carrying out complex tasks in uncertain and adversarial environments. We use temporal logic as a language to formally specify complex tasks and system properties. Temporal logic specifications generalize the classical notions of stability and reachability that are studied in the control and hybrid systems communities. Given a system model and a formal task specification, the goal is to automatically synthesize a control policy for the system that ensures that the system satisfies the specification. This thesis presents novel control policy synthesis algorithms for optimal and robust control of dynamical systems with temporal logic specifications. Furthermore, it introduces algorithms that are efficient and extend to high-dimensional dynamical systems.

The first contribution of this thesis is the generalization of a classical linear temporal logic (LTL) control synthesis approach to optimal and robust control. We show how we can extend automata-based synthesis techniques for discrete abstractions of dynamical systems to create optimal and robust controllers that are guaranteed to satisfy an LTL specification. Such optimal and robust controllers can be computed at little extra computational cost compared to computing a feasible controller.

The second contribution of this thesis addresses the scalability of control synthesis with LTL specifications. A major limitation of the standard automaton-based approach for control with LTL specifications is that the automaton might be doubly-exponential in the size of the LTL specification. We introduce a fragment of LTL for which one can compute feasible control policies in time polynomial in the size of the system and specification. Additionally, we show how to compute optimal control policies for a variety of cost functions, and identify interesting cases when this can be done in polynomial time. These techniques are particularly relevant for online control, as one can guarantee that a feasible solution can be found quickly, and then iteratively improve on the quality as time permits.

The final contribution of this thesis is a set of algorithms for computing feasible trajectories for high-dimensional, nonlinear systems with LTL specifications. These algorithms avoid a potentially computationally-expensive process of computing a discrete abstraction, and instead compute directly on the system's continuous state space. The first method uses an automaton representing the specification to directly encode a series of constrained-reachability subproblems, which can be solved in a modular fashion by using standard techniques. The second method encodes an LTL formula as mixed-integer linear programming constraints on the dynamical system. We demonstrate these approaches with numerical experiments on temporal logic motion planning problems with high-dimensional (10+ states) continuous systems.
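
As a minimal illustration of the second method, the sketch below encodes the simplest temporal operator, "eventually reach a region", as mixed-integer constraints on a one-dimensional integrator, using big-M constraints and the PuLP modeling library; the dynamics, horizon, and region are assumed toy values, not an example from the thesis:

```python
import pulp

# Hypothetical toy instance: 1-D integrator x[t+1] = x[t] + u[t] with |u| <= 1,
# specification "eventually x in [8, 10]" over a 12-step horizon, encoded with
# one binary variable per step plus big-M constraints.
T, M = 12, 100.0
prob = pulp.LpProblem("ltl_eventually_reach", pulp.LpMinimize)

x = [pulp.LpVariable(f"x{t}", -20, 20) for t in range(T + 1)]
u = [pulp.LpVariable(f"u{t}", -1, 1) for t in range(T)]
au = [pulp.LpVariable(f"au{t}", 0) for t in range(T)]          # |u[t]| proxies
z = [pulp.LpVariable(f"z{t}", cat="Binary") for t in range(T + 1)]

prob += pulp.lpSum(au)                   # objective: minimize total control effort
prob += x[0] == 0                        # initial condition
for t in range(T):
    prob += x[t + 1] == x[t] + u[t]      # dynamics
    prob += au[t] >= u[t]
    prob += au[t] >= -u[t]
for t in range(T + 1):
    # If z[t] = 1, x[t] is forced into [8, 10]; otherwise both constraints are slack.
    prob += x[t] >= 8 - M * (1 - z[t])
    prob += x[t] <= 10 + M * (1 - z[t])
prob += pulp.lpSum(z) >= 1               # "eventually": at least one step satisfies it

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print("status:", pulp.LpStatus[prob.status])
print("x:", [round(pulp.value(xi), 1) for xi in x])
```

Richer LTL formulas compound such binaries with conjunction, disjunction, and until-style orderings, which is where the encoding's size and the solver's effort grow.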

Relevance: 10.00%

Abstract:

Today our understanding of the vibrational thermodynamics of materials at low temperatures is emerging nicely, based on the harmonic model in which phonons are independent. At high temperatures, however, this understanding must accommodate how phonons interact with other phonons or with other excitations. We shall see that phonon-phonon interactions give rise to interesting coupling problems and essentially modify the equilibrium and non-equilibrium properties of materials, e.g., the thermodynamic stability, heat capacity, optical properties and thermal transport of materials. Despite its great importance, anharmonic lattice dynamics remains poorly understood, and most studies of lattice dynamics still rely on the harmonic or quasiharmonic models. There have been very few studies of pure phonon anharmonicity and phonon-phonon interactions. The work presented in this thesis is devoted to the development of experimental and computational methods on this subject.

Modern inelastic scattering techniques with neutrons or photons are ideal for sorting out the anharmonic contribution. Analysis of the experimental data yields the vibrational spectra of the materials, i.e., their phonon densities of states or phonon dispersion relations. We obtained high quality data from a laser Raman spectrometer, a Fourier transform infrared spectrometer, and an inelastic neutron spectrometer. From accurate phonon spectra, we obtained the energy shifts and lifetime broadenings of the interacting phonons, and the vibrational entropies of different materials. Understanding these quantities then relies on the development of fundamental theory and computational methods.

We developed an efficient post-processor for analyzing anharmonic vibrations from molecular dynamics (MD) calculations. Currently, most first-principles methods are not capable of dealing with strong anharmonicity, because the interactions of phonons are ignored at finite temperatures. Our method adopts the Fourier-transformed velocity autocorrelation method to handle the large volumes of time-dependent atomic velocity data from MD calculations, and efficiently reconstructs the phonon DOS and phonon dispersion relations. Our calculations reproduce the phonon frequency shifts and lifetime broadenings very well at various temperatures.
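
The core of such a post-processor, the Fourier-transformed velocity autocorrelation, can be sketched in a few lines of numpy; the synthetic two-oscillator "trajectory" used for the self-test is an assumption for illustration, not MD data:

```python
import numpy as np

def phonon_dos(vel, dt):
    """Vibrational DOS from MD velocities via the Fourier transform of the
    velocity autocorrelation function (Wiener-Khinchin, zero-padded FFT).

    vel: array of shape (n_steps, n_atoms, 3); dt: MD timestep in seconds.
    """
    n = vel.shape[0]
    v = vel.reshape(n, -1)                          # flatten atom/Cartesian axes
    V = np.fft.rfft(v, n=2 * n, axis=0)             # zero-pad to avoid wrap-around
    acf = np.fft.irfft(V * V.conj(), axis=0)[:n].sum(axis=1)
    acf /= acf[0]                                   # normalize VACF(0) = 1
    return np.fft.rfftfreq(n, d=dt), np.abs(np.fft.rfft(acf))

# Self-test on a synthetic "trajectory": two harmonic modes at 5 and 15 THz.
dt, n = 1e-15, 8192
t = np.arange(n) * dt
vel = np.zeros((n, 2, 3))
vel[:, 0, 0] = np.cos(2 * np.pi * 5e12 * t)
vel[:, 1, 0] = np.cos(2 * np.pi * 15e12 * t)
freq, dos = phonon_dos(vel, dt)
print("peaks near (THz):", np.round(freq[dos > 0.5 * dos.max()] / 1e12, 1))
```

In an anharmonic MD trajectory the peaks shift and broaden with temperature, which is exactly the frequency-shift and lifetime information discussed above.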

To understand non-harmonic interactions in a microscopic way, we have developed a numerical fitting method to analyze the decay channels of phonon-phonon interactions. Based on the quantum perturbation theory of many-body interactions, this method is used to calculate the three-phonon and four-phonon kinematics subject to the conservation of energy and momentum, taking into account the weight of phonon couplings. With the calculated two-phonon DOS, we can assess the strengths of phonon-phonon interactions of different channels and anharmonic orders. This method, with its high computational efficiency, is a promising route to advancing our understanding of non-harmonic lattice dynamics and thermal transport properties.

These experimental techniques and theoretical methods have been successfully applied to the study of anharmonic behaviors of metal oxides, including rutile and cuprite structures, and are discussed in detail in Chapters 4 to 6. For example, for rutile titanium dioxide (TiO2), we found that the anomalous anharmonic behavior of the B1g mode can be explained by the volume effects on quasiharmonic force constants, and by explicit cubic and quartic anharmonicity. For rutile tin dioxide (SnO2), the broadening of the B2g mode with temperature showed an unusual concave-downwards curvature. This curvature was caused by a change with temperature in the number of down-conversion decay channels, originating with the wide band gap in the phonon dispersions. For silver oxide (Ag2O), strong anharmonic effects were found both for the phonons and for the negative thermal expansion.

Relevance: 10.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. Now there are a plethora of models, based on different assumptions, applicable in differing contextual settings, and selecting the right model to use tends to be an ad-hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioral theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedup over other methods.

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
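
The contrast between these discount families, including the preference reversal that signals temporal choice inconsistency, can be made concrete with a small sketch (all parameter values assumed):

```python
import numpy as np

# The discount families compared in the experiments (parameter values assumed).
def exponential(t, delta=0.95):
    return delta ** t

def hyperbolic(t, k=0.25):
    return 1.0 / (1.0 + k * t)

def quasi_hyperbolic(t, beta=0.7, delta=0.99):          # "present bias"
    return np.where(t == 0, 1.0, beta * delta ** t)

def generalized_hyperbolic(t, a=1.0, b=2.0):
    return (1.0 + a * t) ** (-b / a)

# $100 at t0 versus $110 at t1, offered now and again shifted 30 days out.
# A switch between the two columns signals temporal choice inconsistency.
for name, D in [("exponential", exponential), ("hyperbolic", hyperbolic),
                ("quasi-hyperbolic", quasi_hyperbolic),
                ("generalized-hyperbolic", generalized_hyperbolic)]:
    prefs = [bool(100 * D(t0) > 110 * D(t1)) for t0, t1 in [(0, 1), (30, 31)]]
    print(f"{name:22s} smaller-sooner chosen: now={prefs[0]}, +30 days={prefs[1]}")
```

Only the exponential discounter chooses consistently across the two framings; the hyperbolic-family models reverse, which is the inconsistency referred to above.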

We also test the predictions of behavioral theories in the "wild". We pay attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in a way distinctly different from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.

In future work, BROAD can be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance: 10.00%

Abstract:

We know from the CMB and observations of large-scale structure that the universe is extremely flat, homogeneous, and isotropic. The currently favored mechanism for generating these characteristics is inflation, a theorized period of exponential expansion of the universe that occurred shortly after the Big Bang. Most theories of inflation generically predict a background of stochastic gravitational waves. These gravitational waves should leave their unique imprint on the polarization of the CMB via Thomson scattering. Scalar perturbations of the metric will cause a pattern of polarization with no curl (E-mode). Tensor perturbations (gravitational waves) will cause a unique pattern of polarization on the CMB that includes a curl component (B-mode). A measurement of the ratio of the tensor to scalar perturbations (r) tells us the energy scale of inflation. Recent measurements by the BICEP2 team detect the B-mode spectrum with a tensor-to-scalar ratio of r = 0.20 (+0.07, −0.05). An independent confirmation of this result is the next step towards understanding the inflationary universe.

This thesis describes my work on a balloon-borne polarimeter called SPIDER, which is designed to illuminate the physics of the early universe through measurements of the cosmic microwave background polarization. SPIDER consists of six single-frequency, on-axis refracting telescopes contained in a shared-vacuum liquid-helium cryostat. Its large format arrays of millimeter-wave detectors and tight control of systematics will give it unprecedented sensitivity. This thesis describes how the SPIDER detectors are characterized and calibrated for flight, as well as how the systematics requirements for the SPIDER system are simulated and measured.

Relevance: 10.00%

Abstract:

Detailed pulsed neutron measurements have been performed in graphite assemblies ranging in size from 30.48 cm x 38.10 cm x 38.10 cm to 91.44 cm x 66.67 cm x 66.67 cm. The results of the measurements have been compared with modeled theoretical computations.

In the first set of experiments, we measured the effective decay constant of the neutron population in ten graphite stacks as a function of time after the source burst. We found the decay to be non-exponential in the six smallest assemblies, while in the three larger assemblies the decay was exponential over a significant portion of the total measuring interval. The decay in the largest stack was exponential over the entire ten-millisecond measuring interval. The non-exponential decay mode occurred when the effective decay constant exceeded 1600 sec⁻¹.

In a second set of experiments, we measured the spatial dependence of the neutron population in four graphite stacks as a function of time after the source pulse. By performing a harmonic analysis of the spatial shape of the neutron distribution, we were able to compute the effective decay constants of the first two spatial modes. In addition, we were able to compute the time-dependent effective wave number of the neutron distribution in the stacks.
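
The harmonic analysis step can be sketched as projection of each time slice onto the sine modes of the stack, followed by a log-linear fit of each mode amplitude; the flux data below are a synthetic stand-in with assumed decay constants, not the measurements:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for the measured flux: first two sine modes of a slab of
# extrapolated width a, each decaying exponentially (decay constants assumed).
a = 40.0                            # cm
x = np.linspace(0.0, a, 41)
t = np.linspace(0.0, 10e-3, 50)     # 10 ms measuring interval
lam1, lam2 = 900.0, 2500.0          # mode decay constants, 1/s
flux = (np.outer(np.exp(-lam1 * t), np.sin(np.pi * x / a))
        + 0.3 * np.outer(np.exp(-lam2 * t), np.sin(2 * np.pi * x / a)))
flux += rng.normal(0.0, 1e-4, flux.shape)   # detector noise

# Harmonic analysis: project each time slice onto the sine modes, then fit
# log(amplitude) versus t to recover each mode's effective decay constant.
for n in (1, 2):
    mode = np.sin(n * np.pi * x / a)
    amp = flux @ mode / (mode @ mode)
    keep = amp > 1e-3               # drop noise-dominated late times
    slope = np.polyfit(t[keep], np.log(amp[keep]), 1)[0]
    print(f"mode {n}: fitted decay constant {-slope:7.0f} 1/s")
```

The higher spatial mode decays faster, which is why its amplitude must be fit over the early portion of the measuring interval.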

Finally, we used a Laplace transform technique and a simple modeled scattering kernel to solve a diffusion equation for the time and energy dependence of the neutron distribution in the graphite stacks. Comparison of these theoretical results with the results of the first set of experiments indicated that a more exact theoretical analysis would be required to adequately describe the experiments.

The implications of our experimental results for the theory of pulsed neutron experiments in polycrystalline media are discussed in the last chapter.

Relevance: 10.00%

Abstract:

Part I of this thesis deals with three topics concerning the luminescence from bound multi-exciton complexes in Si. Part II presents a model for the decay of electron-hole droplets in pure and doped Ge.

Part I.

We present high resolution photoluminescence data for Si doped with Al, Ga, and In. We observe emission lines due to recombination of electron-hole pairs in bound excitons, and satellite lines which have been interpreted in terms of complexes of several excitons bound to an impurity. The bound exciton luminescence in Si:Ga and Si:Al consists of three emission lines due to transitions from the ground state and two low-lying excited states. In Si:Ga, we observe a second triplet of emission lines which precisely mirrors the triplet due to the bound exciton. This second triplet is interpreted as due to the decay of a two-exciton complex into the bound exciton. The observation of the second complete triplet in Si:Ga conclusively demonstrates that more than one exciton will bind to an impurity. Similar results are found for Si:Al. The energies of the lines show that the second exciton is less tightly bound than the first in Si:Ga. Other lines are observed at lower energies. The assumption of ground-state to ground-state transitions for the lower energy lines is shown to produce a complicated dependence of the binding energy of the last exciton on the number of excitons in a complex. No line attributable to the decay of a two-exciton complex is observed in Si:In.

We present measurements of the bound exciton lifetimes for the four common acceptors in Si and for the first two bound multi-exciton complexes in Si:Ga and Si:Al. These results are shown to be in agreement with a calculation by Osbourn and Smith of Auger transition rates for acceptor bound excitons in Si. Kinetics, rather than work functions, determine the relative populations of complexes of various sizes at temperatures which do not allow them to thermalize with respect to one another. It is shown that kinetic limitations may make it impossible to form two-exciton complexes in Si:In from a gas of free excitons.

We present direct thermodynamic measurements of the work functions of bound multi-exciton complexes in Al, B, P and Li doped Si. We find that in general the work functions are smaller than previously believed. These data remove one obstacle to the bound multi-exciton complex picture, namely the need to explain the very large apparent work functions for the larger complexes obtained by assuming that some of the observed lines are ground-state to ground-state transitions. None of the measured work functions exceeds that of the electron-hole liquid.

Part II.

A new model for the decay of electron-hole droplets in Ge is presented. The model is based on the existence of a cloud of droplets within the crystal, and incorporates exciton flow among the drops in the cloud and the diffusion of excitons away from the cloud. It is able to fit the experimental luminescence decays for pure Ge at different temperatures and pump powers while retaining physically reasonable parameters for the drops. It predicts the shrinkage of the cloud at higher temperatures, which has been verified by spatially and temporally resolved infrared absorption experiments. The model also accounts for the nearly exponential decay of electron-hole droplets in lightly doped Ge at higher temperatures.

Relevance: 10.00%

Abstract:

Conduction through TiO2 films of thickness 100 to 450 Å has been investigated. The samples were prepared either by anodization of Ti or by evaporation of TiO2, with Au or Al evaporated for contacts. The anodized samples exhibited considerable hysteresis due to electrical forming; however, it was possible to avoid this problem with the evaporated samples, from which complete sets of experimental results were obtained and used in the analysis. Electrical measurements included: the dependence of current and capacitance on dc voltage and temperature; the dependence of capacitance and conductance on frequency and temperature; and transient measurements of current and capacitance. A thick (3000 Å) evaporated TiO2 film was used for measuring the dielectric constant (27.5) and the optical dispersion, the latter being similar to that of rutile. An electron transmission diffraction pattern of an evaporated film indicated an essentially amorphous structure with a short-range order that could be related to rutile. Photoresponse measurements indicated the same band gap of about 3 eV for anodized films, evaporated films, and reduced rutile crystals, and gave the barrier energies at the contacts.

The results are interpreted in a self-consistent manner by considering the effect of a large impurity concentration in the films and a correspondingly large ionic space charge. The resulting potential profile in the oxide film leads to a thermally assisted tunneling process between the contacts and the interior of the oxide. A general relation is derived for the steady-state current through structures of this kind. This in turn is expressed quantitatively for each of two possible limiting types of impurity distributions, where one type gives barriers of an exponential shape and leads to quantitative predictions in close agreement with the experimental results. For films somewhat thicker than 100 Å, the theory is formulated essentially in terms of only the independently measured barrier energies and a characteristic parameter of the oxide that depends primarily on the maximum impurity concentration at the contacts. A single value of this parameter gives consistent agreement with the experimentally observed dependence of both current and capacitance on dc voltage and temperature, with the maximum impurity concentration found to be approximately the saturation concentration quoted for rutile. This explains the relative insensitivity of the electrical properties of the films to the exact conditions of formation.

Relevance: 10.00%

Abstract:

The subject of this thesis is electronic coupling in donor-bridge-acceptor systems. In Chapter 2, the electron transfer (ET) properties of cyanide-bridged dinuclear ruthenium complexes were investigated. The strong interaction between the mixed-valent ruthenium centers leads to intense metal-to-metal charge transfer (MMCT) bands. Hush analysis of the MMCT absorption bands yields the electronic coupling strength between the metal centers (H_(AB)) and the total reorganization energy (λ). Comparison of the ET kinetics with calculated rates shows that classical ET models fail to account for the observed kinetics, and nuclear tunneling must be considered.
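
For reference, the textbook Hush expression extracts H_AB from the MMCT band parameters; the sketch below applies it with assumed, purely illustrative band values rather than the measured ones:

```python
import numpy as np

def hush_coupling(eps_max, nu_max, dnu_half, r_ab):
    """Electronic coupling H_AB (cm^-1) from the textbook Hush relation,
    H_AB = 2.06e-2 * sqrt(eps_max * nu_max * dnu_half) / R_AB, where eps_max is
    the peak molar absorptivity (M^-1 cm^-1), nu_max the band maximum (cm^-1),
    dnu_half the full width at half maximum (cm^-1), and R_AB the donor-acceptor
    distance (Angstrom)."""
    return 2.06e-2 * np.sqrt(eps_max * nu_max * dnu_half) / r_ab

# Illustrative (assumed) MMCT band parameters, not the measured values:
H_ab = hush_coupling(eps_max=5000.0, nu_max=8000.0, dnu_half=4000.0, r_ab=5.0)
print(f"H_AB ~ {H_ab:.0f} cm^-1; for a symmetric mixed-valence ion, lambda ~ nu_max")
```

For a symmetric mixed-valence ion the band maximum itself estimates the total reorganization energy λ, which is how both quantities come out of a single absorption band.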

In Chapter 3, ET rates were measured in four ruthenium-modified high-potential iron-sulfur proteins (HiPIP), modified at positions His50, His81, His42, and His18, respectively. The ET kinetics for the His50 and His81 mutants differ by a factor of 300, while the donor-acceptor separations are nearly identical. PATHWAY calculations corroborate these measurements and highlight the importance of the structural detail of the intervening protein matrix.

In Chapter 4, the distance dependence of ET through water bridges was measured. Photoinduced ET measurements in aqueous glasses at 77 K show that water is a poor medium for ET. Luminescence decay and quantum yield data were analyzed in the context of a quenching model that accounts for the exponential distance dependence of ET, the distance distribution of donors and acceptors embedded in the glass and the excluded volumes generated by the finite sizes of the donors and acceptors.

In Chapter 5, the pH-dependent excited-state dynamics of ruthenium-modified amino acids were measured. The [Ru(bpy)_(3)]^(2+) chromophore was linked to amino acids via an amide linkage. Protonation of the amide oxygen effectively quenches the excited state. In addition, time-resolved and steady-state luminescence data reveal that nonradiative rates are very sensitive to the protonation state and the structure of the amino acid moiety.