26 results for Exponential integrators in CaltechTHESIS

Relevance: 10.00%

Abstract:

This thesis discusses various methods for learning and optimization in adaptive systems. Overall, it emphasizes the relationship between optimization, learning, and adaptive systems; and it illustrates the influence of underlying hardware upon the construction of efficient algorithms for learning and optimization. Chapter 1 provides a summary and an overview.

Chapter 2 discusses a method for using feed-forward neural networks to filter the noise out of noise-corrupted signals. The networks use back-propagation learning, but they use it in a way that qualifies as unsupervised learning. The networks adapt based only on the raw input data; there are no external teachers providing information on correct operation during training. The chapter contains an analysis of the learning and develops a simple expression that, based only on the geometry of the network, predicts performance.

Chapter 3 explains a simple model of the piriform cortex, an area in the brain involved in the processing of olfactory information. The model was used to explore the possible effect of acetylcholine on learning and on odor classification. According to the model, the piriform cortex can classify odors better when acetylcholine is present during learning but not present during recall. This is interesting since it suggests that learning and recall might be separate neurochemical modes (corresponding to whether or not acetylcholine is present). When acetylcholine is turned off at all times, even during learning, the model exhibits behavior somewhat similar to Alzheimer's disease, a disease associated with the degeneration of cells that distribute acetylcholine.

Chapters 4, 5, and 6 discuss algorithms appropriate for adaptive systems implemented entirely in analog hardware. The algorithms inject noise into the systems and correlate the noise with the outputs of the systems. This allows them to estimate gradients and to implement noisy versions of gradient descent, without having to calculate gradients explicitly. The methods require only noise generators, adders, multipliers, integrators, and differentiators; and the number of devices needed scales linearly with the number of adjustable parameters in the adaptive systems. With the exception of one global signal, the algorithms require only local information exchange.
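As a software point of reference for the noise-correlation idea (the thesis works in analog hardware), the sketch below injects zero-mean noise into the parameters and correlates it with the resulting change in the loss to obtain a gradient estimate, then performs noisy gradient descent. The quadratic objective, noise level, and learning rate are illustrative assumptions, not the thesis's circuits.

```python
import random

def noisy_gradient_step(params, loss, sigma=0.01, lr=0.05):
    """One noise-injection update: estimate the gradient by correlating
    injected noise with the resulting change in the loss."""
    noise = [random.gauss(0.0, sigma) for _ in params]
    base = loss(params)
    perturbed = loss([p + n for p, n in zip(params, noise)])
    # Correlating the loss change with each noise component gives an
    # estimate of the gradient (up to O(sigma) bias terms).
    scale = (perturbed - base) / (sigma ** 2)
    return [p - lr * scale * n for p, n in zip(params, noise)]

# Illustrative quadratic bowl with minimum at (1, -2).
loss = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2

random.seed(0)
params = [5.0, 5.0]
for _ in range(2000):
    params = noisy_gradient_step(params, loss)
```

With these settings the parameters drift toward the minimum at (1, −2) without any explicit gradient computation, mirroring how the analog systems estimate gradients from noise correlations alone.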

Abstract:

We consider the following singularly perturbed linear two-point boundary-value problem:

Ly(x) ≡ Ω(ε)D_x y(x) − A(x,ε)y(x) = f(x,ε),   0 ≤ x ≤ 1,   (1a)

By ≡ L(ε)y(0) + R(ε)y(1) = g(ε),   ε → 0⁺.   (1b)

Here Ω(ε) is a diagonal matrix whose first m diagonal elements are 1 and last m elements are ε. Aside from reasonable continuity conditions placed on A, L, R, f, g, we assume the lower right m×m principal submatrix of A has no eigenvalues whose real part is zero. Under these assumptions a constructive technique is used to derive sufficient conditions for the existence of a unique solution of (1). These sufficient conditions are used to define when (1) is a regular problem. It is then shown that as ε → 0⁺ the solution of a regular problem exists and converges on every closed subinterval of (0,1) to a solution of the reduced problem. The reduced problem consists of the differential equation obtained by formally setting ε equal to zero in (1a) and initial conditions obtained from the boundary conditions (1b). Several examples of regular problems are also considered.

A similar technique is used to derive the properties of the solution of a particular difference scheme used to approximate (1). Under restrictions on the boundary conditions (1b), it is shown that for stepsizes much larger than ε the solution of the difference scheme, when applied to a regular problem, accurately represents the solution of the reduced problem.
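A scalar toy version of this phenomenon can be checked numerically. For ε y′ = −y + f(x), the reduced solution is simply y(x) = f(x), and an implicit (backward Euler) step tracks it accurately even with a stepsize four orders of magnitude larger than ε. The scalar equation and the scheme here are illustrative stand-ins for the system (1) and the thesis's difference scheme.

```python
import math

# Toy scalar analogue of (1a): eps*y' = -y + f(x); its reduced
# (eps -> 0) solution is y(x) = f(x).
eps = 1e-6
f = lambda x: math.sin(x)

h = 0.01          # stepsize, 10^4 times larger than eps
y = 0.0           # initial value deliberately off the reduced solution
x = 0.0
while x < 1.0:
    x += h
    # Backward Euler: eps*(y_new - y)/h = -y_new + f(x)
    y = (eps * y + h * f(x)) / (eps + h)

# After the initial layer decays (essentially one step), the discrete
# solution follows the reduced solution f(x).
max_err = abs(y - f(1.0))
```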

Furthermore, the existence of a similarity transformation which block diagonalizes a matrix is presented as well as exponential bounds on certain fundamental solution matrices associated with the problem (1).

Abstract:

In the first part of this thesis a study of the effect of the longitudinal distribution of optical intensity and electron density on the static and dynamic behavior of semiconductor lasers is performed. A static model for above threshold operation of a single mode laser, consisting of multiple active and passive sections, is developed by calculating the longitudinal optical intensity distribution and electron density distribution in a self-consistent manner. Feedback from an index and gain Bragg grating is included, as well as feedback from discrete reflections at interfaces and facets. Longitudinal spatial holeburning is analyzed by including the dependence of the gain and the refractive index on the electron density. The mechanisms of spatial holeburning in quarter wave shifted DFB lasers are analyzed. A new laser structure with a uniform optical intensity distribution is introduced and an implementation is simulated, resulting in a large reduction of the longitudinal spatial holeburning effect.

A dynamic small-signal model is then developed by including the optical intensity and electron density distribution, as well as the dependence of the grating coupling coefficients on the electron density. Expressions are derived for the intensity and frequency noise spectrum, the spontaneous emission rate into the lasing mode, the linewidth enhancement factor, and the AM and FM modulation response. Different chirp components are identified in the FM response, and a new adiabatic chirp component is discovered. This new adiabatic chirp component is caused by the nonuniform longitudinal distributions, and is found to dominate at low frequencies. Distributed feedback lasers with partial gain coupling are analyzed, and it is shown how the dependence of the grating coupling coefficients on the electron density can result in an enhancement of the differential gain with an associated enhancement in modulation bandwidth and a reduction in chirp.
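The thesis's dynamic model is distributed; as a much simpler point of reference, the standard *lumped* single-mode rate equations give a two-pole AM modulation response |H(f)|² = fr⁴ / ((fr² − f²)² + (f·γ/2π)²), where fr is the relaxation-resonance frequency and γ the damping rate. The sketch below evaluates this textbook response with typical assumed values, not parameters of the thesis's devices.

```python
import math

fr = 5e9                     # relaxation resonance, 5 GHz (assumed)
gamma = 2 * math.pi * 2e9    # damping rate (assumed)

def am_response(f):
    """Two-pole small-signal AM response of the lumped rate equations."""
    return fr**4 / ((fr**2 - f**2)**2 + (f * gamma / (2 * math.pi))**2)

# Sweep 0.1-19.9 GHz and locate the resonance peak.
freqs = [i * 1e8 for i in range(1, 200)]
peak_f = max(freqs, key=am_response)
```

Damping pulls the response peak slightly below fr, and the resonance height over the low-frequency response illustrates why enhanced differential gain (larger fr for a given damping) extends the usable modulation bandwidth.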

In the second part, spectral characteristics of passively mode-locked two-section multiple quantum well lasers coupled to an external cavity are studied. Broad-band wavelength tuning using an external grating is demonstrated for the first time in passively mode-locked semiconductor lasers. A record tuning range of 26 nm is measured, with pulse widths of typically a few picoseconds and time-bandwidth products of more than 10 times the transform limit. It is then demonstrated that these large time-bandwidth products are due to a strong linear upchirp, by performing pulse compression by a factor of 15 to record pulse widths as low as 320 fs.

A model for pulse propagation through a saturable medium with self-phase-modulation, due to the α-parameter, is developed for quantum well material, including the frequency dependence of the gain medium. This model is used to simulate two-section devices coupled to an external cavity. When no self-phase-modulation is present, it is found that the pulses are asymmetric with a sharper rising edge, that the pulse tails have an exponential behavior, and that the transform limit is 0.3. Inclusion of self-phase-modulation results in a linear upchirp imprinted on the pulse after each round-trip. This linear upchirp is due to a combination of self-phase-modulation in a gain section and absorption of the leading edge of the pulse in the saturable absorber.
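The chirp mechanism can be sketched in a few lines: self-phase modulation adds a nonlinear phase proportional to the intensity profile, and the instantaneous frequency shift is the (negative) time derivative of that phase. For a Gaussian pulse this shift grows almost linearly with time near the pulse center, i.e. a linear upchirp. Pulse width and SPM strength below are arbitrary illustrative values.

```python
import math

tau = 1.0        # pulse width (arbitrary units)
phi_max = 3.0    # peak nonlinear phase (assumed SPM strength)

def intensity(t):
    """Gaussian intensity profile."""
    return math.exp(-t**2 / tau**2)

def freq_shift(t, dt=1e-4):
    """delta_omega(t) = -d/dt [phi_max * I(t)], via central difference."""
    phi = lambda u: phi_max * intensity(u)
    return -(phi(t + dt) - phi(t - dt)) / (2 * dt)

# Near t = 0 the shift increases monotonically with t: an upchirp.
shifts = [freq_shift(t) for t in (-0.2, -0.1, 0.0, 0.1, 0.2)]
```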

Abstract:

Computer science and electrical engineering have been the great success story of the twentieth century. The neat modularity and mapping of a language onto circuits has led to robots on Mars, desktop computers and smartphones. But these devices are not yet able to do some of the things that life takes for granted: repair a scratch, reproduce, regenerate, or grow exponentially fast, all while remaining functional.

This thesis explores and develops algorithms, molecular implementations, and theoretical proofs in the context of “active self-assembly” of molecular systems. The long-term vision of active self-assembly is the theoretical and physical implementation of materials that are composed of reconfigurable units with the programmability and adaptability of biology’s numerous molecular machines. En route to this goal, we must first find a way to overcome the memory limitations of molecular systems, and to discover the limits of complexity that can be achieved with individual molecules.

One of the main thrusts in molecular programming is to use computer science as a tool for figuring out what can be achieved. While molecular systems that are Turing-complete have been demonstrated [Winfree, 1996], these systems still cannot achieve some of the feats biology has achieved.

One might think that because a system is Turing-complete, capable of computing “anything,” that it can do any arbitrary task. But while it can simulate any digital computational problem, there are many behaviors that are not “computations” in a classical sense, and cannot be directly implemented. Examples include exponential growth and molecular motion relative to a surface.

Passive self-assembly systems cannot implement these behaviors because (a) molecular motion relative to a surface requires a source of fuel that is external to the system, and (b) passive systems are too slow to assemble exponentially-fast-growing structures. We call these behaviors “energetically incomplete” programmable behaviors. This class of behaviors includes any behavior where a passive physical system simply does not have enough physical energy to perform the specified tasks in the requisite amount of time.

As we will demonstrate and prove, a sufficiently expressive implementation of an “active” molecular self-assembly approach can achieve these behaviors. Using an external source of fuel solves part of the problem, so the system is not “energetically incomplete.” But the programmable system also needs to have sufficient expressive power to achieve the specified behaviors. Perhaps surprisingly, some of these systems do not even require Turing completeness to be sufficiently expressive.

Building on a large variety of work by other scientists in the fields of DNA nanotechnology, chemistry and reconfigurable robotics, this thesis introduces several research contributions in the context of active self-assembly.

We show that simple primitives such as insertion and deletion are able to generate complex and interesting results such as the growth of a linear polymer in logarithmic time and the ability of a linear polymer to treadmill. To this end we developed a formal model for active self-assembly that is directly implementable with DNA molecules. We show that this model is computationally equivalent to a machine capable of producing languages that are strictly stronger than the regular languages and, at most, as strong as the context-free grammars. This is a great advance in the theory of active self-assembly, as prior models were either entirely theoretical or only implementable in the context of macro-scale robotics.

We developed a chain reaction method for the autonomous exponential growth of a linear DNA polymer. Our method is based on the insertion of molecules into the assembly, which generates two new insertion sites for every initial one employed. The building of a line in logarithmic time is a first step toward building a shape in logarithmic time. We demonstrate the first construction of a synthetic linear polymer that grows exponentially fast via insertion. We show that monomer molecules are converted into the polymer in logarithmic time via spectrofluorimetry and gel electrophoresis experiments. We also demonstrate the division of these polymers via the addition of a single DNA complex that competes with the insertion mechanism. This shows the growth of a population of polymers in logarithmic time. We characterize the DNA insertion mechanism that we utilize in Chapter 4. We experimentally demonstrate that we can control the kinetics of this reaction over at least seven orders of magnitude, by programming the sequences of DNA that initiate the reaction.
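The growth argument can be illustrated with a toy counter: if every insertion site inserts once per round and each insertion creates two new sites, the polymer length doubles each round, so reaching length n takes about log₂ n rounds. This idealized bookkeeping ignores the actual DNA kinetics.

```python
def rounds_to_length(target):
    """Rounds needed when each insertion creates two new insertion
    sites, so length (and site count) doubles every round."""
    length, rounds = 1, 0
    while length < target:
        length *= 2
        rounds += 1
    return rounds

# Logarithmic growth: large targets need only a handful of rounds.
growth = [rounds_to_length(n) for n in (16, 1024, 10**6)]
```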

In addition, we review co-authored work on programming molecular robots using prescriptive landscapes of DNA origami; this was the first microscopic demonstration of programming a molecular robot to walk on a 2-dimensional surface. We developed a snapshot method for imaging these random walking molecular robots and a CAPTCHA-like analysis method for difficult-to-interpret imaging data.

Abstract:

Despite the complexity of biological networks, we find that certain common architectures govern network structures. These architectures impose fundamental constraints on system performance and create tradeoffs that the system must balance in the face of uncertainty in the environment. This means that while a system may be optimized for a specific function through evolution, the optimal achievable state must follow these constraints. One such constraining architecture is autocatalysis, as seen in many biological networks including glycolysis and ribosomal protein synthesis. Using a minimal model, we show that ATP autocatalysis in glycolysis imposes stability and performance constraints and that the experimentally well-studied glycolytic oscillations are in fact a consequence of a tradeoff between error minimization and stability. We also show that additional complexity in the network results in increased robustness. Ribosome synthesis is also autocatalytic, since ribosomes must be used to make more ribosomal proteins. When ribosomes have higher protein content, the autocatalysis is increased. We show that this autocatalysis destabilizes the system, slows down its response, and constrains the system’s performance. On a larger scale, transcriptional regulation of whole organisms also follows architectural constraints, and this can be seen in the differences between bacterial and yeast transcription networks. We show that the degree distribution of the bacterial transcription network follows a power law while that of the yeast network follows an exponential distribution. We then explore the evolutionary models that have previously been proposed and show that neither the preferential linking model nor the duplication-divergence model of network evolution generates the power-law, hierarchical structure found in bacteria. However, in real biological systems, the generation of new nodes occurs through both duplication and horizontal gene transfer, and we show that a biologically reasonable combination of the two mechanisms generates the desired network.
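The contrast between the two degree distributions can be reproduced with a toy growth simulation, under assumed tree-growth rules that are not the biological models analyzed in the thesis: attaching each new node preferentially by degree produces a heavy (power-law-like) tail with a large maximum degree, while uniform-random attachment produces an exponential-like tail with a far smaller maximum degree.

```python
import random

random.seed(1)
N = 20000

def grow(preferential):
    """Grow a tree one node at a time; return the degree map."""
    targets = [0, 1]          # multiset of edge endpoints: sampling
    degree = {0: 1, 1: 1}     # from it is proportional to degree
    for new in range(2, N):
        if preferential:
            t = random.choice(targets)   # degree-proportional attachment
        else:
            t = random.randrange(new)    # uniform-random attachment
        degree[t] = degree.get(t, 0) + 1
        degree[new] = 1
        targets += [t, new]
    return degree

max_pref = max(grow(True).values())    # heavy tail: large hubs
max_unif = max(grow(False).values())   # exponential-like: no big hubs
```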

Abstract:

Inspired by key experimental and analytical results regarding Shape Memory Alloys (SMAs), we propose a modelling framework to explore the interplay between martensitic phase transformations and plastic slip in polycrystalline materials, with an eye towards computational efficiency. The resulting framework uses a convexified potential for the internal energy density to capture the stored energy associated with transformation at the meso-scale, and introduces kinetic potentials to govern the evolution of transformation and plastic slip. The framework is novel in the way it treats plasticity on par with transformation.

We implement the framework in the setting of anti-plane shear, using a staggered implicit/explicit update: we first use a Fast-Fourier Transform (FFT) solver based on an Augmented Lagrangian formulation to implicitly solve for the full-field displacements of a simulated polycrystal, then explicitly update the volume fraction of martensite and plastic slip using their respective stick-slip type kinetic laws. We observe that, even in this simple setting with an idealized material comprising four martensitic variants and four slip systems, the model recovers a rich variety of SMA-type behaviors. We use this model to gain insight into the isothermal behavior of stress-stabilized martensite, looking at the effects of the relative plastic yield strength, the memory of deformation history under non-proportional loading, and several others.
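The explicit half of such a staggered step can be sketched with a stick-slip rule of the assumed form: an internal variable (e.g. a volume fraction or slip) evolves only when its thermodynamic driving force exceeds a threshold, and is otherwise stuck. The threshold, mobility, and forces below are placeholders, not the model's actual kinetic potentials.

```python
def stick_slip_update(value, driving_force, threshold=1.0, mobility=0.1):
    """One explicit update of a stick-slip type kinetic law."""
    if abs(driving_force) <= threshold:
        return value                       # stuck: no evolution
    excess = abs(driving_force) - threshold
    sign = 1.0 if driving_force > 0 else -1.0
    return value + mobility * sign * excess  # slip, driven by the excess

stuck = stick_slip_update(0.5, driving_force=0.8)     # below threshold
slipped = stick_slip_update(0.5, driving_force=3.0)   # above threshold
```

This threshold behavior is what gives the model its rate-independent hysteresis: sub-threshold loading leaves the internal variables frozen, which is how the material "remembers" its deformation history.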

We extend the framework to the generalized 3-D setting, for which the convexified potential is a lower bound on the actual internal energy, and show that the fully implicit discrete-time formulation of the framework is governed by a variational principle for mechanical equilibrium. We further propose an extension of the method to finite deformations via an exponential mapping. We implement the generalized framework using an existing Optimal Transport Mesh-free (OTM) solver. We then model the α–γ and α–ε transformations in pure iron, with an initial attempt in the latter to account for twinning in the parent phase. We demonstrate the scalability of the framework to large-scale computing by simulating Taylor impact experiments, observing nearly linear (ideal) speed-up through 256 MPI tasks. Finally, we present preliminary results of a simulated Split-Hopkinson Pressure Bar (SHPB) experiment using the α–ε model.

Abstract:

Moving mesh methods (also called r-adaptive methods) are space-adaptive strategies used for the numerical simulation of time-dependent partial differential equations. These methods keep the total number of mesh points fixed during the simulation, but redistribute them over time to follow the areas where a higher mesh point density is required. Only a very limited number of moving mesh methods have been designed for solving field-theoretic partial differential equations, and the numerical analysis of the resulting schemes is challenging. In this thesis we present two ways to construct r-adaptive variational and multisymplectic integrators for (1+1)-dimensional Lagrangian field theories. The first method uses a variational discretization of the physical equations, and the mesh equations are then coupled in a way typical of the existing r-adaptive schemes. The second method treats the mesh points as pseudo-particles and incorporates their dynamics directly into the variational principle. A user-specified adaptation strategy is then enforced through Lagrange multipliers as a constraint on the dynamics of both the physical field and the mesh points. We discuss the advantages and limitations of our methods. The proposed methods are readily applicable to (weakly) non-degenerate field theories; numerical results for the sine-Gordon equation are presented.
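The redistribution idea behind r-adaptive methods can be sketched via equidistribution of a monitor function: a fixed number of mesh points is placed so that every cell carries the same integral of M(x), which concentrates points where M is large. The monitor function and interval below are illustrative and not tied to either method in the thesis.

```python
import math

def equidistribute(n, M, a=0.0, b=1.0, samples=2000):
    """Place n+1 mesh points on [a, b] so each cell carries an equal
    share of the integral of the monitor function M."""
    xs = [a + (b - a) * i / samples for i in range(samples + 1)]
    # cumulative integral of M on a fine background grid (trapezoids)
    cum = [0.0]
    for i in range(samples):
        cum.append(cum[-1] + 0.5 * (M(xs[i]) + M(xs[i + 1])) * (xs[i + 1] - xs[i]))
    total = cum[-1]
    # invert the cumulative integral at n+1 equally spaced levels
    mesh, j = [], 0
    for k in range(n + 1):
        level = total * k / n
        while j < samples and cum[j + 1] < level:
            j += 1
        t = (level - cum[j]) / (cum[j + 1] - cum[j] + 1e-300)
        mesh.append(xs[j] + t * (xs[j + 1] - xs[j]))
    return mesh

# A monitor peaked at x = 0.5 draws mesh points toward the centre.
mesh = equidistribute(10, lambda x: 1.0 + 50.0 * math.exp(-200 * (x - 0.5) ** 2))
```

In an actual r-adaptive scheme this redistribution is not done as a separate post-processing step; the mesh equation is coupled to (or, in the second method, derived within) the variational principle for the field itself.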

In an attempt to extend our approach to degenerate field theories, in the last part of this thesis we construct higher-order variational integrators for a class of degenerate systems described by Lagrangians that are linear in velocities. We analyze the geometry underlying such systems and develop the appropriate theory for variational integration. Our main observation is that the evolution takes place on the primary constraint and the 'Hamiltonian' equations of motion can be formulated as an index 1 differential-algebraic system. We then proceed to construct variational Runge-Kutta methods and analyze their properties. The general properties of Runge-Kutta methods depend on the 'velocity' part of the Lagrangian. If the 'velocity' part is also linear in the position coordinate, then we show that non-partitioned variational Runge-Kutta methods are equivalent to integration of the corresponding first-order Euler-Lagrange equations, which have the form of a Poisson system with a constant structure matrix, and the classical properties of the Runge-Kutta method are retained. If the 'velocity' part is nonlinear in the position coordinate, we observe a reduction of the order of convergence, which is typical of numerical integration of DAEs. We also apply our methods to several models and present the results of our numerical experiments.

Abstract:

This thesis is motivated by safety-critical applications involving autonomous air, ground, and space vehicles carrying out complex tasks in uncertain and adversarial environments. We use temporal logic as a language to formally specify complex tasks and system properties. Temporal logic specifications generalize the classical notions of stability and reachability that are studied in the control and hybrid systems communities. Given a system model and a formal task specification, the goal is to automatically synthesize a control policy for the system that ensures that the system satisfies the specification. This thesis presents novel control policy synthesis algorithms for optimal and robust control of dynamical systems with temporal logic specifications. Furthermore, it introduces algorithms that are efficient and extend to high-dimensional dynamical systems.

The first contribution of this thesis is the generalization of a classical linear temporal logic (LTL) control synthesis approach to optimal and robust control. We show how we can extend automata-based synthesis techniques for discrete abstractions of dynamical systems to create optimal and robust controllers that are guaranteed to satisfy an LTL specification. Such optimal and robust controllers can be computed at little extra computational cost compared to computing a feasible controller.

The second contribution of this thesis addresses the scalability of control synthesis with LTL specifications. A major limitation of the standard automaton-based approach for control with LTL specifications is that the automaton might be doubly-exponential in the size of the LTL specification. We introduce a fragment of LTL for which one can compute feasible control policies in time polynomial in the size of the system and specification. Additionally, we show how to compute optimal control policies for a variety of cost functions, and identify interesting cases when this can be done in polynomial time. These techniques are particularly relevant for online control, as one can guarantee that a feasible solution can be found quickly, and then iteratively improve on the quality as time permits.

The final contribution of this thesis is a set of algorithms for computing feasible trajectories for high-dimensional, nonlinear systems with LTL specifications. These algorithms avoid a potentially computationally-expensive process of computing a discrete abstraction, and instead compute directly on the system's continuous state space. The first method uses an automaton representing the specification to directly encode a series of constrained-reachability subproblems, which can be solved in a modular fashion by using standard techniques. The second method encodes an LTL formula as mixed-integer linear programming constraints on the dynamical system. We demonstrate these approaches with numerical experiments on temporal logic motion planning problems with high-dimensional (10+ states) continuous systems.
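A toy version of the automaton-product construction used by the first method, for the specification "eventually visit A, and afterwards eventually visit B" on an invented four-cell line world: a three-state automaton tracks progress through the specification, and a breadth-first search over (system state, automaton state) pairs returns a satisfying trajectory. Real systems require the constrained-reachability machinery described above; this sketch only shows the product structure.

```python
from collections import deque

def moves(s):
    # System: positions 0..3 on a line; can step left or right.
    return [t for t in (s - 1, s + 1) if 0 <= t <= 3]

A, B = 1, 3   # labelled cells (invented for this example)

def automaton_step(q, s):
    # q=0: waiting for A; q=1: A seen, waiting for B; q=2: accepting.
    if q == 0 and s == A:
        return 1
    if q == 1 and s == B:
        return 2
    return q

def plan(start):
    """BFS on the product of system states and automaton states."""
    q0 = automaton_step(0, start)
    queue = deque([(start, q0, (start,))])
    seen = {(start, q0)}
    while queue:
        s, q, path = queue.popleft()
        if q == 2:
            return path            # reached the accepting automaton state
        for t in moves(s):
            nq = automaton_step(q, t)
            if (t, nq) not in seen:
                seen.add((t, nq))
                queue.append((t, nq, path + (t,)))
    return None

trajectory = plan(2)   # must pass through A=1 before ending at B=3
```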

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. Now there are a plethora of models, based on different assumptions, applicable in differing contextual settings, and selecting the right model to use tends to be an ad-hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioral theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that surprisingly these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD which leads to orders-of-magnitude speedup over other methods.
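The adaptive-testing loop can be sketched as follows, with two simplifications loudly flagged: plain information gain (minimum expected posterior entropy) is used in place of the EC2 criterion, and the simulated subject answers noiselessly. The three toy "theories" are just tables of predicted binary choices over four hypothetical tests, not actual decision theories.

```python
import math

# theories: mapping theory -> predicted choice (0 or 1) for 4 tests
theories = {
    "T1": [0, 0, 1, 1],
    "T2": [0, 1, 0, 1],
    "T3": [1, 1, 1, 0],
}
truth = "T2"          # simulated subject follows T2, without noise

def entropy(p):
    return -sum(v * math.log(v) for v in p.values() if v > 0)

posterior = {t: 1 / 3 for t in theories}
asked = set()
while entropy(posterior) > 1e-9:
    def expected_entropy(i):
        # expected posterior entropy after observing the answer to test i
        out = 0.0
        for answer in (0, 1):
            mass = {t: p for t, p in posterior.items()
                    if theories[t][i] == answer}
            z = sum(mass.values())
            if z > 0:
                out += z * entropy({t: p / z for t, p in mass.items()})
        return out
    # greedily pick the most informative remaining test
    i = min((j for j in range(4) if j not in asked), key=expected_entropy)
    asked.add(i)
    answer = theories[truth][i]
    # Bayesian update: keep theories consistent with the answer
    posterior = {t: p for t, p in posterior.items() if theories[t][i] == answer}
    z = sum(posterior.values())
    posterior = {t: p / z for t, p in posterior.items()}

recovered = max(posterior, key=posterior.get)
```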

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment, and sequentially presented choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models: quasi-hyperbolic (α, β) discounting and fixed cost discounting, and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between 2 options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
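The qualitative difference between the first two classes of discounting models can be shown in a few lines: with exponential discounting, the preference between a smaller-sooner and a larger-later payoff does not depend on how far both options are shifted into the future, while with hyperbolic discounting it can reverse. The amounts, delays, and discount parameters below are arbitrary illustrative choices.

```python
def exponential(amount, delay, r=0.04):
    """Exponential discounting: constant per-period discount rate."""
    return amount * (1 + r) ** (-delay)

def hyperbolic(amount, delay, k=0.25):
    """Hyperbolic discounting: discount factor 1/(1 + k*delay)."""
    return amount / (1 + k * delay)

def prefers_later(discount, shift):
    # 50 at time 2+shift versus 80 at time 10+shift (illustrative)
    return discount(80, 10 + shift) > discount(50, 2 + shift)

exp_now, exp_far = prefers_later(exponential, 0), prefers_later(exponential, 30)
hyp_now, hyp_far = prefers_later(hyperbolic, 0), prefers_later(hyperbolic, 30)
```

The exponential discounter's preference is the same at both horizons, while the hyperbolic discounter prefers the larger-later reward when both options are distant but switches to the smaller-sooner one as they approach: exactly the temporal choice inconsistency discussed below.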

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioral theories in the "wild". We pay attention to prospect theory, which emerged as the dominant theory in our lab experiments of risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinctly different from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is being offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.

In future work, BROAD can be widely applicable for testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Abstract:

We know from the CMB and observations of large-scale structure that the universe is extremely flat, homogeneous, and isotropic. The currently favored mechanism for generating these characteristics is inflation, a theorized period of exponential expansion of the universe that occurred shortly after the Big Bang. Most theories of inflation generically predict a background of stochastic gravitational waves. These gravitational waves should leave their unique imprint on the polarization of the CMB via Thomson scattering. Scalar perturbations of the metric will cause a pattern of polarization with no curl (E-mode). Tensor perturbations (gravitational waves) will cause a unique pattern of polarization on the CMB that includes a curl component (B-mode). A measurement of the ratio of the tensor to scalar perturbations (r) tells us the energy scale of inflation. Recent measurements by the BICEP2 team detect the B-mode spectrum with a tensor-to-scalar ratio of r = 0.20 (+0.07, −0.05). An independent confirmation of this result is the next step towards understanding the inflationary universe.

This thesis describes my work on a balloon-borne polarimeter called SPIDER, which is designed to illuminate the physics of the early universe through measurements of the cosmic microwave background polarization. SPIDER consists of six single-frequency, on-axis refracting telescopes contained in a shared-vacuum liquid-helium cryostat. Its large format arrays of millimeter-wave detectors and tight control of systematics will give it unprecedented sensitivity. This thesis describes how the SPIDER detectors are characterized and calibrated for flight, as well as how the systematics requirements for the SPIDER system are simulated and measured.

Abstract:

Detailed pulsed neutron measurements have been performed in graphite assemblies ranging in size from 30.48 cm × 38.10 cm × 38.10 cm to 91.44 cm × 66.67 cm × 66.67 cm. Results of the measurements have been compared to modeled theoretical computations.

In the first set of experiments, we measured the effective decay constant of the neutron population in ten graphite stacks as a function of time after the source burst. We found the decay to be non-exponential in the six smallest assemblies, while in three larger assemblies the decay was exponential over a significant portion of the total measuring interval. The decay in the largest stack was exponential over the entire ten-millisecond measuring interval. The non-exponential decay mode occurred when the effective decay constant exceeded 1600 sec⁻¹.

In a second set of experiments, we measured the spatial dependence of the neutron population in four graphite stacks as a function of time after the source pulse. By performing a harmonic analysis of the spatial shape of the neutron distribution, we were able to compute the effective decay constants of the first two spatial modes. In addition, we were able to compute the time-dependent effective wave number of the neutron distribution in the stacks.
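The harmonic analysis above can be sketched as a projection of the spatial distribution onto the sine eigenfunctions of a slab. Everything below (the slab width, the assumed mode mixture) is an illustrative placeholder, not data from the thesis.

```python
import numpy as np

# Hedged sketch: decompose an assumed spatial neutron distribution into the
# first two sine modes of a slab of (extrapolated) width a.
a = 66.67                          # cm, illustrative transverse dimension
x = np.linspace(0.0, a, 1000)
dx = x[1] - x[0]

# Hypothetical snapshot: fundamental mode plus a 20% second-harmonic admixture
n_x = np.sin(np.pi * x / a) + 0.2 * np.sin(2.0 * np.pi * x / a)

def mode_amplitude(k):
    """Fourier-sine coefficient A_k = (2/a) * integral n(x) sin(k*pi*x/a) dx."""
    return (2.0 / a) * np.sum(n_x * np.sin(k * np.pi * x / a)) * dx

print(round(mode_amplitude(1), 3), round(mode_amplitude(2), 3))  # ≈ 1.0 0.2
```

Tracking each A_k over successive time bins yields one effective decay constant per spatial mode, which is how the first two modal decay constants could be obtained.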

Finally, we used a Laplace transform technique and a simple modeled scattering kernel to solve a diffusion equation for the time and energy dependence of the neutron distribution in the graphite stacks. Comparison of these theoretical results with the results of the first set of experiments indicated that more exact theoretical analysis would be required to adequately describe the experiments.

The implications of our experimental results for the theory of pulsed neutron experiments in polycrystalline media are discussed in the last chapter.


Part I of this thesis deals with three topics concerning the luminescence from bound multi-exciton complexes in Si. Part II presents a model for the decay of electron-hole droplets in pure and doped Ge.

Part I.

We present high resolution photoluminescence data for Si doped with Al, Ga, and In. We observe emission lines due to recombination of electron-hole pairs in bound excitons and satellite lines which have been interpreted in terms of complexes of several excitons bound to an impurity. The bound exciton luminescence in Si:Ga and Si:Al consists of three emission lines due to transitions from the ground state and two low-lying excited states. In Si:Ga, we observe a second triplet of emission lines which precisely mirror the triplet due to the bound exciton. This second triplet is interpreted as due to decay of a two-exciton complex into the bound exciton. The observation of the second complete triplet in Si:Ga conclusively demonstrates that more than one exciton will bind to an impurity. Similar results are found for Si:Al. The energies of the lines show that the second exciton is less tightly bound than the first in Si:Ga. Other lines are observed at lower energies. The assumption of ground-state to ground-state transitions for the lower energy lines is shown to produce a complicated dependence of the binding energy of the last exciton on the number of excitons in a complex. No line attributable to the decay of a two-exciton complex is observed in Si:In.

We present measurements of the bound exciton lifetimes for the four common acceptors in Si and for the first two bound multi-exciton complexes in Si:Ga and Si:Al. These results are shown to be in agreement with a calculation by Osbourn and Smith of Auger transition rates for acceptor bound excitons in Si. Kinetics and work functions determine the relative populations of complexes of various sizes at temperatures which do not allow them to thermalize with respect to one another. It is shown that kinetic limitations may make it impossible to form two-exciton complexes in Si:In from a gas of free excitons.

We present direct thermodynamic measurements of the work functions of bound multi-exciton complexes in Al, B, P and Li doped Si. We find that in general the work functions are smaller than previously believed. These data remove one obstacle to the bound multi-exciton complex picture which has been the need to explain the very large apparent work functions for the larger complexes obtained by assuming that some of the observed lines are ground-state to ground-state transitions. None of the measured work functions exceed that of the electron-hole liquid.

Part II.

A new model for the decay of electron-hole-droplets in Ge is presented. The model is based on the existence of a cloud of droplets within the crystal and incorporates exciton flow among the drops in the cloud and the diffusion of excitons away from the cloud. It is able to fit the experimental luminescence decays for pure Ge at different temperatures and pump powers while retaining physically reasonable parameters for the drops. It predicts the shrinkage of the cloud at higher temperatures which has been verified by spatially and temporally resolved infrared absorption experiments. The model also accounts for the nearly exponential decay of electron-hole-droplets in lightly doped Ge at higher temperatures.


Conduction through TiO2 films of thickness 100 to 450 Å has been investigated. The samples were prepared by either anodization of Ti or evaporation of TiO2, with Au or Al evaporated for contacts. The anodized samples exhibited considerable hysteresis due to electrical forming; however, it was possible to avoid this problem with the evaporated samples, from which complete sets of experimental results were obtained and used in the analysis. Electrical measurements included: the dependence of current and capacitance on dc voltage and temperature; the dependence of capacitance and conductance on frequency and temperature; and transient measurements of current and capacitance. A thick (3000 Å) evaporated TiO2 film was used for measuring the dielectric constant (27.5) and the optical dispersion, the latter being similar to that for rutile. An electron transmission diffraction pattern of an evaporated film indicated an essentially amorphous structure with a short-range order that could be related to rutile. Photoresponse measurements indicated the same band gap of about 3 eV for anodized and evaporated films and reduced rutile crystals, and gave the barrier energies at the contacts.

The results are interpreted in a self-consistent manner by considering the effect of a large impurity concentration in the films and a correspondingly large ionic space charge. The resulting potential profile in the oxide film leads to a thermally assisted tunneling process between the contacts and the interior of the oxide. A general relation is derived for the steady state current through structures of this kind. This in turn is expressed quantitatively for each of two possible limiting types of impurity distributions, where one type gives barriers of an exponential shape and leads to quantitative predictions in close agreement with the experimental results. For films somewhat thicker than 100 Å, the theory is formulated essentially in terms of only the independently measured barrier energies and a characteristic parameter of the oxide that depends primarily on the maximum impurity concentration at the contacts. A single value of this parameter gives consistent agreement with the experimentally observed dependence of both current and capacitance on dc voltage and temperature, with the maximum impurity concentration found to be approximately the saturation concentration quoted for rutile. This explains the relative insensitivity of the electrical properties of the films to the exact conditions of formation.


The subject of this thesis is electronic coupling in donor-bridge-acceptor systems. In Chapter 2, ET properties of cyanide-bridged dinuclear ruthenium complexes were investigated. The strong interaction between the mixed-valent ruthenium centers leads to intense metal-to-metal charge transfer (MMCT) bands. Hush analysis of the MMCT absorption bands yields the electronic-coupling strength between the metal centers (H_(AB)) and the total reorganization energy (λ). Comparison of ET kinetics to calculated rates shows that classical ET models fail to account for the observed kinetics and that nuclear tunneling must be considered.
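A standard Hush analysis of this kind can be sketched numerically. The band parameters below are hypothetical placeholders, not values from the thesis, and the prefactor assumes the usual unit conventions (ε in M^(-1) cm^(-1), band energies in cm^(-1), donor-acceptor separation in Å).

```python
import math

# Hedged sketch of a textbook Hush analysis of an MMCT band:
#   H_AB (cm^-1) = 2.06e-2 * sqrt(eps_max * nu_max * delta_nu_half) / r
# All input values below are illustrative, not measured quantities.
eps_max = 3000.0        # molar absorptivity at the MMCT band maximum
nu_max = 8000.0         # band maximum in cm^-1 (~ reorganization energy lambda)
delta_nu_half = 4000.0  # full width at half maximum in cm^-1
r = 5.0                 # metal-metal separation in Angstroms

H_AB = 2.06e-2 * math.sqrt(eps_max * nu_max * delta_nu_half) / r
print(round(H_AB, 1))   # ≈ 1276.5 cm^-1
```

A coupling of this order (a large fraction of λ/2) is what signals the strongly interacting, nearly delocalized regime where classical ET expressions begin to break down.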

In Chapter 3, ET rates were measured in four ruthenium-modified high-potential iron-sulfur proteins (HiPIPs), modified at positions His50, His81, His42, and His18, respectively. ET kinetics for the His50 and His81 mutants differ by a factor of 300, while the donor-acceptor separation is nearly identical. PATHWAY calculations corroborate these measurements and highlight the importance of the structural detail of the intervening protein matrix.

In Chapter 4, the distance dependence of ET through water bridges was measured. Photoinduced ET measurements in aqueous glasses at 77 K show that water is a poor medium for ET. Luminescence decay and quantum yield data were analyzed in the context of a quenching model that accounts for the exponential distance dependence of ET, the distance distribution of donors and acceptors embedded in the glass, and the excluded volumes generated by the finite sizes of the donors and acceptors.
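The exponential distance dependence at the heart of such a quenching model can be illustrated with a minimal sketch; k0, r0, and beta here are hypothetical values of a typical order of magnitude, not parameters fitted in the thesis.

```python
import math

# Illustrative form of the distance dependence assumed in the quenching model:
#   k_ET(r) = k0 * exp(-beta * (r - r0))
# k0, r0, and beta are invented for the example.
k0 = 1.0e12      # rate (s^-1) at close contact r0
r0 = 3.0         # Angstroms
beta = 1.6       # Angstrom^-1, a typical order for through-space tunneling

def k_ET(r):
    """ET rate at donor-acceptor separation r (Angstroms)."""
    return k0 * math.exp(-beta * (r - r0))

# Each additional Angstrom cuts the rate by a factor exp(beta) ~ 5
ratio = k_ET(4.0) / k_ET(5.0)
print(round(ratio, 3))   # ≈ 4.953
```

Averaging such a rate over the distance distribution of donors and acceptors in the glass (minus the excluded volumes) is what produces the observed non-exponential luminescence decays.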

In Chapter 5, the pH-dependent excited-state dynamics of ruthenium-modified amino acids were measured. The [Ru(bpy)_(3)]^(2+) chromophore was linked to amino acids via an amide linkage. Protonation of the amide oxygen effectively quenches the excited state. In addition, time-resolved and steady-state luminescence data reveal that nonradiative rates are very sensitive to the protonation state and the structure of the amino acid moiety.


The 1-6 MeV electron flux at 1 AU has been measured for the time period October 1972 to December 1977 by the Caltech Electron/Isotope Spectrometers on the IMP-7 and IMP-8 satellites. The non-solar interplanetary electron flux reported here covered parts of five synodic periods. The 88 Jovian increases identified in these five synodic periods were classified by their time profiles. The fall time profiles were consistent with an exponential fall with τ ≈ 4-9 days. The rise time profiles displayed a systematic variation over the synodic period. Exponential rise time profiles with τ ≈ 1-3 days tended to occur in the time period before nominal connection, diffusive profiles predicted by the convection-diffusion model around nominal connection, and abrupt profiles after nominal connection.
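For an exponential fall, the e-folding time τ follows directly from two flux samples. The numbers below are invented for illustration; they merely land in the observed 4-9 day range and are not measurements from the thesis.

```python
import math

# Hedged illustration: extract an e-folding time from an exponential fall,
#   flux(t) = F0 * exp(-t / tau),
# so that tau = (t2 - t1) / ln(F1 / F2). All values are made up.
t1, t2 = 2.0, 8.0          # days after the peak of a Jovian increase
F1, F2 = 100.0, 30.0       # relative 1-6 MeV electron flux (hypothetical)

tau = (t2 - t1) / math.log(F1 / F2)
print(round(tau, 2))       # ≈ 4.98 days
```

In practice a least-squares fit over many samples would be used, but the two-point form shows why the fall profiles constrain τ so directly.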

The times of enhancements in the magnetic field, |B|, at 1 AU showed a better correlation than corotating interaction regions (CIR's) with Jovian increases and other changes in the electron flux at 1 AU, suggesting that |B| enhancements indicate the times that barriers to electron propagation pass Earth. Time sequences of the increases and decreases in the electron flux at 1 AU were qualitatively modeled by using the times that CIR's passed Jupiter and the times that |B| enhancements passed Earth.

The electron data observed at 1 AU were modeled by using a convection-diffusion model of Jovian electron propagation. The synodic envelope formed by the maxima of the Jovian increases was modeled by the envelope formed by the predicted intensities at a time less than that needed to reach equilibrium. Even though the envelope shape calculated in this way was similar to the observed envelope, the required diffusion coefficients were not consistent with a diffusive process.

Three Jovian electron increases at 1 AU for the 1974 synodic period were fit with rise time profiles calculated from the convection-diffusion model. For the fits without an ambient electron background flux, the values for the diffusion coefficients that were consistent with the data were k_x = 1.0-2.5 × 10^21 cm^2/sec and k_y = 1.6-2.0 × 10^22 cm^2/sec. For the fits that included the ambient electron background flux, the values for the diffusion coefficients that were consistent with the data were k_x = 0.4-1.0 × 10^21 cm^2/sec and k_y = 0.8-1.3 × 10^22 cm^2/sec.