8 results for Exception

in CaltechTHESIS


Relevance: 10.00%

Abstract:

This thesis discusses various methods for learning and optimization in adaptive systems. Overall, it emphasizes the relationship between optimization, learning, and adaptive systems; and it illustrates the influence of underlying hardware upon the construction of efficient algorithms for learning and optimization. Chapter 1 provides a summary and an overview.

Chapter 2 discusses a method for using feed-forward neural networks to filter the noise out of noise-corrupted signals. The networks use back-propagation learning, but they use it in a way that qualifies as unsupervised learning. The networks adapt based only on the raw input data; there are no external teachers providing information on correct operation during training. The chapter contains an analysis of the learning and develops a simple expression that, based only on the geometry of the network, predicts performance.

Chapter 3 explains a simple model of the piriform cortex, an area in the brain involved in the processing of olfactory information. The model was used to explore the possible effect of acetylcholine on learning and on odor classification. According to the model, the piriform cortex can classify odors better when acetylcholine is present during learning but not present during recall. This is interesting since it suggests that learning and recall might be separate neurochemical modes (corresponding to whether or not acetylcholine is present). When acetylcholine is turned off at all times, even during learning, the model exhibits behavior somewhat similar to Alzheimer's disease, a disease associated with the degeneration of cells that distribute acetylcholine.

Chapters 4, 5, and 6 discuss algorithms appropriate for adaptive systems implemented entirely in analog hardware. The algorithms inject noise into the systems and correlate the noise with the outputs of the systems. This allows them to estimate gradients and to implement noisy versions of gradient descent, without having to calculate gradients explicitly. The methods require only noise generators, adders, multipliers, integrators, and differentiators; and the number of devices needed scales linearly with the number of adjustable parameters in the adaptive systems. With the exception of one global signal, the algorithms require only local information exchange.
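The noise-correlation idea can be sketched in a few lines. The toy example below (a quadratic error surface and parameter values chosen purely for illustration, not the thesis's hardware algorithms) perturbs the parameters with zero-mean noise, correlates the observed change in error with the injected noise to form an unbiased gradient estimate, and descends it without ever computing a gradient explicitly:

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.array([1.0, -2.0, 0.5])   # optimum of the toy error surface

def loss(w):
    # stands in for the measured output error of the adaptive system
    return np.sum((w - target) ** 2)

w = np.zeros(3)       # adjustable parameters
sigma = 0.01          # injected-noise amplitude
eta = 0.05            # learning rate

for _ in range(2000):
    noise = rng.normal(0.0, sigma, size=w.shape)  # one noise generator per parameter
    delta = loss(w + noise) - loss(w)             # observed change in system output
    grad_est = (delta / sigma**2) * noise         # correlate noise with output change
    w -= eta * grad_est                           # noisy gradient descent step

print(np.round(w, 2))   # parameters settle near `target`
```

As in the chapters' analog setting, each parameter needs only its own noise source and the globally broadcast error change, so the device count grows linearly with the number of parameters.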

Relevance: 10.00%

Abstract:

The ability to regulate gene expression is of central importance for the adaptability of living organisms to changes in their internal and external environment. At the transcriptional level, binding of transcription factors (TFs) in the vicinity of promoters can modulate the rate at which transcripts are produced, and as such play an important role in gene regulation. TFs with regulatory action at multiple promoters are the rule rather than the exception, with examples ranging from TFs like the cAMP receptor protein (CRP) in E. coli that regulates hundreds of different genes, to situations involving multiple copies of the same gene, such as on plasmids, or viral DNA. When the number of TFs heavily exceeds the number of binding sites, TF binding to each promoter can be regarded as independent. However, when the number of TF molecules is comparable to the number of binding sites, TF titration will result in coupling ("entanglement") between transcription of different genes. The last few decades have seen rapid advances in our ability to quantitatively measure such effects, which calls for biophysical models to explain these data. Here we develop a statistical mechanical model which takes the TF titration effect into account and use it to predict both the level of gene expression and the resulting correlation in transcription rates for a general set of promoters. To test these predictions experimentally, we create genetic constructs with known TF copy number, binding site affinities, and gene copy number; hence avoiding the need to use free fit parameters. Our results clearly demonstrate the TF titration effect and show that the statistical mechanical model can accurately predict the fold change in gene expression for the studied cases. We also generalize these experimental efforts to cover systems with multiple different genes, using the method of mRNA fluorescence in situ hybridization (FISH).
Interestingly, we can use the TF titration effect as a tool to measure the plasmid copy number at different points in the cell cycle, as well as the plasmid copy number variance. Finally, we investigate the strategies of transcriptional regulation used in a real organism by analyzing the thousands of known regulatory interactions in E. coli. We introduce a "random promoter architecture model" to identify overrepresented regulatory strategies, such as TF pairs which coregulate the same genes more frequently than would be expected by chance, indicating a related biological function. Furthermore, we investigate whether promoter architecture has a systematic effect on gene expression by linking the regulatory data of E. coli to genome-wide expression censuses.
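A minimal sketch of the titration idea (an illustrative toy model, not the thesis's fitted model): distribute a fixed number of TFs between specific binding sites and a nonspecific genomic reservoir, weight each arrangement by its Boltzmann factor, and compute the mean occupancy. Adding competing specific sites at fixed TF copy number pulls occupancy down, which is the coupling between promoters described above. All parameter values below are assumptions for illustration.

```python
import math

def promoter_occupancy(n_tf, n_sites, delta_eps, n_ns=5_000_000):
    """Mean fraction of specific sites bound in a toy titration model.

    n_tf      : total TF copy number
    n_sites   : number of identical specific binding sites
    delta_eps : specific binding energy relative to nonspecific sites (in kT)
    n_ns      : nonspecific genomic sites acting as a TF reservoir
    """
    ms = range(min(n_tf, n_sites) + 1)
    # Boltzmann-weighted count of arrangements with m TFs on specific sites
    weights = [math.comb(n_sites, m) * math.comb(n_ns, n_tf - m)
               / math.comb(n_ns, n_tf) * math.exp(-m * delta_eps)
               for m in ms]
    z = sum(weights)                       # partition function
    return sum(m * w for m, w in zip(ms, weights)) / (n_sites * z)

occ_single = promoter_occupancy(10, 1, -15.0)     # sites scarce: high occupancy
occ_titrated = promoter_occupancy(10, 50, -15.0)  # sites outnumber TFs
print(occ_single, occ_titrated)
```

With 50 sites competing for 10 TFs, per-site occupancy drops sharply even at strong binding, illustrating why transcription rates of co-regulated genes become correlated.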

Relevance: 10.00%

Abstract:

The Low Energy Telescopes on the Voyager spacecraft are used to measure the elemental composition (2 ≤ Z ≤ 28) and energy spectra (5 to 15 MeV/nucleon) of solar energetic particles (SEPs) in seven large flare events. Four flare events are selected which have SEP abundance ratios approximately independent of energy/nucleon. The abundances for these events are compared from flare to flare and are compared to solar abundances from other sources: spectroscopy of the photosphere and corona, and solar wind measurements.

The selected SEP composition results may be described by an average composition plus a systematic flare-to-flare deviation about the average. For each of the four events, the ratios of the SEP abundances to the four-flare average SEP abundances are approximately monotonic functions of nuclear charge Z in the range 6 ≤ Z ≤ 28. An exception to this Z-dependent trend occurs for He, whose abundance relative to Si is nearly the same in all four events.

The four-flare average SEP composition is significantly different from the solar composition determined by photospheric spectroscopy: The elements C, N and O are depleted in SEPs by a factor of about five relative to the elements Na, Mg, Al, Si, Ca, Cr, Fe and Ni. For some elemental abundance ratios (e.g. Mg/O), the difference between SEP and photospheric results is persistent from flare to flare and is apparently not due to a systematic difference in SEP energy/nucleon spectra between the elements, nor to propagation effects which would result in a time-dependent abundance ratio in individual flare events.

The four-flare average SEP composition is in agreement with solar wind abundance results and with a number of recent coronal abundance measurements. The evidence for a common depletion of oxygen in SEPs, the corona and the solar wind relative to the photosphere suggests that the SEPs originate in the corona and that both the SEPs and solar wind sample a coronal composition which is significantly and persistently different from that of the photosphere.

Relevance: 10.00%

Abstract:

The following work explores the processes individuals utilize when making multi-attribute choices. With the exception of extremely simple or familiar choices, most decisions we face can be classified as multi-attribute choices. In order to evaluate and make choices in such an environment, we must be able to estimate and weight the particular attributes of an option. Hence, better understanding the mechanisms involved in this process is an important step for economists and psychologists. For example, when choosing between two meals that differ in taste and nutrition, what are the mechanisms that allow us to estimate and then weight attributes when constructing value? Furthermore, how can these mechanisms be influenced by variables such as attention or common physiological states, like hunger?

In order to investigate these and similar questions, we use a combination of choice and attentional data, where the attentional data was collected by recording eye movements as individuals made decisions. Chapter 1 designs and tests a neuroeconomic model of multi-attribute choice that makes predictions about choices, response time, and how these variables are correlated with attention. Chapter 2 applies the ideas in this model to intertemporal decision-making, and finds that attention causally affects discount rates. Chapter 3 explores how hunger, a common physiological state, alters the mechanisms we utilize as we make simple decisions about foods.
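One widely used formalization of how attention can enter value comparison is an attentional drift-diffusion process, in which the momentarily fixated item contributes to the evidence at full weight while the unattended item is discounted. The simulation below is a generic sketch of that idea with illustrative parameters, not the specific model of Chapter 1:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_addm(v_left, v_right, theta=0.3, drift=0.002, noise=0.02,
                  barrier=1.0, fix_dur=300, max_t=20_000):
    """One trial of an attention-weighted diffusion process.

    While an item is fixated its value enters the drift at full weight and
    the unattended item's value is discounted by theta. Returns (choice, rt).
    """
    x, t = 0.0, 0
    look_left = rng.random() < 0.5              # random initial fixation
    while t < max_t:
        for _ in range(fix_dur):                # one fixation of fix_dur ms
            mu = drift * ((v_left - theta * v_right) if look_left
                          else (theta * v_left - v_right))
            x += mu + rng.normal(0.0, noise)    # accumulate noisy evidence
            t += 1
            if x >= barrier:
                return "left", t
            if x <= -barrier:
                return "right", t
        look_left = not look_left               # alternate fixations
    return ("left" if x > 0 else "right"), t

choices = [simulate_addm(5, 1)[0] for _ in range(200)]
p_left = choices.count("left") / 200
print(p_left)   # the higher-valued item is chosen on most trials
```

Because the drift depends on which item is fixated, models of this family predict correlations between gaze, choice, and response time, which is what makes joint choice and eye-tracking data informative.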

Relevance: 10.00%

Abstract:

The topological phases of matter have been a major part of condensed matter physics research since the discovery of the quantum Hall effect in the 1980s. Recently, much of this research has focused on the study of systems of free fermions, such as the integer quantum Hall effect, quantum spin Hall effect, and topological insulator. Though these free fermion systems can play host to a variety of interesting phenomena, the physics of interacting topological phases is even richer. Unfortunately, there is a shortage of theoretical tools that can be used to approach interacting problems. In this thesis I will discuss progress in using two different numerical techniques to study topological phases.

Recently much research in topological phases has focused on phases made up of bosons. Unlike fermions, free bosons form a condensate and so interactions are vital if the bosons are to realize a topological phase. Since these phases are difficult to study, much of our understanding comes from exactly solvable models, such as Kitaev's toric code, as well as Levin-Wen and Walker-Wang models. We may want to study systems for which such exactly solvable models are not available. In this thesis I present a series of models which are not solvable exactly, but which can be studied in sign-free Monte Carlo simulations. The models work by binding charges to point topological defects. They can be used to realize bosonic interacting versions of the quantum Hall effect in 2D and topological insulator in 3D. Effective field theories of "integer" (non-fractionalized) versions of these phases were available in the literature, but our models also allow for the construction of fractional phases. We can measure a number of properties of the bulk and surface of these phases.

Few interacting topological phases have been realized experimentally, but there is one very important exception: the fractional quantum Hall effect (FQHE). Though the fractional quantum Hall effect was discovered over 30 years ago, it can still produce novel phenomena. Of much recent interest is the existence of non-Abelian anyons in FQHE systems. Though it is possible to construct wave functions that realize such particles, whether these wave functions are the ground state is a difficult quantitative question that must be answered numerically. In this thesis I describe progress using a density-matrix renormalization group algorithm to study a bilayer system thought to host non-Abelian anyons. We find phase diagrams in terms of experimentally relevant parameters, and also find evidence for a non-Abelian phase known as the "interlayer Pfaffian".

Relevance: 10.00%

Abstract:

We have measured sputtering yields and angular distributions of sputtered atoms from both the solid and liquid phases of gallium, indium, and the gallium-indium eutectic alloy. This was done by Rutherford backscattering analysis of graphite collector foils. The solid eutectic target shows a predominance of indium crystallites on its surface which have to be sputtered away before the composition of the sputtered atoms equals the bulk target composition. The size of the crystallites depends upon the conditions under which the alloy is frozen. The sputtering of the liquid eutectic alloy by 15 keV Ar+ results in a ratio of indium to gallium sputtering yields which is 28 times greater than would be expected from the target stoichiometry. Furthermore, the angular distribution of gallium is much more sharply peaked about the normal to the target surface than the indium distribution. When the incident Ar+ energy is increased to 25 keV, the gallium distribution broadens to the same shape as the indium distribution. With the exception of the sharp gallium distribution taken from the liquid eutectic at 15 keV, all angular distributions from liquid targets fit a cos² θ function. An ion-scattering-spectroscopy analysis of the liquid eutectic alloy reveals a surface layer of almost pure indium. A thermodynamic explanation for this highly segregated layer is discussed. The liquid eutectic alloy provides us with a unique target system which allows us to estimate the fraction of sputtered material which comes from the first monolayer of the surface.
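The cos² θ fits mentioned above amount to estimating an exponent n in yield ∝ cosⁿ θ. A brief sketch on synthetic angular data (values are illustrative, not the measured distributions) recovers the exponent by linear regression in log-log space:

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic angular data following a cos^2(theta) distribution, 2% noise
theta = np.deg2rad(np.arange(5, 81, 5))
yields = np.cos(theta) ** 2 * (1 + rng.normal(0.0, 0.02, theta.size))

# fit yield = A * cos^n(theta) via linear regression on the logarithms
n, log_a = np.polyfit(np.log(np.cos(theta)), np.log(yields), 1)
print(round(n, 2))   # fitted exponent close to 2
```

Deviations from n = 2, like the sharply peaked gallium distribution at 15 keV, would show up directly as a larger fitted exponent.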

Relevance: 10.00%

Abstract:

The Earth's largest geoid anomalies occur at the lowest spherical harmonic degrees, or longest wavelengths, and are primarily the result of mantle convection. Thermal density contrasts due to convection are partially compensated by boundary deformations due to viscous flow whose effects must be included in order to obtain a dynamically consistent model for the geoid. These deformations occur rapidly with respect to the timescale for convection, and we have analytically calculated geoid response kernels for steady-state, viscous, incompressible, self-gravitating, layered Earth models which include the deformation of boundaries due to internal loads. Both the sign and magnitude of geoid anomalies depend strongly upon the viscosity structure of the mantle as well as the possible presence of chemical layering.

Correlations of various global geophysical data sets with the observed geoid can be used to construct theoretical geoid models which constrain the dynamics of mantle convection. Surface features such as topography and plate velocities are not obviously related to the low-degree geoid, with the exception of subduction zones which are characterized by geoid highs (degrees 4-9). Recent models for seismic heterogeneity in the mantle provide additional constraints, and much of the low-degree (2-3) geoid can be attributed to seismically inferred density anomalies in the lower mantle. The Earth's largest geoid highs are underlain by low density material in the lower mantle, thus requiring compensating deformations of the Earth's surface. A dynamical model for whole mantle convection with a low viscosity upper mantle can explain these observations and successfully predicts more than 80% of the observed geoid variance.

Temperature variations associated with density anomalies in the mantle cause lateral viscosity variations whose effects are not included in the analytical models. However, perturbation theory and numerical tests show that broad-scale lateral viscosity variations are much less important than radial variations; in this respect, geoid models, which depend upon steady-state surface deformations, may provide more reliable constraints on mantle structure than inferences from transient phenomena such as postglacial rebound. Stronger, smaller-scale viscosity variations associated with mantle plumes and subducting slabs may be more important. On the basis of numerical modelling of low viscosity plumes, we conclude that the global association of geoid highs (after slab effects are removed) with hotspots and, perhaps, mantle plumes, is the result of hot, upwelling material in the lower mantle; this conclusion does not depend strongly upon plume rheology. The global distribution of hotspots and the dominant, low-degree geoid highs may correspond to a dominant mode of convection stabilized by the ancient Pangean continental assemblage.

Relevance: 10.00%

Abstract:

The wave-theoretical analysis of acoustic and elastic waves refracted by a spherical boundary across which both velocity and density increase abruptly and thence either increase or decrease continuously with depth is formulated in terms of the general problem of waves generated at a steady point source and scattered by a radially heterogeneous spherical body. A displacement potential representation is used for the elastic problem that results in high frequency decoupling of P-SV motion in a spherically symmetric, radially heterogeneous medium. Through the application of an earth-flattening transformation on the radial solution and the Watson transform on the sum over eigenfunctions, the solution to the spherical problem for high frequencies is expressed as a Weyl integral for the corresponding half-space problem in which the effect of boundary curvature maps into an effective positive velocity gradient. The results of both analytical and numerical evaluation of this integral can be summarized as follows for body waves in the crust and upper mantle:

1) In the special case of a critical velocity gradient (a gradient equal and opposite to the effective curvature gradient), the critically refracted wave reduces to the classical head wave for flat, homogeneous layers.

2) For gradients more negative than critical, the amplitude of the critically refracted wave decays more rapidly with distance than the classical head wave.

3) For positive, null, and gradients less negative than critical, the amplitude of the critically refracted wave decays less rapidly with distance than the classical head wave, and at sufficiently large distances, the refracted wave can be adequately described in terms of ray-theoretical diving waves. At intermediate distances from the critical point, the spectral amplitude of the refracted wave is scalloped due to multiple diving wave interference.

These theoretical results applied to published amplitude data for P-waves refracted by the major crustal and upper mantle horizons (the Pg, P*, and Pn travel-time branches) suggest that the 'granitic' upper crust, the 'basaltic' lower crust, and the mantle lid all have negative or near-critical velocity gradients in the tectonically active western United States. On the other hand, the corresponding horizons in the stable eastern United States appear to have null or slightly positive velocity gradients. The distribution of negative and positive velocity gradients correlates closely with high heat flow in tectonic regions and normal heat flow in stable regions. The velocity gradients inferred from the amplitude data are generally consistent with those inferred from ultrasonic measurements of the effects of temperature and pressure on crustal and mantle rocks and probable geothermal gradients. A notable exception is the strong positive velocity gradient in the mantle lid beneath the eastern United States (2 × 10⁻³ sec⁻¹), which appears to require a compositional gradient to counter the effect of even a small geothermal gradient.

New seismic-refraction data were recorded along an 800-km profile extending due south from the Canadian border across the Columbia Plateau into eastern Oregon. The source for the seismic waves was a series of 20 high-energy chemical explosions detonated by the Canadian government in Greenbush Lake, British Columbia. The first arrivals recorded along this profile are on the Pn travel-time branch. In northern Washington and central Oregon their travel time is described by T = Δ/8.0 + 7.7 sec, but in the Columbia Plateau the Pn arrivals are as much as 0.9 sec early with respect to this line. An interpretation of these Pn arrivals together with later crustal arrivals suggests that the crust under the Columbia Plateau is thinner by about 10 km and has a higher average P-wave velocity than the 35-km-thick, 6.2-km/sec crust under the granitic-metamorphic terrain of northern Washington. A tentative interpretation of later arrivals recorded beyond 500 km from the shots suggests that a thin 8.4-km/sec horizon may be present in the upper mantle beneath the Columbia Plateau and that this horizon may form the lid to a pronounced low-velocity zone extending to a depth of about 140 km.
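For concreteness, the Pn travel-time line quoted above, T = Δ/8.0 + 7.7 sec, can be evaluated directly; the Columbia Plateau arrivals are then up to 0.9 sec earlier than these predictions:

```python
def pn_time(delta_km, v_app=8.0, intercept=7.7):
    """Pn first-arrival time in seconds from the line T = delta/8.0 + 7.7 sec."""
    return delta_km / v_app + intercept

# predicted first-arrival times at a few ranges along the profile
for d_km in (200, 400, 800):
    print(d_km, "km:", round(pn_time(d_km), 1), "s")

# a Columbia Plateau arrival up to 0.9 s early relative to the line
print("early arrival at 400 km:", round(pn_time(400) - 0.9, 1), "s")
```

The 8.0 in the denominator is the apparent upper-mantle velocity in km/sec; a thinner, faster crust shifts arrivals earlier, which is the sense of the 0.9-sec residual reported above.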