6 results for Alton (Ill.)

in CaltechTHESIS


Relevance:

20.00%

Publisher:

Abstract:

A means of assessing the effectiveness of methods used in the numerical solution of various linear ill-posed problems is outlined. Two methods, Tikhonov's method of regularization and the quasireversibility method of Lattès and Lions, are appraised from this point of view.

In the former, Tikhonov's method provides a useful means of incorporating a constraint into numerical algorithms. The analysis suggests that the approach can be generalized to embody constraints other than those employed by Tikhonov. This generalization is effected, and the general "T-method" is the result.
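
For reference, the following is a minimal sketch of classical Tikhonov regularization for a discretized ill-posed linear system, minimizing ||Ax - b||^2 + lambda^2 ||x||^2. The smoothing kernel, noise level, and penalty weight are illustrative stand-ins; the T-method of the thesis generalizes the constraint term beyond the simple norm penalty used here.

```python
# Classical Tikhonov regularization for an ill-posed linear system A x = b.
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve the regularized normal equations (A^T A + lam^2 I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

# Example: a badly conditioned discretized smoothing operator.
rng = np.random.default_rng(0)
n = 50
t = np.linspace(0, 1, n)
A = np.exp(-10 * (t[:, None] - t[None, :])**2) / n   # smoothing kernel
x_true = np.sin(2 * np.pi * t)
b = A @ x_true + 1e-4 * rng.standard_normal(n)       # noisy data

x_naive = np.linalg.solve(A, b)          # unstable: noise is amplified enormously
x_reg = tikhonov_solve(A, b, lam=1e-3)   # stable regularized estimate
```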

A T-method is used on an extended version of the backwards heat equation with spatially variable coefficients. Numerical computations based upon it are performed.

The statistical method developed by Franklin is shown to have an interpretation as a T-method. This interpretation, although somewhat loose, does explain some empirical convergence properties which are difficult to pin down via a purely statistical argument.

Relevance:

10.00%

Publisher:

Abstract:

The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems are also explored, which are motivated by power systems. The first is “flow optimization over a flow network” and the second is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.

Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix—describing marginal and conditional dependencies between brain regions—have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about the brain connectivity. Due to the electrical properties of the brain, this problem will be investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit based only on measurements. In this case, by assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when a limited number of samples are available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may find most of the circuit topology if the exact covariance matrix is well-conditioned, but it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work will be applied to the resting-state fMRI data of a number of healthy subjects.
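
As a point of reference, the sketch below applies the standard graphical lasso, as implemented in scikit-learn, to estimate a sparse inverse covariance matrix from nodal signals. The random data and penalty weight are placeholders; the thesis's modification for ill-conditioned covariance matrices is not shown.

```python
# Standard graphical lasso: sparse inverse-covariance estimation from samples.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
samples = rng.standard_normal((200, 10))   # stand-in for measured node voltages

model = GraphicalLasso(alpha=0.05)         # alpha: l1 (sparsity) penalty weight
model.fit(samples)

precision = model.precision_               # estimated inverse covariance matrix
# Nonzero off-diagonal entries are read as edges of the inferred topology.
edges = np.argwhere(np.triu(np.abs(precision) > 1e-6, k=1))
```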

Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that the network resources are shared efficiently. Despite the progress in the analysis and synthesis of the Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes, unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
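
For context, here is a minimal simulation of a classical fluid model (a primal rate-control law with penalty pricing) in which every link is assumed to see the original source rate, exactly the simplification the refined model in this abstract removes. The routing matrix, capacities, and gains are made-up values.

```python
# Classical primal congestion-control fluid model (Kelly-style), simulated
# by forward Euler. Illustrative values only.
import numpy as np

R = np.array([[1, 1, 0],       # routing matrix: R[l, s] = 1 if source s
              [0, 1, 1]])      # crosses link l (2 links, 3 sources)
c = np.array([1.0, 1.5])       # link capacities
w = np.array([1.0, 1.0, 1.0])  # source willingness-to-pay
x = np.full(3, 0.1)            # initial source rates
k, dt = 0.1, 0.01              # controller gain and time step

for _ in range(20000):
    y = R @ x                            # aggregate rate on each link
    p = np.maximum(y - c, 0.0)           # link prices (penalty pricing)
    q = R.T @ p                          # aggregate price seen by each source
    x += dt * k * (w - x * q)            # primal rate update
    x = np.maximum(x, 1e-6)
# x now approximates the equilibrium rates of the fluid model.
```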

Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an operating point of a power network that minimizes the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
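
The following toy example, written with cvxpy, illustrates only the “power over-delivery” idea on a linearized (DC) network: the nodal balance equalities are relaxed to inequalities. The network data are invented, and the thesis's relaxation concerns the full nonlinear AC problem.

```python
# "Power over-delivery" on a toy DC optimal power flow: each bus may
# receive at least its demand, i.e. balance equalities become inequalities.
import cvxpy as cp
import numpy as np

B = np.array([[ 2, -1, -1],    # susceptance (Laplacian) matrix, 3 buses
              [-1,  2, -1],
              [-1, -1,  2]], dtype=float)
d = np.array([0.0, 0.5, 0.7])  # demands
cost = np.array([1.0, 3.0, 2.0])

g = cp.Variable(3, nonneg=True)   # generation at each bus
theta = cp.Variable(3)            # voltage angles

constraints = [
    g - B @ theta >= d,   # relaxed balance: over-delivery allowed
    theta[0] == 0,        # slack bus reference angle
    g <= 1.0,             # generator limits
]
prob = cp.Problem(cp.Minimize(cost @ g), constraints)
prob.solve()
# The unrelaxed problem would use '==' in place of '>=' above.
```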

Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.
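
A toy sketch of the relaxation idea on a single line follows: the nonlinear coupling between the two end flows is replaced by a convex inequality. The quadratic loss function, box constraints, and cost coefficients are illustrative assumptions, not the GNF formulation itself.

```python
# Convex relaxation of a nonlinear flow coupling: the equality
# p21 = f(p12), with f(p) = -p + 0.1 p^2 (a made-up lossy-line model),
# is relaxed to the convex inequality p21 >= f(p12).
import cvxpy as cp

p12 = cp.Variable()   # flow entering the line at node 1
p21 = cp.Variable()   # flow entering the line at node 2

constraints = [
    p21 >= -p12 + 0.1 * cp.square(p12),  # relaxed flow coupling
    p12 >= -1, p12 <= 1,                 # box constraints on flows
    p21 >= -1, p21 <= 1,
]
# Minimize a cost that is increasing in the injections (here, the end flows);
# monotonicity is what pushes the optimum back onto the nonlinear curve.
prob = cp.Problem(cp.Minimize(2 * p12 + 3 * p21), constraints)
prob.solve()
```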

Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.

Relevance:

10.00%

Publisher:

Abstract:

Underlying matter and light are their building blocks: tiny atoms and photons. The ability to control and utilize matter-light interactions down to the elementary single-atom and single-photon level at the nano-scale opens up exciting studies at the frontiers of science, with applications in medicine, energy, and information technology. An intriguing frontier is the development of quantum networks where N >> 1 single-atom nodes are coherently linked by single photons, forming a collective quantum entity potentially capable of performing quantum computations and simulations. Here, a promising approach is to use optical cavities within the setting of cavity quantum electrodynamics (QED). However, since the first realization in 1992 by Kimble et al., proof-of-principle experiments have involved just one or two conventional cavities. To move beyond to N >> 1 nodes, in this thesis we investigate a platform born from the marriage of cavity QED and nanophotonics, where single atoms located ~100 nm from the surfaces of lithographically fabricated dielectric photonic devices can strongly interact with single photons, on a chip.

In particular, we experimentally investigate three main types of devices: microtoroidal optical cavities, optical nanofibers, and nanophotonic-crystal-based structures. With a microtoroidal cavity, we realized a robust and efficient photon router where single photons are extracted from an incident coherent state of light and redirected to a separate output with high efficiency. We achieved strong single atom-photon coupling with atoms located ~100 nm from the surface of a microtoroid, which revealed important aspects of the atom dynamics and QED of these systems, including atom-surface interaction effects. We present a method to achieve state-insensitive atom trapping near optical nanofibers, critical in nanophotonic systems where electromagnetic fields are tightly confined. We developed a system that fabricates high-quality nanofibers with high controllability, with which we experimentally demonstrate a state-insensitive atom trap. We also present initial investigations of nanophotonic-crystal-based structures as a platform for strong atom-photon interactions. The experimental advances and theoretical investigations carried out in this thesis provide a framework for, and open the door to, strong single atom-photon interactions using nanophotonics for chip-integrated quantum networks.

Relevance:

10.00%

Publisher:

Abstract:

The search for reliable proxies of past deep ocean temperature and salinity has proved difficult, thereby limiting our ability to understand the coupling of ocean circulation and climate over glacial-interglacial timescales. Previous inferences of deep ocean temperature and salinity from sediment pore fluid oxygen isotopes and chlorinity indicate that the deep ocean density structure at the Last Glacial Maximum (LGM, approximately 20,000 years BP) was set by salinity, and that the density contrast between northern and southern sourced deep waters was markedly greater than in the modern ocean. High density stratification could help explain the marked contrast in carbon isotope distribution recorded in the LGM ocean relative to that we observe today, but what made the ocean's density structure so different at the LGM? How did it evolve from one state to another? Further, given the sparsity of the LGM temperature and salinity data set, what else can we learn by increasing the spatial density of proxy records?

We investigate the cause and feasibility of a highly salinity-stratified deep ocean at the LGM, and we work to increase the amount of information we can glean about the past ocean from pore fluid profiles of oxygen isotopes and chloride. Using a coupled ocean, sea ice, and ice shelf cavity model, we test whether the deep ocean density structure at the LGM can be explained by ice-ocean interactions over the Antarctic continental shelves, and show that a large part of the LGM salinity stratification can be explained by lower ocean temperatures. In order to extract the maximum information from pore fluid profiles of oxygen isotopes and chloride, we evaluate several inverse methods for ill-posed problems and their ability to recover bottom water histories from sediment pore fluid profiles. We demonstrate that Bayesian Markov chain Monte Carlo parameter estimation techniques enable us to robustly recover the full solution space of bottom water histories, not only at the LGM but through the most recent deglaciation and the Holocene up to the present. Finally, we evaluate a non-destructive pore fluid sampling technique, Rhizon samplers, in comparison to traditional squeezing methods, and show that despite their promise, Rhizons are unlikely to be a good sampling tool for pore fluid measurements of oxygen isotopes and chloride.
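
As an illustration of the approach, here is a minimal Metropolis-Hastings sampler for an ill-posed parameter estimation problem. The exponential forward model, priors, and noise level are placeholders and bear no relation to the pore fluid diffusion model used in the thesis.

```python
# Minimal Metropolis-Hastings MCMC for recovering parameters of a
# "history" from a noisy profile. Forward model and priors are placeholders.
import numpy as np

rng = np.random.default_rng(1)

def forward(params, depths):
    """Placeholder forward model mapping history parameters to a profile."""
    a, b = params
    return a * np.exp(-depths / max(b, 1e-3))

depths = np.linspace(0, 50, 25)
observed = forward((1.2, 12.0), depths) + 0.05 * rng.standard_normal(25)
sigma = 0.05

def log_post(params):
    if params[1] <= 0:                       # flat prior: b must be positive
        return -np.inf
    r = observed - forward(params, depths)
    return -0.5 * np.sum(r**2) / sigma**2    # Gaussian likelihood

chain, current = [], np.array([1.0, 10.0])
lp = log_post(current)
for _ in range(20000):
    proposal = current + 0.1 * rng.standard_normal(2)
    lp_new = log_post(proposal)
    if np.log(rng.uniform()) < lp_new - lp:  # Metropolis accept/reject
        current, lp = proposal, lp_new
    chain.append(current.copy())
# The chain samples the full posterior over histories, not a single best fit.
```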

Relevance:

10.00%

Publisher:

Abstract:

During early stages of Drosophila development the heat shock response cannot be induced. It is reasoned that the adverse effects on cell cycle and cell growth brought about by Hsp70 induction must outweigh its beneficial aspects in the early embryo. Although the Drosophila heat shock transcription factor (dHSF) is abundant in the early embryo, it does not enter the nucleus in response to heat shock. In older embryos and in cultured cells the factor is localized within the nucleus in an apparent trimeric structure that binds DNA with high affinity. The domain responsible for nuclear localization upon stress resides between residues 390 and 420 of the dHSF. Using that domain as bait in a yeast two-hybrid system, we now report the identification and cloning of a nuclear transport protein, Drosophila karyopherin-α3 (dKap-α3). Biochemical methods demonstrate that the dKap-α3 protein binds specifically to the dHSF's nuclear localization sequence (NLS). Furthermore, the dKap-α3 protein does not associate with mutant NLSs that are not transported in vivo. Nuclear docking studies also demonstrate specific nuclear targeting of the NLS substrate by dKap-α3. Consistent with previous studies demonstrating that early Drosophila embryos are refractory to heat shock as a result of dHSF nuclear exclusion, we demonstrate that the early embryo is deficient in dKap-α3 protein through cycle 12. From cycle 13 onward the transport factor is present and the dHSF is localized within the nucleus, thus allowing the embryo to respond to heat shock.

The pair-rule gene fushi tarazu (ftz) is a well-studied zygotic segmentation gene that is necessary for the development of the even-numbered parasegments in Drosophila melanogaster. During early embryogenesis, ftz is expressed in a characteristic pattern of seven stripes, one in each of the even-numbered parasegments. With a view to understanding how ftz is transcriptionally regulated, cDNAs encoding transcription factors that bind to the zebra element of the ftz promoter have been cloned. Chapter III reports the cloning and characterization of the cDNA encoding zeb-1 (zebra element binding protein), a novel steroid receptor-like molecule that specifically binds to a key regulatory element of the ftz promoter. In transient transfection assays employing Drosophila tissue culture cells, it has been shown that zeb-1, as well as a truncated zeb-1 polypeptide (zeb480) that lacks the putative ligand binding domain, functions as a sequence-specific trans-activator of the ftz gene.

The Oct factors are members of the POU family of transcription factors and have been shown to play important roles during development in mammals. Chapter IV reports the cDNA cloning and expression of a Drosophila Oct transcription factor. Whole-mount in-situ hybridization experiments revealed spatial expression patterns during embryonic development that have not been observed for any other gene. In early embryogenesis, its transcripts are transiently expressed as a wide uniform band spanning 20-40% of the egg length, very similar to that of gap genes. This pattern progressively resolves into a series of narrower stripes, followed by expression in fourteen stripes. Subsequently, transcripts from this gene are expressed in the central nervous system and the brain. When expressed in the yeast Saccharomyces cerevisiae, this Drosophila factor functions as a strong, octamer-dependent activator of transcription. The data strongly suggest possible functions for the Oct factor in pattern formation in Drosophila that might transcend the boundaries of genetically defined segmentation genes.

Relevance:

10.00%

Publisher:

Abstract:

Several types of seismological data, including surface wave group and phase velocities, travel times from large explosions, and teleseismic travel time anomalies, have indicated that there are significant regional variations in the upper few hundred kilometers of the mantle beneath continental areas. Body wave travel times and amplitudes from large chemical and nuclear explosions are used in this study to delineate the details of these variations beneath North America.

As a preliminary step in this study, theoretical P wave travel times, apparent velocities, and amplitudes have been calculated for a number of proposed upper mantle models: those of Gutenberg, Jeffreys, Lehmann, and Lukk and Nersesov. These quantities have been calculated for both P and S waves for model CIT11GB, which is derived from surface wave dispersion data. First arrival times for all the models except that of Lukk and Nersesov are in close agreement, but the travel time curves for later arrivals are both qualitatively and quantitatively very different. For model CIT11GB, there are two large, overlapping regions of triplication of the travel time curve, produced by regions of rapid velocity increase near depths of 400 and 600 km. Throughout the distance range from 10 to 40 degrees, the later arrivals produced by these discontinuities have larger amplitudes than the first arrivals. The amplitudes of body waves, in fact, are extremely sensitive to small variations in the velocity structure, and provide a powerful tool for studying structural details.
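
To make the connection between velocity structure and travel time curves concrete, the sketch below integrates range X(p) and time T(p) for turning rays in a 1-D velocity profile. The profile, with a rapid velocity increase near 400 km depth, is an arbitrary stand-in rather than model CIT11GB, and a flat-earth geometry is assumed for simplicity.

```python
# Travel-time curve for a 1-D (flat-earth) velocity model: for each ray
# parameter p, integrate horizontal range X(p) and time T(p) down to the
# ray's turning depth. Rapid velocity increases produce triplications.
import numpy as np

z = np.linspace(0.0, 800.0, 4001)                              # depth, km
v = 8.0 + 0.002 * z + 0.5 * (1 + np.tanh((z - 400.0) / 20.0))  # velocity, km/s

def x_t_of_p(p):
    """Horizontal range X and travel time T for ray parameter p (s/km)."""
    mask = p * v < 1.0                  # depths above the ray's turning point
    if mask.all():
        return None                     # ray never turns inside the model
    zi, vi = z[mask], v[mask]
    eta = np.sqrt(1.0 - (p * vi) ** 2)
    dz = np.diff(zi)
    trap = lambda f: float(np.sum(0.5 * (f[1:] + f[:-1]) * dz))
    X = 2.0 * trap(p * vi / eta)        # crude quadrature; the integrand is
    T = 2.0 * trap(1.0 / (vi * eta))    # (integrably) singular at the turn
    return X, T

# Sweep ray parameters; retrograde branches in (X, T) mark triplications.
curve = [x_t_of_p(p) for p in np.linspace(1 / 9.5, 1 / 8.01, 300)]
```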

Most of eastern North America, including the Canadian Shield, has a Pn velocity of about 8.1 km/sec, with a nearly abrupt increase in compressional velocity of ~0.3 km/sec at a depth varying regionally between 60 and 90 km. Variations in the structure of this part of the mantle are significant even within the Canadian Shield. The low-velocity zone is a minor feature in eastern North America and is subject to pronounced regional variations. It is 30 to 50 km thick, and occurs somewhere in the depth range from 80 to 160 km. The velocity decrease is less than 0.2 km/sec.

Consideration of the absolute amplitudes indicates that the attenuation due to anelasticity is negligible for 2 Hz waves in the upper 200 km along the southeastern and southwestern margins of the Canadian Shield. For compressional waves the average Q for this region is > 3000. The amplitudes also indicate that the velocity gradient is at least 2 x 10^-3 sec^-1 both above and below the low-velocity zone, implying that the temperature gradient is < 4.8°C/km if the regions are chemically homogeneous.
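
A back-of-the-envelope check of the negligible-attenuation claim, using the standard plane-wave decay factor exp(-pi f t / Q); the path length and velocity below are rough assumed values, not figures from the study.

```python
# Anelastic amplitude decay for a plane wave: A/A0 = exp(-pi * f * t / Q).
import math

f = 2.0                       # wave frequency, Hz
Q = 3000.0                    # quality factor inferred for the region
path_km, v_km_s = 200.0, 8.0  # ~200 km travelled at ~8 km/s (assumed)
t = path_km / v_km_s          # travel time, s

decay = math.exp(-math.pi * f * t / Q)
# decay ~ 0.95: only ~5% amplitude loss, i.e. attenuation is negligible.
```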

In western North America, the low-velocity zone is a pronounced feature, extending to the base of the crust and having minimum velocities of 7.7 to 7.8 km/sec. Beneath the Colorado Plateau and Southern Rocky Mountains provinces, there is a rapid velocity increase of about 0.3 km/sec, similar to that observed in eastern North America, but near a depth of 100 km.

Complicated travel time curves observed on profiles with stations in both eastern and western North America can be explained in detail by a model taking into account the lateral variations in the structure of the low-velocity zone. These variations involve primarily the velocity within the zone and the depth to the top of the zone; the depth to the bottom is, for both regions, between 140 and 160 km.

The depth to the transition zone near 400 km also varies regionally, by about 30-40 km. These differences imply variations of 250 °C in the temperature or 6 % in the iron content of the mantle, if the phase transformation of olivine to the spinel structure is assumed responsible. The structural variations at this depth are not correlated with those at shallower depths, and follow no obvious simple pattern.

The computer programs used in this study are described in the Appendices. The program TTINV (Appendix IV) fits spherically symmetric earth models to observed travel time data. The method, described in Appendix III, resembles conventional least-squares fitting, using partial derivatives of the travel time with respect to the model parameters to perturb an initial model. The usual ill-conditioned nature of least-squares techniques is avoided by minimizing both the travel time residuals and the model perturbations.
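
The damping idea can be sketched in a few lines: the model perturbation is found by minimizing both the residuals and the perturbation itself, i.e. ||G dm - r||^2 + mu^2 ||dm||^2. The matrix of partial derivatives, the residuals, and the damping weight below are illustrative stand-ins for TTINV's actual inputs.

```python
# Damped least squares: perturb a starting model while penalizing the
# size of the perturbation, which stabilizes an ill-conditioned inversion.
import numpy as np

rng = np.random.default_rng(2)
G = rng.standard_normal((30, 10))      # d(travel time)/d(model parameter)
G[:, 5:] *= 1e-4                       # poorly constrained parameters make
r = rng.standard_normal(30)            # the normal equations ill-conditioned
mu = 0.1                               # damping weight

dm = np.linalg.solve(G.T @ G + mu**2 * np.eye(10), G.T @ r)
# dm is added to the starting model and the linearized step is iterated.
```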

Spherically symmetric earth models, however, have been found inadequate to explain most of the observed travel times in this study. TVT4, a computer program that performs ray theory calculations for a laterally inhomogeneous earth model, is described in Appendix II. Appendix I gives a derivation of seismic ray theory for an arbitrarily inhomogeneous earth model.