10 results for repertory grid technique

in CaltechTHESIS


Relevance:

30.00%

Publisher:

Abstract:

The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems are also explored, both motivated by power systems. The first is “flow optimization over a flow network” and the second is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.

Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix (describing marginal and conditional dependencies between brain regions, respectively) have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about brain connectivity. Due to the electrical properties of the brain, this problem will be investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit from measurements alone. In this case, by assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when only a limited number of samples are available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may recover most of the circuit topology if the exact covariance matrix is well-conditioned, but it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work will be applied to the resting-state fMRI data of a number of healthy subjects.
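
The graphical-lasso step described above can be sketched as follows. This is a generic illustration using scikit-learn, not the modified algorithm from the thesis; the 4-node chain "circuit", sample count, and penalty value are assumptions chosen for the example.

```python
# Sketch: recover a sparse conditional-dependence graph ("circuit topology")
# from nodal signals via the graphical lasso. The 4-node chain topology and
# penalty alpha are illustrative assumptions, not values from the thesis.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Ground-truth precision (inverse covariance) of a 4-node chain: 1-2-3-4.
P = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  2.]])
cov = np.linalg.inv(P)

# Simulated nodal measurements (e.g., node voltages).
X = rng.multivariate_normal(np.zeros(4), cov, size=5000)

# Sparse inverse-covariance estimate; nonzero entries indicate edges.
model = GraphicalLasso(alpha=0.05).fit(X)
P_hat = model.precision_

# The chain edges (0,1), (1,2), (2,3) should carry much larger partial
# correlations than a non-edge such as (0,3).
print(abs(P_hat[0, 1]) > 5 * abs(P_hat[0, 3]))
```

Ill-conditioning of the exact covariance matrix degrades this estimate, which is what motivates the modification proposed in the thesis.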

Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that the network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes, unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
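
The fluid-model viewpoint can be illustrated with a minimal primal congestion controller on a single link. The price function and all constants below are hypothetical, and the buffering (queueing) effects that the thesis adds to such models are deliberately not modeled here.

```python
# Minimal fluid model of primal congestion control on one link.
# Illustrative only: the price function p(y) = y/c and all constants are
# assumptions, and the buffering effects studied in the thesis are NOT
# modeled, which is exactly the gap the more accurate model addresses.
c = 10.0                  # link capacity
w = 2.5                   # source's willingness-to-pay
k = 0.5                   # controller gain
price = lambda y: y / c   # hypothetical congestion price at load y

x, dt = 1.0, 0.01         # initial rate, Euler step
for _ in range(20000):
    # Primal rate update: dx/dt = k * (w - x * p(x))
    x += dt * k * (w - x * price(x))

# The equilibrium solves x * p(x) = w, i.e. x = sqrt(w * c) = 5.0
print(round(x, 3))
```

Stability of such equilibria is what the common pricing schemes can lose once queueing is modeled, per the result above.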

Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an operating point of a power network that minimizes the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.

Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is the OPF problem. The results of this work on GNF prove that the relaxation of the power balance equations (i.e., power over-delivery) is not needed in practice under a very mild angle assumption.
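
The relaxation idea can be sketched on a single line: the nonlinear coupling between the two end flows is relaxed to an inequality, which turns out to be tight at the optimum. The quadratic loss model, cost function, and demand below are hypothetical stand-ins for the monotone/convex functions assumed in the thesis.

```python
# Sketch of a GNF-style convex relaxation on one line. The equality
# y = x - loss(x) relating sending-end flow x to receiving-end flow y is
# relaxed to y <= x - loss(x); at the optimum the relaxation is tight.
# The quadratic loss, cost, and demand values are illustrative assumptions.
from scipy.optimize import minimize

alpha, demand = 0.05, 1.0        # loss coefficient, load at receiving node

cost = lambda z: z[0] ** 2       # generation cost at the sending node
cons = [
    # Relaxed flow coupling: y <= x - alpha * x^2
    {'type': 'ineq', 'fun': lambda z: (z[0] - alpha * z[0]**2) - z[1]},
    # Load must be served: y >= demand
    {'type': 'ineq', 'fun': lambda z: z[1] - demand},
]
res = minimize(cost, x0=[2.0, 1.0], constraints=cons)
x, y = res.x

# At the optimum the relaxed coupling holds with equality.
print(round(x, 4), round(y, 4))
```

Tightness of the relaxed constraint at optimality is the mechanism by which the convex relaxation recovers the optimal injections of the original non-convex problem.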

Generalized Weighted Graphs: Motivated by power network optimization, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real- or complex-valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.

Relevance:

20.00%

Publisher:

Abstract:

In Part I a class of linear boundary value problems is considered which is a simple model of boundary layer theory. The effect of zeros and singularities of the coefficients of the equations at the point where the boundary layer occurs is considered. The usual boundary layer techniques are still applicable in some cases and are used to derive uniform asymptotic expansions. In other cases it is shown that the inner and outer expansions do not overlap due to the presence of a turning point outside the boundary layer. The region near the turning point is described by a two-variable expansion. In these cases a related initial value problem is solved and then used to show formally that for the boundary value problem either a solution exists, except for a discrete set of eigenvalues, whose asymptotic behaviour is found, or the solution is non-unique. A proof is given of the validity of the two-variable expansion; in a special case this proof also demonstrates the validity of the inner and outer expansions.

Nonlinear dispersive wave equations which are governed by variational principles are considered in Part II. It is shown that the averaged Lagrangian variational principle is in fact exact. This result is used to construct perturbation schemes that enable higher-order terms in the equations for the slowly varying quantities to be calculated. A simple scheme applicable to linear or near-linear equations is first derived. The specific form of the first-order correction terms is derived for several examples. The stability of constant solutions to these equations is considered and it is shown that the correction terms lead to the instability cut-off found by Benjamin. A general stability criterion is given which explicitly demonstrates the conditions under which this cut-off occurs. The corrected set of equations is a set of nonlinear dispersive equations, and their stationary solutions are investigated. A more sophisticated scheme is developed for fully nonlinear equations by using an extension of the Hamiltonian formalism recently introduced by Whitham. Finally, the averaged Lagrangian technique is extended to treat slowly varying multiply-periodic solutions. The adiabatic invariants for a separable mechanical system are derived by this method.
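
For reference, Whitham's averaged variational principle that Part II builds on can be stated compactly; the notation below is the standard one for a slowly varying wavetrain and is a summary of the published theory, not a formula taken from the thesis.

```latex
% Slowly varying wavetrain with phase \theta, amplitude a,
% frequency \omega = -\theta_t and wavenumber k = \theta_x:
\delta \iint \bar{L}(\omega, k, a)\,dx\,dt = 0 .
% Variation with respect to a gives the dispersion relation,
% variation with respect to \theta gives conservation of wave action:
\frac{\partial \bar{L}}{\partial a} = 0,
\qquad
\frac{\partial}{\partial t}\!\left(\frac{\partial \bar{L}}{\partial \omega}\right)
- \frac{\partial}{\partial x}\!\left(\frac{\partial \bar{L}}{\partial k}\right) = 0 .
```

The higher-order correction schemes described in the abstract refine the averaged Lagrangian $\bar{L}$ beyond this leading-order form.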

Relevance:

20.00%

Publisher:

Abstract:

Light microscopy has been one of the most common tools in biological research because of its high resolution and the non-invasive nature of light. Due to its high sensitivity and specificity, fluorescence is one of the most important readout modes of light microscopy. This thesis presents two new fluorescence microscopic imaging techniques: fluorescence optofluidic microscopy and fluorescence Talbot microscopy. The designs of the two systems are fundamentally different from that of conventional microscopy, which makes compact and portable devices possible. The components of the devices are suitable for mass production, making the microscopic imaging system more affordable for biological research and clinical diagnostics.

Fluorescence optofluidic microscopy (FOFM) is capable of imaging fluorescent samples in fluid media. The FOFM employs an array of Fresnel zone plates (FZP) to generate an array of focused light spots within a microfluidic channel. As a sample flows through the channel and across the array of focused light spots, a filter-coated CMOS sensor collects the fluorescence emissions. The collected data can then be processed to render a fluorescence microscopic image. The resolution, which is determined by the focused light spot size, is experimentally measured to be 0.65 μm.

Fluorescence Talbot microscopy (FTM) is a fluorescence chip-scale microscopy technique that enables large field-of-view (FOV) and high-resolution imaging. The FTM method utilizes the Talbot effect to project a grid of focused excitation light spots onto the sample. The sample is placed on a filter-coated CMOS sensor chip. The fluorescence emissions associated with each focal spot are collected by the sensor chip and are composed into a sparsely sampled fluorescence image. By raster-scanning the Talbot focal spot grid across the sample and collecting a sequence of sparse images, a filled-in high-resolution fluorescence image can be reconstructed. In contrast to a conventional microscope, the collection efficiency, resolution, and FOV are not tied to one another in this technique. The FOV of FTM is directly scalable. Our FTM prototype has demonstrated a resolution of 1.2 μm and a collection efficiency equivalent to a conventional microscope objective with a 0.70 N.A. The FOV is 3.9 mm × 3.5 mm, which is 100 times larger than that of a 20X/0.40 N.A. conventional microscope objective. Due to its large FOV, high collection efficiency, compactness, and its potential for integration with other on-chip devices, FTM is suitable for diverse applications, such as point-of-care diagnostics, large-scale functional screens, and long-term automated imaging.
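
The raster-scan image composition in FTM amounts to interleaving sparsely sampled frames. The NumPy sketch below uses a hypothetical 4-pixel focal-spot pitch and a random stand-in object, and ignores the Talbot optics entirely; it only demonstrates the sparse-to-dense composition step.

```python
# Sketch of FTM-style image composition: a grid of focal spots with pitch p
# samples the object sparsely; raster-scanning the grid through all p*p
# offsets and interleaving the sparse frames fills in the full image.
# The 16x16 object and pitch of 4 are arbitrary illustrative choices;
# the Talbot self-imaging optics are not modeled here.
import numpy as np

rng = np.random.default_rng(1)
obj = rng.random((16, 16))      # stand-in fluorescent object
p = 4                           # focal-spot pitch in pixels

recon = np.zeros_like(obj)
for dy in range(p):             # raster scan: shift the spot grid
    for dx in range(p):
        sparse_frame = obj[dy::p, dx::p]     # emissions under one offset
        recon[dy::p, dx::p] = sparse_frame   # compose into the full image

print(np.allclose(recon, obj))
```

Because each pixel is illuminated by exactly one focal-spot offset, the p*p sparse frames tile the full image without gaps or overlap, which is why the FOV scales directly with the spot-grid extent.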

Relevance:

20.00%

Publisher:

Abstract:

Partial differential equations (PDEs) with multiscale coefficients are very difficult to solve due to the wide range of scales in the solutions. In this thesis, we propose some efficient numerical methods for both deterministic and stochastic PDEs based on the model reduction technique.

For the deterministic PDEs, the main purpose of our method is to derive an effective equation for the multiscale problem. An essential ingredient is to decompose the harmonic coordinate into a smooth part and a highly oscillatory part whose magnitude is small. Such a decomposition plays a key role in our construction of the effective equation. We show that the solution to the effective equation is smooth, and can be resolved on a regular coarse mesh grid. Furthermore, we provide error analysis and show that the solution to the effective equation plus a correction term is close to the original multiscale solution.

For the stochastic PDEs, we propose the model-reduction-based data-driven stochastic method and multilevel Monte Carlo method. In the multi-query setting, and under the assumption that the ratio of the smallest scale to the largest scale is not too small, we propose the multiscale data-driven stochastic method. We construct a data-driven stochastic basis and solve the coupled deterministic PDEs to obtain the solutions. For more challenging problems, we propose the multiscale multilevel Monte Carlo method. We apply the multilevel scheme to the effective equations and assemble the stiffness matrices efficiently on each coarse mesh grid. In both methods, the Karhunen-Loève (KL) expansion plays an important role in extracting the main parts of some stochastic quantities.
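
The KL step can be sketched numerically: discretize a covariance kernel, take its leading eigenpairs, and use them as a low-dimensional stochastic basis. The exponential kernel, grid, and truncation level below are illustrative assumptions, not choices from the thesis.

```python
# Sketch of a truncated Karhunen-Loeve (KL) expansion: the leading
# eigenpairs of a discretized covariance kernel capture most of the
# stochastic variability, so a few modes suffice as a stochastic basis.
# The exponential kernel, grid size, and truncation level are assumptions.
import numpy as np

n = 200
s = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(s[:, None] - s[None, :]) / 0.5)   # covariance kernel

# Symmetric eigendecomposition; numpy returns eigenvalues in ascending
# order, so reverse to put the dominant KL modes first.
lam, phi = np.linalg.eigh(C)
lam, phi = lam[::-1], phi[:, ::-1]

m = 10                                                # truncation level
C_m = (phi[:, :m] * lam[:m]) @ phi[:, :m].T           # rank-m approximation

rel_err = np.linalg.norm(C - C_m) / np.linalg.norm(C)
print(rel_err < 0.05)
```

The rapid eigenvalue decay of smooth kernels is what makes a small number of KL modes sufficient, and hence what keeps the coupled deterministic systems in the data-driven method small.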

For both the deterministic and stochastic PDEs, numerical results are presented to demonstrate the accuracy and robustness of the methods. We also demonstrate the reduction in computational cost in the numerical examples.

Relevance:

20.00%

Publisher:

Abstract:

Microwave noise emission at the harmonics of the electron cyclotron frequency from the magnetized plasma column of a Penning discharge is investigated experimentally. The harmonic emission spectrum is observed using oxygen gas in a variety of discharge configurations. It is found that grid stabilization of the plasma column has very little effect on the emission spectrum. Measurements of the shape and location of the harmonic emission lines are described in detail. On the basis of a microwave interferometer measurement of the electron density, it is concluded that the existence of a hybrid layer somewhere on the plasma column is a necessary condition for the observation of harmonic emission. The relaxation time and the cathode voltage dependence of the harmonic emission are investigated using a pulse modulation technique. It is found that the emission intensity increases rapidly with the magnitude of the cathode voltage and that the relaxation time decreases with increasing neutral gas pressure. High-intensity nonharmonic radiation is observed and identified as resulting from a beam-plasma wave instability, thereby eliminating the same instability as a possible source of the harmonic emission. It is found that the collective experimental results are in reasonable agreement with the single-particle electrostatic radiation theory of Canobbio and Croci.

Relevance:

20.00%

Publisher:

Abstract:

Fluvial systems form landscapes and sedimentary deposits with a rich hierarchy of structures that extend from grain- to valley scale. Large-scale pattern formation in fluvial systems is commonly attributed to forcing by external factors, including climate change, tectonic uplift, and sea-level change. Yet over geologic timescales, rivers may also develop large-scale erosional and depositional patterns that do not bear on environmental history. This dissertation uses a combination of numerical modeling and topographic analysis to identify and quantify patterns in river valleys that form as a consequence of river meandering alone, under constant external forcing. Chapter 2 identifies a numerical artifact in existing, grid-based models that represent the co-evolution of river channel migration and bank strength over geologic timescales. A new, vector-based technique for bank-material tracking is shown to improve predictions for the evolution of meander belts, floodplains, sedimentary deposits formed by aggrading channels, and bedrock river valleys, particularly when spatial contrasts in bank strength are strong. Chapters 3 and 4 apply this numerical technique to establishing valley topography formed by a vertically incising, meandering river subject to constant external forcing—which should serve as the null hypothesis for valley evolution. In Chapter 3, this scenario is shown to explain a variety of common bedrock river valley types and smaller-scale features within them—including entrenched channels, long-wavelength, arcuate scars in valley walls, and bedrock-cored river terraces. Chapter 4 describes the age and geometric statistics of river terraces formed by meandering with constant external forcing, and compares them to terraces in natural river valleys. 
The frequency of intrinsic terrace formation by meandering is shown to reflect a characteristic relief-generation timescale, and terrace length is identified as a key criterion for distinguishing these terraces from terraces formed by externally forced pulses of vertical incision. In a separate study, Chapter 5 utilizes image and topographic data from the Mars Reconnaissance Orbiter to quantitatively identify spatial structures in the polar layered deposits of Mars, and identifies sequences of beds, consistently 1-2 meters thick, that have accumulated hundreds of kilometers apart in the north polar layered deposits.

Relevance:

20.00%

Publisher:

Abstract:

An analytic technique is developed that couples to finite-difference calculations to extend their results to arbitrary distances. Finite differences and the analytic result, a boundary integral called two-dimensional Kirchhoff, are applied to simple models and to three seismological problems dealing with data. The simple models include a thorough investigation of the seismologic effects of a deep continental basin. The first problem is explosions at Yucca Flat, in the Nevada test site. By modeling both near-field strong-motion records and teleseismic P-waves simultaneously, it is shown that scattered surface waves are responsible for teleseismic complexity. The second problem deals with explosions at Amchitka Island, Alaska. The near-field seismograms are investigated using a variety of complex structures and sources. The third problem involves regional seismograms of Imperial Valley, California earthquakes recorded at Pasadena, California. The data are shown to contain evidence of deterministic structure, but the lack of more direct measurements of the structure, together with possible three-dimensional effects, makes two-dimensional modeling of these data difficult.

Relevance:

20.00%

Publisher:

Abstract:

The nuclear resonant reaction 19F(p,αγ)16O has been used to perform depth-sensitive analyses of fluorine in lunar samples and carbonaceous chondrites. The resonance at 0.83 MeV (center-of-mass) in this reaction is utilized to study fluorine surface films, with particular interest paid to the outer micron of Apollo 15 green glass, Apollo 17 orange glass, and lunar vesicular basalts. These results are distinguished from terrestrial contamination, and are discussed in terms of a volcanic origin for the samples of interest. Measurements of fluorine in carbonaceous chondrites are used to better define the solar system fluorine abundance. A technique for measurement of carbon on solid surfaces with applications to direct quantitative analysis of implanted solar wind carbon in lunar samples is described.

Relevance:

20.00%

Publisher:

Abstract:

The resonant nuclear reaction 19F(p,αγ)16O has been used to perform depth-sensitive analyses for both fluorine and hydrogen in solid samples. The resonance at 0.83 MeV (center-of-mass) in this reaction has been applied to the measurement of the distribution of trapped solar protons in lunar samples to depths of ~0.5 µm. These results are interpreted in terms of a redistribution of the implanted H which has been influenced by heavy radiation damage in the surface region. Fluorine determinations have been performed in a 1-µm surface layer on lunar and meteoritic samples using the same 19F(p,αγ)16O resonance. The measurement of H depth distributions has also been used to study the hydration of terrestrial obsidian, a phenomenon of considerable archaeological interest as a means of dating obsidian artifacts. Additional applications of this type of technique are also discussed.
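
Resonant depth profiling converts beam energy above the resonance into depth through the stopping power of the target. The sketch below is only this basic kinematic bookkeeping; the constant stopping-power value and the lab-frame resonance energy used are hypothetical round numbers, whereas real analyses use tabulated, energy-dependent stopping powers.

```python
# Sketch of resonant depth profiling: raising the beam energy E0 above the
# resonance energy E_res probes deeper, with the depth set by the stopping
# power S (energy loss per unit path length). The constant S and the
# lab-frame resonance energy below are hypothetical illustrative values.
def probe_depth_um(E0_keV, E_res_keV, S_keV_per_um):
    """Depth at which protons of energy E0 have slowed to the resonance."""
    if E0_keV < E_res_keV:
        return 0.0                     # resonance never reached
    return (E0_keV - E_res_keV) / S_keV_per_um

S = 100.0        # hypothetical stopping power, keV/um
E_res = 872.0    # illustrative lab-frame resonance energy, keV

# Scanning the beam energy upward maps out a depth profile:
for E0 in (872.0, 922.0, 972.0):
    print(E0, "->", probe_depth_um(E0, E_res, S), "um")
```

Stepping the beam energy in this way is what gives the technique its depth sensitivity over the ~1 µm surface layers discussed above.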

Relevance:

20.00%

Publisher:

Abstract:

A technique is developed for the design of lenses for transitioning TEM waves between conical and/or cylindrical transmission lines, ideally with no reflection or distortion of the waves. These lenses utilize isotropic but inhomogeneous media and are based on a solution of Maxwell's equations instead of just geometrical optics. The technique employs the expression of the constitutive parameters, ε and μ, plus Maxwell's equations, in a general orthogonal curvilinear coordinate system in tensor form, giving what we term formal quantities. Solving the problem for certain types of formal constitutive parameters, these are transformed to give ε and μ as functions of position. Several examples of such lenses are considered in detail.
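
For orientation, the formal quantities take a standard form in an orthogonal curvilinear system; the identity below is the generic curvilinear-coordinate result from the electromagnetics literature, stated here as background rather than taken from the thesis itself.

```latex
% Orthogonal curvilinear coordinates (u_1,u_2,u_3) with scale factors
% h_1,h_2,h_3. With the rescaled fields E'_i = h_i E_i and H'_i = h_i H_i,
% Maxwell's equations retain their Cartesian form provided the medium is
% described by the formal constitutive parameters
\varepsilon'_i \;=\; \varepsilon\,\frac{h_1 h_2 h_3}{h_i^{\,2}},
\qquad
\mu'_i \;=\; \mu\,\frac{h_1 h_2 h_3}{h_i^{\,2}},
\qquad i = 1,2,3 .
```

Choosing coordinates so that the formal parameters are realizable with an isotropic, inhomogeneous medium is what yields the reflectionless lens profiles described above.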