21 results for Two-Level Optimization

in CaltechTHESIS


Relevance: 80.00%

Abstract:

This thesis describes the theoretical solution and experimental verification of phase conjugation via nondegenerate four-wave mixing in resonant media. The theoretical work models the resonant medium as a two-level atomic system with the lower state of the system being the ground state of the atom. Working initially with an ensemble of stationary atoms, the density matrix equations are solved by third-order perturbation theory in the presence of the four applied electromagnetic fields which are assumed to be nearly resonant with the atomic transition. Two of the applied fields are assumed to be non-depleted counterpropagating pump waves while the third wave is an incident signal wave. The fourth wave is the phase conjugate wave which is generated by the interaction of the three previous waves with the nonlinear medium. The solution of the density matrix equations gives the local polarization of the atom. The polarization is used in Maxwell's equations as a source term to solve for the propagation and generation of the signal wave and phase conjugate wave through the nonlinear medium. Studying the dependence of the phase conjugate signal on the various parameters such as frequency, we show how an ultrahigh-Q isotropically sensitive optical filter can be constructed using the phase conjugation process.
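For orientation, a generic form of the two-level density-matrix equations and the polarization source term described above is sketched below; the detunings, decay rates, and four-field decomposition used in the thesis are omitted, and signs and conventions may differ.

```latex
% Two-level atom (ground |g>, excited |e>, transition frequency omega_0)
% driven by a field E(t), with coherence decay gamma and population decay Gamma:
\dot{\rho}_{eg} = -(i\omega_0 + \gamma)\,\rho_{eg}
                + \frac{i\,\mu E(t)}{\hbar}\,\bigl(\rho_{gg} - \rho_{ee}\bigr), \qquad
\dot{\rho}_{ee} = -\Gamma\,\rho_{ee}
                - \frac{i\,\mu}{\hbar}\,\bigl(E(t)\,\rho_{ge} - E^{*}(t)\,\rho_{eg}\bigr).
% The resulting local polarization acts as the source term in Maxwell's equations:
P = N\mu\,(\rho_{eg} + \rho_{ge}), \qquad
\Bigl(\partial_z^2 - \tfrac{1}{c^2}\,\partial_t^2\Bigr) E = \mu_0\,\partial_t^2 P .
```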

In many cases the pump waves may saturate the resonant medium so we also present another solution to the density matrix equations which is correct to all orders in the amplitude of the pump waves since the third-order solution is correct only to first-order in each of the field amplitudes. In the saturated regime, we predict several new phenomena associated with degenerate four-wave mixing and also describe the ac Stark effect and how it modifies the frequency response of the filtering process. We also show how a narrow bandwidth optical filter with an efficiency greater than unity can be constructed.

In many atomic systems the atoms are moving at significant velocities such that the Doppler linewidth of the system is larger than the homogeneous linewidth. The latter linewidth dominates the response of the ensemble of stationary atoms. To better understand this case the density matrix equations are solved to third-order by perturbation theory for an atom of velocity v. The solution for the polarization is then integrated over the velocity distribution of the macroscopic system, which is assumed to be a Gaussian distribution of velocities since that is an excellent model of many real systems. Using the Doppler-broadened system, we explain how a tunable optical filter can be constructed whose bandwidth is limited by the homogeneous linewidth of the atom while the tuning range of the filter extends over the entire Doppler profile.
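Schematically, the velocity averaging described above integrates the single-atom polarization, evaluated at the Doppler-shifted frequency, over a Maxwellian distribution (u denotes the most probable speed; the notation here is generic, not the thesis'):

```latex
P(\omega) = \int_{-\infty}^{\infty} p(\omega - k v)\, W(v)\, dv, \qquad
W(v) = \frac{1}{u\sqrt{\pi}}\, e^{-v^{2}/u^{2}} .
```

In the regime considered, the Doppler width ku greatly exceeds the homogeneous width, which is why the filter bandwidth is set by the latter while the tuning range spans the full Doppler profile.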

Since it is a resonant system, sodium vapor is used as the nonlinear medium in our experiments. The relevant properties of sodium are discussed in great detail. In particular, the wavefunctions of the 3S and 3P states are analyzed and a discussion of how the 3S-3P transition models a two-level system is given.

Using sodium as the nonlinear medium we demonstrate an ultrahigh-Q optical filter using phase conjugation via nondegenerate four-wave mixing as the filtering process. The filter has a FWHM bandwidth of 41 MHz and a maximum efficiency of 4 × 10^-3. However, our theoretical work and other experimental work with sodium suggest that an efficient filter with both gain and a narrower bandwidth should be quite feasible.

Relevance: 80.00%

Abstract:

In the first part of this thesis, experiments utilizing an NMR phase interferometric concept are presented. The spinor character of two-level systems is explicitly demonstrated by using this concept. Following this is the presentation of an experiment which uses the same idea to measure relaxation times of off-diagonal density matrix elements corresponding to magnetic-dipole-forbidden transitions in a ^(13)C-^1H AX spin system. The theoretical background for these experiments and the spin dynamics of the interferometry are also discussed.

The second part of this thesis deals with NMR dipolar modulated chemical shift spectroscopy, with which internuclear bond lengths and bond angles with respect to the chemical shift principal axis frame are determined from polycrystalline samples. Experiments using benzene and calcium formate verify the validity of the technique in heteronuclear (^(13)C-^1H) systems. Similar experiments on powdered trichloroacetic acid confirm the validity in homonuclear (^1H- ^1H) systems. The theory and spin dynamics are explored in detail, and the effects of a number of multiple pulse sequences are discussed.

The last part deals with an experiment measuring the ^(13)C chemical shift tensor in K_2Pt(CN)_4Br_(0.3) • 3H_2O, a one-dimensional conductor. The ^(13)C spectra are strongly affected by ^(14)N quadrupolar interactions via the ^(13)C - ^(14)N dipolar interaction. Single crystal rotation spectra are shown.

An appendix discussing the design, construction, and performance of a single-coil double resonance NMR sample probe is included.

Relevance: 80.00%

Abstract:

Electronic structures and dynamics are the key to linking the material composition and structure to functionality and performance.

An essential issue in developing semiconductor devices for photovoltaics is to design materials with optimal band gaps and relative positioning of band levels. Approximate DFT methods have been justified to predict band gaps from Kohn-Sham (KS) or generalized Kohn-Sham (GKS) eigenvalues, but the accuracy depends decisively on the choice of exchange-correlation (XC) functional. We show here that for CuInSe2 and CuGaSe2, the parent compounds of the promising CIGS solar cells, conventional LDA and GGA obtain gaps of 0.0-0.01 and 0.02-0.24 eV (versus experimental values of 1.04 and 1.67 eV), while the historically first global hybrid functional, B3PW91, is surprisingly the best, with band gaps of 1.07 and 1.58 eV. Furthermore, we show that for 27 related binary and ternary semiconductors, B3PW91 predicts gaps with a mean absolute deviation (MAD) of only 0.09 eV, which is substantially better than all modern hybrid functionals, including B3LYP (MAD of 0.19 eV) and the screened hybrid functional HSE06 (MAD of 0.18 eV).

The laboratory performance of CIGS solar cells (>20% efficiency) makes them promising candidate photovoltaic devices. However, there remains little understanding of how defects at the CIGS/CdS interface affect the band offsets and interfacial energies, and hence the performance of manufactured devices. To determine these relationships, we use the B3PW91 hybrid functional of DFT with the AEP method, which we validate to provide very accurate descriptions of both band gaps and band offsets. This confirms the weak dependence of band offsets on surface orientation observed experimentally. We predict that the conduction band offset (CBO) of the perfect CuInSe2/CdS interface is large, 0.79 eV, which would dramatically degrade performance. Moreover, we show that the band gap widening induced by Ga adjusts only the valence band offset (VBO), and we find that Cd impurities do not significantly affect the CBO. Thus we show that Cu vacancies at the interface play the key role in enabling the tunability of the CBO. We predict that Na further improves performance by electrostatically elevating the valence levels, decreasing the CBO, which explains the observed essential role of Na for high performance. Moreover, we find that K leads to a dramatic decrease in the CBO to 0.05 eV, much better than Na. We suggest that the efficiency of CIGS devices might be improved substantially by tuning the ratio of Na to K, with the improved phase stability from Na balancing the phase instability from K. All these defects reduce interfacial stability slightly, but not significantly.

A number of exotic structures have been formed through high pressure chemistry, but applications have been hindered by difficulties in recovering the high pressure phase to ambient conditions (i.e., one atmosphere and room temperature). Here we use dispersion-corrected DFT (PBE-ulg flavor) to predict that above 60 GPa the most stable form of N2O (the laughing gas in its molecular form) is a 1D polymer with an all-nitrogen backbone analogous to cis-polyacetylene, in which alternate N atoms are bonded (ionic covalent) to O. The analogous trans-polymer is only 0.03-0.10 eV/molecular unit less stable. Upon decompression toward ambient conditions, both polymers relax below 14 GPa to the same stable non-planar trans-polymer, accompanied by possible electronic structure transitions. The predicted phonon spectrum and dissociation kinetics validate the stability of this trans-poly-NNO at ambient conditions, which has potential applications as a new type of conducting polymer with all-nitrogen chains and as a high-energy oxidizer for rocket propulsion. This work illustrates in silico materials discovery, particularly in the realm of extreme conditions.

Modeling non-adiabatic electron dynamics has been a long-standing challenge for computational chemistry and materials science, and the eFF method presents a cost-efficient alternative. However, due to the deficiency of the floating spherical Gaussian (FSG) representation, eFF is limited to low-Z elements with electrons of predominant s-character. To overcome this, we introduce a formal set of effective core potential (ECP) extensions that enable an accurate description of p-block elements. The extensions consist of a model that represents the core electrons and the nucleus as a single pseudo-particle, represented by an FSG, interacting with valence electrons through ECPs. We demonstrate and validate the ECP extensions for complex bonding structures, geometries, and energetics of systems with p-block character (C, O, Al, Si) and apply them to study materials under extreme mechanical loading conditions.

Despite its success, the eFF framework has some limitations, originating from both the design of the Pauli potentials and the FSG representation. To overcome these, we develop a new framework with a two-level hierarchy that is a more rigorous and accurate successor to the eFF method. The fundamental level, GHA-QM, is based on a new set of Pauli potentials that renders the exact QM level of accuracy for any FSG-represented electron system. To achieve this, we start by using exactly derived energy expressions for the same-spin electron pair, and fitting a simple functional form, inspired by DFT, against open-singlet electron pair curves (H2 systems). Symmetric and asymmetric scaling factors are then introduced at this level to recover the QM total energies of multiple-electron-pair systems from the sum of local interactions. To compensate for the imperfect FSG representation, the AMPERE extension is implemented, which aims at embedding the interactions associated with both the cusp condition and explicit nodal structures. The whole GHA-QM+AMPERE framework is tested on hydrogen, and the preliminary results are promising.

Relevance: 80.00%

Abstract:

Many applications in cosmology and astrophysics at millimeter wavelengths including CMB polarization, studies of galaxy clusters using the Sunyaev-Zeldovich effect (SZE), and studies of star formation at high redshift and in our local universe and our galaxy, require large-format arrays of millimeter-wave detectors. Feedhorn and phased-array antenna architectures for receiving mm-wave light present numerous advantages for control of systematics, for simultaneous coverage of both polarizations and/or multiple spectral bands, and for preserving the coherent nature of the incoming light. This enables the application of many traditional "RF" structures such as hybrids, switches, and lumped-element or microstrip band-defining filters.

Simultaneously, kinetic inductance detectors (KIDs) using high-resistivity materials like titanium nitride are an attractive sensor option for large-format arrays because they are highly multiplexable and because they can have sensitivities reaching the condition of background-limited detection. A KID is an LC resonator. Its inductance includes the geometric inductance and the kinetic inductance of the inductor in the superconducting phase. A photon absorbed by the superconductor breaks a Cooper pair into normal-state electrons and perturbs the kinetic inductance, rendering the resonator a detector of light. The responsivity of a KID is given by the fractional frequency shift of the LC resonator per unit optical power.
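To make the responsivity statement concrete, a standard KID relation (in generic notation, not necessarily the thesis') for a resonator with geometric inductance L_g, kinetic inductance L_k, and capacitance C is:

```latex
f_r = \frac{1}{2\pi\sqrt{(L_g + L_k)\,C}}
\quad\Longrightarrow\quad
\frac{\delta f_r}{f_r} = -\frac{\alpha}{2}\,\frac{\delta L_k}{L_k},
\qquad \alpha \equiv \frac{L_k}{L_g + L_k},
```

so the responsivity is the fractional frequency shift per unit absorbed optical power, d(δf_r/f_r)/dP_opt.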

However, coupling these types of optical reception elements to KIDs is a challenge because of the impedance mismatch between the microstrip transmission line exiting these architectures and the high resistivity of titanium nitride. Mitigating direct absorption of light through free-space coupling to the inductor of the KID is another challenge. We present a detailed titanium nitride KID design that addresses these challenges. The KID inductor is capacitively coupled to the microstrip in such a way as to form a lossy termination without creating an impedance mismatch. A parallel-plate capacitor design using hydrogenated amorphous silicon mitigates direct absorption and yields acceptable noise. We show that the optimized design can yield expected sensitivities very close to the fundamental limit for a long-wavelength imager (LWCam) covering six spectral bands from 90 to 400 GHz for SZE studies.

Excess phase (frequency) noise has been observed in KIDs and is very likely caused by two-level systems (TLS) in dielectric materials. The TLS hypothesis is supported by the measured dependence of the noise on resonator internal power and temperature. However, there is still no unified microscopic theory that can quantitatively model the properties of the TLS noise. In this thesis we derive the noise power spectral density due to the coupling of TLS with the phonon bath based on an existing model, and compare the theoretical predictions of the power and temperature dependences with experimental data. We discuss the limitations of such a model and propose directions for future study.

Relevance: 80.00%

Abstract:

Theoretical and experimental studies of a gas laser amplifier are presented, assuming the amplifier is operating with a saturating optical frequency signal. The analysis is primarily concerned with the effects of the gas pressure and the presence of an axial magnetic field on the characteristics of the amplifying medium. Semiclassical radiation theory is used, along with a density matrix description of the atomic medium which relates the motion of single atoms to the macroscopic observables. A two-level description of the atom, using phenomenological source rates and decay rates, forms the basis of our analysis of the gas laser medium. Pressure effects are taken into account to a large extent through suitable choices of decay rate parameters.

Two methods for calculating the induced polarization of the atomic medium are used. The first method utilizes a perturbation expansion which is valid for signal intensities that barely reach saturation strength, and it is quite general in applicability. The second method is valid for arbitrarily strong signals, but it yields tractable solutions only for zero magnetic field or for axial magnetic fields large enough that the Zeeman splitting is much larger than the power-broadened homogeneous linewidth of the laser transition. The effects of pressure broadening of the homogeneous spectral linewidth are included in both the weak-signal and strong-signal theories; however, the effects of Zeeman sublevel-mixing collisions are taken into account only in the weak-signal theory.

The behavior of a He-Ne gas laser amplifier in the presence of an axial magnetic field has been studied experimentally by measuring gain and Faraday rotation of linearly polarized resonant laser signals for various values of input signal intensity, and by measuring nonlinearity-induced anisotropy for elliptically polarized resonant laser signals of various input intensities. Two high-gain transitions in the 3.39 μm region were used for study: a J = 1 to J = 2 (3s2 → 3p4) transition and a J = 1 to J = 1 (3s2 → 3p2) transition. The input signals were tuned to the centers of their respective resonant gain lines.

The experimental results agree quite well with corresponding theoretical expressions which have been developed to include the nonlinear effects of saturation-strength signals. The experimental results clearly show saturation of Faraday rotation, and for the J = 1 to J = 1 transition a Faraday rotation reversal and a traveling-wave gain dip are seen for small values of axial magnetic field. The nonlinearity-induced anisotropy shows a marked dependence on the gas pressure in the amplifier tube for the J = 1 to J = 2 transition; this dependence agrees with the predictions of the general perturbational (weak-signal) theory when allowances are made for the effects of Zeeman sublevel-mixing collisions. The results provide a method for measuring the upper (neon 3s2) level quadrupole moment decay rate, the dipole moment decay rates for the 3s2 → 3p4 and 3s2 → 3p2 transitions, and the effects of various types of collision processes on these decay rates.

Relevance: 40.00%

Abstract:

Granular crystals are compact periodic assemblies of elastic particles in Hertzian contact whose dynamic response can be tuned from strongly nonlinear to linear by the addition of a static precompression force. This unique feature allows for a wide range of studies that include the investigation of new fundamental nonlinear phenomena in discrete systems such as solitary waves, shock waves, discrete breathers, and other defect modes. In the absence of precompression, a particularly interesting property of these systems is their ability to support the formation and propagation of spatially localized soliton-like waves with highly tunable properties. The wealth of parameters one can modify (particle size, geometry and material properties, periodicity of the crystal, presence of a static force, type of excitation, etc.) makes them ideal candidates for the design of new materials for practical applications. This thesis describes several ways to optimally control and tailor the propagation of stress waves in granular crystals through the use of heterogeneities (interstitial defect particles and material heterogeneities) in otherwise perfectly ordered systems. We focus on uncompressed two-dimensional granular crystals with interstitial spherical intruders and composite hexagonal packings, and study their dynamic response using a combination of experimental, numerical, and analytical techniques. We first investigate the interaction of defect particles with a solitary wave, and utilize this fundamental knowledge in the optimal design of novel composite waveguides and shock or vibration absorbers, obtained using gradient-based optimization methods.
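As a minimal illustration of the Hertzian nonlinearity underlying these systems, the sketch below integrates a 1D uncompressed chain of identical spheres struck at one end (the thesis itself treats 2D crystals with intruders; the mass, contact stiffness, and unit-velocity striker here are illustrative assumptions):

```python
# Solitary-wave propagation in a 1D uncompressed granular chain with
# Hertzian contacts (F = A * overlap**1.5); parameters are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

N = 50     # number of spheres
m = 1.0    # sphere mass (arbitrary units)
A = 1.0    # Hertzian contact stiffness

def rhs(t, y):
    u, v = y[:N], y[N:]
    # contacts carry force only in compression (positive overlap)
    delta = np.maximum(u[:-1] - u[1:], 0.0)
    f = A * delta**1.5
    acc = np.zeros(N)
    acc[:-1] -= f / m   # reaction on the left sphere of each contact
    acc[1:]  += f / m   # push on the right sphere
    return np.concatenate([v, acc])

y0 = np.zeros(2 * N)
y0[N] = 1.0             # strike the first sphere with unit velocity
sol = solve_ivp(rhs, (0, 40), y0, max_step=1e-2)
print("peak velocity reached by the last sphere:", sol.y[2 * N - 1].max())
```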

Relevance: 40.00%

Abstract:

The dissertation studies the general area of complex networked systems that consist of interconnected and active heterogeneous components and usually operate in uncertain environments and with incomplete information. Problems associated with those systems are typically large-scale and computationally intractable, yet they are also very well-structured and have features that can be exploited by appropriate modeling and computational methods. The goal of this thesis is to develop foundational theories and tools to exploit those structures, leading to computationally efficient and distributed solutions, and to apply them to improve system operations and architecture.

Specifically, the thesis focuses on two concrete areas. The first is the design of distributed rules to manage distributed energy resources in the power network. The power network is undergoing a fundamental transformation. The future smart grid, especially on the distribution side, will be a large-scale network of distributed energy resources (DERs), each introducing random and rapid fluctuations in power supply, demand, voltage, and frequency. These DERs provide a tremendous opportunity for sustainability, efficiency, and power reliability. However, there are daunting technical challenges in managing these DERs and optimizing their operation. The focus of this dissertation is to develop scalable, distributed, and real-time control and optimization that achieve system-wide efficiency, reliability, and robustness for the future power grid. In particular, we present how to exploit the power network structure to design efficient, distributed markets and algorithms for energy management. We also show how to connect these algorithms with physical dynamics and existing control mechanisms for real-time control in power networks.

The second focus is to develop distributed optimization rules for general multi-agent engineering systems. A central goal in multiagent systems is to design local control laws for the individual agents to ensure that the emergent global behavior is desirable with respect to the given system-level objective. Ideally, a system designer seeks to satisfy this goal while conditioning each agent's control on the least amount of information possible. Our work focuses on achieving this goal using the framework of game theory. In particular, we derived a systematic methodology for designing local agent objective functions that guarantees (i) an equivalence between the resulting game-theoretic equilibria and the system-level design objective and (ii) that the resulting game possesses an inherent structure that can be exploited for distributed learning, e.g., potential games (defined below). The control design can then be completed by applying any distributed learning algorithm that guarantees convergence to the game-theoretic equilibrium. One main advantage of this game-theoretic approach is that it provides a hierarchical decomposition between the systemic objective (game design) and the specific local decision rules (distributed learning algorithms). This decomposition provides the system designer with tremendous flexibility to meet the design objectives and constraints inherent in a broad class of multiagent systems. Furthermore, in many settings the resulting controllers will be inherently robust to a host of uncertainties including asynchronous clock rates, delays in information, and component failures.
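For reference, the potential-game structure invoked above has a standard definition: a game with agent utilities U_i is an (exact) potential game if a single function φ over joint actions tracks every unilateral deviation:

```latex
U_i(a_i', a_{-i}) - U_i(a_i, a_{-i})
  = \phi(a_i', a_{-i}) - \phi(a_i, a_{-i})
\qquad \text{for all agents } i \text{ and all } a_i',\, a_i,\, a_{-i}.
```

This is the structure that distributed learning algorithms can exploit to guarantee convergence to equilibria.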

Relevance: 30.00%

Abstract:

The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a "control and optimization" point of view. After studying these three real-world networks, two abstract network problems are also explored, which are motivated by power systems. The first is "flow optimization over a flow network" and the second is "nonlinear optimization over a generalized weighted graph". The results derived in this dissertation are summarized below.

Brain Networks: Neuroimaging data reveal the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix—describing marginal and conditional dependencies between brain regions—have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about the brain connectivity. Due to the electrical properties of the brain, this problem is investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit based only on measurements. In this case, by assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when a limited number of samples are available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may recover most of the circuit topology if the exact covariance matrix is well-conditioned, but it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work is applied to the resting-state fMRI data of a number of healthy subjects.
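A minimal sketch of the sparse inverse-covariance step described above, using scikit-learn's GraphicalLasso on stand-in data (the regularization weight and the synthetic signals are illustrative assumptions, not the thesis' settings):

```python
# Estimate a sparse inverse covariance (precision) matrix from nodal
# signals and read off candidate "edges" from its nonzero off-diagonals.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))    # stand-in for sampled node voltages

model = GraphicalLasso(alpha=0.1).fit(X)
precision = model.precision_          # estimated inverse covariance matrix

edges = [(i, j) for i in range(10) for j in range(i + 1, 10)
         if abs(precision[i, j]) > 1e-6]
print("recovered edges:", edges)
```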

Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that the network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes, unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
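For context, a classical fluid model of this kind is the Kelly-style primal dynamics, in which source s adjusts its rate x_s against the aggregate price along its route; the implicit assumption that every link l on the route sees the source rate x_s itself is precisely what the buffering-aware model above corrects:

```latex
\dot{x}_s = \kappa_s \left( w_s - x_s \sum_{l \in s}
            p_l\!\Bigl(\sum_{r \ni l} x_r\Bigr) \right),
```

where w_s is the source's willingness to pay and p_l(·) is the price function at link l.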

Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
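Schematically, and up to sign conventions which the thesis fixes precisely, the relaxation turns the per-bus balance equality into an inequality that allows a bus to receive more than its demand:

```latex
\text{exact:}\quad \sum_{j \sim i} P_{ji}(V,\theta) = P_{D_i} - P_{G_i}
\qquad\longrightarrow\qquad
\text{relaxed:}\quad \sum_{j \sim i} P_{ji}(V,\theta) \ge P_{D_i} - P_{G_i},
```

where P_{ji} is the power delivered to bus i from neighbor j, and P_{G_i}, P_{D_i} are the bus's generation and demand.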

Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation of the power balance equations (i.e., power over-delivery) is not needed in practice under a very mild angle assumption.

Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.

Relevance: 30.00%

Abstract:

The work presented in this thesis revolves around erasure correction coding, as applied to distributed data storage and real-time streaming communications.

First, we examine the problem of allocating a given storage budget over a set of nodes for maximum reliability. The objective is to find an allocation of the budget that maximizes the probability of successful recovery by a data collector accessing a random subset of the nodes. This optimization problem is challenging in general because of its combinatorial nature, despite its simple formulation. We study several variations of the problem, assuming different allocation models and access models, and determine the optimal allocation and the optimal symmetric allocation (in which all nonempty nodes store the same amount of data) for a variety of cases. Although the optimal allocation can have nonintuitive structure and can be difficult to find in general, our results suggest that, as a simple heuristic, reliable storage can be achieved by spreading the budget maximally over all nodes when the budget is large, and spreading it minimally over a few nodes when it is small. Coding would therefore be beneficial in the former case, while uncoded replication would suffice in the latter case.
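As a concrete instance of the symmetric case, the sketch below evaluates the recovery probability when a budget T is spread evenly over m of n nodes and a collector reads a uniformly random r-subset (the names and parameter values are illustrative, not the thesis notation):

```python
# Recovery succeeds when the accessed nodes together hold at least one
# unit of data: k nonempty nodes among r accessed, each storing T/m.
from math import ceil
from scipy.stats import hypergeom

def success_prob(n, r, T, m):
    """P(recovery) for budget T spread evenly over m of n nodes."""
    k_min = ceil(m / T)                 # need k * (T/m) >= 1
    # the number of nonempty nodes in the r-subset is hypergeometric
    return hypergeom(n, m, r).sf(k_min - 1)

n, r, T = 20, 5, 4.0
best_m = max(range(1, n + 1), key=lambda m: success_prob(n, r, T, m))
print("best m:", best_m, "success prob:", success_prob(n, r, T, best_m))
```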

Second, we study how distributed storage allocations affect the recovery delay in a mobile setting. Specifically, two recovery delay optimization problems are considered for a network of mobile storage nodes: the maximization of the probability of successful recovery by a given deadline, and the minimization of the expected recovery delay. We show that the first problem is closely related to the earlier allocation problem, and solve the second problem completely for the case of symmetric allocations. It turns out that the optimal allocations for the two problems can be quite different. In a simulation study, we evaluated the performance of a simple data dissemination and storage protocol for mobile delay-tolerant networks, and observed that the choice of allocation can have a significant impact on the recovery delay under a variety of scenarios.

Third, we consider a real-time streaming system where messages created at regular time intervals at a source are encoded for transmission to a receiver over a packet erasure link; the receiver must subsequently decode each message within a given delay from its creation time. For erasure models containing a limited number of erasures per coding window, per sliding window, and containing erasure bursts whose maximum length is sufficiently short or long, we show that a time-invariant intrasession code asymptotically achieves the maximum message size among all codes that allow decoding under all admissible erasure patterns. For the bursty erasure model, we also show that diagonally interleaved codes derived from specific systematic block codes are asymptotically optimal over all codes in certain cases. We also study an i.i.d. erasure model in which each transmitted packet is erased independently with the same probability; the objective is to maximize the decoding probability for a given message size. We derive an upper bound on the decoding probability for any time-invariant code, and show that the gap between this bound and the performance of a family of time-invariant intrasession codes is small when the message size and packet erasure probability are small. In a simulation study, these codes performed well against a family of random time-invariant convolutional codes under a number of scenarios.

Finally, we consider the joint problems of routing and caching for named data networking. We propose a backpressure-based policy that employs virtual interest packets to make routing and caching decisions. In a packet-level simulation, the proposed policy outperformed a basic protocol that combines shortest-path routing with least-recently-used (LRU) cache replacement.

Relevance: 30.00%

Abstract:

This study addresses the problem of obtaining reliable velocities and displacements from accelerograms, a concern which often arises in earthquake engineering. A closed-form acceleration expression with random parameters is developed to test any strong-motion accelerogram processing method. Integration of this analytical time history yields the exact velocities, displacements and Fourier spectra. Noise and truncation can also be added. A two-step testing procedure is proposed and the original Volume II routine is used as an illustration. The main sources of error are identified and discussed. Although these errors may be reduced, it is impossible to extract the true time histories from an analog or digital accelerogram because of the uncertain noise level and missing data. Based on these uncertainties, a probabilistic approach is proposed as a new accelerogram processing method. A most probable record is presented as well as a reliability interval which reflects the level of error-uncertainty introduced by the recording and digitization process. The data is processed in the frequency domain, under assumptions governing either the initial value or the temporal mean of the time histories. This new processing approach is tested on synthetic records. It induces little error and the digitization noise is adequately bounded. Filtering is intended to be kept to a minimum and two optimal error-reduction methods are proposed. The "noise filters" reduce the noise level at each harmonic of the spectrum as a function of the signal-to-noise ratio. However, the correction at low frequencies is not sufficient to significantly reduce the drifts in the integrated time histories. The "spectral substitution method" uses optimization techniques to fit spectral models of near-field, far-field or structural motions to the amplitude spectrum of the measured data. The extremes of the spectrum of the recorded data where noise and error prevail are then partly altered, but not removed, and statistical criteria provide the choice of the appropriate cutoff frequencies. This correction method has been applied to existing strong-motion far-field, near-field and structural data with promising results. Since this correction method maintains the whole frequency range of the record, it should prove to be very useful in studying the long-period dynamics of local geology and structures.
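As a hedged illustration of an SNR-dependent noise filter of the kind described (the thesis' actual filters and noise-spectrum estimates may differ), one can attenuate each Fourier harmonic with a Wiener-type gain:

```python
# Attenuate each harmonic by SNR/(1+SNR): ~1 where signal dominates,
# ~0 where noise dominates. The noise PSD estimate is a placeholder.
import numpy as np

def noise_filter(accel, noise_psd_est):
    A = np.fft.rfft(accel)
    signal_psd = np.maximum(np.abs(A)**2 - noise_psd_est, 0.0)
    snr = signal_psd / np.maximum(noise_psd_est, 1e-30)
    gain = snr / (1.0 + snr)
    return np.fft.irfft(gain * A, n=len(accel))
```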

Relevance: 30.00%

Abstract:

Motivated by needs in molecular diagnostics and advances in microfabrication, researchers have turned to microfluidic technology, as it provides approaches to achieve high throughput, high sensitivity, and high resolution. One strategy applied in microfluidics to fulfill such requirements is to convert a continuous analog signal into a digitized signal. The most commonly used example of this conversion is digital PCR, where, by counting the number of reacted compartments (triggered by the presence of the target entity) out of the total number of compartments, one can use Poisson statistics to calculate the amount of input target.
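As a worked example of the Poisson back-calculation: if k of n compartments react, the estimated mean number of targets per compartment is λ = −ln(1 − k/n), so the total input is about λn (the counts below are illustrative):

```python
# Standard digital PCR estimator from compartment counts.
import math

n, k = 20000, 6000                 # total and positive compartments
lam = -math.log(1.0 - k / n)       # mean targets per compartment
print(f"lambda = {lam:.4f}, estimated input copies = {lam * n:.0f}")
```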

However, there are still problems to be solved and assumptions to be validated before the technology is widely employed. In this dissertation, the digital quantification strategy is examined from two angles: efficiency and robustness. The former is a critical factor for ensuring the accuracy of absolute quantification methods, and the latter is a prerequisite for such technology to be practically implemented in diagnostics beyond the laboratory. The two angles are further framed into a "fate" and "rate" determination scheme, in which the influence of different parameters is attributed to either the fate-determination step or the rate-determination step. In this discussion, microfluidic platforms are used to understand reaction mechanisms at the single-molecule level. Although the discussion raises more challenges for digital assay development, it brings the problem to the attention of the scientific community for the first time.

This dissertation also contributes toward developing point-of-care (POC) tests in limited-resource settings. On one hand, it adds ease of access to the tests by incorporating massively producible, low-cost plastic materials and by integrating new features that allow instant result acquisition and result feedback. On the other hand, it explores new isothermal chemistry and new strategies to address important global health concerns such as cystatin C quantification, HIV/HCV detection and treatment monitoring, and HCV genotyping.

Relevance: 30.00%

Abstract:

This thesis addresses a series of topics related to the question of how people find foreground objects in complex scenes. Using both computer vision modeling and psychophysical analyses, we explore the computational principles of low- and mid-level vision.

We first explore the computational methods of generating saliency maps from images and image sequences. We propose an extremely fast algorithm called Image Signature that detects the locations in the image that attract human eye gazes. With a series of experimental validations based on human behavioral data collected from various psychophysical experiments, we conclude that the Image Signature and its spatial-temporal extension, the Phase Discrepancy, are among the most accurate algorithms for saliency detection under various conditions.
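A minimal sketch of the image-signature idea, taking the sign of the image's DCT, reconstructing, squaring, and blurring; the blur width and the stand-in image are illustrative choices, not the exact settings used in the thesis:

```python
# Saliency from the sign of the discrete cosine transform.
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter

def image_signature_saliency(img_gray, sigma=3.0):
    signature = np.sign(dctn(img_gray, norm="ortho"))   # keep only DCT signs
    recon = idctn(signature, norm="ortho")              # reconstruct
    return gaussian_filter(recon**2, sigma)             # smooth squared map

img = np.random.rand(64, 64)   # stand-in for a grayscale image
saliency = image_signature_saliency(img)
print(saliency.shape, float(saliency.max()))
```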

In the second part, we bridge the gap between fixation prediction and salient object segmentation with two efforts. First, we propose a new dataset that contains both fixation and object segmentation information. By simultaneously presenting the two types of human data in the same dataset, we are able to analyze their intrinsic connection, as well as to understand the drawbacks of today's "standard" but inappropriately labeled salient object segmentation dataset. Second, we propose an algorithm for salient object segmentation. Based on our novel discoveries about the connections between fixation data and salient object segmentation data, our model significantly outperforms all existing models on all three datasets by large margins.

In the third part of the thesis, we discuss topics around the human factors of boundary analysis. Closely related to salient object segmentation, boundary analysis focuses on delimiting the local contours of an object. We identify the potential pitfalls of algorithm evaluation for the problem of boundary detection. Our analysis indicates that today's popular boundary detection datasets contain a significant level of noise, which may severely influence benchmarking results. To give further insight into the labeling process, we propose a model to characterize the principles of the human factors during labeling.

The analyses reported in this thesis offer new perspectives to a series of interrelating issues in low- and mid-level vision. It gives warning signs to some of today’s “standard” procedures, while proposing new directions to encourage future research.

Relevance: 30.00%

Abstract:

Modern robots are increasingly expected to function in uncertain and dynamically challenging environments, often in proximity to humans. In addition, wide-scale adoption of robots requires on-the-fly adaptability of software for diverse applications. These requirements strongly suggest the need to adopt formal representations of high-level goals and safety specifications, especially as temporal logic formulas. This approach allows for the use of formal verification techniques for controller synthesis that can give guarantees for safety and performance. Robots operating in unstructured environments also face limited sensing capability, so correctly inferring a robot's progress toward a high-level goal can be challenging.

This thesis develops new algorithms for synthesizing discrete controllers in partially known environments under specifications represented as linear temporal logic (LTL) formulas. It is inspired by recent developments in finite abstraction techniques for hybrid systems and motion planning problems. The robot and its environment are assumed to have a finite abstraction as a Partially Observable Markov Decision Process (POMDP), a powerful model class capable of representing a wide variety of problems. However, synthesizing controllers that satisfy LTL goals over POMDPs is a challenging problem which has received only limited attention.

This thesis proposes tractable, approximate algorithms for the control synthesis problem using Finite State Controllers (FSCs). The use of FSCs to control finite POMDPs allows the closed system to be analyzed as a finite global Markov chain. The thesis explicitly shows how the transient and steady-state behavior of the global Markov chain can be related to two different criteria for the satisfaction of LTL formulas. First, the maximization of the probability of LTL satisfaction is related to an optimization problem over a parametrization of the FSC. Analytic gradient computations are derived, which allows the use of first-order optimization techniques.
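A sketch of the product construction implied above: composing a finite POMDP with an FSC (here with a deterministic node-to-action map) yields a Markov chain over (state, node) pairs; all sizes and distributions below are illustrative stand-ins:

```python
# Build the global Markov chain of a POMDP closed by an FSC.
import numpy as np

nS, nA, nO, nG = 4, 2, 3, 2
rng = np.random.default_rng(1)

def row_stochastic(*shape):
    x = rng.random(shape)
    return x / x.sum(axis=-1, keepdims=True)

T = row_stochastic(nS, nA, nS)    # T[s, a, s']: state transitions
O = row_stochastic(nS, nA, nO)    # O[s', a, o]: observation model
psi = row_stochastic(nG, nO, nG)  # psi[g, o, g']: FSC node update
act = [0, 1]                      # action chosen at each FSC node

P = np.zeros((nS * nG, nS * nG))  # chain over pairs (s, g)
for s in range(nS):
    for g in range(nG):
        a = act[g]
        for s2 in range(nS):
            for o in range(nO):
                for g2 in range(nG):
                    P[s * nG + g, s2 * nG + g2] += \
                        T[s, a, s2] * O[s2, a, o] * psi[g, o, g2]

assert np.allclose(P.sum(axis=1), 1.0)   # rows sum to 1: a valid chain
```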

The second criterion encourages rapid and frequent visits to a restricted set of states over infinite executions. It is formulated as a constrained optimization problem with a discounted long-term reward objective through the novel utilization of a fundamental equation for Markov chains: the Poisson equation. A new constrained policy iteration technique is proposed to solve the resulting dynamic program, which also provides a way to escape local maxima.

The algorithms proposed in the thesis are applied to the task planning and execution challenges faced during the DARPA Autonomous Robotic Manipulation - Software challenge.

Relevance: 30.00%

Abstract:

Part I

Regression analyses are performed on in vivo hemodialysis data for the transfer of creatinine, urea, uric acid and inorganic phosphate to determine the effects of variations in certain parameters on the efficiency of dialysis with a Kiil dialyzer. In calculating the mass transfer rates across the membrane, the effects of cell-plasma mass transfer kinetics are considered. The concept of the effective permeability coefficient for the red cell membrane is introduced to account for these effects. A discussion of the consequences of neglecting cell-plasma kinetics, as has been done to date in the literature, is presented.

A physical model for the Kiil dialyzer is presented in order to calculate the available membrane area for mass transfer, the linear blood and dialysate velocities, and other variables. The equations used to determine the independent variables of the regression analyses are presented. The potential dependent variables in the analyses are discussed.

Regression analyses were carried out considering overall mass-transfer coefficients, dialysances, relative dialysances, and relative permeabilities for each substance as the dependent variables. The independent variables were linear blood velocity, linear dialysate velocity, the pressure difference across the membrane, the elapsed time of dialysis, the blood hematocrit, and the arterial plasma concentrations of each substance transferred. The resulting correlations are tabulated, presented graphically, and discussed. The implications of these correlations are discussed from the viewpoint of a research investigator and from the viewpoint of patient treatment.
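For reference, dialysance is conventionally defined as the solute transfer rate referred to the blood-dialysate inlet concentration difference (generic notation, not necessarily Part I's):

```latex
D = \frac{Q_B \,\bigl(C_{B,\mathrm{in}} - C_{B,\mathrm{out}}\bigr)}
         {C_{B,\mathrm{in}} - C_{D,\mathrm{in}}},
```

where Q_B is the blood flow rate and the C's are the concentrations of the transferred substance at the blood inlet/outlet and the dialysate inlet.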

Recommendations for further experimental work are presented.

Part II

The interfacial structure of concurrent air-water flow in a two-inch diameter horizontal tube in the wavy flow regime has been measured using resistance wave gages. The median water depth, r.m.s. wave height, wave frequency, extrema frequency, and wave velocity have been measured as functions of air and water flow rates. Reynolds numbers, Froude numbers, Weber numbers, and bulk velocities for each phase may be calculated from these measurements. No theory for wave formation and propagation available in the literature was sufficient to describe these results.

The water surface level distribution generally is not adequately represented as a stationary Gaussian process. Five types of deviation from the Gaussian process function were noted in this work. The presence of the tube walls and the relatively large interfacial shear stresses preclude the use of simple statistical analyses to describe the interfacial structure. A detailed study of the behavior of individual fluid elements near the interface may be necessary to adequately describe wavy two-phase flow in systems similar to the one used in this work.

Relevance: 30.00%

Abstract:

The influence of (i) the freestream turbulence level and (ii) the injection of small amounts of a drag-reducing polymer (Polyox WSR 301) into the test-model boundary layer upon the basic viscous flow about two axisymmetric bodies was investigated by the schlieren flow visualization technique. The changes in the type and occurrence of cavitation inception caused by the resulting modifications of the viscous flow were studied. A nuclei counter based on the holographic technique was built to monitor freestream nuclei populations, and a few preliminary tests investigating the effects of different populations on cavitation inception were carried out.

Both test models were observed to have a laminar separation over their respective test Reynolds number ranges. The separation on one test model was found to be insensitive to freestream turbulence levels of up to 3.75 percent. The second model was found to be very susceptible, its critical velocity being reduced from 30 feet per second at a 0.04 percent turbulence level to 10 feet per second at a 3.75 percent turbulence level. Cavitation tests on both models at the lowest turbulence level showed that the value of the incipient cavitation number and the type of cavitation were controlled by the presence of the laminar separation. Cavitation tests on the second model at a 0.65 percent turbulence level showed no change in the inception index, but the appearance of the developed cavitation was altered.

The presence of Polyox in the boundary layer resulted in a cavitation suppression comparable to that found by other investigators. The elimination of the normally occurring laminar separation on these bodies by a polymer-induced instability in the laminar boundary layer was found to be responsible for the suppression of inception.

Freestream nuclei populations at test conditions were measured, and it was found that if there were many freestream gas bubbles the normally present laminar separation was eliminated and traveling-bubble-type cavitation occurred; the value of the inception index then depended upon the nuclei population. In cases where the laminar separation was present, it was found that the value of the inception index was insensitive to the freestream nuclei populations.