Abstract:
The primary focus of this thesis is on the interplay of descriptive set theory and the ergodic theory of group actions. This incorporates the study of turbulence and Borel reducibility on the one hand, and the theory of orbit equivalence and weak equivalence on the other. Chapter 2 is joint work with Clinton Conley and Alexander Kechris; we study measurable graph combinatorial invariants of group actions and employ the ultraproduct construction as a way of constructing various measure preserving actions with desirable properties. Chapter 3 is joint work with Lewis Bowen; we study the property MD of residually finite groups, and we prove a conjecture of Kechris by showing that under general hypotheses property MD is inherited by a group from one of its co-amenable subgroups. Chapter 4 is a study of weak equivalence. One of the main results answers a question of Abért and Elek by showing that within any free weak equivalence class the isomorphism relation does not admit classification by countable structures. The proof relies on affirming a conjecture of Ioana by showing that the product of a free action with a Bernoulli shift is weakly equivalent to the original action. Chapter 5 studies the relationship between mixing and freeness properties of measure preserving actions. Chapter 6 studies how approximation properties of ergodic actions and unitary representations are reflected group theoretically and also operator algebraically via a group's reduced C*-algebra. Chapter 7 is an appendix which includes various results on mixing via filters and on Gaussian actions.
Abstract:
In recent years coastal resource management has begun to stand as its own discipline. Its multidisciplinary nature gives it access to theory situated in each of the diverse fields which it may encompass, yet management practices often revert to the primary field of the manager. There is a lack of a common set of “coastal” theory from which managers can draw. Seven resource-related issues with which coastal area managers must contend include: coastal habitat conservation, traditional maritime communities and economies, strong development and use pressures, adaptation to sea level rise and climate change, landscape sustainability and resilience, coastal hazards, and emerging energy technologies. The complexity and range of human and environmental interactions at the coast suggest a strong need for a common body of coastal management theory which managers would do well to understand generally. Planning theory, which is itself a synthesis of concepts from multiple fields, contains ideas generally valuable to coastal management. Planning theory can not only provide an example of how to develop a multi- or transdisciplinary set of theory, but may also provide an actual theoretical foundation for a coastal theory. In particular we discuss five concepts in the planning theory discourse and present their utility for coastal resource managers. These include “wicked” problems, ecological planning, the epistemology of knowledge communities, the role of the planner/manager, and collaborative planning. While these theories are known and familiar to some professionals working at the coast, we argue that there is a need for broader understanding amongst the various specialists working in the increasingly identifiable field of coastal resource management.
Abstract:
Signal processing techniques play important roles in the design of digital communication systems. These include information manipulation, transmitter signal processing, channel estimation, channel equalization and receiver signal processing. By interacting with communication theory and system implementing technologies, signal processing specialists develop efficient schemes for various communication problems by wisely exploiting various mathematical tools such as analysis, probability theory, matrix theory, optimization theory, and many others. In recent years, researchers realized that multiple-input multiple-output (MIMO) channel models are applicable to a wide range of different physical communications channels. Using the elegant matrix-vector notations, many MIMO transceiver (including the precoder and equalizer) design problems can be solved by matrix and optimization theory. Furthermore, the researchers showed that the majorization theory and matrix decompositions, such as singular value decomposition (SVD), geometric mean decomposition (GMD) and generalized triangular decomposition (GTD), provide unified frameworks for solving many of the point-to-point MIMO transceiver design problems.
In this thesis, we consider the transceiver design problems for linear time invariant (LTI) flat MIMO channels, linear time-varying narrowband MIMO channels, flat MIMO broadcast channels, and doubly selective scalar channels. Additionally, the channel estimation problem is also considered. The main contributions of this dissertation are the development of new matrix decompositions, and the uses of the matrix decompositions and majorization theory toward the practical transmit-receive scheme designs for transceiver optimization problems. Elegant solutions are obtained, novel transceiver structures are developed, ingenious algorithms are proposed, and performance analyses are derived.
The first part of the thesis focuses on transceiver design with LTI flat MIMO channels. We propose a novel matrix decomposition which decomposes a complex matrix as a product of several sets of semi-unitary matrices and upper triangular matrices in an iterative manner. The complexity of the new decomposition, the generalized geometric mean decomposition (GGMD), is always less than or equal to that of the geometric mean decomposition (GMD). The optimal GGMD parameters which yield the minimal complexity are derived. Based on the channel state information (CSI) at both the transmitter (CSIT) and receiver (CSIR), GGMD is used to design a butterfly-structured decision feedback equalizer (DFE) MIMO transceiver which achieves the minimum average mean square error (MSE) under the total transmit power constraint. A novel iterative detection algorithm for this receiver is also proposed. For the application to cyclic prefix (CP) systems, in which the SVD of the equivalent channel matrix can be easily computed, the proposed GGMD transceiver has a K/log_2(K) times complexity advantage over the GMD transceiver, where K is the number of data symbols per data block and is a power of 2. The performance analysis shows that the GGMD DFE transceiver can convert a MIMO channel into a set of parallel subchannels with the same bias and signal to interference plus noise ratios (SINRs). Hence, the average bit error rate (BER) is automatically minimized without the need for bit allocation. Moreover, the proposed transceiver can achieve the channel capacity simply by applying independent scalar Gaussian codes of the same rate at the subchannels.
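The GMD underlying these designs factors a channel matrix so that every subchannel carries the same gain: the geometric mean of the singular values. A minimal numpy sketch of that defining property (this is not the thesis's GGMD algorithm; the matrix size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
K = 4
# A random complex "channel" matrix standing in for H
H = rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K))

# Singular values of the channel
sigma = np.linalg.svd(H, compute_uv=False)

# Geometric mean of the singular values: the common diagonal value that a
# GMD R-factor would place on all K subchannels, equalizing their gains.
sigma_bar = np.prod(sigma) ** (1.0 / K)

# For a square matrix this equals |det H|^(1/K)
print(sigma_bar, abs(np.linalg.det(H)) ** (1.0 / K))
```

Because all subchannels share this one gain, no bit allocation is needed to balance their error rates, which is the property the abstract invokes.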
In the second part of the thesis, we focus on MIMO transceiver design for slowly time-varying MIMO channels under the zero-forcing or MMSE criterion. Even though the GGMD/GMD DFE transceivers work for slowly time-varying MIMO channels by exploiting the instantaneous CSI at both ends, their performance is by no means optimal, since the temporal diversity of the time-varying channels is not exploited. Based on the GTD, we develop the space-time GTD (ST-GTD) for the decomposition of linear time-varying flat MIMO channels. Under the assumption that CSIT, CSIR and channel prediction are available, we use the proposed ST-GTD to develop space-time geometric mean decomposition (ST-GMD) DFE transceivers under the zero-forcing or MMSE criterion. Under perfect channel prediction, the new system minimizes both the average MSE at the detector in each space-time (ST) block (which consists of several coherence blocks), and the average per-ST-block BER in the moderately high SNR region. Moreover, the ST-GMD DFE transceiver designed under the MMSE criterion maximizes the Gaussian mutual information over the equivalent channel seen by each ST-block. In general, the newly proposed transceivers perform better than the GGMD-based systems since the superimposed temporal precoder is able to exploit the temporal diversity of time-varying channels. For practical applications, a novel ST-GTD based system which does not require channel prediction, but shares the same asymptotic BER performance with the ST-GMD DFE transceiver, is also proposed.
The third part of the thesis considers two quality of service (QoS) transceiver design problems for flat MIMO broadcast channels. The first is the power minimization problem (min-power) with a total bitrate constraint and per-stream BER constraints. The second is the rate maximization problem (max-rate) with a total transmit power constraint and per-stream BER constraints. Exploiting a particular class of joint triangularization (JT), we are able to jointly optimize the bit allocation and the broadcast DFE transceiver for the min-power and max-rate problems. The resulting optimal designs are called the minimum power JT broadcast DFE transceiver (MPJT) and the maximum rate JT broadcast DFE transceiver (MRJT), respectively. In addition to the optimal designs, two suboptimal designs based on the QR decomposition are proposed. They are realizable for an arbitrary number of users.
Finally, we investigate the design of a discrete Fourier transform (DFT) modulated filterbank transceiver (DFT-FBT) for LTV scalar channels. For both the case of known LTV channels and that of unknown wide-sense stationary uncorrelated scattering (WSSUS) statistical channels, we show how to optimize the transmitting and receiving prototypes of a DFT-FBT such that the SINR at the receiver is maximized. Also, a novel pilot-aided subspace channel estimation algorithm is proposed for orthogonal frequency division multiplexing (OFDM) systems with quasi-stationary multi-path Rayleigh fading channels. Using the concept of a difference co-array, the new technique can construct M^2 co-pilots from M physical pilot tones with alternating pilot placement. Subspace methods, such as MUSIC and ESPRIT, can be used to estimate the multipath delays, and the number of identifiable paths is up to O(M^2), theoretically. With the delay information, an MMSE estimator for the frequency response is derived. It is shown through simulations that the proposed method outperforms the conventional subspace channel estimator when the number of multipaths is greater than or equal to the number of physical pilots minus one.
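The difference co-array idea can be illustrated in a few lines: M pilot positions yield up to M^2 pairwise-difference lags, which is what lets subspace methods resolve more paths than there are pilots. A sketch with a hypothetical sparse pilot placement (the thesis's actual alternating placement may differ):

```python
import numpy as np

# Hypothetical pilot-tone indices (a nested-style placement chosen for
# illustration; not the placement used in the thesis).
pilots = np.array([1, 2, 3, 4, 8, 12, 16])
M = len(pilots)  # M = 7 physical pilots

# Difference co-array: all pairwise differences of pilot positions.
diffs = np.unique(pilots[:, None] - pilots[None, :])

# Far more distinct lags than physical pilots, bounded above by M^2.
print(M, len(diffs))
```

Each distinct lag acts as a virtual "co-pilot" sample of the channel's correlation, so the effective aperture for delay estimation grows like O(M^2).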
Abstract:
The field of cavity optomechanics explores the interaction of light with sound in an ever-increasing array of devices. This interaction allows the mechanical system to be both sensed and controlled by the optical system, opening up a wide variety of experiments, including the cooling of the mechanical resonator to its quantum mechanical ground state and the squeezing of the optical field upon interaction with the mechanical resonator, to name two.
In this work we explore two very different systems with different types of optomechanical coupling. The first system consists of two microdisk optical resonators stacked on top of each other and separated by a very small slot. The interaction of the disks causes their optical resonance frequencies to be extremely sensitive to the gap between the disks. By careful control of the gap between the disks, the optomechanical coupling can be made to be quadratic to first order which is uncommon in optomechanical systems. With this quadratic coupling the light field is now sensitive to the energy of the mechanical resonator and can directly control the potential energy trapping the mechanical motion. This ability to directly control the spring constant without modifying the energy of the mechanical system, unlike in linear optomechanical coupling, is explored.
Next, the bulk of this thesis deals with a high-mechanical-frequency optomechanical crystal which is used to coherently convert photons between different frequencies. This is accomplished via the engineered linear optomechanical coupling in these devices. Both classical and quantum systems utilize the interaction of light and matter across a wide range of energies. These systems are often not naturally compatible with one another and require a means of converting photons of dissimilar wavelengths to combine and exploit their different strengths. Here we theoretically propose and experimentally demonstrate coherent wavelength conversion of optical photons using photon-phonon translation in a cavity-optomechanical system. For an engineered silicon optomechanical crystal nanocavity supporting a 4 GHz localized phonon mode, optical signals in a 1.5 MHz bandwidth are coherently converted over an 11.2 THz frequency span between one cavity mode at wavelength 1460 nm and a second cavity mode at 1545 nm with a 93% internal (2% external) peak efficiency. The thermal and quantum limiting noise involved in the conversion process is also analyzed and, in terms of an equivalent photon number signal level, is found to correspond to internal noise levels of only 6×10^-3 and 4×10^-3 quanta, respectively.
We begin by developing the requisite theoretical background to describe the system. A significant amount of time is then spent describing the fabrication of these silicon nanobeams, with an emphasis on understanding the specifics and motivation. The experimental demonstration of wavelength conversion is then described and analyzed. It is determined that the method of coupling photons into and collecting them from the cavity is a fundamental limiting factor in the overall efficiency. Finally, a new coupling scheme is designed, fabricated, and tested that provides a means of coupling greater than 90% of photons into and out of the cavity, addressing one of the largest obstacles in the initial wavelength conversion experiment.
Abstract:
This thesis presents theories, analyses, and algorithms for detecting and estimating parameters of geospatial events with today's large, noisy sensor networks. A geospatial event is initiated by a significant change in the state of points in a region in a 3-D space over an interval of time. After the event is initiated it may change the state of points over larger regions and longer periods of time. Networked sensing is a typical approach for geospatial event detection. In contrast to traditional sensor networks comprised of a small number of high quality (and expensive) sensors, trends in personal computing devices and consumer electronics have made it possible to build large, dense networks at a low cost. The changes in sensor capability, network composition, and system constraints call for new models and algorithms suited to the opportunities and challenges of the new generation of sensor networks. This thesis offers a single unifying model and a Bayesian framework for analyzing different types of geospatial events in such noisy sensor networks. It presents algorithms and theories for estimating the speed and accuracy of detecting geospatial events as a function of parameters from both the underlying geospatial system and the sensor network. Furthermore, the thesis addresses network scalability issues by presenting rigorous scalable algorithms for data aggregation for detection. These studies provide insights to the design of networked sensing systems for detecting geospatial events. In addition to providing an overarching framework, this thesis presents theories and experimental results for two very different geospatial problems: detecting earthquakes and hazardous radiation. The general framework is applied to these specific problems, and predictions based on the theories are validated against measurements of systems in the laboratory and in the field.
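The shift from a few high-quality sensors to many cheap, noisy ones is what makes a Bayesian treatment natural: individually weak reports combine into a sharp posterior. A toy sketch of that effect (all probabilities and counts are hypothetical, not the thesis's models):

```python
import numpy as np

# Hypothetical numbers: many cheap sensors, each individually unreliable.
n = 200          # sensors reporting
k = 120          # how many fired
p_detect = 0.6   # P(sensor fires | event)
p_false  = 0.4   # P(sensor fires | no event)
prior    = 0.01  # prior probability of the event

# Log-likelihood of k-of-n fires under each hypothesis (binomial;
# the shared combinatorial factor cancels in the ratio).
def log_lik(p):
    return k * np.log(p) + (n - k) * np.log(1 - p)

log_odds = np.log(prior / (1 - prior)) + log_lik(p_detect) - log_lik(p_false)
posterior = 1.0 / (1.0 + np.exp(-log_odds))
print(posterior)  # near 1: the crowd of weak sensors is collectively decisive
```

Even though each sensor barely distinguishes the two hypotheses (0.6 vs 0.4), two hundred of them overwhelm a 1% prior, which is the qualitative point behind dense consumer-grade networks.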
Abstract:
For a Ti:sapphire laser system built on chirped pulse amplification, a method is proposed for obtaining a high-repetition-rate train of laser pulses. By changing the conventional operating mode of the Pockels cell electro-optic switch in the Ti:sapphire regenerative amplifier, the amplified intracavity pulse, from a chosen moment onward, has a fixed fraction (the dumping ratio) of its energy dumped out of the cavity on every round trip, so that a high-repetition-rate train of chirped laser pulses is produced within a finite time window. Based on Franz-Nodvik amplification theory, a theoretical model of this high-repetition-rate regenerative amplifier is established, and numerical calculations are used to systematically analyze the influence of the initial gain, the dumping start time, and the dumping ratio on the output pulse train. Under experimental conditions with a 35 mJ pump and a dumping ratio of 1/2, through extracavity …
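The scheme above can be caricatured numerically: Franz-Nodvik single-pass amplification with gain depletion, plus partial cavity dumping from a chosen round trip onward. All parameter values below are hypothetical, chosen only to show the shape of the output pulse train, not taken from the paper:

```python
import numpy as np

J_sat = 1.0        # saturation fluence (normalized units)
G = 8.0            # initial small-signal round-trip gain
J = 1e-4           # injected seed fluence
dump_start = 6     # round trip at which dumping begins
dump_ratio = 0.5   # fraction of intracavity energy dumped per round trip

train = []
for rt in range(12):
    J_in = J
    # Franz-Nodvik: fluence after one pass through the saturated gain medium
    J = J_sat * np.log(1.0 + G * (np.exp(J_in / J_sat) - 1.0))
    # Gain depletion: stored fluence J_sat*ln(G) drops by the extracted energy
    G = G * np.exp(-(J - J_in) / J_sat)
    if rt >= dump_start:
        train.append(dump_ratio * J)   # dumped pulse joins the output train
        J = (1.0 - dump_ratio) * J     # the remainder stays in the cavity

print(len(train), [round(float(p), 3) for p in train])
```

The pulse builds up over the first round trips, then each subsequent round trip emits one pulse of the train while the remaining gain is progressively depleted, which is why the initial gain, dumping start time, and dumping ratio jointly shape the train.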
Abstract:
The origin of beam disparity in emittance and betatron oscillation orbits, in and out of the polarization plane of the drive laser of laser-plasma accelerators, is explained in terms of betatron oscillations driven by the laser field. As trapped electrons accelerate, they move forward and interact with the laser pulse. For the bubble regime, a simple model is presented to describe this interaction in terms of a harmonic oscillator with a driving force from the laser and a restoring force from the plasma wake field. The resulting beam oscillations in the polarization plane, with period approximately the wavelength of the driving laser, increase emittance in that plane and cause microbunching of the beam. These effects are observed directly in 3D particle-in-cell simulations.
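The harmonic-oscillator picture in the model reduces to a driven oscillator: a wakefield restoring force at the (slow) betatron frequency plus a much faster laser driving term. A dimensionless toy integration (frequencies, drive amplitude, and step count are made up for illustration, not simulation parameters from the paper):

```python
import numpy as np

w_b, w_L, F0 = 1.0, 20.0, 0.5   # betatron freq., laser drive freq., drive amplitude
dt, steps = 1e-3, 40000

x, v = 0.0, 0.0
xs = []
for n in range(steps):
    t = n * dt
    # wake restoring force + laser driving force (semi-implicit Euler step)
    a = -w_b**2 * x + F0 * np.cos(w_L * t)
    v += a * dt
    x += v * dt
    xs.append(x)

# The trajectory superposes a fast component near the drive frequency w_L
# on the slow betatron oscillation at w_b, as in the driven-oscillator model.
print(min(xs), max(xs))
```

The fast component at the laser frequency is the piece that, in the polarization plane, grows the emittance and microbunches the beam.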
Abstract:
This dissertation consists of two parts. The first part presents an explicit procedure for applying multi-Regge theory to production processes. As an illustrative example, the case of three body final states is developed in detail, both with respect to kinematics and multi-Regge dynamics. Next, the experimental consistency of the multi-Regge hypothesis is tested in a specific high energy reaction; the hypothesis is shown to provide a good qualitative fit to the data. In addition, the results demonstrate a severe suppression of double Pomeranchon exchange, and show the coupling of two "Reggeons" to an external particle to be strongly damped as the particle's mass increases. Finally, with the use of two body Regge parameters, order of magnitude estimates of the multi-Regge cross section for various reactions are given.
The second part presents a diffraction model for high energy proton-proton scattering. This model, developed by Chou and Yang, assumes high energy elastic scattering results from absorption of the incident wave into the many available inelastic channels, with the absorption proportional to the amount of interpenetrating hadronic matter. The assumption that the hadronic matter distribution is proportional to the charge distribution relates the scattering amplitude for pp scattering to the proton form factor. The Chou-Yang model with the empirical proton form factor as input is then applied to calculate a high energy, fixed momentum transfer limit for the scattering cross section. This limiting cross section exhibits the same "dip" or "break" structure indicated in present experiments, but falls significantly below them in magnitude. Finally, possible spin dependence is introduced through a weak spin-orbit type term which gives rather good agreement with pp polarization data.
Abstract:
In this work, the development of a probabilistic approach to robust control is motivated by structural control applications in civil engineering. Often in civil structural applications, a system's performance is specified in terms of its reliability. In addition, the model and input uncertainty for the system may be described most appropriately using probabilistic or "soft" bounds on the model and input sets. The probabilistic robust control methodology contrasts with existing H∞/μ robust control methodologies that do not use probability information for the model and input uncertainty sets, yielding only the guaranteed (i.e., "worst-case") system performance, and no information about the system's probable performance which would be of interest to civil engineers.
The design objective for the probabilistic robust controller is to maximize the reliability of the uncertain structure/controller system for a probabilistically-described uncertain excitation. The robust performance is computed for a set of possible models by weighting the conditional performance probability for a particular model by the probability of that model, then integrating over the set of possible models. This integration is accomplished efficiently using an asymptotic approximation. The probable performance can be optimized numerically over the class of allowable controllers to find the optimal controller. Also, if structural response data becomes available from a controlled structure, its probable performance can easily be updated using Bayes's Theorem to update the probability distribution over the set of possible models. An updated optimal controller can then be produced, if desired, by following the original procedure. Thus, the probabilistic framework integrates system identification and robust control in a natural manner.
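The weighting-then-integration step, and the Bayesian update that follows it, can be sketched on a one-parameter model class. Everything here is hypothetical (the parameter grid, the stand-in failure-probability surface, and the likelihood are invented for illustration; the thesis uses an asymptotic approximation rather than brute-force summation):

```python
import numpy as np

# Discretized model set: a stiffness-like parameter theta with a prior.
theta = np.linspace(0.8, 1.2, 41)
prior = np.exp(-0.5 * ((theta - 1.0) / 0.1) ** 2)
prior /= prior.sum()

def p_fail(theta, gain):
    # Stand-in for P(failure | model, controller): best when theta*gain = 1.
    return 1.0 - np.exp(-5.0 * (theta * gain - 1.0) ** 2)

# Robust performance: conditional failure probability weighted by the
# probability of each model, summed over the model set.
gain = 1.0
robust_pf = np.sum(prior * p_fail(theta, gain))

# Bayes update: response data (via a made-up likelihood centered at 1.1)
# reweights the models, updating the performance estimate.
likelihood = np.exp(-0.5 * ((theta - 1.1) / 0.05) ** 2)
posterior = prior * likelihood
posterior /= posterior.sum()
updated_pf = np.sum(posterior * p_fail(theta, gain))

print(robust_pf, updated_pf)
```

The updated failure probability could then be re-minimized over the controller parameter (here `gain`), which is the sense in which the framework integrates system identification and robust control.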
The probabilistic robust control methodology is applied to two systems in this thesis. The first is a high-fidelity computer model of a benchmark structural control laboratory experiment. For this application, uncertainty in the input model only is considered. The probabilistic control design minimizes the failure probability of the benchmark system while remaining robust with respect to the input model uncertainty. The performance of an optimal low-order controller compares favorably with higher-order controllers for the same benchmark system which are based on other approaches. The second application is to the Caltech Flexible Structure, which is a light-weight aluminum truss structure actuated by three voice coil actuators. A controller is designed to minimize the failure probability for a nominal model of this system. Furthermore, the method for updating the model-based performance calculation given new response data from the system is illustrated.
Abstract:
In the quest to develop viable designs for third-generation optical interferometric gravitational-wave detectors, one strategy is to monitor the relative momentum or speed of the test-mass mirrors, rather than monitoring their relative position. The most straightforward design for a speed-meter interferometer that accomplishes this is described and analyzed in Chapter 2. This design (due to Braginsky, Gorodetsky, Khalili, and Thorne) is analogous to a microwave-cavity speed meter conceived by Braginsky and Khalili. A mathematical mapping between the microwave speed meter and the optical interferometric speed meter is developed and used to show (in accord with the speed being a quantum nondemolition observable) that in principle the interferometric speed meter can beat the gravitational-wave standard quantum limit (SQL) by an arbitrarily large amount, over an arbitrarily wide range of frequencies. However, in practice, to reach or beat the SQL, this specific speed meter requires exorbitantly high input light power. The physical reason for this is explored, along with other issues such as constraints on performance due to optical dissipation.
Chapter 3 proposes a more sophisticated version of a speed meter. This new design requires only a modest input power and appears to be a fully practical candidate for third-generation LIGO. It can beat the SQL (the approximate sensitivity of second-generation LIGO interferometers) over a broad range of frequencies (~10 to 100 Hz in practice) by a factor h/h_SQL ~ √(W^SQL_circ / W_circ). Here W_circ is the light power circulating in the interferometer arms and W^SQL_circ ≃ 800 kW is the circulating power required to beat the SQL at 100 Hz (the LIGO-II power). If squeezed vacuum (with a power-squeeze factor e^{-2R}) is injected into the interferometer's output port, the SQL can be beat with a much reduced laser power: h/h_SQL ~ √(W^SQL_circ / (W_circ e^{2R})). For realistic parameters (e^{2R} ≃ 10 and W_circ ≃ 800 to 2000 kW), the SQL can be beat by a factor ~3 to 4 from 10 to 100 Hz. [However, as the power increases in these expressions, the speed meter becomes more narrow band; additional power and re-optimization of some parameters are required to maintain the wide band.] By performing frequency-dependent homodyne detection on the output (with the aid of two kilometer-scale filter cavities), one can markedly improve the interferometer's sensitivity at frequencies above 100 Hz.
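Reading the garbled squeeze-factor exponent as e^{2R} ≈ 10 (an assumption; the extraction mangled the superscripts), the quoted scaling h/h_SQL ~ √(W^SQL_circ/(W_circ e^{2R})) can be evaluated directly for the quoted power range. It gives SQL-beating factors of roughly 3 to 5, broadly consistent with the abstract's ~3 to 4 once the re-optimization caveat is allowed for; this is a back-of-envelope check, not a result from the thesis:

```python
import math

W_sql = 800.0   # W_circ^SQL in kW, as quoted
e2R = 10.0      # assumed reading of the power-squeeze factor e^{2R}

for W_circ in (800.0, 2000.0):          # quoted circulating-power range, kW
    h_ratio = math.sqrt(W_sql / (W_circ * e2R))   # h / h_SQL
    beat = 1.0 / h_ratio                          # factor by which SQL is beaten
    print(W_circ, round(beat, 2))
```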
Chapters 2 and 3 are part of an ongoing effort to develop a practical variant of an interferometric speed meter and to combine the speed meter concept with other ideas to yield a promising third-generation interferometric gravitational-wave detector that entails low laser power.
Chapter 4 is a contribution to the foundations for analyzing sources of gravitational waves for LIGO. Specifically, it presents an analysis of the tidal work done on a self-gravitating body (e.g., a neutron star or black hole) in an external tidal field (e.g., that of a binary companion). The change in the mass-energy of the body as a result of the tidal work, or "tidal heating," is analyzed using the Landau-Lifshitz pseudotensor and the local asymptotic rest frame of the body. It is shown that the work done on the body is gauge invariant, while the body-tidal-field interaction energy contained within the body's local asymptotic rest frame is gauge dependent. This is analogous to Newtonian theory, where the interaction energy is shown to depend on how one localizes gravitational energy, but the work done on the body is independent of that localization. These conclusions play a role in analyses, by others, of the dynamics and stability of the inspiraling neutron-star binaries whose gravitational waves are likely to be seen and studied by LIGO.
Abstract:
Three separate topics, each stimulated by experiments, are treated theoretically in this dissertation: isotopic effects of ozone, electron transfer at interfaces, and intramolecular directional electron transfer in a supramolecular system.
The strange mass-independent isotope effect for the enrichment of ozone, which has been a puzzle in the literature for some 20 years, and the equally puzzling unconventional strong mass-dependent effect of individual reaction rate constants are studied as different aspects of a symmetry-driven behavior. A statistical (RRKM-based) theory with a hindered-rotor transition state is used. The individual rate constant ratios of recombination reactions at low pressures are calculated using the theory involving (1) small deviation from the statistical density of states for symmetric isotopomers, and (2) weak collisions for deactivation of the vibrationally excited ozone molecules. The weak collision and partitioning among exit channels play major roles in producing the large unconventional isotope effect in "unscrambled" systems. The enrichment studies reflect instead the non-statistical effect in "scrambled" systems. The theoretical results of low-pressure ozone enrichments and individual rate constant ratios obtained from these calculations are consistent with the corresponding experimental results. The isotopic exchange rate constant for the reaction ^(16)O + ^(18)O^(18)O → ^(16)O^(18)O + ^(18)O provides information on the nature of a variationally determined hindered-rotor transition state using experimental data at 130 K and 300 K. Pressure effects on the recombination rate constant, on the individual rate constant ratios and on the enrichments are also investigated. The theoretical results are consistent with the experimental data. The temperature dependence of the enrichment and rate constant ratios is also discussed, and experimental tests are suggested. The desirability of a more accurate potential energy surface for ozone in the transition state region is also noted.
Electron transfer reactions at semiconductor/liquid interfaces are studied using a tight-binding model for the semiconductors. The slab method and a z-transform method are employed in obtaining the tight-binding electronic structures of semiconductors having surfaces. The maximum electron transfer rate constants at Si/viologen^(2-/+) and InP/Me_(2)Fc^(+/0) interfaces are computed using the tight-binding type calculations for the solid and the extended-Hückel method for the coupling to the redox agent at the interface. These electron transfer reactions are also studied using a free electron model for the semiconductor and the redox molecule, where Bardeen's method is adapted to calculate the coupling matrix element between the molecular and semiconductor electronic states. The calculated results for the maximum rate constant of electron transfer from the semiconductor bulk states are compared with the experimentally measured values of Lewis and coworkers, and are in reasonable agreement, without adjusting parameters. In the case of the InP/liquid interface, the unusual current vs. applied potential behavior is additionally interpreted, in part, by the presence of surface states.
Photoinduced electron transfer reactions in small supramolecular systems, such as 4-aminonaphthalimide compounds, are interesting in that there are, in principle, two alternative pathways (directions) for the electron transfer. The electron transfer, however, is unidirectional, as deduced from pH-dependent fluorescence quenching studies on different compounds. The roles of the electronic coupling matrix element and of the charges in protonation are considered to explain the directionality of the electron transfer and various other results. A related mechanism is proposed to interpret the fluorescence behavior of similar molecules as fluorescent sensors of metal ions.