15 results for New Space Vector Modulation in CaltechTHESIS
Abstract:
Signal processing techniques play important roles in the design of digital communication systems. These include information manipulation, transmitter signal processing, channel estimation, channel equalization, and receiver signal processing. By interacting with communication theory and system implementation technologies, signal processing specialists develop efficient schemes for various communication problems by wisely exploiting mathematical tools such as analysis, probability theory, matrix theory, optimization theory, and many others. In recent years, researchers realized that multiple-input multiple-output (MIMO) channel models are applicable to a wide range of different physical communication channels. Using elegant matrix-vector notation, many MIMO transceiver (including the precoder and equalizer) design problems can be solved by matrix and optimization theory. Furthermore, researchers showed that majorization theory and matrix decompositions, such as the singular value decomposition (SVD), geometric mean decomposition (GMD), and generalized triangular decomposition (GTD), provide unified frameworks for solving many point-to-point MIMO transceiver design problems.
In this thesis, we consider transceiver design problems for linear time-invariant (LTI) flat MIMO channels, linear time-varying narrowband MIMO channels, flat MIMO broadcast channels, and doubly selective scalar channels. The channel estimation problem is also considered. The main contributions of this dissertation are the development of new matrix decompositions and the use of these decompositions, together with majorization theory, in the design of practical transmit-receive schemes for transceiver optimization problems. Elegant solutions are obtained, novel transceiver structures are developed, ingenious algorithms are proposed, and performance analyses are derived.
The first part of the thesis focuses on transceiver design for LTI flat MIMO channels. We propose a novel matrix decomposition, the generalized geometric mean decomposition (GGMD), which decomposes a complex matrix into a product of several sets of semi-unitary matrices and upper triangular matrices in an iterative manner. The complexity of the GGMD is always less than or equal to that of the geometric mean decomposition (GMD), and the optimal GGMD parameters which yield the minimal complexity are derived. Based on channel state information (CSI) at both the transmitter (CSIT) and receiver (CSIR), the GGMD is used to design a butterfly-structured decision feedback equalizer (DFE) MIMO transceiver which achieves the minimum average mean square error (MSE) under a total transmit power constraint. A novel iterative detection algorithm for the corresponding receiver is also proposed. For application to cyclic prefix (CP) systems, in which the SVD of the equivalent channel matrix can be computed easily, the proposed GGMD transceiver has a K/log_2(K)-fold complexity advantage over the GMD transceiver, where K is the number of data symbols per data block and is a power of 2. The performance analysis shows that the GGMD DFE transceiver converts a MIMO channel into a set of parallel subchannels with the same bias and signal-to-interference-plus-noise ratios (SINRs). Hence, the average bit error rate (BER) is automatically minimized without the need for bit allocation. Moreover, the proposed transceiver can achieve the channel capacity simply by applying independent scalar Gaussian codes of the same rate on the subchannels.
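For orientation, the standard GMD that the GGMD generalizes can be sketched in a few lines of NumPy. The function below follows the published Givens-rotation construction of Jiang, Hager and Li (2005) rather than the thesis's GGMD, and is only an illustrative sketch: starting from the SVD, it rotates the factors so that R is upper triangular with every diagonal entry equal to the geometric mean of the singular values, which is exactly the property that yields identical subchannel SINRs without bit allocation.

```python
import numpy as np

def gmd(H):
    """Geometric mean decomposition H = Q @ R @ P.conj().T (illustrative sketch).

    R is upper triangular with every diagonal entry equal to the geometric mean
    of the singular values of H, following the Givens-rotation construction of
    Jiang, Hager & Li (2005).  Assumes H has full rank.
    """
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    K = len(s)
    sbar = np.exp(np.mean(np.log(s)))                 # geometric mean of singular values
    Q, P, R = U.copy(), Vh.conj().T.copy(), np.diag(s).astype(complex)

    def swap(i, j):
        """Symmetric permutation of R plus matching column swaps of Q and P."""
        if i != j:
            R[[i, j], :] = R[[j, i], :]
            R[:, [i, j]] = R[:, [j, i]]
            Q[:, [i, j]] = Q[:, [j, i]]
            P[:, [i, j]] = P[:, [j, i]]

    for k in range(K - 1):
        d = np.real(np.diag(R))
        swap(k, k + int(np.argmax(d[k:])))                 # ensure R[k, k]     >= sbar
        d = np.real(np.diag(R))
        swap(k + 1, k + 1 + int(np.argmin(d[k + 1:])))     # ensure R[k+1, k+1] <= sbar
        d1, d2 = np.real(R[k, k]), np.real(R[k + 1, k + 1])
        ratio = 1.0 if abs(d1 - d2) < 1e-12 * sbar else \
            np.clip((sbar**2 - d2**2) / (d1**2 - d2**2), 0.0, 1.0)
        c, sn = np.sqrt(ratio), np.sqrt(1.0 - ratio)
        G1 = np.array([[c, -sn], [sn, c]])                             # right rotation
        G2 = np.array([[c * d1, -sn * d2], [sn * d2, c * d1]]) / sbar  # left rotation
        R[:, k:k + 2] = R[:, k:k + 2] @ G1
        R[k:k + 2, :] = G2.T @ R[k:k + 2, :]
        Q[:, k:k + 2] = Q[:, k:k + 2] @ G2
        P[:, k:k + 2] = P[:, k:k + 2] @ G1
    return Q, np.triu(R), P

# Toy check on a random 4x4 complex channel: equal-diagonal R, and Q R P^H == H.
H = (np.random.randn(4, 4) + 1j * np.random.randn(4, 4)) / np.sqrt(2)
Q, R, P = gmd(H)
print(np.round(np.diag(R).real, 4), np.allclose(Q @ R @ P.conj().T, H))
```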
In the second part of the thesis, we focus on MIMO transceiver design for slowly time-varying MIMO channels under the zero-forcing or MMSE criterion. Even though the GGMD/GMD DFE transceivers work for slowly time-varying MIMO channels by exploiting the instantaneous CSI at both ends, their performance is by no means optimal, since the temporal diversity of the time-varying channels is not exploited. Based on the GTD, we develop the space-time GTD (ST-GTD) for the decomposition of linear time-varying flat MIMO channels. Under the assumption that CSIT, CSIR, and channel prediction are available, we use the proposed ST-GTD to develop space-time geometric mean decomposition (ST-GMD) DFE transceivers under the zero-forcing or MMSE criterion. Under perfect channel prediction, the new system minimizes both the average MSE at the detector in each space-time (ST) block (which consists of several coherence blocks) and the average per-ST-block BER in the moderately high SNR region. Moreover, the ST-GMD DFE transceiver designed under the MMSE criterion maximizes the Gaussian mutual information over the equivalent channel seen by each ST block. In general, the newly proposed transceivers perform better than the GGMD-based systems, since the superimposed temporal precoder is able to exploit the temporal diversity of time-varying channels. For practical applications, a novel ST-GTD based system which does not require channel prediction, but shares the same asymptotic BER performance as the ST-GMD DFE transceiver, is also proposed.
The third part of the thesis considers two quality-of-service (QoS) transceiver design problems for flat MIMO broadcast channels. The first is the power minimization problem (min-power) with a total bitrate constraint and per-stream BER constraints. The second is the rate maximization problem (max-rate) with a total transmit power constraint and per-stream BER constraints. Exploiting a particular class of joint triangularization (JT), we are able to jointly optimize the bit allocation and the broadcast DFE transceiver for the min-power and max-rate problems. The resulting optimal designs are called the minimum power JT broadcast DFE transceiver (MPJT) and the maximum rate JT broadcast DFE transceiver (MRJT), respectively. In addition to the optimal designs, two suboptimal designs based on the QR decomposition are proposed; they are realizable for an arbitrary number of users.
Finally, we investigate the design of a discrete Fourier transform (DFT) modulated filterbank transceiver (DFT-FBT) over linear time-varying (LTV) scalar channels. For both the case of known LTV channels and that of unknown wide-sense stationary uncorrelated scattering (WSSUS) statistical channels, we show how to optimize the transmit and receive prototype filters of a DFT-FBT such that the SINR at the receiver is maximized. A novel pilot-aided subspace channel estimation algorithm is also proposed for orthogonal frequency division multiplexing (OFDM) systems with quasi-stationary multipath Rayleigh fading channels. Using the concept of a difference co-array, the new technique can construct M^2 co-pilots from M physical pilot tones with an alternating pilot placement. Subspace methods such as MUSIC and ESPRIT can then be used to estimate the multipath delays, and the number of identifiable paths is, in theory, up to O(M^2). With the delay information, an MMSE estimator of the channel frequency response is derived. Simulations show that the proposed method outperforms the conventional subspace channel estimator when the number of multipath components is greater than or equal to the number of physical pilots minus one.
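To make the co-array idea concrete, the short NumPy snippet below forms the difference co-array of a set of pilot tone indices. The pilot placement shown is only an illustrative stand-in (it is not the alternating placement used in the thesis); the point is that M physical pilots can produce on the order of M^2 distinct co-array lags, which is what allows MUSIC or ESPRIT to resolve up to O(M^2) path delays.

```python
import numpy as np

def co_array_lags(pilot_tones):
    """Return the sorted distinct lags of the difference co-array of a pilot set."""
    p = np.asarray(pilot_tones)
    diffs = p[:, None] - p[None, :]          # all pairwise differences (M x M)
    return np.unique(diffs)

# Hypothetical sparse placement of M = 6 pilot tones (illustrative only).
pilots = [0, 1, 2, 9, 16, 23]
lags = co_array_lags(pilots)
print(len(pilots), "physical pilots ->", len(lags), "distinct co-array lags")
# With a well-chosen placement the number of lags grows like M^2, so subspace
# methods applied on the co-array can resolve far more path delays than the
# M physical pilots alone would suggest.
```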
Abstract:
Let L be the algebra of all linear transformations on an n-dimensional vector space V over a field F, and let A, B ∈ L. Let A_{i+1} = A_i B - B A_i, i = 0, 1, 2, …, with A_0 = A. Let f_k(A, B; σ) = A_{2K+1} - σ_1 A_{2K-1} + σ_2 A_{2K-3} - … + (-1)^K σ_K A_1, where σ = (σ_1, σ_2, …, σ_K), the σ_i belong to F, and K = k(k-1)/2. Taussky and Wielandt [Proc. Amer. Math. Soc., 13 (1962), 732-735] showed that f_n(A, B; σ) = 0 if σ_i is the ith elementary symmetric function of the (β_r - β_s)^2, 1 ≤ r < s ≤ n, i = 1, 2, …, N, with N = n(n-1)/2, where the β_r are the characteristic roots of B. In this thesis we discuss relations involving f_k(X, Y; σ) where X, Y ∈ L and 1 ≤ k < n. We show: 1. If F is infinite and if for each X ∈ L there exists σ so that f_k(A, X; σ) = 0, where 1 ≤ k < n, then A is a scalar transformation. 2. If F is algebraically closed, a necessary and sufficient condition that there exists a basis of V with respect to which the matrices of A and B are both in block upper triangular form, where the blocks on the diagonals are either one- or two-dimensional, is that certain products X_1 X_2 … X_r belong to the radical of the algebra generated by A and B over F, where X_i has the form f_2(A, P(A, B); σ), for all polynomials P(x, y). We partially generalize this to the case where the blocks have dimensions ≤ k. 3. If A and B generate L, if the characteristic of F does not divide n, and if there exists σ so that f_k(A, B; σ) = 0 for some k with 1 ≤ k < n, then the characteristic roots of B belong to the splitting field of g_k(w; σ) = w^{2K+1} - σ_1 w^{2K-1} + σ_2 w^{2K-3} - … + (-1)^K σ_K w over F. We use this result to prove a theorem involving a generalized form of property L [cf. Motzkin and Taussky, Trans. Amer. Math. Soc., 73 (1952), 108-114]. 4. Also we give mild generalizations of results of McCoy [Amer. Math. Soc. Bull., 42 (1936), 592-600] and Drazin [Proc. London Math. Soc., 1 (1951), 222-231].
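For the simplest nontrivial case n = 2 (so that K = 1 and f_2(A, B; σ) = A_3 - σ_1 A_1 with σ_1 = (β_1 - β_2)^2), the Taussky-Wielandt identity can be checked numerically; the snippet below is merely such a sanity check on random 2 x 2 real matrices, not part of the thesis.

```python
import numpy as np

# Numerical check of f_2(A, B; sigma) = A_3 - sigma_1 * A_1 = 0 for n = 2,
# where A_1 = AB - BA, A_{i+1} = A_i B - B A_i, and sigma_1 = (beta_1 - beta_2)^2
# is built from the characteristic roots beta_1, beta_2 of B.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2))

A1 = A @ B - B @ A
A2 = A1 @ B - B @ A1
A3 = A2 @ B - B @ A2

beta = np.linalg.eigvals(B)
sigma1 = (beta[0] - beta[1]) ** 2

print(np.allclose(A3, sigma1 * A1))   # True up to roundoff
```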
Solar flare particle propagation--comparison of a new analytic solution with spacecraft measurements
Abstract:
A new analytic solution has been obtained to the complete Fokker-Planck equation for solar flare particle propagation including the effects of convection, energy-change, corotation, and diffusion with κ_r = constant and κ_θ ∝ r^2. It is assumed that the particles are injected impulsively at a single point in space, and that a boundary exists beyond which the particles are free to escape. Several solar flare particle events have been observed with the Caltech Solar and Galactic Cosmic Ray Experiment aboard OGO-6. Detailed comparisons of the predictions of the new solution with these observations of 1-70 MeV protons show that the model adequately describes both the rise and decay times, indicating that κ_r = constant is a better description of conditions inside 1 AU than is κ_r ∝ r. With an outer boundary at 2.7 AU, a solar wind velocity of 400 km/sec, and a radial diffusion coefficient κ_r ≈ 2-8 x 10^20 cm^2/sec, the model gives reasonable fits to the time-profile of 1-10 MeV protons from "classical" flare-associated events. It is not necessary to invoke a scatter-free region near the sun in order to reproduce the fast rise times observed for directly-connected events. The new solution also yields a time-evolution for the vector anisotropy which agrees well with previously reported observations.
In addition, the new solution predicts that, during the decay phase, a typical convex spectral feature initially at energy T_o will move to lower energies at an exponential rate given by T_KINK = T_o exp(-t/τ_KINK). Assuming adiabatic deceleration and a boundary at 2.7 AU, the solution yields τ_KINK ≈ 100 h, which is faster than the measured ~200 h time constant and slower than the adiabatic rate of ~78 h at 1 AU. Two possible explanations are that the boundary is at ~5 AU or that some other energy-change process is operative.
Abstract:
Neurons in the primate lateral intraparietal area (area LIP) carry visual, saccade-related, and eye position activities. The visual and saccade activities are anchored in a retinotopic framework, and the overall response magnitude is modulated by eye position. It was proposed that the modulation by eye position might be the basis of a distributed coding of target locations in head-centered space. Other recording studies demonstrated that area LIP is involved in oculomotor planning. Together, these results suggest that area LIP transforms sensory information for motor functions. In this thesis I further explore the role of area LIP in processing saccadic eye movements by observing the effects of reversible inactivation of this area. Macaque monkeys were trained to perform visually guided saccades, memory saccades, and a double-saccade task designed to examine the use of the eye position signal. Finally, by intermixing visual saccades with trials in which two targets were presented on opposite sides of the fixation point, I examined visual extinction behavior.
In chapter 2, I will show that lesion of area LIP results in increased latency of contralesional visual and memory saccades. Contralesional memory saccades are also hypometric and slower in velocity. Moreover, the impairment of memory saccades does not vary with the duration of the delay period. This suggests that the oculomotor deficits observed after inactivation of area LIP are not due to the disruption of spatial memory.
In chapter 3, I will show that lesion of area LIP does not severely affect spontaneous eye movements. However, the monkeys made fewer contralesional saccades and tended to confine their gaze to the ipsilesional field after inactivation of area LIP. On the other hand, lesion of area LIP results in extinction of the contralesional stimulus. When the initial fixation position was varied so that the retinal and spatial locations of the targets could be dissociated, it was found that the extinction behavior was best described in a head-centered coordinate frame.
In chapter 4, I will show that inactivation of area LIP disrupts the use of the eye position signal to compute the second movement correctly in the double-saccade task. If the first saccade steps into the contralesional field, the error rate and latency of the second saccade are both increased. Furthermore, the direction of the first eye movement largely has no effect on the impairment of the second saccade. I will argue that this study provides important evidence that the extraretinal signal used for saccadic localization is eye position rather than a displacement vector.
In chapter 5, I will demonstrate that in monkeys with parietal inactivation the eye drifts toward the lesioned side at the end of memory saccades made in darkness. This result suggests that the eye position activity in the posterior parietal cortex is active in nature and subserves gaze holding.
Overall, these results further support the view that area LIP neurons encode spatial locations in a craniotopic framework and that the area is involved in processing voluntary eye movements.
Abstract:
The construction and LHC phenomenology of the razor variables M_R, an event-by-event indicator of the heavy particle mass scale, and R, a dimensionless variable related to the transverse momentum imbalance of events and the missing transverse energy, are presented. The variables are used in the analysis of the first proton-proton collision dataset at CMS (35 pb^-1) in a search for superpartners of the quarks and gluons, targeting indirect hints of dark matter candidates in the context of supersymmetric theoretical frameworks. The analysis produced the most sensitive SUSY results to date and extended the LHC reach far beyond the previous Tevatron results. A generalized inclusive search is subsequently presented for new heavy particle pairs produced in √s = 7 TeV proton-proton collisions at the LHC, using 4.7 ± 0.1 fb^-1 of integrated luminosity from the second LHC run of 2011. The selected events are analyzed in the 2D razor space of M_R and R, and the analysis is performed in 12 tiers of all-hadronic, single-lepton, and double-lepton final states, in the presence and absence of b-quarks, probing the third-generation sector using the event heavy-flavor content. The search is sensitive to generic supersymmetry models with minimal assumptions about the superpartner decay chains. No excess is observed in the number or shape of event yields relative to Standard Model predictions. Exclusion limits are derived in the CMSSM framework, with gluino masses up to 800 GeV and squark masses up to 1.35 TeV excluded at 95% confidence level, depending on the model parameters. The results are also interpreted for a collection of simplified models, in which gluinos are excluded with masses as large as 1.1 TeV for small neutralino masses, and first- and second-generation squarks, stops, and sbottoms are excluded for masses up to about 800, 425, and 400 GeV, respectively.
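For reference, the razor variables themselves are simple functions of the two megajet momenta and the missing transverse energy. The sketch below follows the definitions used in the published CMS razor analyses (megajets treated as massless, with R = M_T^R / M_R); it is a schematic illustration, not the analysis code.

```python
import numpy as np

def razor_variables(p1, p2, met):
    """Razor variables from two megajet 3-momenta and the MET 2-vector.

    p1, p2 : (px, py, pz) of the two megajets, treated as massless
    met    : (Ex_miss, Ey_miss)
    Returns (M_R, M_T^R, R), following the standard CMS razor definitions.
    """
    p1, p2, met = map(np.asarray, (p1, p2, met))
    mag1, mag2 = np.linalg.norm(p1), np.linalg.norm(p2)
    pt1, pt2 = p1[:2], p2[:2]

    m_r = np.sqrt((mag1 + mag2) ** 2 - (p1[2] + p2[2]) ** 2)
    met_mag = np.linalg.norm(met)
    m_t_r = np.sqrt(0.5 * (met_mag * (np.linalg.norm(pt1) + np.linalg.norm(pt2))
                           - np.dot(met, pt1 + pt2)))
    return m_r, m_t_r, m_t_r / m_r

# Toy event (GeV): two roughly back-to-back megajets plus some missing energy.
mr, mtr, r = razor_variables([200.0, 30.0, 120.0], [-180.0, -10.0, -60.0], [40.0, 25.0])
print(round(mr, 1), round(mtr, 1), round(r, 3))
```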
With the discovery of a new boson by the CMS and ATLAS experiments in the γ-γ and 4-lepton final states, the identity of the putative Higgs candidate must be established through measurements of its properties. The spin and quantum numbers are of particular importance, and we describe a method, developed before the discovery, for measuring the J^PC of this particle using the observed signal events in the H to ZZ* to 4 lepton channel. Adaptations of the razor kinematic variables are introduced for the H to WW* to 2 lepton/2 neutrino channel, improving the resonance mass resolution and increasing the discovery significance. The prospects for incorporating this channel in an examination of the new boson's J^PC are discussed, with indications that it could provide information complementary to the H to ZZ* to 4 lepton final state, particularly for measuring CP violation in these decays.
Abstract:
Cosmic birefringence (CB)---a rotation of the photon-polarization plane in vacuum---is a generic signature of new scalar fields that could provide dark energy. Previously, WMAP observations excluded a uniform CB-rotation angle larger than a degree.
In this thesis, we develop a minimum-variance estimator formalism for reconstructing a direction-dependent rotation from full-sky CMB maps, and forecast more than an order-of-magnitude improvement in sensitivity with incoming Planck data and future satellite missions. Next, we perform the first analysis of WMAP-7 data to look for rotation-angle anisotropies and report a null detection of the rotation-angle power-spectrum multipoles below L=512, constraining the quadrupole amplitude of a scale-invariant power spectrum to less than one degree. We further explore the use of the cross-correlation between CMB temperature and the rotation for detecting the CB signal, for different quintessence models. We find that it may improve sensitivity in the case of a marginal detection, and provide an empirical handle for distinguishing details of the new physics indicated by CB.
We then consider other parity-violating physics beyond the standard model---in particular, a chiral inflationary gravitational-wave background. We show that WMAP has no constraining power, while a cosmic-variance-limited experiment would be capable of detecting only a large parity violation. In the case of a strong detection of EB/TB correlations, CB can be readily distinguished from chiral gravity waves.
We next adapt our CB analysis to investigate patchy screening of the CMB, driven by inhomogeneities during the Epoch of Reionization (EoR). We constrain a toy model of reionization with WMAP-7 data, and show that data from Planck should start approaching interesting portions of the EoR parameter space and can be used to exclude reionization tomographies with large ionized bubbles.
In light of the upcoming data from low-frequency radio observations of the redshifted 21-cm line from the EoR, we examine probability-distribution functions (PDFs) and difference PDFs of the simulated 21-cm brightness temperature, and discuss the information that can be recovered using these statistics. We find that PDFs are insensitive to details of small-scale physics, but highly sensitive to the properties of the ionizing sources and the size of ionized bubbles.
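As a small illustration of these statistics, the snippet below computes a one-point PDF and a difference PDF (the histogram of T(x + r) - T(x) at a fixed lag r) for a toy two-dimensional brightness-temperature map; Gaussian noise stands in here for an actual 21-cm simulation box.

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.normal(20.0, 5.0, size=(128, 128))    # toy "brightness temperature" map [mK]

# One-point PDF: normalized histogram of the map values.
pdf, edges = np.histogram(T, bins=40, density=True)

# Difference PDF at lag r pixels along one axis: histogram of T(x + r) - T(x).
r = 8
diff = np.roll(T, -r, axis=0) - T
diff_pdf, diff_edges = np.histogram(diff, bins=40, density=True)

print(pdf.sum() * np.diff(edges)[0])          # ~1.0 (normalization check)
print(diff.mean(), diff.std())                # summary of the difference PDF
```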
Finally, we discuss prospects for related future investigations.
Abstract:
In this thesis, we develop an efficient collapse prediction model, the PFA (Peak Filtered Acceleration) model, for buildings subjected to different types of ground motions.
For the structural system, the PFA model covers modern steel and reinforced concrete moment-resisting frame buildings (and potentially reinforced concrete shear wall buildings). For ground motions, it covers ramp-pulse-like ground motions, long-period ground motions, and short-period ground motions.
To predict whether a building will collapse in response to a given ground motion, we first extract the long-period components of the ground motion using a Butterworth low-pass filter with a suggested order and cutoff frequency. The order depends on the type of ground motion, and the cutoff frequency depends on the building's natural frequency and ductility. We then compare the filtered acceleration time history with the capacity of the building. The capacity of the building is a constant for two-dimensional buildings and a limit domain for three-dimensional buildings. If the filtered acceleration exceeds the building's capacity, the building is predicted to collapse; otherwise, it is expected to survive the ground motion.
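A minimal sketch of this prediction step, using SciPy's Butterworth filter, is given below. The filter order, cutoff frequency, and capacity value are placeholders: in the actual model they are tied to the ground-motion type and to the building's period and ductility, as described above.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def pfa_collapse_check(accel, dt, cutoff_hz, order=4, capacity=0.4 * 9.81):
    """Sketch of the PFA collapse check for a single-capacity (2-D) building model.

    accel     : ground acceleration time history [m/s^2]
    dt        : time step [s]
    cutoff_hz : low-pass cutoff [Hz] (tied to period and ductility in the thesis)
    order     : Butterworth filter order (tied to ground-motion type in the thesis)
    capacity  : lateral capacity expressed as an acceleration [m/s^2] (placeholder)
    Returns (predicted_collapse, peak_filtered_acceleration).
    """
    b, a = butter(order, cutoff_hz, btype="low", fs=1.0 / dt)
    filtered = filtfilt(b, a, accel)             # long-period component of the record
    pfa = np.max(np.abs(filtered))               # peak filtered acceleration
    return pfa > capacity, pfa

# Toy record: a 0.3 g pulse-like motion plus high-frequency content, 50 Hz sampling.
dt = 0.02
t = np.arange(0, 20, dt)
accel = (3.0 * np.exp(-0.5 * ((t - 5.0) / 1.0) ** 2) * np.sin(2 * np.pi * 0.5 * t)
         + 0.8 * np.sin(2 * np.pi * 8.0 * t))
collapse, pfa = pfa_collapse_check(accel, dt, cutoff_hz=1.0)
print(collapse, round(pfa, 2))
```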
The parameters used in the PFA model, which include the fundamental period, global ductility, and lateral capacity, can be obtained either from numerical analysis or by interpolation based on the reference building system proposed in this thesis.
The PFA collapse prediction model greatly reduces computational complexity while achieving good accuracy. It is verified by FEM simulations of 13 frame building models and 150 ground motion records.
Based on the developed collapse prediction model, we propose to use PFA (Peak Filtered Acceleration) as a new ground motion intensity measure for collapse prediction. We compare PFA with traditional intensity measures PGA, PGV, PGD, and Sa in collapse prediction and find that PFA has the best performance among all the intensity measures.
We also provide a closed form of the PFA collapse prediction model in terms of a vector intensity measure (PGV, PGD) for practical collapse risk assessment.
Abstract:
This thesis presents a concept for ultra-lightweight deformable mirrors based on a thin substrate of optical surface quality coated with continuous active piezopolymer layers that provide modes of actuation and shape correction. This concept eliminates any kind of stiff backing structure for the mirror surface and exploits micro-fabrication technologies to provide tight integration of the active materials into the mirror structure and to avoid actuator print-through effects. Proof-of-concept, 10-cm-diameter mirrors with a low areal density of about 0.5 kg/m² have been designed, built, and tested to measure their shape-correction performance and to verify the models used for design. The low-cost manufacturing scheme uses replication techniques and strives to minimize residual stresses that cause the optical figure to deviate from that of the master mandrel. It does not require precision tolerancing, is lightweight, and is therefore potentially scalable to larger diameters for use in large, modular space telescopes. Other potential applications for such a laminate could include ground-based mirrors for solar energy collection, adaptive optics for atmospheric turbulence, laser communications, and other shape control applications.
The immediate application for these mirrors is for the Autonomous Assembly and Reconfiguration of a Space Telescope (AAReST) mission, which is a university mission under development by Caltech, the University of Surrey, and JPL. The design concept, fabrication methodology, material behaviors and measurements, mirror modeling, mounting and control electronics design, shape control experiments, predictive performance analysis, and remaining challenges are presented herein. The experiments have validated numerical models of the mirror, and the mirror models have been used within a model of the telescope in order to predict the optical performance. A demonstration of this mirror concept, along with other new telescope technologies, is planned to take place during the AAReST mission.
Abstract:
Cancellation of interfering frequency-modulated (FM) signals is investigated, with emphasis on applications to the cellular telephone channel as an important example of a multiple-access communication system. In order to fairly evaluate analog FM multiaccess systems with respect to more complex digital multiaccess systems, a serious attempt to mitigate interference in the FM systems must be made. Information-theoretic results in the field of interference channels are shown to motivate the estimation and subtraction of undesired interfering signals. This thesis briefly examines the relative optimality of current FM techniques in known interference channels before pursuing the estimation and subtraction of interfering FM signals.
The capture-effect phenomenon of FM reception is exploited to produce simple interference-cancelling receivers with a cross-coupled topology. The use of phase-locked loop receivers cross-coupled with amplitude-tracking loops to estimate the FM signals is explored. The theory and function of these cross-coupled phase-locked loop (CCPLL) interference cancellers are examined. New interference cancellers inspired by optimal estimation and the CCPLL topology are developed, resulting in simpler receivers than those in prior art. Signal acquisition and capture effects in these complex dynamical systems are explained using the relationship of the dynamical systems to adaptive noise cancellers.
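The elementary building block that the CCPLL canceller cross-couples is a phase-locked loop tracking a single FM signal. The sketch below demodulates a toy FM signal with one complex-baseband type-II PLL, with illustrative loop gains; the cross-coupling and amplitude-tracking loops of the full canceller are not shown.

```python
import numpy as np

# Toy FM signal at complex baseband: carrier f0, sinusoidal message at fm,
# peak frequency deviation df.  All parameter values are illustrative.
fs = 100_000.0
f0, fm, df = 10_000.0, 100.0, 2_000.0
t = np.arange(0, 0.05, 1.0 / fs)
message = np.sin(2 * np.pi * fm * t)
phase = 2 * np.pi * f0 * t + (df / fm) * (1 - np.cos(2 * np.pi * fm * t))
z = np.exp(1j * phase)                      # unit-amplitude received FM signal

# Type-II PLL: proportional-plus-integral loop filter driving an NCO.
kp, ki = 0.2, 0.01                          # illustrative loop gains
theta, omega = 0.0, 2 * np.pi * f0 / fs     # NCO phase and frequency (rad/sample)
freq_est = np.zeros_like(t)
for n in range(len(t)):
    err = np.angle(z[n] * np.exp(-1j * theta))   # phase detector
    omega += ki * err                            # integral path (tracks frequency)
    step = omega + kp * err                      # instantaneous NCO increment
    theta += step
    freq_est[n] = step * fs / (2 * np.pi) - f0   # demodulated frequency deviation [Hz]

# The estimate follows the true deviation df*sin(2*pi*fm*t) once the loop is in lock.
rms_error = np.sqrt(np.mean((freq_est[1000:] - df * message[1000:]) ** 2))
print(rms_error)                                 # small compared with the 2 kHz deviation
```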
FM interference-cancelling receivers are considered for increasing the frequency reuse in a cellular telephone system. Interference mitigation in the cellular environment is seen to require tracking of the desired signal during time intervals when it is not the strongest signal present. Use of interference cancelling in conjunction with dynamic frequency-allocation algorithms is viewed as a way of improving spectrum efficiency. Performance of interference cancellers indicates possibilities for greatly increased frequency reuse. The economics of receiver improvements in the cellular system is considered, including both the mobile subscriber equipment and the provider's tower (base station) equipment.
The thesis is divided into four major parts and a summary: the introduction, motivations for the use of interference cancellation, examination of the CCPLL interference canceller, and applications to the cellular channel. The parts are dependent on each other and are meant to be read as a whole.
Abstract:
Motivated by recent Mars Science Laboratory (MSL) results in which the ablation rate of the PICA heatshield was over-predicted, and staying true to the objectives outlined in the NASA Space Technology Roadmaps and Priorities report, this work focuses on advancing entry, descent, and landing (EDL) technologies for future space missions.
Due to the difficulties of performing flight tests in the hypervelocity regime, a new ground testing facility, called the vertical expansion tunnel (VET), is proposed. The adverse effects of secondary diaphragm rupture in an expansion tunnel may be reduced or eliminated by orienting the tunnel vertically, matching the test gas pressure and the accelerator gas pressure, and initially separating the test gas from the accelerator gas by density stratification. If some sacrifice of the reservoir conditions can be made, the VET can be utilized in hypervelocity ground testing without the problems associated with secondary diaphragm rupture.
The performance of different constraints for the Rate-Controlled Constrained-Equilibrium (RCCE) method is investigated in the context of modeling reacting flows characteristic of ground testing facilities and re-entry conditions. The effectiveness of different constraints is isolated, and new constraints previously unmentioned in the literature are introduced. Three main benefits of the RCCE method were determined: 1) the reduction in the number of equations that need to be solved to model a reacting flow; 2) the reduction in the stiffness of the system of equations to be solved; and 3) the ability to tabulate chemical properties as a function of a constraint once, prior to running a simulation, along with the ability to use the same table for multiple simulations.
Finally, published physical properties of PICA are compiled, and the composition of the pyrolysis gases that form at high temperatures inside a heatshield is investigated. A necessary link between the composition of the solid resin and the composition of the pyrolysis gases created is provided. This link, combined with a detailed investigation into a reacting pyrolysis gas mixture, allows a much-needed, consistent, and thorough description of many of the physical phenomena occurring in a PICA heatshield, and of their implications, to be presented.
Through the use of computational fluid mechanics and computational chemistry methods, significant contributions have been made to advancing ground testing facilities, computational methods for reacting flows, and ablation modeling.
Abstract:
An economic air pollution control model, which determines the least cost of reaching various air quality levels, is formulated. The model takes the form of a general, nonlinear, mathematical programming problem. Primary contaminant emission levels are the independent variables. The objective function is the cost of attaining various emission levels and is to be minimized subject to constraints that given air quality levels be attained.
The model is applied to a simplified statement of the photochemical smog problem in Los Angeles County in 1975, with emissions specified by a two-dimensional vector: total reactive hydrocarbon (RHC) and nitrogen oxide (NOx) emissions. Air quality, also two-dimensional, is measured by the expected number of days per year that nitrogen dioxide (NO2) and mid-day ozone (O3) exceed standards in Central Los Angeles.
The minimum cost of reaching various emission levels is found by a linear programming model. The base or "uncontrolled" emission levels are those that will exist in 1975 with the present new car control program and with the degree of stationary source control existing in 1971. Controls, basically "add-on devices", are considered here for used cars, aircraft, and existing stationary sources. It is found that with these added controls, Los Angeles County emission levels [(1300 tons/day RHC, 1000 tons/day NOx) in 1969; (670 tons/day RHC, 790 tons/day NOx) at the base 1975 level] can be reduced to 260 tons/day RHC (minimum RHC program) and 460 tons/day NOx (minimum NOx program).
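The least-cost calculation described above is a linear program. A toy version using scipy.optimize.linprog is sketched below; the control measures, their costs, and the emission targets are made-up placeholders, not the cost data of the thesis.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical control measures: annualized cost [$M] and emission reductions
# [tons/day] of RHC and NOx if fully adopted (x_i = 1).  Values are illustrative.
cost    = np.array([ 40.0,  25.0,  60.0,  15.0])   # used cars, aircraft, stationary A, B
red_rhc = np.array([150.0,  40.0, 120.0,  30.0])
red_nox = np.array([ 50.0,  10.0, 180.0,  60.0])

base_rhc, base_nox = 670.0, 790.0        # base 1975 levels quoted in the abstract
target_rhc, target_nox = 450.0, 650.0    # desired emission levels (illustrative)

# minimize cost @ x  subject to  reductions >= base - target,  0 <= x_i <= 1
res = linprog(c=cost,
              A_ub=-np.vstack([red_rhc, red_nox]),
              b_ub=-np.array([base_rhc - target_rhc, base_nox - target_nox]),
              bounds=[(0, 1)] * len(cost))
print(res.status, np.round(res.x, 2), round(res.fun, 1))   # adoption levels, cost [$M]
```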
"Phenomenological" or statistical air quality models provide the relationship between air quality and emissions. These models estimate the relationship by using atmospheric monitoring data taken at one (yearly) emission level and by using certain simple physical assumptions, (e. g., that emissions are reduced proportionately at all points in space and time). For NO2, (concentrations assumed proportional to NOx emissions), it is found that standard violations in Central Los Angeles, (55 in 1969), can be reduced to 25, 5, and 0 days per year by controlling emissions to 800, 550, and 300 tons /day, respectively. A probabilistic model reveals that RHC control is much more effective than NOx control in reducing Central Los Angeles ozone. The 150 days per year ozone violations in 1969 can be reduced to 75, 30, 10, and 0 days per year by abating RHC emissions to 700, 450, 300, and 150 tons/day, respectively, (at the 1969 NOx emission level).
The control cost-emission level and air quality-emission level relationships are combined in a graphical solution of the complete model to find the cost of various air quality levels. Best possible air quality levels with the controls considered here are 8 O3 and 10 NO2 violations per year (minimum ozone program) or 25 O3 and 3 NO2 violations per year (minimum NO2 program) with an annualized cost of $230,000,000 (above the estimated $150,000,000 per year for the new car control program for Los Angeles County motor vehicles in 1975).
Abstract:
Experimental demonstrations and theoretical analyses of a new electromechanical energy conversion process, made feasible only by the unique properties of superconductors, are presented in this dissertation. This energy conversion process is characterized by a highly efficient, direct energy transformation from microwave energy into mechanical energy, or vice versa, and can be achieved at high power levels. It is an application of a well-established physical principle known as the adiabatic theorem (Boltzmann-Ehrenfest theorem); in this case, time-dependent superconducting boundaries provide the necessary interface between the microwave energy on one hand and the mechanical work on the other. The mechanism which brings about the conversion is another known phenomenon, the Doppler effect. The resonant frequency of a superconducting resonator undergoes continuous infinitesimal shifts when the resonator boundaries are adiabatically changed in time by an external mechanical mechanism. These small frequency shifts can accumulate coherently over an extended period of time to produce a macroscopic shift when the resonator remains resonantly excited throughout this process. In addition, the electromagnetic energy inside the resonator, which is proportional to the oscillation frequency, is also changed accordingly, so that a direct conversion between electromagnetic and mechanical energy takes place. The intrinsically high efficiency of this process is due to the electromechanical interactions involved in the conversion, rather than to a process of thermodynamic nature, and is therefore not limited by the corresponding thermodynamic value.
A highly reentrant superconducting resonator resonating in the range of 90 to 160 MHz was used to demonstrate this new conversion technique. The resonant frequency was mechanically modulated at a rate of two kilohertz. Experimental results showed that the time evolution of the electromagnetic energy inside this frequency-modulated (FM) superconducting resonator indeed behaved as predicted, demonstrating the unique features of this process. A proposed use of FM superconducting resonators as electromechanical energy conversion devices is given, along with some practical design considerations. Such a device appears very promising for producing high-power (~10 W/cm^3) microwave energy at 10-30 GHz.
A weakly coupled FM resonator system is also studied analytically for its potential applications. This system shows an interesting switching characteristic with which the spatial distribution of microwave energy can be manipulated by external means. It was found that if the modulation is properly applied, a high degree (>95%) of unidirectional energy transfer from one resonator to the other can be accomplished. Applications of this characteristic to high-efficiency energy-switching devices and high-power microwave pulse generators also appear feasible with present superconducting technology.
Abstract:
Part I: The dynamic response of an elastic half space to an explosion in a buried spherical cavity is investigated by two methods. The first is implicit, and the final expressions for the displacements at the free surface are given as a series of spherical wave functions whose coefficients are solutions of an infinite set of linear equations. The second method is based on Schwarz's technique to solve boundary value problems, and leads to an iterative solution, starting with the known expression for the point source in a half space as first term. The iterative series is transformed into a system of two integral equations, and into an equivalent set of linear equations. In this way, a dual interpretation of the physical phenomena is achieved. The systems are treated numerically and the Rayleigh wave part of the displacements is given in the frequency domain. Several comparisons with simpler cases are analyzed to show the effect of the cavity radius-depth ratio on the spectra of the displacements.
Part II: A high-speed, large-capacity hypocenter location program has been written for an IBM 7094 computer. Important modifications to the standard method of least squares have been incorporated in it. Among them are a new way to obtain the depth of shocks from the normal equations, and the computation of variable travel times for the local shocks in order to account automatically for crustal variations. The multiregional travel times, largely based upon the investigations of the United States Geological Survey, are confronted with actual traverses to test their validity.
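The standard least-squares scheme that the program modifies is Geiger's method: travel times are linearized about a trial hypocenter and the normal equations are iterated. The sketch below applies it to a uniform-velocity toy model with made-up station coordinates, rather than the multiregional travel times used in the thesis.

```python
import numpy as np

v = 6.0                                           # uniform P-wave velocity [km/s] (toy model)
stations = np.array([[0, 0], [40, 5], [10, 50], [60, 45], [25, 80]], dtype=float)  # x, y [km]
true_hypo = np.array([30.0, 35.0, 12.0, 4.0])     # x, y, depth z [km], origin time t0 [s]

def travel_time(hypo, sta):
    dx, dy = sta[:, 0] - hypo[0], sta[:, 1] - hypo[1]
    return hypo[3] + np.sqrt(dx**2 + dy**2 + hypo[2]**2) / v

t_obs = travel_time(true_hypo, stations)          # noise-free synthetic arrival times

# Geiger's method: linearize t(x) about the trial hypocenter and iterate the
# least-squares update  x <- x + (J^T J)^{-1} J^T (t_obs - t_pred).
hypo = np.array([stations[:, 0].mean(), stations[:, 1].mean(), 8.0, 0.0])  # trial solution
for _ in range(10):
    dx, dy = stations[:, 0] - hypo[0], stations[:, 1] - hypo[1]
    dist = np.sqrt(dx**2 + dy**2 + hypo[2]**2)
    J = np.column_stack([-dx / (v * dist),          # dt/dx
                         -dy / (v * dist),          # dt/dy
                         hypo[2] / (v * dist),      # dt/dz
                         np.ones(len(stations))])   # dt/dt0
    hypo += np.linalg.lstsq(J, t_obs - travel_time(hypo, stations), rcond=None)[0]

print(np.round(hypo, 3))                          # recovers the true hypocenter
```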
It is shown that several crustal phases provide enough control to obtain good solutions in depth for nuclear explosions, even though not all the recording stations are in the region where crustal corrections are considered. The use of European travel times to locate the French nuclear explosion of May 1962 in the Sahara proved more adequate than previous work.
A simpler program, with manual crustal corrections, is used to process the Kern County series of aftershocks, and a clearer picture of the tectonic mechanism of the White Wolf fault is obtained.
Shocks in the California region are processed automatically, and statistical frequency-depth and energy-depth curves are discussed in relation to the tectonics of the area.
Abstract:
This thesis presents a new class of solvers for the subsonic compressible Navier-Stokes equations in general two- and three-dimensional spatial domains. The proposed methodology incorporates: 1) a novel linear-cost implicit solver based on the use of higher-order backward differentiation formulae (BDF) and the alternating direction implicit (ADI) approach; 2) a fast explicit solver; 3) dispersionless spectral spatial discretizations; and 4) a domain decomposition strategy that negotiates the interactions between the implicit and explicit domains. In particular, the implicit methodology is quasi-unconditionally stable (it does not suffer from CFL constraints for adequately resolved flows), and it can deliver orders of time accuracy between two and six in the presence of general boundary conditions. In fact this thesis presents, for the first time in the literature, high-order time-convergence curves for Navier-Stokes solvers based on the ADI strategy---previous ADI solvers for the Navier-Stokes equations have not demonstrated orders of temporal accuracy higher than one. An extended discussion is presented which places the observed quasi-unconditional stability of the methods of orders two through six on a solid theoretical basis. The performance of the proposed solvers is favorable. For example, a two-dimensional rough-surface configuration including boundary layer effects at a Reynolds number of one million and a Mach number of 0.85 (with a well-resolved boundary layer, run up to a sufficiently long time that single vortices travel the entire spatial extent of the domain, and with spatial mesh sizes near the wall of the order of one hundred-thousandth the length of the domain) was successfully tackled in a relatively short (approximately thirty-hour) single-core run; for such discretizations an explicit solver would require truly prohibitive computing times. As demonstrated via a variety of numerical experiments in two and three dimensions, the proposed multi-domain parallel implicit-explicit implementations further exhibit high-order convergence in space and time, useful stability properties, limited dispersion, and high parallel efficiency.
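The ADI idea underlying the implicit solver can be illustrated on a simpler model problem. The snippet below applies the classical Peaceman-Rachford ADI splitting to the 2-D heat equation with Dirichlet boundaries, so that each half step requires only one-dimensional implicit solves; this is a textbook second-order scheme, not the thesis's high-order BDF-ADI Navier-Stokes method.

```python
import numpy as np

# Peaceman-Rachford ADI for the 2-D heat equation u_t = u_xx + u_yy on the unit
# square with homogeneous Dirichlet boundaries: each half step only requires
# one-dimensional implicit solves, yet the scheme is unconditionally stable.
N = 49                                   # interior grid points per direction
h = 1.0 / (N + 1)
dt = 0.01                                # ~100x larger than the explicit limit h^2/4
x = np.linspace(h, 1 - h, N)
u = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))          # initial condition

A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / h**2                  # 1-D Laplacian (Dirichlet)
I = np.eye(N)
Lm, Lp = I - 0.5 * dt * A, I + 0.5 * dt * A

for _ in range(50):                                          # advance to t = 0.5
    u = np.linalg.solve(Lm, u @ Lp)                          # implicit in x, explicit in y
    u = np.linalg.solve(Lm, (Lp @ u).T).T                    # implicit in y, explicit in x

exact = np.exp(-2 * np.pi**2 * 0.5) * np.outer(np.sin(np.pi * x), np.sin(np.pi * x))
print(np.max(np.abs(u - exact)))                             # small error despite the large dt
```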
Abstract:
An exciting frontier in quantum information science is the integration of otherwise "simple" quantum elements into complex quantum networks. The laboratory realization of even small quantum networks enables the exploration of physical systems that have not heretofore existed in the natural world. Within this context, there is active research to achieve nanoscale quantum optical circuits, in which atoms are trapped near nanoscopic dielectric structures and "wired" together by photons propagating through the circuit elements. Single atoms and atomic ensembles endow quantum functionality to otherwise linear optical circuits and thereby enable the capability of building quantum networks component by component. Toward these goals, we have experimentally investigated three different systems, ranging from conventional to rather exotic: free-space atomic ensembles, optical nanofibers, and photonic crystal waveguides. First, we demonstrate measurement-induced quadripartite entanglement among four quantum memories. Next, following the landmark realization of a nanofiber trap, we demonstrate the implementation of a state-insensitive, compensated nanofiber trap. Finally, we reach more exotic systems based on photonic crystal devices. Beyond conventional topologies of resonators and waveguides, new opportunities emerge from the powerful capabilities of dispersion and modal engineering in photonic crystal waveguides. We have implemented an integrated optical circuit with a photonic crystal waveguide capable of both trapping and interfacing atoms with guided photons, and have observed the collective effect of superradiance mediated by the guided photons. These advances provide an important capability for engineered light-matter interactions, enabling explorations of novel quantum transport and quantum many-body phenomena.