25 results for STATIONARY SECTORIAL SAMPLER

in CaltechTHESIS


Relevance:

20.00%

Abstract:

An approximate approach is presented for determining the stationary random response of a general multidegree-of-freedom nonlinear system under stationary Gaussian excitation. This approach relies on defining an equivalent linear system for the nonlinear system. Two particular systems which possess exact solutions have been solved by this approach, and it is concluded that this approach can generate reasonable solutions even for systems with fairly large nonlinearities. The approximate approach has also been applied to two examples for which no exact or approximate solutions were previously available.

Also presented is a matrix algebra approach for determining the stationary random response of a general multidegree-of-freedom linear system. Its derivation involves only matrix algebra and some properties of the instantaneous correlation matrices of a stationary process. It is therefore very direct and straightforward. The application of this matrix algebra approach is in general simpler than that of commonly used approaches.
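The equivalent-linearization idea in the first paragraph can be sketched for a single-degree-of-freedom Duffing oscillator under Gaussian white noise; the thesis treats general multidegree-of-freedom systems, and the oscillator and all parameter values below are illustrative, not taken from the thesis.

```python
import numpy as np

def equivalent_linearization(omega=1.0, beta=0.1, eps=0.5, D=0.1, tol=1e-10):
    """Gaussian statistical linearization of the Duffing oscillator
    x'' + 2*beta*x' + omega^2*(x + eps*x^3) = w(t),  E[w(t)w(t+s)] = 2*D*delta(s).
    Gaussian closure (E[x^4] = 3*sigma^4) replaces eps*x^3 by (3*eps*sigma^2)*x,
    and the linear stationary variance sigma^2 = D / (2*beta*k_eq) is iterated
    to a fixed point."""
    sigma2 = D / (2 * beta * omega**2)            # start from the linear (eps = 0) variance
    for _ in range(200):
        k_eq = omega**2 * (1 + 3 * eps * sigma2)  # equivalent linear stiffness
        new = D / (2 * beta * k_eq)
        if abs(new - sigma2) < tol:
            break
        sigma2 = new
    return sigma2

sigma2 = equivalent_linearization()
print(sigma2)  # smaller than the linear variance 0.5: the cubic term stiffens the system
```

The fixed point satisfies sigma^2 * (1 + 3*eps*sigma^2) = D/(2*beta*omega^2), so the answer can be checked against the quadratic formula.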

Relevance:

20.00%

Abstract:

Let $\{Z_n\}_{n=-\infty}^{\infty}$ be a stochastic process with state space $S_1 = \{0, 1, \ldots, D-1\}$. Such a process is called a chain of infinite order. The transitions of the chain are described by the functions

$$Q_i(i^{(0)}) = P(Z_n = i \mid Z_{n-1} = i^{(0)}_1,\ Z_{n-2} = i^{(0)}_2, \ldots) \qquad (i \in S_1),$$

where $i^{(0)} = (i^{(0)}_1, i^{(0)}_2, \ldots)$ ranges over infinite sequences from $S_1$. If $i^{(n)} = (i^{(n)}_1, i^{(n)}_2, \ldots)$ for $n = 1, 2, \ldots$, then $i^{(n)} \to i^{(0)}$ means that for each $k$, $i^{(n)}_k = i^{(0)}_k$ for all $n$ sufficiently large.

Given functions $Q_i(i^{(0)})$ such that

(i) $0 \le Q_i(i^{(0)}) \le \xi < 1$,

(ii) $\sum_{i=0}^{D-1} Q_i(i^{(0)}) \equiv 1$,

(iii) $Q_i(i^{(n)}) \to Q_i(i^{(0)})$ whenever $i^{(n)} \to i^{(0)}$,

we prove the existence of a stationary chain of infinite order $\{Z_n\}$ whose transitions are given by

$$P(Z_n = i \mid Z_{n-1}, Z_{n-2}, \ldots) = Q_i(Z_{n-1}, Z_{n-2}, \ldots)$$

with probability 1. The method also yields stationary chains $\{Z_n\}$ for which (iii) does not hold but whose transition probabilities are, in a sense, "locally Markovian." These and similar results extend a paper by T. E. Harris [Pac. J. Math., 5 (1955), 707-724].

Included is a new proof of the existence and uniqueness of a stationary absolute distribution for an Nth order Markov chain in which all transitions are possible. This proof allows us to achieve our main results without the use of limit theorem techniques.
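The N-th order Markov chain result can be illustrated numerically: lifting the chain to the state space of N-tuples makes it first-order, and positivity of all transition probabilities yields a unique stationary distribution. The transition function below is a hypothetical second-order binary example, not one from the thesis.

```python
import numpy as np
from itertools import product

def stationary_nth_order(Q, D, N):
    """Stationary distribution of an N-th order chain on {0, ..., D-1}.
    Q(i, history) gives P(Z_n = i | last N symbols = history); the lifted
    chain on N-tuples (most recent symbol first) is first-order Markov."""
    states = list(product(range(D), repeat=N))
    index = {s: k for k, s in enumerate(states)}
    P = np.zeros((len(states), len(states)))
    for s in states:
        for i in range(D):
            t = (i,) + s[:-1]            # push the new symbol onto the history
            P[index[s], index[t]] = Q(i, s)
    pi = np.full(len(states), 1.0 / len(states))
    for _ in range(2000):                # power iteration on the row-stochastic P
        pi = pi @ P
    return pi, states

# hypothetical 2nd-order binary chain: P(1 | last two symbols) lies in (0, 1),
# so all transitions are possible and the stationary distribution is unique
def Q(i, hist):
    p1 = 0.2 + 0.5 * hist[0] * hist[1]
    return p1 if i == 1 else 1 - p1

pi, states = stationary_nth_order(Q, D=2, N=2)
print(dict(zip(states, pi.round(4))))    # sums to 1
```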

Relevance:

20.00%

Abstract:

This thesis advances our physical understanding of the sensitivity of the hydrological cycle to global warming. Specifically, it focuses on changes in the longitudinal (zonal) variation of precipitation minus evaporation (P - E), which is predominantly controlled by planetary-scale stationary eddies. By studying idealized general circulation model (GCM) experiments with zonally varying boundary conditions, this thesis examines the mechanisms controlling the strength of stationary-eddy circulations and their role in the hydrological cycle. The overarching goal of this research is to understand the cause of changes in regional P - E with global warming. An understanding of such changes can be useful for impact studies focusing on water availability, ecosystem management, and flood risk.

Based on a moisture-budget analysis of ERA-Interim data, we establish an approximation for zonally anomalous P - E in terms of surface moisture content and stationary-eddy vertical motion in the lower troposphere. Part of the success of this approximation comes from our finding that transient-eddy moisture fluxes partially cancel the effect of stationary-eddy moisture advection, allowing divergent circulations to dominate the moisture budget. The lower-tropospheric vertical motion is related to horizontal motion in stationary eddies by Sverdrup and Ekman balance. These moisture- and vorticity-budget balances also hold in idealized and comprehensive GCM simulations across a range of climates.

By examining climate changes in the idealized and comprehensive GCM simulations, we are able to show the utility of the vertical-motion P - E approximation for splitting changes in zonally anomalous P - E into thermodynamic and dynamic components. Shifts in divergent stationary-eddy circulations dominate changes in zonally anomalous P - E. This limits the local utility of the “wet gets wetter, dry gets drier” idea, where existing P - E patterns are amplified with warming by the increase in atmospheric moisture content, with atmospheric circulations held fixed. The increase in atmospheric moisture content manifests instead in an increase in the amplitude of the zonally anomalous hydrological cycle as measured by the zonal variance of P - E. However, dynamic changes, particularly the slowdown of divergent stationary-eddy circulations, limit the strengthening of the zonally anomalous hydrological cycle. In certain idealized cases, dynamic changes are even strong enough to reverse the tendency towards “wet gets wetter, dry gets drier” with warming.
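The thermodynamic/dynamic split described above can be illustrated with synthetic fields: holding the circulation fixed while moisture increases gives the thermodynamic term, and holding moisture fixed while the circulation slows gives the dynamic term. The fields and scaling factors below are invented for the sketch and are not the thesis's GCM output.

```python
import numpy as np

# Illustrative decomposition: delta(q*w) ≈ w*dq (thermodynamic) + q*dw (dynamic),
# with q*w standing in for the zonally anomalous P - E approximation.
lon = np.linspace(0, 2 * np.pi, 128, endpoint=False)
q0 = 0.01 * (1 + 0.3 * np.cos(lon))          # control surface specific humidity (kg/kg)
w0 = 0.05 * np.sin(lon)                      # control stationary-eddy vertical motion
q1 = q0 * 1.20                               # warmer climate: ~20% more moisture
w1 = w0 * 0.90                               # warmer climate: circulation slows ~10%

pe0, pe1 = q0 * w0, q1 * w1                  # proxy for zonally anomalous P - E
thermo = w0 * (q1 - q0)                      # moisture change, circulation fixed
dynamic = q0 * (w1 - w0)                     # circulation change, moisture fixed
residual = (pe1 - pe0) - (thermo + dynamic)  # nonlinear cross term dq*dw

print(np.var(pe1) / np.var(pe0))             # zonal-variance ratio of the anomaly
```

With these multiplicative changes the anomaly amplifies by 1.2 * 0.9 = 1.08, so its zonal variance grows by 1.08^2: the dynamic slowdown partially offsets the thermodynamic amplification, as in the text.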

Motivated by the importance of stationary-eddy vertical velocities in the moisture budget analysis, we examine controls on the amplitude of stationary eddies across a wide range of climates in an idealized GCM with simple topographic and ocean-heating zonal asymmetries. An analysis of the thermodynamic equation in the vicinity of topographic forcing reveals the importance of on-slope surface winds, the midlatitude isentropic slope, and latent heating in setting the amplitude of stationary waves. The response of stationary eddies to climate change is determined primarily by the strength of zonal surface winds hitting the mountain. The sensitivity of stationary eddies to this surface forcing increases with climate change as the slope of midlatitude isentropes decreases. However, latent heating also plays an important role in damping the stationary-eddy response, and this damping becomes stronger with warming as the atmospheric moisture content increases. We find that the response of tropical overturning circulations forced by ocean heat-flux convergence is described by changes in the vertical structure of moist static energy and deep convection. This is used to derive simple scalings for the Walker circulation strength that capture the monotonic decrease with warming found in our idealized simulations.

Through the work of this thesis, the advances made in understanding the amplitude of stationary waves in a changing climate can be directly applied to better understand and predict changes in the zonally anomalous hydrological cycle.

Relevance:

10.00%

Abstract:

This dissertation consists of three parts. In Part I, it is shown that looping trajectories cannot exist in finite amplitude stationary hydromagnetic waves propagating across a magnetic field in a quasi-neutral cold collision-free plasma. In Part II, time-dependent solutions in series expansion are presented for the magnetic piston problem, which describes waves propagating into a quasi-neutral cold collision-free plasma, ensuing from magnetic disturbances on the boundary of the plasma. The expansion is equivalent to Picard's successive approximations. It is then shown that orbit crossings of plasma particles occur on the boundary for strong disturbances and inside the plasma for weak disturbances. In Part III, the existence of periodic waves propagating at an arbitrary angle to the magnetic field in a plasma is demonstrated by Stokes expansions in amplitude. A stability analysis is then carried out for such periodic waves with respect to side-band frequency disturbances. It is shown that waves of slow mode are unstable whereas waves of fast mode are stable if the frequency is below the cutoff frequency. The cutoff frequency depends on the propagation angle. For longitudinal propagation the cutoff frequency is equal to one-fourth of the electron gyrofrequency. For transverse propagation the cutoff frequency is so high that waves of all frequencies are stable.

Relevance:

10.00%

Abstract:

In Part I a class of linear boundary value problems is considered which is a simple model of boundary layer theory. The effect of zeros and singularities of the coefficients of the equations at the point where the boundary layer occurs is considered. The usual boundary layer techniques are still applicable in some cases and are used to derive uniform asymptotic expansions. In other cases it is shown that the inner and outer expansions do not overlap due to the presence of a turning point outside the boundary layer. The region near the turning point is described by a two-variable expansion. In these cases a related initial value problem is solved and then used to show formally that for the boundary value problem either a solution exists, except for a discrete set of eigenvalues, whose asymptotic behaviour is found, or the solution is non-unique. A proof is given of the validity of the two-variable expansion; in a special case this proof also demonstrates the validity of the inner and outer expansions.

Nonlinear dispersive wave equations which are governed by variational principles are considered in Part II. It is shown that the averaged Lagrangian variational principle is in fact exact. This result is used to construct perturbation schemes to enable higher order terms in the equations for the slowly varying quantities to be calculated. A simple scheme applicable to linear or near-linear equations is first derived. The specific form of the first order correction terms is derived for several examples. The stability of constant solutions to these equations is considered and it is shown that the correction terms lead to the instability cut-off found by Benjamin. A general stability criterion is given which explicitly demonstrates the conditions under which this cut-off occurs. The corrected equations are nonlinear dispersive equations, and their stationary solutions are investigated. A more sophisticated scheme is developed for fully nonlinear equations by using an extension of the Hamiltonian formalism recently introduced by Whitham. Finally the averaged Lagrangian technique is extended to treat slowly varying multiply-periodic solutions. The adiabatic invariants for a separable mechanical system are derived by this method.

Relevance:

10.00%

Abstract:

This thesis presents recent research into analytic topics in the classical theory of General Relativity. It is a thesis in two parts. The first part features investigations into the spectrum of perturbed, rotating black holes. These include the study of near horizon perturbations, leading to a new generic frequency mode for black hole ringdown; a treatment of high frequency waves using WKB methods for Kerr black holes; and the discovery of a bifurcation of the quasinormal mode spectrum of rapidly rotating black holes. These results represent new discoveries in the field of black hole perturbation theory, and rely on additional approximations to the linearized field equations around the background black hole. The second part of this thesis presents a recently developed method for the visualization of curved spacetimes, using field lines called the tendex and vortex lines of the spacetime. The works presented here both introduce these visualization techniques and explore them in simple situations. These include the visualization of asymptotic gravitational radiation; weak gravity situations with and without radiation; stationary black hole spacetimes; and some preliminary study into numerically simulated black hole mergers. The second part of the thesis culminates in the investigation of perturbed black holes using these field line methods, which have uncovered new insights into the dynamics of curved spacetime around black holes.

Relevance:

10.00%

Abstract:

Signal processing techniques play important roles in the design of digital communication systems. These include information manipulation, transmitter signal processing, channel estimation, channel equalization and receiver signal processing. By interacting with communication theory and system implementation technologies, signal processing specialists develop efficient schemes for various communication problems by exploiting various mathematical tools such as analysis, probability theory, matrix theory, optimization theory, and many others. In recent years, researchers realized that multiple-input multiple-output (MIMO) channel models are applicable to a wide range of different physical communications channels. Using the elegant matrix-vector notation, many MIMO transceiver (including the precoder and equalizer) design problems can be solved by matrix and optimization theory. Furthermore, researchers showed that majorization theory and matrix decompositions, such as singular value decomposition (SVD), geometric mean decomposition (GMD) and generalized triangular decomposition (GTD), provide unified frameworks for solving many of the point-to-point MIMO transceiver design problems.

In this thesis, we consider the transceiver design problems for linear time invariant (LTI) flat MIMO channels, linear time-varying narrowband MIMO channels, flat MIMO broadcast channels, and doubly selective scalar channels. Additionally, the channel estimation problem is also considered. The main contributions of this dissertation are the development of new matrix decompositions, and the uses of the matrix decompositions and majorization theory toward the practical transmit-receive scheme designs for transceiver optimization problems. Elegant solutions are obtained, novel transceiver structures are developed, ingenious algorithms are proposed, and performance analyses are derived.

The first part of the thesis focuses on transceiver design with LTI flat MIMO channels. We propose a novel matrix decomposition which decomposes a complex matrix as a product of several sets of semi-unitary matrices and upper triangular matrices in an iterative manner. The complexity of the new decomposition, generalized geometric mean decomposition (GGMD), is always less than or equal to that of geometric mean decomposition (GMD). The optimal GGMD parameters which yield the minimal complexity are derived. Based on the channel state information (CSI) at both the transmitter (CSIT) and receiver (CSIR), GGMD is used to design a butterfly-structured decision feedback equalizer (DFE) MIMO transceiver which achieves the minimum average mean square error (MSE) under the total transmit power constraint. A novel iterative receiving detection algorithm for the specific receiver is also proposed. For the application to cyclic prefix (CP) systems in which the SVD of the equivalent channel matrix can be easily computed, the proposed GGMD transceiver has a K/log_2(K)-fold complexity advantage over the GMD transceiver, where K is the number of data symbols per data block and is a power of 2. The performance analysis shows that the GGMD DFE transceiver can convert a MIMO channel into a set of parallel subchannels with the same bias and signal-to-interference-plus-noise ratios (SINRs). Hence, the average bit error rate (BER) is automatically minimized without the need for bit allocation. Moreover, the proposed transceiver can achieve the channel capacity simply by applying independent scalar Gaussian codes of the same rate on the subchannels.
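As a point of reference for the decomposition-based designs discussed here, the following sketch shows the textbook SVD transceiver, which converts a flat MIMO channel into parallel scalar subchannels. The thesis's GGMD/GMD constructions instead equalize the subchannel gains via triangular decompositions and a DFE; this is only the familiar baseline, with a randomly drawn channel.

```python
import numpy as np

rng = np.random.default_rng(0)
Nt = Nr = 4
# Rayleigh-fading flat MIMO channel (illustrative)
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)

# SVD transceiver: precode with V, receive-filter with U^H, so that
# U^H H V = diag(s) and the channel splits into parallel subchannels.
U, s, Vh = np.linalg.svd(H)
x = rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)  # data symbols
y = U.conj().T @ (H @ (Vh.conj().T @ x))                    # noise omitted for clarity
print(np.allclose(y, s * x))  # each stream is scaled by its singular value
```

The unequal singular values are exactly why bit allocation is needed in SVD designs, and why the GMD-family transceivers, which produce subchannels with identical gains, can avoid it.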

In the second part of the thesis, we focus on MIMO transceiver design for slowly time-varying MIMO channels with zero-forcing or MMSE criterion. Even though the GGMD/GMD DFE transceivers work for slowly time-varying MIMO channels by exploiting the instantaneous CSI at both ends, their performance is by no means optimal since the temporal diversity of the time-varying channels is not exploited. Based on the GTD, we develop space-time GTD (ST-GTD) for the decomposition of linear time-varying flat MIMO channels. Under the assumption that CSIT, CSIR and channel prediction are available, by using the proposed ST-GTD, we develop space-time geometric mean decomposition (ST-GMD) DFE transceivers under the zero-forcing or MMSE criterion. Under perfect channel prediction, the new system minimizes both the average MSE at the detector in each space-time (ST) block (which consists of several coherence blocks), and the average per ST-block BER in the moderately high SNR region. Moreover, the ST-GMD DFE transceiver designed under an MMSE criterion maximizes Gaussian mutual information over the equivalent channel seen by each ST-block. In general, the newly proposed transceivers perform better than the GGMD-based systems since the superimposed temporal precoder is able to exploit the temporal diversity of time-varying channels. For practical applications, a novel ST-GTD based system which does not require channel prediction but shares the same asymptotic BER performance with the ST-GMD DFE transceiver is also proposed.

The third part of the thesis considers two quality of service (QoS) transceiver design problems for flat MIMO broadcast channels. The first one is the power minimization problem (min-power) with a total bitrate constraint and per-stream BER constraints. The second problem is the rate maximization problem (max-rate) with a total transmit power constraint and per-stream BER constraints. Exploiting a particular class of joint triangularization (JT), we are able to jointly optimize the bit allocation and the broadcast DFE transceiver for the min-power and max-rate problems. The resulting optimal designs are called the minimum power JT broadcast DFE transceiver (MPJT) and maximum rate JT broadcast DFE transceiver (MRJT), respectively. In addition to the optimal designs, two suboptimal designs based on QR decomposition are proposed. They are realizable for an arbitrary number of users.

Finally, we investigate the design of a discrete Fourier transform (DFT) modulated filterbank transceiver (DFT-FBT) with LTV scalar channels. For both cases with known LTV channels and unknown wide-sense stationary uncorrelated scattering (WSSUS) statistical channels, we show how to optimize the transmitting and receiving prototypes of a DFT-FBT such that the SINR at the receiver is maximized. Also, a novel pilot-aided subspace channel estimation algorithm is proposed for orthogonal frequency division multiplexing (OFDM) systems with quasi-stationary multi-path Rayleigh fading channels. Using the concept of a difference co-array, the new technique can construct M^2 co-pilots from M physical pilot tones with alternating pilot placement. Subspace methods, such as MUSIC and ESPRIT, can be used to estimate the multipath delays, and the number of identifiable paths is theoretically up to O(M^2). With the delay information, an MMSE estimator for the frequency response is derived. It is shown through simulations that the proposed method outperforms the conventional subspace channel estimator when the number of multipaths is greater than or equal to the number of physical pilots minus one.
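The difference co-array idea behind the pilot scheme can be sketched directly: the co-array is the set of pairwise differences of pilot positions, and a sparse placement of M pilots can yield O(M^2) distinct lags. The specific placement below is a hypothetical nested-style example, not the alternating placement of the thesis.

```python
import numpy as np

def difference_coarray(pilot_tones):
    """All distinct pairwise differences of pilot tone positions.
    A sparse placement of M pilots can produce O(M^2) distinct lags,
    which is what lets subspace methods identify more paths than pilots."""
    p = np.asarray(pilot_tones)
    return np.unique(p[:, None] - p[None, :])

# hypothetical sparse placement of M = 6 pilot tones
pilots = [1, 2, 3, 4, 8, 12]
lags = difference_coarray(pilots)
print(len(lags))   # number of distinct co-array lags from only 6 physical pilots
```

Here 6 pilots produce a contiguous run of lags from -11 to 11, i.e. far more virtual "co-pilots" than physical pilots; the identifiability claim in the text rests on this enlargement.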

Relevance:

10.00%

Abstract:

This thesis comprises three chapters, each of which is concerned with properties of allocational mechanisms which include voting procedures as part of their operation. The theme of interaction between economic and political forces recurs in the three chapters, as described below.

Chapter One demonstrates existence of a non-controlling interest shareholders' equilibrium for a stylized one-period stock market economy with fewer securities than states of the world. The economy has two decision mechanisms: Owners vote to change firms' production plans across states, fixing shareholdings; and individuals trade shares and the current production/consumption good, fixing production plans. A shareholders' equilibrium is a production plan profile, and a shares/current-good allocation stable for both mechanisms. In equilibrium, no (Kramer direction-restricted) plan revision is supported by a share-weighted majority, and there exists no Pareto superior reallocation.

Chapter Two addresses efficient management of stationary-site, fixed-budget, partisan voter registration drives. Sufficient conditions obtain for unique optimal registrar deployment within contested districts. Each census tract is assigned an expected net plurality return to registration investment index, computed from estimates of registration, partisanship, and turnout. Optimum registration intensity is a logarithmic transformation of a tract's index. These conditions are tested using a merged data set including both census variables and Los Angeles County Registrar data from several 1984 Assembly registration drives. Marginal registration spending benefits, registrar compensation, and the general campaign problem are also discussed.

The last chapter considers social decision procedures at a higher level of abstraction. Chapter Three analyzes the structure of decisive coalition families, given a quasitransitive-valued social decision procedure satisfying the universal domain and IIA axioms. By identifying those alternatives X* ⊆ X on which the Pareto principle fails, imposition in the social ranking is characterized. Every coalition is weakly decisive for X* over X~X*, and weakly antidecisive for X~X* over X*; therefore, alternatives in X~X* are never socially ranked above X*. Repeated filtering of alternatives causing Pareto failure shows states in X^((n))*~X^((n+1))* are never socially ranked above X^((n+1))*. Limiting results of iterated application of the *-operator are also discussed.

Relevance:

10.00%

Abstract:

Seismic reflection methods have been extensively used to probe the Earth's crust and suggest the nature of its formative processes. The analysis of multi-offset seismic reflection data extends the technique from a reconnaissance method to a powerful scientific tool that can be applied to test specific hypotheses. The treatment of reflections at multiple offsets becomes tractable if the assumptions of high-frequency rays are valid for the problem being considered. Their validity can be tested by applying the methods of analysis to full wave synthetics.

Three studies illustrate the application of these principles to investigations of the nature of the crust in southern California. A survey shot by the COCORP consortium in 1977 across the San Andreas fault near Parkfield revealed events in the record sections whose arrival time decreased with offset. The reflectors generating these events are imaged using a multi-offset three-dimensional Kirchhoff migration. Migrations of full wave acoustic synthetics having the same limitations in geometric coverage as the field survey demonstrate the utility of this back projection process for imaging. The migrated depth sections show the locations of the major physical boundaries of the San Andreas fault zone. The zone is bounded on the southwest by a near-vertical fault juxtaposing a Tertiary sedimentary section against uplifted crystalline rocks of the fault zone block. On the northeast, the fault zone is bounded by a fault dipping into the San Andreas, which includes slices of serpentinized ultramafics, intersecting it at 3 km depth. These interpretations can be made despite complications introduced by lateral heterogeneities.

In 1985 the Calcrust consortium designed a survey in the eastern Mojave desert to image structures in both the shallow and the deep crust. Preliminary field experiments showed that the major geophysical acquisition problem to be solved was the poor penetration of seismic energy through a low-velocity surface layer. Its effects could be mitigated through special acquisition and processing techniques. Data obtained from industry showed that quality data could be obtained from areas having a deeper, older sedimentary cover, causing a re-definition of the geologic objectives. Long offset stationary arrays were designed to provide reversed, wider angle coverage of the deep crust over parts of the survey. The preliminary field tests and constant monitoring of data quality and parameter adjustment allowed 108 km of excellent crustal data to be obtained.

This dataset, along with two others from the central and western Mojave, was used to constrain rock properties and the physical condition of the crust. The multi-offset analysis proceeded in two steps. First, an increase in reflection peak frequency with offset is indicative of a thinly layered reflector. The thickness and velocity contrast of the layering can be calculated from the spectral dispersion, to discriminate between structures resulting from broad scale or local effects. Second, the amplitude effects at different offsets of P-P scattering from weak elastic heterogeneities indicate whether the signs of the changes in density, rigidity, and Lamé's parameter at the reflector agree or are opposed. The effects of reflection generation and propagation in a heterogeneous, anisotropic crust were contained by the design of the experiment and the simplicity of the observed amplitude and frequency trends. Multi-offset spectra and amplitude trend stacks of the three Mojave Desert datasets suggest that the most reflective structures in the middle crust are strong Poisson's ratio (σ) contrasts. Porous zones or the juxtaposition of units of mutually distant origin are indicated. Heterogeneities in σ increase towards the top of a basal crustal zone at ~22 km depth. The transitions to the basal zone and to the mantle include increases in σ. The Moho itself includes ~400 m layering having a velocity higher than that of the uppermost mantle. The Moho maintains the same configuration across the Mojave despite 5 km of crustal thinning near the Colorado River. This indicates that Miocene extension there either thinned just the basal zone, or that the basal zone developed regionally after the extensional event.

Relevance:

10.00%

Abstract:

This thesis describes the theoretical solution and experimental verification of phase conjugation via nondegenerate four-wave mixing in resonant media. The theoretical work models the resonant medium as a two-level atomic system with the lower state of the system being the ground state of the atom. Working initially with an ensemble of stationary atoms, the density matrix equations are solved by third-order perturbation theory in the presence of the four applied electromagnetic fields which are assumed to be nearly resonant with the atomic transition. Two of the applied fields are assumed to be non-depleted counterpropagating pump waves while the third wave is an incident signal wave. The fourth wave is the phase conjugate wave which is generated by the interaction of the three previous waves with the nonlinear medium. The solution of the density matrix equations gives the local polarization of the atom. The polarization is used in Maxwell's equations as a source term to solve for the propagation and generation of the signal wave and phase conjugate wave through the nonlinear medium. Studying the dependence of the phase conjugate signal on the various parameters such as frequency, we show how an ultrahigh-Q isotropically sensitive optical filter can be constructed using the phase conjugation process.

In many cases the pump waves may saturate the resonant medium so we also present another solution to the density matrix equations which is correct to all orders in the amplitude of the pump waves since the third-order solution is correct only to first-order in each of the field amplitudes. In the saturated regime, we predict several new phenomena associated with degenerate four-wave mixing and also describe the ac Stark effect and how it modifies the frequency response of the filtering process. We also show how a narrow bandwidth optical filter with an efficiency greater than unity can be constructed.

In many atomic systems the atoms are moving at significant velocities such that the Doppler linewidth of the system is larger than the homogeneous linewidth. The latter linewidth dominates the response of the ensemble of stationary atoms. To better understand this case the density matrix equations are solved to third-order by perturbation theory for an atom of velocity v. The solution for the polarization is then integrated over the velocity distribution of the macroscopic system, which is assumed to be a Gaussian distribution of velocities since that is an excellent model of many real systems. Using the Doppler broadened system, we explain how a tunable optical filter can be constructed whose bandwidth is limited by the homogeneous linewidth of the atom while the tuning range of the filter extends over the entire Doppler profile.

Since it is a resonant system, sodium vapor is used as the nonlinear medium in our experiments. The relevant properties of sodium are discussed in great detail. In particular, the wavefunctions of the 3S and 3P states are analyzed and a discussion of how the 3S-3P transition models a two-level system is given.

Using sodium as the nonlinear medium we demonstrate an ultrahigh-Q optical filter using phase conjugation via nondegenerate four-wave mixing as the filtering process. The filter has a FWHM bandwidth of 41 MHz and a maximum efficiency of 4 × 10^-3. However, our theoretical work and other experimental work with sodium suggest that an efficient filter with both gain and a narrower bandwidth should be quite feasible.

Relevance:

10.00%

Abstract:

In this work, computationally efficient approximate methods are developed for analyzing uncertain dynamical systems. Uncertainties in both the excitation and the modeling are considered and examples are presented illustrating the accuracy of the proposed approximations.

For nonlinear systems under uncertain excitation, methods are developed to approximate the stationary probability density function and statistical quantities of interest. The methods are based on approximating solutions to the Fokker-Planck equation for the system and differ from traditional methods in which approximate solutions to stochastic differential equations are found. The new methods require little computational effort, and examples are presented for which the accuracy of the proposed approximations compares favorably to results obtained by existing methods. The most significant improvements are made in approximating quantities related to the extreme values of the response, such as expected outcrossing rates, which are crucial for evaluating the reliability of the system.
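For scalar first-order systems the stationary Fokker-Planck equation has a closed-form solution, p(x) ∝ exp(-V(x)/D), which is the kind of benchmark such approximations can be checked against. The potential and noise level below are illustrative choices, not examples from the thesis.

```python
import numpy as np

# Exact stationary Fokker-Planck density for the scalar SDE
#   dx = -V'(x) dt + sqrt(2*D) dW,   p(x) ∝ exp(-V(x)/D).
# Illustrative Duffing-type potential and noise level.
D = 0.2
V = lambda x: 0.5 * x**2 + 0.25 * x**4

x = np.linspace(-4.0, 4.0, 4001)
dx = x[1] - x[0]
p = np.exp(-V(x) / D)
p /= p.sum() * dx                  # normalize numerically on the grid

mean_x2 = (x**2 * p).sum() * dx    # stationary second moment of the response
print(mean_x2)                     # below D = 0.2, the pure-quadratic-potential value
```

Moments and outcrossing-rate estimates computed from an approximate density can be compared against quadrature on this exact form.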

Laplace's method of asymptotic approximation is applied to approximate the probability integrals which arise when analyzing systems with modeling uncertainty. The asymptotic approximation reduces the problem of evaluating a multidimensional integral to solving a minimization problem and the results become asymptotically exact as the uncertainty in the modeling goes to zero. The method is found to provide good approximations for the moments and outcrossing rates for systems with uncertain parameters under stochastic excitation, even when there is a large amount of uncertainty in the parameters. The method is also applied to classical reliability integrals, providing approximations in both the transformed (independently, normally distributed) variables and the original variables. In the transformed variables, the asymptotic approximation yields a very simple formula for approximating the value of SORM integrals. In many cases, it may be computationally expensive to transform the variables, and an approximation is also developed in the original variables. Examples are presented illustrating the accuracy of the approximations and results are compared with existing approximations.
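The core of Laplace's method described above can be sketched in one dimension: the integral of exp(-N f(x)) concentrates at the minimizer of f, and a Gaussian approximation there becomes asymptotically exact. The integrand below is an illustrative function, not a reliability integral from the thesis.

```python
import numpy as np

# Laplace asymptotic approximation of I(N) = ∫ exp(-N f(x)) dx:
#   I(N) ≈ exp(-N f(x0)) * sqrt(2*pi / (N * f''(x0)))  at the minimizer x0,
# reducing the integral to a minimization plus a curvature evaluation.
f = lambda x: 0.5 * (x - 1.0)**2 + 0.1 * np.cos(x)   # illustrative "log-integrand"
N = 50.0

x = np.linspace(-5.0, 7.0, 200001)
dx = x[1] - x[0]
x0 = x[np.argmin(f(x))]                              # minimizer by grid search
h = 1e-4
f2 = (f(x0 + h) - 2 * f(x0) + f(x0 - h)) / h**2      # numerical f''(x0)

laplace = np.exp(-N * f(x0)) * np.sqrt(2 * np.pi / (N * f2))
exact = np.exp(-N * f(x)).sum() * dx                 # quadrature reference
print(laplace / exact)                               # near 1; error shrinks as N grows
```

The same structure underlies the SORM-type formulas mentioned in the text: the minimization locates the dominant point, and the Hessian supplies the prefactor.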

Relevância:

10.00% 10.00%

Publicador:

Resumo:

This thesis presents a simplified state-variable method to solve for the nonstationary response of linear MDOF systems subjected to a modulated stationary excitation in both time and frequency domains. The resulting covariance matrix and evolutionary spectral density matrix of the response may be expressed as a product of a constant system matrix and a time-dependent matrix; the latter can be explicitly evaluated for most envelopes currently used in engineering. The stationary correlation matrix of the response may be found by taking the limit of the covariance response when a unit step envelope is used. The reliability analysis can then be performed based on the first two moments of the response obtained.
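The unit-step-envelope limit mentioned above is the classical stationary covariance of a linear state-space system, which satisfies a Lyapunov equation. A single-degree-of-freedom sketch (illustrative parameters, not a system from the thesis):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Stationary covariance of  x'' + 2*zeta*omega*x' + omega^2*x = w(t),
# E[w(t)w(t+s)] = 2*D*delta(s), via the Lyapunov equation A P + P A^T + B = 0.
# This is the t -> infinity (unit-step-envelope) limit of the modulated response.
zeta, omega, D = 0.05, 2.0, 0.1
A = np.array([[0.0, 1.0], [-omega**2, -2 * zeta * omega]])
B = np.array([[0.0, 0.0], [0.0, 2 * D]])

P = solve_continuous_lyapunov(A, -B)       # P[0,0] = Var(x), P[1,1] = Var(x')
print(P[0, 0], D / (2 * zeta * omega**3))  # matches the classical SDOF result
```

The reliability analysis in the text then uses exactly these first two moments of the response.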

The method presented facilitates obtaining explicit solutions for general linear MDOF systems and is flexible enough to be applied to different stochastic models of excitation, such as stationary, modulated stationary, filtered stationary, and filtered modulated stationary models, together with their stochastic equivalents, including the random pulse train model, filtered shot noise, and some ARMA models used in earthquake engineering. This approach may also be readily incorporated into finite element codes for random vibration analysis of linear structures.

A set of explicit solutions for the response of simple linear structures subjected to modulated white noise earthquake models with four different envelopes is presented as an illustration. In addition, the method has been applied to three selected topics of interest in earthquake engineering, namely, nonstationary analysis of primary-secondary systems with classical or nonclassical damping, soil layer response and related structural reliability analysis, and the effect of vertical components on the seismic performance of structures. For all three cases, explicit solutions are obtained, dynamic characteristics of the structures are investigated, and some suggestions are given for aseismic design.
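The core covariance computation can be sketched for a single oscillator. The state covariance P(t) of a linear system driven by envelope-modulated white noise obeys the matrix ODE P' = A P + P Aᵀ + e(t)² D B Bᵀ, and with a unit-step envelope P(t) approaches the stationary solution of the Lyapunov equation. All parameter values below are illustrative choices:

```python
import numpy as np

# SDOF oscillator  x'' + 2*zeta*om*x' + om^2*x = e(t)*w(t),
# with E[w(t)w(s)] = D*delta(t-s).  The covariance matrix satisfies
#   P'(t) = A P + P A^T + e(t)^2 * D * B B^T.
# With the unit-step envelope e(t) = 1, P(t) tends to the stationary
# solution of  A P + P A^T + D B B^T = 0.
zeta, om, D = 0.05, 2.0, 1.0                    # assumed parameters
A = np.array([[0.0, 1.0], [-om**2, -2.0 * zeta * om]])
B = np.array([[0.0], [1.0]])
Q = D * (B @ B.T)

P = np.zeros((2, 2))                            # system initially at rest
dt, T = 1e-3, 200.0
for _ in range(int(T / dt)):
    P = P + dt * (A @ P + P @ A.T + Q)          # envelope e(t) = 1

# Classical stationary variances for comparison.
var_x_exact = D / (4.0 * zeta * om**3)          # displacement variance
var_v_exact = D / (4.0 * zeta * om)             # velocity variance
```

After the transient decays (time scale 1/(2 ζ ω)), the diagonal of P recovers the familiar white-noise variances and the stationary cross-covariance E[x ẋ] vanishes, which is the "unit-step envelope limit" mentioned above.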

Resumo:

This dissertation studies the long-term behavior of random Riccati recursions and of a mathematical epidemic model. Riccati recursions arise in Kalman filtering, where the error covariance matrix satisfies a Riccati recursion. Convergence conditions for time-invariant Riccati recursions are well studied. We focus on the time-varying case, and assume that the regressor matrix is random and independent and identically distributed according to a given distribution whose density is continuous, supported on the whole space, and decays faster than any polynomial. We study the geometric convergence of the probability distribution. We also study the global dynamics of epidemic spread over complex networks for various models. For instance, in the discrete-time Markov chain model, each node is either healthy or infected at any given time. In this setting, the number of states grows exponentially with the size of the network. The Markov chain has a unique stationary distribution, in which all nodes are healthy with probability 1. Since the distribution of a Markov chain on a finite state space converges to its stationary distribution, the model predicts that the epidemic dies out after a long enough time. To analyze the Markov chain model, we study a nonlinear epidemic model whose state at any given time is the vector of marginal infection probabilities of the nodes in the network. Convergence to the origin in the epidemic map implies extinction of the epidemic. The nonlinear map is upper-bounded by its linearization at the origin. As a result, the origin is the globally stable unique fixed point of the nonlinear model if the linear upper bound is stable. The nonlinear model has a second fixed point when the linear upper bound is unstable. We carry out stability analysis of this second fixed point for both the discrete-time and continuous-time models.
Returning to the Markov chain model, we claim that the stability of the linear upper bound for the nonlinear model is closely related to the extinction time of the Markov chain. We show that a stable linear upper bound is a sufficient condition for fast extinction, and that the probability of survival is bounded in terms of the nonlinear epidemic map.
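The linear-upper-bound argument can be illustrated with a standard discrete-time SIS marginal-probability map (the specific map, graph, and rates below are illustrative assumptions, not necessarily the exact model in the thesis):

```python
import numpy as np

# Discrete-time SIS marginal-probability map on a network:
#   p_{t+1,i} = (1 - delta)*p_{t,i} + beta*(1 - p_{t,i}) * (A p_t)_i.
# Since (1 - p_i) <= 1, the map is upper-bounded elementwise by the
# linearization at the origin,  M = (1 - delta)*I + beta*A,  so
# spectral radius rho(M) < 1 forces extinction.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # small undirected graph (assumed)
delta, beta = 0.6, 0.15                      # recovery / infection rates (assumed)
M = (1.0 - delta) * np.eye(4) + beta * A
rho = max(abs(np.linalg.eigvals(M)))         # spectral radius of the bound

p = np.full(4, 0.9)                          # nodes initially likely infected
q = p.copy()                                 # trajectory of the linear bound
for _ in range(100):
    p = (1.0 - delta) * p + beta * (1.0 - p) * (A @ p)   # nonlinear map
    q = M @ q                                            # linear upper bound
```

Because M has nonnegative entries, p_t ≤ q_t propagates by induction, so when rho(M) < 1 both trajectories decay geometrically to the origin; this is the elementwise domination behind the fast-extinction claim.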

Resumo:

How powerful are Quantum Computers? Despite the prevailing belief that Quantum Computers are more powerful than their classical counterparts, this remains a conjecture backed by little formal evidence. Shor's famous factoring algorithm [Shor97] gives an example of a problem that can be solved efficiently on a quantum computer with no known efficient classical algorithm. Factoring, however, is unlikely to be NP-Hard, meaning that few unexpected formal consequences would arise, should such a classical algorithm be discovered. Could it then be the case that any quantum algorithm can be simulated efficiently classically? Likewise, could it be the case that Quantum Computers can quickly solve problems much harder than factoring? If so, where does this power come from, and what classical computational resources do we need to solve the hardest problems for which there exist efficient quantum algorithms?

We make progress toward understanding these questions through studying the relationship between classical nondeterminism and quantum computing. In particular, is there a problem that can be solved efficiently on a Quantum Computer that cannot be efficiently solved using nondeterminism? In this thesis we address this problem from the perspective of sampling problems. Namely, we give evidence that approximately sampling the Quantum Fourier Transform of an efficiently computable function, while easy quantumly, is hard for any classical machine in the Polynomial Time Hierarchy. In particular, we prove the existence of a class of distributions that can be sampled efficiently by a Quantum Computer, that likely cannot be approximately sampled in randomized polynomial time with an oracle for the Polynomial Time Hierarchy.
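The object being sampled can be made concrete for tiny instances. A quantum computer can prepare the normalized state Σ_x f(x)|x⟩, apply the quantum Fourier transform over Z_N, and measure, obtaining y with probability |f̂(y)|² / Σ_z |f̂(z)|². For small N the same distribution is computable classically with an FFT; the hardness claim concerns approximately sampling it at scale. The function f below is an arbitrary illustrative choice:

```python
import numpy as np

# Fourier sampling: measure y with probability proportional to the
# squared magnitude of the Fourier coefficient f_hat(y).
N = 16
rng = np.random.default_rng(0)
f = rng.integers(0, 2, size=N).astype(float)   # a Boolean function on Z_N
f[0] = 1.0                                     # ensure f is not identically 0

f_hat = np.fft.fft(f) / np.sqrt(N)             # unitary-normalized DFT
probs = np.abs(f_hat) ** 2
probs /= probs.sum()                           # the measurement distribution

sample = rng.choice(N, p=probs)                # one draw from the sampler
```

The classical simulation above costs time polynomial in N = 2ⁿ, i.e. exponential in the number of qubits n, which is exactly the gap the sampling-hardness argument exploits.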

Our work complements and generalizes the evidence given in Aaronson and Arkhipov's work [AA2013] where a different distribution with the same computational properties was given. Our result is more general than theirs, but requires a more powerful quantum sampler.

Resumo:

An economic air pollution control model, which determines the least cost of reaching various air quality levels, is formulated. The model takes the form of a general, nonlinear, mathematical programming problem. Primary contaminant emission levels are the independent variables. The objective function is the cost of attaining various emission levels and is to be minimized subject to constraints that given air quality levels be attained.

The model is applied to a simplified statement of the photochemical smog problem in Los Angeles County in 1975, with emissions specified by a two-dimensional vector: total reactive hydrocarbon (RHC) and nitrogen oxide (NOx) emissions. Air quality, also two-dimensional, is measured by the expected number of days per year that nitrogen dioxide (NO2) and mid-day ozone (O3) exceed standards in Central Los Angeles.

The minimum cost of reaching various emission levels is found by a linear programming model. The base or "uncontrolled" emission levels are those that will exist in 1975 with the present new car control program and with the degree of stationary source control existing in 1971. Controls, basically "add-on devices," are considered here for used cars, aircraft, and existing stationary sources. It is found that with these added controls, Los Angeles County emissions (1300 tons/day RHC and 1000 tons/day NOx in 1969; 670 tons/day RHC and 790 tons/day NOx at the base 1975 level) can be reduced to 260 tons/day RHC (minimum RHC program) and 460 tons/day NOx (minimum NOx program).
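The structure of the least-cost problem can be sketched for a single pollutant: choose the usage fraction of each control measure, subject to a required total emission reduction, minimizing annualized cost. With one constraint and continuous fractions, the linear program is solved greedily in order of cost-effectiveness. All measures and numbers below are hypothetical, not the thesis's cost data:

```python
# Least-cost control for one pollutant: pick fractions x_k in [0, 1] of
# each control measure to achieve a target reduction at minimum cost.
# With a single constraint, the LP optimum applies the cheapest control
# (dollars per ton/day removed) first.  All figures are hypothetical.
controls = [
    # (name, max reduction in tons/day, annualized cost in $ at full use)
    ("used-car devices",   300.0, 30e6),
    ("aircraft controls",   50.0, 10e6),
    ("stationary sources", 250.0, 40e6),
]

def least_cost(target_reduction):
    """Greedy solution of the single-constraint linear program."""
    plan, total = [], 0.0
    for name, red, cost in sorted(controls, key=lambda c: c[2] / c[1]):
        if target_reduction <= 0.0:
            break
        frac = min(1.0, target_reduction / red)   # fraction of this control
        plan.append((name, frac))
        total += frac * cost
        target_reduction -= frac * red
    if target_reduction > 1e-9:
        raise ValueError("target not attainable with listed controls")
    return total, plan

cost, plan = least_cost(400.0)   # e.g. remove 400 tons/day of RHC
```

The full model in the thesis has two pollutants and air-quality constraints rather than a direct emission target, but the same cost-versus-reduction trade-off underlies its graphical solution.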

"Phenomenological" or statistical air quality models provide the relationship between air quality and emissions. These models estimate the relationship by using atmospheric monitoring data taken at one (yearly) emission level and by using certain simple physical assumptions (e.g., that emissions are reduced proportionately at all points in space and time). For NO2 (concentrations assumed proportional to NOx emissions), it is found that standard violations in Central Los Angeles (55 in 1969) can be reduced to 25, 5, and 0 days per year by controlling emissions to 800, 550, and 300 tons/day, respectively. A probabilistic model reveals that RHC control is much more effective than NOx control in reducing Central Los Angeles ozone. The 150 days per year of ozone violations in 1969 can be reduced to 75, 30, 10, and 0 days per year by abating RHC emissions to 700, 450, 300, and 150 tons/day, respectively (at the 1969 NOx emission level).

The control cost-emission level and air quality-emission level relationships are combined in a graphical solution of the complete model to find the cost of various air quality levels. Best possible air quality levels with the controls considered here are 8 O3 and 10 NO2 violations per year (minimum ozone program) or 25 O3 and 3 NO2 violations per year (minimum NO2 program) with an annualized cost of $230,000,000 (above the estimated $150,000,000 per year for the new car control program for Los Angeles County motor vehicles in 1975).