983 results for proof theory


Relevance:

20.00%

Publisher:

Abstract:

Photoelectron angular distributions produced in above-threshold ionization (ATI) are analysed using a nonperturbative scattering theory. The numerical results are in good qualitative agreement with recent measurements. Our study shows that the jet-like structure arises from the inherent properties of the ATI process and not from the angular momentum of either the initial or the excited states of the atom.


The argument of Habermas, a leading critical theorist, is founded on the unequal distribution of wealth across society. He states that in an advanced capitalist society, the possibility of a crisis has shifted from the economic and political spheres to the legitimation system. Legitimation crises increase the more government intervenes in the economy (market), alongside the "simultaneous political enfranchisement of almost the entire adult population" (Holub, 1991, p. 88). This increase occurs because policymakers in advanced capitalist democracies are caught between conflicting imperatives: they are expected to serve the interests of their nation as a whole, but they must prop up an economic system that benefits the wealthy at the expense of most workers and the environment. Habermas argues that the driving force in history is an expectation, built into the nature of language, that norms, laws, and institutions will serve the interests of the entire population and not just those of a special group. In his view, policymakers in capitalist societies must fend off this expectation by simultaneously correcting some of the inequities of the market, denying that they have control over people's economic circumstances, and defending the market as an equitable allocator of income (deHaven-Smith, 1988, p. 14). Critical theory suggests that this contradiction will be reflected in Everglades policy by communicative narratives that suppress and conceal tensions between environmental and economic priorities. Habermas's Legitimation Crisis states that political actors use various symbols, ideologies, narratives, and language to engage the public and avoid a legitimation crisis. These influences not only manipulate the general population into desiring what has been manufactured for them, but also leave them feeling unfulfilled and alienated.
In what is also known as false reconciliation, the public's view of society as rational and "conducive to human freedom and happiness" is altered, so that society comes to be seen as deeply irrational and an obstacle to the desired freedom and happiness (Finlayson, 2005, p. 5). These obstacles and irrationalities give rise to potential crises in the society. The government's increasing involvement in the Everglades under advanced capitalism leads to Habermas's four crises: economic/environmental, rationality, legitimation, and motivation. These crises occur simultaneously, work in conjunction with each other, and arise when a principle of organization is challenged by increased production needs (deHaven-Smith, 1988). Habermas states that governments use narratives in an attempt to rationalize, legitimize, obscure, and conceal their actions under advanced capitalism. Although many narratives have been told throughout the history of the Everglades (such as that the Everglades was a wilderness valued as a wasteland in its natural state), the most recent narrative, "Everglades Restoration", is the focus of this paper. (PDF contains 4 pages)


This thesis presents recent research into analytic topics in the classical theory of General Relativity. It is a thesis in two parts. The first part features investigations into the spectrum of perturbed, rotating black holes. These include the study of near-horizon perturbations, leading to a new generic frequency mode for black hole ringdown; a treatment of high frequency waves using WKB methods for Kerr black holes; and the discovery of a bifurcation of the quasinormal mode spectrum of rapidly rotating black holes. These results represent new discoveries in the field of black hole perturbation theory, and rely on additional approximations to the linearized field equations around the background black hole. The second part of this thesis presents a recently developed method for the visualization of curved spacetimes, using field lines called the tendex and vortex lines of the spacetime. The works presented here both introduce these visualization techniques and explore them in simple situations. These include the visualization of asymptotic gravitational radiation; weak gravity situations with and without radiation; stationary black hole spacetimes; and some preliminary study into numerically simulated black hole mergers. The second part of the thesis culminates in the investigation of perturbed black holes using these field line methods, which have uncovered new insights into the dynamics of curved spacetime around black holes.


In recent years coastal resource management has begun to stand as its own discipline. Its multidisciplinary nature gives it access to theory situated in each of the diverse fields which it may encompass, yet management practices often revert to the primary field of the manager. There is no common set of “coastal” theory from which managers can draw. Seven resource-related issues with which coastal area managers must contend are: coastal habitat conservation, traditional maritime communities and economies, strong development and use pressures, adaptation to sea level rise and climate change, landscape sustainability and resilience, coastal hazards, and emerging energy technologies. The complexity and range of human and environmental interactions at the coast suggest a strong need for a common body of coastal management theory which managers would do well to understand generally. Planning theory, which is itself a synthesis of concepts from multiple fields, contains ideas generally valuable to coastal management. Planning theory can not only provide an example of how to develop a multi- or transdisciplinary body of theory, but may also provide an actual theoretical foundation for a coastal theory. In particular we discuss five concepts in the planning theory discourse and present their utility for coastal resource managers. These include “wicked” problems, ecological planning, the epistemology of knowledge communities, the role of the planner/manager, and collaborative planning. While these theories are known and familiar to some professionals working at the coast, we argue that there is a need for broader understanding amongst the various specialists working in the increasingly identifiable field of coastal resource management. (PDF contains 4 pages)


Signal processing techniques play important roles in the design of digital communication systems. These include information manipulation, transmitter signal processing, channel estimation, channel equalization, and receiver signal processing. By interacting with communication theory and system implementation technologies, signal processing specialists develop efficient schemes for various communication problems by wisely exploiting mathematical tools such as analysis, probability theory, matrix theory, and optimization theory. In recent years, researchers realized that multiple-input multiple-output (MIMO) channel models are applicable to a wide range of different physical communication channels. Using elegant matrix-vector notation, many MIMO transceiver (including the precoder and equalizer) design problems can be solved by matrix and optimization theory. Furthermore, researchers showed that majorization theory and matrix decompositions, such as the singular value decomposition (SVD), geometric mean decomposition (GMD), and generalized triangular decomposition (GTD), provide unified frameworks for solving many of the point-to-point MIMO transceiver design problems.
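As a minimal sketch of the unified matrix-theoretic view described above (the 4x4 channel below is a random placeholder, not a model from the thesis), the SVD converts a MIMO channel into parallel scalar subchannels:

```python
import numpy as np

# Hypothetical 4x4 MIMO channel; dimensions and statistics are illustrative.
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# SVD: H = U diag(s) V^H. Precoding with V at the transmitter and applying
# U^H at the receiver turns the matrix channel into independent subchannels
# with gains given by the singular values s.
U, s, Vh = np.linalg.svd(H)
effective = U.conj().T @ H @ Vh.conj().T  # equals diag(s) up to round-off

assert np.allclose(effective, np.diag(s), atol=1e-10)
```

This is the basic reason matrix decompositions appear throughout the transceiver designs that follow: each decomposition corresponds to a different transmit/receive structure acting on the same channel matrix.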

In this thesis, we consider the transceiver design problems for linear time invariant (LTI) flat MIMO channels, linear time-varying narrowband MIMO channels, flat MIMO broadcast channels, and doubly selective scalar channels. The channel estimation problem is also considered. The main contributions of this dissertation are the development of new matrix decompositions, and the application of these decompositions and majorization theory to practical transmit-receive scheme design for transceiver optimization problems. Elegant solutions are obtained, novel transceiver structures are developed, ingenious algorithms are proposed, and performance analyses are derived.

The first part of the thesis focuses on transceiver design with LTI flat MIMO channels. We propose a novel matrix decomposition which decomposes a complex matrix as a product of several sets of semi-unitary matrices and upper triangular matrices in an iterative manner. The complexity of the new decomposition, the generalized geometric mean decomposition (GGMD), is always less than or equal to that of the geometric mean decomposition (GMD). The optimal GGMD parameters which yield the minimal complexity are derived. Based on the channel state information (CSI) at both the transmitter (CSIT) and receiver (CSIR), GGMD is used to design a butterfly-structured decision feedback equalizer (DFE) MIMO transceiver which achieves the minimum average mean square error (MSE) under the total transmit power constraint. A novel iterative detection algorithm for this receiver is also proposed. For the application to cyclic prefix (CP) systems, in which the SVD of the equivalent channel matrix can be easily computed, the proposed GGMD transceiver has a K/log_2(K)-fold complexity advantage over the GMD transceiver, where K is the number of data symbols per data block and is a power of 2. The performance analysis shows that the GGMD DFE transceiver can convert a MIMO channel into a set of parallel subchannels with the same bias and signal-to-interference-plus-noise ratios (SINRs). Hence, the average bit error rate (BER) is automatically minimized without the need for bit allocation. Moreover, the proposed transceiver can achieve the channel capacity simply by applying independent scalar Gaussian codes of the same rate at the subchannels.
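The equal-SINR property of GMD-style designs rests on a simple invariant: in any decomposition H = Q R P^H with unitary Q, P and upper-triangular R, the unitary factors preserve |det H|, so forcing R's diagonal to be constant pins every entry at the geometric mean of the singular values. A short NumPy check of that invariant (a random matrix stands in for the channel; this is not the GGMD/GMD algorithm itself):

```python
import numpy as np

# Random real matrix standing in for a MIMO channel (illustrative only).
rng = np.random.default_rng(1)
H = rng.standard_normal((4, 4))

# Singular values, and the geometric mean a GMD places on R's diagonal.
s = np.linalg.svd(H, compute_uv=False)
gbar = np.prod(s) ** (1.0 / len(s))

# |det H| = prod(s) is unchanged by unitary Q and P, so prod(diag(R)) is
# fixed; an equal-diagonal R therefore has every diagonal entry = gbar.
assert np.isclose(abs(np.linalg.det(H)), np.prod(s))
assert np.isclose(gbar ** len(s), np.prod(s))
```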

In the second part of the thesis, we focus on MIMO transceiver design for slowly time-varying MIMO channels under the zero-forcing or MMSE criterion. Even though the GGMD/GMD DFE transceivers work for slowly time-varying MIMO channels by exploiting the instantaneous CSI at both ends, their performance is by no means optimal, since the temporal diversity of the time-varying channels is not exploited. Based on the GTD, we develop the space-time GTD (ST-GTD) for the decomposition of linear time-varying flat MIMO channels. Under the assumption that CSIT, CSIR, and channel prediction are available, we use the proposed ST-GTD to develop space-time geometric mean decomposition (ST-GMD) DFE transceivers under the zero-forcing or MMSE criterion. Under perfect channel prediction, the new system minimizes both the average MSE at the detector in each space-time (ST) block (which consists of several coherence blocks) and the average per-ST-block BER in the moderately high SNR region. Moreover, the ST-GMD DFE transceiver designed under the MMSE criterion maximizes the Gaussian mutual information over the equivalent channel seen by each ST-block. In general, the newly proposed transceivers perform better than the GGMD-based systems, since the superimposed temporal precoder is able to exploit the temporal diversity of time-varying channels. For practical applications, a novel ST-GTD based system which does not require channel prediction but achieves the same asymptotic BER performance as the ST-GMD DFE transceiver is also proposed.

The third part of the thesis considers two quality of service (QoS) transceiver design problems for flat MIMO broadcast channels. The first is the power minimization problem (min-power) with a total bitrate constraint and per-stream BER constraints. The second is the rate maximization problem (max-rate) with a total transmit power constraint and per-stream BER constraints. Exploiting a particular class of joint triangularization (JT), we are able to jointly optimize the bit allocation and the broadcast DFE transceiver for the min-power and max-rate problems. The resulting optimal designs are called the minimum power JT broadcast DFE transceiver (MPJT) and the maximum rate JT broadcast DFE transceiver (MRJT), respectively. In addition to the optimal designs, two suboptimal designs based on the QR decomposition are proposed. They are realizable for an arbitrary number of users.

Finally, we investigate the design of a discrete Fourier transform (DFT) modulated filterbank transceiver (DFT-FBT) with linear time-varying (LTV) scalar channels. For both cases of known LTV channels and unknown wide sense stationary uncorrelated scattering (WSSUS) statistical channels, we show how to optimize the transmitting and receiving prototypes of a DFT-FBT such that the SINR at the receiver is maximized. Also, a novel pilot-aided subspace channel estimation algorithm is proposed for orthogonal frequency division multiplexing (OFDM) systems with quasi-stationary multi-path Rayleigh fading channels. Using the concept of a difference co-array, the new technique can construct M^2 co-pilots from M physical pilot tones with alternating pilot placement. Subspace methods, such as MUSIC and ESPRIT, can be used to estimate the multipath delays, and theoretically the number of identifiable paths is up to O(M^2). With the delay information, an MMSE estimator for the frequency response is derived. It is shown through simulations that the proposed method outperforms the conventional subspace channel estimator when the number of multipaths is greater than or equal to the number of physical pilots minus one.
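The co-array counting argument can be illustrated in a few lines; the pilot positions below are a hypothetical perfect-difference-style layout chosen for the example, not the thesis's alternating placement:

```python
# Hypothetical placement of M = 4 pilot tones on a frequency grid.
pilots = [0, 1, 4, 9]

# Difference co-array: the set of all pairwise differences p_i - p_j.
diffs = {a - b for a in pilots for b in pilots}

# M physical pilots yield up to M^2 - M + 1 distinct lags (13 here),
# the O(M^2) virtual aperture that subspace methods such as MUSIC and
# ESPRIT can exploit for delay estimation.
print(len(diffs))  # 13
```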


This thesis presents theories, analyses, and algorithms for detecting and estimating parameters of geospatial events with today's large, noisy sensor networks. A geospatial event is initiated by a significant change in the state of points in a region in a 3-D space over an interval of time. After the event is initiated, it may change the state of points over larger regions and longer periods of time. Networked sensing is a typical approach for geospatial event detection. In contrast to traditional sensor networks comprised of a small number of high quality (and expensive) sensors, trends in personal computing devices and consumer electronics have made it possible to build large, dense networks at a low cost. The changes in sensor capability, network composition, and system constraints call for new models and algorithms suited to the opportunities and challenges of the new generation of sensor networks. This thesis offers a single unifying model and a Bayesian framework for analyzing different types of geospatial events in such noisy sensor networks. It presents algorithms and theories for estimating the speed and accuracy of detecting geospatial events as a function of parameters from both the underlying geospatial system and the sensor network. Furthermore, the thesis addresses network scalability issues by presenting rigorous scalable algorithms for data aggregation for detection. These studies provide insights to the design of networked sensing systems for detecting geospatial events. In addition to providing an overarching framework, this thesis presents theories and experimental results for two very different geospatial problems: detecting earthquakes and hazardous radiation. The general framework is applied to these specific problems, and predictions based on the theories are validated against measurements of systems in the laboratory and in the field.
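A toy instance of Bayesian event detection with cheap, noisy binary sensors may make the framework concrete; all rates below are invented for illustration and are not the thesis's models:

```python
import math

# Illustrative prior and per-sensor error rates (hypothetical values).
p_event = 0.01          # prior probability of a geospatial event
p_d, p_f = 0.8, 0.05    # per-sensor detection / false-alarm probabilities

def posterior(reports):
    """P(event | independent binary sensor reports), reports[i] in {0, 1}."""
    like_e = math.prod(p_d if r else 1 - p_d for r in reports)
    like_n = math.prod(p_f if r else 1 - p_f for r in reports)
    num = like_e * p_event
    return num / (num + like_n * (1 - p_event))

# Three of four low-quality sensors firing already makes the event likely.
print(round(posterior([1, 1, 1, 0]), 3))  # 0.897
```

The point of the sketch is the one the paragraph makes: many inexpensive, individually unreliable sensors can be fused into a confident detection, at the price of models and algorithms that account for their noise.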


A standard question in the study of geometric quantization is whether symplectic reduction interacts nicely with the quantized theory, and in particular whether “quantization commutes with reduction.” Guillemin and Sternberg first proposed this question, and answered it in the affirmative for the case of a free action of a compact Lie group on a compact Kähler manifold. Subsequent work has focused mainly on extending their proof to non-free actions and non-Kähler manifolds. For realistic physical examples, however, it is desirable to have a proof which also applies to non-compact symplectic manifolds.

In this thesis we give a proof of the quantization-reduction problem for general symplectic manifolds. This is accomplished by working in a particular wavefunction representation, associated with a polarization that is in some sense compatible with reduction. While the polarized sections described by Guillemin and Sternberg are nonzero on a dense subset of the Kähler manifold, the ones considered here are distributional, having support only on regions of the phase space associated with certain quantized, or “admissible”, values of momentum.

We first propose a reduction procedure for the prequantum geometric structures that “covers” symplectic reduction, and demonstrate how both symplectic and prequantum reduction can be viewed as examples of foliation reduction. Consistency of prequantum reduction imposes the above-mentioned admissibility conditions on the quantized momenta, which can be seen as analogues of the Bohr-Wilson-Sommerfeld conditions for completely integrable systems.
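For reference, the Bohr-Wilson-Sommerfeld conditions invoked in the analogy can be stated in their standard form (this is textbook material, not an equation from the thesis):

```latex
% Bohr--Wilson--Sommerfeld quantization for a completely integrable system:
% the action integral over each basis loop \gamma_k of an invariant torus
% is quantized (\mu_k denotes the Maslov index of the loop).
\oint_{\gamma_k} p \, dq \;=\; 2\pi\hbar \left( n_k + \frac{\mu_k}{4} \right),
\qquad n_k \in \mathbb{Z}.
```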

We then describe our reduction-compatible polarization, and demonstrate a one-to-one correspondence between polarized sections on the unreduced and reduced spaces.

Finally, we describe a factorization of the reduced prequantum bundle, suggested by the structure of the underlying reduced symplectic manifold. This in turn induces a factorization of the space of polarized sections that agrees with its usual decomposition by irreducible representations, and so proves that quantization and reduction do indeed commute in this context.

A significant omission from the proof is the construction of an inner product on the space of polarized sections, and a discussion of its behavior under reduction. In the concluding chapter of the thesis, we suggest some ideas for future work in this direction.


This dissertation consists of two parts. The first part presents an explicit procedure for applying multi-Regge theory to production processes. As an illustrative example, the case of three-body final states is developed in detail, both with respect to kinematics and multi-Regge dynamics. Next, the experimental consistency of the multi-Regge hypothesis is tested in a specific high energy reaction; the hypothesis is shown to provide a good qualitative fit to the data. In addition, the results demonstrate a severe suppression of double Pomeranchon exchange, and show the coupling of two "Reggeons" to an external particle to be strongly damped as the particle's mass increases. Finally, with the use of two-body Regge parameters, order of magnitude estimates of the multi-Regge cross section for various reactions are given.

The second part presents a diffraction model for high energy proton-proton scattering. This model, developed by Chou and Yang, assumes that high energy elastic scattering results from absorption of the incident wave into the many available inelastic channels, with the absorption proportional to the amount of interpenetrating hadronic matter. The assumption that the hadronic matter distribution is proportional to the charge distribution relates the scattering amplitude for pp scattering to the proton form factor. The Chou-Yang model with the empirical proton form factor as input is then applied to calculate a high energy, fixed momentum transfer limit for the scattering cross section. This limiting cross section exhibits the same "dip" or "break" structure indicated in present experiments, but falls significantly below the data in magnitude. Finally, possible spin dependence is introduced through a weak spin-orbit type term which gives rather good agreement with pp polarization data.
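For orientation, the diffraction picture described above is usually written in eikonal (impact-parameter) form; the expressions below are the standard ones from the general literature, not taken from this dissertation, and normalization conventions vary:

```latex
% Eikonal elastic amplitude: absorption with opacity \Omega(b), where the
% Chou--Yang ansatz takes the Fourier transform of \Omega proportional to
% the square of the proton form factor G(q^2).
a(s,t) \;=\; i \int_0^{\infty} b \, db \; J_0\!\bigl(b\sqrt{-t}\bigr)
        \bigl[\, 1 - e^{-\Omega(b)} \,\bigr],
\qquad
\frac{d\sigma}{dt} \;\propto\; \bigl| a(s,t) \bigr|^2 .
```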


In this work, the development of a probabilistic approach to robust control is motivated by structural control applications in civil engineering. Often in civil structural applications, a system's performance is specified in terms of its reliability. In addition, the model and input uncertainty for the system may be described most appropriately using probabilistic or "soft" bounds on the model and input sets. The probabilistic robust control methodology contrasts with existing H∞/μ robust control methodologies that do not use probability information for the model and input uncertainty sets, yielding only the guaranteed (i.e., "worst-case") system performance, and no information about the system's probable performance, which would be of interest to civil engineers.

The design objective for the probabilistic robust controller is to maximize the reliability of the uncertain structure/controller system for a probabilistically-described uncertain excitation. The robust performance is computed for a set of possible models by weighting the conditional performance probability for a particular model by the probability of that model, then integrating over the set of possible models. This integration is accomplished efficiently using an asymptotic approximation. The probable performance can be optimized numerically over the class of allowable controllers to find the optimal controller. Also, if structural response data becomes available from a controlled structure, its probable performance can easily be updated using Bayes's Theorem to update the probability distribution over the set of possible models. An updated optimal controller can then be produced, if desired, by following the original procedure. Thus, the probabilistic framework integrates system identification and robust control in a natural manner.
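For a finite set of candidate models, the model-averaging step described above reduces to a weighted sum; the sketch below uses invented numbers purely to illustrate the bookkeeping, including the Bayes's Theorem update when response data arrive:

```python
# Hypothetical three-model set with prior probabilities and conditional
# failure probabilities (all numbers illustrative, not from the thesis).
p_model = [0.2, 0.5, 0.3]                 # P(M_i): prior model probabilities
p_fail_given_model = [0.10, 0.02, 0.05]   # P(fail | M_i)

# Robust (probable) failure probability: sum_i P(fail | M_i) * P(M_i).
p_fail = sum(pf * pm for pf, pm in zip(p_fail_given_model, p_model))

# Bayes's Theorem update once response data arrive: weight each prior by
# the data likelihood under that model and renormalize.
likelihood = [0.1, 0.6, 0.3]              # P(data | M_i), invented
post_unnorm = [l * pm for l, pm in zip(likelihood, p_model)]
total = sum(post_unnorm)
p_model_updated = [p / total for p in post_unnorm]

print(round(p_fail, 3))  # 0.045
```

In the thesis this sum is an integral over a continuous model set, evaluated with an asymptotic approximation rather than by enumeration; the discrete version above only shows how the prior-weighted performance and its Bayesian update fit together.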

The probabilistic robust control methodology is applied to two systems in this thesis. The first is a high-fidelity computer model of a benchmark structural control laboratory experiment. For this application, uncertainty in the input model only is considered. The probabilistic control design minimizes the failure probability of the benchmark system while remaining robust with respect to the input model uncertainty. The performance of an optimal low-order controller compares favorably with that of higher-order controllers for the same benchmark system that are based on other approaches. The second application is to the Caltech Flexible Structure, which is a lightweight aluminum truss structure actuated by three voice coil actuators. A controller is designed to minimize the failure probability for a nominal model of this system. Furthermore, the method for updating the model-based performance calculation given new response data from the system is illustrated.


Three separate topics, each stimulated by experiments, are treated theoretically in this dissertation: isotopic effects of ozone, electron transfer at interfaces, and intramolecular directional electron transfer in a supramolecular system.

The strange mass-independent isotope effect for the enrichment of ozone, which has been a puzzle in the literature for some 20 years, and the equally puzzling unconventional strong mass-dependent effect of individual reaction rate constants are studied as different aspects of a symmetry-driven behavior. A statistical (RRKM-based) theory with a hindered-rotor transition state is used. The individual rate constant ratios of recombination reactions at low pressures are calculated using the theory involving (1) small deviation from the statistical density of states for symmetric isotopomers, and (2) weak collisions for deactivation of the vibrationally excited ozone molecules. The weak collision and partitioning among exit channels play major roles in producing the large unconventional isotope effect in "unscrambled" systems. The enrichment studies reflect instead the non-statistical effect in "scrambled" systems. The theoretical results of low-pressure ozone enrichments and individual rate constant ratios obtained from these calculations are consistent with the corresponding experimental results. The isotopic exchange rate constant for the reaction ^(16)O + ^(18)O^(18)O → ^(16)O^(18)O + ^(18)O provides information on the nature of a variationally determined hindered-rotor transition state using experimental data at 130 K and 300 K. Pressure effects on the recombination rate constant, on the individual rate constant ratios and on the enrichments are also investigated. The theoretical results are consistent with the experimental data. The temperature dependence of the enrichment and rate constant ratios is also discussed, and experimental tests are suggested. The desirability of a more accurate potential energy surface for ozone in the transition state region is also noted.

Electron transfer reactions at semiconductor/liquid interfaces are studied using a tight-binding model for the semiconductors. The slab method and a z-transform method are employed in obtaining the tight-binding electronic structures of semiconductors having surfaces. The maximum electron transfer rate constants at Si/viologen^(2-/+) and InP/Me_(2)Fc^(+/0) interfaces are computed using tight-binding type calculations for the solid and the extended Hückel method for the coupling to the redox agent at the interface. These electron transfer reactions are also studied using a free electron model for the semiconductor and the redox molecule, where Bardeen's method is adapted to calculate the coupling matrix element between the molecular and semiconductor electronic states. The calculated results for the maximum rate constant of electron transfer from the semiconductor bulk states are compared with the experimentally measured values of Lewis and coworkers, and are in reasonable agreement, without adjusting parameters. In the case of the InP/liquid interface, the unusual current vs. applied potential behavior is additionally interpreted, in part, by the presence of surface states.

Photoinduced electron transfer reactions in small supramolecular systems, such as 4-aminonaphthalimide compounds, are interesting in that there are, in principle, two alternative pathways (directions) for the electron transfer. The electron transfer, however, is unidirectional, as deduced from pH-dependent fluorescence quenching studies on different compounds. The roles of the electronic coupling matrix element and of the charges involved in protonation are considered to explain the directionality of the electron transfer and various other results. A related mechanism is proposed to interpret the fluorescence behavior of similar molecules as fluorescent sensors of metal ions.