16 results for GLOBALLY HYPERBOLIC SPACETIMES in CaltechTHESIS


Relevance: 20.00%

Abstract:

This thesis covers a range of topics in numerical and analytical relativity, centered around introducing tools and methodologies for the study of dynamical spacetimes. The scope of the studies is limited to classical (as opposed to quantum) vacuum spacetimes described by Einstein's general theory of relativity. The numerical works presented here are carried out within the Spectral Einstein Code (SpEC) infrastructure, while analytical calculations extensively utilize Wolfram's Mathematica program.

We begin by examining highly dynamical spacetimes such as binary black hole mergers, which can be investigated using numerical simulations. However, there are difficulties in interpreting the output of such simulations. One difficulty stems from the lack of a canonical coordinate system (henceforth referred to as gauge freedom) and tetrad, against which quantities such as Newman-Penrose Psi_4 (usually interpreted as the gravitational wave part of curvature) should be measured. We tackle this problem in Chapter 2 by introducing a set of geometrically motivated coordinates that are independent of the simulation gauge choice, as well as a quasi-Kinnersley tetrad, also invariant under gauge changes in addition to being optimally suited to the task of gravitational wave extraction.

Another difficulty arises from the need to condense the overwhelming amount of data generated by the numerical simulations. In order to extract physical information in a succinct and transparent manner, one may define a version of gravitational field lines and field strength using spatial projections of the Weyl curvature tensor. The introduction, investigation, and utilization of these quantities constitute the main content of Chapters 3 through 6.

For the last two chapters, we turn to the analytical study of a simpler dynamical spacetime, namely a perturbed Kerr black hole. We will introduce in Chapter 7 a new analytical approximation to the quasi-normal mode (QNM) frequencies, and relate various properties of these modes to wave packets traveling on unstable photon orbits around the black hole. In Chapter 8, we study a bifurcation in the QNM spectrum as the spin a of the black hole approaches extremality.

Relevance: 10.00%

Abstract:

This thesis presents recent research into analytic topics in the classical theory of General Relativity. It is a thesis in two parts. The first part features investigations into the spectrum of perturbed, rotating black holes. These include the study of near-horizon perturbations, leading to a new generic frequency mode for black hole ringdown; a treatment of high-frequency waves using WKB methods for Kerr black holes; and the discovery of a bifurcation of the quasinormal mode spectrum of rapidly rotating black holes. These results represent new discoveries in the field of black hole perturbation theory and rely on additional approximations to the linearized field equations around the background black hole.

The second part of this thesis presents a recently developed method for the visualization of curved spacetimes, using field lines called the tendex and vortex lines of the spacetime. The works presented here both introduce these visualization techniques and explore them in simple situations. These include the visualization of asymptotic gravitational radiation; weak-gravity situations with and without radiation; stationary black hole spacetimes; and some preliminary study of numerically simulated black hole mergers. The second part of the thesis culminates in the investigation of perturbed black holes using these field line methods, which have uncovered new insights into the dynamics of curved spacetime around black holes.

Relevance: 10.00%

Abstract:

Freshwater fish of the genus Apteronotus (family Gymnotidae) generate a weak, high frequency electric field (< 100 mV/cm, 0.5-10 kHz) which permeates their local environment. These nocturnal fish are acutely sensitive to perturbations in their electric field caused by other electric fish, and nearby objects whose impedance is different from the surrounding water. This thesis presents high temporal and spatial resolution maps of the electric potential and field on and near Apteronotus. The fish's electric field is a complicated and highly stable function of space and time. Its characteristics, such as spectral composition, timing, and rate of attenuation, are examined in terms of physical constraints, and their possible functional roles in electroreception.

Temporal jitter of the periodic field is less than 1 µs. However, electrocyte activity is not globally synchronous along the fish's electric organ. The propagation of electrocyte activation down the fish's body produces a rotation of the electric field vector in the caudal part of the fish. This may assist the fish in identifying nonsymmetrical objects, and could also confuse electrosensory predators that try to locate Apteronotus by following its field lines. The propagation also results in a complex spatiotemporal pattern of the EOD potential near the fish. Visualizing the potential on the same and different fish over timescales of several months suggests that it is stable and could serve as a unique signature for individual fish.

Measurements of the electric field were used to calculate the effects of simple objects on the fish's electric field. The shape of the perturbation or "electric image" on the fish's skin is relatively independent of a simple object's size, conductivity, and rostrocaudal location, and therefore could unambiguously determine object distance. The range of electrolocation may depend on both the size of objects and their rostrocaudal location. Only objects with very large dielectric constants cause appreciable phase shifts, and these are strongly dependent on the water conductivity.
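The distance cue described above can be illustrated with a toy calculation (not the thesis's measurements): if a small object is idealized as an induced dipole at perpendicular distance d from a flat "skin", the width of its electric image grows linearly with d, while object size and conductivity only rescale the amplitude. A minimal sketch in Python, with geometry and units invented for illustration:

```python
import numpy as np

x = np.linspace(-10, 10, 4001)     # position along the skin (arbitrary units)

def image_profile(d, strength=1.0):
    """Electric image of a small object idealized as an induced dipole
    at perpendicular distance d; strength lumps size and conductivity."""
    return strength * d / (x**2 + d**2) ** 1.5

def fwhm(profile):
    """Full width at half maximum of an image profile on the x grid."""
    above = x[profile >= profile.max() / 2]
    return above[-1] - above[0]

for d in (1.0, 2.0, 3.0):
    width = fwhm(image_profile(d))
    print(f"d = {d}: width/d = {width / d:.3f}")   # ~1.533 regardless of d
```

The width-to-distance ratio stays near the analytic value 2*sqrt(2^(2/3) - 1) ≈ 1.533 for every d, which is why a width-based cue could report distance unambiguously while amplitude-based cues confound distance with size and conductivity.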

Relevance: 10.00%

Abstract:

Two of the most important questions in mantle dynamics are investigated in three separate studies: the influence of phase transitions (studies 1 and 2), and the influence of temperature-dependent viscosity (study 3).

(1) Numerical modeling of mantle convection in a three-dimensional spherical shell incorporating the two major mantle phase transitions reveals an inherently three-dimensional flow pattern characterized by accumulation of cold downwellings above the 670 km discontinuity, and cylindrical 'avalanches' of upper mantle material into the lower mantle. The exothermic phase transition at 400 km depth reduces the degree of layering. A region of strongly depressed temperature occurs at the base of the mantle. The temperature field is strongly modulated by this partial layering, both locally and in globally averaged diagnostics. Flow penetration is strongly wavelength-dependent, with easy penetration at long wavelengths but strong inhibition at short wavelengths. The amplitude of the geoid is not significantly affected.

(2) Using a simple criterion for the deflection of an upwelling or downwelling by an endothermic phase transition, the scaling of the critical phase buoyancy parameter with the important lengthscales is obtained. The derived trends match those observed in numerical simulations, i.e., deflection is enhanced by (a) shorter wavelengths, (b) narrower up/downwellings, (c) internal heating, and (d) narrower phase loops.

(3) A systematic investigation into the effects of temperature-dependent viscosity on mantle convection has been performed in three-dimensional Cartesian geometry, with a factor of 1000-2500 viscosity variation, and Rayleigh numbers of 10^5-10^7. Enormous differences in model behavior are found, depending on the details of rheology, heating mode, compressibility and boundary conditions. Stress-free boundaries, compressibility, and temperature-dependent viscosity all favor long-wavelength flows, even in internally heated cases. However, small cells are obtained with some parameter combinations. Downwelling plumes and upwelling sheets are possible when viscosity is dependent solely on temperature. Viscous dissipation becomes important with temperature-dependent viscosity.

The sensitivity of mantle flow and structure to these various complexities illustrates the importance of performing mantle convection calculations with rheological and thermodynamic properties matching as closely as possible those of the Earth.

Relevance: 10.00%

Abstract:

Plate tectonics shapes our dynamic planet through the creation and destruction of lithosphere. This work focuses on increasing our understanding of the processes at convergent and divergent boundaries through geologic and geophysical observations at modern plate boundaries. Recent work has shown that the subducting slab in central Mexico is most likely the flattest on Earth, yet there was no consensus about how it originated. The first chapter of this thesis sets out to systematically test all previously proposed mechanisms for slab flattening on the Mexican case. We discovered that there is only one model for which we can find no contradictory evidence. The lack of applicability of the standard mechanisms used to explain flat subduction in the Mexican example led us to question their applicability globally. The second chapter expands the search for a cause of flat subduction, in both space and time. We focus on the historical record of flat slabs in South America and look for a correlation between the shallowing and steepening of slab segments and the inferred thickness of the subducting oceanic crust. Using plate reconstructions and the assumption that a crustal anomaly formed on a spreading ridge will produce two conjugate features, we recreate the history of subduction along the South American margin and find that there is no correlation between the subduction of bathymetric highs and shallow subduction. These studies show that a subducting crustal anomaly is neither a sufficient nor a necessary condition for flat-slab subduction. The final chapter in this thesis looks at the divergent plate boundary in the Gulf of California. Through geologic reconnaissance mapping and an intensive paleomagnetic sampling campaign, we try to constrain the location and orientation of a widespread volcanic marker unit, the Tuff of San Felipe.
Although the resolution of the applied magnetic susceptibility technique proved inadequate to constrain the direction of the pyroclastic flow with high precision, we have been able to detect the tectonic rotation of coherent blocks as well as rotation within blocks.

Relevance: 10.00%

Abstract:

Computational general relativity is a field of study which has reached maturity only within the last decade. This thesis details several studies that elucidate phenomena related to the coalescence of compact object binaries. Chapters 2 and 3 recount work towards developing new analytical tools for visualizing and reasoning about dynamics in strongly curved spacetimes. In both studies, the results employ analogies with the classical theory of electricity and magnetism, first (Ch. 2) in the post-Newtonian approximation to general relativity and then (Ch. 3) in full general relativity, though in the absence of matter sources. In Chapter 4, we examine the topological structure of absolute event horizons during binary black hole merger simulations conducted with the SpEC code. Chapter 6 reports on the progress of the SpEC code in simulating the coalescence of neutron star-neutron star binaries, while Chapter 7 tests the effects of various numerical gauge conditions on the robustness of black hole formation from stellar collapse in SpEC. In Chapter 5, we examine the nature of pseudospectral expansions of non-smooth functions, motivated by the need to simulate the stellar surface in Chapters 6 and 7. In Chapter 8, we study how thermal effects in the nuclear equation of state affect the equilibria and stability of hypermassive neutron stars. Chapter 9 presents supplements to the work in Chapter 8, including an examination of the stability question raised in Chapter 8 in greater mathematical detail.

Relevance: 10.00%

Abstract:

This thesis presents a study of the dynamical, nonlinear interaction of colliding gravitational waves, as described by classical general relativity. It is focused mainly on two fundamental questions: First, what is the general structure of the singularities and Killing-Cauchy horizons produced in the collisions of exactly plane-symmetric gravitational waves? Second, under what conditions will the collisions of almost-plane gravitational waves (waves with large but finite transverse sizes) produce singularities?

In the work on the collisions of exactly-plane waves, it is shown that Killing horizons in any plane-symmetric spacetime are unstable against small plane-symmetric perturbations. It is thus concluded that the Killing-Cauchy horizons produced by the collisions of some exactly plane gravitational waves are nongeneric, and that generic initial data for the colliding plane waves always produce "pure" spacetime singularities without such horizons. This conclusion is later proved rigorously (using the full nonlinear theory rather than perturbation theory), in connection with an analysis of the asymptotic singularity structure of a general colliding plane-wave spacetime. This analysis also proves that asymptotically the singularities created by colliding plane waves are of inhomogeneous-Kasner type; the asymptotic Kasner axes and exponents of these singularities in general depend on the spatial coordinate that runs tangentially to the singularity in the non-plane-symmetric direction.

In the work on collisions of almost-plane gravitational waves, first some general properties of single almost-plane gravitational-wave spacetimes are explored. It is shown that, by contrast with an exact plane wave, an almost-plane gravitational wave cannot have a propagation direction that is Killing; i.e., it must diffract and disperse as it propagates. It is also shown that an almost-plane wave cannot be precisely sandwiched between two null wavefronts; i.e., it must leave behind tails in the spacetime region through which it passes. Next, the occurrence of spacetime singularities in the collisions of almost-plane waves is investigated. It is proved that if two colliding, almost-plane gravitational waves are initially exactly plane-symmetric across a central region of sufficiently large but finite transverse dimensions, then their collision produces a spacetime singularity with the same local structure as in the exact-plane-wave collision. Finally, it is shown that a singularity still forms when the central regions are only approximately plane-symmetric initially. Stated more precisely, it is proved that if the colliding almost-plane waves are initially sufficiently close to being exactly plane-symmetric across a bounded central region of sufficiently large transverse dimensions, then their collision necessarily produces spacetime singularities. In this case, nothing is now known about the local and global structures of the singularities.

Relevance: 10.00%

Abstract:

This dissertation studies the long-term behavior of random Riccati recursions and a mathematical epidemic model. Riccati recursions arise in Kalman filtering: the error covariance matrix of the Kalman filter satisfies a Riccati recursion. Convergence conditions for time-invariant Riccati recursions are well studied. We focus on the time-varying case and assume that the regressor matrix is random, independently and identically distributed according to a given distribution whose probability density function is continuous, supported on the whole space, and decaying faster than any polynomial. We study the geometric convergence of the probability distribution.

We also study the global dynamics of epidemic spread over complex networks for various models. For instance, in the discrete-time Markov chain model, each node is either healthy or infected at any given time. In this setting, the number of states grows exponentially with the size of the network. The Markov chain has a unique stationary distribution, in which all nodes are healthy with probability 1. Since the probability distribution of a Markov chain on a finite state space converges to its stationary distribution, the Markov chain model implies that the epidemic dies out after a long enough time. To analyze the Markov chain model, we study a nonlinear epidemic model whose state at any given time is the vector of marginal infection probabilities of the nodes in the network at that time. Convergence to the origin of the epidemic map implies the extinction of the epidemic. The nonlinear model is upper-bounded by its linearization at the origin. As a result, the origin is the globally stable unique fixed point of the nonlinear model if the linear upper bound is stable. When the linear upper bound is unstable, the nonlinear model has a second fixed point, and we carry out a stability analysis of this second fixed point for both discrete-time and continuous-time models.

Returning to the Markov chain model, we argue that the stability of the linear upper bound for the nonlinear model is strongly related to the extinction time of the Markov chain. We show that a stable linear upper bound is a sufficient condition for fast extinction, and that the probability of survival is bounded by the nonlinear epidemic map.
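As a concrete illustration of the linear-upper-bound argument (a sketch with invented parameters, not the dissertation's model), consider an SIS-type marginal-probability map on a three-node network: when the spectral radius of the linearization at the origin is below one, the infection probabilities contract to the disease-free fixed point.

```python
import numpy as np

# Illustrative SIS-type map on a 3-node complete graph.
# beta = per-contact infection probability, delta = recovery probability.
beta, delta = 0.2, 0.6
A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])

def epidemic_map(p):
    """One step: node i stays infected w.p. (1 - delta), or is newly
    infected by at least one neighbor (independence approximation)."""
    p_no_infection = np.prod(1 - beta * A * p, axis=1)
    return (1 - delta) * p + (1 - p) * (1 - p_no_infection)

# Linearization at the origin gives the upper bound p_{t+1} <= M p_t.
M = (1 - delta) * np.eye(3) + beta * A
rho = max(abs(np.linalg.eigvals(M)))       # spectral radius: 0.8 < 1 here

p = np.full(3, 0.9)                        # start heavily infected
for _ in range(200):
    p = epidemic_map(p)

print(f"rho(M) = {rho:.2f}, final infection probabilities ~ {p.max():.2e}")
```

With rho(M) < 1 the iterates decay geometrically to the origin; raising beta until rho(M) > 1 instead drives the map toward a second, endemic fixed point, matching the dichotomy described above.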

Relevance: 10.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioral theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
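A toy version of the edge-cutting idea can be sketched in a few lines. The hypotheses, classes, priors, and tests below are invented, and the sketch is noiseless and deterministic; the actual BROAD procedure handles noisy responses through the adaptively submodular EC2 objective.

```python
import numpy as np
from itertools import combinations

# Invented setup: 4 hypotheses drawn from 2 theory classes, each
# deterministically predicting a binary choice on each of 5 tests.
predictions = np.array([[0, 1, 0, 1, 1],   # hypothesis 0, class A
                        [0, 1, 1, 0, 1],   # hypothesis 1, class A
                        [1, 0, 0, 1, 0],   # hypothesis 2, class B
                        [1, 1, 0, 0, 0]])  # hypothesis 3, class B
classes = ['A', 'A', 'B', 'B']
prior = np.full(4, 0.25)

def ec2_score(test, alive):
    """Prior weight of cross-class edges this test is guaranteed to cut
    (noiseless case: differing predictions always separate the pair)."""
    score = 0.0
    for i, j in combinations(alive, 2):
        if classes[i] != classes[j] and predictions[i, test] != predictions[j, test]:
            score += prior[i] * prior[j]
    return score

alive = [0, 1, 2, 3]
true_h = 2                       # ground truth for the simulated subject
while len({classes[i] for i in alive}) > 1:
    test = max(range(5), key=lambda t: ec2_score(t, alive))
    outcome = predictions[true_h, test]          # simulated noiseless response
    alive = [i for i in alive if predictions[i, test] == outcome]

print("surviving class:", {classes[i] for i in alive})   # prints: {'B'}
```

Each test is scored by the prior-weighted cross-class hypothesis pairs it is guaranteed to separate, and testing stops once a single theory class survives.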

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment, and sequentially presented choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. they could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting; hyperbolic discounting; "present bias" models, namely quasi-hyperbolic (α, β) discounting and fixed-cost discounting; and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
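The qualitative distinction between these families is easy to see numerically. In the sketch below the parameter values are purely illustrative (not estimates from the experiment): exponential discounting has a constant one-period discount ratio, while hyperbolic and present-bias curves discount the near future most steeply.

```python
import numpy as np

t = np.arange(11)                                  # delay (e.g., weeks)

exponential = 0.9 ** t                             # delta^t
hyperbolic = 1 / (1 + 0.3 * t)                     # 1 / (1 + k t)
quasi_hyperbolic = np.where(t == 0, 1.0, 0.7 * 0.95 ** t)  # "present bias"

# One-period discount ratio D(t+1)/D(t): constant for exponential,
# rising toward 1 for hyperbolic (impatience concentrated at short delays).
ratio = lambda d: d[1:] / d[:-1]
print(ratio(exponential)[:3])   # [0.9 0.9 0.9]
print(ratio(hyperbolic)[:3])
print(ratio(quasi_hyperbolic)[:3])
```

The rising discount ratio is what generates preference reversals (temporal choice inconsistency) under hyperbolic and present-bias discounting, while exponential discounting is time-consistent.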

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioral theories in the "wild". We pay attention to prospect theory, which emerged as the dominant theory in our lab experiments of risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, demand for it will be greater than its price elasticity alone would explain. Even more importantly, when the item is no longer discounted, demand for its close substitutes will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.

In future work, BROAD could be applied widely to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance: 10.00%

Abstract:

Understanding the roles of microorganisms in environmental settings by linking phylogenetic identity to metabolic function is a key challenge in delineating their broad-scale impact and functional diversity throughout the biosphere. This work addresses and extends such questions in the context of marine methane seeps, which represent globally relevant conduits for an important greenhouse gas. Through the application and development of a range of culture-independent tools, novel habitats for methanotrophic microbial communities were identified, established settings were characterized in new ways, and potential past conditions amenable to methane-based metabolism were proposed. Biomass abundance and metabolic activity measures – both catabolic and anabolic – demonstrated that authigenic carbonates associated with seep environments retain methanotrophic activity, not only within high-flow seep settings but also in adjacent locations exhibiting no visual evidence of chemosynthetic communities. Across this newly extended habitat, microbial diversity surveys revealed archaeal assemblages that were shaped primarily by seepage activity level and bacterial assemblages influenced more substantially by physical substrate type. In order to reliably measure methane consumption rates in these and other methanotrophic settings, a novel method was developed that traces deuterium atoms from the methane substrate into aqueous medium and uses empirically established scaling factors linked to radiotracer rate techniques to arrive at absolute methane consumption values. Stable isotope probing metaproteomic investigations exposed an array of functional diversity both within and beyond methane oxidation- and sulfate reduction-linked metabolisms, identifying components of each proposed enzyme in both pathways. A core set of commonly occurring unannotated protein products was identified as promising targets for future biochemical investigation. 
Physicochemical and energetic principles governing anaerobic methane oxidation were incorporated into a reaction transport model that was applied to putative settings on ancient Mars. Many conditions enabled exergonic model reactions, marking the metabolism and its attendant biomarkers as potentially promising targets for future astrobiological investigations. This set of inter-related investigations targeting methane metabolism extends the known and potential habitat of methanotrophic microbial communities and provides a more detailed understanding of their activity and functional diversity.

Relevance: 10.00%

Abstract:

This thesis aims at enhancing our fundamental understanding of the East Asian summer monsoon (EASM), and mechanisms implicated in its climatology in present-day and warmer climates. We focus on the most prominent feature of the EASM, i.e., the so-called Meiyu-Baiu (MB), which is characterized by a well-defined, southwest to northeast elongated quasi-stationary rainfall band, spanning from eastern China to Japan and into the northwestern Pacific Ocean in June and July.

We begin with an observational study of the energetics of the MB front in present-day climate. Analyses of the moist static energy (MSE) budget of the MB front indicate that horizontal advection of moist enthalpy, primarily of dry enthalpy, sustains the front in a region of otherwise negative net energy input into the atmospheric column. A decomposition of the horizontal dry enthalpy advection into mean, transient, and stationary eddy fluxes identifies the longitudinal thermal gradient due to zonal asymmetries and the meridional stationary eddy velocity as the most influential factors determining the pattern of horizontal moist enthalpy advection. Numerical simulations in which the Tibetan Plateau (TP) is either retained or removed show that the TP influences the stationary enthalpy flux, and hence the MB front, primarily by changing the meridional stationary eddy velocity, with reinforced southerly wind on the northwestern flank of the north Pacific subtropical high (NPSH) over the MB region and northerly wind to its north. Changes in the longitudinal thermal gradient are mainly confined to the near downstream of the TP, with the resulting changes in zonal warm air advection having a lesser impact on the rainfall in the extended MB region.
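The decomposition used in this budget analysis, splitting the zonally and temporally averaged flux into mean, stationary-eddy, and transient-eddy parts, can be checked on synthetic data. In the sketch below the fields are random stand-ins with hypothetical shapes; the identity [bar(vT)] = [bar v][bar T] + [bar v* bar T*] + [bar(v'T')] holds exactly by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in (time, longitude) fields; shapes and data are hypothetical.
v = rng.normal(size=(240, 128))                           # meridional wind
T = rng.normal(size=(240, 128)) + np.sin(np.linspace(0, 2 * np.pi, 128))

tmean = lambda f: f.mean(axis=0)      # time mean (overbar)
zmean = lambda f: f.mean(axis=-1)     # zonal mean (brackets)

vb, Tb = tmean(v), tmean(T)
total = zmean(tmean(v * T))                                  # [bar(vT)]
mean_part = zmean(vb) * zmean(Tb)                            # [bar v][bar T]
stationary = zmean((vb - zmean(vb)) * (Tb - zmean(Tb)))      # [bar v* bar T*]
transient = zmean(tmean((v - vb) * (T - Tb)))                # [bar(v'T')]

# The three parts reconstruct the total flux exactly (to rounding).
print(np.isclose(total, mean_part + stationary + transient))  # True
```

In the analysis above it is the stationary-eddy term, set by the meridional stationary eddy velocity and the zonal asymmetry of the thermal field, that dominates the moist enthalpy advection sustaining the MB front.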

Similar mechanisms are shown to be implicated in present-climate simulations from the Coupled Model Intercomparison Project Phase 5 (CMIP5) models. We find that the spatial distribution of the EASM precipitation simulated by different models is highly correlated with the meridional stationary eddy velocity. The correlation becomes more robust when energy fluxes into the atmospheric column are considered, consistent with the observational analyses. The spread in the area-averaged rainfall amount can be partially explained by the spread in the simulated globally averaged precipitation, with the rest primarily due to the lower-level meridional wind convergence. Clear relationships between precipitation and zonal and meridional eddy velocities are observed.

Finally, the response of the EASM to greenhouse gas forcing is investigated at different time scales in CMIP5 model simulations. The reduction of radiative cooling and the increase in continental surface temperature occur much more rapidly than changes in sea surface temperatures (SSTs). Without changes in SSTs, the rainfall in the monsoon region decreases (increases) over ocean (land) in most models. On longer time scales, as SSTs increase, the rainfall changes are opposite. The total response to atmospheric CO2 forcing and the subsequent SST warming is a large (modest) increase in rainfall over ocean (land) in the EASM region. Dynamic changes play an important role in setting up the spatial pattern of precipitation changes, despite significant contributions from the thermodynamic component. Rainfall anomalies over East China are a direct consequence of local land-sea contrast, while changes in the larger-scale oceanic rainfall band are closely associated with the displacement of the larger-scale NPSH. Numerical simulations show that topography and SST patterns play an important role in rainfall changes in the EASM region.

Relevance: 10.00%

Abstract:

The problem of global optimization of M phase-incoherent signals in N complex dimensions is formulated. Then, by using the geometric approach of Landau and Slepian, conditions for optimality are established for N = 2 and the optimal signal sets are determined for M = 2, 3, 4, 6, and 12.

The method is the following: the signals are assumed to be equally probable and to have equal energy, and thus are represented by points $\dot{s}_i$, $i = 1, 2, \ldots, M$, on the unit sphere $S_1$ in $\mathbb{C}^N$. If $W_{ik}$ is the halfspace determined by $\dot{s}_i$ and $\dot{s}_k$ and containing $\dot{s}_i$, i.e. $W_{ik} = \{\dot{r} \in \mathbb{C}^N : |\langle \dot{r}, \dot{s}_i \rangle| \ge |\langle \dot{r}, \dot{s}_k \rangle|\}$, then the maximum likelihood decision regions $R_i = \bigcap_{k \ne i} W_{ik}$, $i = 1, 2, \ldots, M$, partition $S_1$. For additive complex Gaussian noise $\dot{n}$ and a received signal $\dot{r} = \dot{s}_i e^{i\Theta} + \dot{n}$, where $\Theta$ is uniformly distributed over $[0, 2\pi]$, the probability of correct decoding is

$$P_C = \frac{1}{\pi^N} \int_0^\infty r^{2N-1} e^{-(r^2+1)}\, U(r)\, dr, \qquad U(r) = \frac{1}{M} \sum_{i=1}^M \int_{R_i \cap S_1} I_0\!\left(2r\,|\langle \dot{s}, \dot{s}_i \rangle|\right) d\sigma(\dot{s}), \qquad r = \|\dot{r}\|.$$

For $N = 2$, it is proved that

$$U(r) \le \int_{C_\alpha} I_0\!\left(2r\,|\langle \dot{s}, \dot{s}_i \rangle|\right) d\sigma(\dot{s}) - \frac{2K}{M}\, h\!\left(\frac{1}{2K}\bigl[M\sigma(C_\alpha) - \sigma(S_1)\bigr]\right),$$

where $C_\alpha = \{\dot{s} \in S_1 : |\langle \dot{s}, \dot{s}_i \rangle| \ge \alpha\}$, $K$ is the total number of boundaries of the net on $S_1$ determined by the decision regions, and $h$ is the strictly increasing, strictly convex function of $\sigma(C_\alpha \cap W)$ (where $W$ is a halfspace not containing $\dot{s}_i$) given by $h = \int_{C_\alpha \cap W} I_0\!\left(2r\,|\langle \dot{s}, \dot{s}_i \rangle|\right) d\sigma(\dot{s})$. Conditions for equality are established, and these give rise to the globally optimal signal sets for $M = 2, 3, 4, 6$, and $12$.
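The decoding probability $P_C$ can also be estimated by direct simulation. The sketch below uses an arbitrary signal set (two orthogonal signals in $\mathbb{C}^2$, not the optimal sets derived in the thesis) and a hypothetical noise level, with noncoherent maximum-likelihood (largest-envelope) detection.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, trials = 2, 2, 200_000

S = np.eye(N, dtype=complex)[:M]     # two orthogonal unit signals in C^2
sigma = 0.5                          # noise std per real dimension (arbitrary)

sent = rng.integers(M, size=trials)
theta = rng.uniform(0, 2 * np.pi, size=trials)     # unknown carrier phase
noise = sigma * (rng.normal(size=(trials, N)) + 1j * rng.normal(size=(trials, N)))
r = S[sent] * np.exp(1j * theta)[:, None] + noise

# Noncoherent ML decision: pick the signal with the largest envelope |<r, s_k>|.
envelopes = np.abs(r @ S.conj().T)
decided = envelopes.argmax(axis=1)
print("estimated P_C:", (decided == sent).mean())
```

Because the decision depends only on the envelopes $|\langle \dot{r}, \dot{s}_k \rangle|$, the random carrier phase $\Theta$ drops out, which is the phase-incoherence at the heart of the problem.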

Relevance: 10.00%

Abstract:

Jet noise reduction is an important goal within both commercial and military aviation. Although large-scale numerical simulations are now able to simultaneously compute turbulent jets and their radiated sound, low-cost, physically motivated models are needed to guide noise-reduction efforts. A particularly promising modeling approach centers around certain large-scale coherent structures, called wavepackets, that are observed in jets and their radiated sound. The typical approach to modeling wavepackets is to approximate them as linear modal solutions of the Euler or Navier-Stokes equations linearized about the long-time mean of the turbulent flow field. The near-field wavepackets obtained from these models show compelling agreement with those educed from experimental and simulation data for both subsonic and supersonic jets, but the acoustic radiation is severely under-predicted in the subsonic case. This thesis contributes to two aspects of these models. First, two new solution methods are developed that can be used to efficiently compute wavepackets and their acoustic radiation, reducing the computational cost of the model by more than an order of magnitude. The new techniques are spatial integration methods and constitute a well-posed, convergent alternative to the frequently used parabolized stability equations. Using concepts related to well-posed boundary conditions, the methods are formulated for general hyperbolic equations and thus have potential applications in many fields of physics and engineering. Second, the nonlinear and stochastic forcing of wavepackets is investigated with the goal of identifying and characterizing the missing dynamics responsible for the under-prediction of acoustic radiation by linear wavepacket models for subsonic jets.
Specifically, we use ensembles of large-eddy-simulation flow and force data along with two data decomposition techniques to educe the actual nonlinear forcing experienced by wavepackets in a Mach 0.9 turbulent jet. Modes with high energy are extracted using proper orthogonal decomposition, while high gain modes are identified using a novel technique called empirical resolvent-mode decomposition. In contrast to the flow and acoustic fields, the forcing field is characterized by a lack of energetic coherent structures. Furthermore, the structures that do exist are largely uncorrelated with the acoustic field. Instead, the forces that most efficiently excite an acoustic response appear to take the form of random turbulent fluctuations, implying that direct feedback from nonlinear interactions amongst wavepackets is not an essential noise source mechanism. This suggests that the essential ingredients of sound generation in high Reynolds number jets are contained within the linearized Navier-Stokes operator rather than in the nonlinear forcing terms, a conclusion that has important implications for jet noise modeling.
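The mode-extraction step described above rests on proper orthogonal decomposition. The sketch below is a minimal pure-Python illustration of POD: the leading mode is the dominant eigenvector of the spatial correlation matrix built from snapshots, recovered here by power iteration. The toy data (`phi1`, `phi2`, the amplitude sequences) are invented for illustration and are not taken from the jet simulations; real use would apply an SVD to LES snapshot matrices.

```python
# POD sketch: the leading POD mode is the dominant eigenvector of the spatial
# correlation matrix C = X X^T assembled from flow snapshots (columns of X).
# Toy data for illustration only.

def matvec(C, v):
    return [sum(C[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

def normalize(v):
    n = sum(x * x for x in v) ** 0.5
    return [x / n for x in v]

def leading_pod_mode(snapshots, iters=100):
    """Dominant POD mode via power iteration on C = X X^T."""
    n = len(snapshots[0])
    C = [[sum(s[i] * s[j] for s in snapshots) for j in range(n)]
         for i in range(n)]
    v = normalize([1.0] + [0.0] * (n - 1))   # arbitrary start vector
    for _ in range(iters):
        v = normalize(matvec(C, v))
    return v

# Toy snapshots: a dominant coherent structure phi1 plus a weak second mode.
phi1 = [0.5, 0.5, 0.5, 0.5]
phi2 = [0.5, -0.5, 0.5, -0.5]
amps1 = [3.0, 4.0, 5.0, 4.0, 3.0]
amps2 = [0.1, -0.2, 0.1, 0.0, 0.1]
snaps = [[a * p1 + b * p2 for p1, p2 in zip(phi1, phi2)]
         for a, b in zip(amps1, amps2)]

mode = leading_pod_mode(snaps)
# The extracted mode should align almost perfectly with phi1.
alignment = abs(sum(m * p for m, p in zip(mode, phi1)))
```

The "high gain" counterpart, empirical resolvent-mode decomposition, replaces the energy-based correlation matrix with an input-output (forcing-to-response) operator, but the eigenvector machinery is analogous.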

Relevance:

10.00%

Publisher:

Abstract:

We are at the cusp of a historic transformation of both communication systems and electricity systems. This creates challenges as well as opportunities for the study of networked systems. Problems in these systems typically involve a huge number of endpoints that require intelligent coordination in a distributed manner. In this thesis, we develop models, theories, and scalable distributed optimization and control algorithms to overcome these challenges.

This thesis focuses on two specific areas: multi-path TCP (Transmission Control Protocol) and electricity distribution system operation and control. Multi-path TCP (MP-TCP) is a TCP extension that allows a single data stream to be split across multiple paths. MP-TCP has the potential to greatly improve the reliability as well as the efficiency of communication devices. We propose a fluid model for a large class of MP-TCP algorithms and identify design criteria that guarantee the existence, uniqueness, and stability of the system equilibrium. We clarify how algorithm parameters impact TCP-friendliness, responsiveness, and window oscillation, and demonstrate an inevitable tradeoff among these properties. We discuss the implications of these properties for the behavior of existing algorithms and motivate a new algorithm, Balia (balanced linked adaptation), which generalizes existing algorithms and strikes a good balance among TCP-friendliness, responsiveness, and window oscillation. We have implemented Balia in the Linux kernel and use our prototype to compare it with existing MP-TCP algorithms.
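The fluid-model viewpoint can be illustrated with a small sketch. The coupled window dynamics below follow a generic coupled-congestion-control form (window increase inversely proportional to the total window, per-path multiplicative decrease at that path's loss rate); the specific update rule, parameters, and loss probabilities are illustrative assumptions, not the Balia equations from the thesis.

```python
# Toy fluid model of a coupled multipath congestion controller: path r grows
# its window in proportion to 1/W_total (the coupling) and backs off by
# w_r / 2 at its own loss rate p_r.  Generic LIA-style form for illustration;
# NOT the Balia update from the thesis.

def simulate(p, w0, dt=0.01, steps=100_000, rtt=1.0):
    """Euler-integrate dw_r/dt = x_r * (1/W - p_r * w_r / 2), x_r = w_r / rtt."""
    w = list(w0)
    for _ in range(steps):
        W = sum(w)
        w = [wr + dt * (wr / rtt) * (1.0 / W - pr * wr / 2.0)
             for wr, pr in zip(w, p)]
    return w

p = [0.01, 0.02]              # path 2 is twice as lossy
w = simulate(p, [10.0, 10.0])
W = sum(w)
# At equilibrium 1/W = p_r * w_r / 2, i.e. w_r = 2 / (p_r * W): the less
# lossy path carries the larger window, as expected of load balancing.
```

The fixed point of the integration matches the analytical equilibrium condition, which is the kind of existence/uniqueness/stability question the fluid model makes tractable.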

Our second focus is on designing computationally efficient algorithms for electricity distribution system operation and control. First, we develop efficient algorithms for feeder reconfiguration in distribution networks. The feeder reconfiguration problem chooses the on/off status of the switches in a distribution network in order to minimize a certain cost, such as power loss. It is a mixed-integer nonlinear program and hence hard to solve. We propose a heuristic algorithm based on the recently developed convex relaxation of the optimal power flow problem. The algorithm is efficient and successfully computes an optimal configuration on all networks that we have tested. Moreover, we prove that the algorithm solves the feeder reconfiguration problem optimally under certain conditions. We also propose an even faster algorithm that incurs an optimality loss of less than 3% on the test networks.
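To build intuition for what the on/off decision trades off, the sketch below brute-forces a toy 3-bus ring: opening any one switch makes the network radial, and we pick the opening that minimizes I²R loss. The network data are invented, and exhaustive search is used only for illustration; the thesis's heuristic is based on convex relaxation precisely because enumeration does not scale.

```python
# Feeder reconfiguration intuition on a toy 3-bus ring (bus 0 = substation).
# Opening one switch yields a radial tree; we choose the opening with minimum
# I^2 R loss.  Brute force on invented data, for illustration only.

lines = {            # line: resistance (per unit)
    (0, 1): 0.1,
    (1, 2): 0.2,
    (0, 2): 0.1,
}
loads = {1: 1.0, 2: 1.0}     # per-unit load currents at the non-root buses

def loss_if_open(open_line):
    closed = [l for l in lines if l != open_line]
    adj = {b: [] for b in range(3)}
    for a, b in closed:
        adj[a].append(b)
        adj[b].append(a)
    # BFS tree rooted at the substation.
    parent = {0: None}
    queue = [0]
    while queue:
        u = queue.pop(0)
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    if len(parent) < 3:
        return float("inf")          # a bus is disconnected: infeasible
    # Push each load current up to the root, accumulating flow per line.
    flow = {l: 0.0 for l in closed}
    for bus, i_load in loads.items():
        u = bus
        while parent[u] is not None:
            flow[tuple(sorted((u, parent[u])))] += i_load
            u = parent[u]
    return sum(lines[l] * f * f for l, f in flow.items())

best = min(lines, key=loss_if_open)  # -> opening (1, 2) gives loss 0.2
```

Here opening the tie between the two load buses is optimal because each load is then fed directly from the substation over a low-resistance line, which is the kind of structure a relaxation-based heuristic exploits without enumerating configurations.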

Second, we develop efficient distributed algorithms that solve the optimal power flow (OPF) problem on distribution networks. The OPF problem determines a network operating point that minimizes a certain objective, such as generation cost or power loss. Traditionally, OPF is solved in a centralized manner. With the increasing penetration of volatile renewable energy resources in distribution systems, we need faster, distributed solutions for real-time feedback control. This is difficult because the power flow equations are nonlinear and Kirchhoff's laws couple the entire network. We propose solutions for both balanced and unbalanced radial distribution networks. They exploit recent results showing that a globally optimal solution of OPF over a radial network can be obtained through a second-order cone program (SOCP) or semidefinite program (SDP) relaxation. Our distributed algorithms are based on the alternating direction method of multipliers (ADMM), but unlike standard ADMM-based distributed OPF algorithms, which require solving optimization subproblems using iterative methods, the proposed solutions exploit problem structure to greatly reduce computation time. Specifically, for balanced networks, our decomposition allows us to derive closed-form solutions for these subproblems, speeding up convergence by a factor of 1000 in simulations. For unbalanced networks, the subproblems reduce to either closed-form solutions or eigenvalue problems whose size remains constant as the network scales up, reducing computation time by a factor of 100 compared with iterative methods.
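The benefit of closed-form ADMM subproblems can be seen on a toy consensus problem. Below, each agent minimizes a scalar quadratic, so the per-agent x-update is a one-line formula instead of an inner iterative solve; this is the same structural idea, though the objective and updates here are illustrative stand-ins rather than the OPF decomposition from the thesis.

```python
# Consensus ADMM for  minimize  sum_i (1/2) * (x - a_i)^2.
# Each local x-update is closed form (no inner solver), which is exactly the
# property exploited for the OPF subproblems.  Toy objective for illustration.

def consensus_admm(a, rho=1.0, iters=200):
    n = len(a)
    z = 0.0                        # global consensus variable
    u = [0.0] * n                  # scaled dual variables
    for _ in range(iters):
        # x-update: argmin_x (1/2)(x - a_i)^2 + (rho/2)(x - z + u_i)^2
        x = [(ai + rho * (z - ui)) / (1.0 + rho) for ai, ui in zip(a, u)]
        z = sum(xi + ui for xi, ui in zip(x, u)) / n       # z-update
        u = [ui + xi - z for ui, xi in zip(u, x)]          # dual update
    return z

z = consensus_admm([1.0, 2.0, 6.0])    # optimum is the mean, 3.0
```

Each iteration costs a handful of arithmetic operations per agent; replacing the x-update with a generic iterative solver would multiply that cost by the solver's iteration count, which is the gap behind the reported speedups.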

Relevance:

10.00%

Publisher:

Abstract:

The centralized paradigm of a single controller and a single plant, upon which modern control theory is built, is no longer applicable to modern cyber-physical systems of interest, such as the power grid, software-defined networks, or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange, and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and, perhaps not surprisingly, makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable, with the Witsenhausen counterexample being a famous instance of this phenomenon. Spurred on by this discouraging result, a concerted 40-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effect of their control actions propagates through the plant, then the resulting distributed optimal control problem admits a convex formulation.

The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to renewed enthusiasm in the controller synthesis community, resulting in a rich set of results over the past decade. The contributions of this thesis can be seen as part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control and fall under three broad categories: controller synthesis, architecture design, and system identification.

We begin by providing two novel controller synthesis algorithms. The first is a solution to the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two-player LQR state-feedback problem with varying delays. Accommodating varying delays represents an important first step in combining distributed optimal control theory with the area of Networked Control Systems, which considers lossy channels in the feedback loop.

Our next set of results concerns controller architecture design. When designing controllers for large-scale systems, the architectural aspects of the controller, such as the placement of actuators, sensors, and the communication links between them, can no longer be taken as given -- indeed, the task of designing this architecture is now as important as the design of the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, a unifying, computationally tractable approach, based on the model matching framework and atomic norm regularization, for the simultaneous co-design of a structured optimal controller and the architecture needed to implement it.

Our final result is a contribution to distributed system identification. Traditional system identification techniques such as subspace identification are not computationally scalable, and destroy rather than leverage any a priori information about the system's interconnection structure. We argue that in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end, we propose a promising heuristic for identifying the dynamics of a subsystem that is still connected to a large system. We exploit the fact that the transfer function of the local dynamics is low-order but full-rank, while the transfer function of the global dynamics is high-order but low-rank, to formulate this separation task as a nuclear norm minimization problem. Finally, we conclude with a brief discussion of future research directions, with a particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control, and optimization in layered architectures.
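The separation task described above can be sketched as a convex program. The formulation below is a hypothetical illustration of how a low-order/low-rank split is typically posed with a nuclear norm: the symbols $G_{\mathrm{loc}}$, $G_{\mathrm{glob}}$, the frequency samples $z_k$, and the constraint set are our own notational choices, and the thesis's exact formulation may differ.

```latex
% Hypothetical sketch: given frequency-response samples H(z_k) of the
% interconnected system, split them into a low-order local part and a
% low-rank global part by penalizing the nuclear norm (sum of singular
% values) of the global-response samples.
\begin{aligned}
\min_{G_{\mathrm{loc}},\; G_{\mathrm{glob}}} \quad
  & \sum_{k=1}^{N} \left\| G_{\mathrm{glob}}(z_k) \right\|_{*} \\
\text{subject to} \quad
  & G_{\mathrm{loc}}(z_k) + G_{\mathrm{glob}}(z_k) = H(z_k),
    \quad k = 1, \dots, N, \\
  & G_{\mathrm{loc}} \ \text{constrained to a fixed low order.}
\end{aligned}
```

The nuclear norm is the standard convex surrogate for rank, so minimizing it pushes the global component toward the low-rank structure the thesis identifies, leaving the full-rank residual to be explained by the local dynamics.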