8 results for "Inequality constraints" in CaltechTHESIS
Abstract:
The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems motivated by power systems are also explored: “flow optimization over a flow network” and “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.
Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix, which describe marginal and conditional dependencies between brain regions, have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about brain connectivity. Due to the electrical properties of the brain, this problem is investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit based only on measurements. In this case, assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when only a limited number of samples is available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso can recover most of the circuit topology if the exact covariance matrix is well-conditioned, but it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work is applied to the resting-state fMRI data of a number of healthy subjects.
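As a minimal illustrative sketch of the estimation step described above (the standard graphical lasso, not the modified algorithm proposed in the thesis), sampled nodal signals can be fed to an off-the-shelf solver; the sample count, node count, and regularization weight below are hypothetical.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Hypothetical data: n samples of nodal signals at p nodes (a stand-in for
# measured node voltages of the black-box circuit).
rng = np.random.default_rng(0)
n, p = 200, 20
X = rng.standard_normal((n, p))

# Standard graphical lasso: estimate a sparse inverse covariance (precision) matrix.
model = GraphicalLasso(alpha=0.1)  # alpha = l1 penalty weight (assumed value)
model.fit(X)

# Nonzero off-diagonal entries of the estimated precision matrix are read as
# edges of the inferred topology.
precision = model.precision_
edges = np.argwhere(np.triu(np.abs(precision) > 1e-6, k=1))
print(f"inferred {len(edges)} edges among {p} nodes")
```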
Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet so that network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes, unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
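For context only, a classical fluid model of the kind this work refines (in the spirit of Kelly's primal algorithm, not the buffered model derived in the thesis) adjusts the rate x_r of each source r as

\dot{x}_r(t) = \kappa_r \Big( w_r - x_r(t) \sum_{l \in r} p_l\Big( \sum_{s:\, l \in s} x_s(t) \Big) \Big),

where p_l(.) is the price (congestion signal) generated by link l. The implicit assumption that every link on route r sees the full source rate x_r is precisely what the buffered model in this work relaxes.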
Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an operating point of a power network that minimizes the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective here is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and for every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
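Schematically, in generic notation not taken from the thesis, the active-power balance at bus i and its over-delivery relaxation can be written as

P_{G_i} - P_{D_i} = \sum_{j \in \mathcal{N}(i)} P_{ij}(V) \quad \longrightarrow \quad P_{G_i} - P_{D_i} \ge \sum_{j \in \mathcal{N}(i)} P_{ij}(V),

so that each load bus may receive at least, rather than exactly, the power it demands.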
Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with a possibly unknown injection, each line has two unknown flows at its ends that are related to each other via a nonlinear function, and all injections and flows must satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is the OPF problem. The results of this work on GNF prove that relaxing the power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.
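A hedged sketch of the relaxation in generic notation (the precise assumptions are those stated above): if line (i, j) couples its two end flows through a convex, monotone function \phi_{ij}, the nonconvex flow equation is replaced by its epigraph,

p_{ji} = \phi_{ij}(p_{ij}) \quad \longrightarrow \quad p_{ji} \ge \phi_{ij}(p_{ij}),

which, together with the box constraints and convex nodal costs, yields a convex program; the claim above is that its optimal injections coincide with those of GNF.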
Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization problem defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real- or complex-valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks is polynomial-time solvable due to the passivity of transmission lines and transformers.
Abstract:
In this thesis, we test the electroweak sector of the Standard Model of particle physics through measurements of the cross section for the simultaneous production of the neutral weak boson Z and a photon γ, and through limits on the anomalous Zγγ and ZZγ triple gauge couplings h3 and h4, with the Z decaying to leptons (electrons and muons). We analyze events collected in proton-proton collisions at a center-of-mass energy of sqrt(s) = 7 TeV, corresponding to an integrated luminosity of 5.0 inverse femtobarns. The analyzed events were recorded by the Compact Muon Solenoid detector at the Large Hadron Collider in 2011.
The production cross section has been measured for hard photons with transverse momentum greater than 15 GeV that are separated from the final-state leptons in the eta-phi plane by Delta R greater than 0.7, for which the sum of the transverse energies of hadrons in a cone of Delta R less than 0.3 around the photon, divided by the transverse energy of the photon, is less than 0.5, and with the invariant mass of the dilepton system greater than 50 GeV. The measured cross section is 5.33 +/- 0.08 (stat.) +/- 0.25 (syst.) +/- 0.12 (lumi.) picobarns. This is compatible with the Standard Model prediction, which includes next-to-leading-order QCD contributions: 5.45 +/- 0.27 picobarns.
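For reference, the separation variable used in these selections is the standard angular distance in the eta-phi plane,

\Delta R = \sqrt{(\Delta\eta)^2 + (\Delta\phi)^2}.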
The measured 95% confidence-level upper limits on the absolute values of the anomalous couplings h3 and h4 are 0.01 and 8.8E-5 for the Zγγ interactions, and 8.6E-3 and 8.0E-5 for the ZZγ interactions. These values are also compatible with the Standard Model, in which these couplings vanish at tree level. They extend the sensitivity of the 2012 results from the ATLAS collaboration, based on 1.02 inverse femtobarns of data, by a factor of 2.4 to 3.1.
Abstract:
Plate tectonics shapes our dynamic planet through the creation and destruction of lithosphere. This work focuses on increasing our understanding of the processes at convergent and divergent boundaries through geologic and geophysical observations at modern plate boundaries. Recent work has shown that the subducting slab in central Mexico is most likely the flattest on Earth, yet there was no consensus about how it originated. The first chapter of this thesis sets out to systematically test all previously proposed mechanisms for slab flattening against the Mexican case. What we have discovered is that there is only one model for which we can find no contradictory evidence. The lack of applicability of the standard mechanisms used to explain flat subduction in the Mexican example led us to question their application globally. The second chapter expands the search for a cause of flat subduction, in both space and time. We focus on the historical record of flat slabs in South America and look for a correlation between the shallowing and steepening of slab segments and the inferred thickness of the subducting oceanic crust. Using plate reconstructions and the assumption that a crustal anomaly formed on a spreading ridge will produce two conjugate features, we recreate the history of subduction along the South American margin and find that there is no correlation between the subduction of bathymetric highs and shallow subduction. These studies demonstrate that a subducting crustal anomaly is neither a sufficient nor a necessary condition for flat slab subduction. The final chapter of this thesis looks at the divergent plate boundary in the Gulf of California. Through geologic reconnaissance mapping and an intensive paleomagnetic sampling campaign, we try to constrain the location and orientation of a widespread volcanic marker unit, the Tuff of San Felipe. Although the resolution of the applied magnetic susceptibility technique proved inadequate to constrain the direction of the pyroclastic flow with high precision, we have been able to detect the tectonic rotation of coherent blocks as well as rotation within blocks.
Abstract:
One of the greatest challenges in science lies in disentangling causality in complex, coupled systems. Nowhere is this better illustrated than in the dynamic interplay between the Earth and life. The early evolution and diversification of animals occurred against a backdrop of global change, yet reconstructing the potential role of the environment in this evolutionary transition is challenging. In the 200 million years from the end-Cryogenian to the Ordovician, enigmatic Ediacaran fauna explored body plans, animals diversified and began to biomineralize, forever changing the ocean's chemical cycles, and the biological community in shallow marine ecosystems transitioned from a microbial one to an animal one.
In the following dissertation, a multi-faceted approach combining macro- and micro-scale analyses is presented that draws on the sedimentology, geochemistry and paleontology of the rocks that span this transition to better constrain the potential environmental changes during this interval.
In Chapter 1, the potential of clumped isotope thermometry in deep time is explored by assessing the importance of burial and diagenesis on the thermometer. Eocene- to Precambrian-aged carbonates from the Sultanate of Oman were analyzed from current burial depths of 350-5850 meters. Two end-member styles of diagenesis independent of burial depth were observed.
Chapters 2, 3 and 4 explore the fallibility of the Ediacaran carbon isotope record and aspects of the sedimentology and geochemistry of the rocks preserving the largest negative carbon isotope excursion on record, the Shuram Excursion. Chapter 2 documents the importance of temperature, fluid composition and mineralogy on the delta 18-O_min record and interrogates the bulk trace metal signal. Chapter 3 explores the spatial variability in delta 13-C recorded in the transgressive Johnnie Oolite and finds a north-to-south trend recording the onset of the excursion. Chapter 4 investigates the nature of seafloor precipitation during this excursion and more broadly. We document the potential importance of microbial respiratory reactions on the carbonate chemistry of the sediment-water interface through time.
Chapter 5 investigates the latest Precambrian sedimentary record in carbonates from the Sultanate of Oman, including how delta 13-C and delta 34-S_CAS vary across depositional and depth gradients. A new model for the correlation of the Buah and Ara formations across Oman is presented. Isotopic results indicate that delta 13-C varies with relative eustatic change and that delta 34-S_CAS may vary in absolute magnitude across Oman.
Chapter 6 investigates the secular rise in delta 18-O_min in the early Paleozoic by using clumped isotope geochemistry on calcitic and phosphatic fossils from the Cambrian and Ordovician. Results do not indicate extreme delta 18-O depletion of seawater and instead suggest warmer equatorial temperatures across the early Paleozoic.
Abstract:
Many engineering applications face the problem of bounding the expected value of a quantity of interest (performance, risk, cost, etc.) that depends on stochastic uncertainties whose probability distribution is not known exactly. Optimal uncertainty quantification (OUQ) is a framework that aims at obtaining the best bound in these situations by explicitly incorporating available information about the distribution. Unfortunately, this often leads to non-convex optimization problems that are numerically expensive to solve.
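Schematically, and in generic notation not taken from the thesis, OUQ computes the sharpest bound consistent with the available information set A,

\mathcal{U}(\mathcal{A}) = \sup_{\mu \in \mathcal{A}} \mathbb{E}_{\mu}[q(X)],

where the supremum ranges over all probability measures (and, in the general framework, response functions) compatible with the known constraints, and q is the quantity of interest.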
This thesis focuses on efficient numerical algorithms for OUQ problems. It begins by investigating several classes of OUQ problems that can be reformulated as convex optimization problems. Conditions on the objective function and information constraints under which a convex formulation exists are presented. Since the size of the optimization problem can become quite large, solutions for scaling up are also discussed. Finally, the capability of analyzing a practical system through such convex formulations is demonstrated by a numerical example of energy storage placement in power grids.
When an equivalent convex formulation is unavailable, it is possible to find a convex problem that provides a meaningful bound for the original problem, also known as a convex relaxation. As an example, the thesis investigates the setting used in Hoeffding's inequality. The naive formulation requires solving a collection of non-convex polynomial optimization problems whose number grows doubly exponentially. After structures such as symmetry are exploited, it is shown that both the number and the size of the polynomial optimization problems can be reduced significantly. Each polynomial optimization problem is then bounded by its convex relaxation using sums-of-squares. These bounds are found to be tight in all the numerical examples tested in the thesis and are significantly better than Hoeffding's bounds.
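For reference, Hoeffding's inequality for independent random variables X_i with a_i \le X_i \le b_i and S_n = \sum_i X_i states that, for t > 0,

\Pr\big(S_n - \mathbb{E}[S_n] \ge t\big) \le \exp\!\left( \frac{-2 t^2}{\sum_{i=1}^n (b_i - a_i)^2} \right);

the convex relaxations computed in the thesis are reported to give significantly sharper bounds in the examples tested.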
Abstract:
This thesis brings together four papers on optimal resource allocation under uncertainty with capacity constraints. The first is an extension of the Arrow-Debreu contingent claim model to a good subject to supply uncertainty for which delivery capacity has to be chosen before the uncertainty is resolved. The second compares an ex-ante contingent claims market to a dynamic market in which capacity is chosen ex-ante and output and consumption decisions are made ex-post. The third extends the analysis to a storable good subject to random supply. Finally, the fourth examines optimal allocation of water under an appropriative rights system.
Abstract:
Despite years of research on low-angle detachments, much about them remains enigmatic. This thesis addresses some of the uncertainty regarding two particular detachments, the Mormon Peak detachment in Nevada and the Heart Mountain detachment in Wyoming and Montana.
Constraints on the geometry and kinematics of emplacement of the Mormon Peak detachment are provided by detailed geologic mapping of the Meadow Valley Mountains, along with an analysis of structural data within the allochthon in the Mormon Mountains. Identifiable structures well suited to constrain the kinematics of the detachment include a newly mapped, Sevier-age monoclinal flexure in the hanging wall of the detachment. This flexure, including the syncline at its base and the anticline at its top, can be readily matched to the base and top of the frontal Sevier thrust ramp, which is exposed in the footwall of the detachment to the east in the Mormon Mountains and Tule Springs Hills. The ~12 km of offset of these structural markers precludes the radial sliding hypothesis for emplacement of the allochthon.
The role of fluids in slip along faults is a widely investigated topic, but the use of carbonate clumped-isotope thermometry to investigate these fluids is new. Fault rocks from within ~1 m of the Mormon Peak detachment, including veins, breccias, gouges, and host rocks, were analyzed for carbon, oxygen, and clumped-isotope compositions. The data indicate that much of the carbonate breccia and gouge material along the detachment is comminuted host rock, as expected. Measurements of vein material indicate that the fluid system is dominated by meteoric water, whose temperature indicates circulation to substantial depths (c. 4 km) in the upper crust near the fault zone.
Slip along the subhorizontal Heart Mountain detachment is particularly enigmatic, and many different mechanisms for failure have been proposed, predominantly involving catastrophic failure. Textural evidence of multiple slip events is abundant, and includes multiple brecciation events and cross-cutting clastic dikes. Footwall deformation is observed in numerous exposures of the detachment. Stylolitic surfaces and alteration textures within and around “banded grains”, previously interpreted to be an indicator of high-temperature fluidization along the fault, suggest that these grains instead formed via low-temperature dissolution and alteration processes. There is abundant textural evidence of the significant role of fluids along the detachment via pressure solution. The process of pressure solution creep may be responsible for enabling multiple slip events on the low-angle detachment, via a local rotation of the stress field.
Clumped-isotope thermometry of fault rocks associated with the Heart Mountain detachment indicates that despite its location on the flanks of a volcano that was active during slip, the majority of carbonate along the Heart Mountain detachment does not record significant heating above ambient temperatures (c. 40-70°C). Instead, cold meteoric fluids infiltrated the detachment breccia, and carbonate precipitated under ambient temperatures controlled by structural depth. Locally, fault gouge does preserve hot temperatures (>200°C), as is observed in both the Mormon Peak detachment and Heart Mountain detachment areas. Samples with very hot temperatures attributable to frictional shear heating are present but rare. They appear to be best preserved in hanging wall structures related to the detachment, rather than along the main detachment.
Evidence is presented for the prevalence of relatively cold, meteoric fluids along both shallow crustal detachments studied, and for protracted histories of slip along both detachments. Frictional heating is evident from both areas, but is a minor component of the preserved fault rock record. Pressure solution is evident, and might play a role in initiating slip on the Heart Mountain fault, and possibly other low-angle detachments.
Abstract:
The study of codes, classically motivated by the need to communicate information reliably in the presence of error, has found new life in fields as diverse as network communication and distributed storage of data, and even has connections to the design of linear measurements used in compressive sensing. But in all contexts, a code typically involves exploiting the algebraic or geometric structure underlying an application. In this thesis, we examine several problems in coding theory and try to gain some insight into the algebraic structure behind them.
The first is the study of the entropy region: the space of all possible vectors of joint entropies that can arise from a set of discrete random variables. Understanding this region is essentially the key to optimizing network codes for a given network. To this end, we employ a group-theoretic method of constructing random variables producing so-called "group-characterizable" entropy vectors, which are capable of approximating any point in the entropy region. We show how small groups can be used to produce entropy vectors which violate the Ingleton inequality, a fundamental bound on entropy vectors arising from the random variables involved in linear network codes. We discuss the suitability of these groups for designing codes for networks which could potentially outperform linear coding.
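For reference, the Ingleton inequality for four random variables, written in mutual-information form, is

I(X_1; X_2) \le I(X_1; X_2 \mid X_3) + I(X_1; X_2 \mid X_4) + I(X_3; X_4);

entropy vectors arising from linear network codes satisfy it, so group-characterizable vectors that violate it point to potential gains from nonlinear coding.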
The second topic we discuss is the design of frames with low coherence, closely related to finding spherical codes in which the codewords are unit vectors spaced out around the unit sphere so as to minimize the magnitudes of their mutual inner products. We show how to build frames by selecting a cleverly chosen set of representations of a finite group to produce a "group code" as described by Slepian decades ago. We go on to reinterpret our method as selecting a subset of rows of a group Fourier matrix, allowing us to study and bound our frames' coherences using character theory. We discuss the usefulness of our frames in sparse signal recovery using linear measurements.
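As an illustrative sketch of the row-selection viewpoint (using the cyclic group and a hypothetical row subset, not any construction from the thesis), one can form a frame from selected rows of a DFT matrix and measure its coherence:

```python
import numpy as np

# Hypothetical parameters: group size N and a chosen subset of DFT rows.
N = 31
rows = [1, 5, 11, 24, 25, 27]
k = len(rows)

# Columns of F are N unit-norm vectors in C^k: a frame built from the
# selected rows of the Fourier matrix of the cyclic group Z_N.
F = np.exp(2j * np.pi * np.outer(rows, np.arange(N)) / N) / np.sqrt(k)

# Coherence = largest |<f_i, f_j>| over distinct pairs of frame vectors.
G = np.abs(F.conj().T @ F)
np.fill_diagonal(G, 0.0)
print(f"coherence of the {k}x{N} frame: {G.max():.4f}")
```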
The final problem we investigate is that of coding with constraints, most recently motivated by the demand for ways to encode large amounts of data using error-correcting codes so that any small loss can be recovered from a small set of surviving data. Most often, this involves using a systematic linear error-correcting code in which each parity symbol is constrained to be a function of some subset of the message symbols. We derive bounds on the minimum distance of such a code based on its constraints, and characterize when these bounds can be achieved using subcodes of Reed-Solomon codes.
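For context, the classical baseline without such constraints is the Singleton bound for an [n, k] code,

d \le n - k + 1,

which Reed-Solomon codes meet with equality; the bounds derived in the thesis account for the parity constraints, and the characterization mentioned above identifies when Reed-Solomon subcodes still attain them.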