899 results for projection onto convex sets


Relevance: 20.00%

Abstract:

Annual estimates of the number of purse-seine sets made on tunas associated with dolphins are needed to estimate the total number of dolphins killed incidentally by the eastern Pacific tuna fishery. The most complete source of data, the Inter-American Tropical Tuna Commission's logbook database, was used in this study. In the logbook database, most sets are identified as either associated or not associated with dolphins; some sets are not identified in this respect. However, the number of these unidentified sets that were associated with dolphins has been estimated by stratifying the logbook data according to whether or not any tuna were caught, whether or not the nearest identified set was associated with dolphins, and the distance to the nearest identified set. Most of the unidentified sets fell in strata characterized by a proportion of sets on tuna associated with dolphins that was lower than the overall unstratified proportion. Landings data were used to estimate the number of sets on tunas associated with dolphins from fishing trips not included in the logbook database. (PDF contains 73 pages.)


Relevance: 20.00%

Abstract:

Demixing is the task of identifying multiple signals given only their sum and prior information about their structures. Examples of demixing problems include (i) separating a signal that is sparse with respect to one basis from a signal that is sparse with respect to a second basis; (ii) decomposing an observed matrix into low-rank and sparse components; and (iii) identifying a binary codeword with impulsive corruptions. This thesis describes and analyzes a convex optimization framework for solving an array of demixing problems.

Our framework includes a random orientation model for the constituent signals that ensures the structures are incoherent. This work introduces a summary parameter, the statistical dimension, that reflects the intrinsic complexity of a signal. The main result indicates that the difficulty of demixing under this random model depends only on the total complexity of the constituent signals involved: demixing succeeds with high probability when the sum of the complexities is less than the ambient dimension; otherwise, it fails with high probability.

The fact that a phase transition between success and failure occurs in demixing is a consequence of a new inequality in conic integral geometry. Roughly speaking, this inequality asserts that a convex cone behaves like a subspace whose dimension is equal to the statistical dimension of the cone. When combined with a geometric optimality condition for demixing, this inequality provides precise quantitative information about the phase transition, including the location and width of the transition region.
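
The statistical dimension has a direct Monte Carlo interpretation: δ(C) = E‖Π_C(g)‖², the expected squared norm of the projection of a standard Gaussian vector onto the cone C. A minimal numpy sketch (the choice of cone, dimension, and sample count is illustrative, not taken from the thesis) checks the closed-form value δ(R^d_+) = d/2 for the nonnegative orthant:

```python
import numpy as np

def statistical_dimension(project, d, n_samples=20000, seed=0):
    """Monte Carlo estimate of delta(C) = E ||Pi_C(g)||^2 for g ~ N(0, I_d)."""
    rng = np.random.default_rng(seed)
    g = rng.standard_normal((n_samples, d))
    return np.mean(np.sum(project(g) ** 2, axis=1))

# Nonnegative orthant: projection just clips negative coordinates to zero.
d = 100
delta = statistical_dimension(lambda g: np.maximum(g, 0.0), d)
print(delta)  # close to d / 2 = 50, the known closed form for R^d_+
```

For a subspace the same estimator returns the ordinary dimension, which is the sense in which a cone "behaves like a subspace" of dimension δ(C).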

Relevance: 20.00%

Abstract:

This thesis describes the design, construction and performance of a high-pressure xenon gas time projection chamber (TPC) for the study of double beta decay in ^(136)Xe. When operating at 5 atm, the TPC can accommodate 28 moles of 60%-enriched ^(136)Xe. The TPC has operated as a detector at Caltech since 1986. It is capable of reconstructing a charged-particle trajectory and can easily distinguish between different kinds of charged particles. A gas purification and xenon gas recovery system were developed. The electronics for the 338 channels of readout was developed along with a data acquisition system. Currently, the detector is being prepared at the University of Neuchâtel for installation in the low-background laboratory situated in the St. Gotthard tunnel, Switzerland. In one year of runtime the detector should be sensitive to a 0ν lifetime of the order of 10^(24) y, which corresponds to a neutrino mass in the range 0.3 to 3.3 eV.

Relevance: 20.00%

Abstract:

The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems are also explored, both motivated by power systems. The first is “flow optimization over a flow network” and the second is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.

Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix (describing marginal and conditional dependencies between brain regions, respectively) have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about brain connectivity. Due to the electrical properties of the brain, this problem is investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit from measurements alone. In this case, by assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when only a limited number of samples is available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso can recover most of the circuit topology if the exact covariance matrix is well-conditioned, but it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work is applied to the resting-state fMRI data of a number of healthy subjects.
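
As a toy illustration of the circuit result (the four-node cycle, the grounded node, and the Gaussian voltage model below are my own choices for the sketch, not the experiments of the dissertation), one can sample node voltages whose precision matrix is the grounded Laplacian of a resistor network and check that the estimated inverse covariance is supported exactly on the circuit's edges:

```python
import numpy as np

# Unit-conductance cycle on nodes {0, 1, 2, 3}; node 3 is grounded, so the
# free node voltages form a Gaussian with precision = grounded Laplacian.
L_g = np.array([[ 2.0, -1.0,  0.0],
                [-1.0,  2.0, -1.0],
                [ 0.0, -1.0,  2.0]])   # zero entry: no wire between 0 and 2

rng = np.random.default_rng(1)
chol = np.linalg.cholesky(np.linalg.inv(L_g))
v = rng.standard_normal((20000, 3)) @ chol.T     # sampled node voltages

P = np.linalg.inv(np.cov(v, rowvar=False))       # estimated precision matrix
# Large off-diagonal entries appear exactly where the circuit has edges:
print(np.abs(P[0, 1]) > 0.3, np.abs(P[1, 2]) > 0.3, np.abs(P[0, 2]) < 0.3)
# True True True
```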

Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that the network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, one that takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
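
For context, the classical unbuffered fluid model that this work refines can be simulated in a few lines; the controller below is Kelly's primal algorithm with a load-proportional link price, and the gain, weight, and capacity values are arbitrary:

```python
# Kelly's primal congestion controller for one user on one link:
#   dx/dt = k * (w - x * p(x)),  with link price p(x) = x / c.
k, w, c = 1.0, 1.0, 4.0
x, dt = 0.1, 0.01
for _ in range(20000):                 # forward-Euler integration to t = 200
    x += dt * k * (w - x * (x / c))

# The equilibrium solves w = x^2 / c, i.e. x* = sqrt(w * c) = 2.
print(round(x, 3))  # 2.0
```

In the buffered model studied in the dissertation, the rate seen by downstream links is no longer the source rate x, which is what can destroy the stability of such schemes.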

Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
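
A linearized (DC) toy instance conveys the shape of an OPF problem, although the contribution described above concerns convex relaxations of the full nonlinear equations; the costs, limits, and two-bus network here are invented for illustration:

```python
from scipy.optimize import linprog

# Two buses: cheap generator g1 at bus 1, expensive g2 at bus 2, a 10 MW
# load at bus 2, and a 5 MW limit on the line 1 -> 2 (which carries all of g1).
res = linprog(
    c=[1.0, 2.0],                        # generation costs in $/MWh
    A_eq=[[1.0, 1.0]], b_eq=[10.0],      # power balance: g1 + g2 = load
    bounds=[(0.0, 5.0),                  # g1 limited by the line flow bound
            (0.0, 8.0)],                 # g2 capacity
)
print(res.x, res.fun)  # g1 = g2 = 5 MW, total cost 15
```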

Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is the OPF problem. The results of this work on GNF prove that the relaxation of the power balance equations (i.e., power over-delivery) is not needed in practice under a very mild angle assumption.

Generalized Weighted Graphs: Motivated by power optimization problems, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real- or complex-valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph, such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.

Relevance: 20.00%

Abstract:

The connections between convexity and submodularity are explored for the purposes of minimizing and learning submodular set functions.

First, we develop a novel method for minimizing a particular class of submodular functions, which can be expressed as a sum of concave functions composed with modular functions. The basic algorithm uses an accelerated first-order method applied to a smoothed version of the function's convex extension. The smoothing algorithm is particularly novel in that it allows us to treat general concave potentials without needing to construct a piecewise-linear approximation, as graph-based techniques do.
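
The convex extension of a submodular function is its Lovász extension, which can be evaluated with a single sort. A short sketch (using the cut function of a triangle graph as the submodular function; the example is mine, not the thesis's) verifies that the extension agrees with f at the vertices of the unit cube:

```python
import itertools
import numpy as np

edges = [(0, 1), (1, 2), (0, 2)]       # triangle graph

def cut(S):
    """Submodular cut function: number of edges with one endpoint in S."""
    return sum((u in S) != (v in S) for u, v in edges)

def lovasz(f, x):
    """Lovasz extension: sort coordinates in decreasing order, telescope f."""
    order = np.argsort(-np.asarray(x, dtype=float))
    value, prev = 0.0, set()
    for i in order:
        new = prev | {int(i)}
        value += x[int(i)] * (f(new) - f(prev))
        prev = new
    return value

# On 0/1 indicator vectors, the extension reproduces the set function exactly.
for r in range(4):
    for S in map(set, itertools.combinations(range(3), r)):
        assert lovasz(cut, [1.0 if i in S else 0.0 for i in range(3)]) == cut(S)
print("Lovasz extension agrees with the cut function on all 8 vertices")
```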

Second, we derive the general conditions under which it is possible to find a minimizer of a submodular function via a convex problem. This provides a framework for developing submodular minimization algorithms. The framework is then used to develop several algorithms that can be run in a distributed fashion. This is particularly useful for applications where the submodular objective function consists of a sum of many terms, each term dependent on a small part of a large data set.

Lastly, we approach the problem of learning set functions from an unorthodox perspective: sparse reconstruction. We demonstrate an explicit connection between the problem of learning set functions from random evaluations and that of recovering sparse signals. Based on the observation that the Fourier transform for set functions satisfies exactly the conditions needed for sparse reconstruction algorithms to work, we examine several function classes under which uniform reconstruction is possible.
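
The Fourier transform in question is the Walsh-Hadamard expansion of the set function. A brute-force check (graph and sizes chosen only for illustration) shows that a graph cut function is supported on the empty set and on the pairs corresponding to edges, which is the kind of sparsity that reconstruction algorithms can exploit:

```python
import itertools

n = 4
edges = [(0, 1), (1, 2), (2, 3)]       # path graph on four nodes

def cut(S):
    return sum((u in S) != (v in S) for u, v in edges)

subsets = [frozenset(c) for r in range(n + 1)
           for c in itertools.combinations(range(n), r)]

# Fourier coefficient in the parity basis:
#   f_hat(B) = 2^-n * sum_S f(S) * (-1)^{|S & B|}
coef = {B: sum(cut(S) * (-1) ** len(S & B) for S in subsets) / 2 ** n
        for B in subsets}

support = sorted(tuple(sorted(B)) for B, c in coef.items() if abs(c) > 1e-12)
print(support)  # [(), (0, 1), (1, 2), (2, 3)]: the empty set and the edges
```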

Relevance: 20.00%

Abstract:

The simplest multiplicative systems in which arithmetical ideas can be defined are semigroups. For such systems, irreducible (prime) elements can be introduced and conditions under which the fundamental theorem of arithmetic holds have been investigated (Clifford (3)). After identifying associates, the elements of the semigroup form a partially ordered set with respect to the ordinary division relation. This suggests the possibility of an analogous arithmetical result for abstract partially ordered sets. Although nothing corresponding to product exists in a partially ordered set, there is a notion similar to g.c.d. This is the meet operation, defined as the greatest lower bound. Thus irreducible elements, namely those elements not expressible as meets of proper divisors, can be introduced. The assumption of the ascending chain condition then implies that each element is representable as a reduced meet of irreducibles. The central problem of this thesis is to determine conditions on the structure of the partially ordered set in order that each element have a unique such representation.

Part I contains preliminary results and introduces the principal tools of the investigation. In the second part, basic properties of the lattice of ideals and the connection between its structure and the irreducible decompositions of elements are developed. The proofs of these results are identical with the corresponding ones for the lattice case (Dilworth (2)). The last part contains those results whose proofs are peculiar to partially ordered sets and also contains the proof of the main theorem.
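
A concrete toy case makes the definitions easy to verify by brute force; the example below (divisors of 12 ordered by divisibility, with meet given by the g.c.d.) is mine and is far simpler than the posets treated in the thesis:

```python
import itertools
from functools import reduce
from math import gcd

elements = [1, 2, 3, 4, 6, 12]        # divisors of 12, ordered by divisibility

def meet(xs):                          # greatest lower bound = g.c.d.
    return reduce(gcd, xs)

def is_irreducible(x):
    """x is meet-irreducible: no set of strictly greater elements meets to x."""
    above = [y for y in elements if y != x and y % x == 0]
    return all(meet(c) != x
               for r in range(1, len(above) + 1)
               for c in itertools.combinations(above, r))

irreducibles = [x for x in elements if is_irreducible(x)]
print(irreducibles)  # [3, 4, 6, 12]

# Existence of the decomposition: every element is a meet of the
# irreducibles above it (the thesis asks when it is unique and reduced).
for x in elements:
    assert meet([y for y in irreducibles if y % x == 0]) == x
```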

Relevance: 20.00%

Abstract:

Many engineering applications face the problem of bounding the expected value of a quantity of interest (performance, risk, cost, etc.) that depends on stochastic uncertainties whose probability distribution is not known exactly. Optimal uncertainty quantification (OUQ) is a framework that aims at obtaining the best bound in these situations by explicitly incorporating available information about the distribution. Unfortunately, this often leads to non-convex optimization problems that are numerically expensive to solve.

This thesis focuses on efficient numerical algorithms for OUQ problems. It begins by investigating several classes of OUQ problems that can be reformulated as convex optimization problems. Conditions on the objective function and information constraints under which a convex formulation exists are presented. Since the size of the optimization problem can become quite large, strategies for scaling up are also discussed. Finally, the capability of analyzing a practical system through such convex formulations is demonstrated by a numerical example of energy storage placement in power grids.

When an equivalent convex formulation is unavailable, it is possible to find a convex problem that provides a meaningful bound for the original problem, also known as a convex relaxation. As an example, the thesis investigates the setting used in Hoeffding's inequality. The naive formulation requires solving a collection of non-convex polynomial optimization problems whose number grows doubly exponentially. After structures such as symmetry are exploited, it is shown that both the number and the size of the polynomial optimization problems can be reduced significantly. Each polynomial optimization problem is then bounded by its convex relaxation using sums-of-squares. These bounds are found to be tight in all the numerical examples tested in the thesis and are significantly better than Hoeffding's bounds.
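
A minimal instance of the OUQ recipe (the support grid, mean constraint, and tail threshold below are invented for illustration): bounding P(X ≥ a) over all distributions on [0, 1] with E[X] = m becomes a linear program over a discretized support, and its optimum reproduces Markov's inequality m/a, showing that the classical bound is the exact OUQ answer for this information set:

```python
import numpy as np
from scipy.optimize import linprog

m, a = 0.2, 0.5                        # known mean and tail threshold
xs = np.linspace(0.0, 1.0, 101)        # discretized support of X in [0, 1]

# Decision variables: probability mass p_i placed at grid point x_i.
# Maximize the mass on {x_i >= a}, i.e. minimize its negative, subject to
# total mass 1 and mean m.
res = linprog(
    c=-(xs >= a).astype(float),
    A_eq=np.vstack([np.ones_like(xs), xs]), b_eq=[1.0, m],
    bounds=(0.0, 1.0),
)
print(-res.fun)  # ~0.4 = m / a: Markov's inequality, attained by a
                 # two-point distribution at 0 and a
```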

Relevance: 20.00%

Abstract:

The SCF ubiquitin ligase complex of budding yeast triggers DNA replication by catalyzing ubiquitination of the S phase CDK inhibitor SIC1. SCF is composed of several evolutionarily conserved proteins, including ySKP1, CDC53 (Cullin), and the F-box protein CDC4. We isolated hSKP1 in a two-hybrid screen with hCUL1, the human homologue of CDC53. We showed that hCUL1 associates with hSKP1 in vivo and directly interacts with hSKP1 and the human F-box protein SKP2 in vitro, forming an SCF-like particle. Moreover, hCUL1 complements the growth defect of yeast CDC53^(ts) mutants, associates with ubiquitination-promoting activity in human cell extracts, and can assemble into functional, chimeric ubiquitin ligase complexes with yeast SCF components. These data demonstrated that hCUL1 functions as part of an SCF ubiquitin ligase complex in human cells. However, purified human SCF complexes consisting of CUL1, SKP1, and SKP2 are inactive in vitro, suggesting that additional factors are required.

Subsequently, mammalian SCF ubiquitin ligases were shown to regulate various physiological processes by targeting important cellular regulators, like IκBα, β-catenin, and p27, for ubiquitin-dependent proteolysis by the 26S proteasome. Little, however, is known about the regulation of the various SCF complexes. By using sequential immunoaffinity purification and mass spectrometry, we identified proteins that interact with the human SCF components SKP2 and CUL1 in vivo. Among them we identified two additional SCF subunits: HRT1, present in all SCF complexes, and CKS1, which binds to SKP2 and is likely to be a subunit of SCF^(SKP2) complexes. Subsequent work by others demonstrated that these proteins are essential for SCF activity. We also discovered that the COP9 Signalosome (CSN), previously described in plants as a suppressor of photomorphogenesis, associates with CUL1 and other SCF subunits in vivo. This interaction is evolutionarily conserved and is also observed with other Cullins, suggesting that all Cullin-based ubiquitin ligases are regulated by CSN. CSN regulates Cullin neddylation, presumably through CSN5/JAB1, a stoichiometric Signalosome subunit and a putative deneddylating enzyme. This work sheds light on an intricate connection that exists between signal transduction pathways and the protein degradation machinery inside the cell and sets the stage for gaining further insights into the regulation of protein degradation.

Relevance: 20.00%

Abstract:

Assembling a nervous system requires exquisite specificity in the construction of neuronal connectivity. One method by which such specificity is implemented is the presence of chemical cues within the tissues, differentiating one region from another, and the presence of receptors for those cues on the surface of neurons and their axons that are navigating within this cellular environment.

Connections from one part of the nervous system to another often take the form of a topographic mapping. One widely studied model system that involves such a mapping is the vertebrate retinotectal projection: the set of connections between the eye and the optic tectum of the midbrain, which is the primary visual center in non-mammals and is homologous to the superior colliculus in mammals. In this projection the two-dimensional surface of the retina is mapped smoothly onto the two-dimensional surface of the tectum, such that light from neighboring points in visual space excites neighboring cells in the brain. This mapping is implemented at least in part via differential chemical cues in different regions of the tectum.

The Eph family of receptor tyrosine kinases and their cell-surface ligands, the ephrins, have been implicated in a wide variety of processes, generally involving cellular movement in response to extracellular cues. In particular, they possess expression patterns (complementary gradients of receptor in retina and ligand in tectum) as well as in vitro and in vivo activities and phenotypes (repulsive guidance of axons and defective mapping in mutants, respectively) consistent with the long-sought retinotectal chemical mapping cues.

The tadpole of Xenopus laevis, the South African clawed frog, is advantageous for in vivo retinotectal studies because of its transparency and manipulability. However, neither the expression patterns nor the retinotectal roles of these proteins have been well characterized in this system. We report here comprehensive descriptions in swimming stage tadpoles of the messenger RNA expression patterns of eleven known Xenopus Eph and ephrin genes, including xephrin-A3, which is novel, and xEphB2, whose expression pattern has not previously been published in detail. We also report the results of in vivo protein injection perturbation studies on Xenopus retinotectal topography, which were negative, and of in vitro axonal guidance assays, which suggest a previously unrecognized attractive activity of ephrins at low concentrations on retinal ganglion cell axons. This raises the possibility that these axons find their correct targets in part by seeking out a preferred concentration of ligands appropriate to their individual receptor expression levels, rather than by being repelled to greater or lesser degrees by the ephrins but attracted by some as-yet-unknown cue(s).

Relevance: 20.00%

Abstract:

In this work, the author presents a method called Convex Model Predictive Control (CMPC) to control systems whose states are elements of the rotation group SO(n) for n = 2, 3. This is done without charts or any local linearization; instead, the optimization operates over the orbitope of rotation matrices. This results in a novel model predictive control (MPC) scheme without the drawbacks associated with conventional linearization techniques, such as slow computation time and local minima. Of particular emphasis is the application to aeronautical and vehicular systems, wherein the method removes many of the trigonometric terms associated with these systems' state-space equations. Furthermore, the method is shown to be compatible with many existing variants of MPC, including obstacle avoidance via Mixed Integer Linear Programming (MILP).
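
A basic primitive when computing over rotation matrices is projecting an arbitrary matrix back onto SO(3); the standard SVD-based projection below is a generic building block, not the CMPC scheme itself:

```python
import numpy as np

def project_to_SO3(M):
    """Nearest rotation matrix to M in Frobenius norm, via the SVD."""
    U, _, Vt = np.linalg.svd(M)
    # Flip the last singular direction if necessary so that det(R) = +1
    # (plain U @ Vt may be a reflection, which lies in O(3) but not SO(3)).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ D @ Vt

rng = np.random.default_rng(2)
R = project_to_SO3(rng.standard_normal((3, 3)))
print(np.allclose(R.T @ R, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))
# True True
```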

Relevance: 20.00%

Abstract:

There is a growing interest in taking advantage of possible patterns and structures in data so as to extract the desired information and overcome the curse of dimensionality. In a wide range of applications, including computer vision, machine learning, medical imaging, and social networks, the signal that gives rise to the observations can be modeled to be approximately sparse and exploiting this fact can be very beneficial. This has led to an immense interest in the problem of efficiently reconstructing a sparse signal from limited linear observations. More recently, low-rank approximation techniques have become prominent tools to approach problems arising in machine learning, system identification and quantum tomography.

In sparse and low-rank estimation problems, the challenge is the inherent intractability of the objective function, and one needs efficient methods to capture the low-dimensionality of these models. Convex optimization is often a promising tool to attack such problems. An intractable problem with a combinatorial objective can often be "relaxed" to obtain a tractable but almost as powerful convex optimization problem. This dissertation studies convex optimization techniques that can take advantage of low-dimensional representations of the underlying high-dimensional data. We provide provable guarantees that ensure that the proposed algorithms will succeed under reasonable conditions, and answer questions of the following flavor:

  • For a given number of measurements, can we reliably estimate the true signal?
  • If so, how good is the reconstruction as a function of the model parameters?

More specifically: (i) focusing on linear inverse problems, we generalize the classical error bounds known for the least-squares technique to the lasso formulation, which incorporates the signal model; (ii) we show that intuitive convex approaches do not perform as well as expected when it comes to signals that have multiple low-dimensional structures simultaneously; (iii) finally, we propose convex relaxations for the graph clustering problem and give sharp performance guarantees for a family of graphs arising from the so-called stochastic block model. We pay particular attention to the following aspects. For (i) and (ii), we aim to provide a general geometric framework in which the results on sparse and low-rank estimation can be obtained as special cases. For (i) and (iii), we investigate the precise performance characterization, which yields the right constants in our bounds and the true dependence between the problem parameters.
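
The lasso formulation in item i) can be illustrated with a short proximal-gradient (ISTA) sketch on synthetic data; the dimensions, sparsity level, and regularization weight below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 40, 100, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)     # random measurement matrix
x0 = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x0[support] = rng.choice([-1.0, 1.0], size=k)    # k-sparse ground truth
y = A @ x0                                       # noiseless measurements

lam = 0.01                                       # l1 regularization weight
t = 1.0 / np.linalg.norm(A, 2) ** 2              # step size 1 / L
x = np.zeros(n)
for _ in range(2000):                            # ISTA iterations
    z = x - t * A.T @ (A @ x - y)                # gradient step on 0.5*||Ax - y||^2
    x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)   # soft-threshold

print(sorted(np.flatnonzero(np.abs(x) > 0.3).tolist()))   # the k true support indices
```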