914 results for Projections onto convex sets
Abstract:
We analyze the von Neumann and Morgenstern stable sets for the mixed extension of 2×2 games when only single profitable deviations are allowed. We show that games without a strict Nash equilibrium have a unique vN&M stable set, and that otherwise they have infinitely many stable sets.
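The dichotomy turns on whether the game has a strict pure Nash equilibrium, which is straightforward to test directly. Below is a minimal sketch (not from the paper) that checks a 2×2 bimatrix game for strict pure equilibria; the payoff matrices are an illustrative Prisoner's Dilemma.

```python
import numpy as np

# A sketch (not from the paper): test a 2x2 bimatrix game for strict
# pure Nash equilibria. Payoffs are an illustrative Prisoner's Dilemma.
A = np.array([[3, 0], [5, 1]])   # row player's payoffs
B = np.array([[3, 5], [0, 1]])   # column player's payoffs

def strict_pure_nash(A, B):
    """Return all profiles (i, j) where both players strictly best-respond."""
    return [(i, j)
            for i in range(2) for j in range(2)
            if A[i, j] > A[1 - i, j] and B[i, j] > B[i, 1 - j]]

print(strict_pure_nash(A, B))    # [(1, 1)]: mutual defection is strict
```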
Abstract:
In 1972, Maschler, Peleg and Shapley proved that in the class of convex games the nucleolus and the kernel coincide. The only aim of this note is to provide a shorter, alternative proof of this result.
Abstract:
We prove that the SD-prenucleolus satisfies monotonicity in the class of convex games. The SD-prenucleolus is thus the only known continuous core concept that satisfies monotonicity for convex games. We also prove that for convex games the SD-prenucleolus and the SD-prekernel coincide.
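For context, convexity of a TU game means supermodularity of its characteristic function: v(S ∪ T) + v(S ∩ T) ≥ v(S) + v(T) for all coalitions S, T. Here is a minimal sketch of that check (illustrative, not from either paper), on the toy convex game v(S) = |S|².

```python
from itertools import combinations

# A sketch (illustrative, not from either paper): convexity of a TU game
# is supermodularity, v(S | T) + v(S & T) >= v(S) + v(T) for all S, T.
players = (1, 2, 3)
coalitions = [frozenset(c)
              for k in range(len(players) + 1)
              for c in combinations(players, k)]
v = {S: len(S) ** 2 for S in coalitions}   # toy convex game: v(S) = |S|^2

def is_convex(v):
    return all(v[S | T] + v[S & T] >= v[S] + v[T]
               for S in v for T in v)

print(is_convex(v))   # True: |S|^2 is supermodular
```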
Abstract:
These are the outputs from the pilot work with CIBT to develop a scenario guide. The guide builds on existing work across European business, adding an education perspective, and more specifically an IT perspective, to generic scenarios.
Abstract:
Sets and catches of Atlantic menhaden, Brevoortia tyrannus, made in 1985-96 by purse-seine vessels from Virginia and North Carolina were studied by digitizing and analyzing Captain's Daily Fishing Reports (CDFRs), daily logs of fishing activities completed by captains of menhaden vessels. A total of 33,674 CDFRs were processed, representing 125,858 purse-seine sets; on average, the fleet made 10,488 sets annually. Virginia vessels made at least one purse-seine set on 67%-83% of available fishing days between May and December. In most years, five was the median number of sets attempted each fishing day. Mean set duration ranged from 34 to 43 minutes, and median catch per set ranged from 15 to 30 metric tons (t). Spotter aircraft assisted in over 83% of sets overall. Average annual catch in Chesapeake Bay (149,500 t) surpassed that of all other fishing areas and accounted for 52% of the fleet's catch. Annual catch from North Carolina waters (49,100 t) ranked a distant second. Fishing activity in ocean waters clustered off the Mid-Atlantic states in June-September and off North Carolina in November-January. Delaware Bay and the New Jersey coast were important alternate fishing grounds during summer. Across all ocean fishing areas, most sets and catch occurred within 3 mi of shore, but in Chesapeake Bay about half of all fishing activity occurred farther offshore. In Virginia, areas adjacent to fish factories tended to be heavily fished. Recent regulatory initiatives in various coastal states threaten the Atlantic menhaden fleet's access to traditional nearshore fishing grounds.
Abstract:
Length-frequency samples of yellowfin tuna from 276 individual purse-seine sets were examined. Evidence of schooling by size is presented. Yellowfin schooled with skipjack are smaller and more homogeneous in length than yellowfin from pure schools. Yellowfin in schools associated with porpoise appear to be more variable in size than yellowfin from other types of schools. No relationship was found between the tonnage of yellowfin in a school and the mean length of the yellowfin. Despite the tendency to school by size, the size variation within individual schools was judged to be enough to greatly complicate any program of regulation aimed at maximizing the yield per recruit by increasing the minimum size of yellowfin at first capture.
Correction of probe pressure artifacts in freehand 3D ultrasound - further results and convex probes
Abstract:
Nanostructured FeNi-based multilayers are very suitable for use as magnetic sensors based on the giant magneto-impedance effect. New fields of application can be opened by depositing these materials onto flexible substrates. In this work, we compare the performance of samples prepared on a rigid glass substrate and on a flexible cyclo-olefin copolymer substrate. Although a significant reduction of the field sensitivity is found, due to the increased effect of the stresses generated during preparation, the results are still satisfactory for use as magnetic field sensors in special applications. Moreover, we take advantage of the flexible nature of the substrate to evaluate the pressure dependence of the giant magneto-impedance effect. Sensitivities up to 1 Ω/Pa are found for pressures in the range of 0 to 1 Pa, demonstrating the suitability of these nanostructured materials deposited onto flexible substrates for building sensitive pressure sensors.
Abstract:
Annual estimates of the number of purse-seine sets made on tunas associated with dolphins are needed to estimate the total number of dolphins killed incidentally by the eastern Pacific tuna fishery. The most complete source of data, the Inter-American Tropical Tuna Commission's logbook data base, was used in this study. In the logbook data base, most sets are identified as being either associated or not associated with dolphins; some sets are not identified in this respect. However, the number of these unidentified sets that were associated with dolphins has been estimated by stratifying the logbook data according to whether or not any tuna were caught, whether or not the nearest identified set was associated with dolphins, and the distance to the nearest identified set. Most of the unidentified sets fell in strata characterized by a proportion of sets on tuna associated with dolphins that was lower than the overall unstratified proportion. Landings data were used to estimate the number of sets on tunas associated with dolphins from fishing trips not included in the logbook data base.
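The allocation step described above amounts to applying each stratum's observed dolphin-association proportion to that stratum's unidentified sets. A toy sketch with invented counts follows (three strata only; the study's actual stratification also uses distance bins to the nearest identified set).

```python
# A toy sketch of the stratified allocation (all counts invented).
# Each stratum contributes u * d / (d + n) estimated dolphin sets.
strata = [
    # (dolphin sets, non-dolphin sets, unidentified sets)
    (120, 30, 10),   # tuna caught, nearest identified set on dolphins
    (20, 180, 25),   # tuna caught, nearest identified set not on dolphins
    (5, 95, 15),     # no tuna caught
]
extra = sum(u * d / (d + n) for d, n, u in strata)
total = sum(d for d, _, _ in strata) + extra
print(round(extra, 2), round(total, 2))   # 11.25 156.25
```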
Abstract:
Demixing is the task of identifying multiple signals given only their sum and prior information about their structures. Examples of demixing problems include (i) separating a signal that is sparse with respect to one basis from a signal that is sparse with respect to a second basis; (ii) decomposing an observed matrix into low-rank and sparse components; and (iii) identifying a binary codeword with impulsive corruptions. This thesis describes and analyzes a convex optimization framework for solving an array of demixing problems.
Our framework includes a random orientation model for the constituent signals that ensures the structures are incoherent. This work introduces a summary parameter, the statistical dimension, that reflects the intrinsic complexity of a signal. The main result indicates that the difficulty of demixing under this random model depends only on the total complexity of the constituent signals involved: demixing succeeds with high probability when the sum of the complexities is less than the ambient dimension; otherwise, it fails with high probability.
The fact that a phase transition between success and failure occurs in demixing is a consequence of a new inequality in conic integral geometry. Roughly speaking, this inequality asserts that a convex cone behaves like a subspace whose dimension is equal to the statistical dimension of the cone. When combined with a geometric optimality condition for demixing, this inequality provides precise quantitative information about the phase transition, including the location and width of the transition region.
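As a concrete illustration of the framework (a sketch, not the thesis's code), the sparse-plus-sparse problem in (i) can be posed as a constrained ℓ1 program and handed to an off-the-shelf convex solver; the random basis Q below plays the role of the random orientation model, and all dimensions and sparsity levels are illustrative.

```python
import numpy as np
import cvxpy as cp

# A sketch of sparse + sparse demixing (not the thesis's code): recover
# x0 (sparse in the identity basis) and y0 (sparse in a random basis Q)
# from their sum, via a constrained l1 program.
d, s = 100, 5
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))   # random orientation

x0 = np.zeros(d); x0[rng.choice(d, s, replace=False)] = rng.standard_normal(s)
y0 = np.zeros(d); y0[rng.choice(d, s, replace=False)] = rng.standard_normal(s)
z0 = x0 + Q @ y0                                   # observed sum

x, y = cp.Variable(d), cp.Variable(d)
prob = cp.Problem(cp.Minimize(cp.norm1(x)),
                  [cp.norm1(y) <= np.abs(y0).sum(),  # side information
                   x + Q @ y == z0])
prob.solve()
print(np.linalg.norm(x.value - x0))   # near zero when demixing succeeds
```

Consistent with the phase-transition result, recovery here is reliable because the combined complexity of two 5-sparse signals is well below the ambient dimension d = 100.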
Abstract:
The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a "control and optimization" point of view. After studying these three real-world networks, two abstract network problems, both motivated by power systems, are also explored. The first is "flow optimization over a flow network" and the second is "nonlinear optimization over a generalized weighted graph". The results derived in this dissertation are summarized below.
Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix—describing marginal and conditional dependencies between brain regions—have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about brain connectivity. Due to the electrical properties of the brain, this problem is investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit based only on measurements: assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit from a limited number of samples. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may recover most of the circuit topology if the exact covariance matrix is well-conditioned, but it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work is applied to the resting-state fMRI data of a number of healthy subjects.
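A minimal sketch of the graphical-lasso step follows, under the simplifying assumption (a stand-in for the circuit model above) that node voltages are Gaussian with precision matrix equal to a grounded conductance matrix, so that zeros in the estimated precision correspond to absent branches; the 4-node ring and all constants are illustrative.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# A sketch: estimate a sparse precision matrix from simulated node
# voltages. Assumed model: voltages ~ N(0, L^{-1}) with L a grounded
# conductance matrix, so zero precision entries mark missing branches.
rng = np.random.default_rng(0)
L = np.array([[2., -1., 0., -1.],
              [-1., 2., -1., 0.],
              [0., -1., 2., -1.],
              [-1., 0., -1., 2.]]) + 0.5 * np.eye(4)   # ring + ground leak
samples = rng.multivariate_normal(np.zeros(4), np.linalg.inv(L), size=5000)

model = GraphicalLasso(alpha=0.01).fit(samples)
print(np.round(model.precision_, 2))   # near-zero entries ~ missing edges
```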
Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, one that takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
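For reference, a standard fluid model of this kind is Kelly's primal algorithm, in which a source rate x evolves as dx/dt = k(w − x·p(x)) against a link price p; the sketch below simulates the classical buffer-free idealization that the abstract refines (one source, one link, all constants illustrative).

```python
# A sketch of a classical buffer-free fluid model (Kelly's primal
# algorithm, one source and one link); all constants are illustrative.
k, w, c = 0.1, 1.0, 2.0              # gain, willingness to pay, capacity

def price(y):                         # link price as a function of load
    return max(y - c, 0.0) / c

x, dt = 0.5, 0.01                     # source rate, Euler time step
for _ in range(20000):
    x += dt * k * (w - x * price(x))  # dx/dt = k * (w - x * p(x))
print(round(x, 3))                    # ~2.732, the equilibrium rate
```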
Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
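To make the over-delivery idea concrete, here is a toy linear (DC-style) dispatch sketch in which the nodal balance equalities are relaxed to inequalities; it illustrates only the relaxation of the balance constraints, not the paper's treatment of the full nonlinear AC equations, and all numbers are illustrative.

```python
import cvxpy as cp

# A toy DC-style dispatch sketch (not the paper's AC formulation):
# nodal balance equalities are relaxed to inequalities, i.e. power
# over-delivery is allowed.
g = cp.Variable(2, nonneg=True)   # generator outputs at buses 1 and 2
f = cp.Variable()                 # flow on the line from bus 1 to bus 2
demand = [1.0, 2.0]
prob = cp.Problem(
    cp.Minimize(1.0 * g[0] + 3.0 * g[1]),   # linear generation costs
    [g[0] - f >= demand[0],                 # relaxed balance at bus 1
     g[1] + f >= demand[1],                 # relaxed balance at bus 2
     cp.abs(f) <= 1.5])                     # thermal line limit
prob.solve()
print(g.value, f.value)   # cheap bus-1 generation is shipped to bus 2
```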
Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.
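A single-line sketch of the relaxation idea, under an assumed quadratic loss model for the line: the nonconvex coupling between the two end flows is relaxed to a convex inequality, which the cost minimization then drives to equality at the optimum.

```python
import cvxpy as cp

# A single-line GNF sketch with an illustrative quadratic loss model:
# the nonconvex coupling p21 = -p12 + 0.1 * p12^2 between the two end
# flows is relaxed to a convex inequality.
g = cp.Variable(nonneg=True)   # injection (generation) at node 1
p12 = cp.Variable()            # flow at the node-1 end of the line
p21 = cp.Variable()            # flow at the node-2 end of the line
prob = cp.Problem(
    cp.Minimize(2.0 * g),                      # cost of injection
    [g == p12,                                 # balance at node 1
     -p21 == 1.0,                              # demand at node 2
     p21 >= -p12 + 0.1 * cp.square(p12)])      # relaxed line equation
prob.solve()
print(round(float(g.value), 3))   # ~1.127: the inequality binds
```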
Generalized Weighted Graphs: Motivated by power optimization problems, this part aims to find a global optimization technique for nonlinear optimizations defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex-valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph, such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.
Abstract:
The connections between convexity and submodularity are explored, for purposes of minimizing and learning submodular set functions.
First, we develop a novel method for minimizing a particular class of submodular functions, which can be expressed as a sum of concave functions composed with modular functions. The basic algorithm uses an accelerated first-order method applied to a smoothed version of the function's convex extension. The smoothing algorithm is particularly novel in that it allows us to treat general concave potentials without needing to construct a piecewise-linear approximation, as graph-based techniques do.
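As a point of reference (a sketch under simplifying assumptions, not the accelerated smoothed method itself), any submodular function can be minimized through its Lovász extension, which is convex on [0,1]^n and has an easily computed subgradient; the example below runs plain projected subgradient descent on an illustrative concave-of-modular objective.

```python
import numpy as np

# A sketch (plain projected subgradient, not the thesis's accelerated
# smoothed method): minimize a submodular F via its Lovász extension,
# which is convex on [0,1]^n. Illustrative objective:
# F(S) = sqrt(w(S)) - 0.8 * |S|, a concave-of-modular function.
w = np.array([1.0, 2.0, 0.5, 1.5])

def F(S):
    S = list(S)
    return np.sqrt(w[S].sum()) - 0.8 * len(S)

def lovasz_subgradient(x, F):
    """Subgradient of the Lovász extension of F at x."""
    g, prev, S = np.zeros(len(x)), 0.0, []
    for i in np.argsort(-x):     # visit coordinates in decreasing order
        S.append(i)
        val = F(S)
        g[i], prev = val - prev, val
    return g

x = np.full(len(w), 0.5)
for t in range(1, 500):
    x = np.clip(x - lovasz_subgradient(x, F) / np.sqrt(t), 0.0, 1.0)
print([i for i in range(len(w)) if x[i] > 0.5])   # [0, 1, 2, 3]
```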
Second, we derive the general conditions under which it is possible to find a minimizer of a submodular function via a convex problem. This provides a framework for developing submodular minimization algorithms. The framework is then used to develop several algorithms that can be run in a distributed fashion. This is particularly useful for applications where the submodular objective function consists of a sum of many terms, each term dependent on a small part of a large data set.
Lastly, we approach the problem of learning set functions from an unorthodox perspective---sparse reconstruction. We demonstrate an explicit connection between the problem of learning set functions from random evaluations and that of sparse signals. Based on the observation that the Fourier transform for set functions satisfies exactly the conditions needed for sparse reconstruction algorithms to work, we examine some different function classes under which uniform reconstruction is possible.
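To illustrate why this connection is natural: the Fourier basis for set functions is the Walsh-Hadamard (parity) basis, and a function built from a few parities has a correspondingly sparse transform. The brute-force transform below (illustrative, not a reconstruction algorithm) makes that sparsity visible.

```python
from itertools import product

# A sketch (brute-force Walsh-Hadamard transform of a set function on
# n = 3 elements): a weighted sum of two parities yields exactly two
# nonzero Fourier coefficients.
n = 3
points = list(product([0, 1], repeat=n))         # subsets as 0/1 tuples

def f(s):   # illustrative: weighted sum of two parities
    return 2.0 * (-1) ** s[0] + 1.0 * (-1) ** (s[1] ^ s[2])

coeffs = {B: sum(f(s) * (-1) ** sum(si * bi for si, bi in zip(s, B))
                 for s in points) / 2 ** n
          for B in points}                       # frequency = subset B
print({B: c for B, c in coeffs.items() if abs(c) > 1e-9})
# {(1, 0, 0): 2.0, (0, 1, 1): 1.0}: only two nonzero coefficients
```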