939 results for Isotropic convex regions


Relevance:

20.00%

Publisher:

Abstract:

Monthly estimates of the abundance of yellowfin tuna by age groups and regions within the eastern Pacific Ocean during 1970-1988 are made, using purse-seine catch rates, length-frequency samples, and results from cohort analysis. The numbers of individuals caught of each age group in each logged purse-seine set are estimated, using the tonnage from that set and the length-frequency distribution from the "nearest" length-frequency sample(s). Nearest refers to the closest length-frequency sample(s) to the purse-seine set in time, distance, and set type (dolphin associated, floating-object associated, skipjack associated, none of these, and some combinations). Catch rates are initially calculated as the estimated number of individuals of the age group caught per hour of searching. Then, to remove the effects of set type and vessel speed, they are standardized, using separate weighted generalized linear models for each age group. The standardized catch rates at the center of each 2.5° quadrangle-month are estimated, using locally-weighted least-squares regressions on latitude, longitude, and date, and then combined into larger regions. Catch rates within these regions are converted to numbers of yellowfin, using the mean age composition from cohort analysis. The variances of the abundance estimates within regions are large for 0-, 1-, and 5-year-olds, but small for 1.5- to 4-year-olds, except during periods of low fishing activity. Mean annual catch-rate estimates for the entire eastern Pacific Ocean are significantly positively correlated with mean abundance estimates from cohort analysis for age groups ranging from 1.5 to 4 years old. Catch-rate indices of abundance by age are expected to be useful in conjunction with data on reproductive biology to estimate total egg production within regions. The estimates may also be useful in understanding geographic and temporal variations in age-specific availability to purse seiners, as well as age-specific movements. (PDF contains 35 pages.)
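The standardization step lends itself to a short sketch. The following minimal illustration fits a weighted GLM of catch rate on set type and vessel speed separately for each age group, then predicts at a common reference level to remove those effects; the file name, column names, and reference level are hypothetical, and this is a sketch of the general technique, not the report's exact procedure.

```python
# Hedged sketch of per-age-group catch-rate standardization with a
# weighted GLM. File name, column names, and reference level are
# hypothetical illustrations.
import pandas as pd
import statsmodels.formula.api as smf

sets = pd.read_csv("purse_seine_sets.csv")          # hypothetical input
standardized = {}
for age, grp in sets.groupby("age_group"):
    fit = smf.glm("catch_per_hour ~ C(set_type) + vessel_speed",
                  data=grp, freq_weights=grp["search_hours"]).fit()
    # Predict at a common set type and the mean vessel speed, so that
    # set-type and speed effects are removed from the rates.
    ref = grp.assign(set_type=grp["set_type"].iloc[0],
                     vessel_speed=grp["vessel_speed"].mean())
    standardized[age] = fit.predict(ref)
```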

Relevance:

20.00%

Publisher:

Abstract:

Climate change has rapidly emerged as a significant threat to coastal areas around the world. While uncertainty regarding distribution, intensity, and timescale inhibits our ability to accurately forecast potential impacts, it is widely accepted that changes in global climate will result in a variety of significant environmental, social, and economic impacts. Coastal areas are particularly vulnerable to the effects of climate change and the implications of sea-level rise, and coastal communities must develop the capacity to adapt to climate change in order to protect people, property, and the environment along our nation’s coasts. The U.S. coastal zone is highly complex and variable, consisting of several regions that are characterized by unique geographic, economic, social, and environmental factors. The degree of risk and vulnerability associated with climate change can vary greatly depending on the exposure and sensitivity of coastal resources within a given area. The ability of coastal communities to effectively adapt to climate change will depend greatly on their ability to develop and implement feasible strategies that address unique local and regional factors. A wide variety of resources are available to assist coastal states in developing their approach to climate change adaptation. However, given the complex and variable nature of the U.S. coastline, it is unlikely that a single set of guidelines can adequately address the full range of adaptation needs at the local and regional levels. This panel seeks to address some of the unique local and regional issues facing coastal communities throughout the U.S., including anticipated physical, social, economic, and environmental impacts; existing resources and guidelines for climate change adaptation; current approaches to climate change adaptation planning; and challenges and opportunities for developing adaptation strategies. (PDF contains 4 pages)

Relevance:

20.00%

Publisher:

Abstract:

Incoherent subharmonic light scattering in isotropic media is a new kind of nonlinear light scattering, in which a single input photon produces multiple output photons of equal frequency. We investigate theoretically the dependence of the subharmonic scattering intensity on the molecular hyperpolarizability and the incident intensity, using nonlinear optics theory similar to that used for Hyper-Rayleigh scattering and degenerate optical parametric oscillators. It is shown that the subharmonic scattering intensities grow exponentially or superexponentially with the molecular hyperpolarizability and the incident intensity. (C) 2004 Elsevier B.V. All rights reserved.

Relevance:

20.00%

Publisher:

Abstract:

Demixing is the task of identifying multiple signals given only their sum and prior information about their structures. Examples of demixing problems include (i) separating a signal that is sparse with respect to one basis from a signal that is sparse with respect to a second basis; (ii) decomposing an observed matrix into low-rank and sparse components; and (iii) identifying a binary codeword with impulsive corruptions. This thesis describes and analyzes a convex optimization framework for solving an array of demixing problems.
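To make example (ii) concrete, here is a minimal convex-programming sketch (using cvxpy, with the common heuristic weight 1/sqrt(n); neither the solver nor the weight is prescribed by the thesis) that demixes an observed matrix into low-rank and sparse components:

```python
# Sketch of demixing example (ii): decompose Z = L + S with L low rank
# and S sparse, via nuclear-norm plus l1-norm minimization.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n = 30
L_true = np.outer(rng.standard_normal(n), rng.standard_normal(n))  # rank one
S_true = np.zeros((n, n))
S_true.flat[rng.choice(n * n, size=40, replace=False)] = 5.0       # sparse
Z = L_true + S_true                      # only the sum is observed

L, S = cp.Variable((n, n)), cp.Variable((n, n))
lam = 1.0 / np.sqrt(n)                   # common heuristic weight
cp.Problem(cp.Minimize(cp.norm(L, "nuc") + lam * cp.norm1(S)),
           [L + S == Z]).solve()
print(np.linalg.norm(L.value - L_true))  # small if demixing succeeded
```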

Our framework includes a random orientation model for the constituent signals that ensures the structures are incoherent. This work introduces a summary parameter, the statistical dimension, that reflects the intrinsic complexity of a signal. The main result indicates that the difficulty of demixing under this random model depends only on the total complexity of the constituent signals involved: demixing succeeds with high probability when the sum of the complexities is less than the ambient dimension; otherwise, it fails with high probability.
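The statistical dimension delta(C) of a convex cone C can be written as E||Pi_C(g)||^2 for a standard Gaussian vector g, where Pi_C is the Euclidean projection onto C. A minimal Monte Carlo sketch, using the nonnegative orthant (for which delta = n/2 exactly) as a sanity check:

```python
# Monte Carlo estimate of the statistical dimension of the nonnegative
# orthant in R^n, which equals n/2 in closed form.
import numpy as np

def stat_dim_orthant(n, trials=20000, seed=0):
    g = np.random.default_rng(seed).standard_normal((trials, n))
    proj = np.maximum(g, 0.0)            # projection onto the orthant
    return (proj ** 2).sum(axis=1).mean()

n = 50
print(stat_dim_orthant(n), n / 2)        # both approximately 25
```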

The fact that a phase transition between success and failure occurs in demixing is a consequence of a new inequality in conic integral geometry. Roughly speaking, this inequality asserts that a convex cone behaves like a subspace whose dimension is equal to the statistical dimension of the cone. When combined with a geometric optimality condition for demixing, this inequality provides precise quantitative information about the phase transition, including the location and width of the transition region.
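In symbols, writing delta(.) for the statistical dimension of the cones involved and d for the ambient dimension, the dichotomy can be summarized roughly as follows (a paraphrase that suppresses constants and exact probability bounds):

```latex
% Rough statement of the demixing dichotomy under the random
% orientation model (constants and precise tail bounds omitted).
\delta(\mathcal{C}_1) + \delta(\mathcal{C}_2) < d
  \;\Longrightarrow\; \text{demixing succeeds with high probability},
\qquad
\delta(\mathcal{C}_1) + \delta(\mathcal{C}_2) > d
  \;\Longrightarrow\; \text{demixing fails with high probability}.
```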

Relevance:

20.00%

Publisher:

Abstract:

The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems are also explored, which are motivated by power systems. The first one is “flow optimization over a flow network” and the second one is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.

Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix—describing marginal and conditional dependencies between brain regions—have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about the brain connectivity. Due to the electrical properties of the brain, this problem will be investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit based only on measurements. In this case, by assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when a limited number of samples are available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may find most of the circuit topology if the exact covariance matrix is well-conditioned. However, it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work will be applied to the resting-state fMRI data of a number of healthy subjects.
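As a small illustration of the estimation step (a generic sketch with synthetic data, not the modified algorithm or the fMRI pipeline described above), the graphical lasso can recover a chain topology from nodal samples:

```python
# Generic sketch: recover a known sparse topology with the graphical
# lasso. Ground truth is a chain 0-1-2-3-4 encoded in a tridiagonal
# precision matrix; nodal signals are drawn from the matching Gaussian.
import numpy as np
from sklearn.covariance import GraphicalLasso

n = 5
P = 2.0 * np.eye(n)
for i in range(n - 1):
    P[i, i + 1] = P[i + 1, i] = -0.8     # chain edges
cov = np.linalg.inv(P)
rng = np.random.default_rng(0)
X = rng.multivariate_normal(np.zeros(n), cov, size=2000)

est = GraphicalLasso(alpha=0.05).fit(X).precision_
edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if abs(est[i, j]) > 1e-3]
print(edges)   # ideally the chain edges (0,1), (1,2), (2,3), (3,4)
```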

Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that the network resources are shared efficiently. Despite the progress in the analysis and synthesis of the Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes, unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
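For context, the classical fluid model being refined here can be simulated in a few lines. The sketch below implements a Kelly-style primal rate update for two sources sharing one link, with illustrative parameters; it embodies exactly the idealization criticized above, since the congestion price responds to the instantaneous source rates with no buffering:

```python
# Toy simulation of a classical fluid-model congestion controller
# (Kelly-style primal update), ignoring queueing. Parameters are
# illustrative, not taken from the dissertation.
import numpy as np

def price(total_rate, capacity=1.0, beta=10.0):
    return beta * max(total_rate - capacity, 0.0)   # congestion signal

rates = np.array([0.1, 0.2])
w, k, dt = 1.0, 0.5, 0.01                # weights, gain, time step
for _ in range(20000):
    p = price(rates.sum())
    rates += dt * k * (w - rates * p)    # primal rate update
print(rates)   # both rates converge to the model's unique equilibrium
```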

Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
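The following toy linear program (a DC-style caricature with made-up data, not the thesis's AC relaxation) shows what relaxing the balance equalities to inequalities looks like: buses may receive more power than they demand, and generation cost is minimized subject to the relaxed balance:

```python
# Toy illustration of "power over-delivery": the nodal balance equality
# B @ theta == g - demand is relaxed to an inequality. Network data are
# made up for illustration.
import cvxpy as cp
import numpy as np

B = np.array([[ 2., -1., -1.],           # graph Laplacian of a 3-bus ring
              [-1.,  2., -1.],
              [-1., -1.,  2.]])
demand = np.array([0.0, 0.5, 0.8])
cost = np.array([1.0, 3.0, 5.0])

g = cp.Variable(3, nonneg=True)          # generation at each bus
theta = cp.Variable(3)                   # voltage angles
constraints = [B @ theta <= g - demand,  # relaxed balance (over-delivery)
               theta[0] == 0, g <= 1.0]
cp.Problem(cp.Minimize(cost @ g), constraints).solve()
print(g.value)                           # cheap bus 0 generates first
```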

Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.
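A one-line instance of GNF illustrates why the relaxation can be exact. For a single lossy line whose receiving-end flow is a concave, monotone function f(p) = p - r p^2 of the sending-end flow, replacing the equality by p_out <= f(p_in) yields a convex problem, and the inequality is tight at the optimum (illustrative numbers, not from the thesis):

```python
# Toy GNF instance: one lossy line feeding a load. The nonconvex flow
# equation p_out == p_in - r p_in^2 is relaxed to an inequality, giving
# a convex problem; at the optimum the relaxation is tight.
import cvxpy as cp

r = 0.1
p_in = cp.Variable()                     # sending-end flow
p_out = cp.Variable()                    # receiving-end flow
constraints = [p_out <= p_in - r * cp.square(p_in),  # relaxed flow equation
               p_out >= 0.8,                          # demand at the far end
               p_in >= 0, p_in <= 2.0]
cp.Problem(cp.Minimize(cp.square(p_in)), constraints).solve()
print(p_in.value, p_out.value)           # equality holds at the optimum
```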

Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.

Relevance:

20.00%

Publisher:

Abstract:

The connections between convexity and submodularity are explored for the purposes of minimizing and learning submodular set functions.

First, we develop a novel method for minimizing a particular class of submodular functions: those expressible as a sum of concave functions composed with modular functions. The basic algorithm applies an accelerated first-order method to a smoothed version of the function's convex extension. The smoothing step is particularly novel, as it allows us to treat general concave potentials without needing to construct a piecewise-linear approximation, as with graph-based techniques.
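For a submodular function, the convex extension coincides with the Lovász extension, which can be evaluated by a simple sort. The helper below is a generic sketch of that object (not the thesis's smoothed, accelerated algorithm), shown on a concave-of-modular example F(S) = sqrt(|S|):

```python
# Generic evaluation of the Lovász extension of a set function F at a
# point x in R^n: sort coordinates in decreasing order and weight the
# marginal gains of the corresponding nested sets.
import numpy as np

def lovasz_extension(F, x):
    order = np.argsort(-x)                  # coordinates, descending
    value, prev_set, prev_F = 0.0, set(), F(set())
    for i in order:
        cur_set = prev_set | {int(i)}
        cur_F = F(cur_set)
        value += x[i] * (cur_F - prev_F)    # marginal-gain weighting
        prev_set, prev_F = cur_set, cur_F
    return value

F = lambda S: np.sqrt(len(S))               # concave of modular
print(lovasz_extension(F, np.array([0.5, -0.2, 0.9])))
```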

Second, we derive the general conditions under which it is possible to find a minimizer of a submodular function via a convex problem. This provides a framework for developing submodular minimization algorithms. The framework is then used to develop several algorithms that can be run in a distributed fashion. This is particularly useful for applications where the submodular objective function consists of a sum of many terms, each term dependent on a small part of a large data set.

Lastly, we approach the problem of learning set functions from an unorthodox perspective: sparse reconstruction. We demonstrate an explicit connection between the problem of learning set functions from random evaluations and that of recovering sparse signals. Based on the observation that the Fourier transform for set functions satisfies exactly the conditions needed for sparse reconstruction algorithms to work, we examine several function classes for which uniform reconstruction is possible.
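A small sketch of the underlying object: a set function on n elements is a vector of 2^n values, and its Fourier transform is the Walsh-Hadamard transform, computed below with scipy. A Fourier-sparse set function could then be recovered from few random evaluations by any standard sparse solver (generic illustration, not the thesis's construction):

```python
# The Fourier transform of a set function on n elements is the
# Walsh-Hadamard transform of its 2^n-dimensional value vector.
import numpy as np
from scipy.linalg import hadamard

n = 4
H = hadamard(2 ** n)                      # symmetric, H @ H = 2^n I
f = np.random.default_rng(0).standard_normal(2 ** n)  # a set function
fhat = H @ f / 2 ** n                     # Fourier coefficients
assert np.allclose(H @ fhat, f)           # inverse transform recovers f
```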

Relevance:

20.00%

Publisher:

Abstract:

The aim of this paper is to investigate to what extent the known theory of subdifferentiability and generic differentiability of convex functions defined on open sets can be carried out in the context of convex functions defined on not necessarily open sets. Among the main results obtained, I would like to mention a Kenderov-type theorem (the subdifferential at a generic point is contained in a sphere), a generic Gâteaux differentiability result in Banach spaces of class S, and a generic Fréchet differentiability result in Asplund spaces. At least two methods can be used to prove these results: first, a direct one; and second, a more general one based on the theory of monotone operators. Since this last theory was previously developed essentially for monotone operators defined on open sets, it was necessary to extend it to the context of monotone operators defined on a larger class of sets, our "quasi open" sets. This is done in Chapter III. As a matter of fact, most of these results have an even more general nature and have roots in the theory of minimal usco maps, as shown in Chapter II.

Relevance:

20.00%

Publisher:

Abstract:

Many engineering applications face the problem of bounding the expected value of a quantity of interest (performance, risk, cost, etc.) that depends on stochastic uncertainties whose probability distribution is not known exactly. Optimal uncertainty quantification (OUQ) is a framework that aims at obtaining the best bound in these situations by explicitly incorporating available information about the distribution. Unfortunately, this often leads to non-convex optimization problems that are numerically expensive to solve.

This thesis focuses on efficient numerical algorithms for OUQ problems. It begins by investigating several classes of OUQ problems that can be reformulated as convex optimization problems. Conditions on the objective function and information constraints under which a convex formulation exists are presented. Since the size of the optimization problem can become quite large, solutions for scaling up are also discussed. Finally, the capability of analyzing a practical system through such convex formulations is demonstrated by a numerical example of energy storage placement in power grids.
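As a toy example of an OUQ-type problem with a convex formulation (illustrative, not an example from the thesis), the worst-case probability P(X >= t) over all distributions on a fixed grid with a prescribed mean is a linear program:

```python
# Worst-case P(X >= t) over all distributions supported on a fixed grid
# with mean mu: an OUQ-flavored toy posed as a linear program.
import cvxpy as cp
import numpy as np

grid = np.linspace(0.0, 10.0, 201)        # support of the unknown X
t, mu = 8.0, 3.0
c = (grid >= t).astype(float)             # objective weights: 1{x >= t}
p = cp.Variable(len(grid), nonneg=True)   # unknown probability masses
prob = cp.Problem(cp.Maximize(c @ p),
                  [cp.sum(p) == 1, grid @ p == mu])
print(prob.solve())   # matches Markov's bound mu / t = 0.375 here
```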

When an equivalent convex formulation is unavailable, it is possible to find a convex problem that provides a meaningful bound for the original problem, also known as a convex relaxation. As an example, the thesis investigates the setting used in Hoeffding's inequality. The naive formulation requires solving a collection of non-convex polynomial optimization problems whose number grows doubly exponentially. After structures such as symmetry are exploited, it is shown that both the number and the size of the polynomial optimization problems can be reduced significantly. Each polynomial optimization problem is then bounded by its convex relaxation using sums-of-squares. These bounds are found to be tight in all the numerical examples tested in the thesis and are significantly better than Hoeffding's bounds.
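The final bounding step can be illustrated on a tiny polynomial. The sketch below (a generic sums-of-squares example, not one of the thesis's Hoeffding subproblems) lower-bounds p(x) = x^4 - 3x^2 + 1 by the largest gamma such that p - gamma admits a sum-of-squares certificate, posed as a small semidefinite program:

```python
# SOS lower bound for p(x) = x^4 - 3x^2 + 1: find the largest gamma with
# p(x) - gamma = [1, x, x^2] Q [1, x, x^2]^T for some PSD Q, by matching
# coefficients of 1, x, x^2, x^3, x^4.
import cvxpy as cp

Q = cp.Variable((3, 3), PSD=True)
gamma = cp.Variable()
constraints = [Q[0, 0] == 1 - gamma,          # constant term
               2 * Q[0, 1] == 0,              # x
               2 * Q[0, 2] + Q[1, 1] == -3,   # x^2
               2 * Q[1, 2] == 0,              # x^3
               Q[2, 2] == 1]                  # x^4
cp.Problem(cp.Maximize(gamma), constraints).solve()
print(gamma.value)   # close to the true minimum of p, which is -5/4
```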

Relevance:

20.00%

Publisher:

Abstract:

4 p.

Relevance:

20.00%

Publisher:

Abstract:

28 p.

Relevance:

20.00%

Publisher:

Abstract:

In this work, the author presents a method called Convex Model Predictive Control (CMPC) to control systems whose states are elements of the group of rotation matrices SO(n) for n = 2, 3. This is done without charts or any local linearization; instead, the optimization is performed over the orbitope of rotation matrices, i.e., the convex hull of SO(n). This results in a novel model predictive control (MPC) scheme that avoids the drawbacks associated with conventional linearization techniques, such as slow computation time and local minima. Of particular emphasis is the application to aeronautical and vehicular systems, wherein the method removes many of the trigonometric terms associated with these systems’ state-space equations. Furthermore, the method is shown to be compatible with many existing variants of MPC, including obstacle avoidance via Mixed Integer Linear Programming (MILP).
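A minimal sketch of the key convexification for n = 2 (illustrative, not the paper's full MPC formulation): conv(SO(2)) consists of the matrices [[a, -b], [b, a]] with a^2 + b^2 <= 1, so a rotation-valued decision variable reduces to a single second-order-cone constraint:

```python
# Toy one-step problem over the SO(2) orbitope: find the point of
# conv(SO(2)) closest to a goal rotation. The true rotation is recovered
# because the optimum lies on the boundary a^2 + b^2 = 1.
import cvxpy as cp
import numpy as np

theta_goal = np.pi / 3
R_goal = np.array([[np.cos(theta_goal), -np.sin(theta_goal)],
                   [np.sin(theta_goal),  np.cos(theta_goal)]])

a, b = cp.Variable(), cp.Variable()
R = cp.bmat([[a, -b], [b, a]])                    # point of the orbitope
constraints = [cp.norm(cp.hstack([a, b])) <= 1]   # conv(SO(2))
cp.Problem(cp.Minimize(cp.norm(R - R_goal, "fro")), constraints).solve()
print(a.value, b.value)   # approximately cos and sin of the goal angle
```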